# Capstone Part 2b - Classical ML Models (MFCCs without Offset) ___ ## Setup ``` # Basic packages import numpy as np import pandas as pd # For splitting the data into training and test sets from sklearn.model_selection import train_test_split # For scaling the data as necessary from sklearn.preprocessing import StandardScaler # For doing principal component analysis as necessary from sklearn.decomposition import PCA # For visualizations import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns # For logistic regression and SVM from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC # For hyperparameter optimization from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV # For caching pipeline and grid search results from tempfile import mkdtemp # For model evaluation from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report # For getting rid of warning messages import warnings warnings.filterwarnings('ignore') # For pickling models import joblib # Loading in the finished dataframe from part 1 ravdess_mfcc_no_offset_df = pd.read_csv('C:/Users/Patrick/Documents/Capstone Data/ravdess_mfcc_no_offset.csv') ravdess_mfcc_no_offset_df.head() ``` ___ # Building Models for Classifying Gender (Regardless of Emotion) ``` # Splitting the dataframe into features and target X = ravdess_mfcc_no_offset_df.iloc[:, :-2] g = ravdess_mfcc_no_offset_df['Gender'] ``` The convention is to name the target variable 'y', but I will be declaring many different target variables throughout the notebook, so I opted for 'g' for simplicity instead of 'y_g' or 'y_gen', for example. ``` # Splitting the data into training and test sets X_train, X_test, g_train, g_test = train_test_split(X, g, test_size=0.3, stratify=g, random_state=1) # Checking the shapes print(X_train.shape) print(X_test.shape) print(g_train.shape) print(g_test.shape) ``` I want to build a simple, initial classifier to get a sense of the performances I might get in more optimized models. To this end, I will build a logistic regression model without doing any cross-validation or hyperparameter optimization. ``` # Instantiate the model initial_logreg = LogisticRegression() # Fit to training set initial_logreg.fit(X_train, g_train) # Score on training set print(f'Model accuracy on training set: {initial_logreg.score(X_train, g_train)*100}%') # Score on test set print(f'Model accuracy on test set: {initial_logreg.score(X_test, g_test)*100}%') ``` These are extremely high accuracies. The model has most likely overfit to the training set, but the accuracy on the test set is still surprisingly high. Here are some possible explanations: - The dataset (RAVDESS) is relatively small, with only 1440 data points (1438 if I do not count the two very short clips that I excluded). This model is likely not very robust and has easily overfit to the training set. - The features I have extracted could be excellent predictors of gender. - This could be a very simple classification task. After all, there are only two classes, and theoretically, features extracted from male and female voice clips should have distinguishable patterns. I had originally planned to build more gender classification models for this dataset, but I will forgo this for now. In part 4, I will try using this model to classify clips from another dataset and examine its performance. 
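To get a sense of how much of this is luck of the particular split, a quick five-fold cross-validation of the same simple model on the training set can serve as a sanity check. This is a minimal sketch using scikit-learn's `cross_val_score`: ``` from sklearn.model_selection import StratifiedKFold, cross_val_score

# Five-fold stratified cross-validation of the simple logistic regression on the training set
cv_scores = cross_val_score(LogisticRegression(), X_train, g_train,
                            cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=1))
print(f'Mean cross-validated accuracy: {cv_scores.mean()*100:.2f}% '
      f'(+/- {cv_scores.std()*100:.2f}%)')
``` If the cross-validated accuracy stays close to the single test-set accuracy, the high score is less likely to be an artifact of one lucky split.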
``` # Pickling the model for later use joblib.dump(initial_logreg, 'pickle1_gender_logreg.pkl') ``` ___ # Building Models for Classifying Emotion for Males ``` # Making a new dataframe that contains only male recordings male_df = ravdess_mfcc_no_offset_df[ravdess_mfcc_no_offset_df['Gender'] == 'male'].reset_index().drop('index', axis=1) male_df # Splitting the dataframe into features and target Xm = male_df.iloc[:, :-2] em = male_df['Emotion'] # Splitting the data into training and test sets Xm_train, Xm_test, em_train, em_test = train_test_split(Xm, em, test_size=0.3, stratify=em, random_state=1) # Checking the shapes print(Xm_train.shape) print(Xm_test.shape) print(em_train.shape) print(em_test.shape) ``` As before, I will try building an initial model. ``` # Instantiate the model initial_logreg_em = LogisticRegression() # Fit to training set initial_logreg_em.fit(Xm_train, em_train) # Score on training set print(f'Model accuracy on training set: {initial_logreg_em.score(Xm_train, em_train)*100}%') # Score on test set print(f'Model accuracy on test set: {initial_logreg_em.score(Xm_test, em_test)*100}%') ``` The model has overfit to the training set yet again, and this time the accuracy on the test set leaves a lot to be desired. Let's evaluate the model further using a confusion matrix and a classification report. ``` # Having initial_logreg_em make predictions based on the test set features em_pred = initial_logreg_em.predict(Xm_test) # Building the confusion matrix as a dataframe emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised'] em_confusion_df = pd.DataFrame(confusion_matrix(em_test, em_pred)) em_confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions] em_confusion_df.index = [f'Actual {emotion}' for emotion in emotions] em_confusion_df # Classification report print(classification_report(em_test, em_pred)) ``` In a binary classification problem, there is one negative class and one positive class. This is not the case here, because this is a multiclass classification problem. In the table above, each row of precision and recall scores assumes the corresponding emotion is the positive class, and groups all other emotions as the negative class. Precision is the following measure: Of all the data points that the model classified as belonging to the positive class (i.e., the true and false positives), what proportion is correct (i.e., truly positive)? Recall is the following measure: Of all the data points that are truly positive (i.e., the true positives and false negatives as classified by the model), what proportion did the model correctly classify (i.e., the true positives)? It appears that the initial model is strongest at classifying calm voice clips, and weakest at classifying disgusted voice clips. In order of strongest to weakest: calm, angry, fearful, happy, neutral, sad, surprised, and disgusted. I will now try building new models and optimizing hyperparameters to obtain better performance. I will use a pipeline and multiple grid searches to accomplish this. Before I build all my models in bulk, I want to see if doing principal component analysis (PCA) could be beneficial. I will do PCA on both unscaled and scaled features, and plot the resulting explained variance ratios. 
I have two goals here: - Get a sense of whether scaling would be beneficial for model performance - Get a sense of how many principal components I should use ``` # PCA on unscaled features # Instantiate PCA and fit to Xm_train pca = PCA().fit(Xm_train) # Transform Xm_train Xm_train_pca = pca.transform(Xm_train) # Transform Xm_test Xm_test_pca = pca.transform(Xm_test) # Standard scaling # Instantiate the scaler and fit to Xm_train scaler = StandardScaler().fit(Xm_train) # Transform Xm_train Xm_train_scaled = scaler.transform(Xm_train) # Transform Xm_test Xm_test_scaled = scaler.transform(Xm_test) # PCA on scaled features # Instantiate PCA and fit to Xm_train_scaled pca_scaled = PCA().fit(Xm_train_scaled) # Transform Xm_train_scaled Xm_train_scaled_pca = pca_scaled.transform(Xm_train_scaled) # Transform Xm_test_scaled Xm_test_scaled_pca = pca_scaled.transform(Xm_test_scaled) # Plot the explained variance ratios plt.subplots(1, 2, figsize = (15, 5)) # Unscaled plt.subplot(1, 2, 1) plt.bar(np.arange(1, len(pca.explained_variance_ratio_)+1), pca.explained_variance_ratio_) plt.xlabel('Principal Component') plt.ylabel('Explained Variance Ratio') plt.title('PCA on Unscaled Features') plt.ylim(top = 0.6) # Equalizing the y-axes # Scaled plt.subplot(1, 2, 2) plt.bar(np.arange(1, len(pca_scaled.explained_variance_ratio_)+1), pca_scaled.explained_variance_ratio_) plt.xlabel('Principal Component') plt.ylabel('Explained Variance Ratio') plt.title('PCA on Scaled Features') plt.ylim(top = 0.6) # Equalizing the y-axes plt.tight_layout() plt.show() ``` Principal components are linear combinations of the original features, ordered by how much of the dataset's variance they explain. Looking at the two plots above, it appears that for the same number of principal components, those using unscaled features are able to explain more variance (i.e., capture more information) than those using scaled features. For example, looking at the first ~25 principal components of each plot, the bars of the left plot (unscaled) are higher and skewed more to the left than those of the right plot (scaled). Since the purpose of PCA is to reduce dimensionality of the data by keeping the components that explain the most variance and discarding the rest, the unscaled principal components might benefit my models more than the scaled principal components will. However, I have to be mindful of the underlying variance in my features. Some features have values in the -800s, while others are close to 0. ``` # Examining the variances var_df = pd.DataFrame(male_df.var()).T var_df ``` Since PCA is looking for high variance directions, it can become biased by the underlying variance in a given feature if I do not scale it down first. I can see that some features have much higher variance than others do, so there is likely a lot of bias in the unscaled principal components above. How much variance is explained by certain numbers of unscaled and scaled principal components? This will help me determine how many principal components to try in my grid searches later. 
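Alongside the printout below, the question can also be inverted: how many components are needed to reach a target share of the variance? A small sketch using the fitted `pca` and `pca_scaled` objects from above: ``` # How many components are needed to explain at least 95% of the variance?
for label, fitted_pca in [('unscaled', pca), ('scaled', pca_scaled)]:
    cumulative = np.cumsum(fitted_pca.explained_variance_ratio_)
    n_needed = np.argmax(cumulative >= 0.95) + 1
    print(f'Components needed for 95% of the {label} variance: {n_needed}')
```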
``` # Unscaled num_components = [503, 451, 401, 351, 301, 251, 201, 151, 101, 51] for n in num_components: print(f'Variance explained by {n-1} unscaled principal components: {np.round(np.sum(pca.explained_variance_ratio_[:n-1])*100, 2)}%') # Scaled num_components = [503, 451, 401, 351, 301, 251, 201, 151, 101, 51] for n in num_components: print(f'Variance explained by {n-1} scaled principal components: {np.round(np.sum(pca_scaled.explained_variance_ratio_[:n-1])*100, 2)}%') ``` I will now build a pipeline and multiple grid searches with five-fold cross-validation to optimize the hyperparameters. I will try two types of classifiers: logistic regression and support vector machine. I have chosen these because they performed better than other classifier types in part 2a. To get a better sense of how each type performs, I will make a grid search for each one. I will also try different numbers of principal components for unscaled and scaled features. ``` # Cache cachedir = mkdtemp() # Pipeline (these values are placeholders) my_pipeline = Pipeline(steps=[('scaler', StandardScaler()), ('dim_reducer', PCA()), ('model', LogisticRegression())], memory=cachedir) # Parameter grid for log reg logreg_param_grid = [ # l1 without PCA # unscaled and scaled * 9 regularization strengths = 18 models {'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(penalty='l1', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l1 unscaled with PCA # 5 PCAs * 9 regularization strengths = 45 models {'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 251, 50), 'model': [LogisticRegression(penalty='l1', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l1 scaled with PCA # 4 PCAs * 9 regularization strengths = 36 models {'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [LogisticRegression(penalty='l1', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l2 (default) without PCA # unscaled and scaled * 9 regularization strengths = 18 models {'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l2 (default) unscaled with PCA # 5 PCAs * 9 regularization strengths = 45 models {'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 251, 50), 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l2 (default) scaled with PCA # 4 PCAs * 9 regularization strengths = 36 models {'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]} ] # Instantiate the log reg grid search logreg_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=logreg_param_grid, cv=5, n_jobs=-1, verbose=5) # Fit the log reg grid search fitted_logreg_grid_em = logreg_grid_search.fit(Xm_train, em_train) # What was the best log reg? 
fitted_logreg_grid_em.best_estimator_ print(f"The best log reg's accuracy on the training set: {fitted_logreg_grid_em.score(Xm_train, em_train)*100}%") print(f"The best log reg's accuracy on the test set: {fitted_logreg_grid_em.score(Xm_test, em_test)*100}%") # Pickling the best log reg joblib.dump(fitted_logreg_grid_em.best_estimator_, 'pickle2_male_emotion_logreg.pkl') # Parameter grid for SVM svm_param_grid = [ # unscaled and scaled * 9 regularization strengths = 18 models {'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [SVC()], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # unscaled # 5 PCAs * 9 regularization strengths = 45 models {'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 251, 50), 'model': [SVC()], 'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # scaled # 4 PCAs * 9 regularization strengths = 36 models {'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [SVC()], 'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]} ] # Instantiate the SVM grid search svm_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=svm_param_grid, cv=5, n_jobs=-1, verbose=5) # Fit the SVM grid search fitted_svm_grid_em = svm_grid_search.fit(Xm_train, em_train) # What was the best SVM? fitted_svm_grid_em.best_estimator_ print(f"The best SVM's accuracy on the training set: {fitted_svm_grid_em.score(Xm_train, em_train)*100}%") print(f"The best SVM's accuracy on the test set: {fitted_svm_grid_em.score(Xm_test, em_test)*100}%") # Pickling the best SVM joblib.dump(fitted_svm_grid_em.best_estimator_, 'pickle3_male_emotion_svm.pkl') ``` I will try loading in the pickled models and evaluate them using confusion matrices and classification reports. 
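Before reloading the pickled estimators, the two grid searches can also be compared directly through their `cv_results_` attribute; a quick sketch, assuming the fitted grid search objects are still in memory: ``` # Top five parameter combinations from each grid search, ranked by mean CV accuracy
for name, fitted_grid in [('log reg', fitted_logreg_grid_em), ('SVM', fitted_svm_grid_em)]:
    cv_results = pd.DataFrame(fitted_grid.cv_results_)
    print(f'Top 5 {name} configurations:')
    print(cv_results.sort_values('rank_test_score')
                    [['params', 'mean_test_score', 'std_test_score']].head())
```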
``` # Loading in the best male emotion classifiers male_emotion_logreg = joblib.load('pickle2_male_emotion_logreg.pkl') male_emotion_svm = joblib.load('pickle3_male_emotion_svm.pkl') em_pred = male_emotion_logreg.predict(Xm_test) # Building the confusion matrix as a dataframe emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised'] em_confusion_df = pd.DataFrame(confusion_matrix(em_test, em_pred)) em_confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions] em_confusion_df.index = [f'Actual {emotion}' for emotion in emotions] # Converting the above to precision scores em_confusion_df_pct = round(em_confusion_df/em_confusion_df.sum(axis = 0)*100, 2) # Heatmap of precision scores plt.figure(figsize=(12, 8)) sns.heatmap(em_confusion_df_pct, annot=True, linewidths=0.5) plt.xticks(rotation=45) plt.show() # Classification report print(classification_report(em_test, em_pred)) em_pred = male_emotion_svm.predict(Xm_test) # Building the confusion matrix as a dataframe emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised'] em_confusion_df = pd.DataFrame(confusion_matrix(em_test, em_pred)) em_confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions] em_confusion_df.index = [f'Actual {emotion}' for emotion in emotions] # Converting the above to precision scores em_confusion_df_pct = round(em_confusion_df/em_confusion_df.sum(axis = 0)*100, 2) # Heatmap of precision scores plt.figure(figsize=(12, 8)) sns.heatmap(em_confusion_df_pct, annot=True, linewidths=0.5) plt.xticks(rotation=45) plt.show() # Classification report print(classification_report(em_test, em_pred)) ``` ___ # Building Models for Classifying Emotion for Females ``` # Making a new dataframe that contains only female recordings female_df = ravdess_mfcc_no_offset_df[ravdess_mfcc_no_offset_df['Gender'] == 'female'].reset_index().drop('index', axis=1) female_df # Splitting the dataframe into features and target Xf = female_df.iloc[:, :-2] ef = female_df['Emotion'] # Splitting the data into training and test sets Xf_train, Xf_test, ef_train, ef_test = train_test_split(Xf, ef, test_size=0.3, stratify=ef, random_state=1) # Checking the shapes print(Xf_train.shape) print(Xf_test.shape) print(ef_train.shape) print(ef_test.shape) ``` Here is an initial model: ``` # Instantiate the model initial_logreg_ef = LogisticRegression() # Fit to training set initial_logreg_ef.fit(Xf_train, ef_train) # Score on training set print(f'Model accuracy on training set: {initial_logreg_ef.score(Xf_train, ef_train)*100}%') # Score on test set print(f'Model accuracy on test set: {initial_logreg_ef.score(Xf_test, ef_test)*100}%') # Having initial_logreg_ef make predictions based on the test set features ef_pred = initial_logreg_ef.predict(Xf_test) # Building the confusion matrix as a dataframe emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised'] ef_confusion_df = pd.DataFrame(confusion_matrix(ef_test, ef_pred)) ef_confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions] ef_confusion_df.index = [f'Actual {emotion}' for emotion in emotions] ef_confusion_df # Classification report print(classification_report(ef_test, ef_pred)) # PCA on unscaled features # Instantiate PCA and fit to Xf_train pca = PCA().fit(Xf_train) # Transform Xf_train Xf_train_pca = pca.transform(Xf_train) # Transform Xf_test Xf_test_pca = pca.transform(Xf_test) # Standard scaling # Instantiate the scaler and fit to Xf_train scaler = 
StandardScaler().fit(Xf_train) # Transform Xf_train Xf_train_scaled = scaler.transform(Xf_train) # Transform Xf_test Xf_test_scaled = scaler.transform(Xf_test) # PCA on scaled features # Instantiate PCA and fit to Xf_train_scaled pca_scaled = PCA().fit(Xf_train_scaled) # Transform Xf_train_scaled Xf_train_scaled_pca = pca_scaled.transform(Xf_train_scaled) # Transform Xf_test_scaled Xf_test_scaled_pca = pca_scaled.transform(Xf_test_scaled) # Plot the explained variance ratios plt.subplots(1, 2, figsize = (15, 5)) # Unscaled plt.subplot(1, 2, 1) plt.bar(np.arange(1, len(pca.explained_variance_ratio_)+1), pca.explained_variance_ratio_) plt.xlabel('Principal Component') plt.ylabel('Explained Variance Ratio') plt.title('PCA on Unscaled Features') plt.ylim(top = 0.6) # Equalizing the y-axes # Scaled plt.subplot(1, 2, 2) plt.bar(np.arange(1, len(pca_scaled.explained_variance_ratio_)+1), pca_scaled.explained_variance_ratio_) plt.xlabel('Principal Component') plt.ylabel('Explained Variance Ratio') plt.title('PCA on Scaled Features') plt.ylim(top = 0.6) # Equalizing the y-axes plt.tight_layout() plt.show() ``` These are the same trends I saw previously for male emotions. How much variance is explained by certain numbers of unscaled and scaled principal components? This will help me determine how many principal components to try in my grid searches later. ``` # Unscaled num_components = [503, 451, 401, 351, 301, 251, 201, 151, 101, 51] for n in num_components: print(f'Variance explained by {n-1} unscaled principal components: {np.round(np.sum(pca.explained_variance_ratio_[:n-1])*100, 2)}%') # Scaled num_components = [503, 451, 401, 351, 301, 251, 201, 151, 101, 51] for n in num_components: print(f'Variance explained by {n-1} scaled principal components: {np.round(np.sum(pca_scaled.explained_variance_ratio_[:n-1])*100, 2)}%') ``` Like before, I will now do a grid search for each classifier type, with five-fold cross-validation to optimize the hyperparameters. 
``` # Cache cachedir = mkdtemp() # Pipeline (these values are placeholders) my_pipeline = Pipeline(steps=[('scaler', StandardScaler()), ('dim_reducer', PCA()), ('model', LogisticRegression())], memory=cachedir) # Parameter grid for log reg logreg_param_grid = [ # l1 without PCA # unscaled and scaled * 9 regularization strengths = 18 models {'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(penalty='l1', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l1 unscaled with PCA # 6 PCAs * 9 regularization strengths = 54 models {'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 301, 50), 'model': [LogisticRegression(penalty='l1', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l1 scaled with PCA # 4 PCAs * 9 regularization strengths = 36 models {'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [LogisticRegression(penalty='l1', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l2 (default) without PCA # unscaled and scaled * 9 regularization strengths = 18 models {'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l2 (default) unscaled with PCA # 6 PCAs * 9 regularization strengths = 54 models {'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 301, 50), 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # l2 (default) scaled with PCA # 4 PCAs * 9 regularization strengths = 36 models {'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [LogisticRegression(solver='lbfgs', n_jobs=-1)], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]} ] # Instantiate the log reg grid search logreg_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=logreg_param_grid, cv=5, n_jobs=-1, verbose=5) # Fit the log reg grid search fitted_logreg_grid_ef = logreg_grid_search.fit(Xf_train, ef_train) # What was the best log reg? 
fitted_logreg_grid_ef.best_estimator_ print(f"The best log reg's accuracy on the training set: {fitted_logreg_grid_ef.score(Xf_train, ef_train)*100}%") print(f"The best log reg's accuracy on the test set: {fitted_logreg_grid_ef.score(Xf_test, ef_test)*100}%") # Pickling the best log reg joblib.dump(fitted_logreg_grid_ef.best_estimator_, 'pickle4_female_emotion_logreg.pkl') # Parameter grid for SVM svm_param_grid = [ # unscaled and scaled * 9 regularization strengths = 18 models {'scaler': [None, StandardScaler()], 'dim_reducer': [None], 'model': [SVC()], 'model__C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # unscaled # 6 PCAs * 9 regularization strengths = 54 models {'scaler': [None], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(50, 301, 50), 'model': [SVC()], 'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}, # scaled # 4 PCAs * 9 regularization strengths = 36 models {'scaler': [StandardScaler()], 'dim_reducer': [PCA()], 'dim_reducer__n_components': np.arange(200, 351, 50), 'model': [SVC()], 'model__C':[0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]} ] # Instantiate the SVM grid search svm_grid_search = GridSearchCV(estimator=my_pipeline, param_grid=svm_param_grid, cv=5, n_jobs=-1, verbose=5) # Fit the SVM grid search fitted_svm_grid_ef = svm_grid_search.fit(Xf_train, ef_train) # What was the best SVM? fitted_svm_grid_ef.best_estimator_ print(f"The best SVM's accuracy on the training set: {fitted_svm_grid_ef.score(Xf_train, ef_train)*100}%") print(f"The best SVM's accuracy on the test set: {fitted_svm_grid_ef.score(Xf_test, ef_test)*100}%") # Pickling the best SVM joblib.dump(fitted_svm_grid_ef.best_estimator_, 'pickle5_female_emotion_svm.pkl') ``` Like before, I will try loading in the pickled models and evaluate them using confusion matrices and classification reports. ``` # Loading in the best female emotion classifiers female_emotion_logreg = joblib.load('pickle4_female_emotion_logreg.pkl') female_emotion_svm = joblib.load('pickle5_female_emotion_svm.pkl') ef_pred = female_emotion_logreg.predict(Xf_test) # Building the confusion matrix as a dataframe emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised'] confusion_df = pd.DataFrame(confusion_matrix(ef_test, ef_pred)) confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions] confusion_df.index = [f'Actual {emotion}' for emotion in emotions] # Converting the above to precision scores confusion_df_pct = round(confusion_df/confusion_df.sum(axis = 0)*100, 2) # Heatmap of precision scores plt.figure(figsize=(12, 8)) sns.heatmap(confusion_df_pct, annot=True, linewidths=0.5) plt.xticks(rotation=45) plt.show() # Classification report print(classification_report(ef_test, ef_pred)) ef_pred = female_emotion_svm.predict(Xf_test) # Building the confusion matrix as a dataframe emotions = ['angry', 'calm', 'disgusted', 'fearful', 'happy', 'neutral', 'sad', 'surprised'] confusion_df = pd.DataFrame(confusion_matrix(ef_test, ef_pred)) confusion_df.columns = [f'Predicted {emotion}' for emotion in emotions] confusion_df.index = [f'Actual {emotion}' for emotion in emotions] # Converting the above to precision scores confusion_df_pct = round(confusion_df/confusion_df.sum(axis = 0)*100, 2) # Heatmap of precision scores plt.figure(figsize=(12, 8)) sns.heatmap(confusion_df_pct, annot=True, linewidths=0.5) plt.xticks(rotation=45) plt.show() # Classification report print(classification_report(ef_test, ef_pred)) ```
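The confusion matrix, heatmap, and classification report code above is repeated almost verbatim for each tuned model. A small helper along these lines (a sketch) could remove that duplication in a future revision: ``` def evaluate_classifier(model, X_test, y_test, labels):
    """Print a classification report and plot a column-normalized confusion matrix heatmap."""
    y_pred = model.predict(X_test)
    confusion = pd.DataFrame(confusion_matrix(y_test, y_pred),
                             columns=[f'Predicted {label}' for label in labels],
                             index=[f'Actual {label}' for label in labels])
    # Column-normalizing turns each diagonal entry into that class's precision (%)
    confusion_pct = round(confusion / confusion.sum(axis=0) * 100, 2)
    plt.figure(figsize=(12, 8))
    sns.heatmap(confusion_pct, annot=True, linewidths=0.5)
    plt.xticks(rotation=45)
    plt.show()
    print(classification_report(y_test, y_pred))

# For example:
evaluate_classifier(female_emotion_svm, Xf_test, ef_test, emotions)
```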
# Machine Translation and the Dataset
:label:`sec_machine_translation`

Language models are key to natural language processing, and *machine translation* is the most successful benchmark for them. After all, machine translation is precisely the core problem of *sequence transduction* models, which transform an input sequence into an output sequence. Sequence transduction models play a crucial role in many modern AI applications, so they will be the focus of the remainder of this chapter and of :numref:`chap_attention`. To that end, this section introduces the machine translation problem and the dataset used in later sections.

*Machine translation* refers to the automatic translation of a sequence from one language into another. In fact, this field of research dates back to the 1940s, shortly after digital computers were invented, particularly to the use of computers for cracking language codes during World War II. For decades, before the rise of end-to-end learning with neural networks, statistical approaches dominated the field :cite:`Brown.Cocke.Della-Pietra.ea.1988,Brown.Cocke.Della-Pietra.ea.1990`. Because *statistical machine translation* involves statistical analysis of components such as the translation model and the language model, the neural-network-based approaches are often called *neural machine translation* to distinguish the two. This book focuses on neural machine translation methods and emphasizes end-to-end learning.

Unlike the language modeling problem in :numref:`sec_language_model`, whose corpus is in a single language, machine translation datasets consist of pairs of text sequences in the source language and the target language. We therefore need a quite different approach for preprocessing machine translation datasets, rather than reusing the preprocessing routines for language modeling. Below, we show how to load the preprocessed data into minibatches for training.

```
import os
import tensorflow as tf
from d2l import tensorflow as d2l
```

## [**Downloading and Preprocessing the Dataset**]

First, we download an English-French dataset that consists of [bilingual sentence pairs from the Tatoeba Project](http://www.manythings.org/anki/). Each line in the dataset is a tab-delimited pair of an English text sequence and its translated French text sequence. Note that each text sequence can be a single sentence or a paragraph of multiple sentences. In this machine translation problem, where English is translated into French, English is the *source language* and French is the *target language*.

```
#@save
d2l.DATA_HUB['fra-eng'] = (d2l.DATA_URL + 'fra-eng.zip',
                           '94646ad1522d915e7b0f9296181140edcf86a4f5')

#@save
def read_data_nmt():
    """Load the English-French dataset"""
    data_dir = d2l.download_extract('fra-eng')
    with open(os.path.join(data_dir, 'fra.txt'), 'r', encoding='utf-8') as f:
        return f.read()

raw_text = read_data_nmt()
print(raw_text[:75])
```

After downloading the dataset, the raw text data needs [**several preprocessing steps**]. For example, we replace *non-breaking spaces* with spaces, convert uppercase letters to lowercase, and insert a space between words and punctuation marks.

```
#@save
def preprocess_nmt(text):
    """Preprocess the English-French dataset"""
    def no_space(char, prev_char):
        return char in set(',.!?') and prev_char != ' '

    # Replace non-breaking spaces with spaces
    # Convert uppercase letters to lowercase
    text = text.replace('\u202f', ' ').replace('\xa0', ' ').lower()
    # Insert a space between words and punctuation marks
    out = [' ' + char if i > 0 and no_space(char, text[i - 1]) else char
           for i, char in enumerate(text)]
    return ''.join(out)

text = preprocess_nmt(raw_text)
print(text[:80])
```

## [**Tokenization**]

Unlike the character-level tokenization in :numref:`sec_language_model`, for machine translation we prefer word-level tokenization (state-of-the-art models may use more advanced tokenization techniques). The `tokenize_nmt` function below tokenizes the first `num_examples` text sequence pairs, where each token is either a word or a punctuation mark. The function returns two lists of token lists, `source` and `target`: `source[i]` is the token list of the $i$-th text sequence in the source language (here, English), and `target[i]` is that of the target language (here, French).

```
#@save
def tokenize_nmt(text, num_examples=None):
    """Tokenize the English-French dataset"""
    source, target = [], []
    for i, line in enumerate(text.split('\n')):
        if num_examples and i > num_examples:
            break
        parts = line.split('\t')
        if len(parts) == 2:
            source.append(parts[0].split(' '))
            target.append(parts[1].split(' '))
    return source, target

source, target = tokenize_nmt(text)
source[:6], target[:6]
```

Let's [**plot a histogram of the number of tokens per text sequence**]. In this simple English-French dataset, most of the text sequences have fewer than $20$ tokens.

```
def show_list_len_pair_hist(legend, xlabel, ylabel, xlist, ylist):
    """Plot a histogram of list length pairs"""
    d2l.set_figsize()
    _, _, patches = d2l.plt.hist(
        [[len(l) for l in xlist], [len(l) for l in ylist]])
    d2l.plt.xlabel(xlabel)
    d2l.plt.ylabel(ylabel)
    for patch in patches[1].patches:
        patch.set_hatch('/')
    d2l.plt.legend(legend)

show_list_len_pair_hist(['source', 'target'], '# tokens per sequence',
                        'count', source, target);
```

## [**Vocabulary**]

Since the machine translation dataset consists of pairs of languages, we can build two vocabularies, one for the source language and one for the target language. With word-level tokenization, the vocabulary size will be significantly larger than with character-level tokenization. To mitigate this, we treat infrequent tokens that appear less than twice as the same unknown ("&lt;unk&gt;") token. Besides that, we specify additional special tokens, such as the padding token ("&lt;pad&gt;") used to pad sequences to the same length when batching, as well as the beginning-of-sequence ("&lt;bos&gt;") and end-of-sequence ("&lt;eos&gt;") tokens. Such special tokens are commonly used in natural language processing tasks.

```
src_vocab = d2l.Vocab(source, min_freq=2,
                      reserved_tokens=['<pad>', '<bos>', '<eos>'])
len(src_vocab)
```
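As a quick sanity check, the target-language vocabulary can be built in exactly the same way; a small sketch comparing the two sizes:

```
# Build the target-language vocabulary the same way and compare sizes
tgt_vocab = d2l.Vocab(target, min_freq=2,
                      reserved_tokens=['<pad>', '<bos>', '<eos>'])
len(src_vocab), len(tgt_vocab)
```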
## Loading the Dataset
:label:`subsec_mt_data_loading`

Recall that in language modeling [**each sequence example has a fixed length**], whether it is a segment of one sentence or a span over multiple sentences. This fixed length was specified by the `num_steps` (number of time steps or tokens) argument in :numref:`sec_language_model`. In machine translation, each example is a pair of source and target text sequences, and each of these text sequences may have a different length.

For computational efficiency, we can still process a minibatch of text sequences at a time by means of *truncation* and *padding*. Suppose that every sequence in the same minibatch should have the same length `num_steps`. If a text sequence has fewer than `num_steps` tokens, we keep appending the special "&lt;pad&gt;" token to its end until its length reaches `num_steps`; otherwise, we truncate the text sequence by taking only its first `num_steps` tokens and discarding the rest. In this way, every text sequence has the same length, so they can be loaded in minibatches of the same shape.

As described above, the `truncate_pad` function below (**truncates or pads text sequences**).

```
#@save
def truncate_pad(line, num_steps, padding_token):
    """Truncate or pad text sequences"""
    if len(line) > num_steps:
        return line[:num_steps]  # Truncate
    return line + [padding_token] * (num_steps - len(line))  # Pad

truncate_pad(src_vocab[source[0]], 10, src_vocab['<pad>'])
```

Now we define a function to [**transform text sequences into minibatches for training**]. We append the special "&lt;eos&gt;" token to the end of every sequence to indicate the end of the sequence. When a model predicts by generating a sequence token by token, the generation of an "&lt;eos&gt;" token signals that the output sequence is complete. We also record the length of each text sequence, excluding the padding tokens; some models that we will introduce later will need this length information.

```
#@save
def build_array_nmt(lines, vocab, num_steps):
    """Transform text sequences of machine translation into minibatches"""
    lines = [vocab[l] for l in lines]
    lines = [l + [vocab['<eos>']] for l in lines]
    array = tf.constant([truncate_pad(
        l, num_steps, vocab['<pad>']) for l in lines])
    valid_len = tf.reduce_sum(
        tf.cast(array != vocab['<pad>'], tf.int32), 1)
    return array, valid_len
```

## [**Putting It All Together**]

Finally, we define the `load_data_nmt` function to return the data iterator, together with the vocabularies of both the source language and the target language.

```
#@save
def load_data_nmt(batch_size, num_steps, num_examples=600):
    """Return the iterator and the vocabularies of the translation dataset"""
    text = preprocess_nmt(read_data_nmt())
    source, target = tokenize_nmt(text, num_examples)
    src_vocab = d2l.Vocab(source, min_freq=2,
                          reserved_tokens=['<pad>', '<bos>', '<eos>'])
    tgt_vocab = d2l.Vocab(target, min_freq=2,
                          reserved_tokens=['<pad>', '<bos>', '<eos>'])
    src_array, src_valid_len = build_array_nmt(source, src_vocab, num_steps)
    tgt_array, tgt_valid_len = build_array_nmt(target, tgt_vocab, num_steps)
    data_arrays = (src_array, src_valid_len, tgt_array, tgt_valid_len)
    data_iter = d2l.load_array(data_arrays, batch_size)
    return data_iter, src_vocab, tgt_vocab
```

Let's [**read the first minibatch from the English-French dataset**].

```
train_iter, src_vocab, tgt_vocab = load_data_nmt(batch_size=2, num_steps=8)
for X, X_valid_len, Y, Y_valid_len in train_iter:
    print('X:', tf.cast(X, tf.int32))
    print('valid lengths for X:', X_valid_len)
    print('Y:', tf.cast(Y, tf.int32))
    print('valid lengths for Y:', Y_valid_len)
    break
```

## Summary

* Machine translation refers to the automatic translation of a text sequence from one language into another.
* With word-level tokenization, the vocabulary size is significantly larger than with character-level tokenization. To mitigate this, we can treat infrequent tokens as the same unknown token.
* By truncating and padding text sequences, we can ensure that all of them have the same length, so that they can be loaded in minibatches.

## Exercises

1. Try different values of the `num_examples` argument in the `load_data_nmt` function. How does this affect the vocabulary sizes of the source language and the target language?
1. Text in some languages such as Chinese and Japanese does not have word boundary indicators (e.g., spaces). Is word-level tokenization still a good idea in such cases? Why or why not?
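As a starting point for the first exercise, a small sketch that loops over a few `num_examples` values and compares the resulting vocabulary sizes:

```
for num_examples in (300, 600, 1200):
    # Rebuild the dataset with a different number of sentence pairs
    _, src_vocab_n, tgt_vocab_n = load_data_nmt(
        batch_size=2, num_steps=8, num_examples=num_examples)
    print(num_examples, len(src_vocab_n), len(tgt_vocab_n))
```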
# Evaluation Run Log Analysis and Visualization for AWS DeepRacer

This notebook walks through how you can analyze and debug your model using the AWS DeepRacer simulation logs:

```
1. Tools to find best iteration of your model
2. Visualize reward distribution on the track
    2.1 Visualize reward heatmap per episode or iteration
3. Identify hotspots on the track for your model
4. Understand probability distributions on simulated images
5. Evaluation run analysis - plot lap speed heatmap
```

## Requirements

boto3 >= 1.9.133; configure your AWS CLI and/or boto credentials file

AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html

Boto Configuration: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
%matplotlib inline

# Shapely Library
from shapely.geometry import Point, Polygon
from shapely.geometry.polygon import LinearRing, LineString

import track_utils as tu
import log_analysis as la
import cw_utils as cw

# Make sure your boto version is >= '1.9.133'
cw.boto3.__version__

# reload log_analysis here if needed
# I use this when I update python files in the log-analysis folder
import importlib
importlib.reload(la)
importlib.reload(cw)
importlib.reload(tu)
```

## Load waypoints for the track you want to run analysis on

Tracks available:

```
!ls tracks/

l_center_line, l_inner_border, l_outer_border, road_poly = tu.load_track("reinvent_base")

road_poly
```

## Load all evaluation logs

**WARNING:** If you do not specify the `not_older_than` parameter, all evaluation logs will be downloaded. They aren't as big as the training logs, but there are a lot of them. That said, once you have downloaded them all, subsequent runs will only download new ones, unless you use `force=True`.

There are also `not_older_than` and `older_than` parameters, so you can choose to fetch all logs from a given period and compare them against each other. Just remember that memory is finite.

As mentioned, this method always fetches a list of log streams and then downloads only the ones that haven't been downloaded just yet. You can therefore use it to fetch that list and load all the files from the path provided.

It's good to keep things organised: group your files into folders so you don't lose track of where they came from. Replace `SELECT_YOUR_FOLDER` with a path matching your preference.

```
logs = cw.download_all_logs('logs/SELECT_YOUR_FOLDER/race/deepracer-eval-',
                            '/aws/deepracer/leaderboard/SimulationJobs',
                            not_older_than="2019-07-01 07:00",
                            older_than="2019-07-01 12:00")

# Loads all the logs from above
bulk = la.load_eval_logs(logs)

simulation_agg = la.simulation_agg(bulk, 'stream', add_timestamp=True, is_eval=True)
complete_ones = simulation_agg[simulation_agg['progress']==100]

# This gives the warning about ptp method deprecation. The code looks as if np.ptp was used; I don't know how to fix it.
la.scatter_aggregates(simulation_agg, is_eval=True)
if complete_ones.shape[0] > 0:
    la.scatter_aggregates(complete_ones, "Complete ones", is_eval=True)

# highest-progress attempts
simulation_agg.nlargest(15, 'progress')

# fastest complete laps
complete_ones.nsmallest(15, 'time')

# 10 most recent lap attempts
simulation_agg.nlargest(10, 'timestamp')
```
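On top of the aggregates above, one more view I find handy (a small sketch using the same `stream` and `time` columns): the best complete lap per log stream.

```
# Best (lowest) complete lap time per log stream
if complete_ones.shape[0] > 0:
    print(complete_ones.groupby('stream')['time'].min().sort_values().head(10))
```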
While preparing this presentation I have noticed that my half-finished evaluation lap had distance pretty much same to a complete one which is wrong for sure. I don't have an explanation for that. Verifications of the method are very much welcome. If you would like to, below you can load a single log file to evaluate this. I don't have the track info for evaluation tracks so I am using the training ones. They aren't much different most of the time anyway. ``` la.plot_evaluations(bulk, l_inner_border, l_outer_border) ``` ## Evaluation Run Analyis Debug your evaluation runs or analyze the laps ``` eval_sim = 'sim-sample' eval_fname = 'logs//deepracer-eval-%s.log' % eval_sim cw.download_log(eval_fname, stream_prefix=eval_sim) !head $eval_fname eval_df = la.load_eval_data(eval_fname) eval_df.head() ``` # Single lap Below you will find some ideas of looking at a single evaluation lap. You may be interested in a specific part of it. This isn't very robust but can work as a starting point. Please submit your ideas for analysis. This place is a great chance to learn more about [Pandas](https://pandas.pydata.org/pandas-docs/stable/) and about how to process data series. ``` # Load a single lap lap_df = eval_df[eval_df['episode']==0] ``` We're adding a lot of columns here to the episode. To speed things up, it's only done per a single episode, so thers will currently be missing this information. Now try using them as a graphed value. ``` lap_df.loc[:,'distance']=((lap_df['x'].shift(1)-lap_df['x']) ** 2 + (lap_df['y'].shift(1)-lap_df['y']) ** 2) ** 0.5 lap_df.loc[:,'time']=lap_df['timestamp'].astype(float)-lap_df['timestamp'].shift(1).astype(float) lap_df.loc[:,'speed']=lap_df['distance']/(100*lap_df['time']) lap_df.loc[:,'acceleration']=(lap_df['distance']-lap_df['distance'].shift(1))/lap_df['time'] lap_df.loc[:,'progress_delta']=lap_df['progress'].astype(float)-lap_df['progress'].shift(1).astype(float) lap_df.loc[:,'progress_delta_per_time']=lap_df['progress_delta']/lap_df['time'] la.plot_grid_world(lap_df, l_inner_border, l_outer_border, graphed_value='reward') ``` ## Grid World Analysis Understand the speed of the car along with the path on a per episode basis. This can help you debug portions of the track where the car may not be going fast. Hence giving you hints on how to improve your reward function. ``` la.analyse_single_evaluation(eval_fname, l_inner_border, l_outer_border, episodes=3) ```
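Following up on the suggestion above to try the newly added columns as a graphed value, here is a minimal sketch of what that could look like. It assumes `plot_grid_world` accepts any column present in `lap_df` for `graphed_value`; only `'reward'` is shown in the original run, so treat this as an unverified variation:

```
# Hypothetical variation: colour the grid world by the derived 'speed' column
# added to lap_df above, instead of the logged reward.
la.plot_grid_world(lap_df, l_inner_border, l_outer_border, graphed_value='speed')
```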
___

<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# Principal Component Analysis

Let's discuss PCA! Since this isn't exactly a full machine learning algorithm, but instead an unsupervised learning algorithm, we will just have a lecture on this topic, but no full machine learning project (although we will walk through the cancer set with PCA).

## PCA Review

Make sure to watch the video lecture and theory presentation for a full overview of PCA!
Remember that PCA is just a transformation of your data that attempts to find out which features explain the most variance in your data. For example:

<img src='PCA.png' />

## Libraries

```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
```

## The Data

Let's work with the cancer data set again since it had so many features.

```
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()

cancer.keys()

print(cancer['DESCR'])

df = pd.DataFrame(cancer['data'],columns=cancer['feature_names'])
#(['DESCR', 'data', 'feature_names', 'target_names', 'target'])

df.head()
```

## PCA Visualization

As we've noticed before, it is difficult to visualize high-dimensional data. We can use PCA to find the first two principal components and visualize the data in this new, two-dimensional space with a single scatter plot. Before we do this, though, we'll need to scale our data so that each feature has unit variance.

```
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(df)

scaled_data = scaler.transform(df)
```

PCA with Scikit-Learn uses a very similar process to its other preprocessing functions. We instantiate a PCA object, find the principal components using the fit method, then apply the rotation and dimensionality reduction by calling transform(). We can also specify how many components we want to keep when creating the PCA object.

```
from sklearn.decomposition import PCA

pca = PCA(n_components=2)

pca.fit(scaled_data)
```

Now we can transform this data to its first 2 principal components.

```
x_pca = pca.transform(scaled_data)

scaled_data.shape

x_pca.shape
```

Great! We've reduced 30 dimensions to just 2! Let's plot these two dimensions out!

```
plt.figure(figsize=(8,6))
plt.scatter(x_pca[:,0],x_pca[:,1],c=cancer['target'],cmap='plasma')
plt.xlabel('First principal component')
plt.ylabel('Second Principal Component')
```

Clearly, by using these two components we can easily separate these two classes.

## Interpreting the components

Unfortunately, with this great power of dimensionality reduction comes the cost of being able to easily understand what these components represent.

The components correspond to combinations of the original features, and the components themselves are stored as an attribute of the fitted PCA object:

```
pca.components_
```

In this NumPy array, each row represents a principal component, and each column relates back to the original features. We can visualize this relationship with a heatmap:

```
df_comp = pd.DataFrame(pca.components_,columns=cancer['feature_names'])

plt.figure(figsize=(12,6))
sns.heatmap(df_comp,cmap='plasma',)
```

This heatmap and the color bar basically represent the correlation between the various features and the principal components themselves.

## Conclusion

Hopefully this information is useful to you when dealing with high-dimensional data!

# Great Job!
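As a quick follow-up to the claim that the first two components separate the classes well, it can help to check how much of the total variance those components actually capture. This is a minimal sketch using scikit-learn's standard `explained_variance_ratio_` attribute on the `pca` object fitted above:

```
# Fraction of the total variance captured by each of the two components,
# and their combined share.
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())
```

For the scaled cancer data the two components should capture a substantial share of the variance (on the order of 60%), which is why the 2D scatter plot is already so informative.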
# AWS Elastic Kubernetes Service (EKS) Deep MNIST

In this example we will deploy a tensorflow MNIST model in Amazon Web Services' Elastic Kubernetes Service (EKS).

This tutorial breaks down into the following sections:

1) Train a tensorflow model to predict mnist locally

2) Containerise the tensorflow model with our docker utility

3) Send some data to the docker model to test it

4) Install and configure AWS tools to interact with AWS

5) Use the AWS tools to create and set up an EKS cluster with Seldon

6) Push and run the docker image through the AWS Container Registry

7) Test our Elastic Kubernetes deployment by sending some data

#### Let's get started! 🚀🔥

## Dependencies:

* Helm v2.13.1+
* A Kubernetes cluster running v1.13 or above (minikube / docker-for-windows work well if given enough RAM)
* kubectl v1.14+
* EKS CLI v0.1.32
* AWS CLI v1.16.163
* Python 3.6+
* Python DEV requirements

## 1) Train a tensorflow model to predict mnist locally

We will load the mnist images, together with their labels, and then train a tensorflow model to predict the right labels

```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
import tensorflow as tf

if __name__ == '__main__':

    x = tf.placeholder(tf.float32, [None,784], name="x")

    W = tf.Variable(tf.zeros([784,10]))
    b = tf.Variable(tf.zeros([10]))

    y = tf.nn.softmax(tf.matmul(x,W) + b, name="y")

    y_ = tf.placeholder(tf.float32, [None, 10])

    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    init = tf.initialize_all_variables()

    sess = tf.Session()
    sess.run(init)

    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict = {x: mnist.test.images, y_:mnist.test.labels}))

    saver = tf.train.Saver()

    saver.save(sess, "model/deep_mnist_model")
```

## 2) Containerise the tensorflow model with our docker utility

First you need to make sure that you have added the .s2i/environment configuration file in this folder with the following content:

```
!cat .s2i/environment
```

Now we can build a docker image named "deep-mnist" with the tag 0.1

```
!s2i build . seldonio/seldon-core-s2i-python36:0.12 deep-mnist:0.1
```

## 3) Send some data to the docker model to test it

We first run the docker image we just created as a container called "mnist_predictor"

```
!docker run --name "mnist_predictor" -d --rm -p 5000:5000 deep-mnist:0.1
```

Send some random features that conform to the contract

```
import matplotlib.pyplot as plt
import numpy as np  # needed below for np.sum

# This is the variable that was initialised at the beginning of the file
i = [0]
x = mnist.test.images[i]
y = mnist.test.labels[i]
plt.imshow(x.reshape((28, 28)), cmap='gray')
plt.show()
print("Expected label: ", np.sum(range(0,10) * y), ". One hot encoding: ", y)

from seldon_core.seldon_client import SeldonClient
import math

# We now test the REST endpoint expecting the same result
endpoint = "0.0.0.0:5000"
batch = x
payload_type = "ndarray"

sc = SeldonClient(microservice_endpoint=endpoint)

# We use the microservice, instead of the "predict" function
client_prediction = sc.microservice(
    data=batch,
    method="predict",
    payload_type=payload_type,
    names=["tfidf"])

for proba, label in zip(client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1], range(0,10)):
    print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %")

!docker rm mnist_predictor --force
```

## 4) Install and configure AWS tools to interact with AWS

First we install the awscli

```
!pip install awscli --upgrade --user
```

#### Configure aws so it can talk to your server (if you are getting issues, make sure you have the permissions to create clusters)

```
%%bash
# You must make sure that the access key and secret are changed
aws configure << END_OF_INPUTS
YOUR_ACCESS_KEY
YOUR_ACCESS_SECRET
us-west-2
json
END_OF_INPUTS
```

#### Install eksctl

*IMPORTANT*: These instructions are for Linux. Please follow the official installation of eksctl at: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

```
!curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz

!chmod 755 ./eksctl

!./eksctl version
```

## 5) Use the AWS tools to create and set up an EKS cluster with Seldon

In this example we will create a cluster with 2 nodes, with a minimum of 1 and a max of 3. You can tweak this accordingly.

If you want to check the status of the deployment you can go to AWS CloudFormation or to the EKS dashboard.

It will take 10-15 minutes (so feel free to go grab a ☕).

### IMPORTANT: If you get errors in this step...

It is most probably due to IAM role access requirements, which you will need to discuss with your administrator.

```
%%bash
./eksctl create cluster \
--name demo-eks-cluster \
--region us-west-2 \
--nodes 2
```

### Configure local kubectl

We now want to configure our local kubectl so we can actually reach the cluster we've just created

```
!aws eks --region us-west-2 update-kubeconfig --name demo-eks-cluster
```

And we can check if the context has been added to the kubectl config (contexts are basically the different k8s cluster connections).
You should be able to see the context as "...aws:eks:eu-west-1:27...". If it's not activated, you can activate that context with kubectl config set-context <CONTEXT_NAME>

```
!kubectl config get-contexts
```

## Install Seldon Core

### Before we install Seldon Core, we need to install Helm

For that, we need to create a ClusterRoleBinding for us, a ServiceAccount, and then a RoleBinding

```
!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

!kubectl create serviceaccount tiller --namespace kube-system

!kubectl apply -f tiller-role-binding.yaml
```

### Once that is set up we can install Tiller

```
!helm init --service-account tiller

# Wait until Tiller finishes
!kubectl rollout status deploy/tiller-deploy -n kube-system
```

### Now we can install Seldon.
We first start with the custom resource definitions (CRDs) ``` !helm install seldon-core-operator --name seldon-core-operator --repo https://storage.googleapis.com/seldon-charts --set usageMetrics.enabled=true --namespace seldon-system ``` And confirm they are running by getting the pods: ``` !kubectl rollout status deploy/seldon-controller-manager -n seldon-system ``` ### Now we set-up the ingress This will allow you to reach the Seldon models from outside the kubernetes cluster. In EKS it automatically creates an Elastic Load Balancer, which you can configure from the EC2 Console ``` !helm install stable/ambassador --name ambassador --set crds.keep=false ``` And let's wait until it's fully deployed ``` !kubectl rollout status deployment.apps/ambassador ``` ## Push docker image In order for the EKS seldon deployment to access the image we just built, we need to push it to the Elastic Container Registry (ECR). If you have any issues please follow the official AWS documentation: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.html ### First we create a registry You can run the following command, and then see the result at https://us-west-2.console.aws.amazon.com/ecr/repositories?# ``` !aws ecr create-repository --repository-name seldon-repository --region us-west-2 ``` ### Now prepare docker image We need to first tag the docker image before we can push it ``` %%bash export AWS_ACCOUNT_ID="" export AWS_REGION="us-west-2" if [ -z "$AWS_ACCOUNT_ID" ]; then echo "ERROR: Please provide a value for the AWS variables" exit 1 fi docker tag deep-mnist:0.1 "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/seldon-repository" ``` ### We now login to aws through docker so we can access the repository ``` !`aws ecr get-login --no-include-email --region us-west-2` ``` ### And push the image Make sure you add your AWS Account ID ``` %%bash export AWS_ACCOUNT_ID="" export AWS_REGION="us-west-2" if [ -z "$AWS_ACCOUNT_ID" ]; then echo "ERROR: Please provide a value for the AWS variables" exit 1 fi docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/seldon-repository" ``` ## Running the Model We will now run the model. Let's first have a look at the file we'll be using to trigger the model: ``` !cat deep_mnist.json ``` Now let's trigger seldon to run the model. We basically have a yaml file, where we want to replace the value "REPLACE_FOR_IMAGE_AND_TAG" for the image you pushed ``` %%bash export AWS_ACCOUNT_ID="" export AWS_REGION="us-west-2" if [ -z "$AWS_ACCOUNT_ID" ]; then echo "ERROR: Please provide a value for the AWS variables" exit 1 fi sed 's|REPLACE_FOR_IMAGE_AND_TAG|'"$AWS_ACCOUNT_ID"'.dkr.ecr.'"$AWS_REGION"'.amazonaws.com/seldon-repository|g' deep_mnist.json | kubectl apply -f - ``` And let's check that it's been created. You should see an image called "deep-mnist-single-model...". We'll wait until STATUS changes from "ContainerCreating" to "Running" ``` !kubectl get pods ``` ## Test the model Now we can test the model, let's first find out what is the URL that we'll have to use: ``` !kubectl get svc ambassador -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' ``` We'll use a random example from our dataset ``` import matplotlib.pyplot as plt # This is the variable that was initialised at the beginning of the file i = [0] x = mnist.test.images[i] y = mnist.test.labels[i] plt.imshow(x.reshape((28, 28)), cmap='gray') plt.show() print("Expected label: ", np.sum(range(0,10) * y), ". 
One hot encoding: ", y) ``` We can now add the URL above to send our request: ``` from seldon_core.seldon_client import SeldonClient import math import numpy as np host = "a68bbac487ca611e988060247f81f4c1-707754258.us-west-2.elb.amazonaws.com" port = "80" # Make sure you use the port above batch = x payload_type = "ndarray" sc = SeldonClient( gateway="ambassador", ambassador_endpoint=host + ":" + port, namespace="default", oauth_key="oauth-key", oauth_secret="oauth-secret") client_prediction = sc.predict( data=batch, deployment_name="deep-mnist", names=["text"], payload_type=payload_type) print(client_prediction) ``` ### Let's visualise the probability for each label It seems that it correctly predicted the number 7 ``` for proba, label in zip(client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1], range(0,10)): print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %") ```
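One step the walkthrough above does not cover is tearing everything down again once you are done, which avoids ongoing EKS and load-balancer charges. A possible cleanup, reusing the names chosen earlier in this notebook (adjust if you changed them), could look like this:

```
# Remove the Seldon deployment, then the EKS cluster and the ECR repository created above.
!kubectl delete -f deep_mnist.json
!./eksctl delete cluster --name demo-eks-cluster --region us-west-2
!aws ecr delete-repository --repository-name seldon-repository --region us-west-2 --force
```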
# This program merges all 10 files and writes the final analysed file

```
import pandas as pd
import numpy as np
import os

data_files=os.listdir("../snp_analysis")
data_files[-1],len(data_files)

case =[]
control =[]

#before running the loop, check the files and folders
for i in data_files:
    if i[-5]=="e":
        case.append(i)
    elif i[-5]=="l":
        control.append(i)
#We know that there are 20 Files
# For loop for all data sets

len(control)
#We can see the control list has 11 entries, so there is a folder named control in it
```

# One File of All Case SNPs

```
df1 = pd.read_csv("../snp_analysis/"+case[0])
for data in case:
    print("Reading .."+data, end = "-->")
    if data == case[0]:
        continue  # the first file is already loaded into df1
    df = pd.read_csv("../snp_analysis/"+data)
    df1 = df1.merge(df,how = "outer")
df1 = df1.sort_values(by="POS")
df1 = df1.drop_duplicates()
df1.to_csv("../snp_analysis/merged_files/case_snp.csv",index=False,encoding='utf-8',header=True)
print("All Task Done")
```

# One File of All Controls

```
df1 = pd.read_csv("../snp_analysis/"+control[0])
for data in control:
    if data[-3:] != "csv":
        print(data+"skipped")
        continue  # skip anything that is not a csv file (e.g. the control folder)
    print("Reading .."+data, end = "-->")
    if data == control[0]:
        continue  # the first file is already loaded into df1
    df = pd.read_csv("../snp_analysis/"+data)
    df1 = df1.merge(df,how = "outer")
df1 = df1.sort_values(by="POS")
df1.to_csv("../snp_analysis/merged_files/control_snp.csv",index=False,encoding='utf-8',header=True)
print("All Task Done")
```

# One File of Case_Control

```
df_case = pd.read_csv("../snp_analysis/merged_files/case_snp.csv")
df_control = pd.read_csv("../snp_analysis/merged_files/control_snp.csv")
df_all = df_case.merge(df_control,how = "outer")
df_all.to_csv("../snp_analysis/merged_files/case_control.csv",index=False,encoding='utf-8',header=True)

#df1 = pd.read_csv("../snp_analysis/873_case.csv")
#df2 = pd.read_csv("../snp_analysis/880_case.csv")
#df1.shape,df2.shape
#df1 = df1.head(100)
#df2 = df2.head(100)
#df1.head(10)
#df2.head(10)
#df_new = df1.merge(df2,how = "outer")
#df_new.sort_values(by="POS")

import pandas as pd

#df1 = pd.read_csv("../snp_analysis/873_case.csv")
#df1 = df1.head(500)
#df2 = pd.read_csv("../snp_analysis/1134_control.csv")
#df2 = df2.head(500)
df1 = df2 =0

case = pd.read_csv("../snp_analysis/merged_files/case_snp.csv")
#case = case.head(1000)
control = pd.read_csv("../snp_analysis/merged_files/control_snp.csv")
#combined = pd.read_csv("../snp_analysis/merged_files/case_control.csv")
#combined.shape
#control.head(15)
```

# case.shape

```
case.shape,control.shape

case.columns

case[case["POS"]==157895049]

control[control["ID"]== "rs2963463"]

case.head(15)

control.head(15)
```
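The repeated outer merges above can get slow on large SNP tables. If all the per-sample files share the same columns, a possible alternative (not part of the original workflow) is to stack them with `pd.concat` and de-duplicate once at the end. The sketch below uses the `case` list of file names built at the top of this notebook; note that the name `case` is later reused for a DataFrame, so run this before that reassignment:

```
# Hypothetical alternative: stack all case files, then drop exact duplicates once.
import pandas as pd

case_frames = [pd.read_csv("../snp_analysis/" + name) for name in case]
case_all = (pd.concat(case_frames, ignore_index=True)
              .drop_duplicates()
              .sort_values(by="POS"))
case_all.to_csv("../snp_analysis/merged_files/case_snp.csv", index=False)
```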
# SWI - single layer

```
%matplotlib inline
import os
import sys
import numpy as np
import flopy.modflow as mf
import flopy.utils as fu
import matplotlib.pyplot as plt
```

## Paths

```
os.chdir('C:\\Users\\Bas\\Google Drive\\USGS\\FloPy\\slope1D')
sys.path.append('C:\\Users\\Bas\\Google Drive\\USGS\\FloPy\\basScript') # location of gridObj

modelname = 'run1swi2'
exe_name = 'mf2005'
workspace = 'data'
```

## Model input

```
ml = mf.Modflow(modelname, exe_name=exe_name, model_ws=workspace)
```

### Test variables

```
tscale = 365.*2000
nstp = tscale/100.   #[]
perlen = tscale*1.   #[d]
ssz = 0.2            #[]
Q = 0.01             #[m3/d]
```

### Discretization data

```
nlay = 1
nrow = 1
ncol = 5
delr = 1.
delc = 1.
dell = 1.

top = np.array([[-1.,-1.,-1., -0.7, -0.4]], dtype = np.float32)
bot = np.array(top-dell, dtype = np.float32).reshape((nlay,nrow,ncol))
initWL = 0. # initial water level
```

### BCN: WEL

```
lrcQ1 = np.recarray(1, dtype = mf.ModflowWel.get_default_dtype())
lrcQ1[0] = (0, 0, 4, Q) #LRCQ, Q[m**3/d]
```

### BCN: GHB

```
# GHB cells in columns 0-2
lrchc = np.recarray(3, dtype = mf.ModflowGhb.get_default_dtype())
lrchc[0]=(0, 0, 0, -top[0,0]*0.025, 0.8 / 2.0 * delc)
lrchc[1]=(0, 0, 1, -top[0,0]*0.025, 0.8 / 2.0 * delc)
lrchc[2]=(0, 0, 2, -top[0,0]*0.025, 0.8 / 2.0 * delc)
```

### SWI2

```
#zini = bot[0,:,:]*-1.5
zini = -2.*np.ones((nrow,ncol))
#zini=bot
isource = np.array([[-2,2,-2, 0, 0]])

print(zini.shape, isource.shape)
```

### Model objects

```
ml = mf.Modflow(modelname, version='mf2005', exe_name=exe_name)
discret = mf.ModflowDis(ml, nrow=nrow, ncol=ncol, nlay=nlay, delr=delr, delc=delc,
                        laycbd=[0], top=top, botm=bot,
                        nper=1, perlen=perlen, nstp=nstp)
bas = mf.ModflowBas(ml, ibound=1, strt=1.0*0.025)
bcf = mf.ModflowBcf(ml, laycon=[0], tran=[4.0])
wel = mf.ModflowWel(ml, stress_period_data={0:lrcQ1})
ghb = mf.ModflowGhb(ml, stress_period_data={0:lrchc})
swi = mf.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=0.02, tipslope=0.04, nu=[0, 0.025],
                     zeta=[zini], ssz=ssz, isource=isource, nsolver=1)
oc = mf.ModflowOc(ml, save_head_every=nstp)
pcg = mf.ModflowPcg(ml)

ml.write_input() #--write the model files
```

## Run the model

```
m = ml.run_model(silent=True, report=True)
```

## Read the output

Only one head entry and one zeta entry are in the binary files.

```
headfile = modelname + '.hds'
hdobj = fu.HeadFile(headfile)
head = hdobj.get_data(idx=0)

zetafile = modelname + '.zta'
zobj = fu.CellBudgetFile(zetafile)
zeta = zobj.get_data(idx=0, text=' ZETASRF 1')[0]
```

## Plot

```
import gridobj as grd
gr = grd.gridobj(discret)

print(zini)
print('head:     ', head[0, 0, :])
print('BGH head: ', - 40. * (head[0, 0, :]))

fig = plt.figure(figsize=(20, 8), dpi=300, facecolor='w', edgecolor='k')
ax = fig.add_subplot(111)
ax.plot(gr.cm,top.squeeze(),drawstyle='steps-mid', linestyle='--', linewidth=3. )
ax.plot(gr.cm,bot.squeeze(),drawstyle='steps-mid', linestyle='--', linewidth=3. )
ax.plot(gr.cm,zini[0,:], drawstyle='steps-mid',label='Initial')
ax.plot(gr.cm,zeta[0,0,:],drawstyle='steps-mid', label='SWI2')
ax.plot(gr.cm,head[0, 0, :], label='freshw head')
ax.plot(gr.cm,top[0]- 40. * (head[0, 0, :]), label='Ghyben-Herzberg')
ax.axis(gr.limLC([0,0,-0.2,0.2]))
leg = ax.legend(loc='lower left', numpoints=1)
leg._drawFrame = False

print(np.sum(zeta[0,0,:3])+4.5)
print(np.sum(zeta[0,0,3:4])+1.5)

plt.plot(gr.cGr[0,:-1],zini[0,:]-zeta[0,0,:])
```
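As an optional check on the run above (not in the original notebook), flopy's head-file object can report which records were actually written to the binary output before a single record is pulled with `idx=0`:

```
# List the simulation times and (time step, stress period) pairs saved in the heads file.
print(hdobj.get_times())
print(hdobj.get_kstpkper())
```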
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Definitions
pd.set_option('display.float_format', lambda x: '%.3f' % x)
njobs = -1
randomState = 0

%matplotlib inline

#data from previous step
data_acc = pd.read_csv('data_acc.csv', dtype={'LSOA_of_Accident_Location': str})
data_cas = pd.read_csv('data_cas.csv')
data_mak = pd.read_csv('data_mak.csv')

data_acc.columns

data_mak.head()

data_cas.head()
```

```
#I will include a pedestrian factor, since pedestrian circumstances are related to my hypothesis.
data_cas['include_pedestrian'] = data_cas['Casualty_Class'].map(lambda x: 1 if x==3 else 0)

cas_mak = pd.merge(data_cas, data_mak, on= ['Accident_Index', 'Vehicle_Reference'], how = 'left')
```

```
data = pd.merge(cas_mak,data_acc, on = 'Accident_Index', how = 'left')

unique_index = data['Accident_Index'].nunique()
unique_index
```

```
data = data.set_index('Accident_Index')

data.columns

data = data.drop(['Vehicle_Reference', 'Casualty_Reference', 'Sex_of_Casualty',
       'Age_of_Casualty', 'Age_Band_of_Casualty', 'Casualty_Severity', 'Date',
       'Car_Passenger', 'Bus_or_Coach_Passenger', 'Pedestrian_Road_Maintenance_Worker',
       'Local_Authority_(District)', 'Local_Authority_(Highway)', 'Accident_Severity',
       'Towing_and_Articulation','Vehicle_Manoeuvre', 'Vehicle_Location-Restricted_Lane',
       'Skidding_and_Overturning','Hit_Object_in_Carriageway','Vehicle_Leaving_Carriageway',
       'Hit_Object_off_Carriageway', '1st_Point_of_Impact', 'Was_Vehicle_Left_Hand_Drive',
       'Engine_Capacity_(CC)','Date','Propulsion_Code','Driver_Home_Area_Type',
       'make', 'Time', 'Local_Authority_(District)', 'Local_Authority_(Highway)',
       'Date_month', 'LSOA_of_Accident_Location', 'Casualty_Home_Area_Type'], axis = 1)
```

```
data.shape
```

```
#get rid of something else
data = data.drop(['Vehicle_Type','Casualty_Class', 'Pedestrian_Location', 'Pedestrian_Movement',
       'Casualty_Type', 'Class_Cas', 'Junction_Location', 'Vehicle_age_group',
       'Journey_Purpose_of_Driver', 'Sex_of_Driver', 'Junction_Control'], axis = 1)

data.shape

data.columns
```

```
wea = data.groupby('Accident_Index', as_index=True).agg({'Casualty_IMD_Decile' : 'mean',
                                                         'Age_Band_of_Driver' : 'mean',
                                                         'Driver_IMD_Decile': 'mean',
                                                         'Age_of_Vehicle' : 'max',
                                                         'include_pedestrian' : 'max'})
```

```
#merge with accidents table
#data_acc = data_acc.set_index('Accident_Index')

new_data_acc = data_acc.drop([ 'Junction_Control','Local_Authority_(District)', 'Local_Authority_(Highway)',
       'Accident_Severity','Date', 'Time', 'Local_Authority_(District)',
       'Local_Authority_(Highway)', 'Date_month',], axis = 1)

data_acc.columns

data = pd.merge(wea, new_data_acc, on='Accident_Index')

data = data.set_index('Accident_Index')

data.columns

data.shape
```

```
#into groups
data['Casualty_IMD_Decile'] = np.round(data['Casualty_IMD_Decile'], 0)
data['Age_Band_of_Driver'] = np.round(data['Age_Band_of_Driver'], 0)
data['Driver_IMD_Decile'] = np.round(data['Driver_IMD_Decile'], 0)

#group IMD, fewer levels will show the trends in a better way
bins = [0,3,6,8,10]
groups = ['low','med_low','med_high','high']
data['Casualty_IMD_Group'] = pd.cut(data['Casualty_IMD_Decile'],bins,labels = groups)
data['Casualty_IMD_Group'].value_counts()
data = data.drop(['Casualty_IMD_Decile'], axis = 1)

#group here
bins = [0,3,6,8,10]
groups = ['low','med_low','med_high','high']
data['Driver_IMD_Group'] = pd.cut(data['Driver_IMD_Decile'],bins,labels = groups)
data['Driver_IMD_Group'].value_counts()
data = data.drop(['Driver_IMD_Decile'], axis = 1)

#group vehicle age
bins = [0,5,10,15,99]
groups = ['0-5','5-10','10-15','+15']
data['Vehicle_Age_Group'] = pd.cut(data['Age_of_Vehicle'],bins,labels = groups)
data['Vehicle_Age_Group'].value_counts()
data = data.drop(['Age_of_Vehicle'], axis = 1)

#group here
bins = [0,1,2,3,99]
groups = ['1','2','3','+4']
data['Number_Vehicles_Group'] = pd.cut(data['Number_of_Vehicles'],bins,labels = groups)
data['Number_Vehicles_Group'].value_counts()
data = data.drop(['Number_of_Vehicles'], axis = 1)

#group here
bins = [0,1,2,3,4,1000]
groups = ['1','2','3','4','+4']
data['Number_Casualties_Group'] = pd.cut(data['Number_of_Casualties'],bins,labels = groups)
data['Number_Casualties_Group'].value_counts()
data = data.drop(['Number_of_Casualties'], axis = 1)

#group: no junction, roundabout, or other
data['Junction_Detail'].value_counts()
data['Junction_Group'] = data['Junction_Detail'].map(lambda x: 1 if x==0 else 2 if x==1 or x==2 else 3 )
data = data.drop(['Junction_Detail'], axis = 1)

#group whether pedestrians have some facilities or not
data['Pedestrian_Control'] = data['Pedestrian_Crossing-Human_Control'].map(lambda x: 1 if x==0 else 2)
data = data.drop(['Pedestrian_Crossing-Human_Control'], axis = 1)
data['Pedestrian_Control'].value_counts()

#same as before
data['Pedestrian_PhisFac'] = data['Pedestrian_Crossing-Physical_Facilities'].map(lambda x: 1 if x==0 else 2)
data['Pedestrian_PhisFac'].value_counts()
data = data.drop(['Pedestrian_Crossing-Physical_Facilities'], axis = 1)

#I have grouped this considering availability of light
data['Active_Light'] = data['Light_Conditions'].map(lambda x: 0 if x==5 or x==6 or x==7 else 1 )
data = data.drop(['Light_Conditions'], axis = 1)

#good or bad
data['Weather'] = data['Weather_Conditions'].map(lambda x: 1 if x==1 else 0 )
data = data.drop(['Weather_Conditions'], axis = 1)
data.Weather.value_counts()

#good or bad
data['Road_Surf_Cond'] = data['Road_Surface_Conditions'].map(lambda x: 1 if x==1 else 0 )
data = data.drop(['Road_Surface_Conditions'], axis = 1)

#good or bad
data['Special_Conds'] = data['Special_Conditions_at_Site'].map(lambda x: 1 if x==0 else 0 )
data = data.drop(['Special_Conditions_at_Site'], axis = 1)
data.Special_Conds.value_counts()

#good or bad
data['Carriageway_Haz'] = data['Carriageway_Hazards'].map(lambda x: 1 if x==0 else 0 )
data = data.drop(['Carriageway_Hazards'], axis = 1)

data.Urban_or_Rural_Area.value_counts()

data.weekdays.value_counts()

#divide the hour of the day into commute (7-9 and 15-17) / non-commute
data['Commute_hours'] = data['Time_hour'].map(lambda x: 1 if 7 <= x <= 9 or 15 <= x <= 17 else 0 )
#data = data.drop(['Carriageway_Hazards'], axis = 1)
data = data.drop(['Time_hour'], axis = 1)

#when there is no junction it is NaN; in the road number it is 0, so I will fillna with 0
data['2nd_Road_Class'] = data['2nd_Road_Class'].fillna(0)

data.shape

data.isna().sum()
```

```
data = data.dropna()

data.shape

data.to_csv('data_to_model.csv')

(data[data.Class == 0].count()[0]/data.shape[0])*100
```
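Since the final cell above measures the share of the majority `Class`, a natural next step is a split that preserves that imbalance. This is only a sketch of one possible continuation, assuming `Class` is the modelling target and reusing the `randomState` defined at the top of the notebook:

```
from sklearn.model_selection import train_test_split

# Stratified split so train and test keep the Class imbalance measured above.
X = data.drop('Class', axis=1)
y = data['Class']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=randomState)
print(X_train.shape, X_test.shape)
```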
<a href="https://colab.research.google.com/github/NastyaTataurova/pix2pix_pytorch/blob/main/pix2pix.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# <center><h1> Image2Image. Generating images from other images

## Using the pix2pix architecture </center></h1>

```
import torch
import torch.nn as nn
import torchvision.transforms as tt
import numpy as np
import os
from PIL import Image
import cv2
from torch.utils.data import Dataset, DataLoader
from tqdm import tqdm
from IPython.display import clear_output
import matplotlib.pyplot as plt
from google.colab import files

device = 'cuda' if torch.cuda.is_available() else 'cpu'
device

!git clone https://github.com/NastyaTataurova/pix2pix_pytorch

os.chdir('pix2pix_pytorch/')

image_size = 256
# stats = (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)
transforms = tt.Compose([tt.ToPILImage(),
                         tt.Resize((256, 256)),
                         tt.ToTensor(),
                         # tt.Normalize(*stats)
                         ])
```

## Discriminator

```
class discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv0 = nn.Sequential(
            nn.Conv2d(3 * 2, 64, kernel_size=4, stride=2, padding=1, padding_mode="reflect"),
            nn.LeakyReLU(0.2)
        )
        self.conv1 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=False, padding_mode="reflect"),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1, bias=False, padding_mode="reflect"),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2)
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(256, 512, kernel_size=4, stride=1, padding=1, bias=False, padding_mode="reflect"),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2)
        )
        self.conv4 = nn.Sequential(
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1, padding_mode='reflect')
        )

    def forward(self, x, y):
        x = torch.cat([x, y], dim=1)
        c0 = self.conv0(x)
        c1 = self.conv1(c0)
        c2 = self.conv2(c1)
        c3 = self.conv3(c2)
        c4 = self.conv4(c3)
        return c4
```

## Generator

```
class generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv0 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1, padding_mode="reflect"),
            nn.LeakyReLU(0.2)
        )
        self.conv1 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=False, padding_mode="reflect"),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1, bias=False, padding_mode="reflect"),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2)
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1, bias=False, padding_mode="reflect"),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2)
        )
        self.conv456 = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=4, stride=2, padding=1, bias=False, padding_mode="reflect"),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2)
        )
        self.bottleneck = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=4, stride=2, padding=1),
            nn.ReLU(0.2)
        )
        self.dec_conv0 = nn.Sequential(
            nn.ConvTranspose2d(512, 512, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(0.2),
            nn.Dropout(0.5)
        )
        self.dec_conv12 = nn.Sequential(
            nn.ConvTranspose2d(512 * 2, 512, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(0.2),
            nn.Dropout(0.5)
        )
        self.dec_conv3 = nn.Sequential(
            nn.ConvTranspose2d(512 * 2, 512, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(0.2)
        )
        self.dec_conv4 = nn.Sequential(
            nn.ConvTranspose2d(512 * 2, 256, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(0.2)
        )
self.dec_conv5 = nn.Sequential( nn.ConvTranspose2d(256 * 2, 128, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(128), nn.ReLU(0.2) ) self.dec_conv6 = nn.Sequential( nn.ConvTranspose2d(128 * 2, 64, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(64), nn.ReLU(0.2) ) self.dec_conv7 = nn.Sequential( nn.ConvTranspose2d(64 * 2, 3, kernel_size=4, stride=2, padding=1, bias=False), nn.Sigmoid() # nn.Tanh() ) def forward(self, x): # encoder e0 = self.conv0(x) # 128 e1 = self.conv1(e0) # 64 e2 = self.conv2(e1) # 32 e3 = self.conv3(e2) # 16 e4 = self.conv456(e3) # 8 e5 = self.conv456(e4) # 4 e6 = self.conv456(e5) # 2 # bottleneck b = self.bottleneck(e6) # 1 # decoder d0 = self.dec_conv0(b) # 2 d1 = self.dec_conv12(torch.cat((d0, e6), dim=1)) # 4 d2 = self.dec_conv12(torch.cat((d1, e5), dim=1)) # 8 d3 = self.dec_conv3(torch.cat((d2, e4), dim=1)) # 16 d4 = self.dec_conv4(torch.cat((d3, e3), dim=1)) # 32 d5 = self.dec_conv5(torch.cat((d4, e2), dim=1)) # 64 d6 = self.dec_conv6(torch.cat((d5, e1), dim=1)) # 128 d7 = self.dec_conv7(torch.cat((d6, e0), dim=1)) # 256 return d7 ``` ## Train ``` def save_model_optimazer(model, optimizer, filename): model_opt_dict = {'model': model.state_dict(), 'optimizer': optimizer.state_dict()} torch.save(model_opt_dict, filename) def load_model_optimazer(filename, model, optimizer, lr, device): model_opt_dict = torch.load(filename, map_location=device) model.load_state_dict(model_opt_dict['model']) optimizer.load_state_dict(model_opt_dict['optimizer']) for param in optimizer.param_groups: param['lr'] = lr def train(discrim, gener, loader, opt_discrim, opt_gener, loss_l1, loss_bce, num_epoch): history = [] for epoch in tqdm(range(num_epoch)): for x, y in loader: x = x.to(device) y = y.to(device) # Train Discriminator y_gener = gener(x) discrim_real_image = discrim(x, y) discrim_real_image_loss = loss_bce(discrim_real_image, torch.ones_like(discrim_real_image)) discrim_gener_image = discrim(x, y_gener.detach()) discrim_gener_image_loss = loss_bce(discrim_gener_image, torch.zeros_like(discrim_gener_image)) discrim_loss = (discrim_real_image_loss + discrim_gener_image_loss) / 2 opt_discrim.zero_grad() discrim_loss.backward() opt_discrim.step() # Train Generator discrim_gener_image = discrim(x, y_gener) G_fake_loss = loss_bce(discrim_gener_image, torch.ones_like(discrim_gener_image)) L1 = loss_l1(y_gener, y) * lambd G_loss = G_fake_loss + L1 opt_gener.zero_grad() G_loss.backward() opt_gener.step() # clear_output(wait=True) history.append((discrim_real_image_loss, discrim_gener_image_loss, discrim_loss)) print(f'\n{epoch+1}/{num_epoch} epoch: loss={discrim_loss}, generated image loss={discrim_gener_image_loss}') plt.subplot(1, 3, 1) plt.imshow(x[0].permute(1, 2, 0).cpu().detach().numpy(), vmin=0, vmax=255) plt.title('Real') plt.axis('off') plt.subplot(1, 3, 2) plt.imshow(y[0].permute(1, 2, 0).cpu().detach().numpy(), vmin=0, vmax=255) plt.title('Target') plt.axis('off') image_fake = np.asarray(y_gener[0].permute(1, 2, 0).cpu().detach().numpy(), dtype=np.float32) plt.subplot(1, 3, 3) plt.imshow(image_fake, vmin=0, vmax=255) plt.title('Fake') plt.axis('off') return history ``` ## Dataset 1 -- Maps ``` class GetDatasetMaps(Dataset): def __init__(self, file_paths, transform): self.file_paths = file_paths self.files_names = os.listdir(file_paths) self.transform = transform def __len__(self): return len(self.files_names) def __getitem__(self, idx): file_name = self.files_names[idx] file_path = os.path.join(self.file_paths, file_name) image = 
## Dataset 1 -- Maps

```
class GetDatasetMaps(Dataset):
    def __init__(self, file_paths, transform):
        self.file_paths = file_paths
        self.files_names = os.listdir(file_paths)
        self.transform = transform

    def __len__(self):
        return len(self.files_names)

    def __getitem__(self, idx):
        file_name = self.files_names[idx]
        file_path = os.path.join(self.file_paths, file_name)
        image = Image.open(file_path)
        image = np.array(image)
        input_image = image[:, :600, :]
        target_image = image[:, 600:, :]
        input_image = self.transform(input_image)
        target_image = self.transform(target_image)
        return input_image, target_image


# loading maps dataset
!bash ./bin/get_maps_datasets.sh

# path to images
train_path_maps = './pix2pix_pytorch/datasets/maps/train'
val_path_maps = './pix2pix_pytorch/datasets/maps/val'

# creating a train dataloader
train_dataset_maps = GetDatasetMaps(file_paths=train_path_maps, transform=transforms)
train_loader_maps = DataLoader(train_dataset_maps, batch_size=1)

# creating a val dataloader
val_dataset_maps = GetDatasetMaps(file_paths=val_path_maps, transform=transforms)
val_loader_maps = DataLoader(val_dataset_maps, batch_size=5)
```

### Train model 1 (Maps)

```
!bash ./bin/load_model_maps.sh

lr = 2e-4
lambd = 100

discrim_maps = discriminator().to(device)
optimizer_discrim_maps = torch.optim.Adam(discrim_maps.parameters(), lr=lr, betas=(0.5, 0.9))
loss_bce_maps = nn.BCEWithLogitsLoss()

gener_maps = generator().to(device)
optimizer_gener_maps = torch.optim.Adam(gener_maps.parameters(), lr=lr, betas=(0.5, 0.9))
loss_l1_maps = nn.L1Loss()

print('Load model and optimizer?')
if input() == 'y':  # 'y' - yes, else - no
    # loading the weights of the trained model
    load_model_optimazer('discrim_maps.pth', discrim_maps, optimizer_discrim_maps, lr, device)
    load_model_optimazer('gener_maps.pth', gener_maps, optimizer_gener_maps, lr, device)
else:
    # train model
    num_epoch = 100
    train(discrim_maps, gener_maps, train_loader_maps, optimizer_discrim_maps, optimizer_gener_maps,
          loss_l1_maps, loss_bce_maps, num_epoch)
```

### Results (Maps)

```
X_val = next(iter(val_loader_maps))
y_val = gener_maps(X_val[0].to(device))

plt.figure(figsize=(7, 9))
for i in range(5):
    plt.subplot(5, 3, 3*i+1)
    plt.imshow(X_val[0][i].permute(1, 2, 0), vmin=0, vmax=255)
    plt.title('Real')
    plt.axis('off')

    plt.subplot(5, 3, 3*i+2)
    plt.imshow(X_val[1][i].permute(1, 2, 0), vmin=0, vmax=255)
    plt.title('Target')
    plt.axis('off')

    gener_image = np.asarray(y_val[i].permute(1, 2, 0).cpu().detach().numpy(), dtype=np.float32)
    plt.subplot(5, 3, 3*i+3)
    plt.imshow(gener_image, vmin=0, vmax=255)
    plt.title('Fake')
    plt.axis('off')
```

### Saving weights (Maps)

```
save_model_optimazer(discrim_maps, optimizer_discrim_maps, 'discrim_maps.pth')
save_model_optimazer(gener_maps, optimizer_gener_maps, 'gener_maps.pth')
```

### Upload your image (Maps)

```
# uploading an image
print('1-upload your file, 2-take the default file')
if input() == '1':
    file = files.upload()
    file_path = list(file.keys())[0]
else:
    file_path = './crs/images/map.jfif'

# image generation
image = Image.open(file_path)
image = transforms(np.array(image))
gener_image = gener_maps(image.unsqueeze(0).to(device))[0]
gener_image = np.asarray(gener_image.permute(1, 2, 0).cpu().detach().numpy(), dtype=np.float32)

# result
plt.subplot(1, 2, 1)
plt.imshow(image.permute(1, 2, 0), vmin=0, vmax=255)
plt.title('Real')
plt.axis('off')

plt.subplot(1, 2, 2)
plt.imshow(gener_image, vmin=0, vmax=255)
plt.title('Fake')
plt.axis('off')
```
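The results above are judged visually. A simple numeric complement, added here as a sketch rather than taken from the original notebook, is the average L1 distance between generated and target images over the validation loader; it reuses only the `gener_maps` and `val_loader_maps` objects defined above:

```
# Average L1 error of the maps generator on the validation set.
gener_maps.eval()
total_l1, n_batches = 0.0, 0
with torch.no_grad():
    for x, y in val_loader_maps:
        pred = gener_maps(x.to(device))
        total_l1 += nn.functional.l1_loss(pred, y.to(device)).item()
        n_batches += 1
print(f'mean validation L1: {total_l1 / n_batches:.4f}')
```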
## Dataset 2 -- Flowers

```
class GetDatasetFlowers(Dataset):
    def __init__(self, file_paths_jpg, file_paths_trimaps, transform):
        self.file_paths_jpg = file_paths_jpg
        self.file_paths_trimaps = file_paths_trimaps
        self.files_names_jpg = os.listdir(file_paths_jpg)
        self.files_names_trimaps = os.listdir(file_paths_trimaps)
        self.transform = transform

    def __len__(self):
        return len(self.files_names_trimaps)

    def __getitem__(self, idx):
        file_name_trimaps = self.files_names_trimaps[idx]
        file_name_jpg = file_name_trimaps[:11] + 'jpg'
        file_path_jpg = os.path.join(self.file_paths_jpg, file_name_jpg)
        image_target = Image.open(file_path_jpg)
        image_target = np.array(image_target)
        file_path_trimaps = os.path.join(self.file_paths_trimaps, file_name_trimaps)
        image_trimaps = cv2.imread(file_path_trimaps)
        image_trimaps = np.array(image_trimaps)
        image_target = self.transform(image_target)
        image_trimaps = self.transform(image_trimaps)
        return image_trimaps, image_target


# loading flowers dataset
!bash ./bin/get_flowers_datasets.sh

# path to train images
train_path_flowers_jpg = './pix2pix_pytorch/datasets/flowers/train/jpg/jpg'
train_path_flowers_trimaps = './pix2pix_pytorch/datasets/flowers/train/trimaps/trimaps'

# path to val images
val_path_flowers_jpg = './pix2pix_pytorch/datasets/flowers/test/jpg/jpg'
val_path_flowers_trimaps = './pix2pix_pytorch/datasets/flowers/test/trimaps/trimaps'

# creating a train dataloader
train_dataset_flowers = GetDatasetFlowers(file_paths_jpg=train_path_flowers_jpg,
                                          file_paths_trimaps=train_path_flowers_trimaps,
                                          transform=transforms)
train_loader_flowers = DataLoader(train_dataset_flowers, batch_size=1)

# creating a val dataloader
val_dataset_flowers = GetDatasetFlowers(file_paths_jpg=val_path_flowers_jpg,
                                        file_paths_trimaps=val_path_flowers_trimaps,
                                        transform=transforms)
val_loader_flowers = DataLoader(val_dataset_flowers, batch_size=5)
```

### Train model 2 (Flowers)

```
!bash ./bin/load_model_flowes.sh

lr = 2e-4
lambd = 100

discrim_flowers = discriminator().to(device)
optimizer_discrim_flowers = torch.optim.Adam(discrim_flowers.parameters(), lr=lr, betas=(0.5, 0.9))
loss_bce_flowers = nn.BCEWithLogitsLoss()

gener_flowers = generator().to(device)
optimizer_gener_flowers = torch.optim.Adam(gener_flowers.parameters(), lr=lr, betas=(0.5, 0.9))
loss_l1_flowers = nn.L1Loss()

print('Load model and optimizer?')
if input() == 'y':  # 'y' - yes, else - no (train)
    # loading the weights of the trained model
    load_model_optimazer('discrim_flowers.pth', discrim_flowers, optimizer_discrim_flowers, lr, device)
    load_model_optimazer('gener_flowers.pth', gener_flowers, optimizer_gener_flowers, lr, device)
else:
    # train model
    num_epoch = 400
    train(discrim_flowers, gener_flowers, train_loader_flowers, optimizer_discrim_flowers, optimizer_gener_flowers,
          loss_l1_flowers, loss_bce_flowers, num_epoch)
```

### Results (Flowers)

```
X_val = next(iter(val_loader_flowers))
y_val = gener_flowers(X_val[0].to(device))

plt.figure(figsize=(7, 9))
for i in range(5):
    plt.subplot(5, 3, 3*i+1)
    plt.imshow(X_val[0][i].permute(1, 2, 0), vmin=0, vmax=255)
    plt.title('Real')
    plt.axis('off')

    plt.subplot(5, 3, 3*i+2)
    plt.imshow(X_val[1][i].permute(1, 2, 0), vmin=0, vmax=255)
    plt.title('Target')
    plt.axis('off')

    gener_image = np.asarray(y_val[i].permute(1, 2, 0).cpu().detach().numpy(), dtype=np.float32)
    plt.subplot(5, 3, 3*i+3)
    plt.imshow(gener_image, vmin=0, vmax=255)
    plt.title('Fake')
    plt.axis('off')
```

### Saving weights (Flowers)

```
save_model_optimazer(discrim_flowers, optimizer_discrim_flowers, 'discrim_flowers.pth')
save_model_optimazer(gener_flowers, optimizer_gener_flowers, 'gener_flowers.pth')
```
### Upload your image (Flowers)

```
# uploading an image
print('1-upload your file, 2-take the default file')
if input() == '1':
    file = files.upload()
    file_path = list(file.keys())[0]
else:
    file_path = './crs/images/flower.png'

# image generation
# image = Image.open('/content/pix2pix_pytorch/trimaps.png')
image = cv2.imread(file_path)
image = transforms(np.array(image))
gener_image = gener_flowers(image.unsqueeze(0).to(device))[0]
gener_image = np.asarray(gener_image.permute(1, 2, 0).cpu().detach().numpy(), dtype=np.float32)

# result
plt.subplot(1, 2, 1)
plt.imshow(image.permute(1, 2, 0), vmin=0, vmax=255)
plt.title('Real')
plt.axis('off')

plt.subplot(1, 2, 2)
plt.imshow(gener_image, vmin=0, vmax=255)
plt.title('Fake')
plt.axis('off')
```
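The generator ends in a Sigmoid because the inputs are left in the [0, 1] range produced by `ToTensor`; the commented-out `tt.Normalize(*stats)` and `nn.Tanh()` lines show the other common convention. If normalization is enabled, the output head has to change with it. A hedged sketch of that alternative pairing, not used in the notebook as written:

```
# Alternative convention: normalize inputs to [-1, 1] and end the generator with Tanh.
stats = (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)
transforms_tanh = tt.Compose([tt.ToPILImage(),
                              tt.Resize((256, 256)),
                              tt.ToTensor(),
                              tt.Normalize(*stats)])  # maps [0, 1] -> [-1, 1]
# In generator.__init__, the final block would then use nn.Tanh() instead of nn.Sigmoid(),
# and generated images need rescaling back with (img + 1) / 2 before plotting.
```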
## 2: Find the different genders ### Instructions Loop through the rows in legislators, and extract the gender column (fourth column) Append the genders to genders_list. Then turn genders_list into a set, and assign it to unique_genders Finally, convert unique_genders back into a list, and assign it to unique_genders_list. ``` # We can use the set() function to convert lists into sets. # A set is a data type, just like a list, but it only contains each value once. car_makers = ["Ford", "Volvo", "Audi", "Ford", "Volvo"] # Volvo and ford are duplicates print(car_makers) # Converting to a set unique_car_makers = set(car_makers) print(unique_car_makers) # We can't index sets, so we need to convert back into a list first. unique_cars_list = list(unique_car_makers) print(unique_cars_list[0]) genders_list = [] unique_genders = set() unique_genders_list = [] from legislators import legislators ``` ### Answer ``` genders_list = [] for leg in legislators: genders_list.append(leg[3]) unique_genders = set(genders_list) unique_gender_list = list(unique_genders) print(unique_gender_list) ``` ## 3: Replacing genders ### Instructions Loop through the rows in legislators and replace any gender values of "" with "M". ``` for leg in legislators: if leg[3] == '': leg[3] = 'M' ``` ## 4: Parsing birth years ### Instructions Loop through the rows in legislators Inside the loop, get the birthday column from the row, and split the birthday. After splitting the birthday, get the birth year, and append it to birth_years At the end, birth_years will contain the birth years of all the congresspeople in the data. ``` birth_years = [] for row in legislators: birth_year = row[2].split("-")[0] birth_years.append(birth_year) print(birth_years) ``` ## 6: Practice with enumerate ### Instructions Use a for loop to enumerate the ships list. In the body of the loop, print the ship at the current index, then the car at the current index. Make sure you have two separate print statements. ``` dogs = ["labrador", "poodle", "collie"] cats = ["siamese", "persian", "somali"] # Enumerate the dogs list, and print the values. for i, dog in enumerate(dogs): # Will print the dog at the current loop iteration. print(dog) # This will equal dog. Prints the dog at index i. print(dogs[i]) # Print the cat at index i. print(cats[i]) ships = ["Andrea Doria", "Titanic", "Lusitania"] cars = ["Ford Edsel", "Ford Pinto", "Yugo"] for i, e in enumerate(ships): print(e) print(cars[i]) ``` ## 7: Create a birth year column ### Instructions Loop through the rows in legislators list, and append the corresponding value in birth_years to each row. ``` lolists = [["apple", "monkey"], ["orange", "dog"], ["banana", "cat"]] trees = ["cedar", "maple", "fig"] for i, row in enumerate(lolists): row.append(trees[i]) # Our list now has a new column containing the values from trees. print(lolists) # Legislators and birth_years have both been loaded in. for i, e in enumerate(legislators): e.append(birth_years[i]) ``` ## 9: Practice with list comprehensions Double all of the prices in apple_price, and assign the resulting list to apple_price_doubled. Subtract 100 from all of the prices in apple_price, and assign the resulting list to apple_price_lowered. 
``` # Define a list of lists data = [["tiger", "lion"], ["duck", "goose"], ["cardinal", "bluebird"]] # Extract the first column from the list first_column = [row[0] for row in data] apple_price = [100, 101, 102, 105] apple_price_doubled = [2*p for p in apple_price] apple_price_lowered = [p-100 for p in apple_price] print(apple_price_doubled, apple_price_lowered) ```
``` #to run the script, you need to start pathway tools form the command line # using the -lisp -python options. Example (from the pathway tools github repository) import os # os.system('nohup /opt/pathway-tools/pathway-tools -lisp -python &') os.system('nohup /opt/pathway-tools/pathway-tools -lisp -python-local-only &') # added cybersecurity os.system('nohup /shared/D1/opt/pathway-tools/pathway-tools -lisp -python-local-only &') # added cybersecurity # modify sys.path to recognize local pythoncyc import os import sys module_path = os.path.abspath(os.path.join('./PythonCyc/')) sys.path = [module_path] sys.path # remove pyc files !rm ./PythonCyc/pythoncyc/*pyc import pythoncyc all_orgids = pythoncyc.all_orgids() print all_orgids meta = pythoncyc.select_organism(u'|META|') ecoli = pythoncyc.select_organism(u'|ECOLI|') proteins = [x.frameid for x in ecoli.proteins.instances][0:2] complexes = ecoli.all_protein_complexes()[0:2] """ p An instance of the class |Proteins|. coefficients? Keyword, Optional If true, then the second return value of the function will be a list of monomer coefficients. Defaults to true. unmodify? Keyword, Optional If true, obtain the monomers of the unmodified form of p. """ lst = [] for protein in complexes: lst.append(ecoli.monomers_of_protein(protein = protein, coefficients = True, unmodify = True)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.monomers_of_protein(protein = protein)) print(lst[-10:]) """ p An instance of the class |Proteins|. exclude-small-molecules? Keyword, Optional If nil, then small molecule components are also returned. Default value is true. """ lst = [] for protein in complexes: lst.append(ecoli.base_components_of_protein(protein = protein, exclude_small_molecules = True)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.base_components_of_protein(protein = protein)) print(lst[-10:]) """ protein An instance of the class |Proteins|. exclude-self? Optional If true, then protein will not be included in the return value. """ lst = [] for protein in complexes: lst.append(ecoli.containers_of(protein = protein, exclude_self = True)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.containers_of(protein = protein)) print(lst[-10:]) """ protein An instance of the class |Proteins|. exclude-self? Optional If true, then protein will not be included in the return value. """ lst = [] for protein in complexes: lst.append(ecoli.protein_or_rna_containers_of(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.protein_or_rna_containers_of(protein = protein)) print(lst[-10:]) """ protein An instance of the class |Proteins|. exclude-self? Optional If true, then protein will not be included in the return value. 
""" lst = [] for protein in complexes: lst.append(ecoli.homomultimeric_containers_of(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.homomultimeric_containers_of(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.polypeptide_or_homomultimer_p(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.polypeptide_or_homomultimer_p(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.unmodified_form(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.unmodified_form(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.unmodified_or_unbound_form(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.unmodified_or_unbound_form(protein = protein)) print(lst[-10:]) """ prots A list of instances of the class |Proteins|. debind? Keyword, Optional When non-nil, the proteins are further simplified by obtaining the unbound form of the protein, if it is bound to a small molecule. """ lst = [] for protein in complexes: lst.append(ecoli.reduce_modified_proteins(prots = [protein], debind = True)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.reduce_modified_proteins(prots = [protein])) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.all_direct_forms_of_protein(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.all_direct_forms_of_protein(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.all_forms_of_protein(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.all_forms_of_protein(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.modified_forms(protein = protein, all_variants = True, exclude_self= True)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.modified_forms(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.modified_and_unmodified_forms(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.modified_and_unmodified_forms(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.modified_containers(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.modified_containers(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.top_containers(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.top_containers(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.small_molecule_cplxes_of_prot(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.small_molecule_cplxes_of_prot(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.genes_of_protein(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.genes_of_protein(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.genes_of_proteins(prots = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.genes_of_proteins(prots = protein)) print(lst[-10:]) """ protein An instance of the class |Proteins|. kb Keyword, Optional The KB object of the KB in which to find the associated reactions. 
Defaults to the current PGDB. include-specific-forms? Keyword, Optional When true, specific forms of associated generic reactions are also returned. Default value is true. """ lst = [] for protein in complexes: lst.append(ecoli.reactions_of_enzyme(protein = protein, include_subreactions = True)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.reactions_of_enzyme(protein = protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.species_of_protein(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.species_of_protein(protein = protein)) print(lst[-10:]) """ type Optional Can take on one of the following values to select more precisely what is meant by an “enzyme”: :any Any protein that catalyzes a reaction is considered an enzyme. :chemical-change If the reactants and products of the catalyzed reactin differ, and not just by their cellular location, then the protein is considered an enzyme. :small-molecule If the reactants of the catalyzed reaction differ and are small molecules, then the protein is considered an enzyme. :transport If the protein catalyzes a transport reaction. :non-transport If the protein only catalyzes non-transport reactions. """ lst = [] for protein in complexes: lst.append(ecoli.enzyme_p(protein = protein, type_of_reactions = 'any')) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.enzyme_p(protein = protein, type_of_reactions = 'non-transport-non-pathway')) print(lst[-10:]) try: lst = [] for protein in complexes: lst.append(ecoli.leader_peptide_p(protein = protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.leader_peptide_p(protein = protein)) print(lst[-10:]) except: pass lst = [] for protein in complexes: lst.append(ecoli.protein_p(protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.protein_p(protein)) print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.complex_p(protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.complex_p(protein)) print(lst[-10:]) """ p An instance of the class |Proteins|. check-protein-components? Optional If true, check all components of this protein for catalyzed reactions. Defaults to true. check-protein-containers? Optional If true, check the containers and modified forms of the protein for catalyzed reactions. """ lst = [] for protein in complexes: lst.append(ecoli.reactions_of_protein(protein = protein, check_protein_containers = True)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.reactions_of_protein(protein = protein, check_protein_components = True)) print(lst[-10:]) """ rxn An instance of the class |Reactions|. compartments A list of cellular compartments, as defined in the Cellular Components Ontology. See frame 'CCO. default-ok? Keyword, Optional If true, then we return true if the reaction has no associated compartment information, or one of its associated locations is a super-class of one of the members of the compartments argument. pwy Keyword, Optional If supplied, the search for associated enzymes of the argument rxn is limited to the given child of |Pathways|. loose? Keyword, Optional If true, then the compartments 'CCO‑CYTOPLASM and 'CCO‑CYTOSOL are treated as being the same compartment. 
""" lst = [] for protein in complexes: lst.append(ecoli.protein_in_compartment_p(rxn = protein, compartments = 'CCO-CYTOSOL')) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.protein_in_compartment_p(rxn = protein, compartments = ['CCO-CYTOSOL', 'CCO-MEMBRANE'])) print(lst[-10:]) """ membranes Keyword Either :all or a list of instances of the class. Defaults to :all 'CCO‑MEMBRANE. method Either :location or :reaction‑compartments. :location will check the 'locations slot, while :reaction‑compartments will examine the compartments of reaction substrates. Default value is :location. """ lst = ecoli.all_transporters_across(membranes = 'all') print(lst[-10:]) lst = [] for protein in complexes: lst.append(ecoli.autocatalytic_reactions_of_enzyme(protein)) print(lst[-10:]) lst = [] for protein in proteins: lst.append(ecoli.autocatalytic_reactions_of_enzyme(protein)) print(lst[-10:]) ```
# Chapter 7: Neural Networks - Fundamentals

```
import warnings
warnings.filterwarnings('ignore')

%matplotlib inline
%pylab inline
import matplotlib.pylab as plt
import numpy as np

colors = 'bwr'  # ['b','y','r']
CMAP = colors   # plt.cm.rainbow

import sklearn
print(sklearn.__version__)

import tensorflow as tf
print(tf.__version__)

import keras
print(keras.__version__)

import pandas as pd
print(pd.__version__)
```

## Iris with Neural Networks

```
from sklearn.datasets import load_iris
iris = load_iris()
print(iris.DESCR)

iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
pd.plotting.scatter_matrix(iris_df, c=iris.target, cmap=CMAP, edgecolor='black', figsize=(20, 20))
# plt.savefig('ML_0701.png', bbox_inches='tight')
```

## The Artificial Neuron

```
w0 = 3
w1 = -4
w2 = 2

def neuron_no_activation(x1, x2):
    sum = w0 + x1 * w1 + x2 * w2
    return sum

iris.data[0]
neuron_no_activation(5.1, 3.5)
```

### Activation Functions

```
def centerAxis(uses_negative=False):
    # http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot
    ax = plt.gca()
    ax.spines['left'].set_position('center')
    if uses_negative:
        ax.spines['bottom'].set_position('center')
    ax.spines['right'].set_color('none')
    ax.spines['top'].set_color('none')
    ax.xaxis.set_ticks_position('bottom')
    ax.yaxis.set_ticks_position('left')
```

#### Step function: an abrupt transition between 0 and 1 that is not continuously differentiable

```
def np_step(X):
    return 0.5 * (np.sign(X) + 1)

x = np.arange(-10, 10, 0.01)
y = np_step(x)
centerAxis()
plt.plot(x, y, lw=3)
```

#### Sigmoid function: a smooth transition between 0 and 1

```
def np_sigmoid(X):
    return 1 / (1 + np.exp(X * -1))

x = np.arange(-10, 10, 0.01)
y = np_sigmoid(x)
centerAxis()
plt.plot(x, y, lw=3)
```

#### Hyperbolic tangent function: a smooth transition between -1 and 1

```
x = np.arange(-10, 10, 0.01)
y = np.tanh(x)
centerAxis()
plt.plot(x, y, lw=3)
```

#### ReLU: cheap to compute, sets the entire negative range to 0

```
def np_relu(x):
    return np.maximum(0, x)

x = np.arange(-10, 10, 0.01)
y = np_relu(x)
centerAxis()
plt.plot(x, y, lw=3)

# https://docs.python.org/3/library/math.html
import math as math

def sigmoid(x):
    return 1 / (1 + math.exp(x * -1))

w0 = 3
w1 = -4
w2 = 2

def neuron(x1, x2):
    sum = w0 + x1 * w1 + x2 * w2
    return sigmoid(sum)

neuron(5.1, 3.5)

# Version that takes as many values as you like
weights_with_bias = np.array([3, -4, 2])

def np_neuron(X):
    inputs_with_1_for_bias = np.concatenate((np.array([1]), X))
    return np_sigmoid(np.sum(inputs_with_1_for_bias * weights_with_bias))

np_neuron(np.array([5.1, 3.5]))
```

## Our First Neural Network with Keras

```
from keras.layers import Input
inputs = Input(shape=(4, ))

from keras.layers import Dense
fc = Dense(3)(inputs)

from keras.models import Model
model = Model(inputs=inputs, outputs=fc)
model.summary()

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.predict(np.array([[5.1, 3.5, 1.4, 0.2]]))

inputs = Input(shape=(4, ))
fc = Dense(3)(inputs)
predictions = Dense(3, activation='softmax')(fc)

model = Model(inputs=inputs, outputs=predictions)
model.summary()

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.predict(np.array([[5.1, 3.5, 1.4, 0.2]]))
```

# Training

```
X = np.array(iris.data)
y = np.array(iris.target)
X.shape, y.shape
y[100]

from keras.utils.np_utils import to_categorical
num_categories = 3
y = to_categorical(y, num_categories)
y[100]

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42, stratify=y)
X_train.shape, X_test.shape, y_train.shape, y_test.shape

# !rm -r tf_log

# https://keras.io/callbacks/#tensorboard
# tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')

# To start tensorboard
# tensorboard --logdir=/mnt/c/Users/olive/Development/ml/tf_log
# open http://localhost:6006

# %time model.fit(X_train, y_train, epochs=500, validation_split=0.3, callbacks=[tb_callback])
%time model.fit(X_train, y_train, epochs=500, validation_split=0.3)
```

# Evaluation

```
model.predict(np.array([[5.1, 3.5, 1.4, 0.2]]))
X[0], y[0]

train_loss, train_accuracy = model.evaluate(X_train, y_train)
train_loss, train_accuracy

test_loss, test_accuracy = model.evaluate(X_test, y_test)
test_loss, test_accuracy
```
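Accuracy alone hides which classes get confused. A small addition (not part of the original chapter) that turns the softmax outputs into class labels, so they can be compared directly or passed to a confusion matrix:

```
# Convert softmax probabilities into predicted class indices with argmax.
y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(y_test, axis=1)   # undo the one-hot encoding
print(np.mean(y_pred == y_true))     # should agree with test_accuracy above
```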
<h2 align=center> Fine-Tune BERT for Text Classification with TensorFlow</h2> <div align="center"> <img width="512px" src='https://drive.google.com/uc?id=1fnJTeJs5HUpz7nix-F9E6EZdgUflqyEu' /> <p style="text-align: center;color:gray">Figure 1: BERT Classification Model</p> </div> In this project, you will learn how to fine-tune a BERT model for text classification using TensorFlow and TF-Hub. The pretrained BERT model used in this project is [available](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2) on [TensorFlow Hub](https://tfhub.dev/). ### Learning Objectives By the time you complete this project, you will be able to: - Build TensorFlow Input Pipelines for Text Data with the [`tf.data`](https://www.tensorflow.org/api_docs/python/tf/data) API - Tokenize and Preprocess Text for BERT - Fine-tune BERT for text classification with TensorFlow 2 and [TF Hub](https://tfhub.dev) ### Prerequisites In order to be successful with this project, it is assumed you are: - Competent in the Python programming language - Familiar with deep learning for Natural Language Processing (NLP) - Familiar with TensorFlow, and its Keras API ### Contents This project/notebook consists of several Tasks. - **[Task 1]()**: Introduction to the Project. - **[Task 2]()**: Setup your TensorFlow and Colab Runtime - **[Task 3]()**: Download and Import the Quora Insincere Questions Dataset - **[Task 4]()**: Create tf.data.Datasets for Training and Evaluation - **[Task 5]()**: Download a Pre-trained BERT Model from TensorFlow Hub - **[Task 6]()**: Tokenize and Preprocess Text for BERT - **[Task 7]()**: Wrap a Python Function into a TensorFlow op for Eager Execution - **[Task 8]()**: Create a TensorFlow Input Pipeline with `tf.data` - **[Task 9]()**: Add a Classification Head to the BERT `hub.KerasLayer` - **[Task 10]()**: Fine-Tune BERT for Text Classification - **[Task 11]()**: Evaluate the BERT Text Classification Model ## Task 2: Setup your TensorFlow and Colab Runtime. You will only be able to use the Colab Notebook after you save it to your Google Drive folder. Click on the File menu and select “Save a copy in Drive… ![Copy to Drive](https://drive.google.com/uc?id=1CH3eDmuJL8WR0AP1r3UE6sOPuqq8_Wl7) ### Check GPU Availability Check if your Colab notebook is configured to use Graphical Processing Units (GPUs). If zero GPUs are available, check if the Colab notebook is configured to use GPUs (Menu > Runtime > Change Runtime Type). ![Hardware Accelerator Settings](https://drive.google.com/uc?id=1qrihuuMtvzXJHiRV8M7RngbxFYipXKQx) ``` !nvidia-smi # conda install -c anaconda tensorflow-gpu ``` ### Install TensorFlow and TensorFlow Model Garden ``` import tensorflow as tf print(tf.version.VERSION) #!pip install -q tensorflow==2.3.0 # !git clone --depth 1 -b v2.3.0 https://github.com/tensorflow/models.git # # install requirements to use tensorflow/models repository # !pip install -Uqr models/official/requirements.txt # # you may have to restart the runtime afterwards ``` ## Restart the Runtime **Note** After installing the required Python packages, you'll need to restart the Colab Runtime Engine (Menu > Runtime > Restart runtime...) 
![Restart of the Colab Runtime Engine](https://drive.google.com/uc?id=1xnjAy2sxIymKhydkqb0RKzgVK9rh3teH) ## Task 3: Download and Import the Quora Insincere Questions Dataset ``` # pip install tensorflow_datasets # pip install sentencepiece # pip install gin-config # pip install tensorflow-addons import numpy as np import tensorflow as tf import tensorflow_hub as hub import sys sys.path.append('models') from official.nlp.data import classifier_data_lib from official.nlp.bert import tokenization from official.nlp import optimization print("TF Version: ", tf.__version__) print("Eager mode: ", tf.executing_eagerly()) print("Hub version: ", hub.__version__) print("GPU is", "available" if tf.config.experimental.list_physical_devices("GPU") else "NOT AVAILABLE") ``` A downloadable copy of the [Quora Insincere Questions Classification data](https://www.kaggle.com/c/quora-insincere-questions-classification/data) can be found [https://archive.org/download/fine-tune-bert-tensorflow-train.csv/train.csv.zip](https://archive.org/download/fine-tune-bert-tensorflow-train.csv/train.csv.zip). Decompress and read the data into a pandas DataFrame. ``` import numpy as np import pandas as pd from sklearn.model_selection import train_test_split df = pd.read_csv('https://archive.org/download/fine-tune-bert-tensorflow-train.csv/train.csv.zip', compression='zip', low_memory=False) df.shape df.tail(20) df.target.plot(kind='hist', title='Target distribution'); ``` ## Task 4: Create tf.data.Datasets for Training and Evaluation ``` train_df, remaining = train_test_split(df, random_state=42, train_size=0.0075, stratify=df.target.values) valid_df, _ = train_test_split(remaining, random_state=42, train_size=0.00075, stratify=remaining.target.values) train_df.shape, valid_df.shape with tf.device('/cpu:0'): train_data = tf.data.Dataset.from_tensor_slices((train_df.question_text.values, train_df.target.values)) valid_data = tf.data.Dataset.from_tensor_slices((valid_df.question_text.values, valid_df.target.values)) for text, label in train_data.take(1): print(text) print(label) ``` ## Task 5: Download a Pre-trained BERT Model from TensorFlow Hub ``` """ Each line of the dataset is composed of the review text and its label - Data preprocessing consists of transforming text to BERT input features: input_word_ids, input_mask, segment_ids - In the process, tokenizing the text is done with the provided BERT model tokenizer """ label_list = [0, 1] # Label categories max_seq_length = 128 # maximum length of (token) input sequences train_batch_size = 32 # Get BERT layer and tokenizer: # More details here: https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2 bert_layer = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2", trainable=True) vocab_file = bert_layer.resolved_object.vocab_file.asset_path.numpy() do_lower_case = bert_layer.resolved_object.do_lower_case.numpy() tokenizer = tokenization.FullTokenizer(vocab_file, do_lower_case) tokenizer.wordpiece_tokenizer.tokenize('hi, how are you doing?') tokenizer.convert_tokens_to_ids(tokenizer.wordpiece_tokenizer.tokenize('hi, how are you doing?')) ``` ## Task 6: Tokenize and Preprocess Text for BERT <div align="center"> <img width="512px" src='https://drive.google.com/uc?id=1-SpKFELnEvBMBqO7h3iypo8q9uUUo96P' /> <p style="text-align: center;color:gray">Figure 2: BERT Tokenizer</p> </div> We'll need to transform our data into a format BERT understands. This involves two steps. 
First, we create InputExamples using `classifier_data_lib`'s constructor `InputExample` provided in the BERT library. ``` # This provides a function to convert row to input features and label def to_feature(text, label, label_list=label_list, max_seq_length=max_seq_length, tokenizer=tokenizer): example = classifier_data_lib.InputExample(guid = None, text_a = text.numpy(), text_b = None, label = label.numpy()) feature = classifier_data_lib.convert_single_example(0, example, label_list, max_seq_length, tokenizer) return (feature.input_ids, feature.input_mask, feature.segment_ids, feature.label_id) ``` You want to use [`Dataset.map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map) to apply this function to each element of the dataset. [`Dataset.map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map) runs in graph mode. - Graph tensors do not have a value. - In graph mode you can only use TensorFlow Ops and functions. So you can't `.map` this function directly: You need to wrap it in a [`tf.py_function`](https://www.tensorflow.org/api_docs/python/tf/py_function). The [`tf.py_function`](https://www.tensorflow.org/api_docs/python/tf/py_function) will pass regular tensors (with a value and a `.numpy()` method to access it), to the wrapped python function. ## Task 7: Wrap a Python Function into a TensorFlow op for Eager Execution ``` def to_feature_map(text, label): input_ids, input_mask, segment_ids, label_id = tf.py_function(to_feature, inp=[text, label], Tout=[tf.int32, tf.int32, tf.int32, tf.int32]) # py_func doesn't set the shape of the returned tensors. input_ids.set_shape([max_seq_length]) input_mask.set_shape([max_seq_length]) segment_ids.set_shape([max_seq_length]) label_id.set_shape([]) x = { 'input_word_ids': input_ids, 'input_mask': input_mask, 'input_type_ids': segment_ids } return (x, label_id) ``` ## Task 8: Create a TensorFlow Input Pipeline with `tf.data` ``` with tf.device('/cpu:0'): # train train_data = (train_data.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) #.cache() .shuffle(1000) .batch(32, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) # valid valid_data = (valid_data.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) .batch(32, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) ``` The resulting `tf.data.Datasets` return `(features, labels)` pairs, as expected by [`keras.Model.fit`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit): ``` # data spec train_data.element_spec # data spec valid_data.element_spec ``` ## Task 9: Add a Classification Head to the BERT Layer <div align="center"> <img width="512px" src='https://drive.google.com/uc?id=1fnJTeJs5HUpz7nix-F9E6EZdgUflqyEu' /> <p style="text-align: center;color:gray">Figure 3: BERT Layer</p> </div> ``` # Building the model def create_model(): input_word_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name="input_word_ids") input_mask = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name="input_mask") input_type_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name="input_type_ids") pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, input_type_ids]) drop = tf.keras.layers.Dropout(0.4)(pooled_output) output = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(drop) model = tf.keras.Model( inputs={ 'input_word_ids': input_word_ids, 'input_mask': input_mask, 'input_type_ids': input_type_ids }, outputs=output) return model ``` 
## Task 10: Fine-Tune BERT for Text Classification ``` model = create_model() model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5), loss=tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.BinaryAccuracy()]) model.summary() tf.keras.utils.plot_model(model=model, show_shapes=True, dpi=76, ) # Train model epochs = 3 history = model.fit(train_data, validation_data=valid_data, epochs=epochs, verbose=1) ``` ## Task 11: Evaluate the BERT Text Classification Model ``` import matplotlib.pyplot as plt def plot_graphs(history, metric): plt.plot(history.history[metric]) plt.plot(history.history['val_'+metric], '') plt.xlabel("Epochs") plt.ylabel(metric) plt.legend([metric, 'val_'+metric]) plt.show() plot_graphs(history, 'binary_accuracy') plot_graphs(history, 'loss') model.evaluate(valid_data, verbose=1) ``` ### Testing all negatives ``` df_negs = df[df.target == 1] df_neg_comments = df_negs["question_text"] for com in df_neg_comments.tail(10): print(com) print("_____________________") print("_____________________") sample_example = ["What are not scam online services for essay writing?", "Is it just me, or was Prince the singer ugly as fudge?", "Could we finally give Einstein's first wife credit for her equation, E=m (c squared)? She deserves a lot more than that.", "Why do Jews want endless immigration to the US, but Israel has a wall, Saharonim prison, and very strict immigration laws against non-Jews?", "Will Oprah buy the DNC to guarantee her nomination in 2020 like Hillary did in 2016?", "Why is it when singers have lyrics about voices in their head, religious people say they hear god and anyone stating they are being targeted by voice to brain technology? Are they called delusional schizo?", "Do pakis smell of curry and shit?" "Isn't Trump right after all? Why should the USA be benevolent towards it's neighbor Mexico, and not put America first, when Russia is invading Ukraine? Even India imposed a blockade over Nepal after an earthquake", "Are you ashamed of being an Indian?"] test_data = tf.data.Dataset.from_tensor_slices((sample_example, [0]*len(sample_example))) test_data = (test_data.map(to_feature_map).batch(1)) preds = model.predict(test_data) ['Toxic' if pred >=0.5 else 'Sincere' for pred in preds] ``` 1 out of 10 is labelled wrong based on manual inspection. ``` preds ``` ### Testing all positives ``` df_pos = df[df.target == 0] df_pos_comments = df_pos["question_text"] for com in df_pos_comments.tail(10): print(com) print("") sample_example = ["If you had $10 million of Bitcoin, could you sell it and pay no capital gain tax if you also quit work and had no ordinary income for the year?", "What are the methods to determine fossil ages in 10th STD?", "What is your story today?", "How do I consume 150 gms protein daily both vegetarian and non vegetarian diet seperately?", "What are the good career options for a msc chemistry student after qualifying gate?", "What other technical skills do you need as a computer science undergrad other than c and c++?", "Does MS in ECE have good job prospects in USA or like India there are more IT jobs present?", "Is foam insulation toxic?", "How can one start a research project based on biochemistry at UG level?", "Who wins in a battle between a Wolverine and a Puma?"] test_data = tf.data.Dataset.from_tensor_slices((sample_example, [0]*len(sample_example))) test_data = (test_data.map(to_feature_map).batch(1)) preds = model.predict(test_data) ['Toxic' if pred >=0.5 else 'Sincere' for pred in preds] # 10/10 for sincere comments. 
preds ```
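The two spot-check cells above repeat the same preprocessing-and-predict boilerplate. If you want to try more examples, that pattern can be wrapped in a small helper. The sketch below is only illustrative (the `classify_questions` name and its `threshold` argument are not part of the original notebook) and assumes the `to_feature_map` function and the fine-tuned `model` defined earlier in this notebook.

```
import tensorflow as tf

def classify_questions(questions, threshold=0.5):
    # Dummy labels: to_feature_map expects (text, label) pairs, but the labels
    # are ignored at prediction time.
    data = tf.data.Dataset.from_tensor_slices((questions, [0] * len(questions)))
    data = data.map(to_feature_map).batch(1)
    scores = model.predict(data)
    return [('Toxic' if score >= threshold else 'Sincere', float(score))
            for score in scores.flatten()]

# Example usage:
# classify_questions(["Is foam insulation toxic?"])
```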
# Tutorial Part 21: Exploring Quantum Chemistry with GDB1k

Most of the tutorials we've walked you through so far have focused on applications to the drug discovery realm, but DeepChem's tool suite works for molecular design problems generally. In this tutorial, we're going to walk through an example of how to train a simple molecular machine learning model for the task of predicting the atomization energy of a molecule. (Remember that the atomization energy is the energy required to form 1 mol of gaseous atoms from 1 mol of the molecule in its standard state under standard conditions.)

## Colab

This tutorial and the rest in this sequence can be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/21_Exploring_Quantum_Chemistry_with_GDB1k.ipynb)

## Setup

To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells, since they will download and install Anaconda on your local machine.

```
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e

!pip install --pre deepchem
import deepchem
deepchem.__version__
```

With our setup in place, let's do a few standard imports to get the ball rolling.

```
import deepchem as dc
from sklearn.ensemble import RandomForestRegressor
from sklearn.kernel_ridge import KernelRidge
```

The next step is to load our dataset. We're using a small dataset we've prepared that's pulled out of the larger GDB benchmarks. The dataset contains the atomization energies for 1K small molecules.

```
tasks = ["atomization_energy"]
dataset_file = "../../datasets/gdb1k.sdf"
smiles_field = "smiles"
mol_field = "mol"
```

We now need a way to transform molecules that is useful for prediction of atomization energy. This representation draws on foundational work [1] that represents a molecule's 3D electrostatic structure as a 2D matrix $C$ of distances scaled by charges, where the $ij$-th element is given by the following charge structure.

$C_{ij} = \frac{q_i q_j}{r_{ij}^2}$

If you're observing carefully, you might ask: wait, doesn't this mean that molecules with different numbers of atoms generate matrices of different sizes? In practice, the trick to get around this is that the matrices are "zero-padded." That is, if you're making Coulomb matrices for a set of molecules, you pick a maximum number of atoms $N$, make the matrices $N \times N$, and set to zero all the extra entries for this molecule. (There are a couple of extra tricks done under the hood beyond this. Check out reference [1] or read the source code in DeepChem!)

DeepChem has a built-in featurization class `dc.feat.CoulombMatrixEig` that can generate these featurizations for you.

```
featurizer = dc.feat.CoulombMatrixEig(23, remove_hydrogens=False)
```

Note that in this case, we set the maximum number of atoms to $N = 23$. Let's now load our dataset file into DeepChem. As in the previous tutorials, we use a `Loader` class, in particular `dc.data.SDFLoader`, to load our `.sdf` file into DeepChem.
The following snippet shows how we do this:

```
loader = dc.data.SDFLoader(
      tasks=["atomization_energy"],
      featurizer=featurizer)
dataset = loader.create_dataset(dataset_file)
```

For the purposes of this tutorial, we're going to do a random split of the dataset into training, validation, and test. In general, this split is weak and will considerably overestimate the accuracy of our models, but for this simple tutorial it isn't a bad place to get started.

```
random_splitter = dc.splits.RandomSplitter()
train_dataset, valid_dataset, test_dataset = random_splitter.train_valid_test_split(dataset)
```

One issue that Coulomb matrix featurizations have is that the range of entries in the matrix $C$ can be large. The charge $q_1 q_2 / r^2$ term can range very widely. In general, a wide range of values for inputs can throw off learning for the neural network. For this, a common fix is to normalize the input values so that they fall into a more standard range. Recall that the normalization transform applies to each feature $X_i$ of datapoint $X$

$\hat{X_i} = \frac{X_i - \mu_i}{\sigma_i}$

where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $i$-th feature. This transformation enables the learning to proceed smoothly.

A second point is that the atomization energies also fall across a wide range, so we apply an analogous normalization transformation to the output to scale the energies better. We use DeepChem's transformation API to make this happen:

```
transformers = [
    dc.trans.NormalizationTransformer(transform_X=True, dataset=train_dataset),
    dc.trans.NormalizationTransformer(transform_y=True, dataset=train_dataset)]

for dataset in [train_dataset, valid_dataset, test_dataset]:
  for transformer in transformers:
    dataset = transformer.transform(dataset)
```

Now that we have the data cleanly transformed, let's do some simple machine learning. We'll start by constructing a random forest on top of the data. We'll use DeepChem's hyperparameter tuning module to do this.

```
def rf_model_builder(model_dir, **model_params):
  sklearn_model = RandomForestRegressor(**model_params)
  return dc.models.SklearnModel(sklearn_model, model_dir)

params_dict = {
    "n_estimators": [10, 100],
    "max_features": ["auto", "sqrt", "log2", None],
}

metric = dc.metrics.Metric(dc.metrics.mean_absolute_error)
optimizer = dc.hyper.GridHyperparamOpt(rf_model_builder)
best_rf, best_rf_hyperparams, all_rf_results = optimizer.hyperparam_search(
    params_dict, train_dataset, valid_dataset, output_transformers=transformers,
    metric=metric, use_max=False)

for key, value in all_rf_results.items():
  print(f'{key}: {value}')
print('Best hyperparams:', best_rf_hyperparams)
```

Let's build one more model, a kernel ridge regression, on top of this raw data.

```
def krr_model_builder(model_dir, **model_params):
  sklearn_model = KernelRidge(**model_params)
  return dc.models.SklearnModel(sklearn_model, model_dir)

params_dict = {
    "kernel": ["laplacian"],
    "alpha": [0.0001],
    "gamma": [0.0001]
}

metric = dc.metrics.Metric(dc.metrics.mean_absolute_error)
optimizer = dc.hyper.GridHyperparamOpt(krr_model_builder)
best_krr, best_krr_hyperparams, all_krr_results = optimizer.hyperparam_search(
    params_dict, train_dataset, valid_dataset, output_transformers=transformers,
    metric=metric, use_max=False)

for key, value in all_krr_results.items():
  print(f'{key}: {value}')
print('Best hyperparams:', best_krr_hyperparams)
```

# Congratulations! Time to join the Community!

Congratulations on completing this tutorial notebook!
If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:

## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)

This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.

## Join the DeepChem Gitter

The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!

# Bibliography

[1] https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.146401
``` import random import numpy as np from collections import deque from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam import random from keras.utils.vis_utils import plot_model import matplotlib.pyplot as plt EPISODES = 100 class DQNAgent: def __init__(self, state_size, action_size): self.state_size = state_size self.action_size = action_size self.memory = deque(maxlen=2000) self.gamma = 0.95 # discount rate self.epsilon = 1.0 # exploration rate self.epsilon_min = 0.01 self.epsilon_decay = 0.995 self.learning_rate = 0.001 self.model = self._build_model() def _build_model(self): # Neural Net for Deep-Q learning Model model = Sequential() model.add(Dense(24, input_dim=self.state_size, activation='relu')) model.add(Dense(24, activation='relu')) model.add(Dense(self.action_size, activation='linear')) model.compile(loss='mse', optimizer=Adam(lr=self.learning_rate)) return model def remember(self, state, action, reward, next_state, done): self.memory.append((state, action, reward, next_state, done)) def act(self, state): if np.random.rand() <= self.epsilon: return random.randrange(self.action_size) act_values = self.model.predict(state) return np.argmax(act_values[0]) # returns action def replay(self, batch_size): minibatch = random.sample(self.memory, batch_size) for state, action, reward, next_state, done in minibatch: target = reward #target = 0 if not done: target = (reward + self.gamma * np.amax(self.model.predict(next_state)[0])) target_f = self.model.predict(state) target_f[0][action] = target self.model.fit(state, target_f, epochs=1, verbose=0) if self.epsilon > self.epsilon_min: self.epsilon *= self.epsilon_decay def load(self, name): self.model.load_weights(name) def save(self, name): self.model.save_weights(name) def represent_state(self,state,n): l = state.tolist()[0] l = [str(i) for i in l] tup_state_rep = [l[i:i+n] for i in range(0,len(l),n)] #print(tup_state_rep) for i in range(n): print(i+1,"--",' '.join(tup_state_rep[i])) print("\n") def reset_state(self,n): state = [0 for i in range(n*(n-1))] state = state+[1 for i in range(n)] state.append(-1) return np.array(state) def calc_reward(self,state, n): l = state.tolist()[:-1] rows = [l[i:i+n] for i in range(0,len(l),n)] cols = list(zip(*rows)) board = [i.index(1) for i in cols] row_frequency = [0] * n main_diag_frequency = [0] * (2 * n) secondary_diag_frequency = [0] * (2 * n) for i in range(n): row_frequency[board[i]] += 1 main_diag_frequency[board[i] + i] += 1 secondary_diag_frequency[n - board[i] + i] += 1 conflicts = 0 # formula: (N * (N - 1)) / 2 for i in range(2*n): if i < n: conflicts += (row_frequency[i] * (row_frequency[i]-1)) / 2 conflicts += (main_diag_frequency[i] * (main_diag_frequency[i]-1)) / 2 conflicts += (secondary_diag_frequency[i] * (secondary_diag_frequency[i]-1)) / 2 return int(conflicts) * -1 def step(self,state,which_queen,action,n): l = state.tolist()[0][:-1] which_queen = which_queen-1 for i in range(n): if l[n*i+which_queen]==1: l[n*i+which_queen]=0 break l[n*(action-1)+which_queen] = 1 l.append(-1) next_state = np.array(l) reward = self.calc_reward(next_state,n) done = False if reward==0: done = True return next_state,reward,done if __name__ == "__main__": #Solve for 4*4 n = 4 #state size = n*n + 1 bit for which_queen state_size = n*n+1 action_size = n agent = DQNAgent(state_size, action_size) done = False batch_size = 32 #queen mapping queen_map = {} for i in range(1,n+1): queen_map[i] = i*10 #queen_map = {1:10,2:20,3:30,4:40} converging_iters = [] for 
episode in range(EPISODES): state = agent.reset_state(n) state = np.reshape(state, [1, state_size]) print("Episode ",episode+1) i =0 while(1): which_queen = i%n+1 state = state.tolist()[0] state[-1] = queen_map[which_queen] state = np.array(state) state = np.reshape(state, [1, state_size]) action = agent.act(state) next_state, reward, done = agent.step(state,which_queen,action,n) next_state = np.reshape(next_state, [1, state_size]) agent.remember(state, action, reward, next_state, done) state = next_state if len(agent.memory) > batch_size: agent.replay(batch_size) if(reward==0): print("Iterations to converge on episode ",episode+1," = ",i+1) agent.represent_state(state,n) converging_iters.append(i+1) break else: i = i+1 plt.title("Convergence Graph for n = "+str(n)) plt.plot([i for i in range(len(converging_iters))],converging_iters) plt.xlabel("Episode Number") plt.ylabel("Iterations to converge") plt.show() converging_iters converging_iters[21] #plot_model(agent.model, to_file='model_plot.png', show_shapes=True, show_layer_names=True) ```
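For reference, the reward driving the agent above is simply the negated count of attacking queen pairs computed inside `calc_reward`. The standalone sketch below (the `count_conflicts` name is purely illustrative) shows the same counting idea on a plain board encoding, where `board[i]` holds the row of the queen in column `i` — the same information `calc_reward` recovers from the one-hot state.

```
def count_conflicts(board):
    # board[i] = row index of the queen in column i.
    n = len(board)
    rows = [0] * n
    diag = [0] * (2 * n)       # row + col is constant along one diagonal direction
    anti_diag = [0] * (2 * n)  # n - row + col is constant along the other direction
    for col, row in enumerate(board):
        rows[row] += 1
        diag[row + col] += 1
        anti_diag[n - row + col] += 1
    pairs = lambda k: k * (k - 1) // 2  # attacking pairs among k queens on one line
    return sum(map(pairs, rows)) + sum(map(pairs, diag)) + sum(map(pairs, anti_diag))

print(count_conflicts([1, 3, 0, 2]))  # a valid 4-queens placement -> 0
print(count_conflicts([0, 1, 2, 3]))  # all queens on one diagonal -> 6
```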
# Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. ## Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. <img src="assets/simple_neuron.png" width=400px> Mathematically this looks like: $$ \begin{align} y &= f(w_1 x_1 + w_2 x_2 + b) \\ y &= f\left(\sum_i w_i x_i +b \right) \end{align} $$ With vectors this is the dot/inner product of two vectors: $$ h = \begin{bmatrix} x_1 \, x_2 \cdots x_n \end{bmatrix} \cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} $$ ## Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. <img src="assets/tensor_examples.svg" width=600px> With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. ``` # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 5 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) +bias) print(y) ``` Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. 
They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error ```python >> torch.mm(features, weights) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-13-15d592eb5279> in <module>() ----> 1 torch.mm(features, weights) RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033 ``` As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. 
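As a quick, throwaway illustration of these three options (the tensor `t` below is just an example, not part of the network, and uses the `torch` import from earlier):

```
t = torch.arange(6)       # tensor([0, 1, 2, 3, 4, 5]) with shape torch.Size([6])
print(t.view(2, 3))       # same data, viewed as 2 rows x 3 columns
print(t.reshape(3, 2))    # same data, as 3 rows x 2 columns
t.resize_(2, 3)           # in-place (note the trailing underscore)
print(t.shape)            # torch.Size([2, 3])
```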
So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ``` ## Calculate the output of this network using matrix multiplication reshaped_weights = weights.view(5, 1) y = activation(torch.mm(features, reshaped_weights) + bias) print(y) ``` ### Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. <img src='assets/multilayer_diagram_weights.png' width=450px> The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$ \vec{h} = [h_1 \, h_2] = \begin{bmatrix} x_1 \, x_2 \cdots \, x_n \end{bmatrix} \cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2} \end{bmatrix} $$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply $$ y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right) $$ ``` ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ``` > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ``` ## Your solution here first_output = activation(torch.mm(features, W1) +B1) y = activation(torch.mm(first_output ,W2) + B2) print(y) ``` If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. ## Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ``` import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ``` The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. 
``` # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ```
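The sharing works in the other direction as well. As a small follow-up using the same `a` and `b` as above, an in-place change on the Numpy side shows up in the tensor:

```
# Add 1 to the Numpy array in place...
np.add(a, 1, out=a)

# ...and the Torch tensor sees the new values, since both share the same memory
b
```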
``` # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import gc import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory import os print(os.listdir("../input")) # Any results you write to the current directory are saved as output. COMBINED_DATA_FPATH='DATA_WITH_TXT.h5' DATA_FPATH = '../input/fork-of-treebasedapproachdata/DATA.hdf' TEST_LIKE_SALES_FPATH = '../input/addingzerototrain/train_with_zero.hdf' SALES_FPATH ='../input/competitive-data-science-predict-future-sales/sales_train.csv' ITEMS_FPATH = '../input/competitive-data-science-predict-future-sales/items.csv' SHOPS_FPATH = '../input/competitive-data-science-predict-future-sales/shops.csv' TEST_SALES_FPATH = '../input/competitive-data-science-predict-future-sales/test.csv' SAMPLE_SUBMISSION_FPATH = '../input/competitive-data-science-predict-future-sales/sample_submission.csv' TRAINED_MODEL_FPATH = 'trained_model.bin' TEXT_FEATURE_FPATH = '../input/textfeatures/text_features.h5' # Load preprocessed data. X_df = pd.read_hdf(DATA_FPATH, 'X') y_df = pd.read_hdf(DATA_FPATH, 'y') sales_df = pd.read_hdf(TEST_LIKE_SALES_FPATH, 'df') item_text = pd.read_hdf(TEXT_FEATURE_FPATH, 'item_500') shop_text = pd.read_hdf(TEXT_FEATURE_FPATH, 'shop_50') category_text = pd.read_hdf(TEXT_FEATURE_FPATH, 'category_60') y_df[y_df > 20 ] = 20 y_df[y_df < 0] = 0 gc.collect() item_text = item_text[['item_id'] + ['item_name_text_{}'.format(i) for i in range(50)]] shop_text = shop_text[['shop_id'] + ['shop_name_text_{}'.format(i) for i in range(10)]] category_text = category_text[['item_category_id'] + ['item_category_name_text_{}'.format(i) for i in range(10)]] def add_text_features(df): df.reset_index(inplace=True) df = pd.merge(df, shop_text, how='left', on='shop_id') print('Shop text added') gc.collect() df = pd.merge(df, category_text, how='left', on='item_category_id') print('Category text added') gc.collect() df = pd.merge(df, item_text, how='left', on='item_id') print('Item text added') gc.collect() df.set_index('index',inplace=True) return df y_df.to_hdf(COMBINED_DATA_FPATH, 'y') del y_df for year in X_df.year.unique(): X_train_df = X_df[X_df.year == year].copy() X_train_df = add_text_features(X_train_df) gc.collect() train_columns = X_train_df.columns.tolist() X_train_df.to_hdf(COMBINED_DATA_FPATH,'X_{}'.format(year)) del X_train_df gc.collect() del X_df gc.collect() len(train_columns) test_X_df = pd.read_hdf(DATA_FPATH, 'test_X') test_X_df = add_text_features(test_X_df) test_X_df = test_X_df[train_columns] test_X_df.to_hdf(COMBINED_DATA_FPATH, 'X_test',complevel=9) ```
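Since the feature matrix is written out in per-year chunks, a downstream training notebook has to reassemble it before fitting anything. The snippet below is only a sketch of that read-back, assuming the same HDF keys written above (`y`, `X_test`, and one `X_<year>` key per year); the `years` list is illustrative and should come from the data itself.

```
import pandas as pd

COMBINED_DATA_FPATH = 'DATA_WITH_TXT.h5'
years = [2013, 2014, 2015]  # illustrative; use the year values actually present

X_df = pd.concat([pd.read_hdf(COMBINED_DATA_FPATH, 'X_{}'.format(year)) for year in years])
# The per-year frames kept the original index, so the target can be re-aligned with it.
y_df = pd.read_hdf(COMBINED_DATA_FPATH, 'y').loc[X_df.index]
test_X_df = pd.read_hdf(COMBINED_DATA_FPATH, 'X_test')

print(X_df.shape, y_df.shape, test_X_df.shape)
```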
```
import sys
sys.path.insert(0, '../')
from kineticspy.robobj import robot
from kineticspy import graphics, kin, trajectory, solidobj, measure, out
import math
import numpy as np

pi = math.pi
```

Declaring the robot object; in this case it is the arm.

```
jointno = 3  # don't change
jointaxismat = [[0, 0, 1], [0, 1, 0], [0, 1, 0]]  # don't change
lenmat = [[0, 0, 1], [0, 0, 2], [0, 0, 3]]
cmdmat = []

robot1 = robot.Robot(jointno, jointaxismat, lenmat, cmdmat)
robotinitialpos = robotfinalpos = graphics.position(robot1, robot1.coordmat)
```

Forward Kinematics

```
thetalist = [pi/2, pi/3, 2*pi/3]
coordmat = kin.fwd(robot1, thetalist)
robotfinalpos1 = graphics.position(robot1, coordmat)
data = robotfinalpos1 + robotinitialpos
m = graphics.plot(data)
#m.show()
```

Inverse Kinematics

```
pos = [-2.29, 3.98, 2]
thetalist = kin.inv(robot1, pos)
coordmat = kin.fwd(robot1, thetalist)
robotfinalpos2 = graphics.position(robot1, coordmat)
data = robotfinalpos2 + robotinitialpos
m = graphics.plot(data)
#m.show()
```

Straight-line Trajectory Generation

```
pos1 = [-2.29, 3.98, 2]
pos2 = [-2.29, -3.98, 2]
trajectorymat = trajectory.linetrac(robot1, pos1, pos2)
trajectoryendeff = trajectory.trajectory_end(robot1, trajectorymat)
# uncomment and add below to see the trajectory of the joints
# trajectoryjoints = trajectory.trajectory_joints(robot1, trajectorymat)

robo1 = graphics.position(robot1, trajectorymat[0])
robo2 = graphics.position(robot1, trajectorymat[len(trajectorymat) - 1])
data = trajectoryendeff + robo1 + robo2
print(robot1.cmdmat)
m = graphics.plot(data)
m.show()
```

Natural Trajectory Generation

```
thetainit = [pi/2, pi/3, 2*pi/3]
thetafinal = [-pi/2, pi/3, 2*pi/3]
nopoint = 20
trajectorymat = trajectory.calctrac(robot1, thetainit, thetafinal, nopoint)
trajectoryendeff = trajectory.trajectory_end(robot1, trajectorymat)
# uncomment and add below to see the trajectory of the joints
# trajectoryjoints = trajectory.trajectory_joints(robot1, trajectorymat)

robo1 = graphics.position(robot1, trajectorymat[0])
robo2 = graphics.position(robot1, trajectorymat[len(trajectorymat) - 1])
data = trajectoryendeff + robo1 + robo2
m = graphics.plot(data)
m.show()
```

You can also see the volume covered by adding the command `vol = solidobj.volcov(robot1, trajectorymat)`.

Add obstacles: model them as cuboids.

```
l = 3
b = 3
h = 2
pos = [3, 1, 0]
Obstacle1 = solidobj.obstacle(l, b, h, pos)

l = 3
b = 3
h = 2
pos = [-3, -1, 0]
Obstacle2 = solidobj.obstacle(l, b, h, pos)

viapoints = [[-2.29, 3.98, 2], [-2.29, -3.98, 2], [2.29, -3.98, 2], [2.29, 3.98, 2]]
trajectorymat = trajectory.viapts(robot1, viapoints)
trac_end = trajectory.trajectory_end(robot1, trajectorymat)
trac_joint = trajectory.trajectory_joints(robot1, trajectorymat)
vol = solidobj.volcovered(robot1, trajectorymat)

robo1 = graphics.position(robot1, trajectorymat[0])
robo2 = graphics.position(robot1, trajectorymat[len(trajectorymat) - 1])
new_data = Obstacle1 + Obstacle2 + trac_end + trac_joint + vol + robo1 + robo2
m = graphics.plot(new_data)
m.show()

orientation = [pi/2, pi/3, 2*pi/3]
mani_elip = measure.vel_mani_elip(robot1, orientation)
coordmat = kin.fwd(robot1, orientation)
robotpos = graphics.position(robot1, coordmat)
data = mani_elip + robotpos
m = graphics.plot(data)
m.show()

print(robot1.cmdmat)

saving_location = "example.csv"
out.write(robot1, saving_location)
```
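A quick consistency check could follow the inverse-kinematics step. This is a sketch only; it assumes, which the notebook does not state, that the last entry of the coordinate matrix returned by `kin.fwd` is the end-effector position.

```
import numpy as np

# Sketch: run inverse kinematics for a target, feed the result back through
# forward kinematics, and compare the recovered end effector with the target.
target = [-2.29, 3.98, 2]
thetalist_chk = kin.inv(robot1, target)
coordmat_chk = kin.fwd(robot1, thetalist_chk)
end_effector = np.array(coordmat_chk[-1])   # assumption: last row = end effector
print("round-trip error:", np.linalg.norm(end_effector - np.array(target)))
```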
___
``` import os import glob import sys import numpy as np import pickle import tensorflow as tf import PIL import ipywidgets import io """ make sure this notebook is running from root directory """ while os.path.basename(os.getcwd()) in ('notebooks', 'src'): os.chdir('..') assert ('README.md' in os.listdir('./')), 'Can not find project root, please cd to project root before running the following code' import src.tl_gan.generate_image as generate_image import src.tl_gan.feature_axis as feature_axis import src.tl_gan.feature_celeba_organize as feature_celeba_organize """ load feature directions """ path_feature_direction = './asset_results/pg_gan_celeba_feature_direction_40' pathfile_feature_direction = glob.glob(os.path.join(path_feature_direction, 'feature_direction_*.pkl'))[-1] with open(pathfile_feature_direction, 'rb') as f: feature_direction_name = pickle.load(f) feature_direction = feature_direction_name['direction'] feature_name = feature_direction_name['name'] num_feature = feature_direction.shape[1] import importlib importlib.reload(feature_celeba_organize) feature_name = feature_celeba_organize.feature_name_celeba_rename feature_direction = feature_direction_name['direction']* feature_celeba_organize.feature_reverse[None, :] """ start tf session and load GAN model """ # path to model code and weight path_pg_gan_code = './src/model/pggan' path_model = './asset_model/karras2018iclr-celebahq-1024x1024.pkl' sys.path.append(path_pg_gan_code) """ create tf session """ yn_CPU_only = False if yn_CPU_only: config = tf.ConfigProto(device_count = {'GPU': 0}, allow_soft_placement=True) else: config = tf.ConfigProto(allow_soft_placement=True) config.gpu_options.allow_growth = True sess = tf.InteractiveSession(config=config) try: with open(path_model, 'rb') as file: G, D, Gs = pickle.load(file) except FileNotFoundError: print('before running the code, download pre-trained model to project_root/asset_model/') raise len_z = Gs.input_shapes[0][1] z_sample = np.random.randn(len_z) x_sample = generate_image.gen_single_img(z_sample, Gs=Gs) def img_to_bytes(x_sample): imgObj = PIL.Image.fromarray(x_sample) imgByteArr = io.BytesIO() imgObj.save(imgByteArr, format='PNG') imgBytes = imgByteArr.getvalue() return imgBytes z_sample = np.random.randn(len_z) x_sample = generate_image.gen_single_img(Gs=Gs) w_img = ipywidgets.widgets.Image(value=img_to_bytes(x_sample), fromat='png', width=512, height=512) class GuiCallback(object): counter = 0 # latents = z_sample def __init__(self): self.latents = z_sample self.feature_direction = feature_direction self.feature_lock_status = np.zeros(num_feature).astype('bool') self.feature_directoion_disentangled = feature_axis.disentangle_feature_axis_by_idx( self.feature_direction, idx_base=np.flatnonzero(self.feature_lock_status)) def random_gen(self, event): self.latents = np.random.randn(len_z) self.update_img() def modify_along_feature(self, event, idx_feature, step_size=0.01): self.latents += self.feature_directoion_disentangled[:, idx_feature] * step_size self.update_img() def set_feature_lock(self, event, idx_feature, set_to=None): if set_to is None: self.feature_lock_status[idx_feature] = np.logical_not(self.feature_lock_status[idx_feature]) else: self.feature_lock_status[idx_feature] = set_to self.feature_directoion_disentangled = feature_axis.disentangle_feature_axis_by_idx( self.feature_direction, idx_base=np.flatnonzero(self.feature_lock_status)) def update_img(self): x_sample = generate_image.gen_single_img(z=self.latents, Gs=Gs) x_byte = img_to_bytes(x_sample) 
w_img.value = x_byte guicallback = GuiCallback() step_size = 0.4 def create_button(idx_feature, width=96, height=40): """ function to built button groups for one feature """ w_name_toggle = ipywidgets.widgets.ToggleButton( value=False, description=feature_name[idx_feature], tooltip='{}, Press down to lock this feature'.format(feature_name[idx_feature]), layout=ipywidgets.Layout(height='{:.0f}px'.format(height/2), width='{:.0f}px'.format(width), margin='2px 2px 2px 2px') ) w_neg = ipywidgets.widgets.Button(description='-', layout=ipywidgets.Layout(height='{:.0f}px'.format(height/2), width='{:.0f}px'.format(width/2), margin='1px 1px 5px 1px')) w_pos = ipywidgets.widgets.Button(description='+', layout=ipywidgets.Layout(height='{:.0f}px'.format(height/2), width='{:.0f}px'.format(width/2), margin='1px 1px 5px 1px')) w_name_toggle.observe(lambda event: guicallback.set_feature_lock(event, idx_feature)) w_neg.on_click(lambda event: guicallback.modify_along_feature(event, idx_feature, step_size=-1 * step_size)) w_pos.on_click(lambda event: guicallback.modify_along_feature(event, idx_feature, step_size=+1 * step_size)) button_group = ipywidgets.VBox([w_name_toggle, ipywidgets.HBox([w_neg, w_pos])], layout=ipywidgets.Layout(border='1px solid gray')) return button_group list_buttons = [] for idx_feature in range(num_feature): list_buttons.append(create_button(idx_feature)) yn_button_select = True def arrange_buttons(list_buttons, yn_button_select=True, ncol=4): num = len(list_buttons) if yn_button_select: feature_celeba_layout = feature_celeba_organize.feature_celeba_layout layout_all_buttons = ipywidgets.VBox([ipywidgets.HBox([list_buttons[item] for item in row]) for row in feature_celeba_layout]) else: layout_all_buttons = ipywidgets.VBox([ipywidgets.HBox(list_buttons[i*ncol:(i+1)*ncol]) for i in range(num//ncol+int(num%ncol>0))]) return layout_all_buttons # w_button.on_click(on_button_clicked) guicallback.update_img() w_button_random = ipywidgets.widgets.Button(description='random face', button_style='success', layout=ipywidgets.Layout(height='40px', width='128px', margin='1px 1px 5px 1px')) w_button_random.on_click(guicallback.random_gen) w_box = ipywidgets.HBox([w_img, ipywidgets.VBox([w_button_random, arrange_buttons(list_buttons, yn_button_select=True)]) ], layout=ipywidgets.Layout(height='1024}px', width='1024px') ) print('press +/- to adjust feature, toggle feature name to lock the feature') display(w_box) ```
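As a small extension (an assumption, not part of the original notebook), the face currently shown in the GUI and its latent vector could be saved to disk by reusing the helpers defined above; the file names are arbitrary examples.

```
# Sketch only: save the currently displayed face together with its latent vector.
def save_current(path_img='current_face.png', path_latent='current_latent.npy'):
    x = generate_image.gen_single_img(z=guicallback.latents, Gs=Gs)
    PIL.Image.fromarray(x).save(path_img)
    np.save(path_latent, guicallback.latents)

save_current()
```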
___
``` import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt import seaborn as sbs import datetime from datetime import timedelta port_start_date = "2017/6/1" now = "2018/2/26" index = pd.bdate_range(port_start_date, now) # tickers' last date 2/23/2017 tickers_list = ["ADM", "APO", "CG", "CHKP", "FEYE", "FTI", "GEVO", "GLD", "GPRO", "GRMN", "HPE", "IBM", "INTC", "JASO", "JNPR", "QCOM", "SGOL", "SWM", "TM", "VEA", "VGIT", "VGK", "VGLT", "VGSH", "VGT", "VOO", "VT", "VTI"] def read_file(file_list): path = "C:/Users/pc/Desktop/portfolio_data/CSVs/" df = pd.DataFrame(index=index) for file in file_list: # using only "date" and "Adj Close" columns data = pd.read_csv(path + file + ".csv", index_col=0, parse_dates=True, usecols=[0, 5]) data = data.rename(columns={"Adj Close": file}) # change anything which is not a float to NaN data = pd.to_numeric(data[file], errors="coerce") df = df.join(data, how="inner") df = df.dropna() return df df_stock_prices = read_file(tickers_list) df = pd.DataFrame(np.zeros((len(df_stock_prices.index), len(tickers_list))), index=df_stock_prices.index, columns=tickers_list) df["ADM"].loc["7/10/2017":] = 2 df["APO"].loc["7/21/2017":] = 2 df["CG"].loc["7/21/2017":] = 2 df["CHKP"].loc["9/15/2017":] = 1 df["FEYE"].loc["1/16/2018":] = 1 df["FTI"].loc["6/21/2017":"8/23/2017"] = 4 df["GEVO"].loc["6/22/2017":"8/24/2017"] = 50 df["GLD"].loc["11/3/2017":] = 1 df["GPRO"].loc["1/16/2018":] = 10 df["GRMN"].loc["6/16/2017":] = 3 df["HPE"].loc["1/4/2018":] = 1 df["IBM"].loc["9/20/2017":] = 1 df["INTC"].loc["7/10/2017":"1/7/2018"] = 2 df["JASO"].loc["6/23/2017":"9/14/2017"] = 5 df["JASO"].loc["9/15/2017":] = 10 df["JNPR"].loc["9/15/2017":] = 2 df["QCOM"].loc["7/10/2017":] = 2 df["SGOL"].loc["11/3/2017":] = 1 df["SWM"].loc["7/10/2017":] = 2 df["TM"].loc["6/5/2017":] = 2 df["VEA"].loc["10/24/2017":] = 2 df["VGIT"].loc["8/10/2017":"10/23/2017"] = 5 df["VGK"].loc["10/24/2017"] = 2 df["VGLT"].loc["8/10/2017":"9/27/2017"] = 5 df["VGLT"].loc["9/28/2017":"10/23/2017"] = 6 df["VGLT"].loc["10/24/2017"] = 1 df["VGLT"].loc["10/25/2017"] = 8 df["VGLT"].loc["10/26/2017":] = 6 df["VGSH"].loc["8/10/2017":"9/20/2017"] = 3 df["VGSH"].loc["9/28/2017":"10/23/2017"] = 1 df["VGT"].loc["10/24/2017"] = 1 df["VOO"].loc["10/24/2017":] = 1 df["VT"].loc["10/24/2017"] = 1 df["VTI"].loc["9/20/2017":"10/23/2017"] = 1 df["VTI"].loc["10/24/2017"] = 2 positions = df * df_stock_prices positions = positions.round(2) positions.head() df = pd.DataFrame(np.zeros((len(df_stock_prices.index), len(tickers_list))), index=df_stock_prices.index, columns=tickers_list) df["ADM"].loc["7/10/2017"] = -2 df["APO"].loc["7/21/2017"] = -2 df["CG"].loc["7/21/2017"] = -2 df["CHKP"].loc["9/15/2017"] = -1 df["FEYE"].loc["1/16/2018"] = -1 df["FTI"].loc["6/21/2017"] = -4 df["FTI"].loc["8/24/2017"] = 4 df["GEVO"].loc["6/22/2017"] = -50 df["GEVO"].loc["8/25/2017"] = 50 df["GLD"].loc["11/3/2017"] = -1 df["GPRO"].loc["1/16/2018"] = -10 df["GRMN"].loc["6/16/2017"] = -3 df["HPE"].loc["1/4/2018"] = -1 df["IBM"].loc["9/20/2017"] = -1 df["INTC"].loc["7/10/2017"] = -2 df["INTC"].loc["1/8/2018"] = 2 df["JASO"].loc["6/23/2017"] = -5 df["JASO"].loc["9/15/2017"] = -5 df["JNPR"].loc["9/15/2017"] = -2 df["QCOM"].loc["7/10/2017"] = -2 df["SGOL"].loc["11/3/2017"] = -1 df["SWM"].loc["7/10/2017"] = -2 df["TM"].loc["6/5/2017"] = -2 df["VEA"].loc["10/24/2017"] = -2 df["VGIT"].loc["8/10/2017"] = -5 df["VGIT"].loc["10/24/2017"] = 5 df["VGK"].loc["10/24/2017"] = -2 df["VGK"].loc["10/25/2017"] = 2 df["VGLT"].loc["8/10/2017"] = -5 
df["VGLT"].loc["9/28/2017"] = -1 df["VGLT"].loc["10/24/2017"] = 5 df["VGLT"].loc["10/25/2017"] = -7 df["VGLT"].loc["10/26/2017"] = 2 df["VGSH"].loc["8/10/2017"] = -3 df["VGSH"].loc["9/21/2017"] = 3 df["VGSH"].loc["9/28/2017"] = -1 df["VGSH"].loc["10/24/2017"] = 1 df["VGT"].loc["10/24/2017"] = -1 df["VGT"].loc["10/25/2017"] = 1 df["VOO"].loc["10/24/2017"] = -1 df["VT"].loc["10/24/2017"] = -1 df["VT"].loc["10/25/2017"] = 1 df["VTI"].loc["9/20/2017"] = -1 df["VTI"].loc["10/24/2017"] = -1 df["VTI"].loc["10/25/2017"] = 2 take_exit = df * df_stock_prices take_exit["Cash"] = take_exit.sum(axis=1) take_exit.loc[take_exit.index[0], "Cash"] = 2200 dividends = pd.read_csv("dividends.csv", index_col=0, usecols=["Date", "Total"], parse_dates=True) take_exit["Cash"] += dividends["Total"] take_exit.loc['2017-10-25'] take_exit["Cash"] = np.cumsum(np.array(take_exit["Cash"])) take_exit["Cash"] positions["Cash"] = take_exit["Cash"] positions = positions.rename(columns={"Cash":"Cash+Dividends"}) positions["Portfolio"] = positions.sum(1) positions.head() positions["Portfolio"].plot(ylim=(0, 2800)) ``` # Portfolio performance vs S&P ``` SandP = pd.DataFrame(index=index) SandP_long = pd.read_csv("C:/Users/pc/Desktop/portfolio_data/CSVs/GSPC.csv",index_col=0, parse_dates=True, usecols=[0, 5]) SandP_long.rename(columns={"Adj Close": "S&P"}, inplace=True) SandP = SandP.join(SandP_long, how='inner') norm_port = positions["Portfolio"]/positions["Portfolio"].loc[positions.index[0]] * 100 norm_snp = SandP/SandP.loc[SandP.index[0]] * 100 ax = (norm_port).plot(figsize=(16,8), legend=True) (norm_snp).plot(ax=ax) norm_port = pd.DataFrame(norm_port) norm_port_snp = norm_port.merge(norm_snp, left_index=True, right_index=True) norm_port_snp.to_csv("norm_port_snp.csv") portfolio = pd.DataFrame(positions["Portfolio"]) log_ret_port = np.log(portfolio / portfolio.shift(1)) # daily log_ret_snp = np.log(SandP / SandP.shift(1)) # daily ax = log_ret_snp.hist(bins=50, figsize=(10, 5), grid=False) log_ret_port.hist(bins=50, figsize=(10, 5), grid=False, ax=ax) plt.xlabel("Log return") plt.title("S&P vs Portfolio daily rets") plt.legend(["S&P", "Portfolio"]) ax1 = log_ret_snp.plot() log_ret_port.plot(ax=ax1, figsize=(12, 8)) log_rets = log_ret_snp.join(log_ret_port, how='inner') log_rets.corr() log_rets.to_csv("log_rets.csv") log_rets["S&P"].rolling(window=10).corr(log_rets["Portfolio"]).plot() ``` # Linear Regression ``` import sklearn np.set_printoptions(suppress=True) from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(log_ret_snp.dropna().values.reshape(-1,1), log_ret_port.dropna().values.reshape(-1,1)) # beta beta_port = lin_reg.coef_.flatten() beta_port intercept = lin_reg.intercept_ intercept plt.rc('figure', figsize=(14, 8)) plt.scatter(log_ret_snp, log_ret_port, s = 15) plt.plot(log_ret_snp, beta_port * log_ret_snp + intercept, "-", color='r') plt.axhline(color='grey', linewidth=.55) plt.axvline(color='grey', linewidth=.55) plt.xlabel("S&P", size=12) plt.ylabel("Portfolio", size=12) plt.title("Log returns S&P vs Portfolio", size=16) ``` # Stats ### Annualized ``` port_std = np.float16(log_ret_port.std() * np.sqrt(252)) port_std exp_ret_port = np.float16(log_ret_port.mean() * 252) exp_ret_port snp_std = np.float16(log_ret_snp.std() * np.sqrt(252)) snp_std exp_ret_snp = np.float16(log_ret_snp.mean() * 252) exp_ret_snp ``` # Ratios ``` # risk_free rate "r" on 03/02/18 on 10 year treasury r = .0286 sharpe_port = (exp_ret_port - r) / port_std sharpe_port sharpe_snp = (exp_ret_snp - r) / snp_std 
sharpe_snp treynor_port = (exp_ret_port - r) / beta_port treynor_port realized_ret_port = ((norm_port_snp.loc[norm_port_snp.index[-1]] - norm_port_snp.loc[norm_port_snp.index[0]]) / \ norm_port_snp.loc[norm_port_snp.index[0]])[0] realized_ret_snp = ((norm_port_snp.loc[norm_port_snp.index[-1]] - norm_port_snp.loc[norm_port_snp.index[0]]) / \ norm_port_snp.loc[norm_port_snp.index[0]])[1] alpha = realized_ret_port - r - beta_port*(realized_ret_snp - r) alpha ``` # Portfolio allocation; weight, sector, industry ``` port_allocation = positions.loc[positions.index[-1], positions.columns[0:-1]] port_allocation.replace(0, np.nan, inplace=True) port_allocation.dropna(inplace=True) port_allocation weights = port_allocation / port_allocation.sum() * 100 weights weights.sort_values().plot(kind='barh', figsize=(12, 7)) plt.title("Portfolio allocation") weights.to_csv("weights.csv") sector = {"VGLT": "Long US Bonds", "TM": "Consumer Goods", "VOO":"Blend ETF", "GRMN": "Technology", "IBM":"Technology", "SGOL":"Commodity ETF", "QCOM":"Technology", "GLD":"Commodity ETF", "CHKP":"Technology", "VEA":"Blend ETF", "ADM":"Consumer Goods", "SWM":"Consumer Goods", "JASO":"Technology", "APO":"Financial", "GRPO":"Consumer Goods", "JNPR":"Technology", "CG":"Financial", "Cash+Dividends": "Cash", "HPE":"Technology", "FEYE":"Technology"} industry = {"VGLT": "Vanguard Long-Term Treasury", "TM": "Auto Manufacturers", "VOO":"Vanguard S&P 500", "GRMN": "Scientific & Technical Instruments", "IBM":"Information Technology Services", "SGOL":"ETF Precious Metals", "QCOM":"Communication Equipment","GLD":"SPDR Precious Metals ETF", "CHKP":"Security Software & Services", "VEA":"Vanguard FTSE", "ADM":"Farm Products", "SWM":"Paper & Paper Products", "JASO":"Semiconductor", "APO":"Diversified Investments", "GRPO":"Photographic Equipment & Supplies", "JNPR":"Networking & Communication Devices", "CG":"Asset Management", "Cash+Dividends": "Cash", "HPE":"Information Technology", "FEYE":"Application Software"} distribution = pd.DataFrame(np.array(weights.sort_values(ascending=False)), index=sector.keys()) distribution = distribution.rename(columns={0:"Weight"}) distribution sector_ser = pd.Series(sector) distribution = distribution.merge(sector_ser.to_frame(), left_index=True, right_index=True) distribution industry_ser = pd.Series(industry) distribution = distribution.merge(industry_ser.to_frame(), left_index=True, right_index=True) distribution distribution = distribution.rename(columns={"0_x":"Sector", "0_y":"Industry"}) distribution.reset_index(inplace=True) distribution = distribution.rename(columns={"index":"Ticker"}) distribution group_by_sec = distribution.groupby("Sector").sum() group_by_sec group_by_sec.sort_values(by="Weight").plot(kind='barh', figsize=(10, 6), legend=False) distribution.to_csv("distribution.csv") distribution ```
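One statistic not computed above is the maximum drawdown. Here is a minimal sketch (an addition, not part of the original analysis) using the `positions["Portfolio"]` series built earlier.

```
# Sketch only: maximum drawdown of the portfolio value series.
port_value = positions["Portfolio"]
running_max = port_value.cummax()
drawdown = port_value / running_max - 1.0
print("Maximum drawdown: {:.1%}".format(drawdown.min()))
```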
___
Curve Fitting
=============

One of the most important tasks in any experimental science is modeling data and determining how well some theoretical function describes experimental data. In the last chapter, we illustrated how this can be done when the theoretical function is a simple straight line in the context of learning about Python functions and methods. Here we show how this can be done for arbitrary fitting functions, including linear, exponential, power-law, and other nonlinear fitting functions.

Using linear regression for fitting non-linear functions
---------------------------------------------------------

We can use our results for linear regression with $\chi^2$ weighting that we developed in Chapter 7 to fit functions that are nonlinear in the fitting parameters, *provided* we can transform the fitting function into one that is linear in the fitting parameters and in the independent variable ($x$).

### Linear regression for fitting an exponential function

To illustrate this approach, let's consider some experimental data taken from a radioactive source that was emitting beta particles (electrons). We notice that the number of electrons emitted per unit time is decreasing with time. Theory suggests that the number of electrons $N$ emitted per unit time should decay exponentially according to the equation

$$N(t) = N_0 e^{-t/\tau} \;.$$

This equation is nonlinear in $t$ and in the fitting parameter $\tau$ and thus cannot be fit using the method of the previous chapter. Fortunately, this is a special case for which the fitting function can be transformed into a linear form. Doing so will allow us to use the fitting routine we developed for fitting linear functions.

We begin our analysis by transforming our fitting function to a linear form. To this end we take the logarithm of the decay equation above:

$$\ln N = \ln N_{0} -\frac{t}{\tau} \;.$$

With this transformation our fitting function is linear in the independent variable $t$.
To make our method work, however, our fitting function must be linear in the *fitting parameters*, and our transformed function is still nonlinear in the fitting parameters $\tau$ and $N_0$. Therefore, we define new fitting parameters as follows

$$\begin{aligned} a &= \ln N_{0}\\ b &= -1/\tau \end{aligned}$$

Now if we define a new dependent variable $y = \ln N$, then our fitting function takes the form of a fitting function that is linear in the fitting parameters $a$ and $b$

$$y = a + bx$$

where the independent variable is $x=t$ and the dependent variable is $y=\ln N$.

We are almost ready to fit our transformed fitting function, with transformed fitting parameters $a$ and $b$, to our transformed independent and dependent data, $x$ and $y$. The last thing we have to do is to transform the estimates of the uncertainties $\delta N$ in $N$ to the uncertainties $\delta y$ in $y$ $(= \ln N)$. So how much does a given uncertainty in $N$ translate into an uncertainty in $y$? In most cases, the uncertainty in $y$ is much smaller than $y$, *i.e.* $\delta y \ll y$; similarly $\delta N \ll N$. In this limit we can use differentials to figure out the relationship between these uncertainties. Here is how it works for this example:

$$\begin{aligned} y &= \ln N\\ \delta y &= \left|\frac{\partial y}{\partial N}\right|\delta N\\ \delta y &= \frac{\delta N}{N} \;. \end{aligned}$$

The last equation tells us how a small change $\delta N$ in $N$ produces a small change $\delta y$ in $y$. Here we identify the differentials $dy$ and $dN$ with the uncertainties $\delta y$ and $\delta N$. Therefore, an uncertainty of $\delta N$ in $N$ corresponds, or translates, to an uncertainty $\delta y$ in $y$.

Let's summarize what we have done so far. We started with some data points $\{t_i,N_i\}$ and some additional data $\{\delta N_i\}$, where each datum $\delta N_i$ corresponds to the uncertainty in the experimentally measured $N_i$. We wish to fit these data to the fitting function

$$N(t) = N_0 e^{-t/\tau} \;.$$

We then take the natural logarithm of both sides and obtain the linear equation

$$\begin{aligned} \ln N &= \ln N_{0} -\frac{t}{\tau} \\ y &= a + bx \end{aligned}$$

with the obvious correspondences

$$\begin{aligned} x &= t\\ y &= \ln N\\ a &= \ln N_{0}\\ b &= -1/\tau \end{aligned}$$

Now we can use the linear regression routine with $\chi^2$ weighting that we developed in the previous section to fit this linear equation to the transformed data $x_i (= t_i)$ and $y_i (= \ln N_i)$. The inputs are the transformed data $\{x_i\}, \{y_i\}, \{\delta y_i\}$. The outputs are the fitting parameters $a$ and $b$, as well as the estimates of their uncertainties $\delta a$ and $\delta b$, along with the value of $\chi^2$. You can obtain the values of the original fitting parameters $N_0$ and $\tau$ from $a$ and $b$, and their uncertainties by taking the differentials of the last two equations in the list of correspondences above:

$$\begin{aligned} \delta a &= \left|\frac{\partial a}{\partial N_0}\right|\delta N_0 = \frac{\delta N_{0}}{N_{0}}\\ \delta b &= \left|\frac{\partial b}{\partial \tau}\right|\delta \tau = \frac{\delta \tau}{\tau^2} \end{aligned}$$

The Python routine below shows how to implement all of this for a set of experimental data that is read in from a data file. The figure below shows the output of the fit to simulated beta decay data obtained using the program. Note that the error bars are large when the number of counts $N$ is small.
This is consistent with what is known as *shot noise* (noise that arises from counting discrete events), which obeys *Poisson* statistics. You will study sources of noise, including shot noise, later in your lab courses. The program also prints out the fitting parameters of the transformed data as well as the fitting parameters for the exponential fitting function.

<figure>
<img src="attachment:betaDecay.png" class="align-center" alt="" /><figcaption>Semi-log plot of beta decay measurements from Phosphorus-32.</figcaption>
</figure>

``` python
import numpy as np
import matplotlib.pyplot as plt


def LineFitWt(x, y, sig):
    """
    Fit to straight line.
    Inputs: x and y arrays and uncertainty array (sig) for y data.
    Outputs: slope and y-intercept of best fit to data.
    """
    sig2 = sig**2
    norm = (1./sig2).sum()
    xhat = (x/sig2).sum() / norm
    yhat = (y/sig2).sum() / norm
    slope = ((x-xhat)*y/sig2).sum()/((x-xhat)*x/sig2).sum()
    yint = yhat - slope*xhat
    sig2_slope = 1./((x-xhat)*x/sig2).sum()
    sig2_yint = sig2_slope * (x*x/sig2).sum() / norm
    return slope, yint, np.sqrt(sig2_slope), np.sqrt(sig2_yint)


def redchisq(x, y, dy, slope, yint):
    chisq = (((y-yint-slope*x)/dy)**2).sum()
    return chisq/float(x.size-2)


# Read data from data file
t, N, dN = np.loadtxt("betaDecay.txt", skiprows=2, unpack=True)

########## Code to transform & fit data starts here ##########
# Transform data and parameters to linear form: Y = A + B*X
X = t          # transform t data for fitting (trivial)
Y = np.log(N)  # transform N data for fitting
dY = dN/N      # transform uncertainties for fitting

# Fit transformed data X, Y, dY to obtain fitting parameters A & B
# Also returns uncertainties in A and B
B, A, dB, dA = LineFitWt(X, Y, dY)
# Return reduced chi-squared
redchisqr = redchisq(X, Y, dY, B, A)

# Determine fitting parameters for original exponential function
# N = N0 exp(-t/tau)
N0 = np.exp(A)
tau = -1.0/B
# ... and their uncertainties
dN0 = N0 * dA
dtau = tau**2 * dB

####### Code to plot transformed data and fit starts here #######
# Create line corresponding to fit using fitting parameters
# Only two points are needed to specify a straight line
Xext = 0.05*(X.max()-X.min())
Xfit = np.array([X.min()-Xext, X.max()+Xext])
Yfit = A + B*Xfit

plt.errorbar(X, Y, dY, fmt="bo")
plt.plot(Xfit, Yfit, "r-", zorder=-1)
plt.xlim(0, 100)
plt.ylim(1.5, 7)
plt.title("$\mathrm{Fit\\ to:}\\ \ln N = -t/\\tau + \ln N_0$")
plt.xlabel("t")
plt.ylabel("ln(N)")
plt.text(50, 6.6, "A = ln N0 = {0:0.2f} $\pm$ {1:0.2f}".format(A, dA))
plt.text(50, 6.3, "B = -1/tau = {0:0.4f} $\pm$ {1:0.4f}".format(-B, dB))
plt.text(50, 6.0, "$\chi_r^2$ = {0:0.3f}".format(redchisqr))
plt.text(50, 5.7, "N0 = {0:0.0f} $\pm$ {1:0.0f}".format(N0, dN0))
plt.text(50, 5.4, "tau = {0:0.1f} $\pm$ {1:0.1f} days".format(tau, dtau))
plt.show()
```

### Linear regression for fitting a power-law function

You can use a similar approach to the one outlined above to fit experimental data to a power-law fitting function of the form

$$P(s) = P_0 s^\alpha \;.$$

We follow the same approach we used for the exponential fitting function and first take the logarithm of both sides of the power law above

$$\ln P = \ln P_0 + \alpha \ln s \;.$$

We recast this in the form of a linear equation $y = a + bx$ with the following identifications:

$$\begin{aligned} x &= \ln s\\ y &= \ln P\\ a &= \ln P_{0}\\ b &= \alpha \end{aligned}$$

Following a procedure similar to that used for the exponential fitting function, you can use the transformations above as the basis for a program to fit a power-law fitting function to experimental data.

Nonlinear fitting
-----------------

The method introduced in the previous section for fitting nonlinear fitting functions can be used only if the fitting function can be transformed into a fitting function that is linear in the fitting parameters $a$, $b$, $c$... When we have a nonlinear fitting function that cannot be transformed into a linear form, we need another approach.

The problem of finding values of the fitting parameters that minimize $\chi^2$ is a nonlinear optimization problem to which there is quite generally no analytical solution (in contrast to the linear optimization problem). We can gain some insight into this nonlinear optimization problem, namely the fitting of a nonlinear fitting function to a data set, by considering a fitting function with only two fitting parameters. That is, we are trying to fit some data set $\{x_{i},y_{i}\}$, with uncertainties in $\{y_{i}\}$ of $\{\sigma_{i}\}$, to a fitting function $f(x;a,b)$ where $a$ and $b$ are the two fitting parameters. To do so, we look for the minimum in

$$\chi^2(a,b) = \sum_{i} \left(\frac{y_{i} - f(x_{i})}{\sigma_{i}}\right)^2 \;.$$

Note that once the data set, uncertainties, and fitting function are specified, $\chi^2(a,b)$ is simply a function of $a$ and $b$. We can picture the function $\chi^2(a,b)$ as a landscape with peaks and valleys: as we vary $a$ and $b$, $\chi^2(a,b)$ rises and falls. The basic idea of all nonlinear fitting routines is to start with some initial guesses for the fitting parameters, here $a$ and $b$, and by scanning the $\chi^2(a,b)$ landscape, find values of $a$ and $b$ that minimize $\chi^2(a,b)$.
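To make the landscape picture concrete, the short sketch below (not part of the original text) evaluates $\chi^2(a,b)$ on a grid for a toy two-parameter model $f(x;a,b) = a\,e^{-bx}$ and locates the grid point with the smallest value; a nonlinear fitting routine automates this search far more efficiently.

``` python
import numpy as np

# Toy data generated from f(x; a, b) = a*exp(-b*x) with noise (illustration only)
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 20)
sigma = 0.1 * np.ones_like(x)
y = 2.0*np.exp(-0.7*x) + sigma*rng.standard_normal(x.size)

def chisq(a, b):
    return (((y - a*np.exp(-b*x))/sigma)**2).sum()

# Evaluate chi-square on a coarse (a, b) grid and find its minimum
a_vals = np.linspace(0.5, 3.5, 61)
b_vals = np.linspace(0.1, 1.5, 61)
chi2 = np.array([[chisq(a, b) for b in b_vals] for a in a_vals])
ia, ib = np.unravel_index(chi2.argmin(), chi2.shape)
print("grid minimum near a = {0:.2f}, b = {1:.2f}".format(a_vals[ia], b_vals[ib]))
```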
There are a number of different methods for trying to find the minimum in $\chi^2$ for nonlinear fitting problems. Nevertheless, the method that is most widely used goes by the name of the *Levenberg-Marquardt* method. Actually the Levenberg-Marquardt method is a combination of two other methods, the *steepest descent* (or gradient) method and *parabolic extrapolation*. Roughly speaking, when the values of $a$ and $b$ are not too near their optimal values, the gradient descent method determines in which direction in $(a,b)$-space the function $\chi^2(a,b)$ decreases most quickly (the direction of steepest descent) and then changes $a$ and $b$ accordingly to move in that direction. This method is very efficient unless $a$ and $b$ are very near their optimal values. Near the optimal values of $a$ and $b$, parabolic extrapolation is more efficient. Therefore, as $a$ and $b$ approach their optimal values, the Levenberg-Marquardt method gradually changes to the parabolic extrapolation method, which approximates $\chi^2(a,b)$ by a Taylor series second order in $a$ and $b$ and then computes directly the analytical minimum of the Taylor series approximation of $\chi^2(a,b)$. This method is only good if the second-order Taylor series provides a good approximation of $\chi^2(a,b)$. That is why parabolic extrapolation only works well very near the minimum in $\chi^2(a,b)$.

Before illustrating the Levenberg-Marquardt method, we make one important cautionary remark: the Levenberg-Marquardt method can fail if the initial guesses of the fitting parameters are too far away from the desired solution. This problem becomes more serious the greater the number of fitting parameters. Thus it is important to provide reasonable initial guesses for the fitting parameters. Usually, this is not a problem, as it is clear from the physical situation of a particular experiment what reasonable values of the fitting parameters are. But beware!

The `scipy.optimize` module provides routines that implement the Levenberg-Marquardt non-linear fitting method. One is called `scipy.optimize.leastsq`. A somewhat more user-friendly version of the same method is accessed through another routine in the same `scipy.optimize` module: it's called `scipy.optimize.curve_fit` and it is the one we demonstrate here. The function call is:

```
import scipy.optimize
[... insert code here ...]
scipy.optimize.curve_fit(f, xdata, ydata, p0=None, sigma=None, **kwargs)
```

The arguments of `curve_fit` are

- `f(xdata, a, b, ...)`: the fitting function, where `xdata` is the data for the independent variable and `a, b, ...` are the fitting parameters, however many there are, listed as separate arguments. Obviously, `f(xdata, a, b, ...)` should return the $y$ value of the fitting function.
- `xdata`: the array containing the $x$ data.
- `ydata`: the array containing the $y$ data.
- `p0`: a tuple containing the initial guesses for the fitting parameters. The guesses for the fitting parameters are set equal to 1 if they are left unspecified. It is almost always a good idea to specify the initial guesses for the fitting parameters.
- `sigma`: the array containing the uncertainties in the $y$ data.
- `**kwargs`: keyword arguments that can be passed to the fitting routine `scipy.optimize.leastsq` that `curve_fit` calls. These are usually left unspecified.
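Before the full spectroscopy example below, here is a minimal, self-contained sketch of the calling pattern on synthetic exponential-decay data (not from the text's data files); it also shows how the parameter uncertainties are read off the diagonal of the covariance matrix.

``` python
import numpy as np
import scipy.optimize

def decay(t, N0, tau):
    return N0 * np.exp(-t/tau)

# Synthetic data for illustration only
rng = np.random.default_rng(1)
t = np.linspace(0, 100, 25)
dN = 5.0 * np.ones_like(t)
N = decay(t, 850., 35.) + dN * rng.standard_normal(t.size)

popt, pcov = scipy.optimize.curve_fit(decay, t, N, p0=(800., 30.), sigma=dN)
N0_fit, tau_fit = popt
dN0_fit, dtau_fit = np.sqrt(np.diag(pcov))
print("N0 = {0:.0f} +/- {1:.0f}, tau = {2:.1f} +/- {3:.1f}"
      .format(N0_fit, dN0_fit, tau_fit, dtau_fit))
```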
We demonstrate the use of `curve_fit` to fit the data plotted in the figure below:

<figure>
<img src="attachment:Spectrum.png" class="align-center" alt="" />
</figure>

We model the data with a fitting function that consists of a quadratic polynomial background plus a Gaussian peak:

$$A(f) = a + bf + cf^2 + P e^{-\frac{1}{2}[(f-f_p)/f_w]^2} .$$

Lines 7 and 8 of the program below define the fitting function. Note that the independent variable `f` is the first argument, which is followed by the six fitting parameters $a$, $b$, $c$, $P$, $f_p$, and $f_w$.

To fit the data with $A(f)$, we need good estimates of the fitting parameters. Setting $f=0$, we see that $a \approx 60$. An estimate of the slope of the baseline gives $b \approx -60/20=-3$. The curvature in the baseline is small, so we take $c \approx 0$. The amplitude of the peak above the baseline is $P \approx 110-30=80$. The peak is centered at $f_p \approx 11$, while the width of the peak is about $f_w \approx 2$. We use these estimates to set the initial guesses of the fitting parameters in lines 14 and 15 of the code below.

The function that performs the Levenberg-Marquardt algorithm, `scipy.optimize.curve_fit`, is called in lines 19-20 with the output set equal to the one- and two-dimensional arrays `nlfit` and `nlpcov`, respectively. The array `nlfit`, which gives the optimal values of the fitting parameters, is unpacked in line 23. The square root of the diagonal of the two-dimensional array `nlpcov`, which gives the estimates of the uncertainties in the fitting parameters, is unpacked in lines 26-27 using a list comprehension. The rest of the code plots the data, the fitting function using the optimal values of the fitting parameters found by `scipy.optimize.curve_fit`, and the values of the fitting parameters and their uncertainties.

``` python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec  # for unequal plot boxes
import scipy.optimize

# define fitting function
def GaussPolyBase(f, a, b, c, P, fp, fw):
    return a + b*f + c*f*f + P*np.exp(-0.5*((f-fp)/fw)**2)

# read in spectrum from data file
# f=frequency, s=signal, ds=s uncertainty
f, s, ds = np.loadtxt("Spectrum.txt", skiprows=4, unpack=True)

# initial guesses for fitting parameters
a0, b0, c0 = 60., -3., 0.
P0, fp0, fw0 = 80., 11., 2.
# fit data using SciPy's Levenberg-Marquardt method
nlfit, nlpcov = scipy.optimize.curve_fit(GaussPolyBase, f, s,
                                         p0=[a0, b0, c0, P0, fp0, fw0], sigma=ds)

# unpack fitting parameters
a, b, c, P, fp, fw = nlfit
# unpack uncertainties in fitting parameters from diagonal of covariance matrix
da, db, dc, dP, dfp, dfw = \
    [np.sqrt(nlpcov[j, j]) for j in range(nlfit.size)]

# create fitting function from fitted parameters
f_fit = np.linspace(0.0, 25., 128)
s_fit = GaussPolyBase(f_fit, a, b, c, P, fp, fw)

# Calculate residuals and reduced chi squared
resids = s - GaussPolyBase(f, a, b, c, P, fp, fw)
redchisqr = ((resids/ds)**2).sum()/float(f.size-6)

# Create figure window to plot data
fig = plt.figure(1, figsize=(8, 8))
gs = gridspec.GridSpec(2, 1, height_ratios=[6, 2])

# Top plot: data and fit
ax1 = fig.add_subplot(gs[0])
ax1.plot(f_fit, s_fit)
ax1.errorbar(f, s, yerr=ds, fmt='or', ecolor='black')
ax1.set_xlabel('frequency (THz)')
ax1.set_ylabel('absorption (arb units)')
ax1.text(0.7, 0.95, 'a = {0:0.1f}$\pm${1:0.1f}'.format(a, da), transform=ax1.transAxes)
ax1.text(0.7, 0.90, 'b = {0:0.2f}$\pm${1:0.2f}'.format(b, db), transform=ax1.transAxes)
ax1.text(0.7, 0.85, 'c = {0:0.2f}$\pm${1:0.2f}'.format(c, dc), transform=ax1.transAxes)
ax1.text(0.7, 0.80, 'P = {0:0.1f}$\pm${1:0.1f}'.format(P, dP), transform=ax1.transAxes)
ax1.text(0.7, 0.75, 'fp = {0:0.1f}$\pm${1:0.1f}'.format(fp, dfp), transform=ax1.transAxes)
ax1.text(0.7, 0.70, 'fw = {0:0.1f}$\pm${1:0.1f}'.format(fw, dfw), transform=ax1.transAxes)
ax1.text(0.7, 0.60, '$\chi_r^2$ = {0:0.2f}'.format(redchisqr), transform=ax1.transAxes)
ax1.set_title('$s(f) = a+bf+cf^2+P\,e^{-(f-f_p)^2/2f_w^2}$')

# Bottom plot: residuals
ax2 = fig.add_subplot(gs[1])
ax2.errorbar(f, resids, yerr=ds, ecolor="black", fmt="ro")
ax2.axhline(color="gray", zorder=-1)
ax2.set_xlabel('frequency (THz)')
ax2.set_ylabel('residuals')
ax2.set_ylim(-20, 20)
ax2.set_yticks((-20, 0, 20))

plt.show()
```

The above code also plots the difference between the data and fit, known as the *residuals*, in the subplot below the plot of the data and fit. Plotting the residuals in this way gives a graphical representation of the goodness of the fit. To the extent that the residuals vary randomly about zero and do not show any overall upward or downward curvature, or any long-wavelength oscillations, the fit would seem to be a good fit.

<figure>
<img src="attachment:FitSpectrum.png" class="align-center" alt="" /><figcaption>Fit to Gaussian with quadratic polynomial background.</figcaption>
</figure>

Finally, we note that we have used the MatPlotLib package `gridspec` to create the two subplots with different heights. The `gridspec` calls are made in lines 3 (where the package is imported), 36 (where 2 rows and 1 column are specified with relative heights of 6 to 2), 39 (where the first `gs[0]` height is specified), and 54 (where the second `gs[1]` height is specified). More details about the `gridspec` package can be found at the MatPlotLib web site.

Exercises
---------

1.  When a voltage source is connected across a resistor and inductor in series, the voltage across the inductor $V_i(t)$ is predicted to obey the equation

    $$V(t) = V_0 e^{-\Gamma t}$$

    where $t$ is the time and the decay rate $\Gamma=R/L$ is the ratio of the resistance $R$ to the inductance $L$ of the circuit. In this problem, you are to write a Python routine that fits the above equation to the data below for the voltage measured across an inductor after it is connected in series with a resistor to a voltage source.
    Following the example in the text, linearize the equation above and use a linear fitting routine, either the one you wrote in the previous chapter or one from NumPy or SciPy.

    1.  Find the best values of $\Gamma$ and $V_0$ and the uncertainties in their values $\sigma_\Gamma$ and $\sigma_{V_0}$.
    2.  Find the value of $\chi_r^2$ for your fit. Does it make sense?
    3.  Make a semi-log plot of the data using symbols with error bars (no line) and of the fit (line only). The fit should appear as a straight line that goes through the data points.
    4.  If the resistor has a value of 10.0 $\mathrm{k}\Omega$, what is the value of the inductance and its uncertainty according to your fit, assuming that the error in the resistance is negligibly small?

    ```
    Data for decay of voltage across an inductor in an RL circuit
    Date: 24-Oct-2012
    Data taken by D. M. Blantogg and T. P. Chaitor

    time (ns)   voltage (volts)   uncertainty (volts)
      0.0         5.08e+00          1.12e-01
     32.8         3.29e+00          9.04e-02
     65.6         2.23e+00          7.43e-02
     98.4         1.48e+00          6.05e-02
    131.2         1.11e+00          5.25e-02
    164.0         6.44e-01          4.00e-02
    196.8         4.76e-01          3.43e-02
    229.6         2.73e-01          2.60e-02
    262.4         1.88e-01          2.16e-02
    295.2         1.41e-01          1.87e-02
    328.0         9.42e-02          1.53e-02
    360.8         7.68e-02          1.38e-02
    393.6         3.22e-02          8.94e-03
    426.4         3.22e-02          8.94e-03
    459.2         1.98e-02          7.01e-03
    492.0         1.98e-02          7.01e-03
    ```

2.  Small nanoparticles of soot suspended in water start to aggregate when salt is added. The average radius $r$ of the aggregates is predicted to grow as a power law in time $t$ according to the equation $r = r_0t^n$. Taking the logarithm of this equation gives $\ln r = n\ln t + \ln r_0$. Thus the data should fall on a straight line if $\ln r$ is plotted *vs* $\ln t$.

    1.  Plot the data below on a graph of $\ln r$ *vs* $\ln t$ to see if the data fall approximately on a straight line.

    ```
    Size of growing aggregate
    Date: 19-Nov-2013
    Data taken by M. D. Gryart and M. L. Waites

    time (m)   size (nm)   unc (nm)
      0.12        115         10
      0.18        130         12
      0.42        202         14
      0.90        335         18
      2.10        510         20
      6.00        890         30
     18.00       1700         40
     42.00       2600         50
    ```

    2.  Defining $y = \ln r$ and $x = \ln t$, use the linear fitting routine you wrote for the previous problem to fit the data and find the optimal values for the slope and $y$ intercept, as well as their uncertainties. Use these fitted values to find the optimal values of the amplitude $r_0$ and the power $n$ in the fitting function $r = r_0t^n$. What are the fitted values of $r_0$ and $n$? What is the value of $\chi_r^2$? Does a power law provide an adequate model for the data?

3.  In this problem you explore using a nonlinear least-squares fitting routine to fit the data shown in the figure below. The data, including the uncertainties in the $y$ values, are provided in the table below. Your task is to fit the function

    $$d(t) = A (1+B\,\cos\omega t) e^{-t^2/2\tau^2} + C$$

    to the data, where the fitting parameters are $A$, $B$, $C$, $\omega$, and $\tau$.

    <figure>
    <img src="attachment:DataOscDecay.png" class="align-center" alt="" />
    </figure>

    1.  Write a Python program that (*i*) reads the data in from a data file, (*ii*) defines a function `oscDecay(t, A, B, C, tau, omega)` for the function $d(t)$ above, and (*iii*) produces a plot of the data and the function $d(t)$. Choose the fitting parameters `A`, `B`, `C`, `tau`, and `omega` to produce an approximate fit "by eye" to the data. You should be able to estimate reasonable values for these parameters just by looking at the data and thinking about the behavior of $d(t)$. For example, $d(0)=A(1+B)+C$ while $d(\infty)=C$.
        What parameter in $d(t)$ controls the period of the peaks observed in the data? Use that information to estimate the value of that parameter.

    2.  Following the nonlinear fitting example in the text, write a program using the SciPy function `scipy.optimize.curve_fit` to fit the function $d(t)$ above to the data and thus find the optimal values of the fitting parameters $A$, $B$, $C$, $\omega$, and $\tau$. Your program should plot the data along with the fitting function using the optimal values of the fitting parameters. Write a function to calculate the reduced $\chi^2$. Print out the value of the reduced $\chi^2$ on your plot along with the optimal values of the fitting parameters. You can use the results from part (a) to estimate good starting values of the fitting parameters.

    3.  Once you have found the optimal fitting parameters, run your fitting program again, using for starting values the optimal values of the fitting parameters $A$, $B$, $C$, and $\tau$, but set the starting value of $\omega$ to be 3 times the optimal value. You should find that the program converges to a different set of fitting parameters than the ones you found in part (b). Using the program you wrote for part (b), make a plot of the data and the fit like the one you did for part (a). The fit should be noticeably worse. What is the value of the reduced $\chi^2$ for this fit? It should be much larger than the one you found in part (b). The program has found a local minimum in $\chi^2$, one that is obviously not the best fit!

    4.  Setting the fitting parameters $A$, $B$, $C$, and $\tau$ to the optimal values you found in part (b), plot $\chi_r^2$ as a function of $\omega$ for $\omega$ spanning the range from 0.05 to 3.95. You should observe several local minima for different values of $\chi_r^2$; the global minimum in $\chi_r^2$ should occur for the optimal value of $\omega$ you found in part (b).

    ```
    Data for absorption spectrum
    Date: 21-Nov-2012
    Data taken by P. Dubson and M. Sparks

    time (ms)   signal   uncertainty
       0.2       41.1        0.9
       1.4       37.2        0.9
       2.7       28.3        0.9
       3.9       24.8        1.1
       5.1       27.8        0.8
       6.4       34.5        0.7
       7.6       39.0        0.9
       8.8       37.7        0.8
      10.1       29.8        0.9
      11.3       22.2        0.7
      12.5       22.3        0.6
      13.8       26.7        1.1
      15.0       30.4        0.7
      16.2       32.6        0.8
      17.5       28.9        0.8
      18.7       22.9        1.3
      19.9       21.7        0.9
      21.1       22.1        1.0
      22.4       22.3        1.0
      23.6       26.3        1.0
      24.8       26.2        0.8
      26.1       21.4        0.9
      27.3       20.0        1.0
      28.5       20.1        1.2
      29.8       21.2        0.5
      31.0       22.0        0.9
      32.2       21.6        0.7
      33.5       21.0        0.7
      34.7       19.7        0.9
      35.9       17.9        0.9
      37.2       18.1        0.8
      38.4       18.9        1.1
    ```
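The exercises above assume a chi-square (weighted) linear fitting routine like the one developed in the previous chapter. A minimal sketch of such a routine is given below; the function name `linear_fit_chisq` and its interface are illustrative choices, and the formulas are the standard weighted least-squares results for a straight line $y = a + bx$.

``` python
import numpy as np

def linear_fit_chisq(x, y, dy):
    """Chi-square fit of y = a + b*x to data with uncertainties dy.
    Returns a, b, their 1-sigma uncertainties, and the reduced chi-square."""
    w = 1.0/dy**2                            # weights from the uncertainties
    S, Sx, Sy = w.sum(), (w*x).sum(), (w*y).sum()
    Sxx, Sxy = (w*x*x).sum(), (w*x*y).sum()
    D = S*Sxx - Sx**2
    a = (Sxx*Sy - Sx*Sxy)/D                  # intercept
    b = (S*Sxy - Sx*Sy)/D                    # slope
    da, db = np.sqrt(Sxx/D), np.sqrt(S/D)    # uncertainties in a and b
    resids = y - (a + b*x)
    redchisq = (w*resids**2).sum()/(x.size - 2)
    return a, b, da, db, redchisq
```

For Exercise 1, for example, one would apply such a routine to $x=t$ and $y=\ln V$, with the transformed uncertainties $\sigma_{\ln V} = \sigma_V/V$.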
### Linear regression for fitting a power-law function

You can use a similar approach to the one outlined above to fit experimental data to a power-law fitting function of the form

$$P(s) = P_0 s^\alpha \;.$$

We follow the same approach we used for the exponential fitting function and first take the logarithm of both sides of the power-law equation, which gives

$$\ln P = \ln P_0 + \alpha \ln s \;.$$

We recast this in the form of a linear equation $y = a + bx$ with the following identifications:

$$\begin{aligned}
x &= \ln s\\
y &= \ln P\\
a &= \ln P_{0}\\
b &= \alpha
\end{aligned}$$

Following a procedure similar to that used to fit with an exponential fitting function, you can use the transformations given above as the basis for a program that fits a power-law fitting function to experimental data.

Nonlinear fitting
-----------------

The method introduced in the previous section for fitting nonlinear fitting functions can be used only if the fitting function can be transformed into a fitting function that is linear in the fitting parameters $a$, $b$, $c$... When we have a nonlinear fitting function that cannot be transformed into a linear form, we need another approach.

The problem of finding values of the fitting parameters that minimize $\chi^2$ is a nonlinear optimization problem to which there is quite generally no analytical solution (in contrast to the linear optimization problem). We can gain some insight into this nonlinear optimization problem, namely the fitting of a nonlinear fitting function to a data set, by considering a fitting function with only two fitting parameters. That is, we are trying to fit some data set $\{x_{i},y_{i}\}$, with uncertainties in $\{y_{i}\}$ of $\{\sigma_{i}\}$, to a fitting function $f(x;a,b)$, where $a$ and $b$ are the two fitting parameters. To do so, we look for the minimum in

$$\chi^2(a,b) = \sum_{i} \left(\frac{y_{i} - f(x_{i};a,b)}{\sigma_{i}}\right)^2 \;.$$

Note that once the data set, uncertainties, and fitting function are specified, $\chi^2(a,b)$ is simply a function of $a$ and $b$. We can picture the function $\chi^2(a,b)$ as a landscape with peaks and valleys: as we vary $a$ and $b$, $\chi^2(a,b)$ rises and falls. The basic idea of all nonlinear fitting routines is to start with some initial guesses for the fitting parameters, here $a$ and $b$, and by scanning the $\chi^2(a,b)$ landscape, find values of $a$ and $b$ that minimize $\chi^2(a,b)$.

There are a number of different methods for trying to find the minimum in $\chi^2$ for nonlinear fitting problems. The method that is most widely used, however, goes by the name of the *Levenberg-Marquardt* method. Actually, the Levenberg-Marquardt method is a combination of two other methods, the *steepest descent* (or gradient) method and *parabolic extrapolation*. Roughly speaking, when the values of $a$ and $b$ are not too near their optimal values, the steepest descent method determines in which direction in $(a,b)$-space the function $\chi^2(a,b)$ decreases most quickly, the direction of steepest descent, and then changes $a$ and $b$ accordingly to move in that direction. This method is very efficient unless $a$ and $b$ are very near their optimal values. Near the optimal values of $a$ and $b$, parabolic extrapolation is more efficient.
Therefore, as $a$ and $b$ approach their optimal values, the Levenberg-Marquardt method gradually changes to the parabolic extrapolation method, which approximates $\chi^2(a,b)$ by a Taylor series that is second order in $a$ and $b$ and then computes directly the analytical minimum of the Taylor series approximation of $\chi^2(a,b)$. This method is only good if the second-order Taylor series provides a good approximation of $\chi^2(a,b)$. That is why parabolic extrapolation only works well very near the minimum in $\chi^2(a,b)$.

Before illustrating the Levenberg-Marquardt method, we make one important cautionary remark: the Levenberg-Marquardt method can fail if the initial guesses of the fitting parameters are too far away from the desired solution. This problem becomes more serious the greater the number of fitting parameters. Thus it is important to provide reasonable initial guesses for the fitting parameters. Usually, this is not a problem, as it is clear from the physical situation of a particular experiment what reasonable values of the fitting parameters are. But beware!

The `scipy.optimize` module provides routines that implement the Levenberg-Marquardt nonlinear fitting method. One is called `scipy.optimize.leastsq`. A somewhat more user-friendly version of the same method is accessed through another routine in the same `scipy.optimize` module: it's called `scipy.optimize.curve_fit`, and it is the one we demonstrate here. The function call is:

```
import scipy.optimize
[... insert code here ...]
scipy.optimize.curve_fit(f, xdata, ydata, p0=None, sigma=None, **kwargs)
```

The arguments of `curve_fit` are

- `f(xdata, a, b, ...)`: the fitting function, where `xdata` is the data for the independent variable and `a, b, ...` are the fitting parameters, however many there are, listed as separate arguments. Obviously, `f(xdata, a, b, ...)` should return the $y$ value of the fitting function.
- `xdata`: the array containing the $x$ data.
- `ydata`: the array containing the $y$ data.
- `p0`: a tuple containing the initial guesses for the fitting parameters. The guesses for the fitting parameters are set equal to 1 if they are left unspecified. It is almost always a good idea to specify the initial guesses for the fitting parameters.
- `sigma`: the array containing the uncertainties in the $y$ data.
- `**kwargs`: keyword arguments that can be passed to the fitting routine `scipy.optimize.leastsq` that `curve_fit` calls. These are usually left unspecified.
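A minimal, self-contained sketch of a `curve_fit` call is shown below to illustrate how `p0` and `sigma` enter the call; the exponential model and the synthetic data are illustrative assumptions only (a full worked example, with plots, appears earlier in this section).

``` python
import numpy as np
import scipy.optimize

def expDecay(t, V0, gamma):
    # simple two-parameter model: V(t) = V0*exp(-gamma*t)
    return V0*np.exp(-gamma*t)

# synthetic data with known parameters and 5% uncertainties (illustrative)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 20)
dV = 0.05*expDecay(t, 5.0, 1.2)
V = expDecay(t, 5.0, 1.2) + rng.normal(0.0, dV)

# initial guesses go in p0, measurement uncertainties in sigma
popt, pcov = scipy.optimize.curve_fit(expDecay, t, V, p0=(4.0, 1.0), sigma=dV)
dpopt = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties from the covariance diagonal
print("V0 = {0:0.2f} +/- {1:0.2f}".format(popt[0], dpopt[0]))
print("gamma = {0:0.2f} +/- {1:0.2f}".format(popt[1], dpopt[1]))
```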
```
import numpy as np
entropy_playGolf=-(9/14)*np.log2(9/14)-(5/14)*np.log2(5/14)
entropy_playGolf
```

|         |          | Play | Golf |    |
|---------|----------|------|------|----|
|         |          | yes  | no   |    |
| Outlook | Sunny    | 3    | 2    | 5  |
|         | Overcast | 4    | 0    | 4  |
|         | Rainy    | 2    | 3    | 5  |
|         |          |      |      | 14 |

```
entropy_outlook_sunny=-(3/5)*np.log2(3/5)-(2/5)*np.log2(2/5)
entropy_outlook_rainy=-(3/5)*np.log2(3/5)-(2/5)*np.log2(2/5)
entropy_outlook=(5/14)*entropy_outlook_sunny+(4/14)*0+(5/14)*entropy_outlook_rainy
entropy_outlook
```

|      |      | Play | Golf |    |
|------|------|------|------|----|
|      |      | yes  | no   |    |
| Temp | Hot  | 2    | 2    | 4  |
|      | Mild | 4    | 2    | 6  |
|      | Cool | 3    | 1    | 4  |
|      |      |      |      | 14 |

```
entropy_temp_hot=-(2/4)*np.log2(2/4)-(2/4)*np.log2(2/4)
entropy_temp_mild=-(4/6)*np.log2(4/6)-(2/6)*np.log2(2/6)
entropy_temp_cool=-(3/4)*np.log2(3/4)-(1/4)*np.log2(1/4)
entropy_temp=(4/14)*entropy_temp_hot+(6/14)*entropy_temp_mild+(4/14)*entropy_temp_cool
entropy_temp
```

|          |        | Play | Golf |    |
|----------|--------|------|------|----|
|          |        | yes  | no   |    |
| Humidity | High   | 3    | 4    | 7  |
|          | Normal | 6    | 1    | 7  |
|          |        |      |      | 14 |

```
entropy_humidity_high=-(3/7)*np.log2(3/7)-(4/7)*np.log2(4/7)
entropy_humidity_normal=-(6/7)*np.log2(6/7)-(1/7)*np.log2(1/7)
entropy_humidity=(7/14)*entropy_humidity_high+(7/14)*entropy_humidity_normal
entropy_humidity
```

|       |       | Play | Golf |    |
|-------|-------|------|------|----|
|       |       | yes  | no   |    |
| Windy | False | 6    | 2    | 8  |
|       | True  | 3    | 3    | 6  |
|       |       |      |      | 14 |

```
entropy_windy_false=-(6/8)*np.log2(6/8)-(2/8)*np.log2(2/8)
entropy_windy_true=-(3/6)*np.log2(3/6)-(3/6)*np.log2(3/6)
entropy_windy=(8/14)*entropy_windy_false+(6/14)*entropy_windy_true
entropy_windy
```

I choose Outlook as the first (root) attribute because it gives the most information gain: the weighted entropy after splitting on Outlook is the lowest of the four attributes, so the entropy decreases the most. For Overcast the entropy is already 0, so nothing more needs to be done there (if Outlook = Overcast, the answer is yes to golf). I continue with Sunny.

| Sunny |      | Play | Golf |   |
|-------|------|------|------|---|
|       |      | yes  | no   |   |
| Temp  | Mild | 2    | 1    | 3 |
|       | Cool | 1    | 1    | 2 |
|       |      |      |      | 5 |

```
sunny_entropy_temp_mild=-(2/3)*np.log2(2/3)-(1/3)*np.log2(1/3)
sunny_entropy_temp_cool=-(1/2)*np.log2(1/2)-(1/2)*np.log2(1/2)
sunny_entropy_temp=(3/5)*sunny_entropy_temp_mild+(2/5)*sunny_entropy_temp_cool
sunny_entropy_temp
```

| Sunny    |        | Play | Golf |   |
|----------|--------|------|------|---|
|          |        | yes  | no   |   |
| Humidity | High   | 1    | 1    | 2 |
|          | Normal | 2    | 1    | 3 |
|          |        |      |      | 5 |

```
sunny_entropy_humidity_high=-(1/2)*np.log2(1/2)-(1/2)*np.log2(1/2)
sunny_entropy_humidity_normal=-(2/3)*np.log2(2/3)-(1/3)*np.log2(1/3)
sunny_entropy_humidity=(2/5)*sunny_entropy_humidity_high+(3/5)*sunny_entropy_humidity_normal
sunny_entropy_humidity
```

| Sunny |       | Play | Golf |   |
|-------|-------|------|------|---|
|       |       | yes  | no   |   |
| Windy | False | 3    | 0    | 3 |
|       | True  | 0    | 2    | 2 |
|       |       |      |      | 5 |

```
sunny_entropy_wind_true=0
sunny_entropy_wind_false=0
sunny_entropy_wind=0
```

The entropy in both branches is zero, so they are leaves; I therefore choose Windy as the next attribute after Sunny.
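The choice of Windy for the Sunny branch can be made explicit by computing the information gain of each candidate attribute at that node (gain = entropy of the branch minus the weighted entropy after the split). This added cell only uses variables already defined above; the same comparison at the root is what singled out Outlook.

```
# entropy of the Sunny branch (3 yes / 2 no) was computed above as entropy_outlook_sunny
gain_sunny_temp     = entropy_outlook_sunny - sunny_entropy_temp
gain_sunny_humidity = entropy_outlook_sunny - sunny_entropy_humidity
gain_sunny_windy    = entropy_outlook_sunny - sunny_entropy_wind
print(gain_sunny_temp, gain_sunny_humidity, gain_sunny_windy)
# Windy has the largest gain (it reduces the entropy to zero), so it is chosen after Sunny
```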
If it is Sunny and not Windy, the answer is yes to golf; if it is Sunny and Windy, it is no to golf. Now I calculate for Rainy.

| Rainy |      | Play | Golf |   |
|-------|------|------|------|---|
|       |      | yes  | no   |   |
| Temp  | Hot  | 0    | 2    | 2 |
|       | Mild | 1    | 1    | 2 |
|       | Cool | 1    | 0    | 1 |
|       |      |      |      | 5 |

```
rainy_entropy_temp_hot=0
rainy_entropy_temp_mild=-(1/2)*np.log2(1/2)-(1/2)*np.log2(1/2)
rainy_entropy_temp_cool=0
rainy_entropy_temp=(2/5)*rainy_entropy_temp_mild
rainy_entropy_temp
```

| Rainy    |        | Play | Golf |   |
|----------|--------|------|------|---|
|          |        | yes  | no   |   |
| Humidity | High   | 0    | 3    | 3 |
|          | Normal | 2    | 0    | 2 |
|          |        |      |      | 5 |

```
rainy_entropy_humidity_high=0
rainy_entropy_humidity_normal=0
rainy_entropy_humidity=0
```

The entropy in both branches is zero, so after Rainy we choose Humidity. If it is Rainy and the humidity is High, the answer is no to golf; if it is Rainy and the humidity is Normal, it is yes to golf.
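The finished tree can be summarised as a small prediction function. The cell below is an added recap of the rules derived above; the function name and the way the attribute values are encoded are illustrative.

```
def play_golf(outlook, windy, humidity):
    # decision tree derived above, with Outlook at the root
    if outlook == "Overcast":
        return "yes"
    if outlook == "Sunny":
        return "no" if windy else "yes"               # Sunny branch splits on Windy
    if outlook == "Rainy":
        return "no" if humidity == "High" else "yes"  # Rainy branch splits on Humidity

print(play_golf("Sunny", False, "High"))   # yes
print(play_golf("Rainy", True, "Normal"))  # yes
```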
# Plotting the predictions of the single- and multi-center RPN signatures

- This jupyter notebook is available on-line at:
  - https://github.com/spisakt/RPN-signature/blob/master/notebooks/3_compare_predictions.ipynb
- Input data for the notebook and non-standard code (PAINTeR library) are available in the repo:
  - https://github.com/spisakt/RPN-signature
- Raw MRI data from study centers 1 and 2 are available on OpenNeuro:
  - https://openneuro.org/datasets/ds002608/versions/1.0.1
  - https://openneuro.org/datasets/ds002609/versions/1.0.3
- Raw data from center 3 is available upon reasonable request.

## Imports

```
import joblib
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

from nilearn.connectome import vec_to_sym_matrix, sym_matrix_to_vec
from sklearn.metrics import mean_squared_error, mean_absolute_error, explained_variance_score
from mlxtend.evaluate import permutation_test
from mlconfound.stats import partial_confound_test
from mlconfound.plot import plot_null_dist, plot_graph
from scipy.stats import f_oneway
from scipy import stats

import sys
sys.path.append('../')
from PAINTeR import plot  # in-house lib used for the RPN-signature

from tqdm import tqdm
```

## Load and merge behavioral data for all three centers (after exclusions)

```
df_bochum = pd.read_csv("../res/bochum_sample_excl.csv")
df_essen = pd.read_csv("../res/essen_sample_excl.csv")
df_szeged = pd.read_csv("../res/szeged_sample_excl.csv")

df_bochum['study']='bochum'
df_essen['study']='essen'
df_szeged['study']='szeged'

df=pd.concat((df_bochum, df_essen, df_szeged), sort=False)
df=df.reset_index()

y = df.mean_QST_pain_sensitivity
```

## Load predictions for the original single-center and the newly proposed multi-center model

```
# predictions
multicenter_predictions = np.genfromtxt('../res/multi-center/nested_cv_pred_full_GroupKFold30.csv', delimiter=',')
rpn_predictions = np.hstack((df_bochum.nested_prediction, df_essen.prediction, df_szeged.prediction))

predictions = {
    'single-center' : rpn_predictions,
    'multi-center' : multicenter_predictions
}
```

### Create study masks

```
study_masks = {
    "study 1" : (df.study == 'bochum').values,
    "study 2" : (df.study == 'essen').values,
    "study 3" : (df.study == 'szeged').values,
    "study 1+2+3" : np.array([True] * len(y))
}
```

## Observed vs. Predicted plots
```
sns.set_style('ticks')

fig, axs = plt.subplots(ncols=4, nrows=2, figsize=(10,6), sharex=True, sharey=True)
cols = ['tab:blue', 'tab:orange']

for row, cv in enumerate(predictions.keys()):
    for col, study in enumerate(study_masks.keys()):
        g=sns.regplot(y[study_masks[study]], predictions[cv][study_masks[study]],
                      ax=axs[row, col], scatter=True, scatter_kws={'alpha':0.3}, color=cols[row])
        g.set(xlabel=None)
        axs[row, col].set_xlim([-2, 2])
        axs[row, col].set_ylim([-1.2, 1.2])
        axs[row, col].spines['top'].set_visible(False)
        axs[row, col].spines['bottom'].set_visible(False)
        axs[row, col].spines['right'].set_visible(False)
        axs[row, col].spines['left'].set_visible(False)
        axs[row, col].grid(True)

        print('***', cv, study, '****************************************************')
        corr = np.corrcoef(y[study_masks[study]], predictions[cv][study_masks[study]])[0,1]
        axs[row, col].title.set_text("R={:.2f}".format(corr))
        print("R={:.2f}".format(corr))
        # takes some seconds
        p_corr = permutation_test(y[study_masks[study]], predictions[cv][study_masks[study]],
                                  func=lambda x, y: np.corrcoef(x, y)[0,1],
                                  method='approximate',
                                  num_rounds=8000,
                                  seed=42)
        print("p_corr={:.5f}".format(p_corr))
        mse = mean_squared_error(y[study_masks[study]], predictions[cv][study_masks[study]])
        print("MSE={:.2f}".format(mse))
        mae = mean_absolute_error(y[study_masks[study]], predictions[cv][study_masks[study]])
        print("MAE={:.2f}".format(mae))
        expvar = explained_variance_score(y[study_masks[study]], predictions[cv][study_masks[study]])
        print("Expl. Var. ={:.3f}".format(expvar))

plt.savefig('../res/multi-center/regplots_obs-pred.pdf')
```

# "Classification" performance

## Violin plots per center for the observed and predicted values

```
from sklearn.metrics import roc_curve, auc

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()

fpr, tpr, _ = roc_curve(y>np.quantile(y,0.8), predictions['multi-center'])
roc_auc = auc(fpr, tpr)

plt.figure(figsize=(3,3))
lw = 2
plt.plot(
    fpr,
    tpr,
    color="darkorange",
    lw=lw,
    label="ROC curve (area = %0.2f)" % roc_auc,
)
plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver operating characteristic example")
plt.legend(loc="lower right")
plt.savefig('../res/multi-center/roc_80.pdf')
plt.show()

sns.distplot(y)

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()

quants = np.quantile(y, (0.20, 0.80))
y_lohi = y[np.logical_or(y<quants[0], y>quants[1])]
pred_lohi = predictions['multi-center'][np.logical_or(y<quants[0], y>quants[1])]

fpr, tpr, _ = roc_curve(y_lohi>np.median(y), pred_lohi)
roc_auc = auc(fpr, tpr)

plt.figure(figsize=(3,3))
lw = 2
plt.plot(
    fpr,
    tpr,
    color="darkorange",
    lw=lw,
    label="ROC curve (area = %0.2f)" % roc_auc,
)
plt.plot([0, 1], [0, 1], color="navy", lw=lw, linestyle="--")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("Receiver operating characteristic example")
plt.legend(loc="lower right")
plt.savefig('../res/multi-center/roc_20-80.pdf')
plt.show()

sns.distplot(y_lohi, bins=10)

quants

np.corrcoef(df.mean_QST_pain_sensitivity[~np.isnan(df.mean_QST_pain_sensitivityd2)],
            df.mean_QST_pain_sensitivityd2[~np.isnan(df.mean_QST_pain_sensitivityd2)])

sns.violinplot(data=df, x='study', y=predictions['single-center'])

sns.violinplot(data=df, x='study', y=predictions['multi-center'])
sns.violinplot(data=df, x='study', y='mean_QST_pain_sensitivity') print(df.mean_QST_pain_sensitivity[df.study=='bochum'].mean()) print(df.mean_QST_pain_sensitivity[df.study=='essen'].mean()) print(df.mean_QST_pain_sensitivity[df.study=='szeged'].mean()) f_oneway(df.mean_QST_pain_sensitivity[df.study=='bochum'], df.mean_QST_pain_sensitivity[df.study=='essen'], df.mean_QST_pain_sensitivity[df.study=='szeged']) ``` ## Partial confounder test for site effect: ``` ret = partial_confound_test(y=df.mean_QST_pain_sensitivity, yhat=predictions['multi-center'], c=df.study.astype("category").cat.codes.values, return_null_dist=True, cat_c=True, random_state=42) sns.set(rc={'figure.figsize':(7,2)}) sns.set_style('whitegrid') plot_null_dist(ret) plt.savefig('../res/multi-center/center-bias-nulldist.pdf') plot_graph(ret, outfile_base='../res/multi-center/center-bias') np.sqrt(0.12) import statsmodels.api as sm from statsmodels.formula.api import ols #r_obs_pred = np.corrcoef(df.mean_QST_pain_sensitivity, predictions['multi-center'])[0,1] data = pd.DataFrame( { 'observed': df.mean_QST_pain_sensitivity.values, 'predicted': predictions['multi-center'], 'confounder': df.study.astype("category").cat.codes.values }) fit=ols("observed ~ predicted", data=data).fit() print("R^2_(observed ~ predicted) =", fit.rsquared) fit=ols("observed ~ C(confounder)", data=data).fit() print("R^2_(observed ~ C(confounder)) =", fit.rsquared) r2_obs_conf_true = fit.rsquared fit=ols("predicted ~ C(confounder)", data=data).fit() print("R^2_(predicted ~ C(confounder) =", fit.rsquared) r2_pred_conf_true = fit.rsquared corrs=[] nulldata=[] tolerance = 0.01 rng = np.random.default_rng(42) for i in tqdm(range(500000)): data_rs = pd.DataFrame( { 'observed': df.mean_QST_pain_sensitivity.values, 'predicted': predictions['multi-center'], 'confounder': rng.permutation(df.study.astype("category").cat.codes.values) }) r2_obs_conf_rs = ols("observed ~ C(confounder)", data=data_rs).fit().rsquared if (np.abs(r2_obs_conf_rs - r2_obs_conf_true) < tolerance): #print("Resampled R^2_(observed ~ C(confounder)) =", r2_obs_conf_rs) fit=ols("predicted ~ C(confounder)", data=data_rs).fit() #print("Resampled R^2_(predicted ~ C(confounder) =", fit.rsquared) print(data_rs.confounder.values) nulldata.append(fit.rsquared) corrs.append(r2_obs_conf_rs) sns.distplot(nulldata) plt.axvline(r2_pred_conf_true) print(len(nulldata)) print("p=", np.sum(nulldata>=r2_pred_conf_true)/len(nulldata)) import statsmodels.api as sm from statsmodels.formula.api import ols import importlib import contextlib import joblib from joblib import Parallel, delayed @contextlib.contextmanager def tqdm_joblib(tqdm_object): """ Context manager to patch joblib to report into tqdm progress bar given as argument Based on: https://stackoverflow.com/questions/37804279/how-can-we-use-tqdm-in-a-parallel-execution-with-joblib """ def tqdm_print_progress(self): if self.n_completed_tasks > tqdm_object.n: n_completed = self.n_completed_tasks - tqdm_object.n tqdm_object.update(n=n_completed) original_print_progress = joblib.parallel.Parallel.print_progress joblib.parallel.Parallel.print_progress = tqdm_print_progress try: yield tqdm_object finally: joblib.parallel.Parallel.print_progress = original_print_progress tqdm_object.close() #r_obs_pred = np.corrcoef(df.mean_QST_pain_sensitivity, predictions['multi-center'])[0,1] data = pd.DataFrame( { 'observed': df.mean_QST_pain_sensitivity.values, 'predicted': predictions['multi-center'], 'confounder': df.study.astype("category").cat.codes.values }) 
fit=ols("observed ~ predicted", data=data).fit() print("R^2_(observed ~ predicted) =", fit.rsquared) fit=ols("observed ~ C(confounder)", data=data).fit() print("R^2_(observed ~ C(confounder)) =", fit.rsquared) r2_obs_conf_true = fit.rsquared fit=ols("predicted ~ C(confounder)", data=data).fit() print("R^2_(predicted ~ C(confounder) =", fit.rsquared) r2_pred_conf_true = fit.rsquared corrs=[] nulldata=[] tolerance = 0.05 def workhorse(i): rng = np.random.default_rng(42+i) data_rs = pd.DataFrame( { 'observed': df.mean_QST_pain_sensitivity.values, 'predicted': predictions['multi-center'], 'confounder': rng.permutation(df.study.astype("category").cat.codes.values) }) r2_obs_conf_rs = ols("observed ~ C(confounder)", data=data_rs).fit().rsquared if (np.abs(r2_obs_conf_rs - r2_obs_conf_true) < tolerance): #print("Resampled R^2_(observed ~ C(confounder)) =", r2_obs_conf_rs) fit=ols("predicted ~ C(confounder)", data=data_rs).fit() #print("Resampled R^2_(predicted ~ C(confounder) =", fit.rsquared) return fit.rsquared, r2_obs_conf_rs return np.nan, np.nan num_perms=100000 with tqdm_joblib(tqdm(desc='permuting', total=num_perms)) as progress_bar: res = Parallel(n_jobs=-1)(delayed(workhorse)(i) for i in np.arange(num_perms)) nulldata, corrs = zip(*res) nulldata = np.array(nulldata) nulldata = nulldata[~np.isnan(nulldata)] print(len(nulldata)) sns.distplot(nulldata) plt.axvline(r2_pred_conf_true) print("p=", np.sum(nulldata>=r2_pred_conf_true)/len(nulldata)) corrs = np.array(corrs) sns.distplot(corrs[~np.isnan(corrs)]) plt.axvline(r2_pred_conf_true) import statsmodels.api as sm from statsmodels.formula.api import ols import importlib import contextlib import joblib from joblib import Parallel, delayed @contextlib.contextmanager def tqdm_joblib(tqdm_object): """ Context manager to patch joblib to report into tqdm progress bar given as argument Based on: https://stackoverflow.com/questions/37804279/how-can-we-use-tqdm-in-a-parallel-execution-with-joblib """ def tqdm_print_progress(self): if self.n_completed_tasks > tqdm_object.n: n_completed = self.n_completed_tasks - tqdm_object.n tqdm_object.update(n=n_completed) original_print_progress = joblib.parallel.Parallel.print_progress joblib.parallel.Parallel.print_progress = tqdm_print_progress try: yield tqdm_object finally: joblib.parallel.Parallel.print_progress = original_print_progress tqdm_object.close() #r_obs_pred = np.corrcoef(df.mean_QST_pain_sensitivity, predictions['multi-center'])[0,1] data = pd.DataFrame( { 'observed': df.mean_QST_pain_sensitivity.values, 'predicted': predictions['multi-center'], 'confounder': df.study.astype("category").cat.codes.values }) fit=ols("observed ~ predicted", data=data).fit() print("R^2_(observed ~ predicted) =", fit.rsquared) fit=ols("observed ~ C(confounder)", data=data).fit() print("R^2_(observed ~ C(confounder)) =", fit.rsquared) r2_obs_conf_true = fit.rsquared fit=ols("predicted ~ C(confounder)", data=data).fit() print("R^2_(predicted ~ C(confounder) =", fit.rsquared) r2_pred_conf_true = fit.rsquared corrs=[] nulldata=[] tolerance = 0.001 def workhorse(i): rng = np.random.default_rng(4242+i) data_rs = pd.DataFrame( { 'observed': df.mean_QST_pain_sensitivity.values, 'predicted': predictions['multi-center'], 'confounder': rng.permutation(df.study.astype("category").cat.codes.values) }) r2_obs_conf_rs = ols("observed ~ C(confounder)", data=data_rs).fit().rsquared if (np.abs(r2_obs_conf_rs - r2_obs_conf_true) < tolerance): #print("Resampled R^2_(observed ~ C(confounder)) =", r2_obs_conf_rs) fit=ols("predicted ~ 
C(confounder)", data=data_rs).fit() #print("Resampled R^2_(predicted ~ C(confounder) =", fit.rsquared) return fit.rsquared, r2_obs_conf_rs return np.nan, np.nan num_perms=1000000 with tqdm_joblib(tqdm(desc='permuting', total=num_perms)) as progress_bar: res = Parallel(n_jobs=-1)(delayed(workhorse)(i) for i in np.arange(num_perms)) nulldata, corrs = zip(*res) nulldata = np.array(nulldata) nulldata = nulldata[~np.isnan(nulldata)] print(len(nulldata)) sns.distplot(nulldata) plt.axvline(r2_pred_conf_true) print("p=", np.sum(nulldata>=r2_pred_conf_true)/len(nulldata)) corrs = np.array(corrs) sns.distplot(corrs[~np.isnan(corrs)]) plt.axvline(r2_obs_conf_true) from scipy.stats import beta def binom_interval(success, total, confint=0.95): quantile = (1 - confint) / 2. lower = beta.ppf(quantile, success, total - success + 1) upper = beta.ppf(1 - quantile, success + 1, total - success) return (lower, upper) binom_interval(256*0.03515625, 256) ``` ## Predicted vs. Predicted plot ``` g=sns.jointplot(predictions['single-center'], predictions['multi-center'], kind='reg', color='black', scatter = False ) g.ax_joint.scatter(predictions['single-center'],predictions['multi-center'], c=df.mean_QST_pain_sensitivity, cmap="coolwarm") g.fig.set_size_inches(6,6) g.ax_joint.set(xlabel=None) g.ax_joint.set_xlim([-1.5, 1.5]) g.ax_joint.set_ylim([-1.2, 1.2]) g.ax_joint.spines['top'].set_visible(False) g.ax_joint.spines['bottom'].set_visible(False) g.ax_joint.spines['right'].set_visible(False) g.ax_joint.spines['left'].set_visible(False) g.ax_joint.grid(True) plt.savefig('../res/multi-center/regplots_pred-pred.pdf') norm = plt.Normalize(df.mean_QST_pain_sensitivity.min(), df.mean_QST_pain_sensitivity.max()) sm = plt.cm.ScalarMappable(cmap="coolwarm", norm=norm) sm.set_array([]) # Remove the legend and add a colorbar plt.colorbar(sm) plt.savefig('../res/multi-center/regplots_pred-pred_colorbar.pdf') corr = np.corrcoef(predictions['single-center'], predictions['multi-center'])[0,1] print("R={:.2f}".format(corr)) # takes some seconds p_corr = permutation_test(y[study_masks[study]], predictions[cv][study_masks[study]], func=lambda x, y: np.corrcoef(x, y)[0,1], method='approximate', num_rounds=8000, seed=42) print("p_corr={:.5f}".format(p_corr)) ```
```
# 1. import csv library
import csv
import pandas as pd

# read two individual state files first, to inspect the data
with open("C:/Users/Ahmed/Documents/A_DrNoman/Bid_postal/TAXRATES_ZIP5/TAXRATES_ZIP5_ID201903.csv", "r") as f_obj1:
    data1 = pd.read_csv(f_obj1)
data1

with open("C:/Users/Ahmed/Documents/A_DrNoman/Bid_postal/TAXRATES_ZIP5/TAXRATES_ZIP5_AL201903.csv", "r") as f_obj2:
    data2 = pd.read_csv(f_obj2)
data2.describe()

data1.describe()

# concatenate the two frames and save the result
d = pd.concat([data1, data2])
d.to_csv('d.csv')

# file names of all 52 state files
file_name = [
    'TAXRATES_ZIP5_AK201903.csv', 'TAXRATES_ZIP5_AL201903.csv', 'TAXRATES_ZIP5_AR201903.csv',
    'TAXRATES_ZIP5_AZ201903.csv', 'TAXRATES_ZIP5_CA201903.csv', 'TAXRATES_ZIP5_CO201903.csv',
    'TAXRATES_ZIP5_CT201903.csv', 'TAXRATES_ZIP5_DC201903.csv', 'TAXRATES_ZIP5_DE201903.csv',
    'TAXRATES_ZIP5_FL201903.csv', 'TAXRATES_ZIP5_GA201903.csv', 'TAXRATES_ZIP5_HI201903.csv',
    'TAXRATES_ZIP5_IA201903.csv', 'TAXRATES_ZIP5_ID201903.csv', 'TAXRATES_ZIP5_IL201903.csv',
    'TAXRATES_ZIP5_IN201903.csv', 'TAXRATES_ZIP5_KS201903.csv', 'TAXRATES_ZIP5_KY201903.csv',
    'TAXRATES_ZIP5_LA201903.csv', 'TAXRATES_ZIP5_MA201903.csv', 'TAXRATES_ZIP5_MD201903.csv',
    'TAXRATES_ZIP5_ME201903.csv', 'TAXRATES_ZIP5_MI201903.csv', 'TAXRATES_ZIP5_MN201903.csv',
    'TAXRATES_ZIP5_MO201903.csv', 'TAXRATES_ZIP5_MS201903.csv', 'TAXRATES_ZIP5_MT201903.csv',
    'TAXRATES_ZIP5_NC201903.csv', 'TAXRATES_ZIP5_ND201903.csv', 'TAXRATES_ZIP5_NE201903.csv',
    'TAXRATES_ZIP5_NH201903.csv', 'TAXRATES_ZIP5_NJ201903.csv', 'TAXRATES_ZIP5_NM201903.csv',
    'TAXRATES_ZIP5_NV201903.csv', 'TAXRATES_ZIP5_NY201903.csv', 'TAXRATES_ZIP5_OH201903.csv',
    'TAXRATES_ZIP5_OK201903.csv', 'TAXRATES_ZIP5_OR201903.csv', 'TAXRATES_ZIP5_PA201903.csv',
    'TAXRATES_ZIP5_PR201903.csv', 'TAXRATES_ZIP5_RI201903.csv', 'TAXRATES_ZIP5_SC201903.csv',
    'TAXRATES_ZIP5_SD201903.csv', 'TAXRATES_ZIP5_TN201903.csv', 'TAXRATES_ZIP5_TX201903.csv',
    'TAXRATES_ZIP5_UT201903.csv', 'TAXRATES_ZIP5_VA201903.csv', 'TAXRATES_ZIP5_VT201903.csv',
    'TAXRATES_ZIP5_WA201903.csv', 'TAXRATES_ZIP5_WI201903.csv', 'TAXRATES_ZIP5_WV201903.csv',
    'TAXRATES_ZIP5_WY201903.csv']
len(file_name)

# generate a list of variable names for the 52 numbered data frames
l2 = []
def df_gen(s=1, n=53):
    for i in range(s, n):
        k = "dfn" + str(i)
        l2.append(k)

# calling the function to generate the list of variable names
df_gen()
print(l2)
len(l2)

# function for generating globally enumerated variables df1..df52 from the list l2
# (each dfN initially just holds the corresponding name string)
def gen_enum():
    for n, val in enumerate(l2, start=1):
        globals()["df%d" % n] = val
        print(["df%d" % n])
gen_enum()

# This routine opens the files in the directory one by one, reads each csv,
# and loads it into the corresponding slot of l2 as a pandas dataframe
count = 0
for i, v in enumerate(file_name, start=1):
    count += 1
    filename = "C:/Users/Ahmed/Documents/A_DrNoman/Bid_postal/TAXRATES_ZIP5/" + v
    with open(filename, "r") as fileobj:
        l2[i-1] = pd.read_csv(fileobj)   # load the read file into the corresponding data frame

# concatenate all 52 loaded data frames and save the result
d_all = pd.concat(l2)
d_all.to_csv('d_all.csv')
d_all.describe()

# making variable names var1..var52 from the list of file names
for n, val in enumerate(file_name, start=1):
    globals()["var%d" % n] = val
    print(["var%d" % n], end="")

print("\n\n\n checking for var5 ", var5)
print(var5)
# etc.
var52
```
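The same result can be obtained with far less bookkeeping by collecting the DataFrames in an ordinary list instead of generating numbered global variables. The cell below is an added alternative sketch, assuming the same directory and the `file_name` list defined above.

```
import os
import pandas as pd

base_dir = "C:/Users/Ahmed/Documents/A_DrNoman/Bid_postal/TAXRATES_ZIP5"

# read every state file into a list of DataFrames, then concatenate once
frames = [pd.read_csv(os.path.join(base_dir, name)) for name in file_name]
d_all = pd.concat(frames, ignore_index=True)

d_all.to_csv("d_all.csv", index=False)
d_all.describe()
```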
github_jupyter
```
# Combine the per-state TAXRATES_ZIP5 CSV files (March 2019) into a single dataframe.
# Cleaned-up version of the original cell: the globals()-based generation of numbered
# variables (df1 ... df52) is replaced by a plain list of dataframes, which serves the
# same purpose, and the spilled print output has been removed.
import pandas as pd

data_dir = "C:/Users/Ahmed/Documents/A_DrNoman/Bid_postal/TAXRATES_ZIP5/"

# Quick look at two individual state files
data1 = pd.read_csv(data_dir + "TAXRATES_ZIP5_ID201903.csv")
data2 = pd.read_csv(data_dir + "TAXRATES_ZIP5_AL201903.csv")
print(data1.describe())
print(data2.describe())

# Concatenating two states, saved for reference
d = pd.concat([data1, data2])
d.to_csv('d.csv')

# The 52 files follow the pattern TAXRATES_ZIP5_<STATE>201903.csv
states = ['AK', 'AL', 'AR', 'AZ', 'CA', 'CO', 'CT', 'DC', 'DE', 'FL', 'GA', 'HI', 'IA',
          'ID', 'IL', 'IN', 'KS', 'KY', 'LA', 'MA', 'MD', 'ME', 'MI', 'MN', 'MO', 'MS',
          'MT', 'NC', 'ND', 'NE', 'NH', 'NJ', 'NM', 'NV', 'NY', 'OH', 'OK', 'OR', 'PA',
          'PR', 'RI', 'SC', 'SD', 'TN', 'TX', 'UT', 'VA', 'VT', 'WA', 'WI', 'WV', 'WY']
file_names = ["TAXRATES_ZIP5_{}201903.csv".format(s) for s in states]
print(len(file_names))  # 52

# Read every state file into its own dataframe
frames = [pd.read_csv(data_dir + name) for name in file_names]

# Concatenate all states into one dataframe and save it
d_all = pd.concat(frames, ignore_index=True)
d_all.to_csv('d_all.csv')
print(d_all.describe())
```
# [Classes](https://docs.python.org/3/tutorial/classes.html#a-first-look-at-classes) ``` class MyFirstClass: def __init__(self, name): self.name = name def greet(self): print('Hello {}!'.format(self.name)) my_instance = MyFirstClass('John Doe') print('my_instance: {}'.format(my_instance)) print('type: {}'.format(type(my_instance))) print('my_instance.name: {}'.format(my_instance.name)) ``` ## Methods The functions inside classes are called methods. They are used similarly to functions. ``` alice = MyFirstClass(name='Alice') alice.greet() ``` ### `__init__()` `__init__()` is a special method that is used for initialising instances of the class. It's called when you create an instance of the class. ``` class Example: def __init__(self): print('Now we are inside __init__') print('creating instance of Example') example = Example() print('instance created') ``` `__init__()` is typically used for initialising instance variables of your class. These can be listed as arguments after `self`. To be able to access these instance variables later during your instance's lifetime, you have to save them into `self`. `self` is the first argument of the methods of your class and it's your access to the instance variables and other methods. ``` class Example: def __init__(self, var1, var2): self.first_var = var1 self.second_var = var2 def print_variables(self): print('{} {}'.format(self.first_var, self.second_var)) e = Example('abc', 123) e.print_variables() ``` ### `__str__()` `__str__()` is a special method which is called when an instance of the class is converted to a string (e.g. when you want to print the instance). In other words, by defining the `__str__` method for your class, you can decide what the printable version of an instance of your class looks like. The method should return a string. ``` class Person: def __init__(self, name, age): self.name = name self.age = age def __str__(self): return 'Person: {}'.format(self.name) jack = Person('Jack', 82) print('This is the string presentation of jack: {}'.format(jack)) ``` ## Class variables vs instance variables Class variables are shared between all the instances of that class, whereas instance variables can hold different values between different instances of that class. ``` class Example: # These are class variables name = 'Example class' description = 'Just an example of a simple class' def __init__(self, var1): # This is an instance variable self.instance_variable = var1 def show_info(self): info = 'instance_variable: {}, name: {}, description: {}'.format( self.instance_variable, Example.name, Example.description) print(info) inst1 = Example('foo') inst2 = Example('bar') # name and description have identical values between instances assert inst1.name == inst2.name == Example.name assert inst1.description == inst2.description == Example.description # If you change the value of a class variable, it's changed across all instances Example.name = 'Modified name' inst1.show_info() inst2.show_info() ``` ## Public vs private In Python there's no strict separation between private and public methods or instance variables. The convention is to start the name of the method or instance variable with an underscore if it should be treated as private. Private means that it should not be accessed from outside of the class. For example, let's consider that we have a `Person` class which has `age` as an instance variable. We want `age` not to be directly accessed (e.g. changed) after the instance is created.
In Python, this would be: ``` class Person: def __init__(self, age): self._age = age example_person = Person(age=15) # You can't do this: # print(example_person.age) # Nor this: # example_person.age = 16 ``` If you want the `age` to be readable but not writable, you can use `property`: ``` class Person: def __init__(self, age): self._age = age @property def age(self): return self._age example_person = Person(age=15) # Now you can do this: print(example_person.age) # But not this: # example_person.age = 16 ``` This way you can have controlled access to the instance variables of your class: ``` class Person: def __init__(self, age): self._age = age @property def age(self): return self._age def celebrate_birthday(self): self._age += 1 print('Happy bday for {} years old!'.format(self._age)) example_person = Person(age=15) example_person.celebrate_birthday() ``` ## Introduction to inheritance ``` class Animal: def greet(self): print('Hello, I am an animal') @property def favorite_food(self): return 'beef' class Dog(Animal): def greet(self): print('wof wof') class Cat(Animal): @property def favorite_food(self): return 'fish' dog = Dog() dog.greet() print("Dog's favorite food is {}".format(dog.favorite_food)) cat = Cat() cat.greet() print("Cat's favorite food is {}".format(cat.favorite_food)) ```
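The inheritance examples above override methods and a property but never touch `__init__`. If a subclass needs its own initialiser, it can still reuse the parent's one via `super()`. The following is a small optional sketch in the spirit of the `Animal`/`Dog` example; the `name` and `breed` attributes are illustrative additions of mine, not part of the original material.

```
class Animal:
    def __init__(self, name):
        self.name = name

    def greet(self):
        print('Hello, I am an animal called {}'.format(self.name))


class Dog(Animal):
    def __init__(self, name, breed):
        # Reuse the parent initialiser instead of assigning name manually
        super().__init__(name)
        self.breed = breed

    def greet(self):
        print('wof wof, I am {} the {}'.format(self.name, self.breed))


dog = Dog('Rex', 'beagle')
dog.greet()
```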
``` import scanpy as sc import os import pandas as pd import numpy as np import pickle as pkl import matplotlib as mpl import matplotlib.pyplot as plt import scipy as sp import scipy.io sc.settings.verbosity = 3 ``` Download the files from https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE118614 into `../data/GSE118614` and rename them as follows: GSE118614_10x_aggregate.mtx.gz -> matrix.mtx.gz, GSE118614_barcodes.tsv.gz -> barcodes.tsv.gz, GSE118614_genes.tsv.gz -> genes.tsv.gz ``` cache_file = "../data/GSE118614/gse118614_raw.h5ad" if os.path.exists(cache_file): adata = sc.read_h5ad(cache_file) else: from scipy.sparse import csr_matrix mtx = sp.io.mmread("../data/GSE118614/matrix.mtx.gz") genes = pd.read_csv("../data/GSE118614/genes.tsv.gz", sep='\t') cells = pd.read_csv("../data/GSE118614/barcodes.tsv.gz", sep='\t') adata = sc.AnnData(mtx, cells, genes) adata.X = csr_matrix(adata.X) adata.write(cache_file) adata.var.index = adata.var.gene_short_name.tolist() adata.var_names_make_unique() adata.obs def str2num_age(x): if x[0] == 'P': return float(x[1:]) if x[0] == 'E': return float(x[1:]) - 20 adata.obs['numerical_age'] = adata.obs.age.apply(str2num_age) adata.obs['numerical_age'].value_counts() sc.settings.set_figure_params(dpi=60, facecolor='white') sc.pl.highest_expr_genes(adata, n_top=20) adata.var['mt'] = adata.var.index.str.startswith('mt-') # annotate the group of mitochondrial genes as 'mt' sc.pp.calculate_qc_metrics(adata, qc_vars=['mt'], percent_top=None, log1p=False, inplace=True) sc.pl.violin(adata, ['n_genes_by_counts', 'total_counts', 'pct_counts_mt'], jitter=0.4, multi_panel=True) sc.pl.scatter(adata, x='total_counts', y='pct_counts_mt') sc.pl.scatter(adata, x='total_counts', y='n_genes_by_counts') adata = adata[(adata.obs.n_genes_by_counts > 400) & (adata.obs.n_genes_by_counts < 5000) & (adata.obs.pct_counts_mt < 4.0), :] adata = adata[~adata.obs.umap2_CellType.isin(['Red Blood Cells', 'Doublets'])] sc.pp.normalize_total(adata, target_sum=1e4) sc.pp.log1p(adata) sc.pp.highly_variable_genes(adata, min_mean=0.0125, max_mean=3, min_disp=0.5) sc.pl.highly_variable_genes(adata) adata.var.highly_variable.sum() adata.raw = adata adata = adata[:, adata.var.highly_variable] # sc.pp.regress_out(adata, ['total_counts', 'pct_counts_mt']) sc.pp.scale(adata, max_value=10) adata adata.obs.to_csv("gse118614.csv") sc.settings.set_figure_params(dpi=100, facecolor='white') sc.tl.pca(adata, svd_solver='arpack') sc.pl.pca(adata, color='umap2_CellType') sc.pl.pca_variance_ratio(adata, log=False, n_pcs=50) sc.pp.neighbors(adata, n_neighbors=10, n_pcs=14, metric='cosine') sc.tl.umap(adata) sc.pl.umap(adata, color=['sample', 'umap2_CellType'], ncols=1) ```
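For convenience, the manual renaming step described at the top of this notebook can be scripted. This is only a sketch: the target directory `../data/GSE118614` and the file names come from the text above, and it assumes the three GEO supplementary files have already been downloaded into that folder.

```
import os

data_dir = "../data/GSE118614"
renames = {
    "GSE118614_10x_aggregate.mtx.gz": "matrix.mtx.gz",
    "GSE118614_barcodes.tsv.gz": "barcodes.tsv.gz",
    "GSE118614_genes.tsv.gz": "genes.tsv.gz",
}

for src, dst in renames.items():
    src_path = os.path.join(data_dir, src)
    dst_path = os.path.join(data_dir, dst)
    # Only rename when the downloaded file is present and the target does not exist yet
    if os.path.exists(src_path) and not os.path.exists(dst_path):
        os.rename(src_path, dst_path)
```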
## Feature Selection Techniques 1. Introduction to Feature Selection 2. VarianceThreshold 3. Chi-squared stats 4. ANOVA using f_classif 5. Univariate Linear Regression Tests using f_regression 6. F-score vs Mutual Information 7. Mutual Information for discrete values 8. Mutual Information for continuous values 9. SelectKBest 10. SelectPercentile 11. SelectFromModel 12. Recursive Feature Elimination ### Feature Selection: * Selecting features from the dataset * Improve the estimator's accuracy * Boost performance on high-dimensional datasets * Below we will discuss univariate selection methods * Also, feature elimination methods ``` from sklearn import feature_selection import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` ### VarianceThreshold * Drop the columns whose variance is below a configured level * This method is unsupervised, i.e. the target is not taken into account * Intuition: columns whose values are pretty much the same carry little information, so they won't have much impact on the target ``` df = pd.DataFrame({'A':['m','f','m','m','m','m','m','m'], 'B':[1,2,3,1,2,1,1,1], 'C':[1,2,3,1,2,1,1,1]}) df from sklearn.preprocessing import LabelEncoder le = LabelEncoder() df['A'] = le.fit_transform(df.A) df vt = feature_selection.VarianceThreshold(threshold=.2) vt.fit_transform(df) vt.variances_ ``` ### Chi-Square for non-negative features & class * Feature data should be booleans or counts * Supervised technique for feature selection * Target should be discrete * A higher chi-squared value means a more important feature for the target ``` df = pd.read_csv('datasets/tennis.csv') df.head() for col in df.columns: le = LabelEncoder() df[col] = le.fit_transform(df[col]) df chi2, pval = feature_selection.chi2(df.drop('play',axis=1),df.play) pval chi2 ``` ### 4. ANOVA using f_classif * For feature variables continuous in nature * And a target variable discrete in nature * Internally, this method uses the ratio of the variation between target groups to the variation within the groups, computed per column ``` from sklearn.datasets import load_breast_cancer cancer_data = load_breast_cancer() X = cancer_data.data Y = cancer_data.target print(X.shape) F, pval = feature_selection.f_classif(X,Y) print(pval) print(F) ``` * Each F value represents the importance of a feature ### Univariate Regression Test using f_regression * Linear model for testing the individual effect of each of many regressors.
* The correlation between each feature & the target is calculated * The F-test captures linear dependency ``` from sklearn.datasets import california_housing house_data = california_housing.fetch_california_housing() X,Y = house_data.data, house_data.target print(X.shape,Y.shape) F, pval = feature_selection.f_regression(X,Y) F ``` * Columns with the top F values are the selected features ### F-score versus Mutual Information ``` np.random.seed(0) X = np.random.rand(1000, 3) y = X[:, 0] + np.sin(6 * np.pi * X[:, 1]) + 0.1 * np.random.randn(1000) F, pval = feature_selection.f_regression(X,y) print(F) plt.scatter(X[:,0],y,s=10) plt.scatter(X[:,1],y,s=10) ``` ### Mutual Information for regression using mutual_info_regression * Returns a non-negative dependency score between each feature & the target (0 means independent) * Captures any kind of dependency, even non-linear * Target is continuous in nature ``` feature_selection.mutual_info_regression(X,y) ``` ### Mutual Information for classification using mutual_info_classif * Returns a non-negative dependency score between each feature & the target (0 means independent) * Captures any kind of dependency, even non-linear * Target is discrete in nature ``` cols = ['age','workclass','fnlwgt','education','education-num','marital-status','occupation','relationship' ,'race','sex','capital-gain','capital-loss','hours-per-week','native-country','Salary'] adult_data = pd.read_csv('https://raw.githubusercontent.com/zekelabs/data-science-complete-tutorial/master/Data/adult.data.txt', names=cols) adult_data.head() cat_cols = list(adult_data.select_dtypes('object').columns) cat_cols.remove('Salary') len(cat_cols) from sklearn.preprocessing import LabelEncoder for col in cat_cols: le = LabelEncoder() adult_data[col] = le.fit_transform(adult_data[col]) X = adult_data.drop(columns=['Salary']) y = le.fit_transform(adult_data.Salary) feature_selection.mutual_info_classif(X, y) X.columns ``` ### SelectKBest * SelectKBest returns the K most important features based on the above techniques * Based on configuration, it can use mutual information, ANOVA, or regression-based scoring ``` adult_data.head() adult_data.shape selector = feature_selection.SelectKBest(k=7, score_func=feature_selection.f_classif) data = selector.fit_transform(adult_data.drop('Salary',axis=1),adult_data.Salary) data.shape selector.scores_ selector = feature_selection.SelectKBest(k=7, score_func=feature_selection.mutual_info_classif) data = selector.fit_transform(adult_data.drop('Salary',axis=1),adult_data.Salary) data.shape selector.scores_ ``` ### SelectPercentile * Selects the top features whose scores fall within the configured percentile * Default is the top 10 percent ``` selector = feature_selection.SelectPercentile(percentile=20, score_func=feature_selection.mutual_info_classif) data = selector.fit_transform(adult_data.drop('Salary',axis=1),adult_data.Salary) data.shape ``` ### SelectFromModel * Selects important features based on model weights * The estimator should expose `coef_` or `feature_importances_` ``` from sklearn.datasets import load_boston boston = load_boston() boston.data.shape from sklearn.linear_model import LinearRegression clf = LinearRegression() sfm = feature_selection.SelectFromModel(clf, threshold=0.25) sfm.fit_transform(boston.data, boston.target).shape boston.data.shape ``` ### Recursive Feature Elimination * Uses an external estimator to calculate weights for the features * First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a coef_ attribute or through a feature_importances_ attribute.
* Then, the least important features are pruned from the current set of features. * That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached. ``` from sklearn.datasets import make_regression from sklearn.feature_selection import RFE from sklearn.svm import SVR X, y = make_regression(n_samples=50, n_features=10, random_state=0) estimator = SVR(kernel="linear") selector = RFE(estimator, n_features_to_select=5, step=1) data = selector.fit_transform(X, y) X.shape data.shape selector.ranking_ ```
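The `SelectKBest`/`SelectPercentile` cells above only report the reduced shape and the scores; they never show which columns survived. A minimal sketch of how to map the selection back onto column names with `get_support()`, reusing the label-encoded `adult_data` frame built earlier in this notebook (the `X_sel`/`y_sel` names are mine):

```
from sklearn import feature_selection

# Reuse the adult income dataframe prepared in the mutual-information section
X_sel = adult_data.drop('Salary', axis=1)
y_sel = adult_data.Salary

selector = feature_selection.SelectKBest(k=7, score_func=feature_selection.f_classif)
selector.fit(X_sel, y_sel)

# get_support() returns a boolean mask over the input columns;
# indexing the dataframe's columns with it shows which features were kept
kept_columns = X_sel.columns[selector.get_support()]
print(kept_columns.tolist())
```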
# Technology Explorers Course 1, Lab 1: Practice with Python and Jupyter Notebooks **Instructor**: Wesley Beckner **Contact**: [email protected] <br> --- <br> In this lab we will continue to practice with pandas, visualization, and writing functions. <br> --- This first lab assignment will be a review of what we discussed today. #### 1 Python variables In the empty code cell below, create the following four variables: - A string variable named `favorite_movie` that represents your favorite movie - A string variable named `national_chain` that represents your favorite fast food restaurant - An integer variable named `streaming_video_hours` that represents the whole number of hours you watch any streaming video service (ex. Netflix, Hulu, Disney+) per week. - A float variable named `headphone_cost` that represents the most money you had to pay, in dollar-cent amount (0.00), for headphones. Do not include a '$' symbol. Then after they are declared, print each one using the print() function explained in section 1.1.2. ``` ``` To check if each variable is the correct data type, use the type() function explained in section 1.1.3, in the empty code cell below. For example, `type(favorite_movie)` should return the output `<class 'str'>`, which indicates the string type. ``` ``` #### 2 Practice with math in Python Let's start with a few basic operators that were covered in section 1.1.4. Write the expression for multiplying 23 by 31, so that running this cell will correctly output the product of these two numbers. ``` ``` Consider the operation written in Python: `27 / 3 + 6`. Write the same syntax in the empty cell below, and modify it to include parentheses in the right location so that the result/answer of the math is 3.0, rather than 15.0. ``` ``` Let's learn more operators beyond the ones we covered earlier. Write the line of code `3 ** 2` in the cell below. From the output, what do you think the double asterisks (`**`) operator represents in Python? ``` ``` Write the line of code `28 / 3` below. Then just below the line in the same cell, write the code `28 // 3`. Compare the differences in output between the two. Can you decipher what `//` means in Python? ``` ``` Now for more complicated mathematical operations! Try to write the Python equivalent of the following: $\frac{14 + 28}{28 - 14}$ ``` ``` Now try this one: $\frac{15 + 984}{-(217+4)}$ ``` ``` And finally, write the Python equivalent for this: $\frac{-(3655 * 44)}{(8 * 16)^3}$ ``` ``` #### 3 Practice writing helpful comments Consider the following code below. There is no need to decipher and understand every piece; just be aware of the output when you run the code cell. Based on the output, modify the code cell by adding a code comment at the top of the cell briefly explaining what the code does. This comment can be as many lines as you'd like, and may or may not include direct references to the example print statements below. ``` def mystery_function(x): y = list(x) return " ".join(y[::-1]) print(mystery_function("UniversityOfWashington")) print(mystery_function("AvocadoToast")) print(mystery_function("RacecaR")) ``` #### 4 More Markdown Consider the vision statement of the Global Innovation Exchange: "Our mission is to build the talent that leverages emerging technologies in new and impactful ways". Type that same statement in a new text cell below, only add a `_` (underscore) at the beginning and end. What ends up happening to the format of the text as a result?
In a new text cell, list all of the potential data science projects you might work on post DSE, with each one on its own separate line. Then add a `- ` at the beginning of each line; include a single space between the hyphen and the first letter in your project name! Based on the output, what can you decipher about what this `- ` changes in the formatting? #### 5 Get familiar with the Python community Python has strong support from a community of avid developers and computer scientists. The Python Software Foundation (PSF) tries to maintain input and activity through their own website, [python.org](https://). Please explore their community section - https://www.python.org/community/ - and answer the following questions in a new text cell just below this one: - What is the name of the mailing list that the PSF manages for those who have questions about Python code? - In your own words, what is the goal of their Community Survey? - According to their most recent annual report, which continent provides the highest proportion of grants to the PSF? - Name at least three ways that the PSF recommends you can get involved with the community. #### 6 Advanced - Understanding the switch from Python 2 to 3 At the following link, https://www.python.org/doc/, is an article about the Python Software Foundation's decision to end support for Python version 2 and move to supporting version 3. From the article, answer the following questions in a new text cell just below this one: - On what official date was Python 2 no longer supported? - What is the version number of the last supported Python 2? - In your own words, describe why the Python Software Foundation made the decision to stop supporting Python 2. #### 7 Advanced - Create your own Google Colab notebook! Create your own separate Google Colab notebook with the following rules and content: - The file name of your notebook should be in the format ***lastname_C1S1_breakout_custom_notebook.ipynb***. - Include a header (of any size) that lists your first name and last name, followed by "**Technology Explorers**". - Create a short paragraph bio of yourself in a text cell. - Include/embed an image of the Python logo, which can be found here: https://www.python.org/community/logos/. - Create a code cell with just the line of code: `import this` - Take your favorite line from that output and paste it in a text cell below, both bolding and italicizing it
## BERT-Large benchmark for TensorFlow We use [Model Zoo](https://github.com/IntelAI/models) to run the [BERT Large](https://github.com/IntelAI/models/tree/v1.8.1/benchmarks/language_modeling/tensorflow/bert_large/README.md) model on the SQuAD v1.1 dataset. ## Part 1. Download datasets, checkpoints and pre-trained model Download the datasets, checkpoints and pre-trained model from the Internet to the Databricks File System (DBFS). These data can be shared across clusters and only need to be downloaded once. So if you run multiple copied notebooks simultaneously, please make sure this cell runs only once to avoid unexpected issues. ``` %sh # Download datasets, checkpoints and pre-trained model rm -rf /dbfs/home/TF/bert-large mkdir -p /dbfs/home/TF/bert-large mkdir -p /dbfs/home/TF/bert-large/SQuAD-1.1 cd /dbfs/home/TF/bert-large/SQuAD-1.1 wget https://github.com/oap-project/oap-project.github.io/raw/master/resources/ai/bert/dev-v1.1.json wget https://github.com/oap-project/oap-project.github.io/raw/master/resources/ai/bert/evaluate-v1.1.py wget https://github.com/oap-project/oap-project.github.io/raw/master/resources/ai/bert/train-v1.1.json cd /dbfs/home/TF/bert-large wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_8/bert_large_checkpoints.zip unzip bert_large_checkpoints.zip cd /dbfs/home/TF/bert-large wget https://storage.googleapis.com/bert_models/2019_05_30/wwm_uncased_L-24_H-1024_A-16.zip unzip wwm_uncased_L-24_H-1024_A-16.zip ``` ### After the data has been downloaded, you can start the BERT-Large training/inference workload. ## Part 2. Run BERT-Large training workload In the beginning, data preprocessing will take some minutes. Once the preprocessing is done, the training workload outputs throughput performance numbers in real time. It takes around five hours to complete the entire training process on a Standard_F32s_v2 instance. The precise elapsed time depends on the instance type and on whether Intel-optimized TensorFlow is used. **Note:** ***If you click "Stop Execution" on a running training/inference cell and then run training/inference again immediately, you may see a lower performance number, because the previous training/inference may still be running.*** ``` import os import subprocess from pathlib import Path def run_training(): training = '/tmp/training.sh' with open(training, 'w') as f: f.write("""#!/bin/bash # BERT-Large Training # Install necessary package sudo apt-get update sudo apt-get install zip -y sudo apt-get -y install git sudo apt-get install -y libblacs-mpi-dev sudo apt-get install -y numactl # Remove old materials if exist rm -rf /TF/ mkdir /TF/ # Create ckpt directory mkdir -p /TF/BERT-Large-output/ # Download IntelAI benchmark cd /TF/ wget https://github.com/IntelAI/models/archive/refs/tags/v1.8.1.zip unzip v1.8.1.zip cores_per_socket=$(lscpu | awk '/^Core\(s\) per socket/{ print $4 }') numa_nodes=$(lscpu | awk '/^NUMA node\(s\)/{ print $3 }') export SQUAD_DIR=/dbfs/home/TF/bert-large/SQuAD-1.1 export BERT_LARGE_MODEL=/dbfs/home/TF/bert-large/wwm_uncased_L-24_H-1024_A-16 export BERT_LARGE_OUTPUT=/TF/BERT-Large-output/ export PYTHONPATH=$PYTHONPATH:. 
function run_training_without_numabind() { python launch_benchmark.py \ --model-name=bert_large \ --precision=fp32 \ --mode=training \ --framework=tensorflow \ --batch-size=4 \ --benchmark-only \ --data-location=$BERT_LARGE_MODEL \ -- train-option=SQuAD DEBIAN_FRONTEND=noninteractive config_file=$BERT_LARGE_MODEL/bert_config.json init_checkpoint=$BERT_LARGE_MODEL/bert_model.ckpt vocab_file=$BERT_LARGE_MODEL/vocab.txt train_file=$SQUAD_DIR/train-v1.1.json predict_file=$SQUAD_DIR/dev-v1.1.json do-train=True learning-rate=1.5e-5 max-seq-length=384 do_predict=True warmup-steps=0 num_train_epochs=0.1 doc_stride=128 do_lower_case=False experimental-gelu=False mpi_workers_sync_gradients=True } function run_training_with_numabind() { intra_thread=`expr $cores_per_socket - 2` python launch_benchmark.py \ --model-name=bert_large \ --precision=fp32 \ --mode=training \ --framework=tensorflow \ --batch-size=4 \ --mpi_num_processes=$numa_nodes \ --num-intra-threads=$intra_thread \ --num-inter-threads=1 \ --benchmark-only \ --data-location=$BERT_LARGE_MODEL \ -- train-option=SQuAD DEBIAN_FRONTEND=noninteractive config_file=$BERT_LARGE_MODEL/bert_config.json init_checkpoint=$BERT_LARGE_MODEL/bert_model.ckpt vocab_file=$BERT_LARGE_MODEL/vocab.txt train_file=$SQUAD_DIR/train-v1.1.json predict_file=$SQUAD_DIR/dev-v1.1.json do-train=True learning-rate=1.5e-5 max-seq-length=384 do_predict=True warmup-steps=0 num_train_epochs=0.1 doc_stride=128 do_lower_case=False experimental-gelu=False mpi_workers_sync_gradients=True } # Launch Benchmark for training cd /TF/models-1.8.1/benchmarks/ if [ "$numa_nodes" = "1" ];then run_training_without_numabind else run_training_with_numabind fi """) os.chmod(training, 555) p = subprocess.Popen([training], stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE) directory_to_second_numa_info = Path("/sys/devices/system/node/node1") if directory_to_second_numa_info.exists(): # 2 NUMA nodes for line in iter(p.stdout.readline, ''): if b"Reading package lists..." in line or b"answer: [UNK] 1848" in line: print("\t\t\t\t Preparing data ......", end='\r') if b"INFO:tensorflow:examples/sec" in line: print("\t\t\t\t Training started, current real-time throughput (examples/sec) : " + str(float(str(line).strip("\\n'").split(' ')[1])*2), end='\r') if line == b'' and p.poll() != None: break else: # 1 NUMA node for line in iter(p.stdout.readline, ''): if b"Reading package lists..." in line or b"answer: [UNK] 1848" in line: print("\t\t\t\t Preparing data ......", end='\r') if b"INFO:tensorflow:examples/sec" in line: print("\t\t\t\t Training started, current real-time throughput (examples/sec) : " + str(line).strip("\\n'").split(' ')[1], end='\r') if line == b'' and p.poll() != None: break p.stdout.close() run_training() ``` ## Part 3. Run BERT-Large inference workload In the beginning, data preprocessing will take some minutes. Once the preprocessing is done, the inference workload can output throughput performance number in real time. It takes around 30 minutes to complete the entire inference process on Standard_F32s_v2 instance. The precise elapsed time depends on instance type and whether is Intel-optimized TensorFlow. **Note:** ***If you click "Stop Execution" for running training/inference cells, and then run training/inference again immediately. 
You may see lower performance number, because another training/inference is still on-going.*** ``` import os import subprocess from pathlib import Path def run_inference(): inference = '/tmp/inference.sh' with open(inference, 'w') as f: f.write("""#!/bin/bash # BERT-Large Inference # Install necessary package sudo apt-get update sudo apt-get install zip -y sudo apt-get -y install git sudo apt-get install -y numactl # Remove old materials if exist rm -rf /TF/ mkdir /TF/ # Create ckpt directory mkdir -p /TF/BERT-Large-output/ export BERT_LARGE_OUTPUT=/TF/BERT-Large-output # Download IntelAI benchmark cd /TF/ wget https://github.com/IntelAI/models/archive/refs/tags/v1.8.1.zip unzip v1.8.1.zip cd /TF/models-1.8.1/ wget https://github.com/oap-project/oap-tools/raw/master/integrations/ml/databricks/benchmark/IntelAI_models_bertlarge_inference_realtime_throughput.patch git apply IntelAI_models_bertlarge_inference_realtime_throughput.patch export SQUAD_DIR=/dbfs/home/TF/bert-large/SQuAD-1.1/ export BERT_LARGE_DIR=/dbfs/home/TF/bert-large/ export PYTHONPATH=$PYTHONPATH:. # Launch Benchmark for inference numa_nodes=$(lscpu | awk '/^NUMA node\(s\)/{ print $3 }') function run_inference_without_numabind() { cd /TF/models-1.8.1/benchmarks/ python3 launch_benchmark.py \ --model-name=bert_large \ --precision=fp32 \ --mode=inference \ --framework=tensorflow \ --batch-size=32 \ --data-location $BERT_LARGE_DIR/wwm_uncased_L-24_H-1024_A-16 \ --checkpoint $BERT_LARGE_DIR/bert_large_checkpoints \ --output-dir $BERT_LARGE_OUTPUT/bert-squad-output \ --verbose \ -- infer_option=SQuAD \ DEBIAN_FRONTEND=noninteractive \ predict_file=$SQUAD_DIR/dev-v1.1.json \ experimental-gelu=False \ init_checkpoint=model.ckpt-3649 } function run_inference_with_numabind() { cd /TF/models-1.8.1/benchmarks/ nohup python3 launch_benchmark.py \ --model-name=bert_large \ --precision=fp32 \ --mode=inference \ --framework=tensorflow \ --batch-size=32 \ --socket-id 0 \ --data-location $BERT_LARGE_DIR/wwm_uncased_L-24_H-1024_A-16 \ --checkpoint $BERT_LARGE_DIR/bert_large_checkpoints \ --output-dir $BERT_LARGE_OUTPUT/bert-squad-output \ --verbose \ -- infer_option=SQuAD \ DEBIAN_FRONTEND=noninteractive \ predict_file=$SQUAD_DIR/dev-v1.1.json \ experimental-gelu=False \ init_checkpoint=model.ckpt-3649 >> socket0-inference-log & python3 launch_benchmark.py \ --model-name=bert_large \ --precision=fp32 \ --mode=inference \ --framework=tensorflow \ --batch-size=32 \ --socket-id 1 \ --data-location $BERT_LARGE_DIR/wwm_uncased_L-24_H-1024_A-16 \ --checkpoint $BERT_LARGE_DIR/bert_large_checkpoints \ --output-dir $BERT_LARGE_OUTPUT/bert-squad-output \ --verbose \ -- infer_option=SQuAD \ DEBIAN_FRONTEND=noninteractive \ predict_file=$SQUAD_DIR/dev-v1.1.json \ experimental-gelu=False \ init_checkpoint=model.ckpt-3649 } if [ "$numa_nodes" = "1" ];then run_inference_without_numabind else run_inference_with_numabind fi""") os.chmod(inference, 555) p = subprocess.Popen([inference], stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE) directory_to_second_numa_info = Path("/sys/devices/system/node/node1") if directory_to_second_numa_info.exists(): # 2 NUMA nodes for line in iter(p.stdout.readline, ''): if b'Reading package lists...' 
in line or b'INFO:tensorflow:tokens' in line or b'INFO:tensorflow: name = bert' in line: print("\t\t\t\t Preparing data ......", end='\r') if b"INFO:tensorflow:examples/sec" in line: print("\t\t\t\t Inference started, current real-time throughput (examples/sec) : " + str(line).strip("\\n'").split(' ')[1], end='\r') if b"throughput((num_processed_examples-threshod_examples)/Elapsedtime)" in line: print("\t\t\t\t Inference finished, overall inference throughput (examples/sec) : " + str(line).strip("\\n'").split(':')[1], end='\r') if line == b'' and p.poll() != None: break p.stdout.close() run_inference() ``` ## Check whether TensorFlow is Intel-optimized This is a simple auxiliary script to check whether the installed TensorFlow is Intel-optimized TensorFlow. "True" indicates Intel-optimized TensorFlow. ``` # Print the version, and check whether it is Intel-optimized import tensorflow print("tensorflow version: " + tensorflow.__version__) from packaging import version if (version.parse("2.5.0") <= version.parse(tensorflow.__version__)): from tensorflow.python.util import _pywrap_util_port print( _pywrap_util_port.IsMklEnabled()) else: from tensorflow.python import _pywrap_util_port print(_pywrap_util_port.IsMklEnabled()) ```
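Related to the note above about clicking "Stop Execution": before re-running a training or inference cell, it can help to confirm that no `launch_benchmark.py` process from an earlier run is still alive. This is only a convenience sketch, not part of the Model Zoo tooling, and it assumes `pgrep` is available on the driver node:

```
import subprocess

# List any leftover benchmark processes from a previous run (empty output means none)
result = subprocess.run(["pgrep", "-af", "launch_benchmark.py"],
                        capture_output=True, text=True)
if result.stdout.strip():
    print("A previous benchmark appears to be running:")
    print(result.stdout)
else:
    print("No leftover benchmark process found.")
```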
```
import numpy as np                # Import the numerical library
import sympy as sym               # Symbolic math
import matplotlib.pyplot as plt   # Import only pyplot from matplotlib
import matplotlib.image as mpimg
from sympy.plotting import plot   # To plot functions of 2 variables
from sympy.plotting import plot3d # For 3 variables
from sympy.plotting import plot3d_parametric_surface
from IPython.display import Image
import ipympl                     # Image manager
sym.init_printing()               # Enable symbolic (pretty) output in Jupyter
%matplotlib inline

Image(filename='LAB2.png', width=300)

# The design must satisfy:
# Operational amplifier LM741 or LM324
# Supply Vcc = 10V, Vss = -10V
# Mid-band gain A = Vo/V1 and A = Vo/V2 must be equal to 30.
# The amplifier's Zi must not load the signal source, i.e. Ri << Zi1 and Zi2 (by at least 10 times).
# Use resistors <= 1 MOhm
# Sources V1 and V2 (fig. 2) must be considered under conditions 1.A and 1.B
# 1.A  Ri = 50 Ohm
# 1.B  Ri = 100 kOhm

# The LM741 is used; according to table 6.7-1 of the Dorf & Svoboda book:
# Ipol = 80nA ; Ios = 20nA
# Vos = 1mV ; Ad = 200 V/mV = 200k V/V
# SR = 0.5 V/us = 0.5 Meg V/s = 500k V/s
# CMRR = 31.6 V/mV = 31.6k V/V

# Generic form
V1, V2 = sym.symbols('V_1, V_2')
Vo = sym.Function('V_o')(V1, V2)
eq_Vo = sym.Eq(Vo, (sym.diff(Vo.subs(V2, 0), V1))*V1 + sym.diff(Vo.subs(V1, 0), V2)*V2)
sym.pprint(eq_Vo)

# CASE A) Ri = 50
Ri, R, Rf = sym.symbols('Ri, R, Rf')

# If we do the design assuming an ideal op-amp
eq_Vo1 = sym.Eq(Vo.subs(V2, 0), -V1*(Rf/R))
sym.pprint(eq_Vo1)
eq_Vo2 = sym.Eq(Vo.subs(V1, 0), -V2*(Rf/R))
sym.pprint(eq_Vo2)
eq_Vo = sym.Eq(Vo, eq_Vo1.rhs + eq_Vo2.rhs)
sym.pprint(sym.simplify(eq_Vo))

# By design A = 30 and Ri = 50
eq_Ri_A = sym.Eq(Ri, 50)
sym.pprint(eq_Ri_A)
eq_R_A = sym.Eq(R, Ri*10 + Ri)
sym.pprint(eq_R_A)
eq_Rf_A = sym.Eq(Rf, 30 * (eq_R_A.rhs).subs(Ri, eq_Ri_A.rhs))
sym.pprint(eq_Rf_A)

# CASE B) Ri = 100k
eq_Vo = sym.Eq(Vo, eq_Vo1.rhs + eq_Vo2.rhs)
sym.pprint(sym.simplify(eq_Vo))

# By design A = 30 and Ri = 100k
eq_Ri_B = sym.Eq(Ri, 100e3)
sym.pprint(eq_Ri_B)
eq_R_B = sym.Eq(R, Ri*10 + Ri)
sym.pprint(eq_R_B)
eq_Rf_B = sym.Eq(Rf, 30 * (eq_R_B.rhs).subs(Ri, eq_Ri_B.rhs))
sym.pprint(eq_Rf_B)

# In the last case Rf is on the order of megaohms
sym.pprint("{:.2e}".format(eq_Rf_B.rhs))

# Therefore a T-network is used
# Rf = Vo/If when Vi = 0
Image(filename='redT.png', width=300)

Ra, Rb, Rc = sym.symbols('Ra, Rb, Rc')
If, Irc, Vx, VoRT = sym.symbols('I_f, I_Rc, V_x, V_oRT')
eq_Vx = sym.Eq(Vx, If * Ra)
sym.pprint(eq_Vx)
eq_Irc = sym.Eq(Irc, eq_Vx.rhs/Rc)
sym.pprint(eq_Irc)
eq_VoRT = sym.Eq(VoRT, Vx + (If + Irc)*Rb)
sym.pprint(eq_VoRT)

# Substituting
eq_VoRT = sym.Eq(VoRT, eq_Vx.rhs + (If + eq_Irc.rhs)*Rb)
sym.pprint(sym.apart(eq_VoRT, If))
eq_Rf_T = sym.Eq(Rf, eq_VoRT.rhs/If)
sym.pprint(sym.simplify(eq_Rf_T))
eq_Rc = sym.Eq(Rc, sym.solve(eq_Rf_T, Rc)[0])
sym.pprint(eq_Rc)
eq_Rc_val = sym.Eq(Rc, eq_Rc.rhs.subs({Ra: 100e3, Rb: 100e3, Rf: eq_Rf_B.rhs}))
sym.pprint(eq_Rc_val)

# BLACKMAN
# First we analyse the result with Ad = 200k V/V,
# using Blackman's formula Avf = Ad/(1 - T)
# For case A)
Image(filename='black1.png', width=300)

# For case A)
Avf, Av, T = sym.symbols('A_vf, A_v, T')
eq_Avf = sym.Eq(Avf, Av/(1 - T))
sym.pprint(eq_Avf)
V1, V2, Vp = sym.symbols('V_1,V_2,V_p')
Vo = sym.Function('V_o')(V1, V2, Vp)
eq_Av = sym.Eq(Av, sym.diff(Vo.subs({V2: 0, Vp: 0})))
sym.pprint(eq_Av)
eq_Av = sym.Eq(Av, sym.diff(Vo.subs({V1: 0, Vp: 0})))
sym.pprint(eq_Av)
eq_T = sym.Eq(T, sym.diff(Vo.subs({V1: 0, V2: 0})))
sym.pprint(eq_T)

# Av = Vo/V- * V-/V1
# For the report only
Vo_o_Vneg, Vneg_o_V1 = sym.symbols('(V_{o}/V_{-}), (V_{-}/V_{1})')
eq_Av = sym.Eq(Av, Vo_o_Vneg*Vneg_o_V1)
display(eq_Av)
Ad = sym.Symbol('A_d')
eq_Av_A = sym.Eq(Av, -Ad*((R**-1 + Rf**-1)**-1)/(R + (R**-1 + Rf**-1)**-1))
sym.pprint(eq_Av_A)
sym.pprint(sym.simplify(eq_Av_A))
sym.pprint(eq_Av_A.subs({R: eq_R_A.rhs, Ri: eq_Ri_A.rhs, Rf: eq_Rf_A.rhs, Ad: 200e3}))
eq_T_A = sym.Eq(T, -Ad*(R/2)/(R/2 + Rf))
sym.pprint(sym.simplify(eq_T_A))
eq_T_A_val = eq_T_A.subs({R: eq_R_A.rhs, Ri: eq_Ri_A.rhs, Rf: eq_Rf_A.rhs, Ad: 200e3})
sym.pprint(eq_T_A_val)
eq_Avf_A = sym.Eq(Avf, eq_Av_A.rhs/(1 - eq_T_A.rhs))
sym.pprint(sym.simplify(eq_Avf_A))
sym.pprint(eq_Avf_A.subs({R: eq_R_A.rhs, Ri: eq_Ri_A.rhs, Rf: eq_Rf_A.rhs, Ad: 200e3}))

# BLACKMAN
# First we analyse the result with Ad = 200k V/V,
# using Blackman's formula Avf = Ad/(1 - T)
# For case B)
Image(filename='black2.png', width=300)

# For case B)
Avf, Av, T = sym.symbols('A_vf, A_v, T')
eq_Avf = sym.Eq(Avf, Av/(1 - T))
sym.pprint(eq_Avf)

# Av = Vo/V- * V-/V1
# For the report only
Vo_o_Vneg, Vneg_o_V1 = sym.symbols('(V_{o}/V_{-}), (V_{-}/V_{1})')
eq_Av = sym.Eq(Av, Vo_o_Vneg*Vneg_o_V1)
display(eq_Av)
eq_Av_B = sym.Eq(Av, -Ad*(R**-1 + (Ra + (Rc**-1 + Rb**-1)**-1)**-1)**-1/(R + (R**-1 + (Ra + (Rc**-1 + Rb**-1)**-1)**-1)**-1))
sym.pprint(sym.simplify(eq_Av_B))
sym.pprint(eq_Av_B.subs({R: eq_R_B.rhs, Ri: eq_Ri_B.rhs, Ra: 100e3, Rb: 100e3, Rc: eq_Rc_val.rhs, Ad: 200e3}))

# T = Vo/Vp = Vo/V- * V-/Ia * Ia/Ib * Ib/Vp
# For the report only
Vo_o_Vneg, Vneg_o_Ia, Ia_o_Ib, Ib_o_Vp = sym.symbols('(V_{o}/V_{-}), (V_{-}/I_{a}), (I_{a}/I_{b}). (I_{b}/V_{p})')
eq_T = sym.Eq(T, Vo_o_Vneg*Vneg_o_Ia*Ia_o_Ib*Ib_o_Vp)
display(eq_T)
eq_T_B = sym.Eq(T, -Ad*(R/2)*(Rc/((R/2) + Ra + Rc))*(1/(Rb + ((Rc**-1 + ((R/2) + Ra)**-1)**-1))))
sym.pprint(sym.simplify(eq_T_B))
eq_T_B_val = eq_T_B.subs({R: eq_R_B.rhs, Ri: eq_Ri_B.rhs, Ra: 100e3, Rb: 100e3, Rc: eq_Rc_val.rhs, Ad: 200e3})
sym.pprint(eq_T_B_val)
eq_Avf_B = sym.Eq(Avf, eq_Av_B.rhs/(1 - eq_T_B.rhs))
sym.pprint(sym.simplify(eq_Avf_B))
sym.pprint(eq_Avf_B.subs({R: eq_R_B.rhs, Ri: eq_Ri_B.rhs, Ra: 100e3, Rb: 100e3, Rc: eq_Rc_val.rhs, Ad: 200e3}))

# ERROR DUE TO Vos AND Ipol-: since there is no resistance at the (+) input,
# Vo(Ipol+) = 0 => this makes the error larger
Ipol, Vos = sym.symbols("Ipol_-, V_os")
DVo = sym.Function('\Delta V_o')(Ipol, Vos)
eq_Dvo = sym.Eq(DVo, sym.diff(DVo.subs(Vos, 0))*Ipol + sym.diff(DVo.subs(Ipol, 0))*Vos)
display(eq_Dvo)

# Applying Blackman
DVo = sym.Function('\Delta V_o')(Ipol, Vos, Vp)
eq_Dvo = sym.Eq(DVo, (sym.diff(DVo.subs({Vos: 0, Vp: 0}))/(1 - T))*Ipol + (sym.diff(DVo.subs({Ipol: 0, Vp: 0}))/(1 - T)*Vos))
display(eq_Dvo)
Image(filename='black3.png', width=300)

# CASE A)
DVo = sym.Function('\Delta V_o')(Ipol)
eq_DVo_Ipol = sym.Eq(DVo, Ipol*(((R/2)**-1 + Rf**-1)**-1)*-Ad/(1 - T))
display(eq_DVo_Ipol)
eq_DVo_Ipol_val = (eq_DVo_Ipol.subs({T: eq_T_A_val.rhs, Ad: 200e3, Ipol: 80e-9, Rf: eq_Rf_A.rhs, R: eq_R_A.rhs, Ri: eq_Ri_A.rhs}))
sym.pprint(eq_DVo_Ipol_val)
DVo = sym.Function('\Delta V_o')(Vos)
eq_DVo_Vos = sym.Eq(DVo, Vos*(Ad/(1 - T)))
eq_DVo_Vos_val = (eq_DVo_Vos.subs({T: eq_T_A_val.rhs, Ad: 200e3, Vos: 1e-3, Rf: eq_Rf_A.rhs, R: eq_R_A.rhs, Ri: eq_Ri_A.rhs}))
sym.pprint(eq_DVo_Vos_val)
DVo = sym.Function('\Delta V_o')(Ipol, Vos)
eq_DVo_Val = sym.Eq(DVo, abs(eq_DVo_Ipol_val.rhs) + abs(eq_DVo_Vos_val.rhs))
sym.pprint(eq_DVo_Val)

# Error due to non-infinite CMRR and non-infinite Ad
# Since the (+) input is grounded, the CMRR error is negligible
DVo = sym.Symbol('\Delta V_o')
FS, CMRR = sym.symbols('FS, CMRR')
eq_DVo_fin = sym.Eq(DVo, FS/abs(T) + FS/CMRR)
sym.pprint(eq_DVo_fin)
eq_DVo_fin = sym.Eq(DVo, FS/abs(T))
sym.pprint(eq_DVo_fin)
eq_DVo_fin_Val = eq_DVo_fin.subs({CMRR: 31.6e3, FS: 10, T: eq_T_A_val.rhs})
sym.pprint(eq_DVo_fin_Val)

# Adding up all the errors
eq_DVo_T = sym.Eq(DVo, eq_DVo_Val.rhs + eq_DVo_fin_Val.rhs)
sym.pprint(eq_DVo_T)

# AC ERROR: full-power bandwidth
t, Vpap, W, WHP, fHP = sym.symbols('t, V_pap, omega, omega_HP, f_HP')
SR = sym.Function('SR')()
Vo = sym.Function('V_o')(t)
eq_SR = sym.Eq(SR, sym.diff(Vo))
sym.pprint(eq_SR)
eq_Vo_t = sym.Eq(Vo, Vpap*sym.sin(W*t))
sym.pprint(eq_Vo_t)
eq_SR = sym.Eq(SR, sym.diff(eq_Vo_t.rhs, t).subs(t, 0))
sym.pprint(eq_SR)
eq_WHP = sym.Eq(WHP, sym.solve(eq_SR, W)[0])
sym.pprint(eq_WHP)
eq_WHP_Val = eq_WHP.subs({SR: 500e3, Vpap: 10})
sym.pprint(eq_WHP_Val)
eq_fHP_Val = sym.Eq(fHP, eq_WHP_Val.rhs/(2*np.pi))
sym.pprint(eq_fHP_Val)

# AC ERROR: small-signal bandwidth
GBW = 1e6
Wh = GBW/Avf
WH1 = 2*np.pi*GBW/30
display(WH1)
Image(filename='WH.png', width=300)

# AC errors: NORMALIZED VECTOR ERROR
# Transfer function of the gain
W, Wh, Avfi = sym.symbols('omega, omega_h, A_vfi')
Avf = Avfi/(1 + sym.I*W/Wh)
display(Avf)
avf = 1/(1 + sym.I*W/Wh)
display(avf)
avf_mod = 1/sym.sqrt(1 + (W/Wh)**2)
display(avf_mod)
avf_arg = -sym.atan(W/Wh)
display(avf_arg)
Ev = abs(1 - avf)
display(Ev)
Ev_mod = 1 - 1/sym.sqrt(1 + (W/Wh)**2)
display(Ev_mod)
Ev_arg = (sym.pi/2) - sym.atan(W/Wh)
display(Ev_arg)
```
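The script above leans on Blackman's relation Avf = Av/(1 - T) at several points. As a quick sanity check, here is a minimal plain-Python numeric sketch for case A, using only the design values derived above (Ri = 50 Ω, so R = 11·Ri = 550 Ω, Rf = 30·R = 16.5 kΩ, Ad = 200k V/V); the helper name `par` is a local convenience, not part of the lab code.

```
# Minimal numeric cross-check of Blackman's formula A_vf = A_v / (1 - T) for case A.
# Design values follow from the derivation above: R = 550 ohm, Rf = 16.5 kohm, Ad = 200k V/V.
R = 550.0     # input resistor [ohm]
Rf = 16.5e3   # feedback resistor [ohm]
Ad = 200e3    # open-loop differential gain [V/V]

def par(a, b):
    # Parallel combination of two resistors
    return a * b / (a + b)

Av = -Ad * par(R, Rf) / (R + par(R, Rf))  # forward gain, same expression as eq_Av_A
T = -Ad * (R / 2) / (R / 2 + Rf)          # loop gain, same expression as eq_T_A
Avf = Av / (1 - T)                        # Blackman closed-loop gain

print(f"Av  = {Av:.1f}")
print(f"T   = {T:.1f}")
print(f"Avf = {Avf:.3f}  (design magnitude: 30)")
```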
Get the names of the match files in the directory

```
from os import listdir
from os.path import isfile, join

matchesFolderName = "DatabaseOfMatches"  # folder with backgammon matches
matchesFileNames = [f for f in listdir(matchesFolderName) if isfile(join(matchesFolderName, f))]
#matchesFileNames = matchesFileNames[0:1] # test slice
```

Split all matches into games and put them into the SplitIndividualGames folder

```
# Creating the directory for the game files
import os
import re

gamesFolderName = os.path.join(os.getcwd(), "SplitIndividualGames")
if not os.path.exists(gamesFolderName):
    os.mkdir(gamesFolderName)

for fileName in matchesFileNames:
    fullMatchFileName = matchesFolderName + "/" + fileName
    with open(fullMatchFileName, mode="r") as currentFile:
        reader = currentFile.read()
        for i, part in enumerate(reader.split("Game ")):
            if i > 0:
                with open(gamesFolderName + "/" + fileName.replace(".txt", "") + "Game" + str(i) + ".txt", mode="w") as newFile:
                    newFile.write("Game" + str(part))
```

Clean and standardise each game file

```
from os import listdir
from os.path import isfile, join
import os

gamesFolderName = "SplitIndividualGames"  # folder with backgammon games
gamesFileNames = [f for f in listdir(gamesFolderName) if isfile(join(gamesFolderName, f))]
#gamesFileNames = gamesFileNames[0:1] # test slice
#print(gamesFileNames)

# Creating the directory for the cleaned game files
fullCleanedGamesFolderName = os.path.join(os.getcwd(), "SplitIndividualGamesCleaned")
if not os.path.exists(fullCleanedGamesFolderName):
    os.mkdir(fullCleanedGamesFolderName)

def split_by_colon(string_to_split):
    res = ["", ""]
    if re.search(":", string_to_split):
        res = string_to_split.split(":")
        res[0] = res[0].strip(" *)(:")
        res[1] = res[1].strip()
    else:
        res[0] = ""
        res[1] = string_to_split.strip()
    return res

cleanedGamesFolderName = "SplitIndividualGamesCleaned"

for fileName in gamesFileNames:
    DeleteFlag = "N"
    fullGameFileName = gamesFolderName + "/" + fileName
    fullCleanedGameFileName = cleanedGamesFolderName + "/" + fileName
    #print(fullGameFileName)
    # Pattern to recognise a proper line with rolls and moves from both players
    pattern = r'\s*[0-9]+\)\s[0-9][0-9]:\s.*[0-9][0-9]:'
    with open(fullCleanedGameFileName, 'w') as target_file:
        with open(fullGameFileName, 'r') as source_file:
            lines = source_file.readlines()
            first_line = lines[0].strip()   # we will be removing this line
            second_line = lines[1].strip()  # we will be removing this line
            last_line = lines[-1]
            for line in lines:
                res = re.search(pattern, line)
                if res != None:
                    playerBStart = res.end() - 3
                    break
                if line is last_line:
                    DeleteFlag = "Y"
                    print(fileName)
                    print("No such line in the game file")
            for line in lines:
                moveN = line[0:4].strip(") ")   # move number
                playerA = line[4:playerBStart]  # information about player A: roll + move OR cube action
                playerB = line[playerBStart:]   # information about player B: roll + move OR cube action
                playerAList = split_by_colon(playerA)
                playerBList = split_by_colon(playerB)
                if (line.strip() != first_line) & (line.strip() != second_line) & (line.strip() != ""):
                    line_out = moveN + "," + playerAList[0] + "," + playerAList[1] + "," + playerBList[0] + "," + playerBList[1] + "\n"
                    target_file.write(line_out)
    if DeleteFlag == "Y":
        os.remove(fullCleanedGameFileName)
```
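To make the cleaning step easier to follow, here is a small illustrative check of the `split_by_colon` helper defined above. The sample strings are hypothetical move entries in the usual "roll: move" format; the exact formatting inside real match files may differ.

```
# Hypothetical examples of the "roll: move" strings that split_by_colon is meant to handle
print(split_by_colon(" 31: 8/5 6/5"))    # -> ['31', '8/5 6/5']
print(split_by_colon("  Doubles => 2"))  # no colon -> ['', 'Doubles => 2']
```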
```
# To access the preprocessy module. Required in .ipynb files
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)

import numpy as np
import pandas as pd
import time
from sklearn.datasets import load_iris, load_boston, load_breast_cancer, load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, r2_score

from preprocessy.feature_selection import Correlation, SelectKBest
from preprocessy.resampling import Split
```

### Method to print the correlation statistics for the given dataset

```
def eval(X, threshold=0.8):
    corr = Correlation()
    for col1, col2, value, sign in corr.find(X, threshold):
        print(f'{col1} x {col2}\nCorrelation - {value:.2f}\nType - {sign}\n\n')
```

# Breast Cancer Dataset

The Breast Cancer dataset comprises `569 records` and `30 features`. Some of these features are highly correlated with each other. First we will list a few of those correlations with `values > 0.97`.

The goal is to compare the results before and after dropping the highly correlated columns indicated by the `Correlation` class from the `preprocessy.feature_selection` module, keeping all the other preprocessing thresholds the same. We will compare the accuracy and the time taken to get the results.

We use the `Split` class from the `preprocessy.resampling` module to perform the train-test split.

```
print("Dataset - Breast Cancer")
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
eval(X, threshold=0.97)
```

### Building a model directly

We now train a random forest classifier on the dataset without dropping any correlated columns.

```
start = time.time()
model = RandomForestClassifier()
X_train, X_test, y_train, y_test = Split().train_test_split(X, y, test_size=0.1)
model.fit(X_train, y_train)
preds = model.predict(X_test)
accuracy_1 = classification_report(y_test, preds, output_dict=True)["accuracy"]
print(f'Accuracy - {accuracy_1}')
print(f'Time taken - {(time.time()-start):4f}')
```

### Building a model post preprocessing

We now drop some of the columns that are correlated with `mean radius`, `worst radius` and `radius error`.

```
X.drop(['mean area','mean perimeter','worst area','worst perimeter','perimeter error','area error'], axis=1, inplace=True)

start = time.time()
model = RandomForestClassifier()
X_train, X_test, y_train, y_test = Split().train_test_split(X, y, test_size=0.1)
model.fit(X_train, y_train)
preds = model.predict(X_test)
accuracy_2 = classification_report(y_test, preds, output_dict=True)["accuracy"]
print(f'Accuracy - {accuracy_2}')
print(f'Time taken - {(time.time()-start):4f}')
```

## Conclusion

For this particular dataset and these thresholds, the accuracy of both approaches is `100%`, but the time consumed after dropping the correlated columns is slightly less than without dropping them.

# Iris Dataset

The Iris dataset consists of `150 records` and `4 features`. As the number of features is small, removing correlated columns will not be helpful.

```
print("Dataset - Iris")
X, y = load_iris(return_X_y=True, as_frame=True)
eval(X)
```

# Boston Housing Dataset

The Boston Housing dataset consists of `506 records` and `13 features`. We will apply the same comparison test as before, but with a threshold of 0.7 this time.

```
print("Dataset - Boston")
dataset = load_boston()
X = pd.DataFrame(dataset.data, columns=dataset.feature_names)
y = pd.Series(dataset.target, name="Target")
eval(X, threshold=0.7)
```

### Building a model directly

We now train a linear regression model on the dataset without dropping any correlated columns.

```
start = time.time()
model = LinearRegression(fit_intercept=True, normalize=True, copy_X=True)
X_train, X_test, y_train, y_test = Split().train_test_split(X, y)
model.fit(X_train, y_train)
preds = model.predict(X_test)
accuracy_1 = r2_score(y_test, preds)
print(f'Accuracy - {accuracy_1}')
print(f'Time taken - {(time.time()-start):4f}')
```

### Building a model post preprocessing

We now drop some of the columns that are correlated with `TAX`, `NOX` and `DIS`.

```
X.drop(['TAX','NOX','DIS'], axis=1, inplace=True)

start = time.time()
model = LinearRegression(fit_intercept=True, normalize=True, copy_X=True)
X_train, X_test, y_train, y_test = Split().train_test_split(X, y)
model.fit(X_train, y_train)
preds = model.predict(X_test)
accuracy_2 = r2_score(y_test, preds)
print(f'Accuracy - {accuracy_2}')
print(f'Time taken - {(time.time()-start):4f}')
```

## Conclusion

For this particular dataset and threshold, the accuracy improves slightly after dropping the correlated columns, and the time consumed is slightly less than without dropping them.

# Diabetes Dataset

The Diabetes dataset consists of `442 records` and `10 features`. We will apply the same comparison test as before, again with a threshold of 0.7.

```
print(f"Dataset - Diabetes")
X, y = load_diabetes(return_X_y=True, as_frame=True)
eval(X, threshold=0.7)
```

### Building a model directly

We now train a linear regression model on the dataset without dropping any correlated columns.

```
start = time.time()
model = LinearRegression(fit_intercept=True, normalize=True, copy_X=True)
X_train, X_test, y_train, y_test = Split().train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train)
preds = model.predict(X_test)
accuracy_1 = r2_score(y_test, preds)
print(f'Accuracy - {accuracy_1}')
print(f'Time taken - {(time.time()-start):4f}')
```

### Building a model post preprocessing

We now drop the `s2` column.

```
X.drop(['s2'], axis=1, inplace=True)

start = time.time()
model = LinearRegression(fit_intercept=True, normalize=True, copy_X=True)
X_train, X_test, y_train, y_test = Split().train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train)
preds = model.predict(X_test)
accuracy_2 = r2_score(y_test, preds)
print(f'Accuracy - {accuracy_2}')
print(f'Time taken - {(time.time()-start):4f}')
```

## Conclusion

For this particular dataset and threshold, the accuracy decreases slightly after dropping the correlated column, and the time consumed is slightly less than without dropping it.
# **Free-Electron Bands in a Periodic Lattice**

**Authors:** Dou Du, Taylor James Baird and Giovanni Pizzi

<i class="fa fa-home fa-2x"></i><a href="../index.ipynb" style="font-size: 20px"> Go back to index</a>

**Source code:** https://github.com/osscar-org/quantum-mechanics/blob/master/notebook/band-theory/free_electron.ipynb

The main objective of this notebook is to demonstrate the band structure of the free-electron model in a periodic lattice.

<hr style="height:1px;border:none;color:#cccccc;background-color:#cccccc;" />

## **Goals**

* Familiarize oneself with the free-electron model.
* Examine the electronic band structure of the free-electron model for different crystal structures.

## **Background theory**

[More on the background theory.](./theory/theory_free_electron.ipynb)

## **Tasks and exercises**

<ol style="text-align: justify;font-size:15px">
    <li> Can you describe the shape of the band structure in the 1st Brillouin zone?
    <details>
    <summary style="color: red">Solution</summary>
    In the free-electron model, the dispersion relation between electronic energy and wavevector is given by $E=\frac{\hbar^2k^2}{2m}$. Consequently, the shape of the bands is parabolic.
    </details>
    </li>
    <li> Which properties of a material are best captured by the free-electron model?
    <details>
    <summary style="color: red">Solution</summary>
    As the free-electron model neglects the effect of the ionic potential on the electrons, the material properties that depend primarily on the kinetic energy of the conduction electrons are those best described by the model.
    </details>
    </li>
    <li> Look at the band structure plots for different crystal structures by toggling the "Cell type" radio buttons. Why is the band structure associated with the BCC crystal structure much denser than that of the simple cubic cell (i.e., why are there so many more bands in the energy range considered for BCC compared to SC)?
    <details>
    <summary style="color: red">Solution</summary>
    Recalling that the energy eigenvalues are given by $\large E = \frac{\hbar^2(\vec{k}+\vec{G})^2}{2m}$, we can see that the increased density of bands for BCC is due to its Brillouin zone giving rise to a larger number of G-vectors with small magnitudes. This in turn increases the number of low-lying energy bands.
    </details>
    </li>
</ol>

<hr style="height:1px;border:none;color:#cccccc;background-color:#cccccc;" />

```
import numpy as np
import seekpath
import re
import matplotlib
from ase.dft.dos import linear_tetrahedron_integration as lti
from ase.dft.kpoints import monkhorst_pack
from ase.cell import Cell
from scipy.stats import multivariate_normal
from widget_bzvisualizer import BZVisualizer

def prettify(label):
    """
    Prettifier for matplotlib, using LaTeX syntax

    :param label: a string to prettify
    """
    label = (
        label
        .replace('GAMMA', r'$\Gamma$')
        .replace('DELTA', r'$\Delta$')
        .replace('LAMBDA', r'$\Lambda$')
        .replace('SIGMA', r'$\Sigma$')
    )
    label = re.sub(r'_(.?)', r'$_{\1}$', label)
    return label

def _get_band_energies(kpoints_list, b1, b2, b3, g_vectors_range):
    energy_data_curves = np.zeros(((2*g_vectors_range+1)**3, len(kpoints_list)), dtype=np.float_)
    cnt = 0
    for g_i in range(-g_vectors_range, g_vectors_range+1):
        for g_j in range(-g_vectors_range, g_vectors_range+1):
            for g_k in range(-g_vectors_range, g_vectors_range+1):
                g_vector = b1 * g_i + b2 * g_j + b3 * g_k
                energy_data_curves[cnt] = np.sum(0.5*(kpoints_list + g_vector)**2, axis=1)  # This is k^2 - NOTE: units to be double checked!
                cnt += 1
    # bands are ordered as follows: first band, second band, ...
    return energy_data_curves

def _compute_dos(kpts, G, ranges):
    eigs = []
    n = ranges
    for i in range(-n, n+1):
        for j in range(-n, n+1):
            for k in range(-n, n+1):
                g_vector = i*G[0] + j*G[1] + k*G[2]
                eigs.append(np.sum(0.5*(kpts + g_vector)**2, axis=3))
    eigs = np.moveaxis(eigs, 0, -1)
    return eigs

def _compute_total_kpts(kpts, G, ranges):
    tot_kpts = []
    n = ranges
    for i in range(-n, n+1):
        for j in range(-n, n+1):
            for k in range(-n, n+1):
                g_vector = i*G[0] + j*G[1] + k*G[2]
                tot_kpts.extend(kpts + g_vector)
    return np.array(tot_kpts)

def get_bands(real_lattice_bohr, reference_distance=0.025, g_vectors_range=3):
    # Simple way to get automatically the band path:
    # I go back to real space, just put a single atom at the origin,
    # then go back with seekpath.
    # NOTE! This might not give the most general path, as e.g. there are two
    # options for cubic FCC (cF1 and cF2 in seekpath).
    # But this should be general enough for this tool.
    structure = (real_lattice_bohr, [[0., 0., 0.]], [1])  # Use a H atom at the origin

    seekpath_path = seekpath.get_explicit_k_path(structure, reference_distance=reference_distance)
    b1, b2, b3 = np.array(seekpath_path['reciprocal_primitive_lattice'])

    all_kpoints_x = np.array(seekpath_path['explicit_kpoints_linearcoord'])
    all_kpoints_list = np.array(seekpath_path['explicit_kpoints_abs'])

    segments_data = []
    for segment_indices in seekpath_path['explicit_segments']:
        start_label = seekpath_path['explicit_kpoints_labels'][segment_indices[0]]
        end_label = seekpath_path['explicit_kpoints_labels'][segment_indices[1]-1]

        kpoints_x = all_kpoints_x[slice(*segment_indices)]
        kpoints_list = all_kpoints_list[slice(*segment_indices)]

        energy_bands = _get_band_energies(kpoints_list, b1, b2, b3, g_vectors_range)

        segments_data.append({
            'start_label': start_label,
            'end_label': end_label,
            'kpoints_list': kpoints_list,
            'kpoints_x': kpoints_x,
            'energy_bands': energy_bands,
            'b1': b1,
            'b2': b2,
            'b3': b3,
        })

    return segments_data

%matplotlib widget
import time
import matplotlib.pyplot as plt
from ipywidgets import Output, Button, RadioButtons, IntSlider, HBox, VBox, Checkbox, Label, FloatSlider

alat_bohr = 7.72

lattices = np.zeros((3, 3, 3))
lattices[0] = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) * alat_bohr / 2.0
lattices[1] = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]]) * alat_bohr / 2.0
lattices[2] = np.array([[-1, 1, 1], [1, -1, 1], [1, 1, -1]]) * alat_bohr / 2.0

real_lattice_bohr = lattices[0]

bz = BZVisualizer(real_lattice_bohr, [[0.0, 0.0, 0.0]], [1], True, height='400px')
#G = Cell(real_lattice_bohr).reciprocal()*2*np.pi

style = {'description_width': 'initial'}
output = Output()

cell_type = RadioButtons(options=['Simple cubic', 'FCC', 'BCC'], value='Simple cubic', description="Cell type:")
nkpt = IntSlider(value=4, min=4, max=11, description="Number of k-points:", style=style)
grange = IntSlider(value=0, min=0, max=3, description="G-vector range:", style=style)
gcov = FloatSlider(value=0.5, min=0.1, max=1.0, description="Gaussian covariance:", style=style)

def on_celltype_changed(c):
    global real_lattice_bohr
    real_lattice_bohr = lattices[cell_type.index]
    ax.clear()
    plot_bandstructure('bands')
    bz.cell = real_lattice_bohr.tolist()

cell_type.observe(on_celltype_changed, names='value')

def plot_bandstructure(c):
    global G, segments_data, lbands
    segments_data = get_bands(real_lattice_bohr)
    G = np.array([segments_data[0]['b1'], segments_data[0]['b2'], segments_data[0]['b3']])

    x_ticks = []
    x_labels = []
    lbands = []

    for segment_data in segments_data:
        if not x_labels:
            x_labels.append(prettify(segment_data['start_label']))
            x_ticks.append(segment_data['kpoints_x'][0])
        else:
            if x_labels[-1] != prettify(segment_data['start_label']):
                x_labels[-1] += "|" + prettify(segment_data['start_label'])
        x_labels.append(prettify(segment_data['end_label']))
        x_ticks.append(segment_data['kpoints_x'][-1])

        for energy_band in segment_data['energy_bands']:
            line, = ax.plot(segment_data['kpoints_x'], energy_band, 'k')
            lbands.append(line)

    ax.set_ylim([0, 5])
    ax.yaxis.tick_right()
    ax.yaxis.set_label_position("right")
    ax.set_ylabel('Free-electron energy (eV)')
    ax.set_xlim([np.min(x_ticks), np.max(x_ticks)])
    ax.set_xticks(x_ticks)
    ax.set_xticklabels(x_labels)
    ax.grid(axis='x', color='red', linestyle='-', linewidth=0.5)
    fig.tight_layout()

    update_bands_color('bands')

def update_bands_color(c):
    n = 3
    shape = (nkpt.value, nkpt.value, nkpt.value)
    kpts = np.dot(monkhorst_pack(shape), G).reshape(shape + (3,))
    eigs = _compute_dos(kpts, G, grange.value)

    index = 0
    for segment_data in segments_data:
        for i in range(-n, n+1):
            for j in range(-n, n+1):
                for k in range(-n, n+1):
                    if abs(i) <= grange.value and abs(j) <= grange.value and abs(k) <= grange.value:
                        lbands[index].set_color('r')
                    else:
                        lbands[index].set_color('k')
                    index += 1

grange.observe(update_bands_color, names="value")

with output:
    global fig, ax
    fig, ax = plt.subplots()
    fig.set_size_inches(3.7, 5.0)
    fig.canvas.header_visible = False
    fig.canvas.layout.width = "430px"
    plot_bandstructure('bands')
    plt.show()

label1 = Label(value="Compute DOS by different methods:")
label2 = Label(value="(the number of k-points in all three dimensions)")
label3 = Label(value="(the number of G vector ranges in all three dimensions)")

display(HBox([VBox([bz, cell_type]), output]))
```

<details open>
<summary style="font-size: 22px;"><b>Legend</b></summary>

<p style="text-align: justify;font-size:15px">
    The 1st Brillouin zone of the selected cell is shown on the left. The path along which the band structure is calculated is indicated with blue vectors, and the sampled k-points are shown as red dots. The figure on the right shows the calculated band structure. We provide three kinds of cell structure: simple cubic, face-centered cubic (FCC) and body-centered cubic (BCC). Use the radio buttons to select the cell type. The number of k-points and the G-vector range can be tuned with the sliders.
</p>
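To make the BCC-versus-SC argument from the tasks section quantitative, the short sketch below counts how many reciprocal-lattice vectors $\vec{G}$ fall below a given kinetic-energy cutoff for the two cells. It reuses the lattice constant from the code above and builds the reciprocal lattice directly with NumPy (equivalent to the commented `Cell(...).reciprocal()*2*np.pi` line); the cutoff value and the search range are arbitrary illustrative choices.

```
# Count G-vectors with 0.5*|G|^2 below an (arbitrary) energy cutoff for SC and BCC cells.
# More low-|G| vectors means more low-lying free-electron bands in the plotted energy window.
import numpy as np

alat_bohr = 7.72
cells = {
    "SC":  np.eye(3) * alat_bohr / 2.0,
    "BCC": np.array([[-1, 1, 1], [1, -1, 1], [1, 1, -1]]) * alat_bohr / 2.0,
}

cutoff = 5.0  # illustrative cutoff, same arbitrary units as 0.5*|k+G|^2 above
for name, cell in cells.items():
    # Rows of B are the reciprocal lattice vectors: B = 2*pi * (A^-1)^T
    B = 2 * np.pi * np.linalg.inv(cell).T
    count = 0
    n = 5  # search range of integer (i, j, k) triples
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                g = i * B[0] + j * B[1] + k * B[2]
                if 0.5 * g @ g <= cutoff:
                    count += 1
    print(f"{name}: {count} G-vectors with 0.5*|G|^2 <= {cutoff}")
```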
## Deterministic tractography

Deterministic tractography algorithms track streamlines by following a predictable path, such as the primary diffusion direction ($\lambda_1$).

In order to demonstrate how to perform deterministic tracking on a diffusion MRI dataset, we will build on the preprocessing presented in a previous episode and compute the diffusion tensor.

```
import os

import nibabel as nib
import numpy as np

from bids.layout import BIDSLayout

from dipy.io.gradients import read_bvals_bvecs
from dipy.core.gradients import gradient_table

dwi_layout = BIDSLayout("../../data/ds000221/derivatives/uncorrected_topup_eddy", validate=False)
gradient_layout = BIDSLayout("../../data/ds000221/", validate=False)

subj = '010006'

dwi_fname = dwi_layout.get(subject=subj, suffix='dwi', extension='nii.gz', return_type='file')[0]
bvec_fname = dwi_layout.get(subject=subj, extension='eddy_rotated_bvecs', return_type='file')[0]
bval_fname = gradient_layout.get(subject=subj, suffix='dwi', extension='bval', return_type='file')[0]

dwi_img = nib.load(dwi_fname)
affine = dwi_img.affine

bvals, bvecs = read_bvals_bvecs(bval_fname, bvec_fname)
gtab = gradient_table(bvals, bvecs)
```

We will now create a mask and constrain the fitting within the mask.

```
import dipy.reconst.dti as dti
from dipy.segment.mask import median_otsu

dwi_data = dwi_img.get_fdata()
dwi_data, dwi_mask = median_otsu(dwi_data, vol_idx=[0], numpass=1)  # Specify the volume index to the b0 volumes

dti_model = dti.TensorModel(gtab)
dti_fit = dti_model.fit(dwi_data, mask=dwi_mask)  # This step may take a while
```

We will perform tracking using a deterministic algorithm on tensor fields via `EuDX` [(Garyfallidis _et al._, 2012)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3518823/). `EuDX` makes use of the primary direction of the diffusion tensor to propagate streamlines from voxel to voxel, together with a stopping criterion based on the fractional anisotropy (FA).

We will first get the FA map and eigenvectors from our tensor fitting. In the background of the FA map the fit is not meaningful, as the measured signal there is primarily noise, so NaN (not-a-number) values may appear in the FA map. We can find these voxels with `numpy` and set them to 0.

```
# Create the directory to save the results
out_dir = "../../data/ds000221/derivatives/dwi/tractography/sub-%s/ses-01/dwi/" % subj

if not os.path.exists(out_dir):
    os.makedirs(out_dir)

fa_img = dti_fit.fa
evecs_img = dti_fit.evecs

fa_img[np.isnan(fa_img)] = 0

# Save the FA
fa_nii = nib.Nifti1Image(fa_img.astype(np.float32), affine)
nib.save(fa_nii, os.path.join(out_dir, 'fa.nii.gz'))

# Plot the FA
import matplotlib.pyplot as plt
from scipy import ndimage  # To rotate image for visualization purposes

%matplotlib inline

fig, ax = plt.subplots(1, 3, figsize=(10, 10))
ax[0].imshow(ndimage.rotate(fa_img[:, fa_img.shape[1]//2, :], 90, reshape=False))
ax[1].imshow(ndimage.rotate(fa_img[fa_img.shape[0]//2, :, :], 90, reshape=False))
ax[2].imshow(ndimage.rotate(fa_img[:, :, fa_img.shape[-1]//2], 90, reshape=False))

fig.savefig(os.path.join(out_dir, "fa.png"), dpi=300, bbox_inches="tight")
plt.show()
```

One of the inputs of `EuDX` is the discretized voxel directions on a unit sphere. Therefore, it is necessary to discretize the eigenvectors before providing them to `EuDX`. We will use an evenly distributed sphere of 362 points via the `get_sphere` function.

```
from dipy.data import get_sphere

sphere = get_sphere('symmetric362')
```

We will determine the indices representing the discretized directions of the peaks by providing as input our tensor model, the diffusion data, the sphere, and a mask to restrict the processing to. Additionally, we will set the minimum angle between directions, the maximum number of peaks to return (1 for the tensor model), and the relative peak threshold (returning peaks greater than this value).

_Note: This step may take a while to run._

```
from dipy.direction import peaks_from_model

peak_indices = peaks_from_model(model=dti_model, data=dwi_data, sphere=sphere,
                                relative_peak_threshold=.2, min_separation_angle=25,
                                mask=dwi_mask, npeaks=2)
```

Additionally, we will apply a stopping criterion for our tracking based on the FA map. That is, we will stop tracking when we reach a voxel where FA is below 0.2.

```
from dipy.tracking.stopping_criterion import ThresholdStoppingCriterion

stopping_criterion = ThresholdStoppingCriterion(fa_img, .2)
```

We also need to specify where to "seed" (begin) the fiber tracking. Generally, the seeds chosen will depend on the pathways one is interested in modelling. In this example, we will create a seed mask from the FA map by thresholding above our stopping criterion.

```
from dipy.tracking import utils

seed_mask = fa_img.copy()
seed_mask[seed_mask >= 0.2] = 1
seed_mask[seed_mask < 0.2] = 0

seeds = utils.seeds_from_mask(seed_mask, affine=affine, density=1)
```

Now we can apply the tracking algorithm! As mentioned previously, `EuDX` is the fiber tracking algorithm that we will be using. The most important parameters to include are the indices representing the discretized directions of the peaks (`peak_indices`), the stopping criterion, the seeds, the affine transformation, and the step size to take when tracking.

```
from dipy.tracking.local_tracking import LocalTracking
from dipy.tracking.streamline import Streamlines

# Initialize local tracking - computation happens in the next step.
streamlines_generator = LocalTracking(peak_indices, stopping_criterion, seeds, affine=affine, step_size=.5)

# Generate streamlines object
streamlines = Streamlines(streamlines_generator)
```

We have just created a deterministic set of streamlines using the `EuDX` algorithm, mapping the human connectome (tractography). We can save the streamlines as a Trackvis file so they can be loaded into other software for visualization or further analysis. To do so, we need to capture the tractogram state with `StatefulTractogram` and write the file with `save_tractogram`. Note that we have to specify the space in which to save the tractogram.

```
from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.io.streamline import save_tractogram

sft = StatefulTractogram(streamlines, dwi_img, Space.RASMM)

# Save the tractogram
save_tractogram(sft, os.path.join(out_dir, "tractogram_deterministic_EuDX.trk"))
```

We can then generate the streamlines 3D scene using the `fury` python package, and visualize the scene's contents with `matplotlib`.

```
# NBVAL_SKIP
from fury import actor, colormap

from utils.visualization_utils import generate_anatomical_volume_figure

# Plot the tractogram

# Build the representation of the data
streamlines_actor = actor.line(streamlines, colormap.line_colors(streamlines))

# Generate the figure
fig = generate_anatomical_volume_figure(streamlines_actor)

fig.savefig(os.path.join(out_dir, "tractogram_deterministic_EuDX.png"), dpi=300, bbox_inches="tight")
plt.show()
```
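As a follow-up sketch (not part of the original lesson), the saved `.trk` file can later be read back with DIPY's `load_tractogram`, which is convenient when the streamlines are analysed in a separate session; the file name and reference image simply mirror those used above.

```
# Reload the saved tractogram in a later session (illustrative sketch).
# The reference image (dwi_img) must describe the same space the tractogram was saved in.
from dipy.io.streamline import load_tractogram

sft_loaded = load_tractogram(os.path.join(out_dir, "tractogram_deterministic_EuDX.trk"), dwi_img)
print(f"Loaded {len(sft_loaded.streamlines)} streamlines")
```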
# Databases and Python

A lot of modern applications can be modeled in the following way:

![modern-apps](media/modern-apps.png)

When a user opens an application frontend, either on a desktop or a mobile device, the code on the frontend starts calling the backend via APIs for various data. The backend then queries the database for all the necessary data, does all the computations with it, and sends the data back to the frontend for a nice visualization.

One of the modern choices for an application backend is Python, and Python has a lot of functionality for connecting to and querying databases.

The most common way for the frontend to interact with the backend is through APIs. Often, the whole API server is simply referred to as the backend. The backend usually makes hundreds or thousands of calls to the database. But what is a database?

A `database` is a collection of data organized in a structured way. The data is, in most cases, stored electronically on a computer. The way that data is stored in a database is managed by a `database management system` (DBMS). This means that 'running a database' refers to a process which is responsible for storing and retrieving data from a database. As the database grows in data, so does the size of the database files on the computer.

To build a good intuitive understanding of a database, think of it as an extension and generalization of a spreadsheet. Databases and spreadsheets (such as Microsoft Excel) are both convenient ways to store information. The primary differences between the two are:

* How the data is stored and manipulated.
* Who can access the data.
* How much data can be stored.

Spreadsheets were originally designed for one user, and their characteristics reflect that. They're great for a single user or a small number of users who don't need to do a lot of incredibly complicated data manipulation. Databases, on the other hand, are designed to hold much larger collections of organized information, in massive amounts. Databases allow multiple users at the same time to quickly and securely access and query the data using highly complex logic and language.

One of the main advantages of databases is that they can be queried and updated. A database query is a request for data from a given database. The standard way to query data is to use the `Structured Query Language`, or `SQL` for short.

# Database management systems (DBMS)

There are several very popular DBMS in the industry today. The most popular ones are:

* MySQL
* PostgreSQL
* MongoDB
* SQLite
* Oracle
* Microsoft SQL Server

Every DBMS has its own set of features and a slight variation of the SQL language. In this book, we will be using the open-source PostgreSQL DBMS: https://www.postgresql.org/about/.

# PSQL docker image

We will run the PostgreSQL (PSQL for short) database from a docker container. By using docker, we skip all the headaches of installing various dependencies and setting up the environment. We will just use the official docker image of PostgreSQL: https://hub.docker.com/_/postgres.

The docker-compose file:

```
!cat psql-docker/docker-compose.yml
```

We will use version 14.1 of PostgreSQL. The container will listen on port 5432 on the local machine and forward traffic to port 5432 in the container; 5432 is the default port for PostgreSQL. Any data added to the container will be stored on the local machine in the directory where the docker-compose file is: `./data/db`.

To spin up the PSQL database, run the command (from the directory where the file is):

```
docker-compose up
```

The `docker ps` command should show that the container is running:

```
CONTAINER ID   IMAGE           COMMAND                  CREATED              STATUS              PORTS                                       NAMES
e812f56fd0e7   postgres:14.1   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   psql-docker_db_1
```

Now we can access the database using any software we like and start putting data in it.

# SQLAlchemy

One of the most popular libraries for connecting to and using databases with Python is SQLAlchemy. SQLAlchemy is a Python library that provides a high-level abstraction layer on top of many popular databases. To read the extensive documentation, visit https://docs.sqlalchemy.org/en/14/

To connect to most databases, we need to know the database `URI`: Uniform Resource Identifier. The URI is a string that contains all the information needed to connect to the database. It has the following form:

```
<dialect+driver>://<username>:<password>@<host>:<port>/<database-name>
```

For example, to connect to the PostgreSQL database, we need to use the following URI:

```
postgresql://user:password@localhost:5432/postgres
```

Make sure that the docker container is running before trying out the code snippet below.

```
# Importing the sqlalchemy library
from sqlalchemy import create_engine

# Query making
import pandas as pd

# Creating the engine
engine = create_engine('postgresql://user:password@localhost:5432/postgres')

# Making a query with pandas and the created engine
pd.read_sql("SELECT * FROM pg_catalog.pg_tables;", engine)
```

The engine object has all the methods and attributes that you need to interact with the database. As we can see from the query results, PSQL has a lot of default tables that are ready to be used.

## Creating a database

We can use SQLAlchemy to fully manage database creation and deletion, table management and data manipulation in a given database management system.

The created SQLAlchemy engine object is stored in the variable `engine`. It has a method called connect() which returns a connection object. The connection object has a method called execute() which executes a given SQL query. Let's start by creating a database:

```
# Initiating the connection object from the created engine
conn = engine.connect()

# When initiated, the connection object is in a state of an active transaction.
# We need to commit it in order to make the changes permanent.
conn.execute("commit")

# Giving the database a name
db_name = 'testdb'

# Getting all the database names
databases = pd.read_sql("SELECT datname FROM pg_database;", engine)["datname"].values.tolist()

if db_name in databases:
    print(f'Database {db_name} already exists')
else:
    print('Database does not exist; Creating it')
    # Creating the database
    conn.execute("CREATE DATABASE testdb")
    conn.commit()

# Listing the databases
print(f"\nDatabases available in PSQL:")
pd.read_sql("SELECT datname FROM pg_database;", engine)["datname"].values.tolist()
```

## Creating a table

When working with databases, there are some key definitions that one must know:

A `database schema` defines the structure of a database (tables and relationships).

A `database model` is a class that represents a table in a database.

A `model schema` defines the structure of a database model (column types and relationships in a table).

A `database migration` is a set of instructions that tell a database management system how to create or alter a database schema.
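For concreteness, a hand-written migration for a single table ultimately boils down to a raw SQL statement. The sketch below is illustration only: it uses a hypothetical `people` table that does not appear elsewhere in this notebook, and it assumes `engine` points at the database in which the table should live. It follows the same raw-string `conn.execute(...)` style used above.

```
# Illustration only: what a hand-written migration could look like with raw SQL,
# using a hypothetical "people" table (this notebook creates its real tables via the ORM instead).
conn = engine.connect()
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS people (
        id SERIAL PRIMARY KEY,
        name TEXT,
        surname TEXT,
        created_at TIMESTAMP
    )
    """
)
conn.execute("commit")
```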
When we are creating new tables in SQL, we first define a new model schema and then create a migration to apply the model schema to the database. To put it simply, we define the column names, types and relationships with other tables, and then tell PSQL to create the table. This process is simplified by using SQLAlchemy.

The term **ORM** is short for Object Relational Mapping. It is a way to map Python classes to tables in a database. SQLAlchemy provides a way to create ORM classes for a given database. We inherit all the methods and attributes from the SQLAlchemy base class `Base` and then define the table name and the columns. Internally, the `Base` class has all the functionality needed to command the PSQL system to create the table.

```
# Importing the Column object from the sqlalchemy library
from sqlalchemy import Column

# Importing the declarative_base class from the sqlalchemy library which is used to create
# the base class for all the custom made classes
from sqlalchemy.orm import declarative_base

# Importing the column types
from sqlalchemy import String, Integer, DateTime
```

When a developer creates new functionality or changes existing functionality, the first place where the changes happen is usually the database tables. For example, let's create a table called `users` using a class `Users`.

```
# Connecting to the newly created testdb database
engine = create_engine('postgresql://user:password@localhost:5432/testdb')

# Defining the declarative base class which we will use as a template for all the custom made classes
Base = declarative_base()

class Users(Base):
    # The __tablename__ attribute is used to name the table in the database
    __tablename__ = 'users'

    # Listing out all the columns in the table
    id = Column(Integer, primary_key=True)
    name = Column(String)
    surname = Column(String)
    created_at = Column(DateTime)
    updated_at = Column(DateTime)

    def __init__(self, name, surname, created_at, updated_at):
        """
        Constructor to initialize the class.

        Every object created is a ROW in the database.

        The column `id` will automatically be created as the primary key and will increase by 1 with each new row created.
        """
        self.name = name
        self.surname = surname
        self.created_at = created_at
        self.updated_at = updated_at

    def get_full_name(self):
        """
        Method to get the full name of the user
        """
        return f"{self.name} {self.surname}"

    def get_create_datetime(self):
        """
        Method to get the exact time when the user was created
        """
        return self.created_at.strftime("%Y-%m-%d %H:%M:%S")

# Listing the tables available in the database
tables = pd.read_sql("SELECT * FROM pg_catalog.pg_tables;", engine)["tablename"].values.tolist()

if 'users' in tables:
    print('Table users already exists')
else:
    # To create the table in SQLAlchemy we will use the Base.metadata.create_all() method
    Base.metadata.create_all(engine)
```

The class `Users` inherits everything from the `Base` class. That is why SQLAlchemy knows how to deal with it and manage it.

To create a user object, we use the class constructor.

```
# Importing the package for date wrangling
from datetime import datetime

# Creating the user
eligijus = Users('Eligijus', 'Bujokas', datetime.now(), datetime.now())

# Getting the full name
print(f"Full name of user: {eligijus.get_full_name()}")

# Getting the exact time when the user was created
print(f"Time of user creation: {eligijus.get_create_datetime()}")
```

So far, the object `eligijus` only lives in the computer's memory. The database has no record of such a user.
In order to insert a new row into the `users` table, we need to create a new session and add the user object to it.

## Updating tables

To transfer objects from computer memory to the database, we can use SQLAlchemy's session object. The session object has a method called `add()` which adds an object to the session, and a method called `commit()` which actually writes the changes to the database.

We can open and close sessions as many times as we want and in many places in the code, although the best practice is to use the same session object for all the code in a given code block.

```
# Importing the session object from the sqlalchemy library
from sqlalchemy.orm import sessionmaker

# Creating the session class and "linking it" with our connection
Session = sessionmaker(bind=engine)

# Creating the session object with the needed methods
session = Session()

# Adding the user eligijus to the session
session.add(eligijus)

# Uploading to database
session.commit()
```

Under the hood, SQLAlchemy handles all the data type checks and data conversion from Python to PSQL and vice versa.

To see our created user in the database, we can use the `query()` method of the session object (or just plain Pandas).

```
# Listing all the users in the database (using Pandas)
pd.read_sql("SELECT * FROM users;", engine)

# Listing all the users in the database (using query())
users = session.query(Users).all()
[(user.id, user.name, user.surname) for user in users]
```

We can create even more users and add them to the database.

```
# Creating and uploading
bligijus = Users('Bligijus', 'Eujokas', datetime.now(), datetime.now())
session.add(bligijus)
session.commit()

# Listing all the users in the database (using Pandas)
pd.read_sql("SELECT * FROM users;", engine)
```

## Querying the database

A big feature of SQLAlchemy is the seamless transition between records in the database and objects in computer memory. To query the database, we use the `query()` method of the session object. The `query()` method takes a model class as an argument; we can then chain SQL-like filtering criteria with `filter()`, and calling `.all()` returns a list of objects that match the query.

For example, let's search for all the users with the name `Eligijus`.

```
# Listing all the users with the name Eligijus in the database using session.query()
users = session.query(Users).filter(Users.name == 'Eligijus').all()

# We can then interact with the objects in the same way as with any Python object
if len(users) == 0:
    print('No users found')
else:
    print(f'Found {len(users)} users with the name Eligijus')
    print([(user.id, user.name, user.surname) for user in users])

# Extracting the first one
user = users[0]

# Getting the full name
print(f"Full name of user: {user.get_full_name()}")

# Getting the exact time when the user was created
print(f"Time of user creation: {user.get_create_datetime()}")
```

We can very simply update the user information. Because the user is an object, we can directly change its attributes using the `.` operator. Let's change the surname of Eligijus to the most common surname in Lithuania: `Kazlauskas`.

```
# Changing the attributes of the user
user.surname = 'Kazlauskas'

# Specifying the exact time when the user was updated
user.updated_at = datetime.now()

# Uploading the changes to the database
session.commit()
```

What is neat here is that the session object tracks all the changes made to objects in memory, so we do not need to specify which user was changed: the `commit()` method will automatically update the database.

To see the final state of the table, we can list all the rows using pandas.
```
# Getting all the rows in the database
pd.read_sql("SELECT * FROM users;", engine)
```

# Closing thoughts

A modern API can hardly be imagined without a corresponding database that holds all the request and response data; without the necessary tables in the database, the API would have nothing to serve in its responses.

Python is a very flexible tool for creating and managing databases. The object-oriented programming paradigm is a great way to manage records in a database, because we can view each `row` in a database table as an `object`, with the column values as the object's attributes. Additionally, we can specify complex functionality in the table classes using Python and use it with each individual row.

In the next chapter, we will connect the API of the root calculation to the database.
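As a closing aside, deleting rows follows the same session pattern as adding and updating them. The sketch below is only an illustration, not part of the original chapter; it reuses the `session`, `Users` and `engine` objects defined above and removes the second user we created.

```
# Looking up the row we want to remove; first() returns a single object or None
user_to_remove = session.query(Users).filter(Users.name == 'Bligijus').first()

if user_to_remove is not None:
    # Mark the object for deletion; nothing is sent to PSQL yet
    session.delete(user_to_remove)
    # Write the change to the database
    session.commit()

# Listing the remaining rows to confirm the deletion
pd.read_sql("SELECT * FROM users;", engine)
```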
<a href="https://colab.research.google.com/github/reevutrprog/TRPROG/blob/master/Lab7a.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Lab 7

A simple chatbot

**1.** Create a function that asks the user for his name and says Hello, John Doe

**2.** Create a function called chat that asks the user what his problem is and answers "yes of course"

**3.** Create a function called chat that asks the user what his problem is and answers randomly among 6 possible answers

**4.** Put it all together and improve

**1.** Create a function that asks the user for his name and says Hello, John Doe

```
# Create a function that asks the user for his name and says Hello, John Doe
def chatbot1():
    a = ""
    while a == "":
        a = input("computer: Hi! What is your name?\nuser: ")
    print("Computer: Hello, " + a + "! ")

chatbot1()
```

**2.** Create a function called chat that asks the user what his problem is and answers "yes of course"

```
# Create a function called chat that asks the user what his problem is and answers "yes of course"
def chatbot2():
    problem = ""
    while problem != "exit":
        problem = input("Bot: what is your problem?\nuser:")
        if problem != "":
            print("Bot: yes of course")
            break  # Luxury break, not really needed

chatbot2()
```

**3.** Create a function called chat that asks the user what his problem is and answers randomly among 6 possible answers

```
# Create a function called chat that asks the user what his problem is and answers randomly among 6 possible answers
import random

text = ["Yes", "No", "Whatever", "Orange Hair is the best hair ", "By a Lot", "There are 8 cats in ISEG"]

def chatbot3():
    prob = ""
    while prob != "exit":
        prob = input("bot: what seems to be the problem?\nuser: ")
        if prob != "":
            answer = random.randint(0, 5)
            print("bot: " + text[answer])

chatbot3()
```

**4.** Put it all together and improve

```
chatbot1()
chatbot2()
chatbot3()

import random

class chatbot:

    List_of_answers = ['Run', 'Sleep', 'Eat', 'Drink', 'Smile', 'Rest']

    def chat(self):
        problem = input("Computer: Hi! What is your problem? ")
        while problem == "":
            problem = input("Computer: What is your problem? ")
        # random.choice picks one suggestion; note the self. prefix needed to reach the class attribute
        print('Computer: You should ' + random.choice(self.List_of_answers))

    def Chat(self):
        problem = input("Computer: Hi! What is your problem? \nUser: ")
        while problem == "":
            problem = input("Computer: What is your problem? \nUser: ")
        print("Yes of course")

    def Hi(self):
        name = input("Computer: Hi! What is your name? \nUser: ")
        while name == "":
            name = input("Computer: Hi! What is your name? \nUser: ")
        print("Computer: Hello, " + name + ".")

Chatroom = chatbot()

import random

def inpppp():
    str_1, str_2, str_3 = "", "", ""
    str_1 = input("What is your name")
```
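Once the class is defined and the `Chatroom` object exists, the three methods can be chained into a single conversation. This is just a usage sketch of the code above, assuming the `random.choice(self.List_of_answers)` fix shown in the class; the ordering of the calls is one possibility among many.

```
def run_chatroom():
    # Greet the user, answer one problem with the fixed reply, then one with a random suggestion
    Chatroom.Hi()
    Chatroom.Chat()
    Chatroom.chat()

run_chatroom()
```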
<table class="ee-notebook-buttons" align="left">
    <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Visualization/nwi_wetlands_symbology.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
    <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Visualization/nwi_wetlands_symbology.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
    <td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Visualization/nwi_wetlands_symbology.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
    <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Visualization/nwi_wetlands_symbology.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>

## Install Earth Engine API

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.

The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.

```
import subprocess

try:
    import geehydro
except ImportError:
    print('geehydro package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
```

Import libraries

```
import ee
import folium
import geehydro
```

Authenticate and initialize the Earth Engine API. You only need to authenticate the Earth Engine API once.

```
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```

## Create an interactive map

This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
``` Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ``` ## Add Earth Engine Python script ``` # NWI legend: https://www.fws.gov/wetlands/Data/Mapper-Wetlands-Legend.html def nwi_add_color(fc): emergent = ee.FeatureCollection( fc.filter(ee.Filter.eq('WETLAND_TY', 'Freshwater Emergent Wetland'))) emergent = emergent.map(lambda f: f.set( 'R', 127).set('G', 195).set('B', 28)) # print(emergent.first()) forested = fc.filter(ee.Filter.eq( 'WETLAND_TY', 'Freshwater Forested/Shrub Wetland')) forested = forested.map(lambda f: f.set('R', 0).set('G', 136).set('B', 55)) pond = fc.filter(ee.Filter.eq('WETLAND_TY', 'Freshwater Pond')) pond = pond.map(lambda f: f.set('R', 104).set('G', 140).set('B', 192)) lake = fc.filter(ee.Filter.eq('WETLAND_TY', 'Lake')) lake = lake.map(lambda f: f.set('R', 19).set('G', 0).set('B', 124)) riverine = fc.filter(ee.Filter.eq('WETLAND_TY', 'Riverine')) riverine = riverine.map(lambda f: f.set( 'R', 1).set('G', 144).set('B', 191)) fc = ee.FeatureCollection(emergent.merge( forested).merge(pond).merge(lake).merge(riverine)) base = ee.Image(0).mask(0).toInt8() img = base.paint(fc, 'R') \ .addBands(base.paint(fc, 'G') .addBands(base.paint(fc, 'B'))) return img fromFT = ee.FeatureCollection("users/wqs/Pipestem/Pipestem_HUC10") Map.addLayer(ee.Image().paint(fromFT, 0, 2), {}, 'Watershed') huc8_id = '10160002' nwi_asset_path = 'users/wqs/NWI-HU8/HU8_' + huc8_id + '_Wetlands' # NWI wetlands for the clicked watershed clicked_nwi_huc = ee.FeatureCollection(nwi_asset_path) nwi_color = nwi_add_color(clicked_nwi_huc) Map.centerObject(clicked_nwi_huc, 10) Map.addLayer(nwi_color, {'gamma': 0.3, 'opacity': 0.7}, 'NWI Wetlands Color') ``` ## Display Earth Engine data layers ``` Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ```
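If the environment does not render the folium map inline, the map object can also be written to a standalone HTML file and opened in a browser. `save()` is a standard folium method (not specific to geehydro), and the file name below is just an example:

```
# Export the interactive folium map to a standalone HTML file (example file name)
Map.save('nwi_wetlands_map.html')
```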
<a href="https://colab.research.google.com/github/ktarun1681/Fake-News-Detection/blob/main/Project_1_Fake_News_Detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

About the dataset:

train.csv: A full training dataset with the following attributes:

**id:** unique id for a news article

**title:** the title of a news article

**author:** author of the news article

**text:** the text of the article; could be incomplete

**label:** a label that marks the article as potentially unreliable

1: unreliable (fake)

0: reliable (real)

```
# importing the dependencies
import zipfile
import pandas as pd
import numpy as np
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# read the dataset using the zip compression
news_df = pd.read_csv('/content/drive/MyDrive/Datasets/train.csv.zip', compression='zip')

news_df.head()
```

Importing the stopwords:

```
import nltk
nltk.download('stopwords')
```

Printing the stopwords in English:

```
print(stopwords.words('english'))
```

Data Pre-Processing:

```
news_df.shape

news_df.head()

news_df.info()

news_df.describe()

# counting the number of missing values in each column:
news_df.isnull().sum()
```

Here we have some missing values. Since the dataset is quite large, we could simply drop the rows that contain them, but dropping rows from a small dataset can lead to worse predictions, so instead we will fill the missing text fields.

```
# replacing the missing values with blank strings
news_df = news_df.fillna(' ')

# checking again if we have null values or not
news_df.isnull().sum()
```

As we can see, there are no missing values left now, so we can move on to the next step.

We will use the author name and the news title to build our model. We could have also used the text column, but it contains long paragraphs that would take much longer to process. So, we will merge those two columns into a single feature.

```
# merging the author name and news title for our model
news_df['content'] = news_df['author'] + ' ' + news_df['title']

news_df.head()

print(news_df['content'])

# separating the data and the label (target) for training our model
X = news_df.drop(columns='label')
Y = news_df['label']

print(X)
print(Y)
```

**Stemming**: Stemming is the process of reducing a word to its word stem, the base form that remains after removing prefixes and suffixes. Stemming is important in natural language understanding (NLU) and natural language processing (NLP).

Example: actor, actress, and acting can all be reduced to the common root act.

```
port_stem = PorterStemmer()

def stemming(content):
    stemmed_content = re.sub('[^A-Za-z]', ' ', content)
    stemmed_content = stemmed_content.lower()
    stemmed_content = stemmed_content.split()
    stemmed_content = [port_stem.stem(word) for word in stemmed_content if word not in stopwords.words('english')]
    stemmed_content = ' '.join(stemmed_content)
    return stemmed_content
```

In the above function we have defined the stemming process, which converts all words to their root form and removes punctuation and symbols. The first line of the function keeps only alphabetic characters, replacing symbols and digits with spaces.
The second line of the function converts the content to lowercase. We then split it into a list of words, apply stemming to every word that is not a stopword, and join the stemmed words back into a single string.

```
news_df['content'] = news_df['content'].apply(stemming)

news_df['content']
```

Separating the data and label:

```
X = news_df['content'].values
Y = news_df['label'].values

print(X)
print(Y)

# converting the textual data into numerical data:
vectorizer = TfidfVectorizer()
vectorizer.fit(X)

X = vectorizer.transform(X)

print(X)
```

Splitting the data into training and test data:

```
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, stratify=Y, random_state=1)

X_train.shape, X_test.shape

Y_train.shape, Y_test.shape
```

Training the logistic regression model:

```
model = LogisticRegression()

model.fit(X_train, Y_train)
```

Evaluation of the accuracy score:

```
# accuracy on the training data
X_train_prediction = model.predict(X_train)
training_data_accuracy = accuracy_score(X_train_prediction, Y_train)

print('Accuracy score of the training data: ', training_data_accuracy)
```

We got an accuracy score of about 98 percent on the training data, which is very good. Logistic regression is a simple and strong baseline for binary classification like this; we could try other models as well, but with 98 percent accuracy there is little need to. We still have to make sure that the model performs well on the test data; otherwise the model may be overfitted (or underfitted), depending on how the two accuracies compare.

```
# accuracy on the test data
X_test_prediction = model.predict(X_test)
test_data_accuracy = accuracy_score(X_test_prediction, Y_test)

print('Accuracy score of the test data: ', test_data_accuracy)
```

Here we can see that, besides the high accuracy score on the training data, we are also getting great performance on the test data.

Making the predictive system:

```
X_new = X_test[152]

prediction = model.predict(X_new)
print(prediction)

if (prediction[0] == 0):
    print('The news is Real')
else:
    print('The news is Fake')

print(Y_test[152])
```

We can see that the model predicts correctly here: the predicted label matches the true label for this test article. Since the model has not seen the test data during training, checking predictions on it is a fair test.
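As a quick sanity check, the same comparison can be repeated for a handful of test rows. This is just a sketch using the objects defined above; the row indices are arbitrary examples.

```
# Comparing predictions with true labels for a few arbitrary test rows
for idx in [0, 50, 152, 500]:
    pred = model.predict(X_test[idx])[0]
    actual = Y_test[idx]
    verdict = 'Fake' if pred == 1 else 'Real'
    print(f'Row {idx}: predicted {verdict} (label={pred}, actual label={actual})')
```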
# Loading Image Data

So far we've been working with fairly artificial datasets that you wouldn't typically be using in real projects. Instead, you'll likely be dealing with full-sized images like you'd get from smart phone cameras. In this notebook, we'll look at how to load images and use them to train neural networks.

We'll be using a [dataset of cat and dog photos](https://www.kaggle.com/c/dogs-vs-cats) available from Kaggle. Here are a couple example images:

<img src='assets/dog_cat.png'>

We'll use this dataset to train a neural network that can differentiate between cats and dogs. These days it doesn't seem like a big accomplishment, but five years ago it was a serious challenge for computer vision systems.

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torchvision import datasets, transforms

import helper
```

The easiest way to load image data is with `datasets.ImageFolder` from `torchvision` ([documentation](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder)). In general you'll use `ImageFolder` like so:

```python
dataset = datasets.ImageFolder('path/to/data', transform=transform)
```

where `'path/to/data'` is the file path to the data directory and `transform` is a sequence of processing steps built with the [`transforms`](http://pytorch.org/docs/master/torchvision/transforms.html) module from `torchvision`. ImageFolder expects the files and directories to be constructed like so:

```
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
```

where each class has its own directory (`cat` and `dog`) for the images. The images are then labeled with the class taken from the directory name. So here, the image `123.png` would be loaded with the class label `cat`. You can download the dataset already structured like this [from here](https://s3.amazonaws.com/content.udacity-data.com/nd089/Cat_Dog_data.zip). I've also split it into a training set and test set.

### Transforms

When you load in the data with `ImageFolder`, you'll need to define some transforms. For example, the images are different sizes but we'll need them to all be the same size for training. You can either resize them with `transforms.Resize()` or crop with `transforms.CenterCrop()`, `transforms.RandomResizedCrop()`, etc. We'll also need to convert the images to PyTorch tensors with `transforms.ToTensor()`. Typically you'll combine these transforms into a pipeline with `transforms.Compose()`, which accepts a list of transforms and runs them in sequence. It looks something like this to scale, then crop, then convert to a tensor:

```python
transform = transforms.Compose([transforms.Resize(255),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
```

There are plenty of transforms available, I'll cover more in a bit and you can read through the [documentation](http://pytorch.org/docs/master/torchvision/transforms.html).

### Data Loaders

With the `ImageFolder` loaded, you have to pass it to a [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader). The `DataLoader` takes a dataset (such as you would get from `ImageFolder`) and returns batches of images and the corresponding labels. You can set various parameters like the batch size and if the data is shuffled after each epoch.
```python dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) ``` Here `dataloader` is a [generator](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/). To get data out of it, you need to loop through it or convert it to an iterator and call `next()`. ```python # Looping through it, get a batch on each loop for images, labels in dataloader: pass # Get one batch images, labels = next(iter(dataloader)) ``` >**Exercise:** Load images from the `Cat_Dog_data/train` folder, define a few transforms, then build the dataloader. ``` data_dir = 'Cat_Dog_data/train' transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor()]) dataset = datasets.ImageFolder(data_dir, transform=transform) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) # Run this to test your data loader images, labels = next(iter(dataloader)) helper.imshow(images[0], normalize=False) ``` If you loaded the data correctly, you should see something like this (your image will be different): <img src='assets/cat_cropped.png' width=244> ## Data Augmentation A common strategy for training neural networks is to introduce randomness in the input data itself. For example, you can randomly rotate, mirror, scale, and/or crop your images during training. This will help your network generalize as it's seeing the same images but in different locations, with different sizes, in different orientations, etc. To randomly rotate, scale and crop, then flip your images you would define your transforms like this: ```python train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) ``` You'll also typically want to normalize images with `transforms.Normalize`. You pass in a list of means and list of standard deviations, then the color channels are normalized like so ```input[channel] = (input[channel] - mean[channel]) / std[channel]``` Subtracting `mean` centers the data around zero and dividing by `std` squishes the values to be between -1 and 1. Normalizing helps keep the network weights near zero which in turn makes backpropagation more stable. Without normalization, networks will tend to fail to learn. You can find a list of all [the available transforms here](http://pytorch.org/docs/0.3.0/torchvision/transforms.html). When you're testing however, you'll want to use images that aren't altered other than normalizing. So, for validation/test images, you'll typically just resize and crop. >**Exercise:** Define transforms for training data and testing data below. Leave off normalization for now. 
```
data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor()])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor()])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)

# change this to the trainloader or testloader
data_iter = iter(testloader)

images, labels = next(data_iter)
fig, axes = plt.subplots(figsize=(10,4), ncols=4)
for ii in range(4):
    ax = axes[ii]
    helper.imshow(images[ii], ax=ax, normalize=False)
```

Your transformed images should look something like this.

<center>Training examples:</center>
<img src='assets/train_examples.png' width=500px>

<center>Testing examples:</center>
<img src='assets/test_examples.png' width=500px>

At this point you should be able to load data for training and testing. Now, you should try building a network that can classify cats vs dogs. This is quite a bit more complicated than before with the MNIST and Fashion-MNIST datasets. To be honest, you probably won't get it to work with a fully-connected network, no matter how deep. These images have three color channels and are at a higher resolution (so far you've seen 28x28 images which are tiny).

In the next part, I'll show you how to use a pre-trained network to build a model that can actually solve this problem.

```
# Optional TODO: Attempt to build a network to classify cats vs dogs from this dataset
```
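For the optional TODO, here is a minimal fully-connected sketch. As the text above notes, a network like this is not expected to reach good accuracy on full-size color images; the layer sizes, learning rate and number of batches are arbitrary choices for illustration, not the course solution.

```
from torch import nn, optim

# Flattened 3x224x224 images -> two small hidden layers -> 2 classes (cat, dog)
model = nn.Sequential(nn.Linear(3*224*224, 256),
                      nn.ReLU(),
                      nn.Linear(256, 64),
                      nn.ReLU(),
                      nn.Linear(64, 2),
                      nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

# A few training batches, just to show the loop structure
for ii, (images, labels) in enumerate(trainloader):
    # Flatten each image into a single long vector
    images = images.view(images.shape[0], -1)

    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

    if ii == 10:  # stop early; training this model properly is not the point here
        break
```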
``` %load_ext autoreload %autoreload 2 from timeit import default_timer as timer from functools import partial from random import choices import logging import sdgym from sdgym import load_dataset from sdgym import benchmark from sdgym import load_dataset import numpy as np import pandas as pd import matplotlib.pyplot as plt import networkx as nx import pgmpy from pgmpy.models import BayesianModel from pgmpy.estimators import TreeSearch, HillClimbSearch, BicScore, ExhaustiveSearch, BayesianEstimator from pgmpy.sampling import BayesianModelSampling import xgboost as xgb from xgboost import XGBClassifier from sklearn.neural_network import MLPClassifier from sklearn.svm import SVC from sklearn.model_selection import train_test_split from sklearn.isotonic import IsotonicRegression from sklearn.metrics import ( mutual_info_score, adjusted_mutual_info_score, normalized_mutual_info_score, ) from scipy import interpolate from synthsonic.models.kde_utils import kde_smooth_peaks_1dim, kde_smooth_peaks from synthsonic.models.kde_copula_nn_pdf import KDECopulaNNPdf import matplotlib.pyplot as plt %matplotlib inline logging.basicConfig(level=logging.INFO) dataset_name = 'mnist28' data, categorical_columns, ordinal_columns = load_dataset(dataset_name) data.shape categorical_columns, ordinal_columns for i in range(data.shape[1]): print (i, len(np.unique(data[:, i]))) twos = [i for i in range(data.shape[1]) if len(np.unique(data[:, i])) > 1] plt.hist(data[:, 784], bins=40) def zero_weight(x, y): return 0. kde = KDECopulaNNPdf( use_KDE=False, # categorical_columns=categorical_columns+ordinal_columns, distinct_threshold=100, n_uniform_bins=30, n_calibration_bins=100, test_size=0.2, estimator_type='tan', # 'chow-liu', # 'tan' #edge_weights_fn=zero_weight, class_node=784, # clf=clf, # ordering='mi', ) kde = kde.fit(data) import matplotlib.pyplot as plt %matplotlib inline kde._calibrate_classifier(kde.hist_p0_, kde.hist_p1_, kde.bin_edges_, validation_plots=True) X_gen = kde.sample_no_weights(n_samples=data.shape[0], show_progress=True, mode="cheap") p2 = kde.clf_predict_proba(X_gen)[:, 1] plt.figure(figsize=(12,7)) plt.hist(p2, bins=100, log=True); df = pd.DataFrame(X_gen) df.to_csv('mnist28_gen.csv', index=False) ``` # run sdgym ``` def KDECopulaNNPdf_Synthesizer(real_data, categorical_columns, ordinal_columns): all_features = list(range(real_data.shape[1])) numerical_features = list(set(all_features) - set(categorical_columns + ordinal_columns)) data = np.float64(real_data) n_samples = data.shape[0] n_features = data.shape[1] clf = xgb.XGBClassifier( n_estimators=250, reg_lambda=1, gamma=0, max_depth=9 ) # clf = MLPClassifier(alpha=0.1, random_state=0, max_iter=1000, early_stopping=True) def zero_weight(x, y): return 0. 
kde = KDECopulaNNPdf( use_KDE=False, categorical_columns=categorical_columns+ordinal_columns, distinct_threshold=-1, n_uniform_bins=30, n_calibration_bins=100, test_size=0.2, edge_weights_fn=zero_weight, class_node=144 # clf=clf, # ordering='mi', ) kde = kde.fit(data) # X_gen, sample_weight = kde.sample(n_samples) X_gen = kde.sample_no_weights(n_samples, show_progress=True, mode='cheap') X_gen[:, categorical_columns+ordinal_columns] = np.round(X_gen[:, categorical_columns+ordinal_columns]) X_gen = np.float32(X_gen) return X_gen def KDECopulaNNPdf_SynthesizerInteger(real_data, categorical_columns, ordinal_columns): """Census has integer only...""" data = KDECopulaNNPdf_Synthesizer(real_data, categorical_columns, ordinal_columns) data = np.round(data) return data def KDECopulaNNPdf_csv(real_data, categorical_columns, ordinal_columns): """Census has integer only...""" df = pd.read_csv('mnist28_gen.csv') data = df.values data = np.round(data) return data from sdgym.synthesizers import ( CLBNSynthesizer, CTGANSynthesizer, IdentitySynthesizer, IndependentSynthesizer, MedganSynthesizer, PrivBNSynthesizer, TableganSynthesizer, TVAESynthesizer, UniformSynthesizer, VEEGANSynthesizer) all_synthesizers = [ #IdentitySynthesizer, #IndependentSynthesizer, #KDECopulaNNPdf_Synthesizer, #KDECopulaNNPdf_SynthesizerInteger, KDECopulaNNPdf_csv, ] scores = sdgym.run(synthesizers=all_synthesizers, datasets=[dataset_name], iterations=1) scores ```
# Matplotlib

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb

pokemon = pd.read_csv('../Matplotlib/data/pokemon.csv')
print(pokemon.shape)
pokemon.head(10)

# Get the first color of the palette and set it as the color of the chart
base_color = sb.color_palette()[0]

# generate graph
sb.countplot(data = pokemon,
             x = 'generation_id',
             color = base_color
            )

# generate ordered graph
sb.countplot(data = pokemon,
             x = 'generation_id',
             color = base_color,
             order = (5,1,4,2,7,6)
            )

# Get order automatically
gen_order = pokemon['generation_id'].value_counts().index
gen_order

# generate ordered graph
sb.countplot(data = pokemon,
             x = 'generation_id',
             color = base_color,
             order = gen_order
            );

# generate ordered graph for type_1 pokemons
gen_order = pokemon['type_1'].value_counts().index
sb.countplot(data = pokemon,
             x = 'type_1',
             color = base_color,
             order = gen_order
            );

# rotate values on the x axis
plt.xticks(rotation = 90);

# make a horizontal bar chart
sb.countplot(data = pokemon,
             y = 'type_1',
             color = base_color,
             order = gen_order
            );
```

# Absolute vs Relative Frequency

```
import pandas as pd
import seaborn as sb

pokemon = pd.read_csv('../Matplotlib/data/pokemon.csv')
pokemon.head(10)

pkmn_types = pokemon.melt(id_vars = ['id'],
                          value_vars = ['type_1', 'type_2'],
                          var_name = 'type_level',
                          value_name = 'type')
pkmn_types[802:812]

type_counts = pkmn_types['type'].value_counts()
type_order = type_counts.index

base_color = sb.color_palette()[0]
sb.countplot(data=pkmn_types, y = 'type', color= base_color, order = type_order)
```

# Relative way of processing the same data

```
# number of pokemons
n_pokemon = pokemon.shape[0]
max_type_count = type_counts[0]
# print(pokemon.shape[0])

max_prop = max_type_count / n_pokemon
# print(max_prop)

tick_props = np.arange(0, max_prop, 0.02)
tick_names = ['{:0.2f}'.format(v) for v in tick_props]

base_color = sb.color_palette()[0]
sb.countplot(data=pkmn_types, y = 'type', color= base_color, order = type_order)
plt.xticks(tick_props * n_pokemon, tick_names);
plt.xlabel('Proportion');

n_pokemon = pokemon.shape[0]
type_counts = pkmn_types['type'].value_counts()
type_order = type_counts.index

base_color = sb.color_palette()[0]
sb.countplot(data=pkmn_types, y = 'type', color= base_color, order = type_order)

for i in range(type_counts.shape[0]):
    count = type_counts[i]
    pct_string = f'{100*count/n_pokemon:0.2f}%'
    plt.text(count+1, i, pct_string, va = 'center');
```

# Counting Missing Data

One interesting way we can apply bar charts is through the visualization of missing data. We can use pandas functions to create a table with the number of missing values in each column.

```
# What if we want to visualize these missing value counts? We could treat the variable names as levels of
# a categorical variable, and create a resulting bar plot. However, since the data is not in its tidy,
# unsummarized form, we need to make use of a different plotting function. Seaborn's barplot function is
# built to depict a summary of one quantitative variable against levels of a second, qualitative variable,
# but can be used here.

na_counts = pokemon.isna().sum()
base_color = sb.color_palette()[0]
sb.barplot(na_counts.index.values, na_counts, color = base_color)
plt.xticks(rotation = 90);

# The first argument to the function contains the x-values (column names), the second argument the y-values
# (our counts).
```

# Pie Charts

```
# code for the pie chart seen above
sorted_counts = pokemon['type_2'].value_counts()

plt.pie(sorted_counts, labels = sorted_counts.index, startangle = 90, counterclock = False);

plt.pie(sorted_counts, labels = sorted_counts.index, startangle = 90, counterclock = False, wedgeprops = {'width' : 0.4});
```

# Histograms
## For Quantitative variables

```
# bin edges spaced 1 unit apart along the x axis
bins = np.arange(0, pokemon['speed'].max()+1, 1)
plt.hist(data=pokemon, x='speed', bins=bins);

sb.distplot(pokemon['speed']);

sb.distplot(pokemon['speed'], kde=False);

bin_edges = np.arange(0, pokemon['speed'].max()+1, 1)
sb.distplot(pokemon['speed'], bins=bin_edges, kde=False,
            hist_kws = {'alpha':1});
```

# Figures, Axes, and Subplots

```
fig = plt.figure()
ax = fig.add_axes([.125, .125, .775, .775])
ax.hist(data=pokemon, x = 'speed');

fig = plt.figure()
ax = fig.add_axes([.125, .125, .775, .775])
sb.countplot(data=pokemon, x = 'speed', ax=ax);

plt.figure(figsize = [10, 5]) # larger figure size for subplots

# example of somewhat too-large bin size
plt.subplot(1, 2, 1) # 1 row, 2 cols, subplot 1
bin_edges = np.arange(0, pokemon['speed'].max()+4, 4)
plt.hist(data = pokemon, x = 'speed', bins = bin_edges);

# example of somewhat too-small bin size
plt.subplot(1, 2, 2) # 1 row, 2 cols, subplot 2
bin_edges = np.arange(0, pokemon['speed'].max()+1/4, 1/4)
plt.hist(data = pokemon, x = 'speed', bins = bin_edges);

fig, axes = plt.subplots(3, 4) # grid of 3x4 subplots
axes = axes.flatten() # reshape from 3x4 array into 12-element vector
for i in range(12):
    plt.sca(axes[i]) # set the current Axes
    plt.text(0.5, 0.5, i+1) # print conventional subplot index number to middle of Axes
```

# Choosing a Plot for Discrete Data

```
plt.figure(figsize = [10, 5])

die_rolls = pokemon['speed']

# histogram on the left, bin edges on integers
plt.subplot(1, 2, 1)
bin_edges = np.arange(2, 12+1.1, 1) # note `+1.1`, see below
plt.hist(die_rolls, bins = bin_edges);
plt.xticks(np.arange(2, 12+1, 1));

# histogram on the right, bin edges between integers
plt.subplot(1, 2, 2)
bin_edges = np.arange(1.5, 12.5+1, 1)
plt.hist(die_rolls, bins = bin_edges);
plt.xticks(np.arange(2, 12+1, 1));

bin_edges = np.arange(1.5, 12.5+1, 1)
plt.hist(die_rolls, bins = bin_edges, rwidth = 0.7)
plt.xticks(np.arange(2, 12+1, 1));
```

# Descriptive Statistics, Outliers, and Axis Limits

As you create your plots and perform your exploration, make sure that you pay attention to what the plots tell you that go beyond just the basic descriptive statistics. Note any aspects of the data like number of modes and skew, and note the presence of outliers in the data for further investigation.

Related to the latter point, you might need to change the limits or scale of what is plotted to take a closer look at the underlying patterns in the data. This page covers the topic of axis limits; the next covers scales and transformations.

In order to change a histogram's axis limits, you can add a Matplotlib xlim call to your code. The function takes a tuple of two numbers specifying the upper and lower bounds of the x-axis range. Alternatively, the xlim function can be called with two numeric arguments to the same result.

```
# plt.figure(figsize = [10, 5])

plt.subplot(1,2,1)
bins = np.arange(0, pokemon['height'].max()+0.5, 0.5)
plt.hist(data=pokemon, x = 'height', bins=bins);

# Zoom in on the data with xlim (from 0 to 6).
plt.subplot(1,2,2) bins = np.arange(0, pokemon['height'].max()+0.2, 0.2) plt.hist(data=pokemon, x = 'height', bins=bins); plt.xlim((0,6)); ``` # Scales and Transformations ``` plt.figure(figsize = [10, 5]) data = pokemon['speed'] # left histogram: data plotted in natural units plt.subplot(1, 2, 1) bin_edges = np.arange(0, data.max()+100, 100) plt.hist(data, bins = bin_edges) plt.xlabel('values') # right histogram: data plotted after direct log transformation plt.subplot(1, 2, 2) log_data = np.log10(data) # direct data transform log_bin_edges = np.arange(0.8, log_data.max()+0.1, 0.1) plt.hist(log_data, bins = log_bin_edges) plt.xlabel('log(values)') plt.figure(figsize = [10, 5]) # left histogram: data plotted in natural units plt.subplot(1, 2, 1) bin_edges = np.arange(0, pokemon['weight'].max()+40, 40) plt.hist(data = pokemon, x = 'weight', bins = bin_edges) plt.xlabel('weight') # right histogram: data plotted after direct log transformation plt.subplot(1, 2, 2) #log_data = np.log10(pokemon['weight']) # direct data transform #log_bin_edges = np.arange(0.8, pokemon['weight'].max()+0.1, 0.1) log_bin_edges = 10 ** np.arange(-1, 3+0.1, 0.1) plt.hist(data = pokemon, x = 'weight', bins = log_bin_edges); plt.xscale('log'); plt.xlabel('log(weight)'); #plt.subplot(2, 1, 1) bins = 10 ** np.arange(-1, 3+0.1, 0.1) ticks = [0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000] labels = ['{}'.format(v) for v in ticks] plt.hist(data = pokemon, x = 'weight', bins = bins); plt.xscale('log'); plt.xlabel('log(weight)'); # Note: xticks are specified after xscale plt.xticks(ticks, labels); ``` # Scales and Transformations Certain data distributions will find themselves amenable to scale transformations. The most common example of this is data that follows an approximately log-normal distribution. This is data that, in their natural units, can look highly skewed: lots of points with low values, with a very long tail of data points with large values. However, after applying a logarithmic transform to the data, the data will follow a normal distribution. (If you need a refresher on the logarithm function, check out this lesson on Khan Academy.) ``` def sqrt_trans(x, inverse = False): """ transformation helper function """ if not inverse: return np.sqrt(x) else: return x ** 2 data = pokemon['weight'] bin_edges = np.arange(0, sqrt_trans(data.max())+1, 1) plt.hist(data.apply(sqrt_trans), bins = bin_edges) tick_locs = np.arange(0, sqrt_trans(data.max())+10, 10) plt.xticks(tick_locs, sqrt_trans(tick_locs, inverse = True).astype(int)); ``` # Extra: Kernel Density Estimation ``` # Earlier in this lesson, you saw an example of kernel # density estimation (KDE) through the use of seaborn's # distplot function, which plots a KDE on top of a histogram. sb.distplot(pokemon['weight']) ``` Kernel density estimation is one way of estimating the probability density function of a variable. In a KDE plot, you can think of each observation as replaced by a small ‘lump’ of area. Stacking these lumps all together produces the final density curve. The default settings use a normal-distribution kernel, but most software that can produce KDE plots also include other kernel function options. Seaborn's distplot function calls another function, kdeplot, to generate the KDE. The demonstration code below also uses a third function called by distplot for illustration, rugplot. In a rugplot, data points are depicted as dashes on a number line. 
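To make the "lump of area" intuition concrete, here is a small sketch (not from the original lesson) that builds a density estimate by hand, summing one Gaussian bump per observation. It only assumes `numpy`, `matplotlib`, and the toy data used in the seaborn demo below.

```
# Hand-rolled KDE: place a small Gaussian "lump" on each observation and sum them.
obs = np.array([0.0, 3.0, 4.5, 8.0])   # same toy data used in the demo below
bw = 1.0                                # bandwidth controls the width of each lump

xs = np.linspace(-3, 11, 200)
lumps = np.exp(-0.5 * ((xs[:, None] - obs[None, :]) / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
density = lumps.sum(axis=1) / len(obs)  # average of the lumps = the KDE curve

plt.plot(xs, lumps / len(obs), color='gray', alpha=0.5)  # individual lumps
plt.plot(xs, density)                                    # final density curve
plt.xlabel('value');
```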
``` data = [0.0, 3.0, 4.5, 8.0] plt.figure(figsize = [12, 5]) # left plot: showing kde lumps with the default settings plt.subplot(1, 3, 1) sb.distplot(data, hist = False, rug = True, rug_kws = {'color' : 'r'}) # central plot: kde with narrow bandwidth to show individual probability lumps plt.subplot(1, 3, 2) sb.distplot(data, hist = False, rug = True, rug_kws = {'color' : 'r'}, kde_kws = {'bw' : 1}) # right plot: choosing a different, triangular kernel function (lump shape) plt.subplot(1, 3, 3) sb.distplot(data, hist = False, rug = True, rug_kws = {'color' : 'r'}, kde_kws = {'bw' : 1.5, 'kernel' : 'tri'}) ``` # Scatterplots and Correlation ## Scatterplots for quantitavtive vs quantitative variables ``` fuel_econ = pd.read_csv('../Matplotlib/data/fuel_econ.csv') print(fuel_econ.shape) fuel_econ.head() plt.scatter(data=fuel_econ, x = 'displ', y = 'comb') plt.xlabel('Displacement (1)') plt.ylabel('Combined Fuel Eff. (mpg)') sb.regplot(data=fuel_econ, x = 'displ', y = 'comb') def log_trans(x, inverse = False): if not inverse: return np.log10(x) else: return np.power(10, x) sb.regplot(fuel_econ['displ'], fuel_econ['comb'].apply(log_trans)) tick_locs = [10, 20, 50, 100, 200, 500] plt.yticks(log_trans(tick_locs), tick_locs) ``` # Overplotting, Transparency, and Jitter ``` sb.regplot(data = fuel_econ, x = 'year', y = 'comb'); # adding jitter sb.regplot(data = fuel_econ, x = 'year', y = 'comb', x_jitter = 0.3); # transperacy sb.regplot(data = fuel_econ, x = 'year', y = 'comb', x_jitter =0.3, scatter_kws = {'alpha' : 1/20}); sb.regplot(data = fuel_econ, x = 'year', y = 'comb', x_jitter = 0.2, y_jitter = 0.2, scatter_kws = {'alpha' : 1/3}); ``` # Heat Maps ``` # quantitative vs quantitative variable plt.hist2d(data = fuel_econ, x = 'displ', y = 'comb'); plt.colorbar() plt.xlabel('Displacement (1)') plt.ylabel('Combined Fuel Eff. (mpg)') plt.hist2d(data = fuel_econ, x = 'displ', y = 'comb', cmin = 0.5, cmap = 'viridis_r', ); plt.colorbar() plt.xlabel('Displacement (1)') plt.ylabel('Combined Fuel Eff. (mpg)') # check out description and choose bins fuel_econ[['displ', 'comb']].describe() bins_x = np.arange(0.6, 7+0.3, 0.3) bins_y = np.arange(12, 58+3, 3) plt.hist2d(data = fuel_econ, x = 'displ', y = 'comb', cmin = 0.5, cmap = 'viridis_r', bins = [bins_x ,bins_y] ); plt.colorbar() plt.xlabel('Displacement (1)') plt.ylabel('Combined Fuel Eff. 
(mpg)') # hist2d returns a number of different variables, including an array of counts bins_x = np.arange(0.6, 7+0.3, 0.3) bins_y = np.arange(12, 58+3, 3) h2d = plt.hist2d(data = fuel_econ, x = 'displ', y = 'comb', bins = [bins_x, bins_y], cmap = 'viridis_r', cmin = 0.5) counts = h2d[0] # loop through the cell counts and add text annotations for each for i in range(counts.shape[0]): for j in range(counts.shape[1]): c = counts[i,j] if c >= 7: # increase visibility on darkest cells plt.text(bins_x[i]+0.5, bins_y[j]+0.5, int(c), ha = 'center', va = 'center', color = 'white') elif c > 0: plt.text(bins_x[i]+0.5, bins_y[j]+0.5, int(c), ha = 'center', va = 'center', color = 'black') ``` # Violin Plots ## Violin Plots for Quantitative vx qualitative variables ``` sedan_classes = ['Minicompact Cars', 'Subcompact Cars', 'Compact Cars', 'Midsize Cars', 'Large Cars'] vclasses = pd.api.types.CategoricalDtype(ordered = True, categories = sedan_classes) fuel_econ['VClass'] = fuel_econ['VClass'].astype(vclasses); sb.violinplot(data = fuel_econ, x = 'VClass', y = 'comb', inner = 'quartile'); plt.xticks(rotation = 30); ``` # Box Plots ``` sb.boxplot(data = fuel_econ, x = 'VClass', y = 'comb'); plt.xticks(rotation = 30); ``` # Clustered Bar Chards ## qualitative vs quantitative variable ``` fuel_econ['trans_type'] = fuel_econ['trans'].apply(lambda x: x.split()[0]) # sb.heatmap(ct_counts); ct_counts = fuel_econ.groupby(['VClass', 'trans_type']).size() ct_counts = ct_counts.reset_index(name='count') ct_counts = ct_counts.pivot(index = 'VClass', columns = 'trans_type', values = 'count') sb.heatmap(ct_counts, annot = True, fmt = 'd'); sb.countplot(data = fuel_econ, x = 'VClass', hue = 'trans_type'); plt.xticks(rotation = 15); ``` # Faceting ``` sb.violinplot(data = fuel_econ, x = 'VClass', y = 'comb', inner = 'quartile'); plt.xticks(rotation = 30); bins = np.arange(12, 58+2, 2) g = sb.FacetGrid(data = fuel_econ, col = 'VClass', col_wrap = 3, sharey = False); g.map(plt.hist, 'comb', bins = bins); ``` # Adaptation of Univariate Plots ``` hase_color = sb.color_palette()[0] sb.boxplot(data = fuel_econ, x = 'VClass', y = 'comb', color = base_color) plt.xticks(rotation=15); plt.ylabel('Avg. Combined Fuel Eff. (mpg)'); hase_color = sb.color_palette()[0] sb.barplot(data = fuel_econ, x = 'VClass', y = 'comb', color = base_color, ci = 'sd') # Some barplot options # errwidth = 0 - disable # ci = 'sd' - set to 'standard deviation' plt.xticks(rotation=15); plt.ylabel('Avg. Combined Fuel Eff. (mpg)'); hase_color = sb.color_palette()[0] sb.pointplot(data = fuel_econ, x = 'VClass', y = 'comb', color = base_color, ci = 'sd', linestyles = '') # Some pointplot options # linestyles = '' plt.xticks(rotation=15); plt.ylabel('Avg. Combined Fuel Eff. (mpg)'); ``` # Line Plots ``` plt.errorbar(data=fuel_econ, x = 'displ', y = 'comb'); plt.xticks(rotation=15); plt.ylabel('Avg. Combined Fuel Eff. (mpg)'); bins_e = np.arange(0.6, 7+0.7, 0.2) bins_c = bins_e[:-1] + 0.1 displ_binned = pd.cut(fuel_econ['displ'], bins_e, include_lowest=True) comb_mean = fuel_econ['comb'].groupby(displ_binned).mean() comb_std = fuel_econ['comb'].groupby(displ_binned).std() plt.errorbar(x = bins_c, y = comb_mean, yerr = comb_std); plt.xlabel('Displacement (1)') plt.ylabel('Avg. Combined Fuel Eff. 
(mpg)');

# compute statistics in a rolling window
df = fuel_econ
df_window = df.sort_values('displ').rolling(15)

x_winmean = df_window.mean()['displ']
y_median = df_window.median()['comb']
y_q1 = df_window.quantile(.25)['comb']
y_q3 = df_window.quantile(.75)['comb']

# plot the summarized data
base_color = sb.color_palette()[0]
line_color = sb.color_palette('dark')[0]
plt.scatter(data = df, x = 'displ', y = 'comb')
plt.errorbar(x = x_winmean, y = y_median, c = line_color)
plt.errorbar(x = x_winmean, y = y_q1, c = line_color, linestyle = '--')
plt.errorbar(x = x_winmean, y = y_q3, c = line_color, linestyle = '--')
plt.xlabel('displ')
plt.ylabel('comb')

def freq_poly(x, bins = 10, **kwargs):
    """ Custom frequency polygon / line plot code. """
    # set bin edges if none or int specified
    if type(bins) == int:
        bins = np.linspace(x.min(), x.max(), bins+1)
    bin_centers = (bins[1:] + bins[:-1]) / 2  # midpoints of the supplied bin edges

    # compute counts
    data_bins = pd.cut(x, bins, right = False, include_lowest = True)
    counts = x.groupby(data_bins).count()

    # create plot
    plt.errorbar(x = bin_centers, y = counts, **kwargs)

df = fuel_econ
bin_edges = np.arange(df['comb'].min(), df['comb'].max()+1, 1)

g = sb.FacetGrid(data = df, hue = 'displ', size = 5)
g.map(freq_poly, "comb", bins = bin_edges)
g.add_legend();
```
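For reference, a minimal standalone call of the custom `freq_poly` helper outside the FacetGrid (not part of the original lesson), assuming `fuel_econ` is still loaded as above:

```
# Frequency polygon of combined fuel efficiency for the whole dataset
bin_edges = np.arange(fuel_econ['comb'].min(), fuel_econ['comb'].max()+2, 2)
freq_poly(fuel_econ['comb'], bins = bin_edges, color = sb.color_palette()[0])
plt.xlabel('comb');
plt.ylabel('count');
```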
[![AnalyticsDojo](https://github.com/rpi-techfundamentals/spring2019-materials/blob/master/fig/final-logo.png?raw=1)](http://rpi.analyticsdojo.com) <center><h1>Introduction to Feature Creation & Dummy Variables</h1></center> <center><h3><a href = 'http://introml.analyticsdojo.com'>introml.analyticsdojo.com</a></h3></center> ``` !wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/train.csv !wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/test.csv ``` ## Feature Extraction Here we will talk about an important piece of machine learning: the extraction of quantitative features from data. By the end of this section you will - Know how features are extracted from real-world data. - See an example of extracting numerical features from textual data In addition, we will go over several basic tools within scikit-learn which can be used to accomplish the above tasks. ### What Are Features? ### Numerical Features Recall that data in scikit-learn is expected to be in two-dimensional arrays, of size **n_samples** $\times$ **n_features**. Previously, we looked at the iris dataset, which has 150 samples and 4 features ``` from sklearn.datasets import load_iris iris = load_iris() print(iris.data.shape) ``` These features are: - sepal length in cm - sepal width in cm - petal length in cm - petal width in cm Numerical features such as these are pretty straightforward: each sample contains a list of floating-point numbers corresponding to the features ### Categorical Features What if you have categorical features? For example, imagine there is data on the color of each iris: color in [red, blue, purple] You might be tempted to assign numbers to these features, i.e. *red=1, blue=2, purple=3* but in general **this is a bad idea**. Estimators tend to operate under the assumption that numerical features lie on some continuous scale, so, for example, 1 and 2 are more alike than 1 and 3, and this is often not the case for categorical features. In fact, the example above is a subcategory of "categorical" features, namely, "nominal" features. Nominal features don't imply an order, whereas "ordinal" features are categorical features that do imply an order. An example of ordinal features would be T-shirt sizes, e.g., XL > L > M > S. One work-around for parsing nominal features into a format that prevents the classification algorithm from asserting an order is the so-called one-hot encoding representation. Here, we give each category its own dimension. The enriched iris feature set would hence be in this case: - sepal length in cm - sepal width in cm - petal length in cm - petal width in cm - color=purple (1.0 or 0.0) - color=blue (1.0 or 0.0) - color=red (1.0 or 0.0) Note that using many of these categorical features may result in data which is better represented as a **sparse matrix**, as we'll see with the text classification example below. ### Derived Features Another common feature type are **derived features**, where some pre-processing step is applied to the data to generate features that are somehow more informative. Derived features may be based in **feature extraction** and **dimensionality reduction** (such as PCA or manifold learning), may be linear or nonlinear combinations of features (such as in polynomial regression), or may be some more sophisticated transform of the features. 
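Before moving on to a real dataset, here is a small sketch of the one-hot encoding idea described above, using pandas. The `color` column is hypothetical and not part of the real iris data; it only illustrates how each category becomes its own 0/1 column.

```
import pandas as pd

# Hypothetical iris-style colors, not part of the real dataset
colors = pd.DataFrame({'color': ['red', 'blue', 'purple', 'blue']})

# One-hot encoding: each category becomes its own 0/1 column
print(pd.get_dummies(colors, columns=['color']))
```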
### Combining Numerical and Categorical Features As an example of how to work with both categorical and numerical data, we will perform survival predicition for the passengers of the HMS Titanic. ``` import os import pandas as pd titanic = pd.read_csv('train.csv') print(titanic.columns) ``` Here is a broad description of the keys and what they mean: ``` pclass Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd) survival Survival (0 = No; 1 = Yes) name Name sex Sex age Age sibsp Number of Siblings/Spouses Aboard parch Number of Parents/Children Aboard ticket Ticket Number fare Passenger Fare cabin Cabin embarked Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton) boat Lifeboat body Body Identification Number home.dest Home/Destination ``` In general, it looks like `name`, `sex`, `cabin`, `embarked`, `boat`, `body`, and `homedest` may be candidates for categorical features, while the rest appear to be numerical features. We can also look at the first couple of rows in the dataset to get a better understanding: ``` titanic.head() ``` We clearly want to discard the "boat" and "body" columns for any classification into survived vs not survived as they already contain this information. The name is unique to each person (probably) and also non-informative. For a first try, we will use "pclass", "sibsp", "parch", "fare" and "embarked" as our features: ``` labels = titanic.Survived.values features = titanic[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']].copy() features.head() ``` The data now contains only useful features, but they are not in a format that the machine learning algorithms can understand. We need to transform the strings "male" and "female" into binary variables that indicate the gender, and similarly for "embarked". We can do that using the pandas ``get_dummies`` function: ``` featuremodel=pd.get_dummies(features) featuremodel ``` Notice that this includes N dummy variables. When we are modeling we will need N-1 categorical variables. ``` pd.get_dummies(features, drop_first=True).head() ``` This transformation successfully encoded the string columns. However, one might argue that the class is also a categorical variable. We can explicitly list the columns to encode using the ``columns`` parameter, and include ``pclass``: ``` features_dummies = pd.get_dummies(features, columns=['Pclass', 'Sex', 'Embarked'], drop_first=True) features_dummies #Transform from Pandas to numpy with .values data = features_dummies.values data type(data) ``` ## Feature Preprocessing with Scikit Learn Here we are going to look at a more efficient way to prepare our datasets using pipelines. ``` features.head() features.isna().sum() #Quick example to show how the data Imputer works. from sklearn.impute import SimpleImputer import numpy as np imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean') imp_mean=imp_mean.fit_transform([[7, 2, 3], [4, np.nan, 6], [10, 5, 9]]) imp_mean ``` A really useful function below. You will want to remember this one. 
```
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline, make_pipeline

missing_values = ['Age', 'Embarked']  # columns known to contain missing values (for reference)
features_num = ['Fare', 'Age']
features_cat = ['Sex', 'Embarked', 'Pclass', 'SibSp']

def pre_process_dataframe(df, numeric, categorical, missing=np.nan,
                          missing_num='mean', missing_cat='most_frequent'):
    """Use data imputers to fill in missing values, standardize numeric features,
    and one-hot encode categorical features.
    """
    # Create a data imputer for numeric values
    imp_num = SimpleImputer(missing_values=missing, strategy=missing_num)
    # Create a pipeline which imputes values and then uses the standard scaler
    pipe_num = make_pipeline(imp_num, StandardScaler())

    # Create a different imputer for categorical values
    imp_cat = SimpleImputer(missing_values=missing, strategy=missing_cat)
    pipe_cat = make_pipeline(imp_cat, OneHotEncoder(drop='first'))

    # Apply the numeric pipeline to the numeric columns and the categorical
    # pipeline to the categorical columns passed in as arguments
    preprocessor = make_column_transformer((pipe_num, numeric), (pipe_cat, categorical))
    return pd.DataFrame(preprocessor.fit_transform(df))

df = pre_process_dataframe(features, features_num, features_cat)
df
```
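As a possible next step (not shown in the original notebook), the preprocessed matrix could be fed to a simple classifier. This is only a sketch, assuming `df` and `labels` are the objects created above.

```
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# df: preprocessed features from pre_process_dataframe; labels: titanic.Survived.values
X_train, X_test, y_train, y_test = train_test_split(df, labels, test_size=0.3,
                                                    stratify=labels, random_state=1)

clf = LogisticRegression()
clf.fit(X_train, y_train)
print(f'Accuracy on held-out data: {clf.score(X_test, y_test):.3f}')
```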
# Data scraping

_Data scraping_, in its broadest sense, is a concept applied to obtaining data from one program by means of another, so as to extract high-value content that is, above all, easy for humans to interpret.

Nowadays, data scraping is a synonym for _web scraping_, since the broadest source for collecting data is the _web_. Thus, "scraping" data from the _web_ amounts to using scripts, programs, or APIs to obtain relevant data from sites, pages, blogs, repositories, or any other place reachable through connectivity and requests.

Through data scraping we can, among other things,

- collect prices of financial-market assets in real time;
- download public-health incident histories, such as the case records from the Covid-19 pandemic;
- locate news stories on the same topic across several media outlets;
- find the final score of every NBA game in the last 5 years.

During the course you have already dealt implicitly with data scraping when using, for example, the function `pandas.read_csv` to read a CSV file hosted on a website. In this chapter we give a brief introduction to data scraping using `BeautifulSoup`, one of the most popular Python modules for dissecting _web_ pages, so as to broaden our understanding of this area of knowledge, which is extremely relevant to data science.

## HTML 5

Most pages on the internet today are written in a language called _HTML_ (_HyperText Markup Language_), developed in the early 1990s as the basic language of the internet. The [W3C](www.w3.org) consortium (_The World Wide Web Consortium_) is the body that maintains open standards for _web_ developers. Since 2014, _HTML 5_ has been the version recommended by the W3C for all site creators. In 2019, the W3C and the WHATWG signed an [agreement](https://www.w3.org/blog/news/archives/7753) to standardize HTML, producing a specification [document](https://html.spec.whatwg.org/#abstract).

```{note}
To learn more about the history of HTML and the legacy of Tim Berners-Lee, the inventor of the World Wide Web (WWW), read this [text](https://www.w3.org/People/Raggett/book4/ch02.html).
```

An HTML document is structured by means of _elements_ enclosed in a pair of _tags_ and looks like this:

```html
<!DOCTYPE html>
<html lang="pt-br">
<head>
  <title>Introdução à Ciência de Dados</title>
</head>
<body>
  <h1>Ciência de Dados no século XXI</h1>
  <p>A História começa neste <a href="historia-icd.html">link</a>.</p>
  <!-- comentário -->
</body>
</html>
```

In the example, `<head>` and `</head>` are the opening and closing _tags_ for the _head_ section of the document.

### The DOM tree

Web browsers interpret the HTML code and turn it into a "tree". This tree characterizes the Document Object Model of the page, or, formally, the DOM (_Document Object Model_). As a DOM tree, the code above would become something like:

```html
|- DOCTYPE: html
|- html lang="pt-br"
|- head
| |- #text:
| |- title
| | |- #text: Introdução à Ciência de Dados
| |- #text:
|- #text:
|- body
|- #text:
|- h1
| |- #text: Ciência de Dados no século XXI
|- #text:
|- p
| |- #text: A História começa neste
| |- a href="historia-icd.html"
| | #text: link
| |- #text:
|- #comment: comentário
|- #text:
```

### Tags

There are many _tags_ available in HTML.
The table below lists the ones that appear in the previous example code, together with their descriptions.

|Tag|Description|
|---|---|
|`<!DOCTYPE>` | defines the document type|
|`<html>` | defines the root of an HTML document|
|`<head>` | encloses the metadata (information) about the document|
|`<title>` | defines a title for the document|
|`<body>` | defines the body of the document|
|`<h1>` | defines a first-level heading (section)|
|`<p>` | defines a paragraph|
|`<a>` | defines a _hyperlink_ (anchor)|

```{note}
A reference document for HTML tags is available [here](https://www.w3schools.com/TAGs/).
```

#### Writing HTML in the Jupyter Notebook

We can write HTML code inside a _Jupyter Notebook_ and render the formatted output using the magic command `%%html` or `IPython.display`. For example:

```
%%html
<!-- Nada aqui -->
<h2>Brincando com <sup>H</sup><sub>T</sub><sup>M</sup>L!</h2>
<p> Muito legal adicionar <b>negrito</b>, <i>itálico</i> emojis como <span> &#128540; </span> e um <a href="none.html">link</a> para lugar algum!</p>

from IPython.display import HTML
HTML("<!-- Nada aqui --> \
<h2>Brincando com <sup>H</sup><sub>T</sub><sup>M</sup>L!</h2> \
<p> Muito legal adicionar <b>negrito</b>, \
<i>itálico</i> emojis como <span> &#128540; </span> e \
um <a href='none.html'>link</a> para lugar algum!</p>")
```

## APIs

Data scraping can be streamlined through an _API_ (_Application Program Interface_). APIs are mechanisms (interfaces) that use third-party applications to make "connections" and pull data. APIs resemble modules, but they do not merely offer a set of functions; rather, they expose a program capable of operating on large amounts of data. Although an API can work locally (_offline_), its usefulness for data scraping shows best when it connects to _web_ applications (_online_).

Many institutions provide APIs so that developers can collect data. At the beginning of this chapter we mentioned some applications of data scraping; some of them are only possible through APIs. Google, Facebook, Twitter, Yahoo, and Elsevier are some of the companies that provide APIs for applications in web search, social networks, finance, and scientific literature. In Brazil, relevant examples include

- the [B3 APIs](https://www.b3.com.br/data/files/2B/41/CC/5D/10F42610D290A226790D8AA8/APIs-B3-Visao-Geral-versao-1.0.pdf) (the Brazilian stock exchange);
- and the [Federal Government APIs](https://www.gov.br/conecta/catalogo/).

```{note}
See a list of public APIs on this [site](https://devresourc.es/tools-and-utilities/public-apis).
```

### HTTP methods

On the _web_ we generally deal with HTTP (_HyperText Transfer Protocol_), i.e., a communication protocol between clients and servers. When we scrape data, it is transferred through _requests_ and responses. In general, the four most common methods for moving information between _browsers_ and _web_ servers via HTTP are the following (a minimal request example is sketched after the note below):

- `GET`, to retrieve information;
- `POST`, to create information;
- `PUT`, to update information;
- `DELETE`, to delete information.

```{note}
A design pattern that can be used to create web APIs and integrate these 4 methods is [REST](https://en.wikipedia.org/wiki/Representational_state_transfer), _Representational State Transfer_. Learn more [here](https://www.ibm.com/cloud/learn/rest-apis).
```
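To make the idea of a request concrete, here is a minimal sketch (not from the original chapter) of issuing a `GET` request with Python's standard library; the URL is only an illustrative placeholder.

```
from urllib.request import urlopen

# urlopen issues a GET request by default when no data payload is passed.
response = urlopen('https://www.example.com')

print(response.status)        # HTTP status code, e.g. 200
html_bytes = response.read()  # raw response body (bytes)
print(html_bytes[:80])        # first few bytes of the returned HTML
```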
## JSON and XML

APIs used for data scraping commonly return information in XML format (_eXtensible Markup Language_), blocked into _tags_, or in JSON (_JavaScript Object Notation_), serialized. Although JSON is the choice of more modern APIs, it is important to keep in mind that many API providers still deliver XML output.

One of the arguments in favor of JSON is character economy. For example, the XML structure

```xml
<user><firstname>Juan</firstname><lastname>Hernandes</lastname><username>Fernandez</username></user>
```

has 100 characters. The same information, serialized in JSON,

```json
{"user":{"firstname":"Juan","lastname":"Hernandes","username":"Fernandez"}}
```

with 75 characters, would save 25% of the space.

## _Web crawlers_

Web crawlers are programs that index information from the network from multiple sources. Because they behave like "methodical lurkers", they are also known as _bots_, _spiders_, or network _listeners_. They work recursively, _ad infinitum_, pulling content from pages and examining it.

_Crawlers_ are useful for data collection, but they are bound by terms of service and conduct. Every public site has, in some form, terms of service managed by an administrator that declare what a _crawler_ is or is not allowed to do. These permissions (_allows_) or restrictions (_disallows_) are exposed in a file called `robots.txt`. Any relevant site has such a file associated with its URL; to see it, just append this name to the site's address.

The restrictions on _crawlers_ border on questions of ethics, especially with regard to data scraping. We will not, however, discuss those questions here. Below, we point to the `robots.txt` file for the UFPB site.

- Content of https://www.ufpb.br/robots.txt:

For other examples, see:

- https://www.wikipedia.org/robots.txt
- https://www.google.com/robots.txt

## Libraries for data scraping

There are many tools, libraries, and _frameworks_ for data scraping. Some examples are: _requests_, _grab_, _scrapy_, _restkit_, _lxml_, _PDFMiner_. In this chapter, we focus on the Python module _BeautifulSoup_ and its interpreters (_parsers_).

### Advantages of `BeautifulSoup`

According to the [official site](https://www.crummy.com/software/BeautifulSoup/), `BeautifulSoup`

- provides simple methods and "Pythonic" idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need;
- automatically converts incoming documents to Unicode and outgoing documents to UTF-8, so you do not have to think about encodings unless the document does not specify one and Beautiful Soup cannot detect it;
- sits on top of popular Python interpreters (_parsers_) such as _lxml_ and _html5lib_, letting you try out different parsing strategies for flexibility.

```{note}
The BeautifulSoup library was named after a poem of the same name by Lewis Carroll in Alice's Adventures in Wonderland, an allusion to the [Mock Turtle Soup](https://www.triplov.com/contos/Alice-no-pais-das-maravilhas/capitulo_10.htm).
```

## Scraping data from the UFPB site

In this example, we will scrape the UFPB site to collect the list of undergraduate courses. The steps to follow are:

1. open a request to the PRG/UFPB URL;
2. collect the page's HTML;
3. extract the content of the course table from the DOM tree;
4. build a _DataFrame_ whose columns contain: course name, campus (sede), modality, coordinator's name, and the teaching Center (Centro de Ensino) that administers the course.

Before producing the _DataFrame_, however, we give a brief overview of other features of the `BeautifulSoup` module. First, we open a request with `urllib.request`.

```
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen('https://sigaa.ufpb.br/sigaa/public/curso/lista.jsf?nivel=G&aba=p-graduacao')
bs = BeautifulSoup(html.read(),'html.parser')
```

At this point, we have created the object `bs`, which contains the DOM tree of the HTML document. We can access the parts of the document directly through the _tags_ `head`, `body`, `title`, etc.

```
# printing head and body on screen is
# omitted because they are large.
# Try it on your computer!
head = bs.head
body = bs.body
title = bs.title

title
```

We can pull the content of the _tags_ with `contents`.

```
# it is a list
title.contents

# operates on str
title.contents[0].strip()
```

We can navigate the tree through the _tags_.

```
head.link
head.meta
body.li
body.span
```

### Navigating the tree downward

The DOM tree is based on a "_parents_/_children_" structure. A parent element can have one or more children, and the child elements can themselves have one or more children. In terms of level, the first ones are _direct children_. Use

- `contents` to get direct children as a list;
- `children` to get direct children as an iterator;
- `descendants` to get direct children, children of children, and so on.

```
type(body.contents)

# iterator
for c in body.children:
    print(type(c))

# iterates over all descendants
k = 1
for c in body.descendants:
    if k % 500 == 0:
        print(f'descendente {k}: {type(c)}',sep=':')
    k += 1
```

To navigate only over strings (with whitespace already stripped) inside _tags_, we can use `stripped_strings`. If we want to keep whitespace, we should use `strings` instead.

```
# identifies the host campus in the table
campus = ('Areia','Bananeiras','Rio Tinto')
for s in body.tbody.stripped_strings:
    if s in campus:
        print(s,end=',')
```

### Navigating the tree upward

Conversely, we can access "parent" elements from their children with `parent`.

```
list(head.link.parent)[:4]
```

To iterate over the "parent" elements, use `parents`.

```
for h in head.link.parents:
    print(type(h))
```

### Searching the tree

Very useful search functions are `find_all` and `find`. We can apply them passing as argument a _tag_

```
body.find_all('td')[10:15]
```

a list of _tags_

```
body.find_all(['li','ul'])
```

or a _tag_ and a class.

```{note}
_Classes_ refer to the formatting style of the HTML file, which follows the rules of the [CSS](https://www.w3schools.com/css/default.asp) language.
```

```
# searches <td> (table data) elements with class "subFormulario"
body.find_all('td',class_="subFormulario")[:3]
```

We can also run targeted searches using regular expressions. To do so, just use the `re` module and functions such as `re.compile`.

```
import re

body.find_all(string=re.compile('GRAD'))
body.find_all(string=re.compile('CENTRO'))
```

If the result to be located is unique, we can use `find`.

```
body.find(string=re.compile('Copyright'))
```

### Custom functions

Now we implement a few custom functions to extract the header and the contents of the course table from the UFPB site.
These functions traverse the DOM tree and collect only the information of interest, converting it into lists.

```
# extract headers
def get_table_head(t):
    '''Reads a table object and extracts the header into a list'''
    res = []
    thead = t.find('thead')
    th = thead.find_all('th')
    for f in th:
        res.append(f.getText().strip())
    return res

t_header = get_table_head(body.div)
t_header

# extract rows
def get_table_body(t):
    res = []
    tbody = t.find('tbody')
    tr = tbody.find_all('tr')
    for row in tr:
        this_row = []
        row_fields = row.find_all('td')
        for f in row_fields:
            this_row.append(f.getText().strip())
        res.append(this_row)
    return res

r = get_table_body(body.div)
```

### Cleaning the extracted data

Finally, we build the _DataFrame_ from the previous lists. However, we need to perform a filling and cleaning procedure. Note that the original table does not bring the course names organized per row, so we need a column with the name of each teaching Center (Centro de Ensino) organized per course.

```
import pandas as pd

# create DataFrame
df = pd.DataFrame(r,columns=t_header).drop_duplicates().reset_index(drop=True)

mask = df['Nome'].str.find('CENTRO') != -1
centros = df['Nome'].loc[mask]
idx = centros.index.values

# fill values
vals = []
for k in range(1,len(idx)):
    for i in range(max(idx)+1):
        if i >= idx[k-1] and i < idx[k]:
            vals.append(centros.iloc[k-1])

# remainder
dx = len(df) - max(idx)
vals.extend(dx*[vals[-1]])

# clean and rename
df['Centro'] = vals
df = df.drop(idx).rename(columns={"Nome": "Curso"}).reset_index(drop=True)
df
```
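As a quick follow-up (not part of the original chapter), a sketch assuming `df` was built as above with the `Curso` and `Centro` columns: count courses per Center and persist the scraped table for later use.

```
# Number of courses administered by each Center, largest first
cursos_por_centro = df.groupby('Centro')['Curso'].count().sort_values(ascending=False)
print(cursos_por_centro)

# Save the cleaned table to disk (file name is just an example)
df.to_csv('cursos_ufpb.csv', index=False)
```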
github_jupyter
Um documento HTML é estruturado por meio de _elementos_ enclausurados por um par de _tags_ e tem a seguinte aparência: No exemplo, `<head>` e `</head>` são exemplo de _tags_ de abertura e fechamento para a seção _head_ do documento. ### A árvore DOM Os navegadores da web interpretam o código HTML e transformam em uma "árvore". Esta árvore caracteriza o modelo de objetos do documento, ou, formalmente, DOM (_Document Object Model_). Na forma de árvore DOM, o código acima tornar-se-ia algo como: ### Tags Existem diversas _tags_ disponíveis em HTML. A seguir listamos as que aparecem no código de exemplo anterior e sua descrição. |Tag|Descrição| |---|---| |`<!DOCTYPE>` | define o tipo do documento| |`<html>` | define a raiz de um documento HTML| |`<head>` | enclausura os metadados (informações) sobre o documento| |`<title>` | define um título para o documento| |`<body>` | define o corpo do documento| |`<h1>` | define cabeçalho de primeiro nível (seção)| |`<p>` | define um parágrafo| |`<a>` | define um _hyperlink_ (ancoramento)| #### Escrevendo HTML no Jupyter Notebook Podemos escrever sintaxes de código HTML em um _Jupyter Notebook_ e renderizar a saída formatada usando o comando mágico `%%html` ou `IPython.display`. Por exemplo: ## APIs A raspagem de dados pode ser otimizada através de uma _API_ (_Application Program Interface_). APIs são mecanismos (interfaces) que usam aplicativos de terceiros para realizar "conexões" e puxar dados. APIs são parecidas com módulos, mas não oferecem meramente um conjunto de funções, mas sim um programa capaz de operar com muitos dados. Embora uma API possa funcionar localmente (_offline_), sua utilidade para raspagem de dados é melhor exibida quando se conecta a aplicativos da web (_online_). Diversas instuições fornecem APIs para que desenvolvedores possam coletar dados. No início deste capítulo, mencionamos algumas aplicações de raspagem de dados. Algumas são possíveis apenas por meio de APIs. Google, Facebook, Twitter, Yahoo e Elsevier são algumas das empresas que fornecem APIs para aplicações em buscas na web, redes sociais, finanças e literatura científica. No Brasil, podemos citar como exemplos relevantes - as [APIs da B3](https://www.b3.com.br/data/files/2B/41/CC/5D/10F42610D290A226790D8AA8/APIs-B3-Visao-Geral-versao-1.0.pdf) (Bolsa de Valores); - e as [APIs do Governo Federal](https://www.gov.br/conecta/catalogo/). ### Métodos HTTP Na _web_, em geral lidamos com o HTTP (_HyperText Transfer Protocol_), isto é, um protocolo de comunicação entre clientes e servidores. Quando raspamos dados, eles são transferidos por meio de _requisições_ (_requests_) e _respostas_. Em geral, os quatro métodos mais comuns para transitar informações entre _browsers_ e servidores _web_ via HTTP são os seguintes: - `GET`, para recuperar informação; - `POST`, para criar informação; - `PUT`, para atualizar informação; - `DELETE`, para deletar informação; ## JSON e XML APIs utilizadas para raspagem de dados comumente retornam a informação em formato XML (_eXtensible Markup Language_), blocada em _tags_ ou JSON (_JavaScript Object Notation_), serializada. Embora JSON seja a escolha de APIs mais modernas, é importante ter em mente que muitos provedores de APIs as fornecem com saída XML. Um dos argumentos em favor de JSON é a economia de caracteres. Por exemplo, a estrutura XML possui 100 caracteres. A mesma informação, serializada em JSON, com 75 caracteres, pouparia 25% do espaço. 
## _Web crawlers_ Rastreadores da web (_web crawlers_) são programas que indexam informações da rede a partir de várias fontes. Pelo fato de se comportarem como "espreitadores metódicos", eles também são conhecidos como _bots_, _aranhas_ ou _escutadores_ da rede. Eles trabalham de uma forma recursiva _ad infinitum_ puxando conteúdo de páginas e examinando-os. _Crawlers_ são úteis para coleta de dados, porém baseiam-se em termos de serviço e conduta. Todo site público possui, de alguma forma, termos de serviço geridos por um administrador que declaram o que é permitido ao _crawler_ fazer ou não. Essas permissões (_allows_) ou restrições (_disallows_) estão expostas em arquivo chamado `robots.txt`. Qualquer site relevante possui um arquivo deste associado à sua URL. Para vê-lo, basta adicionar este nome após a URL do endereço. As restrições sobre _crawlers_ esbarram na fronteira da ética, principalmente no que diz respeito à raspagem de dados. Contudo, não discutiremos essas questões aqui. Abaixo, mostramos o arquivo `robots.txt` para o site da UFPB. - Conteúdo de https://www.ufpb.br/robots.txt: Para outros exemplos, veja: - https://www.wikipedia.org/robots.txt - https://www.google.com/robots.txt ## Bibliotecas para raspagem de dados Existem muitos ferramentas, bibliotecas e _frameworks_ para raspagem de dados. Alguns exemplos são: _requests, _grab_, _scrapy_, _restkit_, _lxml_, _PDFMiner_. Neste capítulo, vamos dar enfoque ao módulo Python _BeautifulSoap_ e seus interpretadores (_parsers_). ### Vantagens de `BeautifulSoup` Segundo o [site oficial](https://www.crummy.com/software/BeautifulSoup/), a `BeautifulSoup` - fornece métodos simples e expressões idiomáticas "Pythônicas" para navegar, pesquisar e modificar uma árvore de análise: um kit de ferramentas para dissecar um documento e extrair o que você precisa. - converte automaticamente os documentos recebidos em Unicode e os documentos enviados em UTF-8. Você não tem que pensar em codificações, a menos que o documento não especifique uma codificação e a Beautiful Soup não consiga detectar uma. - baseia-se em interpretadores (_parsers_) Python populares como _lxml_ e _html5lib_, permitindo que você experimente diferentes estratégias de análise para obter flexibilidade. ## Raspando dados do site da UFPB Neste exemplo, faremos uma raspagem no site da UFPB para coletar a lista de cursos de graduação. Os passos a serem seguidos são: 1. abrir uma requisição para a URL da PRG/UFPB; 2. coletar o HTML da página; 3. extrair o conteúdo da tabela de cursos na árvore DOM; 4. construir um _DataFrame_ cujas colunas devem conter: nome do curso, sede, modalidade, nome do(a) coordenador(a) e Centro de Ensino que administra o curso. Entretanto, antes de produzirmos o _DataFrame_, faremos uma breve explanação sobre outras funcionalidades do módulo `BeautifulSoap`. Primeiramente, abriremos uma requisição com `urllib.request`. Neste ponto, criamos o objeto `bs` que contém a árvore DOM do documento HTML. Podemos acessar as partes do documento diretamente a partir das _tags_ `head`, `body`, `title` etc. Podemos puxar o conteúdo das _tags_ com `contents`. Podemos navegar na árvore por meio das _tags_. ### Navegação na árvore para baixo A árvore DOM é baseada em uma estrutura do tipo "_parents_/_children_". Um elemento pai pode ter um ou mais filhos e os elementos filhos podem também ter um ou mais filhos. Em termos de nível, os primeiros são _filhos diretos_. 
- `contents`, to find direct children as a list;
- `children`, to find direct children as an iterator;
- `descendants`, to find direct children, children of children, and so on.

To navigate only through the strings inside _tags_ (with whitespace already stripped), we can use `stripped_strings`. If we want to keep the whitespace, we should use `strings` instead.

### Navigating up the tree

Conversely, we can access "parent" elements from their children with `parent`. To iterate over the "parent" elements, use `parents`.

### Searching the tree

Very useful search functions are `find_all` and `find`. We can apply them by passing as argument a _tag_, a list of _tags_, or a _tag_ together with a class. We can also perform specific searches with regular expressions; for that, it suffices to use the `re` module and functions such as `re.compile`. If the result to be located is unique, we can use `find`.

### Custom functions

Now, we implement some custom functions to extract the header and the content of the course table from the UFPB website. These functions sweep the DOM tree, collect only the information of interest, and convert it into lists.

### Cleaning the extracted data

Finally, we build the _DataFrame_ from the previous lists. However, we need to carry out data-filling and cleaning procedures: the original table does not list the program names organized row by row, so we need a column with the name of each teaching center (Centro de Ensino) associated with each program.
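A minimal sketch of the workflow just described, assuming `BeautifulSoup` and `pandas` are installed; the URL and the use of `<tr>`/`<td>` rows are illustrative assumptions about the page, not the site's actual markup:

```
from urllib.request import urlopen
from bs4 import BeautifulSoup
import pandas as pd
import re

# 1-2. Open a request and collect the page's HTML (placeholder URL)
url = "https://www.ufpb.br/prg"   # replace with the actual PRG/UFPB course-list URL
bs = BeautifulSoup(urlopen(url).read(), "html.parser")

# Direct access to parts of the document
print(bs.title)

# Searches: by tag and by regular expression on the tag name
tables = bs.find_all("table")
headings = bs.find_all(re.compile("^h[1-6]$"))

# 3. Collect the visible strings of each table row (whitespace stripped)
rows = [list(tr.stripped_strings) for tr in bs.find_all("tr")]

# 4. Build a DataFrame; the real column names would come from the table header
df = pd.DataFrame(rows)
```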
# The Pst class

The `pst_handler` module contains the `Pst` class for dealing with pest control files. It relies heavily on `pandas` to deal with tabular sections, such as parameters, observations, and prior information.

```
from __future__ import print_function
import os
import numpy as np
import pyemu
from pyemu import Pst
```

We need to pass the name of a pest control file to instantiate the class. The class instance (or object) is assigned to the variable *p*.

```
pst_name = os.path.join("..", "..", "examples", "henry","pest.pst")
p = Pst(pst_name)
```

Now all of the relevant parts of the pest control file are attributes of the object. For example, the parameter data, observation data, and prior information are available as pandas dataframes.

```
p.parameter_data.head()
p.observation_data.head()
p.prior_information.head()
```

A residual file (`.rei` or `.res`) can also be passed to the `resfile` argument at instantiation to enable some simple residual analysis and weight adjustments. If the residual file is in the same directory as the pest control file and has the same base name, it will be accessed automatically:

```
p.res.head()
```

The `Pst` class has some `@decorated` convenience methods related to the residuals, allowing the user to access the values and print them in a straightforward way.

```
print(p.phi,p.phi_components)
```

Some additional `@decorated` convenience methods:

```
print(p.npar,p.nobs,p.nprior)
print(p.par_groups,p.obs_groups)
```

Printing the attribute types shows that some are returned as lists and others as single values.

```
print(type(p.par_names)) # all parameter names
print(type(p.adj_par_names)) # adjustable parameter names
print(type(p.obs_names)) # all observation names
print(type(p.nnz_obs_names)) # non-zero weight observations
print(type(p.phi)) # float value that is the weighted total objective function
```

The "control_data" section of the pest control file is accessible in the `Pst.control_data` attribute:

```
print('jacupdate = {0}'.format(p.control_data.jacupdate))
print('numlam = {0}'.format(p.control_data.numlam))
p.control_data.numlam = 100
print('numlam has been changed to --> {0}'.format(p.control_data.numlam))
```

The `Pst` class also exposes a method to get a new `Pst` instance with a subset of parameters and/or observations.
Note that this method does not propagate prior information to the new instance:

```
pnew = p.get(p.par_names[:10],p.obs_names[-10:])
print(pnew.prior_information)
```

You can also write a pest control file with altered parameters, observations, and/or prior information:

```
pnew.write("test.pst")
```

Some other methods in `Pst` include:

```
# add preferred value regularization with weights proportional to parameter bounds
pyemu.utils.helpers.zero_order_tikhonov(pnew)
pnew.prior_information.head()

# add preferred value regularization with unity weights
pyemu.utils.helpers.zero_order_tikhonov(pnew,parbounds=False)
pnew.prior_information.head()
```

Some more `res` functionality:

```
# adjust observation weights to account for residual phi components
#pnew = p.get()
print(p.phi, p.nnz_obs, p.phi_components)
p.adjust_weights_resfile()
print(p.phi, p.nnz_obs, p.phi_components)
```

adjust observation weights by an arbitrary amount by groups:

```
print(p.phi, p.nnz_obs, p.phi_components)
grp_dict = {"head":100}
p.adjust_weights(obsgrp_dict=grp_dict)
print(p.phi, p.nnz_obs, p.phi_components)
```

adjust observation weights by an arbitrary amount by individual observations:

```
print(p.phi, p.nnz_obs, p.phi_components)
obs_dict = {"h_obs01_1":25}
p.adjust_weights(obs_dict=obs_dict)
print(p.phi, p.nnz_obs, p.phi_components)
```

set up weights inversely proportional to the observation values:

```
p.adjust_weights_resfile()
print(p.phi, p.nnz_obs, p.phi_components)
p.proportional_weights(fraction_stdev=0.1,wmax=20.0)
print(p.phi, p.nnz_obs, p.phi_components)
```
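Putting the pieces together, a minimal round-trip sketch that uses only the calls demonstrated above (and the same example control file) might look like this:

```
import os
import pyemu

# load the example control file, re-balance weights from the residual file, and save
pst = pyemu.Pst(os.path.join("..", "..", "examples", "henry", "pest.pst"))
pst.adjust_weights_resfile()
print(pst.phi, pst.phi_components)
pst.write("henry_reweighted.pst")
```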
# TOPIC EXTRACTION

## Topic Assignment Consistency

```
from gensim import corpora, models
import gensim
import numpy as np
import random
import pickle

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
```

Train LDA models with different numbers of topics

```
texts = pickle.load(open('pub_articles_cleaned_super.pkl','rb'))

random.seed(42)
train_set = random.sample(list(range(0,len(texts))),len(texts)-1000)
test_set = [x for x in list(range(0,len(texts))) if x not in train_set]

train_texts = [texts[i] for i in train_set]
test_texts = [texts[i] for i in test_set]

pickle.dump([train_set,test_set,train_texts,test_texts],open('pub_articles_train_test_sets.pkl','wb'))

topicnums = [1,5,10,15,20,30,40,50,60,70,80,90,100]

dictionary = corpora.Dictionary(train_texts)
corpus = [dictionary.doc2bow(text) for text in train_texts]

ldamodels_bow = {}
for i in topicnums:
    random.seed(42)
    %time ldamodels_bow[i] = models.ldamodel.LdaModel(corpus,num_topics=i,id2word=dictionary)
    ldamodels_bow[i].save('ldamodels_bow_'+str(i)+'.lda')
```

Evaluate on 1,000 documents **not** used in LDA training

```
# http://radimrehurek.com/topic_modeling_tutorial/2%20-%20Topic%20Modeling.html
def intra_inter(lda_model, dictionary, test_docs, num_pairs=10000):

    # Split each test document into two halves and compute topics for each half
    part1 = [lda_model[dictionary.doc2bow(tokens[:int(len(tokens)/2)])] for tokens in test_docs]
    part2 = [lda_model[dictionary.doc2bow(tokens[int(len(tokens)/2):])] for tokens in test_docs]

    # Compute topic distribution similarities using cosine similarity

    #print("Average cosine similarity between corresponding parts (higher is better):")
    corresp_parts = np.mean([gensim.matutils.cossim(p1, p2) for p1, p2 in zip(part1, part2)])

    #print("Average cosine similarity between 10,000 random parts (lower is better):")
    np.random.seed(42)   # seed numpy's RNG, which draws the random pairs below
    random_pairs = np.random.randint(0, len(test_docs), size=(num_pairs, 2))
    random_parts = np.mean([gensim.matutils.cossim(part1[i[0]], part2[i[1]]) for i in random_pairs])

    return corresp_parts, random_parts

ldamodels_eval = {}
for i in topicnums:
    lda_model = models.ldamodel.LdaModel.load('ldamodels_bow_'+str(i)+'.lda')
    ldamodels_eval[i] = intra_inter(lda_model, dictionary, test_texts)

pickle.dump(ldamodels_eval,open('pub_ldamodels_eval.pkl','wb'))

topicnums = [1,5,10,15,20,30,40,50,60,70,80,90,100]
ldamodels_eval = pickle.load(open('pub_ldamodels_eval.pkl','rb'))

corresp_parts = [ldamodels_eval[i][0] for i in topicnums]
random_parts = [ldamodels_eval[i][1] for i in topicnums]

with sns.axes_style("whitegrid"):
    x = topicnums
    y1 = corresp_parts
    y2 = random_parts

    plt.plot(x,y1,label='Corresponding parts')
    plt.plot(x,y2,label='Random parts')
    plt.ylim([0.0,1.0])
    plt.xlabel('Number of topics')
    plt.ylabel('Average cosine similarity')
    plt.legend()

    plt.show()
```

## Topic Stability Analysis

Measure overlap between topic vectors from different numbers of topics

```
topicnums = [1,5,10,15,20,30,40,50,60,70,80,90,100]

lda_topics = {}
for i in topicnums:
    lda_model = models.ldamodel.LdaModel.load('ldamodels_bow_'+str(i)+'.lda')
    lda_topics_string = lda_model.show_topics(i)
    lda_topics[i] = ["".join([c if c.isalpha() else " " for c in topic[1]]).split() for topic in lda_topics_string]

pickle.dump(lda_topics,open('pub_lda_topics.pkl','wb'))

# http://billchambers.me/tutorials/2014/12/21/tf-idf-explained-in-python.html
def jaccard_similarity(query, document):
    intersection = set(query).intersection(set(document))
    union = set(query).union(set(document))
    return len(intersection)/len(union)
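# Quick illustration (added note): jaccard_similarity(['a','b','c'], ['b','c','d'])
# -> |{'b','c'}| / |{'a','b','c','d'}| = 2/4 = 0.5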
lda_stability = {}
for i in range(0,len(topicnums)-1):
    jacc_sims = []
    for t1,topic1 in enumerate(lda_topics[topicnums[i]]):
        sims = []
        for t2,topic2 in enumerate(lda_topics[topicnums[i+1]]):
            sims.append(jaccard_similarity(topic1,topic2))
        jacc_sims.append(sims)
    lda_stability[topicnums[i]] = jacc_sims

pickle.dump(lda_stability,open('pub_lda_stability.pkl','wb'))

topicnums = [1,5,10,20,30,40,50,60,70,80,90,100]
lda_stability = pickle.load(open('pub_lda_stability.pkl','rb'))
mean_stability = [np.array(lda_stability[i]).mean() for i in topicnums[:-1]]

with sns.axes_style("whitegrid"):
    x = topicnums[:-1]
    y = mean_stability

    plt.plot(x,y,label='Mean overlap')
    plt.ylim([0.0,1.0])
    plt.xlabel('Number of topics')
    plt.ylabel('Average Jaccard similarity')
    #plt.legend()

    plt.show()
```

## Number of Topics = 20

```
num_topics = 20
lda_model = models.ldamodel.LdaModel.load('ldamodels_bow_'+str(num_topics)+'.lda')

lda_topics = lda_model.show_topics(num_topics)
lda_topics_words = ["".join([c if c.isalpha() else " " for c in topic[1]]).split() for topic in lda_topics]
lda_topics_disp = [("topic "+str(i)+": ")+" ".join(topic) for i,topic in enumerate(lda_topics_words)]
lda_topics_disp
```

Get topic distributions / probabilities for each article

```
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
import psycopg2

with open("bubble_popper_postgres.txt","r") as myfile:
    lines = [line.replace("\n","") for line in myfile.readlines()]
db, us, pw = 'bubble_popper', lines[0], lines[1]

engine = create_engine('postgresql://%s:%s@localhost:5432/%s'%(us,pw,db))
connstr = "dbname='%s' user='%s' host='localhost' password='%s'"%(db,us,pw)
conn = None; conn = psycopg2.connect(connstr)

articles = pickle.load(open('pub_articles_trimmed.pkl','rb'))        # Article full content and other data
documents = pickle.load(open('pub_articles_cleaned_super.pkl','rb')) # Article preprocessed bag of words

doc_probs = []
for doc in documents:
    # use the dictionary the LDA model was trained with, so word ids match the model
    doc_corp = dictionary.doc2bow(doc)
    doc_probs.append(lda_model[doc_corp])

articles = articles.drop(['content','title','url'],axis=1)
for i in range(0,num_topics):
    articles['topic'+str(i)] = 0.0

indices = articles.index.values.tolist()
for i, doc in list(zip(indices, doc_probs)):
    for probs in doc:
        # .set_value() was removed from pandas; .at[] is the current scalar setter
        articles.at[i,'topic'+str(probs[0])] = probs[1]

pickle.dump(articles,open('pub_probabs_topic'+str(num_topics)+'.pkl','wb'))
articles.to_sql('article_data',engine,if_exists='replace')
```
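As a quick sanity check (a small sketch assuming the `articles` DataFrame built above), the dominant topic per article and how often each topic dominates can be inspected with pandas:

```
# Columns that hold the per-topic probabilities added above
topic_cols = ['topic' + str(i) for i in range(num_topics)]

# Dominant topic per article and its frequency across the corpus
dominant = articles[topic_cols].idxmax(axis=1)
print(dominant.value_counts().head(10))
```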
<style>div.container { width: 100% }</style> <img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="assets/PyViz_logo_wm_line.png" /> <div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 00. Welcome, Demos and Setup</h2></div> Welcome to the PyViz Tutorial! It will take you step by step to show you how to solve problems in web-based data exploration, visualization, and interactive-app development using open-source Python libraries, including the [Anaconda](http://anaconda.com)-supported tools [HoloViews](http://holoviews.org), [GeoViews](http://geo.holoviews.org), [Bokeh](http://bokeh.pydata.org), [Datashader](https://datashader.org), and [Param](http://ioam.github.io/param): <img height="148" width="800" src="assets/hv_gv_bk_ds_pa.png"/> These libraries have been carefully designed to work together to address a very wide range of data-analysis and visualization tasks, making it simple to discover, understand, and communicate the important properties of your data. <img align="center" src="./assets/tutorial_app.gif"></img> This notebook serves as the homepage of the tutorial, including a table of contents listing each tutorial section, a general overview, links to demos illustrating the range of topics covered, and instructions to check that everything is downloaded and installed properly. ## Index and Schedule The tutorial outlined here has been given as a half-day or one-day course led by trained instructors. For self-paced usage, you should expect this material to take between 1 and 3 days if you do all of it. Sections 0, 1, 2, 3, and 4 contain the most crucial and basic introductory material, and should take a couple of hours of study. All later sections can be studied as needed or skipped if not relevant. - *Overview* * **15 min** &nbsp;[0 - Welcome](./00_Welcome.ipynb): (This notebook!) Welcome, demos, and setup. * **30 min** &nbsp;[1 - Workflow Introduction](./01_Workflow_Introduction.ipynb): Overview of solving a simple but complete data-science task, using each of the main PyViz tools. * **5 min** &nbsp;&nbsp;&nbsp;*Break*<br><br> - *Making data visualizable* * **30 min** &nbsp;[2 - Annotating Data](./02_Annotating_Data.ipynb): Using HoloViews Elements to make your data instantly visualizable * **20 min** &nbsp;[3 - Customizing Visual Appearance](./03_Customizing_Visual_Appearance.ipynb): How to change the appearance and output format of elements. * **10 min** &nbsp;[*Exercise 1*](../exercises/Exercise-1-making-data-visualizable.ipynb) * **10 min** &nbsp;*Break*<br><br> - *Datasets and collections of data* * **30 min** &nbsp;[4 - Working with Datasets](./04_Working_with_Datasets.ipynb): Using HoloViews "containers" for quick, easy data exploration. * **10 min** &nbsp;[5 - Working with Gridded Data](./05_Working_with_Gridded_Data.ipynb): Exploring a gridded (n-dimensional) dataset. * **20 min** &nbsp;[*Exercise 2*](../exercises/Exercise-2-datasets-and-collections-of-data.ipynb) * **20 min** &nbsp;[6 - Network Graphs](./06_Network_Graphs.ipynb): Exploring network graph data. * **20 min** &nbsp;[7 - Geographic Data](./07_Geographic_Data.ipynb): Plotting data in geographic coordinates. * **20 min** &nbsp;[*Exercise 3*](../exercises/Exercise-3-networks-and-geoviews.ipynb) * **15 min** &nbsp;*Break*<br><br> - *Dynamic interactions* * **25 min** &nbsp;[8 - Custom Interactivity](./08_Custom_Interactivity.ipynb): Using HoloViews "streams" to add interactivity to your visualizations. 
* **15 min** &nbsp;[9 - Operations and Pipelines](./09_Operations_and_Pipelines.ipynb): Dynamically transforming your data as needed * **20 min** &nbsp;[10 - Working with Large Datasets](./10_Working_with_Large_Datasets.ipynb): Using datasets too large to feed directly to your browser. * **30 min** &nbsp;[11 - Streaming Data](./11_Streaming_Data.ipynb): Live plots of dynamically updated data sources. * **20 min** &nbsp;[*Exercise 4*](../exercises/Exercise-4-dynamic-interactions.ipynb) * **10 min** &nbsp;*Break*<br><br> - *Apps and dashboards* * **15 min** &nbsp;[12 - Parameters and Widgets](./12_Parameters_and_Widgets.ipynb): Declarative custom controls * **30 min** &nbsp;[13 - Deploying Bokeh Apps](./13_Deploying_Bokeh_Apps.ipynb): Deploying your visualizations using Bokeh server. * **20 min** &nbsp;[A1 - Exploration with Containers](./A1_Exploration_with_Containers.ipynb): Containers that let you explore complex datasets. * **15 min** &nbsp;[*Exercise 5*](../exercises/Exercise-5-exporting-and-deploying-apps.ipynb) ## What is this all about? Many of the activities of a data scientist or analyst require visualization, but it can be difficult to assemble a set of tools that cover all of the tasks involved. Initial exploration needs to be in a flexible, open-ended environment where it is simple to try out and test hypotheses. Once key aspects of the data have been identified, the analyst might prepare a specific image or figure to share with colleagues or a wider audience. Or, they might need to set up an interactive way to share a set of data that would be unwieldy as a fixed figure, using interactive controls to let others explore the effects of certain variables. Eventually, for particularly important data or use cases, the analyst might get involved in a long-term project to develop a full-featured web application or dashboard to deploy, allowing decision makers to interact directly with live data streams to make operational decisions. With Python, initial exploration is typically in a [Jupyter](http://jupyter.org) notebook, using tools like Matplotlib and Bokeh to develop static or interactive plots. These tools support a simple syntax for making certain kinds of plots, but showing more complex relationships in data can quickly turn into a major software development exercise, making it difficult to achieve understanding during exploration. Simple apps can be built using ipywidgets to control these visualizations, but the resulting combinations end up being tightly coupled to the notebook environment, unable to migrate into a standalone server context with an application that can be shared more widely. Bokeh includes widgets that can work in both notebook and server environments, but these can be difficult to work with for initial exploration. Bokeh and Matplotlib both also have limitations on how much data they can handle, in part because Bokeh requires the data to be put into the web browser's limited memory space. In this tutorial we will be introducing a set of open-source Python libraries we have developed to streamline the process of working with small and large datasets (from a few points to billions) in a web browser, whether doing exploratory analysis, making simple widget-based tools, or building full-featured dashboards. 
The libraries in this ecosystem include:

* [**Bokeh**](http://bokeh.pydata.org): Interactive plotting in web browsers, running JavaScript but controlled by Python
* [**HoloViews**](http://holoviews.org): Declarative objects for instantly visualizable data, building Bokeh plots from convenient high-level specifications
* [**GeoViews**](http://geo.holoviews.org): Visualizable geographic data that can be mixed and matched with HoloViews objects
* [**Datashader**](https://github.com/bokeh/datashader): Rasterizing huge datasets quickly as fixed-size images
* [**Param**](https://github.com/ioam/param): Declaring user-relevant parameters, making it simple to work with widgets inside and outside of a notebook context

These projects can be used separately or together in a wide variety of different configurations to address different needs. For instance, if we focus on the needs of a data scientist/analyst who wants to understand the properties of their data, we can compare that to the approach suggested for a software developer wanting to build a highly custom software application for data of different sizes:

<img src="assets/ds_hv_bokeh.png" width=800 height=217/>

Here Datashader is used to make large datasets practical by rendering images outside the browser, either directly for a programmer or via a convenient high-level interface using HoloViews, and the results can be embedded in interactive Bokeh plots if desired, either as a static HTML plot, in a Jupyter notebook, or as a standalone application.

Behind the scenes, these tools rely on a wide range of other open-source libraries for their implementation, including:

* [**Pandas**](http://pandas.pydata.org): Convenient computation on columnar datasets (used by HoloViews and Datashader)
* [**Xarray**](http://xarray.pydata.org): Convenient computations on multidimensional array datasets (used by HoloViews and Datashader)
* [**Dask**](http://dask.pydata.org): Efficient out-of-core/distributed computation on massive datasets (used by Datashader)
* [**Numba**](http://numba.pydata.org): Accelerated machine code for inner loops (used by Datashader)
* [**Fastparquet**](https://fastparquet.readthedocs.io): Efficient storage for columnar data
* [**Cartopy**](http://scitools.org.uk/cartopy): Support for geographical data (using a wide range of other libraries)

This tutorial will guide you through the process of using these tools together to build rich, high-performance, scalable, flexible, and deployable visualizations, apps, and dashboards, without having to use JavaScript or other web technologies explicitly, and without having to rewrite your code to move between each of the different tasks or phases from exploration to deployment. In each case, we'll try to draw your attention to libraries and approaches that help you get the job done, which in turn depend on many other unseen libraries in the scientific Python ecosystem to do the heavy lifting.
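As a small taste of the declarative style described above (a minimal sketch, separate from the tutorial sections that follow), a HoloViews element turns raw samples into a Bokeh plot with essentially no plotting code:

```
import numpy as np
import holoviews as hv
hv.extension('bokeh')

# Wrap the samples in an Element; HoloViews builds the Bokeh figure for us
xs = np.linspace(0, 4 * np.pi, 100)
hv.Curve((xs, np.sin(xs)), 'x', 'sin(x)')
```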
## Demos To give you an idea what sort of functionality is possible with these tools, check out some of these links: * [Selection stream](http://holoviews.org/reference/apps/bokeh/selection_stream.html) * [Bounds stream](http://holoviews.org/reference/streams/bokeh/BoundsX.html) * [Mandelbrot](http://holoviews.org/gallery/apps/bokeh/mandelbrot.html) * [DynamicMap](http://holoviews.org/reference/containers/bokeh/DynamicMap.html) * [Crossfilter](http://holoviews.org/gallery/apps/bokeh/crossfilter.html) * [Game of Life](http://holoviews.org/gallery/apps/bokeh/game_of_life.html) * [Dragon curve](http://holoviews.org/gallery/demos/bokeh/dragon_curve.html) * [Datashader NYC Taxi](https://anaconda.org/jbednar/nyc_taxi) * [Datashader Graphs](https://anaconda.org/jbednar/edge_bundling) * [Datashader Landsat images](http://datashader.org/topics/landsat.html) * [Datashader OpenSky](https://anaconda.org/jbednar/opensky) ## Related links You will find extensive support material on the websites for each package. You may find these links particularly useful during the tutorial: * [HoloViews reference gallery](http://holoviews.org/reference/index.html): Visual reference of all elements and containers, along with some other components * [HoloViews getting-started guide](http://holoviews.org/getting_started/index.html): Covers some of the same topics as this tutorial, but without exercises ## Getting set up Please consult [pyviz.org](http://pyviz.org) for the full instructions on installing the software used in these tutorials. Here is the condensed version of these instructions for UNIX-based systems (Linux or Mac OS X), assuming you have already downloaded and installed [Anaconda](https://www.anaconda.com/download) or Miniconda: ``` ! conda install -c pyviz pyviz ! pyviz --install-examples pyviz-tutorial ! pyviz --download-sample-data pyviz-tutorial/data ! cd pyviz-tutorial ! jupyter notebook ``` Once everything is installed, the following cell should print '1.9.5': ``` import holoviews as hv hv.__version__ ``` And you should see the HoloViews, Bokeh, and Matplotlib logos after running the following cell: ``` hv.extension('bokeh', 'matplotlib') ``` The next cell tests the other key imports needed for this tutorial, and if it completes without errors your environment should be ready to go: ``` import pandas import datashader import dask import geoviews import bokeh ``` ## Downloading sample data Lastly, let's make sure the datasets needed are available. First, check that the large taxi dataset is available, which can be obtained as described in the [README](https://github.com/ioam/jupytercon2017-holoviews-tutorial/blob/master/README.rst): ``` import os if not os.path.isfile('../data/nyc_taxi_wide.parq'): print('Taxi dataset not found; please run "pyviz --download-sample-data ../data".') ``` Finally, some examples in the tutorial rely on bokeh sample data, which you can get by running the command below: ``` if not os.path.isfile(os.path.join(os.path.expanduser('~'),'.bokeh','data','routes.csv')): bokeh.sampledata.download() ```
**Authors:** Jozef Hanč, Martina Hančová <br> *[Faculty of Science](https://www.upjs.sk/en/faculty-of-science/?prefferedLang=EN), P. J. Šafárik University in Košice, Slovakia* <br> emails: [[email protected]](mailto:[email protected])

***

# <font color = brown, size=6> Cross-checking $\mathcal{GDD}$ pdf values (Arb)</font>

<font size=4> Computational tools - arbitrary-precision C library: </font> **<font size=4>Arb</font>**

**Arb in Sage** https://doc.sagemath.org/html/en/reference/rings_numerical/sage/rings/complex_arb.html

```
# python libraries
import numpy as np
from numpy import inf as INF, array as v
import platform as pt

import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')

from time import time
import math, cmath
import scipy
scipy.__version__, np.__version__
```

## Python procedures and functions

```
# accuracy commands
def abs_errs(df1, df2):
    N = len(df1)
    errors = [abs(df1[i]-df2[i]) for i in range(N)]
    return errors

def accuracy(df1, df2):
    return max(abs_errs(df1,df2))

# approximate formulas for precisions expressed in bits and decimal places
bits = lambda d:round((d+1)*ln(10)/ln(2))
dps = lambda b:round(b*ln(2)/ln(10) - 1)

# 668 bits precision in decimal places
dps(668)

# Arb numbers and functions
prec = 668
Arb = ComplexBallField(prec)
R_Arb = lambda z: Arb(z,0)
U_Arb = lambda a,b,z: Arb(z,0).hypergeometric_U(a,b)
e_Arb = lambda z: Arb(z,0).exp()
G_Arb = lambda z: Arb(z,0).gamma()
erf_Arb = lambda z: Arb(z,0).erf()
RRf = RealField(prec)
```

## Defining $f(z)$

$
f(z)= \dfrac{\beta_{1}^{\alpha_{1}} \beta_{2}^{\alpha_{2}}}{\beta^{\alpha-1}}
\begin{cases}
\dfrac{e^{z \beta_{2}}}{\Gamma\left(\alpha_{2}\right)} U\left(1-\alpha_{2}, 2-\alpha,-z \beta\right), & z<0 \\[12pt]
\begin{array}{cc} \dfrac{\Gamma(\alpha-1)}{\Gamma\left(\alpha_{1}\right) \Gamma\left(\alpha_{2}\right)}, & \scriptstyle 1<\alpha \\ \infty, & \scriptstyle 0<\alpha \leq 1 \end{array}, & z=0 \\[12pt]
\dfrac{e^{-z \beta_{1}}}{\Gamma\left(\alpha_{1}\right)} U\left(1-\alpha_{1}, 2-\alpha, z \beta\right), & z>0
\end{cases}
$

$\alpha=\alpha_{1}+\alpha_{2}, \quad \beta=\beta_{1}+\beta_{2}$

```
# parameters
a1, b1 = R_Arb(1/2), R_Arb(1)
a2, b2 = R_Arb(17/2), R_Arb(93)
a, b = a1+a2, b1+b2
c = b1^a1*b2^a2/b^(a-1)
cp, cm = c/G_Arb(a1), c/G_Arb(a2)

# defining f(z)
fm = lambda z: cm*e_Arb( R_Arb(z)*b2)*U_Arb(1-a2,2-a,-b*R_Arb(z))
f0 = c*G_Arb(a-1)/(G_Arb(a1)*G_Arb(a2))
fp = lambda z: cp*e_Arb(-R_Arb(z)*b1)*U_Arb(1-a1,2-a, b*R_Arb(z))
f = lambda z: fp(z) if z > 0 else fm(z) if z < 0 else f0

f(2)
f(2).diameter()
f(2).mid()
```

# Cross-checking PARI values

```
# pdf quadruple precision values
N = 5
dparipdf = {str(10**(n+1)):np.loadtxt('Pari_Sage_pdf'+str(10**(n+1))+'.txt', delimiter=',', dtype=np.longdouble) for n in range(N)}
dparipdf['10']

darb = dict()
for n in [1..N]:
    tic = time()
    points = [-3+7/(10^n-1)*(i-1) for i in [1..10^n]]
    darb[str(10^n)] = [f(val) for val in points]
    toc = time()-tic; print('10^'+str(n),' runtime =',toc,'s')

darbpdf = {str(10^n): [item.mid() for item in darb[str(10^n)]] for n in [1..N]}
darbpdferrs = {str(10^n): max([item.diameter() for item in darb[str(10^n)]]) for n in [1..N]}

# diameters
darbpdferrs

Errors = {str(10^n): (accuracy(darbpdf[str(10^n)], dparipdf[str(10^n)])) for n in [1..N]}
Errors
```

***
<a id=references></a>
# <font color=brown> References </font>

This notebook belongs to supplementary materials of the paper submitted to Journal of Statistical Computation and Simulation and available at <https://arxiv.org/abs/2105.04427>.
* Hančová, M., Gajdoš, A., Hanč, J. (2021). A practical, effective calculation of gamma difference distributions with open data science tools. arXiv:2105.04427 [cs, math, stat], https://arxiv.org/abs/2105.04427 ### Abstract of the paper At present, there is still no officially accepted and extensively verified implementation of computing the gamma difference distribution allowing unequal shape parameters. We explore four computational ways of the gamma difference distribution with the different shape parameters resulting from time series kriging, a forecasting approach based on the best linear unbiased prediction, and linear mixed models. The results of our numerical study, with emphasis on using open data science tools, demonstrate that our open tool implemented in high-performance Python(with Numba) is exponentially fast, highly accurate, and very reliable. It combines numerical inversion of the characteristic function and the trapezoidal rule with the double exponential oscillatory transformation (DE quadrature). At the double 53-bit precision, our tool outperformed the speed of the analytical computation based on Tricomi's $U(a, b, z)$ function in CAS software (commercial Mathematica, open SageMath) by 1.5-2 orders. At the precision of scientific numerical computational tools, it exceeded open SciPy, NumPy, and commercial MATLAB 5-10 times. The potential future application of our tool for a mixture of characteristic functions could open new possibilities for fast data analysis based on exact probability distributions in areas like multidimensional statistics, measurement uncertainty analysis in metrology as well as in financial mathematics and risk analysis. * Johansson, F. (2017). Arb: Efficient Arbitrary-Precision Midpoint-Radius Interval Arithmetic. IEEE Transactions on Computers, 66(8), 1281–1292. https://doi.org/10.1109/TC.2017.2690633 * Johansson, F. (2019). Computing Hypergeometric Functions Rigorously. ACM Transactions on Mathematical Software, 45(3), 30:1-30:26. https://doi.org/10.1145/3328732
# Savanna Print API - Quickstart This notebook shows you how to get started with Zebra's Savanna APIs related to [Printing](https://developer.zebra.com/printers-print/apis). **These APIs are still at the [prototype](https://developer.zebra.com/sandbox-prototypes) stage and are subject to change without warning** ## Setup - In order to run the sample code in this guide you will need an API key, to obtain an API key follow the instructions detailed on the Zebra developer portal at https://developer.zebra.com/getting-started-0 - Once you have created a login to the developer portal and created an application you must ensure you have the **Printer Sandbox** package associated with your application. This may have been done automatically and you can verify this from the [account apps overview](https://developer.zebra.com/user/me/apps) To run a cell: - Click anywhere in the code block. - Click the play button in the top-left of the code block ``` # Paste your API key below, this might also be called the 'Consumer Key' on the portal api_key = '' print("API Key is: " + api_key) ``` ## Printing Printing is achieved by sending ZPL commands to your printer serial number. As long as the printer has had an external network connection it would have already registered with the Savanna portal automatically so there is no need to perform a registration step. Ensure your printer is turned on and has network connection capable of communicating with the Savanna cloud. ZPL can be generated with the [Zebra label designer](https://www.zebra.com/gb/en/products/software/barcode-printers/zebralink/zebra-designer.html) ``` # The following example ZPL defines a label with some sample images and text zpl = 'CT~~CD,~CC^~CT~^XA~TA000~JSN^LT0^MNN^MTD^PON^PMN^LH0,0^JMA^PR5,5~SD10^JUS^LRN^CI0^XZ^XA^MMT^PW406^LL0609^LS0^FO0,416^GFA,09984,09984,00052,:Z64:eJztmM+K5DYQxuX1gvcQ0AsMCPYSyCG55hCivEWu8yAD8m1fJUwgeQXDHvaYVzC57NW3NMTr2vojWSW3F0Y6zMk10DQ9/nW56lN9ktuYK6644oorrrjiiiuueN1480pMPzYwUwMzNzBLA3N7JWZrYKCFGV+F6WCuZkwLExqa7dd6xjUIZBuYoUmg+jxtAjVMUL1Ab4xvYOoF6hsE6scqgbrfiHmsmiAuvl+0QB3QigUMTL/gGgGYPb4sJuArFhE2Zsw9s0XG3zFAN4VepQTq6HMEA0zIdFAyK/4LJmTWQiDNzL1i6O0NFyfdFPqbW+8YDzPmGTLTCTMIA6MWCIAZx4y9YyylJEYLFIA+ujlKsrjM8G0ukelg6u8ZCzes55Sh4mHWAgXkE7P4zEhpxgkTCoF8YtaSsQWDnQ5LwQx0K8QE+n7FTMRQk10hkId/339Pymx4taiUmTHlsaueIC6bStw4D/aU9VhiqVhnvw74pydoZ4AZoxiIDFK4eJRAwkz4pScMmSHnQUZZXGIAPlA9iulApIt5elqsEnbP8wnzbAVDFex5lEDCjNi/I9Nzl0PKoyZo2JnPh3p2hvNoi0uMQ6asZ4grQJheTVBm/jvkEcYlZsyi9pGxxHitqeWG21hPP9pdIGGk58hMR4bMgPPMWSAUgGeIVwNPa2JcbG7Mg5ObBCKzwhbzKOs5VYzkuZmCQfFkLK1mfLwm5sHLxpLh9mnfUYzkwdvZFw8CiEkG7W88CorZjB8z0yVGjPDIdMKAcVPJOGGcYiDdv9SDMux5AjI3ujhEk44MjwLHk+SZhz1PAJLIs79pj5dRoPhZ8sz5NJ+YeYhGuDNpTT5InqUr8tDGMx2YfmfeSZ6b2fN4MgLDjlPok5m3kkdtJh6NgBiaAsue+MPzn7xNxhnzsdfbKYPrLa/rnXGbMOpo5eAfvIydAJk8P2kU8I3UozaTbzEuMQNwni/ZqzQz+4KZ5YIePvIaVZuJRWaKebzyg7SsUajPzKjNxEZL/CZj4H9m1GYysIXcM2GvOQAzardXzHzOGJlTtZkkZhxKr1JMx8wAB0bWm2a0hqsoNGqGVsPjUPhbFxTzREz2KnYcY9KOva83HJ/n5z/iNb+I72hmi1sX+dvv5ZlC4uHgOwcGzpjvmMm+050z8Rwi8Za9yh0Y8Td7zvz419/kVVYza9yGtPfGcwgHpqdOZ68yzPRH743nEA58j51WviOM7At99jenmIEuQSfLTJATGkDhb8zk1q76XJUYf/BrrxguuXiaEcYy408ZQ60tHjc9S9cxk8/xVNmWvxaqn0ze/0QLvo4xX8qD70uiW2kPnKqYgUwHHqsYu2KnKx83HZlO5eOmJ9NxtQx22tbdm6dRsHMVE8h08rb9Ymau/EEtHEbhJQFr/UM65nG1v7zAWv+QHugRo56p/pkr0APxVM2YWob8I4z1zK91CIvz0MC8q2Rsw89vfQPTNTDmqYG54oorrrjiNL4CVw3u4Q==:4FAE^FO288,192^GFA,02048,02048,00016,:Z64:eJzt0zEKwjAUBuDUInUQ4+Aej+AN4uatTDZHr5TNa3RzdexQ+kxIKH9eBRFLEekPoXw80ryQRIg5w+zEMvNZSIcmoerMVj8yu8zFn9mQTw3lkKZ3yVy9dDuW1SVa2WidrF3q9h5tUsfmlptSnZjHrr/zr8+fav/8fPj59
edroxX7DvLtfeP3+VPz91HQlQgfFEnqFNhU/jbXYF/XYB3mw251+D9YBovcHbjMthcbbMDhgTu0Zj5Re0S7jd2jfS/cB1zPjy3248cKvPBjLeZMkCe+lvPb:4E27^FO0,192^GFA,02048,02048,00016,:Z64:eJzt1LEOgjAQBuAixrrV0Q3fBB/FN6GJg24+gj4KiYOrj4CTa90YiGdLq/wtJiYEjTHeQr4cJdceV8b+0YqBZDu0yKMSnRRx5Vnxq+fSd/pbFqQDDiA1psZZYHpq2ZNjZb1U1tx55VomKuuNq1iUvhOX3wbuO//K377+U/sP+xP279Ffde+//2yHfV92dlabdXYa+kJ0hoFKTkR7sCj03wzzJXT+AOZmPeyWm8+DI+MicN64LlCCU6+8esC9+44HHhOt0dPoOERPGAs9Qs/0FYte6BrRc22s7x9vixum6d9H:5E92^FT342,388^A0I,48,48^FB303,1,0,C^FH\^FDSavanna ^FS^FT342,327^A0I,48,48^FB303,1,0,C^FH\^FDData Services^FS^FO10,304^GB386,0,8^FS^BY3,4^FT354,47^B7I,4,0,,,N^FH\^FDhttps://developer.zebra.com/community/tools/eaidata^FS^FT310,146^A0I,28,28^FH\^FDLearn more here:^FS^PQ1,0,1,Y^XZ' print("ZPL is: " + zpl) # Define your printer serial number. This can often be found listed on a sticker on the printer. # This printer must have an external network connection and support LinkOS printer_serial = '' print("Printer serial is: " + printer_serial) ``` The actual printing is achieved via a Savanna [REST API for Printing](https://developer.zebra.com/printers-print/apis). Run the code below to print the ZPL to the previously specified printer serial number ``` import requests import json from requests.exceptions import HTTPError # Note the Sandbox API is being used, this will change in the future if the API is moved to production savanna_url = 'https://sandbox-api.zebra.com/v2/printers-basic/' + printer_serial + '/sendRawData' savanna_headers = {'apikey': api_key, 'content-Type':'application/json', 'accept':'application/json', 'cache-control': 'no-cache'} printBody = zpl try: print("URL is: " + savanna_url) response = requests.post(url = savanna_url, headers = savanna_headers, data = printBody) response.raise_for_status() except HTTPError as http_err: print(f'HTTP error: {http_err}') print(response.json()) except Exception as err: print(f'Other error: {err}') else: print(json.dumps(response.json(), indent=4)) ``` After a few seconds, you should see the following printout appear: ![Printout](https://raw.githubusercontent.com/darryncampbell/Workbooks/master/media/printer-basic.jpg)
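For reuse, the request above can be wrapped in a small helper function; this is only a repackaging of the exact call shown earlier (same sandbox endpoint and headers), not an additional Savanna API:

```
import requests

def send_zpl(api_key, printer_serial, zpl):
    """Send raw ZPL to a printer via the Savanna sandbox sendRawData endpoint."""
    url = 'https://sandbox-api.zebra.com/v2/printers-basic/' + printer_serial + '/sendRawData'
    headers = {'apikey': api_key,
               'content-Type': 'application/json',
               'accept': 'application/json',
               'cache-control': 'no-cache'}
    response = requests.post(url=url, headers=headers, data=zpl)
    response.raise_for_status()   # surface HTTP errors to the caller
    return response.json()

# Example usage with the values defined earlier in this notebook:
# print(send_zpl(api_key, printer_serial, zpl))
```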
# Lists

Earlier when discussing strings we introduced the concept of a *sequence* in Python. Lists can be thought of as the most general version of a *sequence* in Python. Unlike strings, they are mutable, meaning the elements inside a list can be changed!

In this section we will learn about:

1.) Creating lists
2.) Indexing and Slicing Lists
3.) Basic List Methods
4.) Nesting Lists
5.) Introduction to List Comprehensions

Lists are constructed with brackets [] and commas separating every element in the list.

Let's go ahead and see how we can construct lists!

```
# Assign a list to a variable named my_list
my_list = [1,2,3]
```

We just created a list of integers, but lists can actually hold different object types. For example:

```
my_list = ['A string',23,100.232,'o']
```

Just like strings, the len() function will tell you how many items are in the sequence of the list.

```
len(my_list)
```

### Indexing and Slicing
Indexing and slicing work just like in strings. Let's make a new list to remind ourselves of how this works:

```
my_list = ['one','two','three',4,5]

# Grab element at index 0
my_list[0]

# Grab index 1 and everything past it
my_list[1:]

# Grab everything UP TO index 3
my_list[:3]
```

We can also use + to concatenate lists, just like we did for strings.

```
my_list + ['new item']
```

Note: This doesn't actually change the original list!

```
my_list
```

You would have to reassign the list to make the change permanent.

```
# Reassign
my_list = my_list + ['add new item permanently']

my_list
```

We can also use * for duplication, similar to strings:

```
# Make the list double
my_list * 2

# Again doubling not permanent
my_list
```

## Basic List Methods

If you are familiar with another programming language, you might start to draw parallels between arrays in another language and lists in Python. Lists in Python, however, tend to be more flexible than arrays in other languages for two good reasons: they have no fixed size (meaning we don't have to specify how big a list will be), and they have no fixed type constraint (like we've seen above).

Let's go ahead and explore some more special methods for lists:

```
# Create a new list
list1 = [1,2,3]
```

Use the **append** method to permanently add an item to the end of a list:

```
# Append
list1.append('append me!')

# Show
list1
```

Use **pop** to "pop off" an item from the list. By default pop takes off the last index, but you can also specify which index to pop off. Let's see an example:

```
# Pop off the 0 indexed item
list1.pop(0)

# Show
list1

# Assign the popped element, remember default popped index is -1
popped_item = list1.pop()

popped_item

# Show remaining list
list1
```

It should also be noted that list indexing will return an error if there is no element at that index. For example:

```
list1[100]
```

We can use the **sort** and **reverse** methods to also affect your lists:

```
new_list = ['a','e','x','b','c']

#Show
new_list

# Use reverse to reverse order (this is permanent!)
new_list.reverse()

new_list

# Use sort to sort the list (in this case alphabetical order, but for numbers it will go ascending)
new_list.sort()

new_list
```

## Nesting Lists
A great feature of Python data structures is that they support *nesting*. This means we can have data structures within data structures. For example: A list inside a list.

Let's see how this works!
``` # Let's make three lists lst_1=[1,2,3] lst_2=[4,5,6] lst_3=[7,8,9] # Make a list of lists to form a matrix matrix = [lst_1,lst_2,lst_3] # Show matrix ``` We can again use indexing to grab elements, but now there are two levels for the index. The items in the matrix object, and then the items inside that list! ``` # Grab first item in matrix object matrix[0] # Grab first item of the first item in the matrix object matrix[0][0] ``` # List Comprehensions Python has an advanced feature called list comprehensions. They allow for quick construction of lists. To fully understand list comprehensions we need to understand for loops. So don't worry if you don't completely understand this section, and feel free to just skip it since we will return to this topic later. But in case you want to know now, here are a few examples! ``` # Build a list comprehension by deconstructing a for loop within a [] first_col = [row[0] for row in matrix] first_col ``` We used a list comprehension here to grab the first element of every row in the matrix object. We will cover this in much more detail later on! For more advanced methods and features of lists in Python, check out the Advanced Lists section later on in this course!
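For a little extra practice (illustrations added here, not part of the original lesson), two more simple list comprehensions:

```
# Squares of the numbers 0 through 4
squares = [x**2 for x in range(5)]
squares

# Keep only the even numbers from a list
evens = [n for n in [1,2,3,4,5,6] if n % 2 == 0]
evens
```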
# 02 Harmonic Vibrational Analysis

The purpose of this project is to extend your fundamental Python language programming techniques through a normal coordinate/harmonic vibrational frequency calculation. The theoretical background and a concise set of instructions for this project may be found [here](https://github.com/CrawfordGroup/ProgrammingProjects/blob/master/Project%2302/project2-instructions.pdf).

Original authors (Crawford, et al.) thank Dr. Yukio Yamaguchi of the University of Georgia for the original version of this project.

```
# The following os.chdir code is only needed for thebe (live code), since thebe's default working directory is /home/jovyan
import os
if os.getcwd().split("/")[-1] != "Project_02":
    os.chdir("source/Project_02")
from solution_02 import Molecule as SolMol

# The following call configures numpy array pretty printing
import numpy as np
np.set_printoptions(precision=7, linewidth=120, suppress=True)
```

## Step 1: Read the Coordinate Data

The coordinate data are given in a format identical to that for [Project #1](../Project_01/Project_01.ipynb). The test case for the remainder of this project is the water molecule ({download}`input/h2o_geom.txt`), optimized at the SCF/DZP level of theory. You can find the coordinates (in bohr) in the input directory.

In this project, the `Molecule` object can be initialized as in the following toggled code.

```
class Molecule:
    def __init__(self):
        self.atom_charges = NotImplemented  # type: np.ndarray
        self.atom_coords = NotImplemented  # type: np.ndarray
        self.natm = NotImplemented  # type: int
        self.hess = NotImplemented  # type: np.ndarray

    def construct_from_dat_file(self, file_path: str):
        # Same as Project 01
        with open(file_path, "r") as f:
            dat = np.array([line.split() for line in f.readlines()][1:])
            self.atom_charges = np.array(dat[:, 0], dtype=float).astype(int)
            self.atom_coords = np.array(dat[:, 1:4], dtype=float)
            self.natm = self.atom_charges.shape[0]
```

## Step 2: Read the Cartesian Hessian Data

The primary input data for the harmonic vibrational calculation is the Hessian matrix, which consists of second derivatives of the energy with respect to atomic positions.

$$
F_{r_i s_j} = \frac{\partial^2 E}{\partial q_{r_i} \partial q_{s_j}}
$$

Notations:

- $E$: Total energy of the molecule
- $i, j$: refer to atom indices (0, 1, 2, ...)
- $r, s$: refer to coordinate components ($x$, $y$, $z$)
- $q_{x_0}$: $x$ coordinate component of atom 0

The Hessian matrix (in units of $E_\mathrm{h} / a_0^2$, where $E_\mathrm{h}$ stands for Hartree energy, and $a_0$ Bohr radius) can be downloaded here ({download}`input/h2o_hessian.txt`) for the H2O test case. The first integer in the file is the number of atoms (which you may compare to the corresponding value from the geometry file as a test of consistency), while the remaining values have the following format:

$$
\begin{matrix}
F_{x_1, x_1} & F_{x_1, y_1} & F_{x_1, z_1} \\
F_{x_1, x_2} & F_{x_1, y_2} & F_{x_1, z_2} \\
\vdots & \vdots & \vdots \\
F_{x_2, x_1} & F_{x_2, y_1} & F_{x_2, z_1} \\
\vdots & \vdots & \vdots
\end{matrix}
$$

````{admonition} Hint 1: Reading and storing Hessian
:class: dropdown

The Hessian stored in memory should be a *symmetric* matrix, while the format of the input file is rectangular. Understanding the translation between the two takes a bit of thinking. One may take advantage of `numpy.reshape` ([NumPy API](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html)) to compactly convert the rectangular layout into the symmetric matrix.
````

### Implementation

Reader should fill all `NotImplementedError` in the following code:

```
def obtain_hessian(mole: Molecule, file_path: str):
    # Input: Read Hessian file from `file_path`
    # Attribute modification: Obtain raw Hessian to `mole.hess`
    raise NotImplementedError("About 2~15 lines of code")

Molecule.obtain_hessian = obtain_hessian
```

### Solution

```
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/h2o_geom.txt")
sol_mole.obtain_hessian("input/h2o_hessian.txt")
sol_mole.hess
```

## Step 3: Mass-Weight the Hessian Matrix

Divide each element of the Hessian matrix by the product of square roots of the masses of the atoms associated with the given coordinates:

$$
F_{r_i s_j}^\mathrm{M} = \frac{F_{r_i s_j}}{\sqrt{m_i m_j}}
$$

where $m_i$ represents the mass of the atom corresponding to atom $i$. Use atomic mass units ($\mathsf{amu}$) for the masses, just as for [Project 01](../Project_01/Project_01.ipynb).

### Implementation

Reader should fill all `NotImplementedError` in the following code:

```
def mass_weighted_hess(mole: Molecule) -> np.ndarray or list:
    # Output: Mass-weighted Hessian matrix (in unit Eh/(amu*a0^2))
    raise NotImplementedError("About 2~10 lines of code")

Molecule.mass_weighted_hess = mass_weighted_hess
```

### Solution

```
sol_mole.mass_weighted_hess()
```

## Step 4: Diagonalize the Mass-Weighted Hessian Matrix

Compute the eigenvalues of the mass-weighted Hessian:

$$
\mathbf{F}^\mathrm{M} \mathbf{L} = \mathbf{L} \mathbf{\Lambda}
$$

where $\mathbf{\Lambda}$ is a diagonal matrix containing eigenvalues, and $\mathbf{L}$ contains eigenvectors. You should consider using the same canned diagonalization function you used in [Project 01](../Project_01/Project_01.ipynb).

### Implementation

Reader should fill all `NotImplementedError` in the following code:

```
def eig_mass_weight_hess(mole: Molecule) -> np.ndarray or list:
    # Output: Eigenvalues of the mass-weighted Hessian matrix (in unit Eh/(amu*a0^2))
    raise NotImplementedError("Exactly 1 line of code using numpy")

Molecule.eig_mass_weight_hess = eig_mass_weight_hess
```

### Solution

```
sol_mole.eig_mass_weight_hess()
```

## Step 5: Compute the Harmonic Vibrational Frequencies

The vibrational frequencies are proportional to the square root of the eigenvalues of the mass-weighted Hessian:

$$
\tilde \omega_i = \mathrm{constant} \times \sqrt{\lambda_i}
$$

The most common unit to use for vibrational frequencies is $\mathsf{cm}^{-1}$ ([spectroscopy wavenumber](https://en.wikipedia.org/wiki/Wavenumber)).

````{admonition} Hint 1: Unit dimension analysis
:class: dropdown
It should be relatively apparent that the unit of the eigenvalues $\lambda_i$ is the same as that of $F_{r_i s_j}^\mathrm{M}$, i.e. $E_\mathrm{h} / (\mathsf{amu} \cdot a_0^2)$. If we regard $E_\mathrm{h}$, $\mathrm{amu}$ and $a_0$ as unit conversion constants (their values expressed in $\mathsf{J}$, $\mathsf{kg}$ and $\mathsf{m}$, respectively), then we may write

$$
\frac{E_\mathrm{h}}{\mathrm{amu} \, a_0^2} = \frac{E_\mathrm{h} / \mathsf{J}}{(\mathrm{amu} / \mathsf{kg}) \, (a_0 / \mathsf{m})^2} \, \mathsf{s}^{-2}
$$

So the unit of $\sqrt{\lambda_i}$ after this conversion is exactly $\mathsf{s}^{-1} = \mathsf{Hz}$.
In spectroscopy, wavenumber refers to a frequency divided by the speed of light in vacuum:

$$
\tilde \omega_i = \frac{\sqrt{\lambda_i}}{2 \pi c}
$$

So finally, the unit conversion should be

$$
\frac{\mathrm{centi}}{2 \pi c} \sqrt{\frac{E_\mathrm{h}}{\mathrm{amu} \, a_0^2}} \, \mathsf{cm}^{-1}
$$

All the unit conversion constants in the formula above can be obtained directly from `scipy.constants` ([SciPy API](https://docs.scipy.org/doc/scipy/reference/constants.html)).
````

````{admonition} Hint 2: Imaginary frequency
:class: dropdown
In some cases, you may find that some $\lambda_i$ are smaller than zero, making $\sqrt{\lambda_i}$ imaginary. Such a "frequency" is called an imaginary frequency. If these values are far from zero, they can be an indication that the molecule is far from its optimized geometry or near a transition state. We do not discuss imaginary frequencies in detail here. Usually, these imaginary values are reported as negative values in common quantum chemistry software (so the reader should bear in mind that negative frequencies are actually imaginary). So, the code is something like

```python
return np.sign(eigs) * np.sqrt(np.abs(eigs)) * unit_conversion
```
````

````{admonition} Hint 3: "Rotational" Frequencies
:class: dropdown
For a fully geometry-optimized molecule, the three lowest frequencies should be *exactly* zero. However, you may find three frequencies that are only *near* zero. This means the structure used in the computation is not exactly a stationary point on the potential energy surface (PES), although it is really close to one.
````

### Implementation

Reader should fill all `NotImplementedError` in the following code:

```
def harmonic_vib_freq(mole: Molecule) -> np.ndarray or list:
    # Output: Harmonic vibrational frequencies (in unit cm^-1)
    raise NotImplementedError("About 2~15 lines of code")

Molecule.harmonic_vib_freq = harmonic_vib_freq
```

### Solution

```
sol_mole.harmonic_vib_freq()
```

## Test Cases

**Water**

- Geometry file: {download}`input/h2o_geom.txt`
- Hessian file: {download}`input/h2o_hessian.txt`

```
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/h2o_geom.txt")
sol_mole.obtain_hessian("input/h2o_hessian.txt")
sol_mole.print_solution_02()
```

**Benzene**

- Geometry file: {download}`input/benzene_geom.txt`
- Hessian file: {download}`input/benzene_hessian.txt`

```
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/benzene_geom.txt")
sol_mole.obtain_hessian("input/benzene_hessian.txt")
sol_mole.print_solution_02()
```

**3-chloro-1-butene**

- Geometry file: {download}`input/3c1b_geom.txt`
- Hessian file: {download}`input/3c1b_hessian.txt`

```
sol_mole = SolMol()
sol_mole.construct_from_dat_file("input/3c1b_geom.txt")
sol_mole.obtain_hessian("input/3c1b_hessian.txt")
sol_mole.print_solution_02()
```

## References

- Wilson, E. B.; Decius, J. C.; Cross, P. C. *Molecular Vibrations*; Dover Publications, 1980. ISBN-13: 978-0486639413
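For reference, here is a minimal sketch of one way the `NotImplementedError` stubs above might be filled in. This is not the project's official solution in `solution_02.py`; it assumes the Hessian file layout described in Step 2 and uses a small, hypothetical `ATOM_MASS` table. The functions could be attached to `Molecule` exactly as in the implementation cells.

```python
import numpy as np
from scipy import constants

# Hypothetical, truncated atomic mass table in amu; extend it for the atoms you need.
ATOM_MASS = {1: 1.008, 6: 12.011, 7: 14.007, 8: 15.999, 17: 35.45}

def obtain_hessian(mole, file_path):
    # First entry of the file is the atom count; the rest are rows of three values.
    with open(file_path, "r") as f:
        lines = [line for line in f.readlines() if line.strip()]
    natm = int(lines[0].split()[0])
    dat = np.array([line.split() for line in lines[1:]], dtype=float)
    mole.hess = dat.reshape(3 * natm, 3 * natm)

def mass_weighted_hess(mole):
    # Repeat each atomic mass three times (x, y, z) and divide elementwise.
    masses = np.repeat([ATOM_MASS[int(c)] for c in mole.atom_charges], 3)
    return mole.hess / np.sqrt(np.outer(masses, masses))

def eig_mass_weight_hess(mole):
    return np.linalg.eigvalsh(mass_weighted_hess(mole))

def harmonic_vib_freq(mole):
    eigs = eig_mass_weight_hess(mole)
    E_h = constants.physical_constants["Hartree energy"][0]        # J
    amu = constants.physical_constants["atomic mass constant"][0]  # kg
    a0 = constants.physical_constants["Bohr radius"][0]            # m
    # sqrt(Eh/(amu*a0^2)) converts to s^-1; divide by 2*pi*c (in cm/s) for cm^-1
    conv = np.sqrt(E_h / (amu * a0 ** 2)) / (2 * np.pi * constants.c * 100.0)
    return np.sign(eigs) * np.sqrt(np.abs(eigs)) * conv
```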
# 1. Set up Environment

```
%pwd
%cd '/home/jovyan/work'

import os

if os.path.exists("adsi-at2.zip"):
    os.remove("adsi-at2.zip")
if os.path.exists("data_files/raw/beer_reviews.csv"):
    os.remove("data_files/raw/beer_reviews.csv")

%load_ext autoreload
%autoreload 2

import os
import pandas as pd
import numpy as np

pd.options.display.max_rows = 10000

# Download data from Kaggle API, unzip and place in data directory
os.environ['KAGGLE_USERNAME'] = "kallikrates"
os.environ['KAGGLE_KEY'] = "238b7c2704c0169326ee26d23a1d1d7c"
!kaggle datasets download -d kallikrates/adsi-at2
!unzip -q adsi-at2.zip -d /home/jovyan/work/data_files/raw
```

# 2. Load and Explore Data

```
df = pd.read_csv('data_files/raw/beer_reviews.csv')
df.head()
df.shape
df.info()
df.head()
```

# 3. Prepare Data

```
df_cleaned = df.copy()
```

### Drop unused variables

```
df_cleaned = df_cleaned.drop(['brewery_id', 'review_time','review_profilename','beer_beerid','beer_name','beer_abv'], axis=1)
```

### Create Categorical Variable Dictionary

```
arr_brewery_name = df_cleaned.brewery_name.unique()
arr_beer_style = df_cleaned.beer_style.unique()
lst_brewery_name = list(arr_brewery_name)
lst_beer_style = list(arr_beer_style)
cats_dict = {
    'brewery_name': [lst_brewery_name],
    'beer_style': [lst_beer_style]
}
```

### Quantify NULL Values

```
df_cleaned.isnull().sum()
df_cleaned.dropna(how='any', inplace=True)
```

### Transform Categorical column values with encoder

```
from sklearn.preprocessing import StandardScaler, OrdinalEncoder

for col, cats in cats_dict.items():
    col_encoder = OrdinalEncoder(categories=cats)
    df_cleaned[col] = col_encoder.fit_transform(df_cleaned[[col]])

num_cols = ['brewery_name','review_overall', 'review_aroma', 'review_appearance', 'review_palate', 'review_taste']
target_col = 'beer_style'

sc = StandardScaler()
df_cleaned[num_cols] = sc.fit_transform(df_cleaned[num_cols])
df_cleaned['beer_style'] = df_cleaned['beer_style'].astype(int)

X = df_cleaned
X.describe()
```

### Split Data

```
from sets import split_sets_by_time, save_sets, split_sets_random

X_train, y_train, X_val, y_val, X_test, y_test = split_sets_random(X, target_col=target_col, test_ratio=0.2, to_numpy=True)
save_sets(X_train=X_train, y_train=y_train, X_val=X_val, y_val=y_val, X_test=X_test, y_test=y_test, path='data_files/processed/beer')

from pytorch import PytorchDataset

train_dataset = PytorchDataset(X=X_train, y=y_train)
val_dataset = PytorchDataset(X=X_val, y=y_val)
test_dataset = PytorchDataset(X=X_test, y=y_test)
```

# 4. Model

```
from null import NullModel

baseline_model = NullModel(target_type='classification')
y_base = baseline_model.fit_predict(y_train)

from performance import print_class_perf

print_class_perf(y_base, y_train, set_name='Training', average='weighted')
```

# 5. Define Architecture

```
import torch
import torch.nn as nn
import torch.nn.functional as F
from pytorch import PytorchMultiClass

model = PytorchMultiClass(X_train.shape[1])

from pytorch import get_device

device = get_device()
model.to(device)
```

# 6. Train Model

```
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

from train_classification_model import train_classification
from test_classification_model import test_classification

N_EPOCHS = 2
BATCH_SIZE = 32

for epoch in range(N_EPOCHS):
    train_loss, train_acc = train_classification(train_dataset, model=model, criterion=criterion, optimizer=optimizer, batch_size=BATCH_SIZE, device=device)
    valid_loss, valid_acc = test_classification(val_dataset, model=model, criterion=criterion, batch_size=BATCH_SIZE, device=device)

    print(f'Epoch: {epoch}')
    print(f'\t(train)\t|\tLoss: {train_loss:.4f}\t|\tAcc: {train_acc * 100:.1f}%')
    # Also report the validation metrics computed above
    print(f'\t(valid)\t|\tLoss: {valid_loss:.4f}\t|\tAcc: {valid_acc * 100:.1f}%')

torch.save(model, "models/pytorch_multi_beer_evaluation.pt")

test_loss, test_acc = test_classification(test_dataset, model=model, criterion=criterion, batch_size=BATCH_SIZE, device=device)
print(f'\tLoss: {test_loss:.4f}\t|\tAccuracy: {test_acc:.1f}')
```
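The helper modules imported above (`sets`, `pytorch`, `null`, `performance`, `train_classification_model`, `test_classification_model`) are project files that are not shown in this notebook. As a rough illustration only, a minimal `PytorchDataset` wrapper compatible with the calls above might look like the sketch below; the actual implementation in `pytorch.py` may differ.

```python
import torch
from torch.utils.data import Dataset

class PytorchDataset(Dataset):
    """Assumed minimal version of the project's dataset wrapper (not the actual code)."""
    def __init__(self, X, y):
        self.X = torch.as_tensor(X, dtype=torch.float32)  # feature matrix
        self.y = torch.as_tensor(y, dtype=torch.long)     # integer class labels

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]
```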
<a href="https://colab.research.google.com/github/Hamood564/CNN/blob/main/Image_Recognition_with_a_CNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import numpy from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, BatchNormalization, Activation from keras.layers.convolutional import Conv2D, MaxPooling2D from keras.constraints import maxnorm from keras.utils import np_utils # Set random seed for purposes of reproducibility seed = 21 from keras.datasets import cifar10 # loading in the data (X_train, y_train), (X_test, y_test) = cifar10.load_data() # normalize the inputs from 0-255 to between 0 and 1 by dividing by 255 X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train = X_train / 255.0 X_test = X_test / 255.0 # one hot encode outputs y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) class_num = y_test.shape[1] model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=X_train.shape[1:], padding='same')) model.add(Activation('relu')) model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), activation='relu', padding='same')) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Conv2D(64, (3, 3), padding='same')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Conv2D(64, (3, 3), padding='same')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Conv2D(128, (3, 3), padding='same')) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Flatten()) model.add(Dropout(0.2)) model.add(Dense(256, kernel_constraint=maxnorm(3))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Dense(128, kernel_constraint=maxnorm(3))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Dense(class_num)) model.add(Activation('softmax')) epochs = 25 optimizer = 'adam' model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy']) print(model.summary()) numpy.random.seed(seed) model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=64) # Model evaluation scores = model.evaluate(X_test, y_test, verbose=0) print("Accuracy: %.2f%%" % (scores[1]*100)) ```
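The labels were one-hot encoded above, so `model.predict` returns a vector of class probabilities per image. As a small usage sketch (not part of the original notebook), the predicted class indices can be recovered with `argmax`:

```python
# Hypothetical follow-up: recover integer class labels from the softmax outputs
predicted_probs = model.predict(X_test[:5])
predicted_classes = numpy.argmax(predicted_probs, axis=1)
true_classes = numpy.argmax(y_test[:5], axis=1)
print("Predicted:", predicted_classes, "True:", true_classes)
```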
<a href="https://colab.research.google.com/github/ATOMconsortium/AMPL/blob/Tutorials/atomsci/ddm/examples/tutorials/05_EDA_Curate_Merge_Visualize.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Curate, Merge and Visualize Datasets

The ATOM Modeling PipeLine (AMPL; https://github.com/ATOMconsortium/AMPL) is an open-source, modular, extensible software pipeline for building and sharing models to advance in silico drug discovery.

## Scope of the tutorial

We will use the predownloaded ChEMBL datasets to carry out some of the following basic EDA using AMPL:

* Load the data (two protein targets (PGP and BCRP) data)
* Clean
* Curate and Merge datasets
* Filter data
* Carry out visualization
* Carry out some basic analysis

Protein target information:

* PGP: PGP phosphoglycolate phosphatase. You can go to the following link to learn about the gene, https://www.ncbi.nlm.nih.gov/gene/?term=283871
* BCRP or ABCG2: ATP binding cassette subfamily G member 2 (Junior blood group). You can go to the following link to learn about the gene, https://www.ncbi.nlm.nih.gov/gene/9429

## Before you begin, make sure you close all other COLAB notebooks.

# Change Runtime settings

If you have access to COLAB-Pro (commercial/not-free), please change your runtime settings to use GPU and high-memory, ```Runtime --> Change Runtime Type --> GPU with high-RAM```

If you are not a paid COLAB-Pro customer, you can still choose GPU, with standard-RAM.

## Time to run the notebook on COLAB-Pro: ~ 4 minutes

```
!date # starting time
```

## Install AMPL

```
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e

!pip install deepchem-nightly
import deepchem
deepchem.__version__

! pip install umap
! pip install llvmlite==0.34.0 --ignore-installed
! pip install umap-learn
! pip install molvs
! pip install bravado

import deepchem as dc

# get the Install AMPL_GPU_test.sh
!wget https://raw.githubusercontent.com/ravichas/AMPL-Tutorial/master/config/install_AMPL_GPU_test.sh

# run the script to install AMPL
! chmod u+x install_AMPL_GPU_test.sh
! ./install_AMPL_GPU_test.sh
```

## Let us download the datasets from Github

```
# ABCB1 MDR1_human
! wget https://raw.githubusercontent.com/ravichas/AMPL-Tutorial/master/datasets/pgp_chembl.csv

# ABCG2 ABCG2_human
! wget https://raw.githubusercontent.com/ravichas/AMPL-Tutorial/master/datasets/bcrp_chembl.csv

%%bash
head bcrp*csv
echo ' '
head pgp*csv
```

## Datasets were downloaded from the ChEMBL database. Here are the details of the columns

* chembl provides the ChEMBL id or name
* smile contains the SMILES strings
* pAct is the pChEMBL value. pChEMBL is defined as: -Log(molar IC50, XC50, EC50, AC50, Ki, Kd or Potency).
* relationship is the relation qualifier for the measurement, indicating whether the reported value is exact or censored; it is used as the relation column during curation

# Curate, merge and visualize data

Here we will load 2 datasets, curate the rows, and visualize some basic aspects of the data.
**Your starting dataset should have columns including:** - unique compound identifier - smiles strings (see https://pubchem.ncbi.nlm.nih.gov/idexchange/idexchange.cgi for a good lookup service) - **IMPORTANT: for optimal harmonizaton, first translate all SMILES to pubchem SMILES using the linked service** - either look up pubchem CIDs for compounds or do SMILES -> SMILES - this greatly improves RDKit base_smiles_from_smiles reduction - an assay measurement value, ie pIC50 - optional: 'relation' column indicating whether or not values are censored - labels like "active" or "inactive" or another label you want, such as the source of the data. ## Load packages ``` # manipulate data import pandas as pd # plot data import numpy as np import matplotlib.pyplot as plt # curate data import atomsci.ddm.utils.struct_utils as struct_utils import atomsci.ddm.utils.curate_data as curate_data # visualize compound structures import tempfile from rdkit import Chem from rdkit.Chem import Draw from itertools import islice from IPython.display import Image, display # visualize data import seaborn as sns sns.set_context('poster') import matplotlib_venn as mpv from scipy.stats import pearsonr # set up visualization parameters sns.set_context("poster") sns.set_style("whitegrid") sns.set_palette("Set2") pal = sns.color_palette() plt.rcParams['figure.figsize'] = [10,10] pd.set_option('display.max_columns',(90)) pd.set_option('display.max_rows',(20)) ``` # Curating and merging datasets Some of these merging steps will be highly dependent on the information contained in each individual dataset and what you want to keep after curation. ### **Step 0:** Read dataset Read in dataset to be curated and merged. ### **Step 1:** Initial curation for nan's and outliers Drop NA values for assay measurement column. Here you should also examine empty quotes, unexpected zeros, or other values that might not be real. ### **Step 2:** Canonicalize smiles strings Canonicalize smiles strings so they are comparable across datasets with `struct_utils.base_smiles_from_smiles`. Even though the next function performs this step, we need to do this here in order to maintain correct metadata indices. ``` dfp = pd.read_csv("pgp_chembl.csv", index_col=0) # Missing values for measurement column in this dataset print("NA values:", dfp.pAct.isna().sum()) dfp.drop(dfp[dfp.pAct.isna()].index, inplace=True) # canonicalize smiles strings dfp['rdkit_smile'] = dfp['smile'].apply(curate_data.base_smiles_from_smiles) #remove chemicals without smiles strings print("NA SMILES:", (dfp.smile=='').sum()+dfp.smile.isna().sum()) dfp.drop(dfp[dfp.smile == ""].index, inplace=True) dfp.drop(dfp[dfp.smile.isna()].index, inplace=True) dfp.head() dfp.describe() # look at length of dataset and duplicates. print('Name duplicates:', dfp['chembl'].duplicated().sum()) print('SMILES duplicates:', dfp.rdkit_smile.duplicated().sum()) print('Shape:',dfp.shape) ``` ### **Step 3:** Aggregate assay data Use the `aggregate_assay_data()` function to aggregate duplicate assay measurements, deal with censored values, and standardize measurement dataframe for modeling later. This method also re-does the canonical smiles strings. *Even if you know you have no duplicates, still perform this step to standardize the dataframe.* - `active_thresh` means the threshold value above/below which your compounds should be named 'active'/'inactive' or 1/0 for classification tasks. - here we pick 1uM (u = 10$^{-6}$) as our threshold and translate it into a pIC50 value to be the same as our data. 
``` # threshold -> pIC50 import numpy as np thresh=-np.log10(1/1000000) thresh dfp.head(3) #average duplicated data dfp_cur = curate_data.aggregate_assay_data(dfp, value_col='pAct', output_value_col='pAct_pgp', label_actives=True, active_thresh=thresh, id_col='chembl', smiles_col='rdkit_smile', relation_col='relationship', date_col=None) dfp_cur print('actives:', dfp_cur.active.sum(), ", inactives:", len(dfp_cur)-dfp_cur.active.sum()) print("Total uncensored values:", len(dfp_cur[dfp_cur.relation == ''])) dfp_cur.columns print("Shape of curated dataframe:", dfp_cur.shape) ``` ### **Step 3.5**: Repeat **Repeat** steps 1-3 for all dataframes you want to curate and merge. ``` dfb = pd.read_csv("bcrp_chembl.csv", index_col=0) # Missing values for measurement column in this dataset print("NA values:", dfb.pAct.isna().sum()) dfb.drop(dfb[dfb.pAct.isna()].index, inplace=True) # canonicalize smiles strings dfb['rdkit_smile'] = dfb['smile'].apply(curate_data.base_smiles_from_smiles) #remove chemicals without smiles strings print("NA SMILES:", (dfb.smile=='').sum()+dfb.smile.isna().sum()) dfb.drop(dfb[dfb.smile == ""].index, inplace=True) dfb.drop(dfb[dfb.smile.isna()].index, inplace=True) # look at length of dataset and duplicates. print('name dupes:', dfb['chembl'].duplicated().sum()) print('smiles dupes:', dfb.rdkit_smile.duplicated().sum()) print('shape:',dfb.shape) # BCRP #run this for every pIC column dfb_cur = curate_data.aggregate_assay_data(dfb, value_col='pAct', output_value_col='pAct_bcrp', label_actives=True, active_thresh=thresh, id_col='chembl', smiles_col='rdkit_smile', relation_col='relationship', date_col=None) print('actives:', dfb_cur.active.sum(), ", inactives:", len(dfb_cur)-dfb_cur.active.sum()) dfb_cur.columns print("Total uncensored values:", len(dfb_cur[dfb_cur.relation == ''])) print("Shape of curated dataframe:", dfb_cur.shape) ``` ### **Step 4:** Filter for other outliers ### Filter large compounds In general, molecular weight >2000 is removed ``` # calculate molecular weight dfp_cur["mol_wt"] = [Chem.Descriptors.ExactMolWt(Chem.MolFromSmiles(smile)) for smile in dfp_cur["base_rdkit_smiles"]] dfb_cur["mol_wt"] = [Chem.Descriptors.ExactMolWt(Chem.MolFromSmiles(smile)) for smile in dfb_cur["base_rdkit_smiles"]] # visualize distribution of molecular weights plot_df=dfp_cur plot_df=plot_df.sort_values(by='mol_wt') plot_df=plot_df.reset_index(drop=True) plot_df=plot_df.reset_index() fig,ax=plt.subplots(1,2, figsize=(30,10)) plot_df.plot(kind='scatter', x='index', y='mol_wt', color = pal[0], ax=ax[0]) plot_df.plot(kind='hist', x="index", y="mol_wt", color = pal[0], ax=ax[1]) fig.suptitle("Distribution of molecular weights in P-GP dataset"); # visualize distribution of molecular weights plot_df=dfb_cur plot_df=plot_df.sort_values(by='mol_wt') plot_df=plot_df.reset_index(drop=True) plot_df=plot_df.reset_index() fig,ax=plt.subplots(1,2, figsize=(30,10)) plot_df.plot(kind='scatter', x='index', y='mol_wt', color = pal[1], ax=ax[0]) plot_df.plot(kind='hist', x="index", y="mol_wt", color = pal[1], ax=ax[1]) fig.suptitle("Distribution of molecular weights in BCRP dataset"); # filter out high MWs dfp_cur=dfp_cur[dfp_cur.mol_wt<2000] dfb_cur=dfb_cur[dfb_cur.mol_wt<2000] ``` ### Filter for outlier pIC50 values In general, pIC50 values are between 2 and 14 ``` # visualize distribution of pActivity values plot_df=dfp_cur plot_df=plot_df.sort_values(by='pAct_pgp') plot_df=plot_df.reset_index(drop=True) plot_df=plot_df.reset_index() fig,ax=plt.subplots(1,2, figsize=(30,10)) 
plot_df.plot(kind='scatter', x='index', y='pAct_pgp', color = pal[0], ax=ax[0]) plot_df.plot(kind='hist', x="index", y="pAct_pgp", color = pal[0], ax=ax[1]) fig.suptitle("Distribution of pActivity values in P-GP dataset"); # visualize distribution of pActivity values plot_df=dfb_cur plot_df=plot_df.sort_values(by='pAct_bcrp') plot_df=plot_df.reset_index(drop=True) plot_df=plot_df.reset_index() fig,ax=plt.subplots(1,2, figsize=(30,10)) plot_df.plot(kind='scatter', x='index', y='pAct_bcrp', color = pal[1], ax=ax[0]) plot_df.plot(kind='hist', x="index", y="pAct_bcrp", color = pal[1], ax=ax[1]) fig.suptitle("Distribution of pActivity values in BCRP dataset"); # filter high or low pIC50 values dfp_cur=dfp_cur[dfp_cur.pAct_pgp<14] dfp_cur=dfp_cur[dfp_cur.pAct_pgp>2] dfb_cur=dfb_cur[dfb_cur.pAct_bcrp<14] dfb_cur=dfb_cur[dfb_cur.pAct_bcrp>2] ``` ### **Step 5**: (optional) Merge datasets for multitask modeling Merge processed assay values together for each dataframe. ``` print(dfp_cur.columns) print(dfb_cur.columns) mpv.venn2_unweighted([set(dfp_cur["base_rdkit_smiles"]), set(dfb_cur["base_rdkit_smiles"])], set_labels = ("pgp", "bcrp"), set_colors = (pal[0], pal[1])) plt.title('Overlap of compounds for P-GP and BCRP datasets'); # sanity check: 1659+757-199 merge1 = pd.merge(dfp_cur, dfb_cur, how="outer", on=("compound_id", "base_rdkit_smiles", 'mol_wt'), suffixes = ("_pgp", "_bcrp")) print("Shape of merge1 dataframe:", merge1.shape) merge1.columns ``` ### **Step 6: Double check for duplicate and missing values** Depending on your purpose, it may or may not matter if you have duplicates. Here we have a chemical with / without hydrochloride salt that reduces to the same base smiles. We will correct this single duplicate row. For more extensive differences, correct values in original csv's and start at step 1. ``` # check for missing or duplicate values: print("Name of dataset: merge1") print("Length of dataset:", merge1.shape[0]) print("\nMissing compound_id entries:", merge1['compound_id'].isna().sum()) print("Duplicate chembl_ids:", merge1['compound_id'].duplicated().sum()) print("\nMissing SMILES:", merge1['base_rdkit_smiles'].isna().sum()) print("Duplicate SMILES:", merge1['base_rdkit_smiles'].duplicated().sum()) print("\nMissing values should only be target-specific now because not all compounds have all measurements:") print("Other missing values:", merge1.columns[merge1.isna().any()].tolist()) print("Missing P-GP values:", merge1['pAct_pgp'].isna().sum()) print("Missing BCRP values:", merge1['pAct_bcrp'].isna().sum()) # what SMILES is duplicated, and why? 
merge1[merge1.base_rdkit_smiles.duplicated(keep=False)] # manually fix these rows merge1.loc[merge1.index==1916, 'relation_pgp'] = merge1.iloc[203].relation_pgp merge1.loc[merge1.index==1916, 'pAct_pgp'] = merge1.iloc[203].pAct_pgp merge1.loc[merge1.index==1916, 'active_pgp'] = merge1.iloc[203].active_pgp merge1=merge1.drop(index=203) print('Fixed row:') display(merge1[merge1.index==1916]) print('\nRemaining duplicates:') display(merge1[merge1.base_rdkit_smiles.duplicated(keep=False)]) print('\nShape of deduplicated dataframe:', merge1.shape) ``` ### **Step 6.5:** repeat ``` # check for missing or duplicate values: print("Name of dataset: dfp_cur") print("Length of dataset:", dfp_cur.shape[0]) print("\nMissing compound_id entries:", dfp_cur['compound_id'].isna().sum()) print("Duplicate chembl_ids:", dfp_cur['compound_id'].duplicated().sum()) print("\nMissing SMILES:", dfp_cur['base_rdkit_smiles'].isna().sum()) print("Duplicate SMILES:", dfp_cur['base_rdkit_smiles'].duplicated().sum()) print("\nOther missing values:", dfp_cur.columns[dfp_cur.isna().any()].tolist()) print("Missing P-GP values:", dfp_cur['pAct_pgp'].isna().sum()) # check for missing or duplicate values: print("Name of dataset: dfb_cur") print("Length of dataset:", dfb_cur.shape[0]) print("\nMissing compound_id entries:", dfb_cur['compound_id'].isna().sum()) print("Duplicate chembl_ids:", dfb_cur['compound_id'].duplicated().sum()) print("\nMissing SMILES:", dfb_cur['base_rdkit_smiles'].isna().sum()) print("Duplicate SMILES:", dfb_cur['base_rdkit_smiles'].duplicated().sum()) print("\nOther missing values:", dfb_cur.columns[dfb_cur.isna().any()].tolist()) print("Missing BCRP values:", dfb_cur['pAct_bcrp'].isna().sum()) ``` ### **Step 7:** Save dataframes ``` !pwd dfp_cur.to_csv("pgp_curated.csv") dfb_cur.to_csv("bcrp_curated.csv") merge1.to_csv("pgp_bcrp_merged.csv") ``` ## Optional steps: If you want to save the files, then mount your Google Drive and copy it to a directory of your choice ``` # from google.colab import drive # drive.mount('/content/drive') #copy files back to google drive to use them for other notebooks. # %%bash # cp /content/bcrp_curated.csv /content/drive/My\ Drive/ # cp /content/pgp_curated.csv /content/drive/My\ Drive/ # cp /content/pgp_bcrp_merged.csv /content/drive/My\ Drive/ # cp /content/pgp_chembl.csv /content/drive/My\ Drive/ # cp /content/bcrp_chembl.csv /content/drive/My\ Drive/ !ls -l !date !echo "done" ```
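As an optional extra (a sketch, not part of the original tutorial): `pearsonr` is imported in the packages cell above but not used in the cells shown, and the merged dataframe makes it easy to ask how correlated the two activities are for compounds measured against both targets:

```python
# Compounds with both P-GP and BCRP measurements in the merged dataframe
both = merge1.dropna(subset=['pAct_pgp', 'pAct_bcrp'])
r, p = pearsonr(both['pAct_pgp'], both['pAct_bcrp'])
print(f"Overlapping compounds: {len(both)}, Pearson r = {r:.2f} (p = {p:.2e})")

# Quick scatter of the overlap
both.plot(kind='scatter', x='pAct_pgp', y='pAct_bcrp', color=pal[2])
plt.title('P-GP vs BCRP activity for shared compounds');
```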
<a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/spark/pyspark_basic_linear_regression_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# PySpark Linear Regression Model

## Setup PySpark instance

To run Spark in Colab, we need to first install all the dependencies in the Colab environment, i.e. Apache Spark 2.4.5 with Hadoop 2.7, Java 8 and Findspark to locate Spark in the system.

```
#@title ### Setup PySpark instance
#@markdown To run Spark in Colab, we need to first install all the dependencies in the Colab environment, i.e. Apache Spark 2.4.5 with Hadoop 2.7, Java 8 and Findspark to locate Spark in the system.

#@markdown **Upon successful completion of this cell a ``SparkSession`` context named ``spark`` will be available to interact with the service.**

#@markdown Creating multiple ``SparkSession`` or ``SparkContext`` objects could
#@markdown cause issues. If you need to get a reference to the context it is
#@markdown recommended to use ``SparkSession.builder.getOrCreate()``.

!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz
!tar xf spark-2.4.5-bin-hadoop2.7.tgz
!pip install -q findspark

import os
import findspark

# environment variables
os.environ['JAVA_HOME'] = '/usr/lib/jvm/java-8-openjdk-amd64'
os.environ['SPARK_HOME'] = 'spark-2.4.5-bin-hadoop2.7'

# check installation
findspark.init()

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
spark
```

## Linear Regression Model

Download the Boston housing dataset.

```
!wget -q https://raw.githubusercontent.com/martin-fabbri/colab-notebooks/master/data/boston.csv
dataset = spark.read.csv('boston.csv', inferSchema=True, header =True)
```

``SparkSession`` has an attribute called ``catalog`` which lists all the tables registered in the cluster.

```
spark.catalog.listTables()
dataset.printSchema()
```

The next step is to convert all the features from the different columns into a single vector column; we call this new column 'Attributes' via the outputCol argument.

```
from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler(inputCols=['CRIM', 'ZN', 'INDUS', 'CHAS', 'NX', \
                                       'RM', 'AGE', 'DIS', 'RAD', 'TAX', \
                                       'PTRATIO', 'B', 'LSTAT'], outputCol='Attributes')

output = assembler.transform(dataset)

# input vs output
finalized_data = output.select('Attributes', 'MEDV')
finalized_data.show()
```

Our finalized data has two columns, ``Attributes`` and ``MEDV``: the input features and the target column, respectively. Next, we should split the data into training and test sets before fitting the model.
```
# LinearRegression is needed below but was not imported above
from pyspark.ml.regression import LinearRegression

train_data, test_data = finalized_data.randomSplit([0.8, 0.2])

regressor = LinearRegression(featuresCol='Attributes', labelCol='MEDV')
regressor = regressor.fit(train_data)

pred = regressor.evaluate(test_data)
pred.predictions.show()

# coefficients of the regression model
coeff = regressor.coefficients

# X and Y intercept
intr = regressor.intercept

print("The coefficients of the model are : %a" % coeff)
print("The Intercept of the model is : %f" % intr)

from pyspark.ml.evaluation import RegressionEvaluator

eval = RegressionEvaluator(labelCol="MEDV", predictionCol="prediction", metricName="rmse")

# Root Mean Square Error
rmse = eval.evaluate(pred.predictions)
print("RMSE: %.3f" % rmse)

# Mean Square Error
mse = eval.evaluate(pred.predictions, {eval.metricName: "mse"})
print("MSE: %.3f" % mse)

# Mean Absolute Error
mae = eval.evaluate(pred.predictions, {eval.metricName: "mae"})
print("MAE: %.3f" % mae)

# r2 - coefficient of determination
r2 = eval.evaluate(pred.predictions, {eval.metricName: "r2"})
print("r2: %.3f" % r2)
```
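Note that `regressor.evaluate(test_data)` returns a summary object, which is why the predictions are accessed via `pred.predictions` above. Equivalently (a small usage sketch, not part of the original notebook), the fitted model can be applied to any DataFrame containing an `Attributes` column with `transform()`:

```python
# transform() appends a 'prediction' column to the input DataFrame
predictions_df = regressor.transform(test_data)
predictions_df.select('MEDV', 'prediction').show(5)
```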
<a href="https://colab.research.google.com/github/chamikasudusinghe/nocml/blob/master/fft_r11_i1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Module Imports for Data Fetiching and Visualization ``` import time import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns ``` Module Imports for Data Processing ``` from sklearn import preprocessing from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 import pickle ``` Importing Dataset from GitHub Train Data ``` df1 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r11/2-fft-malicious-n-0-15-m-1-r11.csv?token=AKVFSOH4TC4IY3N4CSRVFYK63JDDO') df9 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r11/2-fft-normal-n-0-15-r11.csv?token=AKVFSOCYILXP5XA4GGQEKYC63JDES') df = df1.append(df9, ignore_index=True,sort=False) df = df.sort_values('timestamp') df.to_csv('fft-r1-train.csv',index=False) df = pd.read_csv('fft-r1-train.csv') df df.shape ``` Test Data ``` df13 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r11/2-fft-malicious-n-0-15-m-11-r11.csv?token=AKVFSOBY5KWVYP7MVPVLSKS63JDDQ') df14 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r11/2-fft-malicious-n-0-15-m-12-r11.csv?token=AKVFSODFX632MJC3CW5FU7S63JEEK') df15 = pd.read_csv('https://raw.githubusercontent.com/chamikasudusinghe/nocml/master/dos%20results%20ver%204/router-dataset/r11/2-fft-malicious-n-0-15-m-7-r11.csv?token=AKVFSOGYGNPHWB42VH7Q2GK63JEEQ') print(df13.shape) print(df14.shape) print(df15.shape) ``` Processing ``` df.isnull().sum() df = df.drop(columns=['timestamp','src_ni','src_router','dst_ni','dst_router']) df.corr() plt.figure(figsize=(25,25)) sns.heatmap(df.corr(), annot = True) plt.show() def find_correlation(data, threshold=0.9): corr_mat = data.corr() corr_mat.loc[:, :] = np.tril(corr_mat, k=-1) already_in = set() result = [] for col in corr_mat: perfect_corr = corr_mat[col][abs(corr_mat[col])> threshold].index.tolist() if perfect_corr and col not in already_in: already_in.update(set(perfect_corr)) perfect_corr.append(col) result.append(perfect_corr) select_nested = [f[1:] for f in result] select_flat = [i for j in select_nested for i in j] return select_flat columns_to_drop = find_correlation(df.drop(columns=['target'])) columns_to_drop df = df.drop(columns=['inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index']) plt.figure(figsize=(11,11)) sns.heatmap(df.corr(), annot = True) plt.show() plt.figure(figsize=(11,11)) sns.heatmap(df.corr()) plt.show() ``` Processing Dataset for Training ``` train_X = df.drop(columns=['target']) train_Y = df['target'] #standardization x = train_X.values min_max_scaler = preprocessing.MinMaxScaler() columns = train_X.columns x_scaled = min_max_scaler.fit_transform(x) train_X = pd.DataFrame(x_scaled) train_X.columns = columns train_X train_X[train_X.duplicated()].shape test_X = 
df13.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index']) test_Y = df13['target'] x = test_X.values min_max_scaler = preprocessing.MinMaxScaler() columns = test_X.columns x_scaled = min_max_scaler.fit_transform(x) test_X = pd.DataFrame(x_scaled) test_X.columns = columns print(test_X[test_X.duplicated()].shape) test_X test_X1 = df14.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index']) test_Y1 = df14['target'] x = test_X1.values min_max_scaler = preprocessing.MinMaxScaler() columns = test_X1.columns x_scaled = min_max_scaler.fit_transform(x) test_X1 = pd.DataFrame(x_scaled) test_X1.columns = columns print(test_X1[test_X1.duplicated()].shape) test_X2 = df15.drop(columns=['target','timestamp','src_ni','src_router','dst_ni','dst_router','inport','cache_coherence_type','flit_id','flit_type','vnet','current_hop','hop_percentage','port_index','cache_coherence_vnet_index','vnet_vc_cc_index']) test_Y2 = df15['target'] x = test_X2.values min_max_scaler = preprocessing.MinMaxScaler() columns = test_X2.columns x_scaled = min_max_scaler.fit_transform(x) test_X2 = pd.DataFrame(x_scaled) test_X2.columns = columns print(test_X2[test_X2.duplicated()].shape) ``` #### Machine Learning Models Module Imports for Data Processing and Report Generation in Machine Learning Models ``` from sklearn.model_selection import train_test_split import statsmodels.api as sm from sklearn import metrics from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score from sklearn.metrics import accuracy_score from sklearn.metrics import mean_squared_error from sklearn.model_selection import KFold from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import cross_val_score ``` Labels 1. 0 - malicious 2. 
1 - good ``` train_Y = df['target'] train_Y.value_counts() ``` Training and Validation Splitting of the Dataset ``` seed = 5 np.random.seed(seed) X_train, X_test, y_train, y_test = train_test_split(train_X, train_Y, test_size=0.33, random_state=seed, shuffle=True) ``` Feature Selection ``` #SelectKBest for feature selection bf = SelectKBest(score_func=chi2, k='all') fit = bf.fit(X_train,y_train) dfscores = pd.DataFrame(fit.scores_) dfcolumns = pd.DataFrame(columns) featureScores = pd.concat([dfcolumns,dfscores],axis=1) featureScores.columns = ['Specs','Score'] print(featureScores.nlargest(10,'Score')) featureScores.plot(kind='barh') ``` Decision Tree Classifier ``` #decisiontreee from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import GridSearchCV dt = DecisionTreeClassifier(max_depth=20,max_features=10,random_state = 42) dt.fit(X_train,y_train) pickle.dump(dt, open("dt-r1.pickle.dat", 'wb')) y_pred_dt= dt.predict(X_test) dt_score_train = dt.score(X_train,y_train) print("Train Prediction Score",dt_score_train*100) dt_score_test = accuracy_score(y_test,y_pred_dt) print("Test Prediction Score",dt_score_test*100) y_pred_dt_test= dt.predict(test_X) dt_score_test = accuracy_score(test_Y,y_pred_dt_test) print("Test Prediction Score",dt_score_test*100) y_pred_dt_test= dt.predict(test_X1) dt_score_test = accuracy_score(test_Y1,y_pred_dt_test) print("Test Prediction Score",dt_score_test*100) y_pred_dt_test= dt.predict(test_X2) dt_score_test = accuracy_score(test_Y2,y_pred_dt_test) print("Test Prediction Score",dt_score_test*100) feat_importances = pd.Series(dt.feature_importances_, index=columns) feat_importances.plot(kind='barh') cm = confusion_matrix(y_test, y_pred_dt) class_label = ["Anomalous", "Normal"] df_cm = pd.DataFrame(cm, index=class_label,columns=class_label) sns.heatmap(df_cm, annot=True, fmt='d') plt.title("Confusion Matrix") plt.xlabel("Predicted Label") plt.ylabel("True Label") plt.show() print(classification_report(y_test,y_pred_dt)) dt_roc_auc = roc_auc_score(y_test, y_pred_dt) fpr, tpr, thresholds = roc_curve(y_test, dt.predict_proba(X_test)[:,1]) plt.figure() plt.plot(fpr, tpr, label='DTree (area = %0.2f)' % dt_roc_auc) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right") plt.savefig('DT_ROC') plt.show() ``` XGB Classifier ``` from xgboost import XGBClassifier from xgboost import plot_importance xgbc = XGBClassifier(max_depth=20,min_child_weight=1,n_estimators=500,random_state=42,learning_rate=0.2) xgbc.fit(X_train,y_train) pickle.dump(xgbc, open("xgbc-r11l-i1.pickle.dat", 'wb')) y_pred_xgbc= xgbc.predict(X_test) xgbc_score_train = xgbc.score(X_train,y_train) print("Train Prediction Score",xgbc_score_train*100) xgbc_score_test = accuracy_score(y_test,y_pred_xgbc) print("Test Prediction Score",xgbc_score_test*100) y_pred_xgbc_test= xgbc.predict(test_X) xgbc_score_test = accuracy_score(test_Y,y_pred_xgbc_test) print("Test Prediction Score",xgbc_score_test*100) y_pred_xgbc_test= xgbc.predict(test_X1) xgbc_score_test = accuracy_score(test_Y1,y_pred_xgbc_test) print("Test Prediction Score",xgbc_score_test*100) y_pred_xgbc_test= xgbc.predict(test_X2) xgbc_score_test = accuracy_score(test_Y2,y_pred_xgbc_test) print("Test Prediction Score",xgbc_score_test*100) plot_importance(xgbc) plt.show() cm = confusion_matrix(y_test, y_pred_xgbc) class_label = ["Anomalous", "Normal"] df_cm = pd.DataFrame(cm, 
index=class_label,columns=class_label) sns.heatmap(df_cm, annot=True, fmt='d') plt.title("Confusion Matrix") plt.xlabel("Predicted Label") plt.ylabel("True Label") plt.show() print(classification_report(y_test,y_pred_xgbc)) xgb_roc_auc = roc_auc_score(y_test, y_pred_xgbc) fpr, tpr, thresholds = roc_curve(y_test, xgbc.predict_proba(X_test)[:,1]) plt.figure() plt.plot(fpr, tpr, label='XGBoost (area = %0.2f)' % xgb_roc_auc) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right") plt.savefig('XGB_ROC') plt.show() ```
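The models pickled above (`dt-r1.pickle.dat`, `xgbc-r11l-i1.pickle.dat`) can be reloaded later for inference; a brief usage sketch (not part of the original notebook):

```python
# Reload the saved XGBoost model and score one of the held-out test sets again
loaded_xgbc = pickle.load(open("xgbc-r11l-i1.pickle.dat", "rb"))
reloaded_pred = loaded_xgbc.predict(test_X)
print("Reloaded model accuracy:", accuracy_score(test_Y, reloaded_pred) * 100)
```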
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression, LassoCV, RidgeCV, Lasso
from sklearn.metrics import mean_squared_error
import xgboost as xgb
import seaborn as sns

pd.set_option('display.max_columns', None)
%matplotlib inline

df = pd.read_csv("data/kaggle-house-prices/data_combined_cleaned.csv")
df.info()
df.head(10)

df_dummy = pd.get_dummies(df, drop_first=True)
df_dummy.info()

df_training = df_dummy[~np.isnan(df.SalesPrice)]
df_testing = df_dummy[np.isnan(df.SalesPrice)]
df_training.shape, df_testing.shape

plt.subplot(1, 2, 1)
df_training.SalesPrice.hist(bins = 100)
plt.subplot(1, 2, 2)
df_training.SalesPrice.plot.box()
plt.tight_layout()

y = np.log(df_training.SalesPrice.values)
df_tmp = df_training.copy()
del df_tmp["SalesPrice"]
del df_tmp["Id"]
X = df_tmp.values
df_tmp.head(4)

plt.subplot(1, 2, 1)
pd.Series(y).plot.hist(bins = 100)
plt.subplot(1, 2, 2)
pd.Series(y).plot.box()
plt.tight_layout()

pd.DataFrame(X).describe()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 1)

# Fit the scaler on the training set only and reuse the fitted scaler for the test set
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_test_std = scaler.transform(X_test)

def rmse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))

lr = LinearRegression()
lr.fit(X_train_std, y_train)
rmse(y_test, lr.predict(X_test_std))
```

The linear regression model performs very poorly. Most likely this is because the dummy-encoded categorical columns introduce a lot of collinearity. Try lasso, which is more robust against multicollinearity.

```
lasso = Lasso(random_state=1, max_iter=10000)
lasso.fit(X_train_std, y_train)
rmse(y_test, lasso.predict(X_test_std))
```

This RMSE score seems reasonable. Next, compute cross-validation scores.

```
scores = cross_val_score(cv=10, estimator=lasso, scoring="neg_mean_squared_error", X=X_train_std, y=y_train)
scores = np.sqrt(-scores)
scores

from sklearn import linear_model
from sklearn import metrics
from sklearn import tree
from sklearn import ensemble
from sklearn import neighbors
import xgboost as xgb

rs = 1
estimators = {
    #'Linear': linear_model.LinearRegression(),
    'Ridge': linear_model.Ridge(random_state=rs, max_iter=10000),
    'Lasso': linear_model.Lasso(random_state=rs, max_iter=10000),
    'ElasticNet': linear_model.ElasticNet(random_state=rs, max_iter=10000),
    'BayesRidge': linear_model.BayesianRidge(),
    'OMP': linear_model.OrthogonalMatchingPursuit(),
    'DecisionTree': tree.DecisionTreeRegressor(max_depth=10, random_state=rs),
    'RandomForest': ensemble.RandomForestRegressor(random_state=rs),
    'KNN': neighbors.KNeighborsRegressor(n_neighbors=5),
    # note: newer scikit-learn versions spell this loss "squared_error"
    'GradientBoostingRegressor': ensemble.GradientBoostingRegressor(n_estimators=300, max_depth=4, learning_rate=0.01, loss="ls", random_state=rs),
    'xgboost': xgb.XGBRegressor(max_depth=10)
}

errvals = {}
for k in estimators:
    e = estimators[k]
    e.fit(X_train_std, y_train)
    err = np.sqrt(metrics.mean_squared_error(y_test, e.predict(X_test_std)))
    errvals[k] = err

# pd.Series.from_array was removed in recent pandas; the plain constructor does the same job
result = pd.Series(errvals).sort_values()
result.plot.barh(width = 0.8)
# use a separate loop variable so the target vector y is not overwritten
for y_pos, error in enumerate(result):
    plt.text(x = 0.01, y = y_pos - 0.1, s = "%.3f" % error, fontweight='bold', color = "white")
plt.title("Performance comparison of algorithms")
```
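One possible follow-up, not part of the original analysis: rather than keeping lasso's default regularisation strength, cross-validation can choose `alpha` directly. This is a minimal sketch using scikit-learn's `LassoCV` (imported at the top of this notebook), reusing the `X_train_std`, `y_train`, `X_test_std`, `y_test` and `rmse` names defined in the cells above.

```
# Hedged sketch: let 10-fold cross-validation pick the lasso regularisation strength.
from sklearn.linear_model import LassoCV  # already imported at the top of this notebook

lasso_cv = LassoCV(cv=10, max_iter=10000, random_state=1)
lasso_cv.fit(X_train_std, y_train)

print("Chosen alpha:", lasso_cv.alpha_)
print("Test RMSE:", rmse(y_test, lasso_cv.predict(X_test_std)))
```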
# Testing through documentation

26.3 doctest — Test interactive Python examples: https://docs.python.org/3/library/doctest.html

<b>doctest</b> lets you <b>test</b> your code by running <b>examples embedded in the documentation</b> and verifying that they produce the expected results. It works by parsing the help text to find examples, running them, then comparing the output text against the expected value.

Many developers find `doctest` **easier** than `unittest` because in its simplest form, there is no API to learn before using it. However, as the examples become more complex, **the lack of fixture management** can make writing doctest tests more **cumbersome** than using unittest.

## 1 Using the command line test runner built into doctest

**doctest** looks for lines

* **beginning** with:
  * **>>>**, the `interpreter prompt`, to find the beginning of a test case;
* **ended** with:
  * a `blank` line,
  * the `next` interpreter prompt.

Here, `fun_multiply()` has two examples given in the module `doctest_simple.py`:

```
%%file ./code/doctest/doctest_simple.py

def fun_multiply(a, b):
    """
    >>> fun_multiply(2, 3)
    6
    >>> fun_multiply('a', 3)
    'aaa'
    """
    return a * b
```

To **run** the tests, use `doctest` as the `main program` via the **-m** option to the interpreter:

```bash
>python -m doctest *.py
```

```
!python -m doctest ./code/doctest/doctest_simple.py
```

Usually **no output** is produced. It means **all the examples worked**.

Pass **-v** to the script, and **doctest** prints a detailed log of what it's trying, and prints a summary at the end:

```bash
>python -m doctest -v *.py
```

```
!python -m doctest -v ./code/doctest/doctest_simple.py
```

**Test examples** cannot usually stand on their own as explanations of a function, so **doctest** also lets you keep the **surrounding text** you would normally include in the documentation. **Intervening text** is **ignored**, and can have `any format` as long as it does `not look like a test case`.

```
%%file ./code/doctest/doctest_simple_with_docs.py

def fun_multiply(a, b):
    """Returns a * b.

    Works with numbers:
    >>> fun_multiply(2, 3)
    6

    and strings:
    >>> fun_multiply('a', 3)
    'aaa'
    """
    return a * b

def fun_add(a, b):
    """Returns a + b.

    Works with numbers:
    >>> fun_add(2, 3)
    5

    and strings:
    >>> fun_add('1', '3')
    '13'
    """
    return a + b
```

```
!python -m doctest -v ./code/doctest/doctest_simple_with_docs.py
```

The surrounding text in the updated `docstring` is

* **useful to a human reader**,
* **ignored by doctest**,

and the results are the same.

## 2 Run doctest by calling testmod() at the bottom of modules

The simplest way to start using doctest is to **end** each module with:

```python
import doctest
doctest.testmod()
```

**doctest** then examines the `docstrings` in the module. The module docstring, and all function, class and method docstrings are searched.

```
def fun_multiply(a, b):
    """Returns a * b.

    Works with numbers:
    >>> fun_multiply(2, 3)
    6

    and strings:
    >>> fun_multiply('a', 3)
    'aaa'
    """
    return a * b

def fun_add(a, b):
    """Returns a + b.

    Works with numbers:
    >>> fun_add(2, 3)
    5

    and strings:
    >>> fun_add('1', '3')
    '13'
    """
    return a+b
    #return a + b+1

import doctest
doctest.testmod()
#doctest.testmod(verbose=2)
```

### Here's a complete but small example `module`

The Python code is a module file, not a cell within the notebook: https://docs.python.org/3.6/library/doctest.html

```
%%file ./code/doctest/doctest_example.py
"""
This is the "example" module.

The example module supplies one function, factorial(). For example,

>>> factorial(5)
120
"""

def factorial(n):
    """Return the factorial of n, an exact integer >= 0.

    >>> [factorial(n) for n in range(6)]
    [1, 1, 2, 6, 24, 120]
    >>> factorial(30)
    265252859812191058636308480000000
    >>> factorial(-1)
    Traceback (most recent call last):
        ...
    ValueError: n must be >= 0

    Factorials of floats are OK, but the float must be an exact integer:
    >>> factorial(30.1)
    Traceback (most recent call last):
        ...
    ValueError: n must be exact integer
    >>> factorial(30.0)
    265252859812191058636308480000000

    It must also not be ridiculously large:
    >>> factorial(1e100)
    Traceback (most recent call last):
        ...
    OverflowError: n too large
    """

    import math
    if not n >= 0:
        raise ValueError("n must be >= 0")
    if math.floor(n) != n:
        raise ValueError("n must be exact integer")
    if n+1 == n:  # catch a value like 1e300
        raise OverflowError("n too large")
    result = 1
    factor = 2
    while factor <= n:
        result *= factor
        factor += 1
    return result

if __name__ == "__main__":
    import doctest
    doctest.testmod()
    #doctest.testmod(verbose=2)
```

```
%run ./code/doctest/doctest_example.py
%run ./code/doctest/doctest_example.py -v
```

## DocTest of iapws.iapws97

https://github.com/jjgomera/iapws

```
!pip install iapws
```

```python
# Boundary Region2-Region3
def _P23_T(T):
    """Define the boundary between Region 2 and 3, P=f(T)

    Parameters
    ----------
    T : float
        Temperature [K]

    Returns
    -------
    P : float
        Pressure [MPa]

    References
    ----------
    IAPWS, Revised Release on the IAPWS Industrial Formulation 1997 for the
    Thermodynamic Properties of Water and Steam August 2007,
    http://www.iapws.org/relguide/IF97-Rev.html, Eq 5

    Examples
    --------
    >>> _P23_T(623.15)
    16.52916425
    """
    n = [0.34805185628969e3, -0.11671859879975e1, 0.10192970039326e-2]
    return n[0]+n[1]*T+n[2]*T**2
```

```
from iapws import iapws97
import doctest
doctest.testmod(iapws97)
```

## Further Reading

#### 26.3. doctest — Test interactive Python examples

https://docs.python.org/3.6/library/doctest.html

#### Python 3 Module of the Week

https://pymotw.com/3/

PyMOTW-3 is a series of articles written by `Doug Hellmann` to demonstrate how to use the modules of the Python 3 standard library. It is based on the original PyMOTW series, which covered Python 2.7. See About Python Module of the Week for details including the version of Python and tools used.

doctest – Testing through documentation: https://pymotw.com/3/doctest/index.html
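Beyond plain expected output, `doctest` also supports **option flags** that relax the comparison. This is a small sketch (not part of the tutorial above; the function and strings are made up for illustration) showing the standard `+ELLIPSIS` directive, which lets `...` in the expected output match details that vary:

```
def read_version():
    """Return a version string.

    >>> read_version()  # doctest: +ELLIPSIS
    'doctest-demo ...'
    """
    return 'doctest-demo 1.0.3'

import doctest
doctest.testmod()
```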
``` import nbformat import os from nbconvert.preprocessors import ExecutePreprocessor # Paths to look for .ipynb DIRS = ['../../course-python-beginner/lessons/python introduction', '../../course-python-beginner/lessons/numpy', '../../course-python-beginner/lessons/pandas'] OUT_DIRS = [] # empty list is same dir default to clean folder OUT_DIR = 'exercise' POSTFIX = '' # default not postfix REPLACE_FILENAME = ('', '') # Options LAST_ROW_STANDING = True # add a emtpy code cell at the end of the document SPECIFIC_VERSION = 1.1 # Functions def open_nb(filepath): with open(filepath, encoding='utf-8') as f: nb = nbformat.read(f, as_version=4) return nb def save_nb(out_dir, filepath, nb): dirname = os.path.dirname(filepath) # check if dir exist if not os.path.isdir(out_dir): os.mkdir(out_dir) # create basename filename = os.path.basename(filepath).replace(REPLACE_FILENAME[0], REPLACE_FILENAME[1]) filename = filename.replace('.ipynb', POSTFIX + '.ipynb') # my files have a ' - Loesung' at the end, which is to remove with open(os.path.normpath(os.path.join(out_dir, filename)), 'w', encoding='utf-8') as f: nbformat.write(nb, f) def add_version(nb): nb.metadata['ht_cell_export_version'] = SPECIFIC_VERSION def process_cells(nb): out_cells = [] nb_cell_state = {'last_codecell_delete': False, 'last_markdown_delete':False} # main loop for c in nb.get('cells'): # handle code cells if c.get('cell_type') == 'code': ht_protected = c.get('metadata').get('ht_protected', None) nb_cell_state['last_markdown_delete'] = False if ht_protected: out_cells.append(c) nb_cell_state['last_codecell_delete'] = True elif ht_protected == False: out_cells.append(c) nb_cell_state['last_codecell_delete'] = False else: # kill code cell nb_cell_state['last_codecell_delete'] = True elif c.get('cell_type') == 'markdown': ht_enforce_remove = c.get('metadata').get('ht_enforce_remove', None) # check if last one was markdown enforced or codecell if nb_cell_state['last_codecell_delete']: out_cells.append(nbformat.v4.new_code_cell()) if nb_cell_state['last_markdown_delete'] and \ ht_enforce_remove is None: out_cells.append(nbformat.v4.new_markdown_cell()) # reset code cell status nb_cell_state['last_codecell_delete'] = False if ht_enforce_remove: # kill markdown cell nb_cell_state['last_markdown_delete'] = True elif ht_enforce_remove == False: # kill markdown cell nb_cell_state['last_markdown_delete'] = False else: nb_cell_state['last_markdown_delete'] = False out_cells.append(c) if LAST_ROW_STANDING and nb_cell_state['last_codecell_delete']: out_cells.append(nbformat.v4.new_code_cell()) nb['cells'] = out_cells return nb if len(OUT_DIRS) == 1: out_dirs = [OUT_DIRS[0]] * len(DIRS) elif len(OUT_DIRS) == len(DIRS): out_dirs = OUT_DIRS elif len(OUT_DIRS) == 0: out_dirs = [d + '/' + OUT_DIR for d in DIRS] else: raise ValueError('Length missmatch between OUT_DIR and DIRS') out_dirs for out_dir, in_dir in zip(out_dirs, DIRS): for entry in os.scandir(in_dir): if entry.is_file() and entry.name.endswith(".ipynb"): nb = open_nb(entry.path) nb = process_cells(nb) save_nb(out_dir, entry.path, nb) nb = None print('Done Converting Files: {}'.format(entry.path)) ```
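The converter above keys entirely off the `ht_protected` and `ht_enforce_remove` cell-metadata flags. As a hedged sketch of how those flags could be set programmatically before running the conversion (the notebook path and the `# KEEP` marker are assumptions for illustration, not part of the original workflow), `nbformat` can tag cells like this:

```
# Hedged sketch: mark selected code cells as protected so process_cells() keeps them.
import nbformat

path = 'lesson.ipynb'  # hypothetical input notebook
nb = nbformat.read(path, as_version=4)

for cell in nb.cells:
    # protect any code cell whose source carries a '# KEEP' marker comment
    if cell.cell_type == 'code' and '# KEEP' in cell.source:
        cell.metadata['ht_protected'] = True

with open(path, 'w', encoding='utf-8') as f:
    nbformat.write(nb, f)
```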
``` import warnings warnings.filterwarnings("ignore") import time import numpy as np import pandas as pd import matplotlib.pyplot as plt #plt.style.use('fivethirtyeight') from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline from sklearn.feature_selection import VarianceThreshold from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score,f1_score,roc_curve, auc,roc_auc_score,precision_score,recall_score,matthews_corrcoef from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.naive_bayes import GaussianNB from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import mutual_info_classif,f_classif np.set_printoptions(precision=3) def feature_ranking_selection(X_train, y_train, n_features): ''' n_features: number of feature to select for training returns: feature name with pearson correlation coefficient(in descending) and selected n_features ''' df = X_train.copy() df['label'] = y_train.values correlation_mat = df.corr(method = 'pearson') feature_name = list(correlation_mat.index) ndf = pd.DataFrame() ndf['feature'] = feature_name ndf ['importance'] = abs((correlation_mat.iloc[:,-1]).values ) mdf = ndf[:-1] mdf = (mdf.sort_values(by='importance', ascending=False)).reset_index(drop = True) if n_features > len(mdf): print('Number features to select is too large.') return mdf else: selected_feature = list((mdf.iloc[0:n_features])['feature'].values) return mdf, selected_feature ``` # 1. CLR 202 ``` #Load dataset as pandas data frame filename = 'CLR_both_202.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = 
X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') ``` # 2. 
C 202 ``` #Load dataset as pandas data frame filename = "centre_both.csv" dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. 
of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') ``` # 3. 
L 202 ``` #Load dataset as pandas data frame filename = 'left_both.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. 
of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') ``` # 4. 
R 202 ``` #Load dataset as pandas data frame filename = 'right_both.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. 
of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') ``` # 5. 
CL 202 ``` #Load dataset as pandas data frame filename = 'CL_both_202.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. 
of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') ``` # 6. 
CR 202 ``` #Load dataset as pandas data frame filename = 'CR_both_202.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. 
of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') ``` # 7. 
# 7. LR 202

```
#Load dataset as pandas data frame
filename = 'LR_both_202.csv'
dataset = pd.read_csv(filename)

#Split data into input and output variables
X = dataset.iloc[:, 0:dataset.shape[1] - 1]
Y = dataset.iloc[:, -1]

X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y, test_size=0.3, random_state=100)
X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y, test_size=0.3, random_state=100)

# Removing Constant features
constant_filter = VarianceThreshold()
constant_filter.fit(X_trainA)
constant_columns = [col for col in X_trainA.columns
                    if col not in X_trainA.columns[constant_filter.get_support()]]
X_trainA.drop(labels=constant_columns, axis=1, inplace=True)
X_testA.drop(labels=constant_columns, axis=1, inplace=True)

# Removing Quasi-Constant features
qconstant_filter = VarianceThreshold(0.01)
qconstant_filter.fit(X_trainA)
qconstant_columns = [col for col in X_trainA.columns
                     if col not in X_trainA.columns[qconstant_filter.get_support()]]
X_trainA.drop(labels=qconstant_columns, axis=1, inplace=True)
X_testA.drop(labels=qconstant_columns, axis=1, inplace=True)

# Removing Correlated Features
correlated_features = set()
correlation_matrix = X_trainA.corr(method='pearson')
for i in range(len(correlation_matrix.columns)):
    for j in range(i):
        if abs(correlation_matrix.iloc[i, j]) > 0.4:
            colname = correlation_matrix.columns[i]
            correlated_features.add(colname)
X_trainA.drop(labels=correlated_features, axis=1, inplace=True)
X_testA.drop(labels=correlated_features, axis=1, inplace=True)

# Feature ranking and selection
ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15)
print('Feature Ranking information')
print('-' * 93)
print(ranking_info)
print(list(ranking_info['feature']))
print('-' * 93)

X_trainA = X_trainA[selected_features]
X_testA = X_testA[selected_features]

#names = ["Nearest Neighbors", "Decision Tree", "Naive Bayes"]
names = ["Decision Tree"]
classifiers = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(random_state=100)
    #GaussianNB(),
]
classifier2 = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=100)
    #GaussianNB(),
]

clf_bef = list()
clf_aft = list()

for name, clf, dlf in zip(names, classifiers, classifier2):
    # clf is fitted before feature selection, dlf after
    startB = time.time()
    clf.fit(X_trainB, y_trainB)
    endB = time.time()
    clf_bef.append(clf)

    startA = time.time()
    dlf.fit(X_trainA, y_trainA)
    endA = time.time()
    clf_aft.append(dlf)

    print('\t\t\t\tClassifier:', name.upper())
    print('-' * 93)
    print('-' * 93)
    print('\t\tBefore Feature Selection\tAfter Feature Selection')
    print('No. of features:\t', X_trainB.shape[1], '\t\t\t', X_trainA.shape[1])
    #print("Dataset Size(in MB):\t", (X_trainB.values.nbytes/1e6), '\t\t', (X_trainA.values.nbytes/1e6))
    # training accuracy
    train_predB = clf.predict(X_trainB)
    train_predA = dlf.predict(X_trainA)
    train_accB = round(accuracy_score(y_trainB, train_predB) * 100, 2)
    train_accA = round(accuracy_score(y_trainA, train_predA) * 100, 2)
    print('Train Accuracy:\t\t', train_accB, '\t\t\t', train_accA)
    # test accuracy
    test_predB = clf.predict(X_testB)
    test_predA = dlf.predict(X_testA)
    test_accB = round(accuracy_score(y_testB, test_predB) * 100, 2)
    test_accA = round(accuracy_score(y_testA, test_predA) * 100, 2)
    print('Test Accuracy:\t\t', test_accB, '\t\t\t', test_accA)
    # ROC AUC score
    test_roc_aucB = round(roc_auc_score(y_testB, test_predB), 2)
    test_roc_aucA = round(roc_auc_score(y_testA, test_predA), 2)
    print('ROC AUC score:\t\t', test_roc_aucB, '\t\t\t', test_roc_aucA)
    # F1 score
    test_f1B = round(f1_score(y_testB, test_predB), 2)
    test_f1A = round(f1_score(y_testA, test_predA), 2)
    print('f1_score:\t\t', test_f1B, '\t\t\t', test_f1A)
    # precision
    test_precB = round(precision_score(y_testB, test_predB), 2)
    test_precA = round(precision_score(y_testA, test_predA), 2)
    print('Precision:\t\t', test_precB, '\t\t\t', test_precA)
    # recall
    test_recallB = round(recall_score(y_testB, test_predB), 2)
    test_recallA = round(recall_score(y_testA, test_predA), 2)
    print('Recall:\t\t\t', test_recallB, '\t\t\t', test_recallA)
    # Matthews correlation coefficient
    test_MCCB = round(matthews_corrcoef(y_testB, test_predB), 2)
    test_MCCA = round(matthews_corrcoef(y_testA, test_predA), 2)
    print('MCC:\t\t\t', test_MCCB, '\t\t\t', test_MCCA)
    # training time
    timeB = round(float(endB) - float(startB), 2)
    timeA = round(float(endA) - float(startA), 2)
    print('Train Time (in seconds):', timeB, '\t\t\t', timeA)
    # confusion matrices
    cm_resultB = confusion_matrix(y_testB, test_predB)
    cm_resultA = confusion_matrix(y_testA, test_predA)
    print('Confusion Matrix (before):\n', cm_resultB)
    print('Confusion Matrix (after):\n', cm_resultA)
    print('-' * 93)
    print('-' * 93)
    print('\n')
```
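One thing worth noting about the metric block above: `roc_auc_score` is fed the hard 0/1 predictions, so the reported AUC reflects a single operating point rather than the full ranking ability of the tree. If a ranking-based AUC is wanted, the positive-class probability from `predict_proba` can be used instead; a minimal sketch, assuming `dlf`, `X_testA` and `y_testA` from the cell above are still in scope:

```
# Sketch only: AUC computed from class probabilities instead of hard predictions.
proba_A = dlf.predict_proba(X_testA)[:, 1]            # probability of the positive class
auc_from_proba = roc_auc_score(y_testA, proba_A)
print('ROC AUC from probabilities (after selection):', round(auc_from_proba, 2))
```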
# 8. CLR 606

```
#Load dataset as pandas data frame
filename = 'CLR_both_606.csv'
dataset = pd.read_csv(filename)

#Split data into input and output variables
X = dataset.iloc[:, 0:dataset.shape[1] - 1]
Y = dataset.iloc[:, -1]

X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y, test_size=0.3, random_state=100)
X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y, test_size=0.3, random_state=100)

# Removing Constant features
constant_filter = VarianceThreshold()
constant_filter.fit(X_trainA)
constant_columns = [col for col in X_trainA.columns
                    if col not in X_trainA.columns[constant_filter.get_support()]]
X_trainA.drop(labels=constant_columns, axis=1, inplace=True)
X_testA.drop(labels=constant_columns, axis=1, inplace=True)

# Removing Quasi-Constant features
qconstant_filter = VarianceThreshold(0.01)
qconstant_filter.fit(X_trainA)
qconstant_columns = [col for col in X_trainA.columns
                     if col not in X_trainA.columns[qconstant_filter.get_support()]]
X_trainA.drop(labels=qconstant_columns, axis=1, inplace=True)
X_testA.drop(labels=qconstant_columns, axis=1, inplace=True)

# Removing Correlated Features
correlated_features = set()
correlation_matrix = X_trainA.corr(method='pearson')
for i in range(len(correlation_matrix.columns)):
    for j in range(i):
        if abs(correlation_matrix.iloc[i, j]) > 0.4:
            colname = correlation_matrix.columns[i]
            correlated_features.add(colname)
X_trainA.drop(labels=correlated_features, axis=1, inplace=True)
X_testA.drop(labels=correlated_features, axis=1, inplace=True)

# Feature ranking and selection
ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15)
print('Feature Ranking information')
print('-' * 93)
print(ranking_info)
print(list(ranking_info['feature']))
print('-' * 93)

X_trainA = X_trainA[selected_features]
X_testA = X_testA[selected_features]

#names = ["Nearest Neighbors", "Decision Tree", "Naive Bayes"]
names = ["Decision Tree"]
classifiers = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(random_state=100)
    #GaussianNB(),
]
classifier2 = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=100)
    #GaussianNB(),
]

clf_bef = list()
clf_aft = list()

for name, clf, dlf in zip(names, classifiers, classifier2):
    # clf is fitted before feature selection, dlf after
    startB = time.time()
    clf.fit(X_trainB, y_trainB)
    endB = time.time()
    clf_bef.append(clf)

    startA = time.time()
    dlf.fit(X_trainA, y_trainA)
    endA = time.time()
    clf_aft.append(dlf)

    print('\t\t\t\tClassifier:', name.upper())
    print('-' * 93)
    print('-' * 93)
    print('\t\tBefore Feature Selection\tAfter Feature Selection')
    print('No. of features:\t', X_trainB.shape[1], '\t\t\t', X_trainA.shape[1])
    #print("Dataset Size(in MB):\t", (X_trainB.values.nbytes/1e6), '\t\t', (X_trainA.values.nbytes/1e6))
    # training accuracy
    train_predB = clf.predict(X_trainB)
    train_predA = dlf.predict(X_trainA)
    train_accB = round(accuracy_score(y_trainB, train_predB) * 100, 2)
    train_accA = round(accuracy_score(y_trainA, train_predA) * 100, 2)
    print('Train Accuracy:\t\t', train_accB, '\t\t\t', train_accA)
    # test accuracy
    test_predB = clf.predict(X_testB)
    test_predA = dlf.predict(X_testA)
    test_accB = round(accuracy_score(y_testB, test_predB) * 100, 2)
    test_accA = round(accuracy_score(y_testA, test_predA) * 100, 2)
    print('Test Accuracy:\t\t', test_accB, '\t\t\t', test_accA)
    # ROC AUC score
    test_roc_aucB = round(roc_auc_score(y_testB, test_predB), 2)
    test_roc_aucA = round(roc_auc_score(y_testA, test_predA), 2)
    print('ROC AUC score:\t\t', test_roc_aucB, '\t\t\t', test_roc_aucA)
    # F1 score
    test_f1B = round(f1_score(y_testB, test_predB), 2)
    test_f1A = round(f1_score(y_testA, test_predA), 2)
    print('f1_score:\t\t', test_f1B, '\t\t\t', test_f1A)
    # precision
    test_precB = round(precision_score(y_testB, test_predB), 2)
    test_precA = round(precision_score(y_testA, test_predA), 2)
    print('Precision:\t\t', test_precB, '\t\t\t', test_precA)
    # recall
    test_recallB = round(recall_score(y_testB, test_predB), 2)
    test_recallA = round(recall_score(y_testA, test_predA), 2)
    print('Recall:\t\t\t', test_recallB, '\t\t\t', test_recallA)
    # Matthews correlation coefficient
    test_MCCB = round(matthews_corrcoef(y_testB, test_predB), 2)
    test_MCCA = round(matthews_corrcoef(y_testA, test_predA), 2)
    print('MCC:\t\t\t', test_MCCB, '\t\t\t', test_MCCA)
    # training time
    timeB = round(float(endB) - float(startB), 2)
    timeA = round(float(endA) - float(startA), 2)
    print('Train Time (in seconds):', timeB, '\t\t\t', timeA)
    # confusion matrices
    cm_resultB = confusion_matrix(y_testB, test_predB)
    cm_resultA = confusion_matrix(y_testA, test_predA)
    print('Confusion Matrix (before):\n', cm_resultB)
    print('Confusion Matrix (after):\n', cm_resultA)
    print('-' * 93)
    print('-' * 93)
    print('\n')
```
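For reference, `VarianceThreshold` removes any feature whose training-set variance is at or below the threshold, so the 0.01 filter above also catches fully constant columns. A tiny illustration with made-up column names:

```
import pandas as pd
from sklearn.feature_selection import VarianceThreshold

toy = pd.DataFrame({'const':  [1.00, 1.00, 1.00, 1.00],   # variance 0       -> dropped
                    'almost': [0.50, 0.50, 0.50, 0.51],   # variance ~2e-5   -> dropped at 0.01
                    'useful': [0.10, 0.90, 0.40, 0.70]})  # variance ~0.09   -> kept

vt = VarianceThreshold(0.01)
vt.fit(toy)
print(list(toy.columns[vt.get_support()]))   # ['useful']
```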
# 9. CL 404

```
#Load dataset as pandas data frame
filename = 'CL_both_404.csv'
dataset = pd.read_csv(filename)

#Split data into input and output variables
X = dataset.iloc[:, 0:dataset.shape[1] - 1]
Y = dataset.iloc[:, -1]

X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y, test_size=0.3, random_state=100)
X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y, test_size=0.3, random_state=100)

# Removing Constant features
constant_filter = VarianceThreshold()
constant_filter.fit(X_trainA)
constant_columns = [col for col in X_trainA.columns
                    if col not in X_trainA.columns[constant_filter.get_support()]]
X_trainA.drop(labels=constant_columns, axis=1, inplace=True)
X_testA.drop(labels=constant_columns, axis=1, inplace=True)

# Removing Quasi-Constant features
qconstant_filter = VarianceThreshold(0.01)
qconstant_filter.fit(X_trainA)
qconstant_columns = [col for col in X_trainA.columns
                     if col not in X_trainA.columns[qconstant_filter.get_support()]]
X_trainA.drop(labels=qconstant_columns, axis=1, inplace=True)
X_testA.drop(labels=qconstant_columns, axis=1, inplace=True)

# Removing Correlated Features
correlated_features = set()
correlation_matrix = X_trainA.corr(method='pearson')
for i in range(len(correlation_matrix.columns)):
    for j in range(i):
        if abs(correlation_matrix.iloc[i, j]) > 0.4:
            colname = correlation_matrix.columns[i]
            correlated_features.add(colname)
X_trainA.drop(labels=correlated_features, axis=1, inplace=True)
X_testA.drop(labels=correlated_features, axis=1, inplace=True)

# Feature ranking and selection
ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15)
print('Feature Ranking information')
print('-' * 93)
print(ranking_info)
print(list(ranking_info['feature']))
print('-' * 93)

X_trainA = X_trainA[selected_features]
X_testA = X_testA[selected_features]

#names = ["Nearest Neighbors", "Decision Tree", "Naive Bayes"]
names = ["Decision Tree"]
classifiers = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(random_state=100)
    #GaussianNB(),
]
classifier2 = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=100)
    #GaussianNB(),
]

clf_bef = list()
clf_aft = list()

for name, clf, dlf in zip(names, classifiers, classifier2):
    # clf is fitted before feature selection, dlf after
    startB = time.time()
    clf.fit(X_trainB, y_trainB)
    endB = time.time()
    clf_bef.append(clf)

    startA = time.time()
    dlf.fit(X_trainA, y_trainA)
    endA = time.time()
    clf_aft.append(dlf)

    print('\t\t\t\tClassifier:', name.upper())
    print('-' * 93)
    print('-' * 93)
    print('\t\tBefore Feature Selection\tAfter Feature Selection')
    print('No. of features:\t', X_trainB.shape[1], '\t\t\t', X_trainA.shape[1])
    #print("Dataset Size(in MB):\t", (X_trainB.values.nbytes/1e6), '\t\t', (X_trainA.values.nbytes/1e6))
    # training accuracy
    train_predB = clf.predict(X_trainB)
    train_predA = dlf.predict(X_trainA)
    train_accB = round(accuracy_score(y_trainB, train_predB) * 100, 2)
    train_accA = round(accuracy_score(y_trainA, train_predA) * 100, 2)
    print('Train Accuracy:\t\t', train_accB, '\t\t\t', train_accA)
    # test accuracy
    test_predB = clf.predict(X_testB)
    test_predA = dlf.predict(X_testA)
    test_accB = round(accuracy_score(y_testB, test_predB) * 100, 2)
    test_accA = round(accuracy_score(y_testA, test_predA) * 100, 2)
    print('Test Accuracy:\t\t', test_accB, '\t\t\t', test_accA)
    # ROC AUC score
    test_roc_aucB = round(roc_auc_score(y_testB, test_predB), 2)
    test_roc_aucA = round(roc_auc_score(y_testA, test_predA), 2)
    print('ROC AUC score:\t\t', test_roc_aucB, '\t\t\t', test_roc_aucA)
    # F1 score
    test_f1B = round(f1_score(y_testB, test_predB), 2)
    test_f1A = round(f1_score(y_testA, test_predA), 2)
    print('f1_score:\t\t', test_f1B, '\t\t\t', test_f1A)
    # precision
    test_precB = round(precision_score(y_testB, test_predB), 2)
    test_precA = round(precision_score(y_testA, test_predA), 2)
    print('Precision:\t\t', test_precB, '\t\t\t', test_precA)
    # recall
    test_recallB = round(recall_score(y_testB, test_predB), 2)
    test_recallA = round(recall_score(y_testA, test_predA), 2)
    print('Recall:\t\t\t', test_recallB, '\t\t\t', test_recallA)
    # Matthews correlation coefficient
    test_MCCB = round(matthews_corrcoef(y_testB, test_predB), 2)
    test_MCCA = round(matthews_corrcoef(y_testA, test_predA), 2)
    print('MCC:\t\t\t', test_MCCB, '\t\t\t', test_MCCA)
    # training time
    timeB = round(float(endB) - float(startB), 2)
    timeA = round(float(endA) - float(startA), 2)
    print('Train Time (in seconds):', timeB, '\t\t\t', timeA)
    # confusion matrices
    cm_resultB = confusion_matrix(y_testB, test_predB)
    cm_resultA = confusion_matrix(y_testA, test_predA)
    print('Confusion Matrix (before):\n', cm_resultB)
    print('Confusion Matrix (after):\n', cm_resultA)
    print('-' * 93)
    print('-' * 93)
    print('\n')
```
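When reading the confusion matrices printed above, recall scikit-learn's convention: rows are the true classes and columns are the predicted classes, sorted by label, so for a 0/1 problem the layout is [[TN, FP], [FN, TP]]. A quick check:

```
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))
# [[1 1]
#  [1 2]]  -> 1 TN, 1 FP, 1 FN, 2 TP
```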
# 10. CR 404

```
#Load dataset as pandas data frame
filename = 'CR_both_404.csv'
dataset = pd.read_csv(filename)

#Split data into input and output variables
X = dataset.iloc[:, 0:dataset.shape[1] - 1]
Y = dataset.iloc[:, -1]

X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y, test_size=0.3, random_state=100)
X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y, test_size=0.3, random_state=100)

# Removing Constant features
constant_filter = VarianceThreshold()
constant_filter.fit(X_trainA)
constant_columns = [col for col in X_trainA.columns
                    if col not in X_trainA.columns[constant_filter.get_support()]]
X_trainA.drop(labels=constant_columns, axis=1, inplace=True)
X_testA.drop(labels=constant_columns, axis=1, inplace=True)

# Removing Quasi-Constant features
qconstant_filter = VarianceThreshold(0.01)
qconstant_filter.fit(X_trainA)
qconstant_columns = [col for col in X_trainA.columns
                     if col not in X_trainA.columns[qconstant_filter.get_support()]]
X_trainA.drop(labels=qconstant_columns, axis=1, inplace=True)
X_testA.drop(labels=qconstant_columns, axis=1, inplace=True)

# Removing Correlated Features
correlated_features = set()
correlation_matrix = X_trainA.corr(method='pearson')
for i in range(len(correlation_matrix.columns)):
    for j in range(i):
        if abs(correlation_matrix.iloc[i, j]) > 0.4:
            colname = correlation_matrix.columns[i]
            correlated_features.add(colname)
X_trainA.drop(labels=correlated_features, axis=1, inplace=True)
X_testA.drop(labels=correlated_features, axis=1, inplace=True)

# Feature ranking and selection
ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15)
print('Feature Ranking information')
print('-' * 93)
print(ranking_info)
print(list(ranking_info['feature']))
print('-' * 93)

X_trainA = X_trainA[selected_features]
X_testA = X_testA[selected_features]

#names = ["Nearest Neighbors", "Decision Tree", "Naive Bayes"]
names = ["Decision Tree"]
classifiers = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(random_state=100)
    #GaussianNB(),
]
classifier2 = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=100)
    #GaussianNB(),
]

clf_bef = list()
clf_aft = list()

for name, clf, dlf in zip(names, classifiers, classifier2):
    # clf is fitted before feature selection, dlf after
    startB = time.time()
    clf.fit(X_trainB, y_trainB)
    endB = time.time()
    clf_bef.append(clf)

    startA = time.time()
    dlf.fit(X_trainA, y_trainA)
    endA = time.time()
    clf_aft.append(dlf)

    print('\t\t\t\tClassifier:', name.upper())
    print('-' * 93)
    print('-' * 93)
    print('\t\tBefore Feature Selection\tAfter Feature Selection')
    print('No. of features:\t', X_trainB.shape[1], '\t\t\t', X_trainA.shape[1])
    #print("Dataset Size(in MB):\t", (X_trainB.values.nbytes/1e6), '\t\t', (X_trainA.values.nbytes/1e6))
    # training accuracy
    train_predB = clf.predict(X_trainB)
    train_predA = dlf.predict(X_trainA)
    train_accB = round(accuracy_score(y_trainB, train_predB) * 100, 2)
    train_accA = round(accuracy_score(y_trainA, train_predA) * 100, 2)
    print('Train Accuracy:\t\t', train_accB, '\t\t\t', train_accA)
    # test accuracy
    test_predB = clf.predict(X_testB)
    test_predA = dlf.predict(X_testA)
    test_accB = round(accuracy_score(y_testB, test_predB) * 100, 2)
    test_accA = round(accuracy_score(y_testA, test_predA) * 100, 2)
    print('Test Accuracy:\t\t', test_accB, '\t\t\t', test_accA)
    # ROC AUC score
    test_roc_aucB = round(roc_auc_score(y_testB, test_predB), 2)
    test_roc_aucA = round(roc_auc_score(y_testA, test_predA), 2)
    print('ROC AUC score:\t\t', test_roc_aucB, '\t\t\t', test_roc_aucA)
    # F1 score
    test_f1B = round(f1_score(y_testB, test_predB), 2)
    test_f1A = round(f1_score(y_testA, test_predA), 2)
    print('f1_score:\t\t', test_f1B, '\t\t\t', test_f1A)
    # precision
    test_precB = round(precision_score(y_testB, test_predB), 2)
    test_precA = round(precision_score(y_testA, test_predA), 2)
    print('Precision:\t\t', test_precB, '\t\t\t', test_precA)
    # recall
    test_recallB = round(recall_score(y_testB, test_predB), 2)
    test_recallA = round(recall_score(y_testA, test_predA), 2)
    print('Recall:\t\t\t', test_recallB, '\t\t\t', test_recallA)
    # Matthews correlation coefficient
    test_MCCB = round(matthews_corrcoef(y_testB, test_predB), 2)
    test_MCCA = round(matthews_corrcoef(y_testA, test_predA), 2)
    print('MCC:\t\t\t', test_MCCB, '\t\t\t', test_MCCA)
    # training time
    timeB = round(float(endB) - float(startB), 2)
    timeA = round(float(endA) - float(startA), 2)
    print('Train Time (in seconds):', timeB, '\t\t\t', timeA)
    # confusion matrices
    cm_resultB = confusion_matrix(y_testB, test_predB)
    cm_resultA = confusion_matrix(y_testA, test_predA)
    print('Confusion Matrix (before):\n', cm_resultB)
    print('Confusion Matrix (after):\n', cm_resultA)
    print('-' * 93)
    print('-' * 93)
    print('\n')
```
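The tab-aligned prints work, but they make it awkward to compare all eleven datasets afterwards. One possible pattern (purely a sketch; `results` and `summarise` are hypothetical names, not part of this notebook) is to append one dict of metrics per run and build a single DataFrame at the end:

```
# Hypothetical sketch: collect one row per (dataset, setting) instead of printing.
results = []

def summarise(tag, setting, y_true, y_pred, n_features, train_seconds):
    results.append({
        'dataset': tag,
        'setting': setting,                 # 'before' or 'after' feature selection
        'n_features': n_features,
        'accuracy': round(accuracy_score(y_true, y_pred) * 100, 2),
        'f1': round(f1_score(y_true, y_pred), 2),
        'roc_auc': round(roc_auc_score(y_true, y_pred), 2),
        'mcc': round(matthews_corrcoef(y_true, y_pred), 2),
        'train_time_s': round(train_seconds, 2),
    })

# e.g. summarise('CR_404', 'before', y_testB, test_predB, X_trainB.shape[1], endB - startB)
#      summarise('CR_404', 'after',  y_testA, test_predA, X_trainA.shape[1], endA - startA)
# pd.DataFrame(results) then gives one comparable table across all datasets.
```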
# 11. LR 404

```
#Load dataset as pandas data frame
filename = 'LR_both_404.csv'
dataset = pd.read_csv(filename)

#Split data into input and output variables
X = dataset.iloc[:, 0:dataset.shape[1] - 1]
Y = dataset.iloc[:, -1]

X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y, test_size=0.3, random_state=100)
X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y, test_size=0.3, random_state=100)

# Removing Constant features
constant_filter = VarianceThreshold()
constant_filter.fit(X_trainA)
constant_columns = [col for col in X_trainA.columns
                    if col not in X_trainA.columns[constant_filter.get_support()]]
X_trainA.drop(labels=constant_columns, axis=1, inplace=True)
X_testA.drop(labels=constant_columns, axis=1, inplace=True)

# Removing Quasi-Constant features
qconstant_filter = VarianceThreshold(0.01)
qconstant_filter.fit(X_trainA)
qconstant_columns = [col for col in X_trainA.columns
                     if col not in X_trainA.columns[qconstant_filter.get_support()]]
X_trainA.drop(labels=qconstant_columns, axis=1, inplace=True)
X_testA.drop(labels=qconstant_columns, axis=1, inplace=True)

# Removing Correlated Features
correlated_features = set()
correlation_matrix = X_trainA.corr(method='pearson')
for i in range(len(correlation_matrix.columns)):
    for j in range(i):
        if abs(correlation_matrix.iloc[i, j]) > 0.4:
            colname = correlation_matrix.columns[i]
            correlated_features.add(colname)
X_trainA.drop(labels=correlated_features, axis=1, inplace=True)
X_testA.drop(labels=correlated_features, axis=1, inplace=True)

# Feature ranking and selection
ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15)
print('Feature Ranking information')
print('-' * 93)
print(ranking_info)
print(list(ranking_info['feature']))
print('-' * 93)

X_trainA = X_trainA[selected_features]
X_testA = X_testA[selected_features]

#names = ["Nearest Neighbors", "Decision Tree", "Naive Bayes"]
names = ["Decision Tree"]
classifiers = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(random_state=100)
    #GaussianNB(),
]
classifier2 = [
    #KNeighborsClassifier(5, n_jobs=-1),
    DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=100)
    #GaussianNB(),
]

clf_bef = list()
clf_aft = list()

for name, clf, dlf in zip(names, classifiers, classifier2):
    # clf is fitted before feature selection, dlf after
    startB = time.time()
    clf.fit(X_trainB, y_trainB)
    endB = time.time()
    clf_bef.append(clf)

    startA = time.time()
    dlf.fit(X_trainA, y_trainA)
    endA = time.time()
    clf_aft.append(dlf)

    print('\t\t\t\tClassifier:', name.upper())
    print('-' * 93)
    print('-' * 93)
    print('\t\tBefore Feature Selection\tAfter Feature Selection')
    print('No. of features:\t', X_trainB.shape[1], '\t\t\t', X_trainA.shape[1])
    #print("Dataset Size(in MB):\t", (X_trainB.values.nbytes/1e6), '\t\t', (X_trainA.values.nbytes/1e6))
    # training accuracy
    train_predB = clf.predict(X_trainB)
    train_predA = dlf.predict(X_trainA)
    train_accB = round(accuracy_score(y_trainB, train_predB) * 100, 2)
    train_accA = round(accuracy_score(y_trainA, train_predA) * 100, 2)
    print('Train Accuracy:\t\t', train_accB, '\t\t\t', train_accA)
    # test accuracy
    test_predB = clf.predict(X_testB)
    test_predA = dlf.predict(X_testA)
    test_accB = round(accuracy_score(y_testB, test_predB) * 100, 2)
    test_accA = round(accuracy_score(y_testA, test_predA) * 100, 2)
    print('Test Accuracy:\t\t', test_accB, '\t\t\t', test_accA)
    # ROC AUC score
    test_roc_aucB = round(roc_auc_score(y_testB, test_predB), 2)
    test_roc_aucA = round(roc_auc_score(y_testA, test_predA), 2)
    print('ROC AUC score:\t\t', test_roc_aucB, '\t\t\t', test_roc_aucA)
    # F1 score
    test_f1B = round(f1_score(y_testB, test_predB), 2)
    test_f1A = round(f1_score(y_testA, test_predA), 2)
    print('f1_score:\t\t', test_f1B, '\t\t\t', test_f1A)
    # precision
    test_precB = round(precision_score(y_testB, test_predB), 2)
    test_precA = round(precision_score(y_testA, test_predA), 2)
    print('Precision:\t\t', test_precB, '\t\t\t', test_precA)
    # recall
    test_recallB = round(recall_score(y_testB, test_predB), 2)
    test_recallA = round(recall_score(y_testA, test_predA), 2)
    print('Recall:\t\t\t', test_recallB, '\t\t\t', test_recallA)
    # Matthews correlation coefficient
    test_MCCB = round(matthews_corrcoef(y_testB, test_predB), 2)
    test_MCCA = round(matthews_corrcoef(y_testA, test_predA), 2)
    print('MCC:\t\t\t', test_MCCB, '\t\t\t', test_MCCA)
    # training time
    timeB = round(float(endB) - float(startB), 2)
    timeA = round(float(endA) - float(startA), 2)
    print('Train Time (in seconds):', timeB, '\t\t\t', timeA)
    # confusion matrices
    cm_resultB = confusion_matrix(y_testB, test_predB)
    cm_resultA = confusion_matrix(y_testA, test_predA)
    print('Confusion Matrix (before):\n', cm_resultB)
    print('Confusion Matrix (after):\n', cm_resultA)
    print('-' * 93)
    print('-' * 93)
    print('\n')
```
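Sections 1 through 11 repeat the same cell with only the CSV name changed. If more datasets are added later, the whole load, filter, rank, fit, and score pipeline could be wrapped once and called per file. The sketch below is only a possible refactoring, not part of the original notebook; `run_pipeline` is a hypothetical name, and the single `VarianceThreshold(0.01)` filter subsumes the separate constant-feature filter used above.

```
# Hypothetical refactoring sketch reusing the same steps as the cells above.
def run_pipeline(csv_name, n_features=15, corr_threshold=0.4):
    data = pd.read_csv(csv_name)
    X, Y = data.iloc[:, :-1], data.iloc[:, -1]
    X_tr, X_te, y_tr, y_te = train_test_split(X, Y, test_size=0.3, random_state=100)

    # Variance filtering (constants and quasi-constants in one pass).
    keep = X_tr.columns[VarianceThreshold(0.01).fit(X_tr).get_support()]
    X_trA, X_teA = X_tr[keep].copy(), X_te[keep].copy()

    # Correlation filtering, as in the cells above.
    corr = X_trA.corr(method='pearson')
    drop = {corr.columns[i] for i in range(len(corr.columns))
            for j in range(i) if abs(corr.iloc[i, j]) > corr_threshold}
    X_trA, X_teA = X_trA.drop(columns=list(drop)), X_teA.drop(columns=list(drop))

    # Rank by correlation with the label and keep the top n_features.
    _, selected = feature_ranking_selection(X_trA, y_tr, n_features)
    X_trA, X_teA = X_trA[selected], X_teA[selected]

    before = DecisionTreeClassifier(random_state=100).fit(X_tr, y_tr)
    after = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50,
                                   random_state=100).fit(X_trA, y_tr)
    return (accuracy_score(y_te, before.predict(X_te)),
            accuracy_score(y_te, after.predict(X_teA)))

# for name in ['CLR_both_202.csv', 'CR_both_202.csv', 'LR_both_404.csv']:
#     print(name, run_pipeline(name))
```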
confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') #Load dataset as pandas data frame filename = 'CLR_both_606.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. 
of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') #Load dataset as pandas data frame filename = 'CL_both_404.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = 
correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = 
confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') #Load dataset as pandas data frame filename = 'CR_both_404.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. 
of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n') filename = 'LR_both_404.csv' dataset = pd.read_csv(filename) #Split data into input and output variable X = dataset.iloc[:,0:dataset.shape[1]-1] Y = dataset.iloc[:,-1] X_trainB, X_testB, y_trainB, y_testB = train_test_split(X, Y,test_size=0.3,random_state = 100) X_trainA, X_testA, y_trainA, y_testA = train_test_split(X, Y,test_size=0.3,random_state = 100) # Removing Constant features constant_filter = VarianceThreshold() constant_filter.fit(X_trainA) constant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[constant_filter.get_support()]] X_trainA.drop(labels=constant_columns,axis=1, inplace=True) X_testA.drop(labels=constant_columns,axis=1, inplace=True) # Removing Quasi-Constant features qconstant_filter = VarianceThreshold(0.01) qconstant_filter.fit(X_trainA) qconstant_columns = [col for col in X_trainA.columns if col not in X_trainA.columns[qconstant_filter.get_support()]] X_trainA.drop(labels=qconstant_columns,axis=1, inplace=True) X_testA.drop(labels=qconstant_columns,axis=1, inplace=True) # Removing Correlated Features correlated_features = set() correlation_matrix = X_trainA.corr(method = 'pearson') for i in range(len(correlation_matrix.columns)): for j in range(i): if abs(correlation_matrix.iloc[i, j]) > 0.4: colname = correlation_matrix.columns[i] 
correlated_features.add(colname) X_trainA.drop(labels=correlated_features,axis=1, inplace=True) X_testA.drop(labels=correlated_features,axis=1, inplace=True) # feature ranking and selection ranking_info, selected_features = feature_ranking_selection(X_trainA, y_trainA, 15) print('Fature Ranking information') print('---------------------------------------------------------------------------------------------') print(ranking_info) print(list(ranking_info['feature'])) print('---------------------------------------------------------------------------------------------') X_trainA = X_trainA[selected_features] X_testA = X_testA[selected_features] #names = ["Nearest Neighbors","Decision Tree","Naive Bayes"] names = ["Decision Tree"] classifiers = [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(random_state = 100) #GaussianNB(), ] classifier2= [ #KNeighborsClassifier(5, n_jobs= -1 ), DecisionTreeClassifier(max_depth = 5, min_samples_leaf = 50,random_state = 100) #GaussianNB(), ] clf_bef = list() clf_aft = list() for name, clf, dlf in zip(names,classifiers,classifier2): # clf for before # dlf for after # Before Feature Selection startB = time.time() clf.fit(X_trainB,y_trainB) endB = time.time() clf_bef.append(clf) # after Feature Selection startA = time.time() dlf.fit(X_trainA,y_trainA) endA = time.time() clf_aft.append(dlf) print('\t\t\t\tClassifier:',name.upper()) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\t\tBefore Feature Selection\tAfter Feature Selection') print('No. of features:\t', X_trainB.shape[1],'\t\t\t',X_trainA.shape[1]) #print("Dataset Size(in MB):\t",(X_trainB.values.nbytes/1e6),'\t\t',(X_trainA.values.nbytes/1e6)) # training accuracy train_predB = clf.predict(X_trainB) train_predA = dlf.predict(X_trainA) train_accB = round(accuracy_score(y_trainB,train_predB)*100, 2) train_accA = round(accuracy_score(y_trainA,train_predA)*100, 2) print('Train Accuracy:\t\t',train_accB,'\t\t\t',train_accA) # test accuracy test_predB = clf.predict(X_testB) test_predA = dlf.predict(X_testA) test_accB = round(accuracy_score(y_testB,test_predB)*100, 2) test_accA = round(accuracy_score(y_testA,test_predA)*100, 2) print('Test Accuracy:\t\t',test_accB,'\t\t\t',test_accA) # roc_auc_score test_roc_aucB = round(roc_auc_score(y_testB,test_predB), 2) test_roc_aucA = round(roc_auc_score(y_testA,test_predA), 2) print('ROC AUC score:\t\t',test_roc_aucB,'\t\t\t',test_roc_aucA) # f1 score test_f1B = round(f1_score(y_testB,test_predB),2) test_f1A = round(f1_score(y_testA,test_predA),2) print('f1_score:\t\t',test_f1B,'\t\t\t',test_f1A) # precision test_precB = round(precision_score(y_testB,test_predB),2) test_precA = round(precision_score(y_testA,test_predA),2) print('Precision:\t\t',test_precB,'\t\t\t',test_precA) # recall test_recallB = round(recall_score(y_testB,test_predB),2) test_recallA = round(recall_score(y_testA,test_predA),2) print('Recall:\t\t\t',test_recallB,'\t\t\t',test_recallA) # Matthews correlation coefficient test_MCCB = round(matthews_corrcoef(y_testB,test_predB),2) test_MCCA = round(matthews_corrcoef(y_testA,test_predA),2) print('MCC:\t\t\t',test_MCCB,'\t\t\t',test_MCCA) # training time timeB = round((float(endB)- float(startB)),2) timeA = round((float(endA)- float(startA)),2) print('Train Time (in seconds):',timeB,'\t\t\t',timeA) # confusion matrix cm_resultB = confusion_matrix(y_testB,test_predB) cm_resultA = 
confusion_matrix(y_testA,test_predA) print('Confusion Matrix(Before):\n',cm_resultB) print('Confusion Matrix(after):\n',cm_resultA) print('---------------------------------------------------------------------------------------------') print('---------------------------------------------------------------------------------------------') print('\n')
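If the side-by-side printouts become hard to compare across datasets, the same numbers can also be collected into a small summary table. The `summarize` helper below is not part of the original code; it is only a sketch that reuses the metric functions already imported and the `clf_bef`/`clf_aft` lists populated above.

```
# Hypothetical helper: return the test metrics of one fitted model as a dict,
# so that results from several datasets/models fit into a single DataFrame.
def summarize(label, model, X_test, y_test):
    pred = model.predict(X_test)
    return {
        'setup': label,
        'test_accuracy': round(accuracy_score(y_test, pred) * 100, 2),
        'roc_auc': round(roc_auc_score(y_test, pred), 2),
        'f1': round(f1_score(y_test, pred), 2),
        'precision': round(precision_score(y_test, pred), 2),
        'recall': round(recall_score(y_test, pred), 2),
        'mcc': round(matthews_corrcoef(y_test, pred), 2),
    }

# Example usage with the models fitted on the last dataset processed above
summary = pd.DataFrame([
    summarize('last dataset, before selection', clf_bef[-1], X_testB, y_testB),
    summarize('last dataset, after selection', clf_aft[-1], X_testA, y_testA),
])
print(summary)
```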
<img src="https://raw.githubusercontent.com/ml-unison/regresion-logistica/master/imagenes/ml-unison.png" width="250"> # Regularización en regresión logística [**Julio Waissman Vilanova**](http://mat.uson.mx/~juliowaissman/), 1 de octubre de 2020. ## Curso Reconocimiento de Patrones ### Licenciatura en Ciencias de la Computación ### Universidad de Sonora ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = (10,5) plt.style.use('ggplot') ``` ## 1. La regresión logística ya programada Esto, antes de agregarle la regularización La función logística está dada por $$ \sigma(z) = \frac{1}{1 + e^{-z}}, $$ la cual es importante que podamos calcular en forma vectorial. Si bien el calculo es de una sola linea, el uso de estas funciones auxiliares facilitan la legibilidad del código. #### Desarrolla la función logística, la cual se calcule para todos los elementos de un ndarray. ``` def logistica(z): """ Calcula la función logística para cada elemento de z @param z: un ndarray @return: un ndarray de las mismas dimensiones que z """ return 1 / (1 + np.exp(-z)) # Y ahora vamos a ver como se ve la función logística z = np.linspace(-5, 5, 100) plt.plot( z, logistica(z)) plt.title(u'Función logística', fontsize=20) plt.xlabel(r'$z$', fontsize=20) plt.ylabel(r'$\frac{1}{1 + \exp(-z)}$', fontsize=26) plt.show() ``` Una vez establecida la función logística, vamos a implementar la función de error *sin regularizar* (error *en muestra*), la cual está dada por $$ E_{in}(w, b) = -\frac{1}{M} \sum_{i=1}^M \left[ y^{(i)}\log(a^{(i)}) + (1 - y^{(i)})\log(1 - a^{(i)})\right], $$ donde $$ a^{(i)} = \sigma(z^{(i)}), \quad\quad z^{(i)} = w^T x^{(i)} + b $$ ``` def error_in(x, y, w, b): """ Calcula el error en muestra para la regresión logística @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param w: un ndarray de dimensión (n, ) con los pesos @param b: un flotante con el sesgo @return: un flotante con el valor de pérdida """ y_est = logistica(x @ w + b) return -np.nansum([ np.log(y_est[y > 0.5]).sum(), np.log(1 - y_est[y < 0.5]).sum() ]) / y_est.shape[0] # El testunit del pobre (ya lo calcule yo, pero puedes hacerlo a mano para estar seguro) w = np.array([1]) b = 1.0 x = np.array([[10], [-5]]) y1 = np.array([1, 0]) y2 = np.array([0, 1]) y3 = np.array([0, 0]) y4 = np.array([1, 1]) y_est = logistica(x @ w + b) assert abs(error_in(x, y1, w, b) - (-np.log(y_est[0]) - np.log(1 - y_est[1])) / 2) < 1e-2 assert abs(error_in(x, y2, w, b) - (-np.log(1 - y_est[0]) - np.log(y_est[1])) / 2) < 1e-2 assert abs(error_in(x, y3, w, b) - (-np.log(1 - y_est[0]) - np.log(1 - y_est[1])) / 2) < 1e-2 assert abs(error_in(x, y4, w, b) - (-np.log(y_est[0]) - np.log(y_est[1])) / 2) < 1e-2 ``` De la misma manera, para poder implementar las funciones de aprendizaje, vamos a implementar el gradiente de la función de pérdida. El gradiente de la función de pérdida respecto a $\omega$ $\nabla E_in(w)$, y la derividada parciel respecto al sesgo se obtienen con las siguientes ecuaciones: $$ \frac{\partial E_{in}(w, b)}{\partial w_j} = -\frac{1}{M} \sum_{i=1}^M \left(y^{(i)} - a^{(i)}\right)x_j^{(i)} $$ $$ \frac{\partial E_{in}(w, b)}{\partial b} = -\frac{1}{M} \sum_{i=1}^M \left(y^{(i)} - a^{(i)}\right) $$ Todo esto **para el caso sin regularizar**. ``` def gradiente_error(x, y, w, b): """ Calcula el gradiente de la función de error en muestra. 
@param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param w: un ndarray de dimensión (n, ) con los pesos @param b: un flotante con el sesgo @return: dw, db, un ndarray de mismas dimensiones que w y un flotnte con el cálculo de la dervada evluada en el punto w y b """ M = x.shape[0] error = y - logistica(x @ w + b) dw = -x.T @ error / M db = - error.mean() return dw, db # Otra vez el testunit del pobre (ya lo calcule yo, pero puedes hacerlo a mano para estar seguro) w = np.array([1]) b = 1.0 x = np.array([[10], [-5]]) y1 = np.array([1, 0]) y2 = np.array([0, 1]) y3 = np.array([0, 0]) y4 = np.array([1, 1]) assert abs(0.00898475 - gradiente_error(x, y1, w, b)[1]) < 1e-4 assert abs(7.45495097 - gradiente_error(x, y2, w, b)[0]) < 1e-4 assert abs(4.95495097 - gradiente_error(x, y3, w, b)[0]) < 1e-4 assert abs(-0.49101525 - gradiente_error(x, y4, w, b)[1]) < 1e-4 ``` ## 2. Descenso de gradiente Ahora vamos a desarrollar las funciones necesarias para realizar el entrenamiento y encontrar la mejor $\omega$ de acuero a la función de costos y un conjunto de datos de aprendizaje. Para ilustrar el problema, vamos a utilizar una base de datos sintética proveniente del curso de [Andrew Ng](www.andrewng.org/) que se encuentra en [Coursera](https://www.coursera.org). Supongamos que pertenecemos al departamente de servicios escolares de la UNISON y vamos a modificar el procedimiento de admisión. En lugar de utilizar un solo exámen (EXCOBA) y la información del cardex de la preparatoria, hemos decidido aplicar dos exámenes, uno sicométrico y otro de habilidades estudiantiles. Dichos exámenes se han aplicado el último año aunque no fueron utilizados como criterio. Así, tenemos un historial entre estudiantes aceptados y resultados de los dos exámenes. El objetivo es hacer un método de regresión que nos permita hacer la admisión a la UNISON tomando en cuenta únicamente los dos exámenes y simplificar el proceso. *Recuerda que esto no es verdad, es solo un ejercicio*. Bien, los datos se encuentran en el archivo `admision.txt` el cual se encuentra en formato `csv` (osea los valores de las columnas separados por comas. Vamos a leer los datos y graficar la información para entender un poco los datos. ``` datos = np.loadtxt('admision.csv', comments='%', delimiter=',') x, y = datos[:,0:-1], datos[:,-1] plt.plot(x[y == 1, 0], x[y == 1, 1], 'sr', label='aceptados') plt.plot(x[y == 0, 0], x[y == 0, 1], 'ob', label='rechazados') plt.title(u'Ejemplo sintético para regresión logística') plt.xlabel(u'Calificación del primer examen') plt.ylabel(u'Calificación del segundo examen') plt.axis([20, 100, 20, 100]) plt.legend(loc=0) ``` Vistos los datos un clasificador lineal podría ser una buena solución. Ahora vamos a implementar el método de descenso de gradiente. ``` def dg(x, y, w, b, alpha, max_iter=10_000, tol=1e-6, historial=False): """ Descenso de gradiente por lotes @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param alpha: Un flotante (típicamente pequeño) con la tasa de aprendizaje @param tol: Un flotante pequeño como criterio de paro. Por default 1e-6 @param max_iter: Máximo numero de iteraciones. 
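Beyond these spot checks, a quick way to gain confidence in `gradiente_error` is to compare it against a numerical gradient. The following is only a sketch and is not part of the original notebook; the helper name `check_gradiente` is illustrative, and it assumes `error_in` and `gradiente_error` defined above are in scope.

```
def check_gradiente(x, y, w, b, eps=1e-6):
    """Compare the analytic gradient with a central finite-difference estimate."""
    dw, db = gradiente_error(x, y, w, b)

    # Numerical gradient with respect to each weight
    dw_num = np.zeros_like(w, dtype=float)
    for j in range(w.shape[0]):
        w_plus, w_minus = w.astype(float), w.astype(float)
        w_plus = w_plus.copy(); w_plus[j] += eps
        w_minus = w_minus.copy(); w_minus[j] -= eps
        dw_num[j] = (error_in(x, y, w_plus, b) - error_in(x, y, w_minus, b)) / (2 * eps)

    # Numerical derivative with respect to the bias
    db_num = (error_in(x, y, w, b + eps) - error_in(x, y, w, b - eps)) / (2 * eps)

    return np.max(np.abs(dw - dw_num)), abs(db - db_num)

# Both differences should be tiny on the toy example used above
print(check_gradiente(np.array([[10.0], [-5.0]]), np.array([1, 0]), np.array([1.0]), 1.0))
```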
## 2. Gradient descent

Now we develop the functions needed to carry out the training and find the best $w$ and $b$ according to the cost function and a training data set.

To illustrate the problem we will use a synthetic data set from [Andrew Ng](www.andrewng.org/)'s course on [Coursera](https://www.coursera.org). Suppose we belong to the admissions office of the UNISON and we are going to change the admission procedure. Instead of using a single exam (EXCOBA) plus the high-school transcript, we have decided to apply two exams, one psychometric and one on study skills. Both exams were applied during the last year, although they were not used as an admission criterion, so we have a history of admitted and rejected students together with their scores on the two exams. The goal is to build a regression model that decides admission to the UNISON using only these two exams, simplifying the process. *Remember this is not real; it is only an exercise.*

The data are in the file `admision.csv`, in `csv` format (that is, column values separated by commas). Let us read the data and plot them to get a feel for the problem.

```
datos = np.loadtxt('admision.csv', comments='%', delimiter=',')
x, y = datos[:,0:-1], datos[:,-1]

plt.plot(x[y == 1, 0], x[y == 1, 1], 'sr', label='aceptados')
plt.plot(x[y == 0, 0], x[y == 0, 1], 'ob', label='rechazados')
plt.title(u'Ejemplo sintético para regresión logística')
plt.xlabel(u'Calificación del primer examen')
plt.ylabel(u'Calificación del segundo examen')
plt.axis([20, 100, 20, 100])
plt.legend(loc=0)
```

Looking at the data, a linear classifier could be a reasonable solution. Let us now implement batch gradient descent.

```
def dg(x, y, w, b, alpha, max_iter=10_000, tol=1e-6, historial=False):
    """
    Batch gradient descent

    @param x: an ndarray of shape (M, n) with the design matrix
    @param y: an ndarray of shape (M, ) where every entry is 1.0 or 0.0
    @param alpha: a (typically small) float with the learning rate
    @param max_iter: maximum number of iterations. Default 10_000
    @param tol: a small float used as stopping criterion. Default 1e-6
    @param historial: a boolean indicating whether to keep the error history

    @return: w, b, hist where
             - w is an ndarray of shape (n, ) with the weights;
             - b is a float with the bias;
             - hist is a list with the in-sample error at each iteration.
               If historial == False, then hist = None.
    """
    M, n = x.shape
    hist = [error_in(x, y, w, b)] if historial else None
    for epoch in range(1, max_iter):
        dw, db = gradiente_error(x, y, w, b)
        w -= alpha * dw
        b -= alpha * db
        error = error_in(x, y, w, b)
        if historial:
            hist.append(error)
        if np.max(np.abs(dw)) < tol:
            break
    return w, b, hist
```

To test the learning function, let us apply it to the admission problem. Remember that you first have to explore to find a good value of the learning rate, so use the code below to tune $\alpha$.

```
alpha = 1e-4
mi = 50
w = np.zeros(x.shape[1])
b = 0.0

_, _, hist = dg(x, y, w, b, alpha, max_iter=mi, tol=1e-4, historial=True)

plt.plot(np.arange(mi), hist)
plt.title(r'Evolucion del valor de la función de error en las primeras iteraciones con $\alpha$ = ' + str(alpha))
plt.xlabel('iteraciones')
plt.ylabel('perdida')
```

Once a good $\alpha$ has been found we can compute $w$ and $b$ (this takes quite a while). Remember that the final cost should be as close to 0 as possible, so add as many iterations as needed (a loss of around 0.22 is already fine). You can run the cell as many times as necessary with a limited number of iterations (say 10,000) to watch it evolve. The result could improve noticeably if the input data were normalized; you can also vary `alpha`.

```
w, b, _ = dg(x, y, w, b, 20*alpha, max_iter=10_000)

print("Los pesos obtenidos son: \n{}".format(w))
print("El sesgo obtenidos es: \n{}".format(b))
print("El valor final de la función de pérdida es: {}".format(error_in(x, y, w, b)))
```

It is interesting to see that gradient descent is not very efficient on this kind of problem, even though it is a convex optimization problem.

This method returns $w$ and $b$, but that is not yet a classifier: a classification method consists of two parts, one for **learning** and one for **predicting**. Remember that $a^{(i)} = \Pr[y^{(i)} = 1 \mid x^{(i)}; w, b]$, and from this probability we must make a decision. Also remember that to make the decision we do not need to evaluate the logistic function if we know the threshold.

#### Implement a prediction function.

```
def predictor(x, w, b):
    """
    Predicts the values of y_hat (which can only be 0 or 1) using the MAP criterion.

    @param x: an ndarray of shape (M, n) with the design matrix
    @param w: an ndarray of shape (n, ) with the weights
    @param b: a float with the bias

    @return: an ndarray of shape (M, ) where every entry is 1.0 or 0.0 with the estimated output
    """
    # sigma(z) >= 0.5 exactly when z = x @ w + b >= 0, so the logistic itself is not needed
    return np.where(x @ w + b >= 0, 1.0, 0.0)
```

How good is this classifier? Did we implement the method correctly? Let us answer this in parts. First we will plot the same data and add the decision surface, which in this case we know is a straight line.
Since the criterion for assigning a point to the positive class is $w^T x^{(i)} + b \ge 0$, the boundary between the two decision regions lies where

$$
0 = b + w_1 x_1 + w_2 x_2,
$$

and solving for $x_2$:

$$
x_2 = -\frac{b}{w_2} - \frac{w_1}{w_2} x_1.
$$

The pairs $(x_1, x_2)$ satisfying this equation are the points on the boundary. Since the boundary is (in this case) a straight line, we only need two of them to plot the decision surface.

```
x1_frontera = np.array([20, 100])  # the minimum and maximum values shown in the scatter plot
x2_frontera = -(b / w[1]) - (w[0] / w[1]) * x1_frontera

print(x1_frontera)
print(x2_frontera)

plt.plot(x[y == 1, 0], x[y == 1, 1], 'sr', label='aceptados')
plt.plot(x[y == 0, 0], x[y == 0, 1], 'ob', label='rechazados')
plt.plot(x1_frontera, x2_frontera, 'm')
plt.title(u'Ejemplo sintético para regresión logística')
plt.xlabel(u'Calificación del primer examen')
plt.ylabel(u'Calificación del segundo examen')
plt.axis([20, 100, 20, 100])
plt.legend(loc=0)
```

## 3. Polynomial classification

As we can see in the plot above, this logistic regression would admit some students who were actually rejected and reject some who were actually admitted. Every classification method has some error, and that is part of the generalization power of these methods. However, a simple visual inspection suggests that a linear classifier is probably not the best solution here, since the boundary between the two classes looks more like a curve.

What if we try a quadratic classifier? A quadratic classifier is nothing more than the linear model to which we add every attribute formed by a product of two of the original attributes. For example, if an instance $x = (x_1, x_2, x_3)^T$ is augmented with all its quadratic components, we obtain the attributes

$$
\phi_2(x) = (x_1, x_2, x_3, x_1 x_2, x_1 x_3, x_2 x_3, x_1^2, x_2^2, x_3^2)^T.
$$

In the same way we can build classifiers of order three, four, five, and so on. In general these are known as **polynomial classifiers**.

To understand the idea, we will first solve the previous problem with an order-2 classifier. However, if we later want to recognize other objects, or change the order of the polynomial, we would have to recompute the polynomial expansion each time, so we generalize the construction of polynomial attributes with the function `map_poly`, developed below. In this case normalizing the data is very important, so the corresponding functions are added as well.
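Before looking at `map_poly` itself, a quick way to see why normalization and (later) regularization matter is to count how many columns the expansion produces. This is only an illustrative sketch, not part of the original notebook; `n_poly_features` is our own helper name, and it uses the same count formula stated in `map_poly`'s docstring below.

```
from math import comb

def n_poly_features(n, grad):
    """Number of columns the expansion produces: sum_{i=1}^{grad} C(i + n - 1, n - 1)."""
    return sum(comb(i + n - 1, n - 1) for i in range(1, grad + 1))

# For the two exam scores (n = 2) the expansion grows quickly with the degree
for grad in [2, 6, 10, 14]:
    print(grad, n_poly_features(2, grad))
```

With only two original attributes, a degree-14 expansion already has over a hundred features, which is why the higher-degree fits later in the notebook overfit so easily without regularization.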
```
from itertools import combinations_with_replacement

def map_poly(grad, x):
    """
    Computes the polynomial features up to degree grad of the data matrix x,
    assuming that x already contains the order-1 expansion (the value of each attribute)

    @param grad: a positive integer with the expansion degree
    @param x: an ndarray of shape (M, n) where n is the number of attributes
    @return: an ndarray of shape (M, n_phi) where
             n_phi = \sum_{i = 1}^{grad} \binom{i + n - 1}{n - 1}
    """
    if int(grad) < 2:
        raise ValueError('grad debe de ser mayor a 1')

    M, n = x.shape
    atrib = x.copy()
    x_phi = x.copy()
    for i in range(2, int(grad) + 1):
        for comb in combinations_with_replacement(range(n), i):
            x_phi = np.c_[x_phi, np.prod(atrib[:, comb], axis=1)]
    return x_phi

def medias_std(x):
    """
    Returns the vectors of means and standard deviations used for normalization

    @param x: an ndarray of shape (M, n) with a design matrix
    @return: mu, des_std, two ndarrays of shape (n, ) with the means and standard deviations
    """
    return np.mean(x, axis=0), np.std(x, axis=0)

def normaliza(x, mu, des_std):
    """
    Normalizes the data x

    @param x: an ndarray of shape (M, n) with the design matrix
    @param mu: an ndarray of shape (n, ) with the means
    @param des_std: an ndarray of shape (n, ) with the standard deviations
    @return: an ndarray of shape (M, n) with x normalized
    """
    return (x - mu) / des_std
```

**Classify the data using a quadratic classifier (remember to tune the value of $\alpha$ first).**

```
# Find phi_x, the second-order polynomial expansion of x, using map_poly
phi_x = map_poly(2, x)
mu, de = medias_std(phi_x)
phi_x_norm = normaliza(phi_x, mu, de)

# Use logistic regression
alpha = 1
w = np.zeros(phi_x.shape[1])
b = 0

_, _, hist = dg(phi_x_norm, y, w, b, alpha, max_iter=50, historial=True)

plt.plot(range(len(hist)), hist)
plt.xlabel('epochs')
plt.ylabel(r'$E_{in}$')
plt.title('Evaluación del parámetro alpha')

w_norm, b_norm, _ = dg(phi_x_norm, y, w, b, alpha, 1000)
print("Los pesos obtenidos son: \n{}".format(w_norm))
print("El sesgo obtenidos es: \n{}".format(b_norm))
print("El error en muestra es: {}".format(error_in(phi_x_norm, y, w_norm, b_norm)))
```

This yields an in-sample error of roughly 0.03, which we now need to visualize. Plotting the class separation projected onto the first two dimensions is not as simple as it was for a linear boundary, so we will generate a `contour` and plot the data on top of it. For that we develop the following function.
```
def plot_separacion2D(x, y, grado, mu, de, w, b):
    """
    Plots the first two dimensions (positions 1 and 2) of the data together with
    the separation surface given by a polynomial classifier with weights w and bias b.
    """
    if grado < 2:
        raise ValueError('Esta funcion es para graficar separaciones con polinomios mayores a 1')

    x1_min, x1_max = np.min(x[:, 0]), np.max(x[:, 0])
    x2_min, x2_max = np.min(x[:, 1]), np.max(x[:, 1])
    delta1, delta2 = (x1_max - x1_min) * 0.1, (x2_max - x2_min) * 0.1

    spanX1 = np.linspace(x1_min - delta1, x1_max + delta1, 600)
    spanX2 = np.linspace(x2_min - delta2, x2_max + delta2, 600)
    X1, X2 = np.meshgrid(spanX1, spanX2)

    X = normaliza(map_poly(grado, np.c_[X1.ravel(), X2.ravel()]), mu, de)
    Z = predictor(X, w, b)
    Z = Z.reshape(X1.shape[0], X1.shape[1])

    # plt.contour(X1, X2, Z, linewidths=0.2, colors='k')
    plt.contourf(X1, X2, Z, 1, cmap=plt.cm.binary_r)
    plt.plot(x[y > 0.5, 0], x[y > 0.5, 1], 'sr', label='clase positiva')
    plt.plot(x[y < 0.5, 0], x[y < 0.5, 1], 'oy', label='clase negativa')
    plt.axis([spanX1[0], spanX1[-1], spanX2[0], spanX2[-1]])
```

Now let us try `plot_separacion2D` on the training data. The command takes a while, since we are evaluating the classifier point by point on a 600 $\times$ 600 grid.

```
plot_separacion2D(x, y, 2, mu, de, w_norm, b_norm)
plt.title(u"Separación con un clasificador cuadrático")
plt.xlabel(u"Calificación del primer examen")
plt.ylabel(u"Calificación del segundo examen")
```

As we can see, a polynomial classifier of order 2 classifies the training data better, and it also seems simple enough to be the best option for prediction. Of course, we only know this because we were able to visualize the data; in a sense we are cheating by choosing the polynomial expansion from a visual inspection of the data.

Let us now take a data set that, although synthetic, is representative of a whole family of problems. Suppose we are optimizing the testing stage of the production line of the company Microprocesadores del Noroeste S.A. de C.V. The idea is to shrink the test bench for every newly manufactured microprocessor and, instead of running 50 tests, run only 2. The data set contains the scores each component obtained on the two selected tests, together with the decision made for each device (a decision that was originally made with the full 50-test bench). The data can be visualized below.

```
datos = np.loadtxt('prod_test.csv', comments='%', delimiter=',')
x, y = datos[:,0:-1], datos[:,-1]

plt.plot(x[y == 1, 0], x[y == 1, 1], 'or', label='cumple calidad')
plt.plot(x[y == 0, 0], x[y == 0, 1], 'ob', label='rechazado')
plt.title(u'Ejemplo de pruebas de un producto')
plt.xlabel(u'Valor obtenido en prueba 1')
plt.ylabel(u'Valor obtenido en prueba 2')
plt.legend(loc=0)
```

Clearly this problem cannot be solved with a linear (order-1) classifier, so other kinds of classifiers have to be tried.
**Complete the code below to fit polynomial classifiers of degrees 2, 6, 10 and 14 and show the results in a single figure. Remember that this exercise can take quite a bit of compute time; you may need to adjust `alpha` and `max_iter` for each case.**

```
for (i, grado) in enumerate([2, 6, 10, 14]):

    # Generate the polynomial expansion and the normalization constants
    phi_x = map_poly(grado, x)
    mu, de = medias_std(phi_x)

    # Normalize
    phi_x_norm = normaliza(phi_x, mu, de)

    # Train
    alpha = 0.1
    w = np.zeros(phi_x.shape[1])
    b = 0
    w_norm, b_norm, _ = dg(phi_x_norm, y, w, b, alpha, max_iter=10000, historial=True)

    # Show the results with plot_separacion2D
    plt.subplot(2, 2, i + 1)
    plt.title(f"Polinomio de grado {grado}")
    plot_separacion2D(x, y, grado, mu, de, w_norm, b_norm)
```

## 4. Regularization

As we can see from the previous exercise, it is hard to choose the degree of the polynomial: in some cases the model is too general (underfitting) and in others too specific (overfitting). What could be the solution? One possibility is to use a polynomial of (relatively) high degree and control the generalization of the algorithm through **regularization**, governed by a parameter $\lambda$.

Recall that the cost function of regularized logistic regression is

$$
costo(w, b) = E_{in}(w, b) + \frac{\lambda}{M}\, regu(w),
$$

where $regu(w)$ is a regularizer, which can be $l_1$, $l_2$ or another one, as seen in class. In particular, for the $l_2$ regularizer $regu(w) = w^T w$ the gradient of the cost simply gains an extra term,

$$
\frac{\partial\, costo(w, b)}{\partial w_j} = \frac{\partial E_{in}(w, b)}{\partial w_j} + \frac{2\lambda}{M} w_j,
$$

while the derivative with respect to $b$ is unchanged.

**Complete the following code using $L_2$ regularization.**

```
def costo(x, y, w, b, lambd):
    """
    Computes the regularized cost of a given w and b for the training set given by x and y

    @param x: an ndarray of shape (M, n) with the design matrix
    @param y: an ndarray of shape (M, ) where every entry is 1.0 or 0.0
    @param w: an ndarray of shape (n, ) with the weights
    @param b: a float with the bias
    @param lambd: a float with the value of lambda in the regularization
    @return: a float with the loss value
    """
    M = x.shape[0]
    return error_in(x, y, w, b) + (lambd / M) * (w.T @ w)

def grad_regu(x, y, w, b, lambd):
    """
    Computes the gradient of the regularized cost for binary classification with a
    logistic neuron, at w and b, given a training set.

    @param x: an ndarray of shape (M, n) with the design matrix
    @param y: an ndarray of shape (M, ) where every entry is 1.0 or 0.0
    @param w: an ndarray of shape (n, ) with the weights
    @param b: a float with the bias
    @param lambd: a float with the regularization weight
    @return: dw, db, an ndarray with the same shape as w and a float with the
             derivative evaluated at w and b
    """
    M = x.shape[0]
    error = y - logistica(x @ w + b)
    dw = -(error @ x) / M + (lambd / M) * (2 * w)
    db = -error.mean()
    return dw, db

def dg_regu(x, y, w, b, alpha, lambd, max_iter=10_000, tol=1e-4, historial=False):
    """
    Gradient descent with l2 regularization

    @param x: an ndarray of shape (M, n) with the design matrix
    @param y: an ndarray of shape (M, ) where every entry is 1.0 or 0.0
    @param alpha: a (typically small) float with the learning rate
    @param lambd: a float with the regularization value
    @param max_iter: maximum number of iterations. Default 10_000
    @param tol: a small float used as stopping criterion. Default 1e-4
    @param historial: a boolean indicating whether to keep the cost history

    @return:
        - w: an ndarray of shape (n, ) with the weights;
        - b: a float with the bias;
        - hist: the cost history. If historial == False, then hist = None.
    """
    M, n = x.shape
    hist = [costo(x, y, w, b, lambd)] if historial else None
    for epoch in range(1, max_iter):
        dw, db = grad_regu(x, y, w, b, lambd)
        w -= alpha * dw
        b -= alpha * db
        error = costo(x, y, w, b, lambd)
        if historial:
            hist.append(error)
        if np.max(np.abs(dw)) < tol:
            break
    return w, b, hist
```
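As a quick sanity check (not part of the original notebook): with $\lambda = 0$ the regularized gradient must reduce to the unregularized one, so the two implementations can be compared directly on the toy example used earlier.

```
# With lambd = 0, grad_regu should match gradiente_error exactly
w_chk = np.array([1.0])
b_chk = 1.0
x_chk = np.array([[10.0], [-5.0]])
y_chk = np.array([1, 0])

dw_plain, db_plain = gradiente_error(x_chk, y_chk, w_chk, b_chk)
dw_regu, db_regu = grad_regu(x_chk, y_chk, w_chk, b_chk, lambd=0.0)

assert np.allclose(dw_plain, dw_regu) and abs(db_plain - db_regu) < 1e-12
print(dw_regu, db_regu)
```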
**Write the functions and scripts needed to fit the logistic regression with a degree-14 polynomial for four different values of the regularization parameter. Plot the separation surface for the four values of $\lambda$ and write down your conclusions.**

```
phi_x = map_poly(14, x)
mu, de = medias_std(phi_x)
phi_x_norm = normaliza(phi_x, mu, de)

for (i, lambd) in enumerate([0, 1, 10, 100]):

    # Normalize
    phi_x_norm = normaliza(phi_x, mu, de)

    # Train
    alpha = 0.1
    w = np.zeros(phi_x_norm.shape[1])
    b = 0
    w_norm, b_norm, _ = dg_regu(phi_x_norm, y, w, b, alpha, lambd, max_iter=100000)

    # Show the results with plot_separacion2D
    plt.subplot(2, 2, i + 1)
    plt.title("Polinomio de grado 14, regu = {}.".format(lambd))
    plot_separacion2D(x, y, 14, mu, de, w_norm, b_norm)
```

**Write your conclusions here.**
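To back up those conclusions with numbers rather than only with plots, it can help to quantify how each value of $\lambda$ fits the training data. The following is only a sketch, not part of the original exercise; it assumes `phi_x_norm` and `y` from the previous cell are still in scope and simply refits each model (which is slow, since it repeats the long training runs).

```
# For each lambda, refit and report the in-sample error and the training accuracy
for lambd in [0, 1, 10, 100]:
    w = np.zeros(phi_x_norm.shape[1])
    b = 0.0
    w_l, b_l, _ = dg_regu(phi_x_norm, y, w, b, 0.1, lambd, max_iter=100_000)
    acc = np.mean(predictor(phi_x_norm, w_l, b_l) == y)
    print(f"lambda = {lambd:>3}: E_in = {error_in(phi_x_norm, y, w_l, b_l):.3f}, "
          f"train accuracy = {acc:.3f}")
```

Keep in mind that training error alone only tells half the story: to actually choose $\lambda$ one would evaluate the same models on held-out data, since the smallest in-sample error typically corresponds to the most overfit boundary.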
%matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = (10,5) plt.style.use('ggplot') def logistica(z): """ Calcula la función logística para cada elemento de z @param z: un ndarray @return: un ndarray de las mismas dimensiones que z """ return 1 / (1 + np.exp(-z)) # Y ahora vamos a ver como se ve la función logística z = np.linspace(-5, 5, 100) plt.plot( z, logistica(z)) plt.title(u'Función logística', fontsize=20) plt.xlabel(r'$z$', fontsize=20) plt.ylabel(r'$\frac{1}{1 + \exp(-z)}$', fontsize=26) plt.show() def error_in(x, y, w, b): """ Calcula el error en muestra para la regresión logística @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param w: un ndarray de dimensión (n, ) con los pesos @param b: un flotante con el sesgo @return: un flotante con el valor de pérdida """ y_est = logistica(x @ w + b) return -np.nansum([ np.log(y_est[y > 0.5]).sum(), np.log(1 - y_est[y < 0.5]).sum() ]) / y_est.shape[0] # El testunit del pobre (ya lo calcule yo, pero puedes hacerlo a mano para estar seguro) w = np.array([1]) b = 1.0 x = np.array([[10], [-5]]) y1 = np.array([1, 0]) y2 = np.array([0, 1]) y3 = np.array([0, 0]) y4 = np.array([1, 1]) y_est = logistica(x @ w + b) assert abs(error_in(x, y1, w, b) - (-np.log(y_est[0]) - np.log(1 - y_est[1])) / 2) < 1e-2 assert abs(error_in(x, y2, w, b) - (-np.log(1 - y_est[0]) - np.log(y_est[1])) / 2) < 1e-2 assert abs(error_in(x, y3, w, b) - (-np.log(1 - y_est[0]) - np.log(1 - y_est[1])) / 2) < 1e-2 assert abs(error_in(x, y4, w, b) - (-np.log(y_est[0]) - np.log(y_est[1])) / 2) < 1e-2 def gradiente_error(x, y, w, b): """ Calcula el gradiente de la función de error en muestra. @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param w: un ndarray de dimensión (n, ) con los pesos @param b: un flotante con el sesgo @return: dw, db, un ndarray de mismas dimensiones que w y un flotnte con el cálculo de la dervada evluada en el punto w y b """ M = x.shape[0] error = y - logistica(x @ w + b) dw = -x.T @ error / M db = - error.mean() return dw, db # Otra vez el testunit del pobre (ya lo calcule yo, pero puedes hacerlo a mano para estar seguro) w = np.array([1]) b = 1.0 x = np.array([[10], [-5]]) y1 = np.array([1, 0]) y2 = np.array([0, 1]) y3 = np.array([0, 0]) y4 = np.array([1, 1]) assert abs(0.00898475 - gradiente_error(x, y1, w, b)[1]) < 1e-4 assert abs(7.45495097 - gradiente_error(x, y2, w, b)[0]) < 1e-4 assert abs(4.95495097 - gradiente_error(x, y3, w, b)[0]) < 1e-4 assert abs(-0.49101525 - gradiente_error(x, y4, w, b)[1]) < 1e-4 datos = np.loadtxt('admision.csv', comments='%', delimiter=',') x, y = datos[:,0:-1], datos[:,-1] plt.plot(x[y == 1, 0], x[y == 1, 1], 'sr', label='aceptados') plt.plot(x[y == 0, 0], x[y == 0, 1], 'ob', label='rechazados') plt.title(u'Ejemplo sintético para regresión logística') plt.xlabel(u'Calificación del primer examen') plt.ylabel(u'Calificación del segundo examen') plt.axis([20, 100, 20, 100]) plt.legend(loc=0) def dg(x, y, w, b, alpha, max_iter=10_000, tol=1e-6, historial=False): """ Descenso de gradiente por lotes @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param alpha: Un flotante (típicamente pequeño) con la tasa de aprendizaje @param tol: Un flotante pequeño como criterio de paro. 
Por default 1e-6 @param max_iter: Máximo numero de iteraciones. Por default 1e4 @param historial: Un booleano para saber si guardamos el historial de errores @return: w, b, hist donde - w es ndarray de dimensión (n, ) con los pesos; - b es un float con el sesgo - hist, un ndarray de dimensión (max_iter,) con el valor del error en muestra en cada iteración. Si historial == False, entonces perdida_hist = None. """ M, n = x.shape hist = [error_in(x, y, w, b)] if historial else None for epoch in range(1, max_iter): dw, db = gradiente_error(x, y, w, b) w -= alpha * dw b -= alpha * db error = error_in(x, y, w, b) if historial: hist.append(error) if np.abs(np.max(dw)) < tol: break return w, b, hist alpha = 1e-4 mi = 50 w = np.zeros(x.shape[1]) b = 0.0 _, _, hist = dg(x, y, w, b, alpha, max_iter=mi, tol=1e-4, historial=True) plt.plot(np.arange(mi), hist) plt.title(r'Evolucion del valor de la función de error en las primeras iteraciones con $\alpha$ = ' + str(alpha)) plt.xlabel('iteraciones') plt.ylabel('perdida') w, b, _ = dg(x, y, w, b, 20*alpha, max_iter = 10_000) print("Los pesos obtenidos son: \n{}".format(w)) print("El sesgo obtenidos es: \n{}".format(b)) print("El valor final de la función de pérdida es: {}".format(error_in(x, y, w, b))) def predictor(x, w, b): """ Predice los valores de y_hat (que solo pueden ser 0 o 1), utilizando el criterio MAP. @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param w: un ndarray de dimensión (n, ) con los pesos @param b: un flotante con el sesgo @return: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 con la salida estimada """ return np.where(logistica(x @ w + b) > 0, 1, 0) x1_frontera = np.array([20, 100]) #Los valores mínimo y máximo que tenemos en la gráfica de puntos x2_frontera = -(b / w[1]) - (w[0] / w[1]) * x1_frontera print(x1_frontera) print(x2_frontera) plt.plot(x[y == 1, 0], x[y == 1, 1], 'sr', label='aceptados') plt.plot(x[y == 0, 0], x[y == 0, 1], 'ob', label='rechazados') plt.plot(x1_frontera, x2_frontera, 'm') plt.title(u'Ejemplo sintético para regresión logística') plt.xlabel(u'Calificación del primer examen') plt.ylabel(u'Calificación del segundo examen') plt.axis([20, 100, 20, 100]) plt.legend(loc=0) from itertools import combinations_with_replacement def map_poly(grad, x): """ Encuentra las características polinomiales hasta el grado grad de la matriz de datos x, asumiendo que x[:n, 0] es la expansión de orden 1 (los valores de cada atributo) @param grad: un entero positivo con el grado de expansión @param x: un ndarray de dimension (M, n) donde n es el número de atributos @return: un ndarray de dimensión (M, n_phi) donde n_phi = \sum_{i = 1}^grad fact(i + n - 1)/(fact(i) * fact(n - 1)) """ if int(grad) < 2: raise ValueError('grad debe de ser mayor a 1') M, n = x.shape atrib = x.copy() x_phi = x.copy() for i in range(2, int(grad) + 1): for comb in combinations_with_replacement(range(n), i): x_phi = np.c_[x_phi, np.prod(atrib[:, comb], axis=1)] return x_phi def medias_std(x): """ Obtiene un vector de medias y desviaciones estandar para normalizar @param x: Un ndarray de (M, n) con una matriz de diseño @return: mu, des_std dos ndarray de dimensiones (n, ) con las medias y desviaciones estandar """ return np.mean(x, axis=0), np.std(x, axis=0) def normaliza(x, mu, des_std): """ Normaliza los datos x @param x: un ndarray de dimension (M, n) con la matriz de diseño @param mu: un ndarray (n, ) con las medias @param des_std: un ndarray (n, ) con las desviaciones estandard @return: un ndarray (M, n) con x 
normalizado """ return (x - mu) / des_std # Encuentra phi_x (x son la expansión polinomial de segundo orden, utiliza la función map_poly phi_x = map_poly(2, x) #--Agrega el código aqui-- mu, de = medias_std(phi_x) phi_x_norm = normaliza(phi_x, mu, de) # Utiliza la regresión logística alpha = 1 #--Agrega el dato aqui-- w = np.zeros(phi_x.shape[1]) #--Agrega el dato aqui-- b = 0 #--Agrega el dato aqui-- _, _, hist = dg(phi_x_norm, y, w, b, alpha, max_iter=50, historial=True) plt.plot(range(len(hist)), hist) plt.xlabel('epochs') plt.ylabel(r'$E_{in}$') plt.title('Evaluación del parámetro alpha') w_norm, b_norm, _ = dg(phi_x_norm, y, w, b, alpha, 1000) print("Los pesos obtenidos son: \n{}".format(w_norm)) print("El sesgo obtenidos es: \n{}".format(b_norm)) print("El error en muestra es: {}".format(error_in(phi_x_norm, y, w_norm, b_norm))) def plot_separacion2D(x, y, grado, mu, de, w, b): """ Grafica las primeras dos dimensiones (posiciones 1 y 2) de datos en dos dimensiones extendidos con un clasificador polinomial así como la separación dada por theta_phi """ if grado < 2: raise ValueError('Esta funcion es para graficar separaciones con polinomios mayores a 1') x1_min, x1_max = np.min(x[:,0]), np.max(x[:,0]) x2_min, x2_max = np.min(x[:,1]), np.max(x[:,1]) delta1, delta2 = (x1_max - x1_min) * 0.1, (x2_max - x2_min) * 0.1 spanX1 = np.linspace(x1_min - delta1, x1_max + delta1, 600) spanX2 = np.linspace(x2_min - delta2, x2_max + delta2, 600) X1, X2 = np.meshgrid(spanX1, spanX2) X = normaliza(map_poly(grado, np.c_[X1.ravel(), X2.ravel()]), mu, de) Z = predictor(X, w, b) Z = Z.reshape(X1.shape[0], X1.shape[1]) # plt.contour(X1, X2, Z, linewidths=0.2, colors='k') plt.contourf(X1, X2, Z, 1, cmap=plt.cm.binary_r) plt.plot(x[y > 0.5, 0], x[y > 0.5, 1], 'sr', label='clase positiva') plt.plot(x[y < 0.5, 0], x[y < 0.5, 1], 'oy', label='clase negativa') plt.axis([spanX1[0], spanX1[-1], spanX2[0], spanX2[-1]]) plot_separacion2D(x, y, 2, mu, de, w_norm, b_norm) plt.title(u"Separación con un clasificador cuadrático") plt.xlabel(u"Calificación del primer examen") plt.ylabel(u"Calificación del segundo examen") datos = np.loadtxt('prod_test.csv', comments='%', delimiter=',') x, y = datos[:,0:-1], datos[:,-1] plt.plot(x[y == 1, 0], x[y == 1, 1], 'or', label='cumple calidad') plt.plot(x[y == 0, 0], x[y == 0, 1], 'ob', label='rechazado') plt.title(u'Ejemplo de pruebas de un producto') plt.xlabel(u'Valor obtenido en prueba 1') plt.ylabel(u'Valor obtenido en prueba 2') plt.legend(loc=0) for (i, grado) in enumerate([2, 6, 10, 14]): # Genera la expansión polinomial # --- Agregar código aquí --- phi_x= map_poly(grado, x) mu, de=medias_std(phi_x) # Normaliza # --- Agregar código aquí --- phi_x_norm=normaliza(phi_x, mu, de) # Entrena # --- Agregar código aquí --- alpha=0.1 w=np.zeros(phi_x.shape[1]) b=0 w_norm, b_norm, _ =dg(phi_x_norm, y, w, b, alpha, max_iter=10000, historial=True) # Muestra resultados con plot_separacion2D plt.subplot(2, 2, i + 1) plt.title(f"Polinomio de grado {grado}") # --- Agregar codigo aquí --- # plot_separacion2D(...) 
Esto es solo para ayudarlos un poco plot_separacion2D(x, y, grado, mu, de, w_norm, b_norm) def costo(x, y, w, b, lambd): """ Calcula el costo de una w dada para el conjunto dee entrenamiento dado por y y x, usando regularización @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param w: un ndarray de dimensión (n, ) con los pesos @param b: un flotante con el sesgo @param lambd: un flotante con el valor de lambda en la regularizacion @return: un flotante con el valor de pérdida """ costo = 0 M = x.shape[0] #------------------------------------------------------------------------ # Agregua aqui tu código costo=error_in(x,y,w,b) + (lambd/M)*(w.T@w) #------------------------------------------------------------------------ return costo def grad_regu(x, y, w, b, lambd): """ Calcula el gradiente de la función de costo regularizado para clasificación binaria, utilizando una neurona logística, para w y b y conociendo un conjunto de aprendizaje. @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param w: un ndarray de dimensión (n, ) con los pesos @param b: un flotante con el sesgo @param lambd: un flotante con el peso de la regularización @return: dw, db, un ndarray de mismas dimensiones que w y un flotnte con el cálculo de la dervada evluada en el punto w y b """ M = x.shape[0] dw = np.zeros_like(w) db = 0.0 #------------------------------------------------------------------------ # Agregua aqui tu código error= y-logistica(x@w+b) dw= -(error@x) * (1/M) + (lambd/M) * (2*w) db= -error.mean() #------------------------------------------------------------------------ return dw, db def dg_regu(x, y, w, b, alpha, lambd, max_iter=10_000, tol=1e-4, historial=False): """ Descenso de gradiente con regularización l2 @param x: un ndarray de dimensión (M, n) con la matriz de diseño @param y: un ndarray de dimensión (M, ) donde cada entrada es 1.0 o 0.0 @param alpha: Un flotante (típicamente pequeño) con la tasa de aprendizaje @param lambd: Un flotante con el valor de la regularización @param max_iter: Máximo numero de iteraciones. Por default 10_000 @param tol: Un flotante pequeño como criterio de paro. Por default 1e-4 @param historial: Un booleano para saber si guardamos el historial @return: - w: ndarray de dimensión (n, ) con los pesos; - b: float con el sesgo - hist: ndarray de (max_iter,) el historial de error. Si historial == False, entonces hist = None. """ M, n = x.shape hist = [costo(x, y, w, b, lambd)] if historial else None for epoch in range(1, max_iter): dw, db = grad_regu(x, y, w, b, lambd) w -= alpha * dw b -= alpha * db error = costo(x, y, w, b, lambd) if historial: hist.append(error) if np.abs(np.max(dw)) < tol: break return w, b, hist phi_x = map_poly(14, x) mu, de = medias_std(phi_x) phi_x_norm = normaliza(phi_x, mu, de) for (i, lambd) in enumerate([0, 1, 10, 100]): # Normaliza # --- Agregar código aquí --- phi_x_norm=normaliza(phi_x, mu, de) # Entrena # --- Agregar código aquí --- alpha= 0.1 w = np.zeros(phi_x_norm.shape[1]) b= 0 w_norm, b_norm, _= dg_regu(phi_x_norm, y, w, b, alpha, lambd, max_iter=100000) # Muestra resultados con plot_separacion2D plt.subplot(2, 2, i + 1) plt.title("Polinomio de grado 14, regu = {}.".format(lambd)) # --- Agregar codigo aquí --- plot_separacion2D(x, y, 14, mu, de, w_norm, b_norm)
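# A minimal sanity-check sketch (illustrative only): it reuses objects already defined
# above to score the degree-14 regularized fit by thresholding the logistic output at 0.5
# (the MAP decision rule) and to report the regularized cost for the last lambda value.
y_hat = (logistica(phi_x_norm @ w_norm + b_norm) >= 0.5).astype(float)
print("Training accuracy: {:.3f}".format((y_hat == y).mean()))
print("Regularized cost (lambda = {}): {:.4f}".format(lambd, costo(phi_x_norm, y, w_norm, b_norm, lambd)))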
# HTCondor Introduction Launch this tutorial in a Jupyter Notebook on Binder: [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/htcondor/htcondor-python-bindings-tutorials/master?urlpath=lab/tree/introductory/HTCondor-Introduction.ipynb) Let's start interacting with the HTCondor daemons! We'll cover the basics of two daemons, the _Collector_ and the _Schedd_: - The **Collector** maintains an inventory of all the pieces of the HTCondor pool. For example, each machine that can run jobs will advertise a ClassAd describing its resources and state. In this module, we'll learn the basics of querying the collector for information and displaying results. - The **Schedd** maintains a queue of jobs and is responsible for managing their execution. We'll learn the basics of querying the schedd. There are several other daemons - particularly, the _Startd_ and the _Negotiator_ - the Python bindings can interact with. We'll cover those in the advanced modules. To better demonstrate how HTCondor works, we have launched a personal instance for you to use. Your private HTCondor instance runs a miniature HTCondor pool and can run a single job at a time. To get started, let's import the `htcondor` modules. ``` import htcondor import classad ``` ## Collector We'll start with the _Collector_, which gathers descriptions of the states of all the daemons in your HTCondor pool. The collector provides both **service discovery** and **monitoring** for these daemons. Let's try to find the Schedd information for your HTCondor pool. First, we'll create a `Collector` object, then use the `locate` method: ``` coll = htcondor.Collector() # Create the object representing the collector. schedd_ad = coll.locate(htcondor.DaemonTypes.Schedd) # Locate the default schedd. print(schedd_ad['MyAddress']) # Prints the location of the schedd, using HTCondor's internal addressing scheme. ``` The `locate` method takes a type of daemon and (optionally) a name, returning a ClassAd. Here, we print out the resulting `MyAddress` key. A few minor points about the above example: - Because we didn't provide an address to the `Collector` constructor, we used the default collector defined in the container's configuration file. If we wanted to instead query a non-default collector, we could have done `htcondor.Collector("collector.example.com")`. - We used the `DaemonTypes` enumeration to pick the kind of daemon to return. - If there were multiple schedds in the pool, the `locate` query would have failed. In such a case, we need to provide an explicit name to the method. E.g., `coll.locate(htcondor.DaemonTypes.Schedd, "schedd.example.com")`. - The final output prints the schedd's location. You may be surprised that this is not simply a `hostname:port`; to help manage addressing in today's complicated Internet (full of NATs, private networks, and firewalls), a more flexible structure was needed. - HTCondor developers sometimes refer to this as the _sinful string_; here, _sinful_ is a play on a Unix data structure, not a moral judgement. The `locate` method often returns only enough data to contact a remote daemon. Typically, a ClassAd records significantly more attributes. 
For example, if we wanted to query for a few specific attributes, we would use the `query` method instead: ``` coll.query(htcondor.AdTypes.Schedd, projection=["Name", "MyAddress", "DaemonCoreDutyCycle"]) ``` Here, `query` takes an `AdType` (slightly more generic than the `DaemonTypes`, as many kinds of ads are in the collector) and several optional arguments, then returns a list of ClassAds. We used the `projection` keyword argument; this indicates what attributes you want returned. The collector may automatically insert additional attributes (here, only `MyType`); if an ad is missing a requested attribute, it is simply not set in the returned ClassAd object. If no projection is specified, then all attributes are returned. **WARNING**: when possible, utilize the projection to limit the data returned. Some ads may have hundreds of attributes, making returning the entire ad an expensive operation. The projection filters the returned _keys_; to filter out unwanted _ads_, utilize the `constraint` option. Let's do the same query again, but specify our hostname explicitly: ``` import socket # We'll use this to automatically fill in our hostname coll.query(htcondor.AdTypes.Schedd, constraint='Name=?=%s' % classad.quote("jovyan@%s" % socket.getfqdn()), projection=["Name", "MyAddress", "DaemonCoreDutyCycle"]) ``` Notes: - `constraint` accepts either an `ExprTree` or `string` object; the latter is automatically parsed as an expression. - We used the `classad.quote` function to properly quote the hostname string. In this example, we're relatively certain the hostname won't contain quotes. However, it is good practice to use the `quote` function to avoid possible SQL-injection-type attacks. - Consider what would happen if the host's FQDN contained spaces and doublequotes, such as `foo.example.com" || true`. ## Schedd Let's try our hand at querying the `schedd`! First, we'll need a schedd object. You may either create one out of the ad returned by `locate` above or use the default in the configuration file: ``` schedd = htcondor.Schedd() schedd = htcondor.Schedd(schedd_ad) print(schedd) ``` Unfortunately, as there are no jobs in our personal HTCondor pool, querying the `schedd` will be boring. Let's submit a few jobs (**note** the API used below will be covered by the next module; it's OK if you don't understand it now): ``` sub = htcondor.Submit() sub['executable'] = '/bin/sleep' sub['arguments'] = '5m' with schedd.transaction() as txn: sub.queue(txn, 10) ``` We should now have 10 jobs in queue, each of which should take 5 minutes to complete. Let's query for the jobs, paying attention to the jobs' ID and status: ``` for job in schedd.xquery(projection=['ClusterId', 'ProcId', 'JobStatus']): print(repr(job)) ``` The `JobStatus` is an integer; the integers map into the following states: - `1`: Idle (`I`) - `2`: Running (`R`) - `3`: Removed (`X`) - `4`: Completed (`C`) - `5`: Held (`H`) - `6`: Transferring Output - `7`: Suspended Depending on how quickly you executed the notebook, you might see all jobs idle (`JobStatus = 1`) or one job running (`JobStatus = 2`) above. It is rare to see the other codes. As with the Collector's `query` method, we can also filter out jobs using `xquery`: ``` for job in schedd.xquery(requirements = 'ProcId >= 5', projection=['ProcId']): print(job.get('ProcId')) ``` Astute readers may notice that the `Schedd` object has both `xquery` and `query` methods. 
The difference between the two mimics the difference between the `xreadlines` and `readlines` calls in the standard Python library: - `query` returns a _list_ of ClassAds, meaning all objects are held in memory at once. This utilizes more memory, but the size of the results is immediately available. It utilizes an older, heavyweight protocol to communicate with the Schedd. - `xquery` returns an _iterator_ that produces ClassAds. This only requires one ClassAd to be in memory at once. It is much more lightweight, both on the client and server side. When in doubt, utilize `xquery`. Now that we have a running job, it may be useful to check the status of the machine in our HTCondor pool: ``` print(coll.query(htcondor.AdTypes.Startd, projection=['Name', 'Status', 'Activity', 'JobId', 'RemoteOwner'])[0]) ``` ## On Job Submission Congratulations - you can now perform simple queries against the collector for worker and submit hosts, as well as simple job queries against the submit host! It is now time to move on to [submitting and managing jobs](Submitting-and-Managing-Jobs.ipynb).
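One last recap before the next module: the status codes listed above can be tallied directly from an `xquery` projection. Here is a minimal sketch; the `status_names` dictionary is just a local helper for readability, not part of the HTCondor API.

```
import htcondor

# Human-readable labels for the integer JobStatus codes listed above
status_names = {1: "Idle", 2: "Running", 3: "Removed", 4: "Completed",
                5: "Held", 6: "Transferring Output", 7: "Suspended"}

schedd = htcondor.Schedd()
counts = {}
for ad in schedd.xquery(projection=["JobStatus"]):
    label = status_names.get(ad.get("JobStatus"), "Unknown")
    counts[label] = counts.get(label, 0) + 1
print(counts)  # with the ten sleep jobs queued, expect mostly Idle plus at most one Running
```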
# Dimensionality Reduction on Organization-Level Data ## This notebook only runs the sample data we selected from the completed datasets ``` import pandas as pd import matplotlib.pyplot as plt import numpy as np from matplotlib import colors from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.preprocessing import scale import seaborn as sns ``` # Load Data ``` transaction = pd.read_csv("../sample_data/other_samples/yearlyorgpayment_perperson.csv") ticket = pd.read_csv("../sample_data/other_samples/org_yearly_tk_pperson_v2.csv") donor = pd.read_csv("../sample_data/other_samples/cleaned_year_donation.csv") ``` # Data Cleaning ## Ticket ``` ticket.dropna(inplace = True) ``` ## Transaction ``` transaction.year = transaction.year.astype(str) transaction = transaction.pivot(index = ["org_id"], columns = ["year"], values = [c for c in transaction.columns if c not in ["org_id","year"]]).reset_index() transaction.columns = ["_".join(x) if x[0] !="org_id" else x[0] for x in transaction.columns.ravel()] transaction trans_kept_year = [str(c) for c in list(range(2015,2021))] trans_kept_years_cols = [c for c in transaction.columns if c[-4:] in trans_kept_year] transaction_v2 = transaction[["org_id"] +trans_kept_years_cols].copy() transaction_v2.dropna(inplace = True) transaction_v2 ``` ## Donor ``` donor.year = donor.year.astype(str) donor = donor.pivot(index = ["org_id"], columns = ["year"], values = [c for c in donor.columns if c not in ["org_id","year"]]).reset_index() donor.columns = ["_".join(x) if x[0] !="org_id" else x[0] for x in donor.columns.ravel()] donor donor_kept_year = [str(c) for c in list(range(2018,2021))] donor_kept_years_cols = [c for c in donor.columns if c[-4:] in donor_kept_year] donor_v2 = donor[["org_id"] +donor_kept_years_cols].copy() donor_v2.dropna(inplace = True) donor_v2 ``` # Join Data ``` org_features = pd.merge(transaction_v2, ticket, on = "org_id") org_features = pd.merge(donor_v2, org_features, on = "org_id") kept_years = [str(c) for c in list(range(2015,2021))] kept_years_cols = [c for c in org_features.columns if c[-4:] in kept_years] kept_years_cols org_features.columns len(org_features.columns) org_features = org_features[["org_id"]+kept_years_cols].copy() org_features.dropna(axis = 1, inplace = True) org_features.to_csv("pca_all_cols.csv") ``` # Generate PCA ``` def label_point(x, y, val, ax): a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1) for i, point in a.iterrows(): ax.text(point['x']+.1, point['y']+.1, str(point['val']),fontsize= 15) def pca_plot(df_plot, figsize, xlim, ylim, title, figtitle): fig , ax1 = plt.subplots(figsize=figsize) ax1.set_xlim(xlim[0],xlim[1]) ax1.set_ylim(ylim[0],ylim[1]) # Plot Principal Components 1 and 2 sns.scatterplot(x = "PC1", y = "PC2",data = df_plot, s = 100) # Plot reference lines ax1.hlines(0,xlim[0],xlim[1], linestyles='dotted', colors='grey') ax1.vlines(0,ylim[0],ylim[1], linestyles='dotted', colors='grey') ax1.tick_params(axis='x', labelsize=15) ax1.tick_params(axis='y', labelsize=15) # ax1.set_xticklabels(labels = x_label, fontsize = 15) ax1.set_xlabel('First Principal Component',fontsize= 15) ax1.set_ylabel('Second Principal Component',fontsize= 15) ax1.set_title(title,fontsize= 20) label_point(df_plot.PC1, df_plot.PC2, df_plot.org_id, plt.gca()) fig.savefig(f'PCA/{figtitle}.png',bbox_inches='tight') ``` ## All Data ### PCA Plot ``` pca = PCA() data = org_features.drop("org_id", axis =1) data = pd.DataFrame(scale(data), index=data.index, columns=data.columns) df_plot = 
pd.DataFrame(pca.fit_transform(data), columns=['PC'+str(i+1) for i in range(8)], index=data.index) sim_df = pd.concat([org_features,df_plot], axis = 1) pca_plot(sim_df, figsize = (10,10), xlim = (-15, 15), ylim = (-15,15), title = "PCA of 8 Organizations with All Features by Year", figtitle = "PCA_all_data") ``` ### PCA Loadings ``` pca_loadings = pd.DataFrame(PCA().fit(data).components_.T, index=data.columns, columns=['V'+str(i+1) for i in range(8)]) ``` ## Transaction ``` def label_point_trans(x, y, val, ax): a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1) for i, point in a.iterrows(): if point['val'] == 'NCSU': ax.text(point['x']-1, point['y']+.1, str(point['val']),fontsize= 15) elif point['val'] == 'BAYLOR': ax.text(point['x']+.2, point['y']-0.15, str(point['val']),fontsize= 15) elif point['val'] == 'ARMY': ax.text(point['x']-2, point['y']-0.05, str(point['val']),fontsize= 15) elif point['val'] == 'COLORADO': ax.text(point['x']+.2, point['y']-0.01, str(point['val']),fontsize= 15) else: ax.text(point['x']+.1, point['y']+.1, str(point['val']),fontsize= 15) def pca_plot_trans(df_plot, figsize, xlim, ylim, title,figtitle): fig , ax1 = plt.subplots(figsize=figsize) ax1.set_xlim(xlim[0],xlim[1]) ax1.set_ylim(ylim[0],ylim[1]) # Plot Principal Components 1 and 2 sns.scatterplot(x = "PC1", y = "PC2",data = df_plot, s = 100) # Plot reference lines ax1.hlines(0,xlim[0],xlim[1], linestyles='dotted', colors='grey') ax1.vlines(0,ylim[0],ylim[1], linestyles='dotted', colors='grey') ax1.tick_params(axis='x', labelsize=15) ax1.tick_params(axis='y', labelsize=15) # ax1.set_xticklabels(labels = x_label, fontsize = 15) ax1.set_xlabel('First Principal Component',fontsize= 20) ax1.set_ylabel('Second Principal Component',fontsize= 20) ax1.set_title(title, fontsize= 20) label_point_trans(df_plot.PC1, df_plot.PC2, df_plot.org_id, plt.gca()) fig.savefig(f'PCA/{figtitle}.png',bbox_inches='tight') pca = PCA() data_trans = transaction_v2.drop("org_id", axis =1) data_trans = pd.DataFrame(scale(data_trans), index=data_trans.index, columns=data_trans.columns) df_plot_trans = pd.DataFrame(pca.fit_transform(data_trans), columns=['PC'+str(i+1) for i in range(min(len(data_trans),len(data_trans.columns)))], index=data_trans.index) sim_df_trans = pd.concat([transaction_v2,df_plot_trans], axis = 1) pca_plot_trans(sim_df_trans, figsize=(13,13), xlim = (-15,15), ylim = (-6,10), title = "PCA of 21 Organizations with \nTransaction-Related Features by Year", figtitle = "PCA_transaction") transaction_v2.to_csv("pca_transaction_cols.csv") len(transaction_v2.columns) ``` ## Donor ``` def label_point_donor(x, y, val, ax): a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1) for i, point in a.iterrows(): if point['val'] == 'FRESNO': ax.text(point['x']-1, point['y']+.1, str(point['val']),fontsize= 18) else: ax.text(point['x']+.1, point['y']+.1, str(point['val']),fontsize= 15) def pca_plot_donor(df_plot, figsize, xlim, ylim, title,figtitle): fig , ax1 = plt.subplots(figsize=figsize) ax1.set_xlim(xlim[0],xlim[1]) ax1.set_ylim(ylim[0],ylim[1]) # Plot Principal Components 1 and 2 sns.scatterplot(x = "PC1", y = "PC2",data = df_plot, s = 100) # Plot reference lines ax1.hlines(0,xlim[0],xlim[1], linestyles='dotted', colors='grey') ax1.vlines(0,ylim[0],ylim[1], linestyles='dotted', colors='grey') ax1.tick_params(axis='x', labelsize=15) ax1.tick_params(axis='y', labelsize=15) # ax1.set_xticklabels(labels = x_label, fontsize = 15) ax1.set_xlabel('First Principal Component',fontsize= 20) ax1.set_ylabel('Second Principal Component',fontsize= 
20) ax1.set_title(title, fontsize= 20) label_point_donor(df_plot.PC1, df_plot.PC2, df_plot.org_id, plt.gca()) fig.savefig(f'PCA/{figtitle}.png',bbox_inches='tight') donor_v2.columns len(donor_v2.columns) pca = PCA() data_donor = donor_v2.drop("org_id", axis =1) data_donor = pd.DataFrame(scale(data_donor), index=data_donor.index, columns=data_donor.columns) df_plot_donor = pd.DataFrame(pca.fit_transform(data_donor), columns=['PC'+str(i+1) for i in range(min(len(data_donor),len(data_donor.columns)))], index=data_donor.index) sim_df_donor = pd.concat([donor_v2,df_plot_donor], axis = 1) pca_plot_donor(sim_df_donor, figsize=(10,13), xlim = (-8,10), ylim = (-7,8), title = "PCA of 12 Organizations with \n Donation-Related Features by Year", figtitle = "PCA_donor") donor_v2.to_csv("pca_donor_col.csv") donor_v2.columns ``` ## Ticket ``` pca = PCA() data_ticket = ticket.drop("org_id", axis =1) data_ticket = pd.DataFrame(scale(data_ticket), index=data_ticket.index, columns=data_ticket.columns) df_plot_ticket = pd.DataFrame(pca.fit_transform(data_ticket), columns=['PC'+str(i+1) for i in range(min(len(data_ticket),len(data_ticket.columns)))], index=data_ticket.index) sim_df_ticket = pd.concat([ticket,df_plot_ticket], axis = 1) pca_plot(sim_df_ticket, figsize=(15,10), xlim = (-10,10), ylim = (-5,10), title = "PCA of 15 Organizations with Ticketing-Related Features by Year", figtitle = "PCA_ticket") ticket.to_csv("pca_ticket_col.csv") len(ticket.columns) ```
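Since only the first two principal components are plotted in each figure above, it is worth checking how much of the variance they actually capture. Below is a minimal sketch using the scaled ticket features (`data_ticket` from the cell above); the same pattern applies to the transaction and donor matrices.

```
import numpy as np
from sklearn.decomposition import PCA

pca_check = PCA().fit(data_ticket)  # data_ticket is already scaled above
ratios = pca_check.explained_variance_ratio_
print("Variance explained by PC1 + PC2: {:.1%}".format(ratios[:2].sum()))
print("Cumulative explained variance:", np.cumsum(ratios).round(3))
```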
# Normalization vs Standardization  - Quantitative analysis This notebook containes the code to extract tables and info used in my Medium article - TODO:add link # Let's read the data ``` import os import sys import sys sys.path.append("../..") from src.data.bank_additional import BandAdditionalParser from src.data.income_evaluation import IncomeEvaluationParser from src.data.skyserver import SkyserverParser from src.data.sonar import SonarParser from src.data.weather_aus import WeatherAUSParser avaliable_restul_files = { "sonar_results.csv": SonarParser, "Skyserver_results.csv": SkyserverParser, "income_evaluation_results.csv": IncomeEvaluationParser, "bank-additional_results.csv": BandAdditionalParser, "weatherAUS_results.csv": WeatherAUSParser } # Pick one of the keys name in the above dictionarys and paste it in results_file variable. Then run all the cells results_file = "income_evaluation_results.csv" # Pick the wanted parser and run the cells parser = avaliable_restul_files[results_file]() X,y = parser.X, parser.y X.head(5) print("Data shape: ", X.shape) print(y.value_counts()) ``` ### Let's read the results file ``` import os import pandas as pd results_df = pd.read_csv(os.path.join("..", "..", "data", "processed", results_file)).dropna().round(3) results_df ``` <a id="Out-of-the-box_classifier"></a> # 1. Out-of-the-box classifiers ``` import operator results_df.loc[operator.and_(results_df["Classifier_Name"].str.startswith("_"), ~results_df["Classifier_Name"].str.endswith("PCA"))].dropna() ``` <a id="Classifiers_Scaling"></a> # 2. Classifiers+Scaling ``` import operator import numpy as np temp = results_df.loc[~results_df["Classifier_Name"].str.endswith("PCA")].dropna() temp["model"] = results_df["Classifier_Name"].apply(lambda sen: sen.split("_")[1]) temp["scaler"] = results_df["Classifier_Name"].apply(lambda sen: sen.split("_")[0]) def df_style(val): return 'font-weight: 800' pivot_t = pd.pivot_table(temp, values='CV_mean', index=["scaler"], columns=['model'], aggfunc=np.sum) pivot_t_bold = pivot_t.style.applymap(df_style, subset=pd.IndexSlice[pivot_t["CART"].idxmax(),"CART"]) for col in list(pivot_t): pivot_t_bold = pivot_t_bold.applymap(df_style, subset=pd.IndexSlice[pivot_t[col].idxmax(),col]) pivot_t_bold # Print table for the Medium article pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 100000) pd.options.display.max_rows pd.set_option('display.max_colwidth', -1) dict2 = {'StandardScaler': "StandardScaler", 'MinMaxScaler':"MinMaxScaler", 'MaxAbsScaler':"MaxAbsScaler", 'RobustScaler':"RobustScaler", 'QuantileTransformer-Normal':"QuantileTransformer(output_distribution='normal')", 'QuantileTransformer-Uniform':"QuantileTransformer(output_distribution='uniform')", 'PowerTransformer-Yeo-Johnson':"PowerTransformer(method='yeo-johnson')", 'Normalizer':"Normalizer"} scalers_df = pd.DataFrame(list(dict2.items()), columns=["Name","Sklearn_Class"]) s = scalers_df.style.set_properties(subset=["Name", "Sklearn_Class"], **{'text-align': 'left'}) s.set_table_styles([ dict(selector='th', props=[('text-align', 'left')] ) ]) # Print table for the Medium article pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 100000) pd.options.display.max_rows pd.set_option('display.max_colwidth', -1) dict2 = {'LR': "LogisticRegression", 'LDA':"LinearDiscriminantAnalysis", 'KNN':"KNeighborsClassifier", 'CART':"DecisionTreeClassifier", 'NB':"GaussianNB", 'SVM':"SVC", 
'RF':"RandomForestClassifier", 'MLP':"MLPClassifier"} scalers_df = pd.DataFrame(list(dict2.items()), columns=["Name","Sklearn_Class"]) s = scalers_df.style.set_properties(subset=["Name", "Sklearn_Class"], **{'text-align': 'left'}) s.set_table_styles([ dict(selector='th', props=[('text-align', 'left')] ) ]) import operator cols_max_vals = {} cols_max_row_names = {} for col in list(pivot_t): row_name = pivot_t[col].idxmax() cell_val = pivot_t[col].max() cols_max_vals[col] = cell_val cols_max_row_names[col] = row_name sorted_cols_max_vals = sorted(cols_max_vals.items(), key=lambda kv: kv[1], reverse=True) print("Best classifiers sorted:\n") counter = 1 for model, score in sorted_cols_max_vals: print(str(counter) + ". " + model + " + " +cols_max_row_names[model] + " : " +str(score)) counter +=1 ``` # 3. Classifier+Scaling+PCA ``` import operator temp = results_df.copy() temp["model"] = results_df["Classifier_Name"].apply(lambda sen: sen.split("_")[1]) temp["scaler"] = results_df["Classifier_Name"].apply(lambda sen: sen.split("_")[0]) def df_style(val): return 'font-weight: 800' pivot_t = pd.pivot_table(temp, values='CV_mean', index=["scaler"], columns=['model'], aggfunc=np.sum) pivot_t_bold = pivot_t.style.applymap(df_style, subset=pd.IndexSlice[pivot_t["CART"].idxmax(),"CART"]) for col in list(pivot_t): pivot_t_bold = pivot_t_bold.applymap(df_style, subset=pd.IndexSlice[pivot_t[col].idxmax(),col]) pivot_t_bold ``` # Classifiers+Scaling+PCA+Hyperparameter tuning I hypertune the parameters only on the Sonar dataset ``` import operator import os import pandas as pd results_hyper_file = "sonar_results_hypertuned.csv" results_hyper_df = pd.read_csv(os.path.join("..", "..", "data", "processed", results_hyper_file)).dropna().round(3) temp = results_hyper_df.copy() temp["model"] = results_hyper_df["Classifier_Name"].apply(lambda sen: sen.split("_")[1]) temp["scaler"] = results_hyper_df["Classifier_Name"].apply(lambda sen: sen.split("_")[0]) def df_style(val): return 'font-weight: 800' pivot_t = pd.pivot_table(temp, values='CV_mean', index=["scaler"], columns=['model'], aggfunc=np.sum) pivot_t_bold = pivot_t.style.applymap(df_style, subset=pd.IndexSlice[pivot_t["KNN"].idxmax(),"KNN"]) for col in list(pivot_t): pivot_t_bold = pivot_t_bold.applymap(df_style, subset=pd.IndexSlice[pivot_t[col].idxmax(),col]) pivot_t_bold ```
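For context, each cell in the pivot tables above corresponds to one scaler/classifier pair evaluated with cross-validation on the precomputed results files. Below is a minimal sketch of how a single pair could be re-scored from the raw features; the fold count and the assumption that `parser.X` is already numeric are mine, since the code that generated the results CSVs is not shown in this notebook.

```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# One scaler/classifier pair, mirroring a single cell of the pivot table
pipe = Pipeline([("scaler", StandardScaler()),
                 ("clf", LogisticRegression())])
scores = cross_val_score(pipe, X, y, cv=5)
print("StandardScaler + LR, CV mean: {:.3f}".format(scores.mean()))
```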
<img src="http://xarray.pydata.org/en/stable/_static/dataset-diagram-logo.png" align="right" width="30%"> # Introduction Welcome to the Xarray Tutorial. Xarray is an open source project and Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun! Xarray introduces labels in the form of dimensions, coordinates and attributes on top of raw [NumPy](https://numpy.org/)-like arrays, which allows for a more intuitive, more concise, and less error-prone developer experience. The package includes a large and growing library of domain-agnostic functions for advanced analytics and visualization with these data structures. Xarray is inspired by and borrows heavily from [pandas](https://pandas.pydata.org/), the popular data analysis package focused on labelled tabular data. It is particularly tailored to working with [netCDF files](http://www.unidata.ucar.edu/software/netcdf), which were the source of Xarray’s data model, and integrates tightly with [Dask](http://dask.org/) for parallel computing. ## Tutorial Setup This tutorial is designed to run on [Binder](https://mybinder.org/). This will allow you to run the turoial in the cloud without any additional setup. To get started, simply click [here](https://mybinder.org/v2/gh/xarray-contrib/xarray-tutorial/master?urlpath=lab): [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/xarray-contrib/xarray-tutorial/master) If you choose to install the tutorial locally, follow these steps: 1. Clone the repository: ``` git clone https://github.com/xarray-contrib/xarray-tutorial.git ``` 1. Install the environment. The repository includes an `environment.yaml` in the `.binder` subdirectory that contains a list of all the packages needed to run this tutorial. To install them using conda run: ``` conda env create -f .binder/environment.yaml conda activate xarray-tutorial ``` 1. Start a Jupyter session: ``` jupyter lab ``` ## Useful links 1. References - [Documentation](http://xarray.pydata.org/en/stable/) - [Code Repository](https://github.com/pydata/xarray) 1. Ask for help: - Use the [python-xarray](https://stackoverflow.com/questions/tagged/python-xarray) on StackOverflow - [GitHub Issues](https://github.com/pydata/xarray/issues) for bug reports and feature requests ## Tutorial Structure This tutorial is made up of multiple Jupyter Notebooks. These notebooks mix code, text, visualization, and exercises. If you haven't used JupyterLab before, it's similar to the Jupyter Notebook. If you haven't used the Notebook, the quick intro is 1. There are two modes: command and edit 1. From command mode, press Enter to edit a cell (like this markdown cell) 1. From edit mode, press Esc to change to command mode 1. Press shift+enter to execute a cell and move to the next cell. 1. The toolbar has commands for executing, converting, and creating cells. The layout of the tutorial will be as follows: 1. [Introduction + Data structures for multi-dimensional data](./01_datastructures_and_io.ipynb) 1. [Working with labeled data](02_working_with_labeled_data.ipynb) 1. [Computation with Xarray](03_computation_with_xarray.ipynb) 1. [Plotting and Visualization](04_plotting_and_visualization.ipynb) 1. [Introduction to Dask](05_intro_to_dask.ipynb) 1. [Dask and Xarray](06_xarray_and_dask.ipynb) ## Exercise: Print Hello, world! Each notebook will have exercises for you to solve. You'll be given a blank or partially completed cell, followed by a hidden cell with a solution. For example. Print the text "Hello, world!". 
``` # Your code here ``` In some cases, the next cell will have the solution. Click the ellipses to expand the solution, and always make sure to run the solution cell, in case later sections of the notebook depend on the output from the solution. ``` print("Hello, world!") ``` ## Going Deeper We've designed the notebooks above to cover the basics of Xarray from beginning to end. To help you go deeper, we've also created a list of notebooks that demonstrate real-world applications of Xarray in a variety of use cases. These need not be explored in any particular sequence; instead, they are meant to provide a sampling of what Xarray can be used for. ### Xarray and Weather/Climate Model Data 1. [Global Mean Surface Temperature from CMIP6](https://binder.pangeo.io/v2/gh/pangeo-gallery/cmip6/binder?urlpath=git-pull?repo=https://github.com/pangeo-gallery/cmip6%26amp%3Burlpath=lab/tree/cmip6): Start with `global_mean_surface_temp.ipynb` then feel free to explore the rest of the notebooks. <!-- 1. [Natural climate variability in the CESM Large Ensemble](https://aws-uswest2-binder.pangeo.io/v2/gh/NCAR/cesm-lens-aws/master?urlpath=lab) --> 1. [National Water Model Streamflow Analysis](https://aws-uswest2-binder.pangeo.io/v2/gh/rsignell-usgs/esip-gallery/binder?urlpath=git-pull?repo=https://github.com/rsignell-usgs/esip-gallery%26amp%3Burlpath=lab/tree/esip-gallery): Start with `02_National_Water_Model.ipynb` then feel free to explore the rest of the notebooks. ### Xarray and Satellite Data 1. [Landsat-8 on AWS](https://aws-uswest2-binder.pangeo.io/v2/gh/pangeo-data/landsat-8-tutorial-gallery/master/?urlpath=git-pull?repo=https://github.com/pangeo-data/landsat-8-tutorial-gallery%26amp%3Burlpath=lab/tree/landsat-8-tutorial-gallery/landsat8.ipynb%3Fautodecode) ### Xarray and Bayesian Statistical Modeling 1. [Xarray and PyMC3](https://mybinder.org/v2/gh/pymc-devs/pymc3/master?filepath=%2Fdocs%2Fsource%2Fnotebooks): Start with `multilevel_modeling.ipynb` then feel free to explore the rest of the notebooks. Also check out [Arviz](https://arviz-devs.github.io/arviz/) which uses Xarray as its data model.
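Before heading into the first notebook, here is a minimal, self-contained taste of the labeled data model described in the introduction; the dimension names, coordinates, and values below are made up for illustration.

```
import numpy as np
import xarray as xr

# A small 2-D array with named dimensions and coordinate labels
temps = xr.DataArray(
    np.random.rand(3, 4),
    dims=("time", "station"),
    coords={"time": ["2021-01", "2021-02", "2021-03"], "station": list("abcd")},
    name="temperature",
)
print(temps.sel(station="a").mean().values)  # select by label, then reduce
```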
論文 https://arxiv.org/abs/2203.14367<br> <br> GitHub https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git<br> <br> <a href="https://colab.research.google.com/github/kaz12tech/ai_demos/blob/master/ThinPlateSplineMotionModel_demo.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # 環境セットアップ ## GPU確認 ``` !nvidia-smi ``` ## GitHubからコード取得 ``` %cd /content !git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git ``` ## ライブラリのインストール ``` %cd /content/ !pip install face_alignment > /dev/null # face alignment用にclone !git clone https://github.com/adamian98/pulse.git ``` ## ライブラリのインポート ``` %cd /content/Thin-Plate-Spline-Motion-Model import torch import imageio import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from skimage.transform import resize from IPython.display import HTML import warnings import os from demo import load_checkpoints from demo import make_animation from skimage import img_as_ubyte from google.colab import files from moviepy.editor import * warnings.filterwarnings("ignore") ``` # テストデータのセットアップ [使用動画](https://www.pexels.com/ja-jp/video/5981354/)<br> [使用画像](https://www.pakutaso.com/shared/img/thumb/nissinIMGL0823_TP_V.jpg) ## アップロード ``` #@markdown 動画の切り抜き範囲(秒)を指定してください。\ #@markdown 30秒以上の場合OOM発生の可能性が高いため注意 start_sec = 3#@param {type:"integer"} end_sec = 7#@param {type:"integer"} (start_pt, end_pt) = (start_sec, end_sec) %cd /content/Thin-Plate-Spline-Motion-Model !rm -rf test_data !mkdir test_data %cd test_data !mkdir image aligned_image video frames aligned_video %cd video print("upload video...") video = files.upload() video = list(video.keys()) video_file = video[0] # 指定区間切り抜き with VideoFileClip(video_file) as video: subclip = video.subclip(start_pt, end_pt) subclip.write_videofile("./video.mp4") # frameに分割 !ffmpeg -i video.mp4 ../frames/%02d.png %cd ../image print("upload image...") image = files.upload() image = list(image.keys()) image_file = image[0] ``` ## aligned image ``` %cd /content/Thin-Plate-Spline-Motion-Model/test_data !python /content/pulse/align_face.py \ -input_dir /content/Thin-Plate-Spline-Motion-Model/test_data/image \ -output_dir /content/Thin-Plate-Spline-Motion-Model/test_data/aligned_image \ -output_size 256 \ -seed 1234 ``` ## aligned video ``` %cd /content/Thin-Plate-Spline-Motion-Model # tedの場合はoutput_size 384 !python /content/pulse/align_face.py \ -input_dir /content/Thin-Plate-Spline-Motion-Model/test_data/frames \ -output_dir /content/Thin-Plate-Spline-Motion-Model/test_data/aligned_video \ -output_size 256 \ -seed 1234 !ffmpeg -i /content/Thin-Plate-Spline-Motion-Model/test_data/aligned_video/%02d_0.png -c:v libx264 -vf "fps=25,format=yuv420p" /content/Thin-Plate-Spline-Motion-Model/test_data/aligned_video/aligned.mp4 ``` # モデルのセットアップ ``` %cd /content/Thin-Plate-Spline-Motion-Model # @markdown モデル選択 dataset_name = 'vox' #@param ["vox", "taichi", "ted", "mgif"] # @markdown 入力画像 source_image_path = '/content/Thin-Plate-Spline-Motion-Model/test_data/aligned_image/nissinIMGL0823_TP_V_0.png' #@param {type:"string"} # @markdown 入力動画 driving_video_path = '/content/Thin-Plate-Spline-Motion-Model/test_data/aligned_video/aligned.mp4' #@param {type:"string"} # @markdown 出力先 output_video_path = './generated.mp4' #@param {type:"string"} # @markdown predict mode predict_mode = 'relative' #@param ['standard', 'relative', 'avd'] # "relative"の際にTrueにすると出力結果の品質が向上 find_best_frame = False #@param {type:"boolean"} %cd 
/content/Thin-Plate-Spline-Motion-Model # edit the config device = torch.device('cuda:0') ``` ## 学習済みモデルのダウンロード ``` %cd /content/Thin-Plate-Spline-Motion-Model !mkdir checkpoints # dataset_name = 'vox' #@param ["vox", "taichi", "ted", "mgif"] if dataset_name == 'vox': !wget -c https://cloud.tsinghua.edu.cn/f/da8d61d012014b12a9e4/?dl=1 -O checkpoints/vox.pth.tar config_path = 'config/vox-256.yaml' checkpoint_path = 'checkpoints/vox.pth.tar' pixel = 256 if dataset_name == 'taichi': !wget -c https://cloud.tsinghua.edu.cn/f/9ec01fa4aaef423c8c02/?dl=1 -O checkpoints/taichi.pth.tar config_path = 'config/taichi-256.yaml' checkpoint_path = 'checkpoints/taichi.pth.tar' pixel = 256 if dataset_name == 'ted': !wget -c https://cloud.tsinghua.edu.cn/f/483ef53650b14ac7ae70/?dl=1 -O checkpoints/ted.pth.tar config_path = 'config/ted-384.yaml' checkpoint_path = 'checkpoints/ted.pth.tar' pixel = 384 if dataset_name == 'mgif': !wget -c https://cloud.tsinghua.edu.cn/f/cd411b334a2e49cdb1e2/?dl=1 -O checkpoints/mgif.pth.tar config_path = 'config/mgif-256.yaml' checkpoint_path = 'checkpoints/mgif.pth.tar' pixel = 256 ``` # 表示用関数定義 ``` def display(source, driving, generated=None): fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6)) ims = [] for i in range(len(driving)): cols = [source] cols.append(driving[i]) if generated is not None: cols.append(generated[i]) im = plt.imshow(np.concatenate(cols, axis=1), animated=True) plt.axis('off') ims.append([im]) ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000) plt.close() return ani ``` # データのロード ``` source_image = imageio.imread(source_image_path) reader = imageio.get_reader(driving_video_path) source_image = resize(source_image, (pixel, pixel))[..., :3] fps = reader.get_meta_data()['fps'] driving_video = [] try: for im in reader: driving_video.append(im) except RuntimeError: pass reader.close() driving_video = [resize(frame, (pixel, pixel))[..., :3] for frame in driving_video] HTML(display(source_image, driving_video).to_html5_video()) ``` # Inference ``` inpainting, kp_detector, dense_motion_network, avd_network = load_checkpoints(config_path = config_path, checkpoint_path = checkpoint_path, device = device) if predict_mode=='relative' and find_best_frame: from demo import find_best_frame as _find i = _find(source_image, driving_video, device.type=='cpu') print ("Best frame: " + str(i)) driving_forward = driving_video[i:] driving_backward = driving_video[:(i+1)][::-1] predictions_forward = make_animation(source_image, driving_forward, inpainting, kp_detector, dense_motion_network, avd_network, device = device, mode = predict_mode) predictions_backward = make_animation(source_image, driving_backward, inpainting, kp_detector, dense_motion_network, avd_network, device = device, mode = predict_mode) predictions = predictions_backward[::-1] + predictions_forward[1:] else: predictions = make_animation(source_image, driving_video, inpainting, kp_detector, dense_motion_network, avd_network, device = device, mode = predict_mode) #save resulting video imageio.mimsave(output_video_path, [img_as_ubyte(frame) for frame in predictions], fps=fps) HTML(display(source_image, driving_video, predictions).to_html5_video()) ```
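As an optional extra step, the generated frames can be saved next to the driving frames for a quick side-by-side check. This is a minimal sketch that only reuses variables defined above; the output path is arbitrary.

```
import imageio
import numpy as np
from skimage import img_as_ubyte

# Stack each driving frame next to its generated frame and save one comparison clip
side_by_side = [np.concatenate([d, g], axis=1) for d, g in zip(driving_video, predictions)]
imageio.mimsave('./comparison.mp4', [img_as_ubyte(f) for f in side_by_side], fps=fps)
```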
github_jupyter
!nvidia-smi %cd /content !git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model.git %cd /content/ !pip install face_alignment > /dev/null # face alignment用にclone !git clone https://github.com/adamian98/pulse.git %cd /content/Thin-Plate-Spline-Motion-Model import torch import imageio import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from skimage.transform import resize from IPython.display import HTML import warnings import os from demo import load_checkpoints from demo import make_animation from skimage import img_as_ubyte from google.colab import files from moviepy.editor import * warnings.filterwarnings("ignore") #@markdown 動画の切り抜き範囲(秒)を指定してください。\ #@markdown 30秒以上の場合OOM発生の可能性が高いため注意 start_sec = 3#@param {type:"integer"} end_sec = 7#@param {type:"integer"} (start_pt, end_pt) = (start_sec, end_sec) %cd /content/Thin-Plate-Spline-Motion-Model !rm -rf test_data !mkdir test_data %cd test_data !mkdir image aligned_image video frames aligned_video %cd video print("upload video...") video = files.upload() video = list(video.keys()) video_file = video[0] # 指定区間切り抜き with VideoFileClip(video_file) as video: subclip = video.subclip(start_pt, end_pt) subclip.write_videofile("./video.mp4") # frameに分割 !ffmpeg -i video.mp4 ../frames/%02d.png %cd ../image print("upload image...") image = files.upload() image = list(image.keys()) image_file = image[0] %cd /content/Thin-Plate-Spline-Motion-Model/test_data !python /content/pulse/align_face.py \ -input_dir /content/Thin-Plate-Spline-Motion-Model/test_data/image \ -output_dir /content/Thin-Plate-Spline-Motion-Model/test_data/aligned_image \ -output_size 256 \ -seed 1234 %cd /content/Thin-Plate-Spline-Motion-Model # tedの場合はoutput_size 384 !python /content/pulse/align_face.py \ -input_dir /content/Thin-Plate-Spline-Motion-Model/test_data/frames \ -output_dir /content/Thin-Plate-Spline-Motion-Model/test_data/aligned_video \ -output_size 256 \ -seed 1234 !ffmpeg -i /content/Thin-Plate-Spline-Motion-Model/test_data/aligned_video/%02d_0.png -c:v libx264 -vf "fps=25,format=yuv420p" /content/Thin-Plate-Spline-Motion-Model/test_data/aligned_video/aligned.mp4 %cd /content/Thin-Plate-Spline-Motion-Model # @markdown モデル選択 dataset_name = 'vox' #@param ["vox", "taichi", "ted", "mgif"] # @markdown 入力画像 source_image_path = '/content/Thin-Plate-Spline-Motion-Model/test_data/aligned_image/nissinIMGL0823_TP_V_0.png' #@param {type:"string"} # @markdown 入力動画 driving_video_path = '/content/Thin-Plate-Spline-Motion-Model/test_data/aligned_video/aligned.mp4' #@param {type:"string"} # @markdown 出力先 output_video_path = './generated.mp4' #@param {type:"string"} # @markdown predict mode predict_mode = 'relative' #@param ['standard', 'relative', 'avd'] # "relative"の際にTrueにすると出力結果の品質が向上 find_best_frame = False #@param {type:"boolean"} %cd /content/Thin-Plate-Spline-Motion-Model # edit the config device = torch.device('cuda:0') %cd /content/Thin-Plate-Spline-Motion-Model !mkdir checkpoints # dataset_name = 'vox' #@param ["vox", "taichi", "ted", "mgif"] if dataset_name == 'vox': !wget -c https://cloud.tsinghua.edu.cn/f/da8d61d012014b12a9e4/?dl=1 -O checkpoints/vox.pth.tar config_path = 'config/vox-256.yaml' checkpoint_path = 'checkpoints/vox.pth.tar' pixel = 256 if dataset_name == 'taichi': !wget -c https://cloud.tsinghua.edu.cn/f/9ec01fa4aaef423c8c02/?dl=1 -O checkpoints/taichi.pth.tar config_path = 'config/taichi-256.yaml' checkpoint_path = 'checkpoints/taichi.pth.tar' pixel = 256 if dataset_name == 'ted': !wget -c 
https://cloud.tsinghua.edu.cn/f/483ef53650b14ac7ae70/?dl=1 -O checkpoints/ted.pth.tar config_path = 'config/ted-384.yaml' checkpoint_path = 'checkpoints/ted.pth.tar' pixel = 384 if dataset_name == 'mgif': !wget -c https://cloud.tsinghua.edu.cn/f/cd411b334a2e49cdb1e2/?dl=1 -O checkpoints/mgif.pth.tar config_path = 'config/mgif-256.yaml' checkpoint_path = 'checkpoints/mgif.pth.tar' pixel = 256 def display(source, driving, generated=None): fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6)) ims = [] for i in range(len(driving)): cols = [source] cols.append(driving[i]) if generated is not None: cols.append(generated[i]) im = plt.imshow(np.concatenate(cols, axis=1), animated=True) plt.axis('off') ims.append([im]) ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000) plt.close() return ani source_image = imageio.imread(source_image_path) reader = imageio.get_reader(driving_video_path) source_image = resize(source_image, (pixel, pixel))[..., :3] fps = reader.get_meta_data()['fps'] driving_video = [] try: for im in reader: driving_video.append(im) except RuntimeError: pass reader.close() driving_video = [resize(frame, (pixel, pixel))[..., :3] for frame in driving_video] HTML(display(source_image, driving_video).to_html5_video()) inpainting, kp_detector, dense_motion_network, avd_network = load_checkpoints(config_path = config_path, checkpoint_path = checkpoint_path, device = device) if predict_mode=='relative' and find_best_frame: from demo import find_best_frame as _find i = _find(source_image, driving_video, device.type=='cpu') print ("Best frame: " + str(i)) driving_forward = driving_video[i:] driving_backward = driving_video[:(i+1)][::-1] predictions_forward = make_animation(source_image, driving_forward, inpainting, kp_detector, dense_motion_network, avd_network, device = device, mode = predict_mode) predictions_backward = make_animation(source_image, driving_backward, inpainting, kp_detector, dense_motion_network, avd_network, device = device, mode = predict_mode) predictions = predictions_backward[::-1] + predictions_forward[1:] else: predictions = make_animation(source_image, driving_video, inpainting, kp_detector, dense_motion_network, avd_network, device = device, mode = predict_mode) #save resulting video imageio.mimsave(output_video_path, [img_as_ubyte(frame) for frame in predictions], fps=fps) HTML(display(source_image, driving_video, predictions).to_html5_video())
0.448909
0.785925
```
from glob import glob
from random import shuffle
import os, shutil
import zipfile


def train_valid_folders(path, percent=0.8, flag=0):
    # Split files into train/valid folders; with flag=1 the files sit directly
    # under `path`, otherwise they are already under path/train.
    train_path = os.path.join(path, 'train')
    valid_path = os.path.join(path, 'valid')
    if not os.path.exists(train_path):
        os.mkdir(train_path)
    if not os.path.exists(valid_path):
        os.mkdir(valid_path)
    if flag:
        all_train = glob(os.path.join(path, '*.*'))
    else:
        all_train = glob(os.path.join(train_path, '*.*'))
    train_set, valid_set = get_split_set(all_train, percent)
    for f in valid_set:
        shutil.move(f, valid_path)
    if flag:
        for f in train_set:
            shutil.move(f, train_path)


def sample_folder(path, percent=0.95):
    # Copy a small sample of the training files into path/sample.
    train_path = os.path.join(path, 'train')
    all_train = glob(os.path.join(train_path, '*.*'))
    sample_path = os.path.join(path, 'sample')
    if not os.path.exists(sample_path):
        os.mkdir(sample_path)
    _, sample_set = get_split_set(all_train, percent)
    for f in sample_set:
        shutil.copy(f, sample_path)


def get_split_set(all_train, per):
    # Shuffle the file list and split it at the given percentage.
    shuffle(all_train)
    n = len(all_train)
    split_point = int(per * n)
    return all_train[:split_point], all_train[split_point:]


def create_state_farm(path):
    # Create the c0..c10 class folders used by the State Farm dataset.
    path_list = []
    c0_path = os.path.join(path, 'c0')
    path_list.append(c0_path)
    c1_path = os.path.join(path, 'c1')
    path_list.append(c1_path)
    c2_path = os.path.join(path, 'c2')
    path_list.append(c2_path)
    c3_path = os.path.join(path, 'c3')
    path_list.append(c3_path)
    c4_path = os.path.join(path, 'c4')
    path_list.append(c4_path)
    c5_path = os.path.join(path, 'c5')
    path_list.append(c5_path)
    c6_path = os.path.join(path, 'c6')
    path_list.append(c6_path)
    c7_path = os.path.join(path, 'c7')
    path_list.append(c7_path)
    c8_path = os.path.join(path, 'c8')
    path_list.append(c8_path)
    c9_path = os.path.join(path, 'c9')
    path_list.append(c9_path)
    c10_path = os.path.join(path, 'c10')
    path_list.append(c10_path)
    for path in path_list:
        if not os.path.exists(path):
            os.mkdir(path)
    c0_path = glob(os.path.join(path, '*.jpg'))


def main():
    parent_path = 'data/state-farm'
    zip_path = os.path.join(parent_path, 'imgs.zip')
    train_valid_percent_split = 0.9
    train_sample_percent_split = 0.95

    with zipfile.ZipFile(zip_path, "r") as z:
        z.extractall(parent_path)

    path = parent_path
    sample_folder(path, train_sample_percent_split)
    train_valid_folders(path, train_valid_percent_split)

    path = os.path.join(parent_path, 'train')
    create_state_farm(path)
    path = os.path.join(parent_path, 'valid')
    create_state_farm(path)

    child_path = os.path.join(parent_path, 'sample')
    valid_path = os.path.join(child_path, 'valid')
    create_state_farm(valid_path)
```
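Two small notes on the script above: `main()` is defined but never invoked (a trailing `if __name__ == '__main__': main()` guard is presumably the intended entry point), and the train/validation split is a plain shuffle-then-slice. A self-contained sketch of that split logic, using made-up file names, is shown below.

```
from random import seed, shuffle

# Toy illustration of get_split_set's shuffle-then-slice behaviour.
seed(0)
files = [f'img_{i:02d}.jpg' for i in range(10)]   # made-up file names
shuffle(files)

per = 0.9                                # same value used for the train/valid split
split_point = int(per * len(files))      # 9 of 10 files go to train
train_set, valid_set = files[:split_point], files[split_point:]

print(len(train_set), len(valid_set))    # -> 9 1
```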
github_jupyter
from glob import glob
from random import shuffle
import os, shutil
import zipfile


def train_valid_folders(path, percent=0.8, flag=0):
    # Split files into train/valid folders; with flag=1 the files sit directly
    # under `path`, otherwise they are already under path/train.
    train_path = os.path.join(path, 'train')
    valid_path = os.path.join(path, 'valid')
    if not os.path.exists(train_path):
        os.mkdir(train_path)
    if not os.path.exists(valid_path):
        os.mkdir(valid_path)
    if flag:
        all_train = glob(os.path.join(path, '*.*'))
    else:
        all_train = glob(os.path.join(train_path, '*.*'))
    train_set, valid_set = get_split_set(all_train, percent)
    for f in valid_set:
        shutil.move(f, valid_path)
    if flag:
        for f in train_set:
            shutil.move(f, train_path)


def sample_folder(path, percent=0.95):
    # Copy a small sample of the training files into path/sample.
    train_path = os.path.join(path, 'train')
    all_train = glob(os.path.join(train_path, '*.*'))
    sample_path = os.path.join(path, 'sample')
    if not os.path.exists(sample_path):
        os.mkdir(sample_path)
    _, sample_set = get_split_set(all_train, percent)
    for f in sample_set:
        shutil.copy(f, sample_path)


def get_split_set(all_train, per):
    # Shuffle the file list and split it at the given percentage.
    shuffle(all_train)
    n = len(all_train)
    split_point = int(per * n)
    return all_train[:split_point], all_train[split_point:]


def create_state_farm(path):
    # Create the c0..c10 class folders used by the State Farm dataset.
    path_list = []
    c0_path = os.path.join(path, 'c0')
    path_list.append(c0_path)
    c1_path = os.path.join(path, 'c1')
    path_list.append(c1_path)
    c2_path = os.path.join(path, 'c2')
    path_list.append(c2_path)
    c3_path = os.path.join(path, 'c3')
    path_list.append(c3_path)
    c4_path = os.path.join(path, 'c4')
    path_list.append(c4_path)
    c5_path = os.path.join(path, 'c5')
    path_list.append(c5_path)
    c6_path = os.path.join(path, 'c6')
    path_list.append(c6_path)
    c7_path = os.path.join(path, 'c7')
    path_list.append(c7_path)
    c8_path = os.path.join(path, 'c8')
    path_list.append(c8_path)
    c9_path = os.path.join(path, 'c9')
    path_list.append(c9_path)
    c10_path = os.path.join(path, 'c10')
    path_list.append(c10_path)
    for path in path_list:
        if not os.path.exists(path):
            os.mkdir(path)
    c0_path = glob(os.path.join(path, '*.jpg'))


def main():
    parent_path = 'data/state-farm'
    zip_path = os.path.join(parent_path, 'imgs.zip')
    train_valid_percent_split = 0.9
    train_sample_percent_split = 0.95

    with zipfile.ZipFile(zip_path, "r") as z:
        z.extractall(parent_path)

    path = parent_path
    sample_folder(path, train_sample_percent_split)
    train_valid_folders(path, train_valid_percent_split)

    path = os.path.join(parent_path, 'train')
    create_state_farm(path)
    path = os.path.join(parent_path, 'valid')
    create_state_farm(path)

    child_path = os.path.join(parent_path, 'sample')
    valid_path = os.path.join(child_path, 'valid')
    create_state_farm(valid_path)
0.120103
0.098903
``` import numpy as np import pandas as pd import os import joblib import sklearn import matplotlib from matplotlib import pyplot as plt from sklearn.model_selection import train_test_split #Regressions: from sklearn.multioutput import MultiOutputRegressor from sklearn.neighbors import KNeighborsRegressor from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn.linear_model import ElasticNet from sklearn.linear_model import LinearRegression from sklearn.linear_model import RidgeCV from sklearn.ensemble import ExtraTreesRegressor from sklearn.ensemble import GradientBoostingRegressor from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import AdaBoostRegressor from sklearn.tree import DecisionTreeRegressor #Metric from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from sklearn.metrics import r2_score from pandas import DataFrame # Show progress bar from tqdm import tqdm df = pd.read_csv('flo_dataset_augmented.csv') df # Input for ML models input_col = ['in_amount_mmol', 'p_amount_mmol', 'ligand_amount_mmol', 'first_sol_amount_ml', 'second_sol_amount_ml', 'other_1_amount_mmol', 'other_2_amount_mmol', 'total_volume_ml', 'temp_c', 'time_min', 'x0_indium acetate', 'x0_indium bromide', 'x0_indium chloride', 'x0_indium iodide', 'x0_indium myristate', 'x0_indium trifluoroacetate', 'x1_bis(trimethylsilyl)phosphine', 'x1_phosphorus trichloride', 'x1_tris(diethylamino)phosphine', 'x1_tris(dimethylamino)phosphine', 'x1_tris(trimethylgermyl)phosphine', 'x1_tris(trimethylsilyl)phosphine', 'x2_None', 'x2_lauric acid', 'x2_myristic acid', 'x2_oleic acid', 'x2_palmitic acid', 'x2_stearic acid', 'x3_dodecylamine', 'x3_octadecene', 'x3_oleylamine', 'x3_trioctylamine', 'x3_trioctylphosphine', 'x4_None', 'x4_dioctyl ether', 'x4_dioctylamine', 'x4_hexadecylamine', 'x4_octylamine', 'x4_oleylamine', 'x4_toluene', 'x4_trioctylphosphine', 'x4_trioctylphosphine oxide', 'x5_None', 'x5_acetic acid', 'x5_superhydride', 'x5_tetrabutylammonium myristate', 'x5_zinc bromide' ,'x5_zinc chloride' ,'x5_zinc iodide' ,'x5_zinc oleate', 'x5_zinc stearate', 'x5_zinc undecylenate', 'x6_None', 'x6_copper bromide', 'x6_trioctylphosphine', 'x6_water', 'x6_zinc iodide' ] #Three individual outputs: diameter = ['diameter_nm'] emission = ['emission_nm'] absorbance = ['abs_nm'] #Splitting dataset X = df[input_col] Y_d = df[diameter] Y_e = df[emission] Y_a = df[absorbance] X_train_d, X_test_d, Y_train_d, Y_test_d = train_test_split(X, Y_d, test_size=0.3, random_state=45, shuffle=True) X_train_e, X_test_e, Y_train_e, Y_test_e = train_test_split(X, Y_e, test_size=0.3, random_state=45, shuffle=True) X_train_a, X_test_a, Y_train_a, Y_test_a = train_test_split(X, Y_a, test_size=0.3, random_state=45, shuffle=True) ``` ## D - Optimizing diameter model ### 1D. Extra Trees ``` # This is a grid search for three parameters in the Extra Trees algorithm. # Parameters are: random_state, n_estimators, max_features. # This gives the best combination of the three parameters for the smallest mean squared error. 
min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 25)): for j in range(1, 25): for k in range(2, 50, 1): ET_regr = ExtraTreesRegressor(n_estimators=i, max_features=j, random_state=k) ET_regr.fit(X_train_d, np.ravel(Y_train_d)) ET_Y_pred_d = pd.DataFrame(ET_regr.predict(X_test_d)) mae = mean_absolute_error(Y_test_d, ET_Y_pred_d) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) ``` ### 2D. Decision Tree ``` # This is a grid search for three parameters in the Decision Trees algorithm. # Parameters are: max_depth, max_features, random_state. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 30)): for j in range(1, 30): for k in range(4, 60, 1): DT_regr = DecisionTreeRegressor(max_depth=i, max_features=j, random_state=k) DT_regr.fit(X_train_d, np.ravel(Y_train_d)) DT_Y_pred_d = pd.DataFrame(DT_regr.predict(X_test_d)) mae = mean_absolute_error(Y_test_d, DT_Y_pred_d) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) ``` ### 3D. Random Forest ``` # This is a grid search for three parameters in the Random Forest algorithm. # Parameters are: max_depth, n_estimators, max_features. # Random_state is set to 45. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 31)): for j in range(1, 31): for k in range(2, 46, 1): RF_regr = RandomForestRegressor(max_depth=i, n_estimators=j, max_features=k, random_state=45) RF_regr.fit(X_train_d, np.ravel(Y_train_d)) RF_Y_pred_d = pd.DataFrame(RF_regr.predict(X_test_d)) mae = mean_absolute_error(Y_test_d, RF_Y_pred_d) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) ``` ### 4D. K Neighbors ``` min_mae = 99999 min_i, min_j = 0, 0 for i in tqdm(range(1, 40)): for j in range(1, 40): KNN_reg_d = KNeighborsRegressor(n_neighbors=i, p=j).fit(X_train_d, np.ravel(Y_train_d)) KNN_Y_pred_d = KNN_reg_d.predict(X_test_d) mae = mean_absolute_error(Y_test_d, KNN_Y_pred_d) if (min_mae > mae): min_mae = mae min_i = i min_j = j print(min_mae, min_i, min_j) ``` ### Saving Decision Tree model ``` ET_regr_d = ExtraTreesRegressor(n_estimators=3, max_features=9, random_state=28) ET_regr_d.fit(X_train_d, np.ravel(Y_train_d)) ET_Y_pred_d = pd.DataFrame(ET_regr_d.predict(X_test_d)) joblib.dump(ET_regr_d, "./model_SO_diameter_ExtraTrees.joblib") ``` ## E - Optimizing emission model ### 1E. Extra Trees ``` # This is a grid search for three parameters in the Extra Trees algorithm. # Parameters are: random_state, n_estimators, max_features. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 25)): for j in range(1, 25): for k in range(2, 50, 1): ET_regr_e = ExtraTreesRegressor(n_estimators=i, max_features=j, random_state=k) ET_regr_e.fit(X_train_e, np.ravel(Y_train_e)) ET_Y_pred_e = pd.DataFrame(ET_regr_e.predict(X_test_e)) mae = mean_absolute_error(Y_test_e, ET_Y_pred_e) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) ``` ### 2E. Decision Trees ``` # This is a grid search for three parameters in the Decision Trees algorithm. # This gives the best combination of the three parameters for the smallest mean squared error. 
min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 30)): for j in range(1, 30): for k in range(4, 46, 1): DT_regr_e = DecisionTreeRegressor(max_depth=i, max_features=j, random_state=k) DT_regr_e.fit(X_train_e, np.ravel(Y_train_e)) DT_Y_pred_e = pd.DataFrame(DT_regr_e.predict(X_test_e)) mae = mean_absolute_error(Y_test_e, DT_Y_pred_e) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) ``` ### 3E. Random Forest ``` min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 31)): for j in range(1, 31): for k in range(2, 46, 1): RF_regr_e = RandomForestRegressor(max_depth=i, n_estimators=j, max_features=k, random_state=45) RF_regr_e.fit(X_train_e, np.ravel(Y_train_e)) RF_Y_pred_e = pd.DataFrame(RF_regr_e.predict(X_test_e)) mae = mean_absolute_error(Y_test_e, RF_Y_pred_e) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) ``` ### 4E. K Neighbors ``` min_mae = 99999 min_i, min_j = 0, 0 for i in tqdm(range(1, 40)): for j in range(1, 40): KNN_reg_e = KNeighborsRegressor(n_neighbors=i, p=j).fit(X_train_e, np.ravel(Y_train_e)) KNN_Y_pred_e = KNN_reg_e.predict(X_test_e) mae = mean_absolute_error(Y_test_e, KNN_Y_pred_e) if (min_mae > mae): min_mae = mae min_i = i min_j = j print(min_mae, min_i, min_j) ``` ### Saving Extra Trees model ``` ET_regr_e = ExtraTreesRegressor(n_estimators=1, max_features=14, random_state=6).fit(X_train_e, np.ravel(Y_train_e)) ET_Y_pred_e = ET_regr_e.predict(X_test_e) joblib.dump(ET_regr_e, "./model_SO_emission_ExtraTrees.joblib") ``` ## A - Optimizing absorption model ### 1A: Extra Trees ``` # This is a grid search for three parameters in the Extra Trees algorithm. # Parameters are: random_state, n_estimators, max_features. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 30)): for j in range(1, 30): for k in range(2, 50, 1): ET_regr_a = ExtraTreesRegressor(n_estimators=i, max_features=j, random_state=k) ET_regr_a.fit(X_train_a, np.ravel(Y_train_a)) ET_Y_pred_a = pd.DataFrame(ET_regr_a.predict(X_test_a)) mae = mean_absolute_error(Y_test_a, ET_Y_pred_a) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) ``` ### 2A. Decision Trees ``` # This is a grid search for three parameters in the Decision Trees algorithm. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 30)): for j in range(1, 30): for k in range(4, 50, ): DT_regr_a = DecisionTreeRegressor(max_depth=i, max_features=j, random_state=k) DT_regr_a.fit(X_train_a, np.ravel(Y_train_a)) DT_Y_pred_a = pd.DataFrame(DT_regr_a.predict(X_test_a)) mae = mean_absolute_error(Y_test_a, DT_Y_pred_a) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) DT_regr_a = DecisionTreeRegressor(max_depth=20, max_features=20, random_state=12).fit(X_train_a, np.ravel(Y_train_a)) DT_Y_pred_a = DT_regr_a.predict(X_test_a) DT_r2_a = r2_score(Y_test_a, DT_Y_pred_a) DT_MSE_a = mean_squared_error(Y_test_a, DT_Y_pred_a) DT_RMSE_a = mean_squared_error(Y_test_a, DT_Y_pred_a, squared=False) DT_MAE_a = mean_absolute_error(Y_test_a, DT_Y_pred_a) print('diameter:', 'r2:', DT_r2_a, '; MSE:', DT_MSE_a, '; RMSE:', DT_RMSE_a, '; MAE:', DT_MAE_a) ``` ### 3A. 
Random Forest ``` min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 26)): for j in range(1, 26): for k in range(2, 40, 1): RF_regr_a = RandomForestRegressor(max_depth=i, n_estimators=j, max_features=k, random_state=45) RF_regr_a.fit(X_train_a, np.ravel(Y_train_a)) RF_Y_pred_a = pd.DataFrame(RF_regr_a.predict(X_test_a)) mae = mean_absolute_error(Y_test_a, RF_Y_pred_a) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) ``` ### 4A. K Neighbors ``` min_mae = 99999 min_i, min_j = 0, 0 for i in tqdm(range(1, 40)): for j in range(1, 40): KNN_reg_a = KNeighborsRegressor(n_neighbors=i, p=j).fit(X_train_a, np.ravel(Y_train_a)) KNN_Y_pred_a = KNN_reg_a.predict(X_test_a) mae = mean_absolute_error(Y_test_a, KNN_Y_pred_a) if (min_mae > mae): min_mae = mae min_i = i min_j = j print(min_mae, min_i, min_j) ``` ### Saving model ``` ET_regr_a = ExtraTreesRegressor(n_estimators=3, max_features=24, random_state=27) ET_regr_a.fit(X_train_a, np.ravel(Y_train_a)) ET_Y_pred_a = pd.DataFrame(ET_regr_a.predict(X_test_a)) joblib.dump(ET_regr_a, "./model_SO_abs_ExtraTrees.joblib") ``` ## Analyzing ``` ## Diameter ET_regr_d = ExtraTreesRegressor(n_estimators=3, max_features=9, random_state=28) ET_regr_d.fit(X_train_d, np.ravel(Y_train_d)) ET_Y_pred_d = ET_regr_d.predict(X_test_d) D_mae = mean_absolute_error(Y_test_d, ET_Y_pred_d) D_r_2 = r2_score(Y_test_d, ET_Y_pred_d) D_mse = mean_squared_error(Y_test_d, ET_Y_pred_d) D_rmse = mean_squared_error(Y_test_d, ET_Y_pred_d, squared=False) ## Emission ET_regr_e = ExtraTreesRegressor(n_estimators=1, max_features=14, random_state=6).fit(X_train_e, np.ravel(Y_train_e)) ET_Y_pred_e = ET_regr_e.predict(X_test_e) E_mae = mean_absolute_error(Y_test_e, ET_Y_pred_e) E_r_2 = r2_score(Y_test_e, ET_Y_pred_e) E_mse = mean_squared_error(Y_test_e, ET_Y_pred_e) E_rmse = mean_squared_error(Y_test_e, ET_Y_pred_e, squared=False) ### Absorption ET_regr_a = ExtraTreesRegressor(n_estimators=3, max_features=24, random_state=27) ET_regr_a.fit(X_train_a, np.ravel(Y_train_a)) ET_Y_pred_a = ET_regr_a.predict(X_test_a) A_mae = mean_absolute_error(Y_test_a, ET_Y_pred_a) A_r_2 = r2_score(Y_test_a, ET_Y_pred_a) A_mse = mean_squared_error(Y_test_a, ET_Y_pred_a) A_rmse = mean_squared_error(Y_test_a, ET_Y_pred_a, squared=False) from tabulate import tabulate d = [ ["Diameter", D_r_2, D_mae, D_mse, D_rmse], ["Absorption", A_r_2, A_mae, A_mse, A_rmse], ["Emission", E_r_2, E_mae, E_mse, E_rmse]] print(tabulate(d, headers=["Outputs", "R2", "Mean absolute error", "Mean squared error", "Root mean squared error"])) fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,5)) fig.suptitle('Single Outputs', fontsize=25) ax1.plot(ET_Y_pred_d, Y_test_d, 'o') ax1.plot([1.5,6],[1.5,6], color = 'r') ax1.set_title('Diameter') ax1.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)') ax2.plot(ET_Y_pred_a, Y_test_a, 'o') ax2.plot([400,650],[400,650], color = 'r') ax2.set_title('Absorption') ax2.set(xlabel='Predicted Values (nm)', ylabel='Predicted Values (nm)') ax3.plot(ET_Y_pred_e, Y_test_e, 'o') ax3.plot([450,700],[450,700], color = 'r') ax3.set_title('Emission') ax3.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)') fig.tight_layout() ``` ## Feature importance ### For diameter prediction ``` importance_dict_d = dict() for i in range(0,57): importance_dict_d[input_col[i]] = ET_regr_d.feature_importances_[i] sorted_importance_d = sorted(importance_dict_d.items(), key=lambda x: x[1], reverse=True) sorted_importance_d top7_d = 
DataFrame(sorted_importance_d[0:7], columns=['features', 'importance score']) others_d = DataFrame(sorted_importance_d[7:], columns=['features', 'importance score']) import seaborn as sns a4_dims = (20.7, 8.27) fig, ax = plt.subplots(figsize=a4_dims) sns.set_theme(style="whitegrid") ax = sns.barplot(x="features", y="importance score", data=top7_d) ``` ### Emission prediction ``` importance_dict_e = dict() for i in range(0,57): importance_dict_e[input_col[i]] = ET_regr_e.feature_importances_[i] sorted_importance_e = sorted(importance_dict_e.items(), key=lambda x: x[1], reverse=True) sorted_importance_e top7_e = DataFrame(sorted_importance_e[0:7], columns=['features', 'importance score']) others_e = DataFrame(sorted_importance_e[7:], columns=['features', 'importance score']) # combined_others2 = pd.DataFrame(data = { # 'features' : ['others'], # 'importance score' : [others2['importance score'].sum()] # }) # #combining top 10 with others # imp_score2 = pd.concat([top7, combined_others2]) import seaborn as sns a4_dims = (20.7, 8.27) fig, ax = plt.subplots(figsize=a4_dims) sns.set_theme(style="whitegrid") ax = sns.barplot(x="features", y="importance score", data=top7_e) ``` ### Absorption prediction ``` importance_dict_a = dict() for i in range(0,57): importance_dict_a[input_col[i]] = ET_regr_a.feature_importances_[i] sorted_importance_a = sorted(importance_dict_a.items(), key=lambda x: x[1], reverse=True) sorted_importance_a top7_a = DataFrame(sorted_importance_a[0:7], columns=['features', 'importance score']) others_a = DataFrame(sorted_importance_a[7:], columns=['features', 'importance score']) import seaborn as sns a4_dims = (20.7, 8.27) fig, ax = plt.subplots(figsize=a4_dims) sns.set_theme(style="whitegrid") ax = sns.barplot(x="features", y="importance score", data=top7_a) importance_dict_a ``` ### Combine ``` sorted_a = sorted(importance_dict_a.items(), key=lambda x: x[0], reverse=False) sorted_d = sorted(importance_dict_d.items(), key=lambda x: x[0], reverse=False) sorted_e = sorted(importance_dict_e.items(), key=lambda x: x[0], reverse=False) sorted_d combined_importance = dict() for i in range(0,57): combined_importance[sorted_e[i][0]] = sorted_e[i][1] + sorted_a[i][1] + sorted_d[i][1] combined_importance sorted_combined_importance = sorted(combined_importance.items(), key=lambda x: x[1], reverse=True) sorted_combined_importance top7_combined = DataFrame(sorted_combined_importance[0:7], columns=['features', 'importance score']) others_combined = DataFrame(sorted_combined_importance [7:], columns=['features', 'importance score']) import seaborn as sns a4_dims = (20.7, 8.27) fig, ax = plt.subplots(figsize=a4_dims) sns.set_theme(style="whitegrid") ax = sns.barplot(x="features", y="importance score", data=top7_combined) ```
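The triple-nested loops in this notebook are exhaustive grid searches that keep the hyperparameter combination with the lowest test-set MAE (despite some comments mentioning "mean squared error", the quantity actually minimized is `mean_absolute_error`). The same search can be written more compactly with `sklearn.model_selection.ParameterGrid`; the sketch below mirrors the notebook's approach of scoring on the held-out test split (in practice, cross-validating on the training split would be the more standard choice), reuses the `X_train_d`/`Y_train_d` split defined earlier, and is just as expensive as the original loops.

```
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import ParameterGrid

# Same exhaustive search as the nested loops, expressed as a parameter grid.
param_grid = {
    'n_estimators': list(range(1, 25)),
    'max_features': list(range(1, 25)),
    'random_state': list(range(2, 50)),
}

best_mae, best_params = float('inf'), None
for params in ParameterGrid(param_grid):
    model = ExtraTreesRegressor(**params)
    model.fit(X_train_d, np.ravel(Y_train_d))
    mae = mean_absolute_error(Y_test_d, model.predict(X_test_d))
    if mae < best_mae:
        best_mae, best_params = mae, params

print(best_mae, best_params)
```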
github_jupyter
import numpy as np import pandas as pd import os import joblib import sklearn import matplotlib from matplotlib import pyplot as plt from sklearn.model_selection import train_test_split #Regressions: from sklearn.multioutput import MultiOutputRegressor from sklearn.neighbors import KNeighborsRegressor from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn.linear_model import ElasticNet from sklearn.linear_model import LinearRegression from sklearn.linear_model import RidgeCV from sklearn.ensemble import ExtraTreesRegressor from sklearn.ensemble import GradientBoostingRegressor from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import AdaBoostRegressor from sklearn.tree import DecisionTreeRegressor #Metric from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from sklearn.metrics import r2_score from pandas import DataFrame # Show progress bar from tqdm import tqdm df = pd.read_csv('flo_dataset_augmented.csv') df # Input for ML models input_col = ['in_amount_mmol', 'p_amount_mmol', 'ligand_amount_mmol', 'first_sol_amount_ml', 'second_sol_amount_ml', 'other_1_amount_mmol', 'other_2_amount_mmol', 'total_volume_ml', 'temp_c', 'time_min', 'x0_indium acetate', 'x0_indium bromide', 'x0_indium chloride', 'x0_indium iodide', 'x0_indium myristate', 'x0_indium trifluoroacetate', 'x1_bis(trimethylsilyl)phosphine', 'x1_phosphorus trichloride', 'x1_tris(diethylamino)phosphine', 'x1_tris(dimethylamino)phosphine', 'x1_tris(trimethylgermyl)phosphine', 'x1_tris(trimethylsilyl)phosphine', 'x2_None', 'x2_lauric acid', 'x2_myristic acid', 'x2_oleic acid', 'x2_palmitic acid', 'x2_stearic acid', 'x3_dodecylamine', 'x3_octadecene', 'x3_oleylamine', 'x3_trioctylamine', 'x3_trioctylphosphine', 'x4_None', 'x4_dioctyl ether', 'x4_dioctylamine', 'x4_hexadecylamine', 'x4_octylamine', 'x4_oleylamine', 'x4_toluene', 'x4_trioctylphosphine', 'x4_trioctylphosphine oxide', 'x5_None', 'x5_acetic acid', 'x5_superhydride', 'x5_tetrabutylammonium myristate', 'x5_zinc bromide' ,'x5_zinc chloride' ,'x5_zinc iodide' ,'x5_zinc oleate', 'x5_zinc stearate', 'x5_zinc undecylenate', 'x6_None', 'x6_copper bromide', 'x6_trioctylphosphine', 'x6_water', 'x6_zinc iodide' ] #Three individual outputs: diameter = ['diameter_nm'] emission = ['emission_nm'] absorbance = ['abs_nm'] #Splitting dataset X = df[input_col] Y_d = df[diameter] Y_e = df[emission] Y_a = df[absorbance] X_train_d, X_test_d, Y_train_d, Y_test_d = train_test_split(X, Y_d, test_size=0.3, random_state=45, shuffle=True) X_train_e, X_test_e, Y_train_e, Y_test_e = train_test_split(X, Y_e, test_size=0.3, random_state=45, shuffle=True) X_train_a, X_test_a, Y_train_a, Y_test_a = train_test_split(X, Y_a, test_size=0.3, random_state=45, shuffle=True) # This is a grid search for three parameters in the Extra Trees algorithm. # Parameters are: random_state, n_estimators, max_features. # This gives the best combination of the three parameters for the smallest mean squared error. 
min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 25)): for j in range(1, 25): for k in range(2, 50, 1): ET_regr = ExtraTreesRegressor(n_estimators=i, max_features=j, random_state=k) ET_regr.fit(X_train_d, np.ravel(Y_train_d)) ET_Y_pred_d = pd.DataFrame(ET_regr.predict(X_test_d)) mae = mean_absolute_error(Y_test_d, ET_Y_pred_d) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) # This is a grid search for three parameters in the Decision Trees algorithm. # Parameters are: max_depth, max_features, random_state. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 30)): for j in range(1, 30): for k in range(4, 60, 1): DT_regr = DecisionTreeRegressor(max_depth=i, max_features=j, random_state=k) DT_regr.fit(X_train_d, np.ravel(Y_train_d)) DT_Y_pred_d = pd.DataFrame(DT_regr.predict(X_test_d)) mae = mean_absolute_error(Y_test_d, DT_Y_pred_d) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) # This is a grid search for three parameters in the Random Forest algorithm. # Parameters are: max_depth, n_estimators, max_features. # Random_state is set to 45. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 31)): for j in range(1, 31): for k in range(2, 46, 1): RF_regr = RandomForestRegressor(max_depth=i, n_estimators=j, max_features=k, random_state=45) RF_regr.fit(X_train_d, np.ravel(Y_train_d)) RF_Y_pred_d = pd.DataFrame(RF_regr.predict(X_test_d)) mae = mean_absolute_error(Y_test_d, RF_Y_pred_d) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) min_mae = 99999 min_i, min_j = 0, 0 for i in tqdm(range(1, 40)): for j in range(1, 40): KNN_reg_d = KNeighborsRegressor(n_neighbors=i, p=j).fit(X_train_d, np.ravel(Y_train_d)) KNN_Y_pred_d = KNN_reg_d.predict(X_test_d) mae = mean_absolute_error(Y_test_d, KNN_Y_pred_d) if (min_mae > mae): min_mae = mae min_i = i min_j = j print(min_mae, min_i, min_j) ET_regr_d = ExtraTreesRegressor(n_estimators=3, max_features=9, random_state=28) ET_regr_d.fit(X_train_d, np.ravel(Y_train_d)) ET_Y_pred_d = pd.DataFrame(ET_regr_d.predict(X_test_d)) joblib.dump(ET_regr_d, "./model_SO_diameter_ExtraTrees.joblib") # This is a grid search for three parameters in the Extra Trees algorithm. # Parameters are: random_state, n_estimators, max_features. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 25)): for j in range(1, 25): for k in range(2, 50, 1): ET_regr_e = ExtraTreesRegressor(n_estimators=i, max_features=j, random_state=k) ET_regr_e.fit(X_train_e, np.ravel(Y_train_e)) ET_Y_pred_e = pd.DataFrame(ET_regr_e.predict(X_test_e)) mae = mean_absolute_error(Y_test_e, ET_Y_pred_e) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) # This is a grid search for three parameters in the Decision Trees algorithm. # This gives the best combination of the three parameters for the smallest mean squared error. 
min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 30)): for j in range(1, 30): for k in range(4, 46, 1): DT_regr_e = DecisionTreeRegressor(max_depth=i, max_features=j, random_state=k) DT_regr_e.fit(X_train_e, np.ravel(Y_train_e)) DT_Y_pred_e = pd.DataFrame(DT_regr_e.predict(X_test_e)) mae = mean_absolute_error(Y_test_e, DT_Y_pred_e) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 31)): for j in range(1, 31): for k in range(2, 46, 1): RF_regr_e = RandomForestRegressor(max_depth=i, n_estimators=j, max_features=k, random_state=45) RF_regr_e.fit(X_train_e, np.ravel(Y_train_e)) RF_Y_pred_e = pd.DataFrame(RF_regr_e.predict(X_test_e)) mae = mean_absolute_error(Y_test_e, RF_Y_pred_e) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) min_mae = 99999 min_i, min_j = 0, 0 for i in tqdm(range(1, 40)): for j in range(1, 40): KNN_reg_e = KNeighborsRegressor(n_neighbors=i, p=j).fit(X_train_e, np.ravel(Y_train_e)) KNN_Y_pred_e = KNN_reg_e.predict(X_test_e) mae = mean_absolute_error(Y_test_e, KNN_Y_pred_e) if (min_mae > mae): min_mae = mae min_i = i min_j = j print(min_mae, min_i, min_j) ET_regr_e = ExtraTreesRegressor(n_estimators=1, max_features=14, random_state=6).fit(X_train_e, np.ravel(Y_train_e)) ET_Y_pred_e = ET_regr_e.predict(X_test_e) joblib.dump(ET_regr_e, "./model_SO_emission_ExtraTrees.joblib") # This is a grid search for three parameters in the Extra Trees algorithm. # Parameters are: random_state, n_estimators, max_features. # This gives the best combination of the three parameters for the smallest mean squared error. min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 30)): for j in range(1, 30): for k in range(2, 50, 1): ET_regr_a = ExtraTreesRegressor(n_estimators=i, max_features=j, random_state=k) ET_regr_a.fit(X_train_a, np.ravel(Y_train_a)) ET_Y_pred_a = pd.DataFrame(ET_regr_a.predict(X_test_a)) mae = mean_absolute_error(Y_test_a, ET_Y_pred_a) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) # This is a grid search for three parameters in the Decision Trees algorithm. # This gives the best combination of the three parameters for the smallest mean squared error. 
min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 30)): for j in range(1, 30): for k in range(4, 50, ): DT_regr_a = DecisionTreeRegressor(max_depth=i, max_features=j, random_state=k) DT_regr_a.fit(X_train_a, np.ravel(Y_train_a)) DT_Y_pred_a = pd.DataFrame(DT_regr_a.predict(X_test_a)) mae = mean_absolute_error(Y_test_a, DT_Y_pred_a) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) DT_regr_a = DecisionTreeRegressor(max_depth=20, max_features=20, random_state=12).fit(X_train_a, np.ravel(Y_train_a)) DT_Y_pred_a = DT_regr_a.predict(X_test_a) DT_r2_a = r2_score(Y_test_a, DT_Y_pred_a) DT_MSE_a = mean_squared_error(Y_test_a, DT_Y_pred_a) DT_RMSE_a = mean_squared_error(Y_test_a, DT_Y_pred_a, squared=False) DT_MAE_a = mean_absolute_error(Y_test_a, DT_Y_pred_a) print('diameter:', 'r2:', DT_r2_a, '; MSE:', DT_MSE_a, '; RMSE:', DT_RMSE_a, '; MAE:', DT_MAE_a) min_mae = 99999 min_i, min_j, min_k = 0, 0, 0 for i in tqdm(range(1, 26)): for j in range(1, 26): for k in range(2, 40, 1): RF_regr_a = RandomForestRegressor(max_depth=i, n_estimators=j, max_features=k, random_state=45) RF_regr_a.fit(X_train_a, np.ravel(Y_train_a)) RF_Y_pred_a = pd.DataFrame(RF_regr_a.predict(X_test_a)) mae = mean_absolute_error(Y_test_a, RF_Y_pred_a) if (min_mae > mae): min_mae = mae min_i = i min_j = j min_k = k print(min_mae, min_i, min_j, min_k) min_mae = 99999 min_i, min_j = 0, 0 for i in tqdm(range(1, 40)): for j in range(1, 40): KNN_reg_a = KNeighborsRegressor(n_neighbors=i, p=j).fit(X_train_a, np.ravel(Y_train_a)) KNN_Y_pred_a = KNN_reg_a.predict(X_test_a) mae = mean_absolute_error(Y_test_a, KNN_Y_pred_a) if (min_mae > mae): min_mae = mae min_i = i min_j = j print(min_mae, min_i, min_j) ET_regr_a = ExtraTreesRegressor(n_estimators=3, max_features=24, random_state=27) ET_regr_a.fit(X_train_a, np.ravel(Y_train_a)) ET_Y_pred_a = pd.DataFrame(ET_regr_a.predict(X_test_a)) joblib.dump(ET_regr_a, "./model_SO_abs_ExtraTrees.joblib") ## Diameter ET_regr_d = ExtraTreesRegressor(n_estimators=3, max_features=9, random_state=28) ET_regr_d.fit(X_train_d, np.ravel(Y_train_d)) ET_Y_pred_d = ET_regr_d.predict(X_test_d) D_mae = mean_absolute_error(Y_test_d, ET_Y_pred_d) D_r_2 = r2_score(Y_test_d, ET_Y_pred_d) D_mse = mean_squared_error(Y_test_d, ET_Y_pred_d) D_rmse = mean_squared_error(Y_test_d, ET_Y_pred_d, squared=False) ## Emission ET_regr_e = ExtraTreesRegressor(n_estimators=1, max_features=14, random_state=6).fit(X_train_e, np.ravel(Y_train_e)) ET_Y_pred_e = ET_regr_e.predict(X_test_e) E_mae = mean_absolute_error(Y_test_e, ET_Y_pred_e) E_r_2 = r2_score(Y_test_e, ET_Y_pred_e) E_mse = mean_squared_error(Y_test_e, ET_Y_pred_e) E_rmse = mean_squared_error(Y_test_e, ET_Y_pred_e, squared=False) ### Absorption ET_regr_a = ExtraTreesRegressor(n_estimators=3, max_features=24, random_state=27) ET_regr_a.fit(X_train_a, np.ravel(Y_train_a)) ET_Y_pred_a = ET_regr_a.predict(X_test_a) A_mae = mean_absolute_error(Y_test_a, ET_Y_pred_a) A_r_2 = r2_score(Y_test_a, ET_Y_pred_a) A_mse = mean_squared_error(Y_test_a, ET_Y_pred_a) A_rmse = mean_squared_error(Y_test_a, ET_Y_pred_a, squared=False) from tabulate import tabulate d = [ ["Diameter", D_r_2, D_mae, D_mse, D_rmse], ["Absorption", A_r_2, A_mae, A_mse, A_rmse], ["Emission", E_r_2, E_mae, E_mse, E_rmse]] print(tabulate(d, headers=["Outputs", "R2", "Mean absolute error", "Mean squared error", "Root mean squared error"])) fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,5)) fig.suptitle('Single Outputs', fontsize=25) 
ax1.plot(ET_Y_pred_d, Y_test_d, 'o') ax1.plot([1.5,6],[1.5,6], color = 'r') ax1.set_title('Diameter') ax1.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)') ax2.plot(ET_Y_pred_a, Y_test_a, 'o') ax2.plot([400,650],[400,650], color = 'r') ax2.set_title('Absorption') ax2.set(xlabel='Predicted Values (nm)', ylabel='Predicted Values (nm)') ax3.plot(ET_Y_pred_e, Y_test_e, 'o') ax3.plot([450,700],[450,700], color = 'r') ax3.set_title('Emission') ax3.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)') fig.tight_layout() importance_dict_d = dict() for i in range(0,57): importance_dict_d[input_col[i]] = ET_regr_d.feature_importances_[i] sorted_importance_d = sorted(importance_dict_d.items(), key=lambda x: x[1], reverse=True) sorted_importance_d top7_d = DataFrame(sorted_importance_d[0:7], columns=['features', 'importance score']) others_d = DataFrame(sorted_importance_d[7:], columns=['features', 'importance score']) import seaborn as sns a4_dims = (20.7, 8.27) fig, ax = plt.subplots(figsize=a4_dims) sns.set_theme(style="whitegrid") ax = sns.barplot(x="features", y="importance score", data=top7_d) importance_dict_e = dict() for i in range(0,57): importance_dict_e[input_col[i]] = ET_regr_e.feature_importances_[i] sorted_importance_e = sorted(importance_dict_e.items(), key=lambda x: x[1], reverse=True) sorted_importance_e top7_e = DataFrame(sorted_importance_e[0:7], columns=['features', 'importance score']) others_e = DataFrame(sorted_importance_e[7:], columns=['features', 'importance score']) # combined_others2 = pd.DataFrame(data = { # 'features' : ['others'], # 'importance score' : [others2['importance score'].sum()] # }) # #combining top 10 with others # imp_score2 = pd.concat([top7, combined_others2]) import seaborn as sns a4_dims = (20.7, 8.27) fig, ax = plt.subplots(figsize=a4_dims) sns.set_theme(style="whitegrid") ax = sns.barplot(x="features", y="importance score", data=top7_e) importance_dict_a = dict() for i in range(0,57): importance_dict_a[input_col[i]] = ET_regr_a.feature_importances_[i] sorted_importance_a = sorted(importance_dict_a.items(), key=lambda x: x[1], reverse=True) sorted_importance_a top7_a = DataFrame(sorted_importance_a[0:7], columns=['features', 'importance score']) others_a = DataFrame(sorted_importance_a[7:], columns=['features', 'importance score']) import seaborn as sns a4_dims = (20.7, 8.27) fig, ax = plt.subplots(figsize=a4_dims) sns.set_theme(style="whitegrid") ax = sns.barplot(x="features", y="importance score", data=top7_a) importance_dict_a sorted_a = sorted(importance_dict_a.items(), key=lambda x: x[0], reverse=False) sorted_d = sorted(importance_dict_d.items(), key=lambda x: x[0], reverse=False) sorted_e = sorted(importance_dict_e.items(), key=lambda x: x[0], reverse=False) sorted_d combined_importance = dict() for i in range(0,57): combined_importance[sorted_e[i][0]] = sorted_e[i][1] + sorted_a[i][1] + sorted_d[i][1] combined_importance sorted_combined_importance = sorted(combined_importance.items(), key=lambda x: x[1], reverse=True) sorted_combined_importance top7_combined = DataFrame(sorted_combined_importance[0:7], columns=['features', 'importance score']) others_combined = DataFrame(sorted_combined_importance [7:], columns=['features', 'importance score']) import seaborn as sns a4_dims = (20.7, 8.27) fig, ax = plt.subplots(figsize=a4_dims) sns.set_theme(style="whitegrid") ax = sns.barplot(x="features", y="importance score", data=top7_combined)
0.593609
0.674975
<font size="+5">#02 | Disecting the Object Like an Onion</font> - <ins>Python</ins> + <ins>Data Science</ins> Tutorials in [YouTube ↗︎](https://www.youtube.com/c/PythonResolver) # How to Access the `items` of an `Object` ## The `list` > - [ ] Create a `list` of your best friends ``` lista_bf = ['maria', 'pepe', 'alberto'] ``` > - [ ] Access the 2nd element ↓ ``` lista_bf[1] ``` ## The `dict` > - [ ] Create a `dict` of your best friends ``` diccionario_bf = {'primera': 'maria', 'segundo': 'pepe', 'tercero': 'alberto'} ``` > - [ ] Access the 2nd element ↓ ``` diccionario_bf[1] diccionario_bf.keys() diccionario_bf['segundo'] ``` ## The `DataFrame` > - [ ] Create a `dict` with your best **friend's personal data** ``` maria = { 'altura': 1.78, 'peso': 54, 'edad': 18 } maria['edad'] ``` > - [ ] Create a `dict` with your second best **friend's personal data** ``` juan = { 'altura': 1.87, 'peso': 76, 'edad': 24 } juan['edad'] ``` > - [ ] Create a `nested dict` with your **3 best friends' personal data** ``` diccionario_bf = { 'maria': { 'altura': 1.78, 'peso': 54, 'edad': 18 }, 'juan': { 'altura': 1.87, 'peso': 76, 'edad': 24 }, 'pepe': { 'altura': 1.84, 'peso': 94, 'edad': 33 } } diccionario_bf ``` > - [ ] Access the `age` of your `2nd best friend` ``` diccionario_bf['pepe'] diccionario_bf['pepe']['peso'] ``` > - [ ] Convert the `dict` to a `DataFrame` ``` diccionario_bf import pandas as pd df = pd.DataFrame(diccionario_bf) df ``` - [ ] Access the `age` of your `2nd best friend` ``` df['pepe'] df['pepe']['edad'] ``` > - What would have happened if the `DataFrame` looks like this ↓ ``` df = df.transpose() #! df df['pepe'] ``` > - [ ] Is your best friends' name a `key` of the `DataFrame`? ``` df.keys() ``` > - [ ] How then can you access your second best friend's age? ``` df['edad'] df['edad']['pepe'] ``` ## Recap ### The `list` ``` lista_bf lista_bf[1] ``` ### The `dictionary` ``` diccionario_bf diccionario_bf['pepe'] diccionario_bf['pepe']['edad'] ``` ### The `DataFrame` ``` df df['edad'] df['edad']['pepe'] ``` ## What the heck is a `key`? > - A `key` that opens a door to get the `values` > - [ ] For example, get the values contained in the `age` key ``` df = pd.read_excel('internet_usage_spain.xlsx', sheet_name=1, index_col=0) df.head() df[age] age ``` `age = ?` ``` df['age'] ``` > - [ ] Access the `name` of the people ``` df['name'] ``` > - What is the error saying? > - [ ] Could you ask which are the `keys` for the `df`? ``` df.head() #! df.keys() ``` > - [ ] Which `keys` could you access then? ``` df.columns df['age'] ``` > - [ ] How could you then access the `names`? ``` df.head() #! df.index ``` # Disecting the Objects to Understand the Elements of Programming > - Objects are **data structures** that store information. > - [ ] Which **syntax** do we use to access the information? ## Dot Notation `.` > - Show just the `age` column from `df` ``` df df.age ``` > - [ ] Could we **access to more information** than just the columns? ``` df. ``` ### The `function()` ``` df.hist df.hist() df.describe() df.boxplot() ``` ### The `instance` ``` df.size df.shape df.axes ``` ### Recap #### The `instance` The **instance** (object) may contain: - `function` - more `instance`s ``` df. df.describe df.shape ``` #### The `function()` The **function** contains nothing - ` `; it's the endpoint of programming ``` pandas.read_csv pandas.read_csv() pandas.read_csv. 
``` #### Library The **library** may contain: - `module` (subfolder) - `function` - object `class` **to be created** - object `instance` **(object) already created** ``` import pandas pandas.describe_option pandas.api pandas.array pandas.DataFrame pandas.describe_option ``` # Masking & Filtering the `DataFrame` > - [ ] **Select** elements of the `object` **based on a Condition** ``` df.head() #! ``` ## Filter people older than 70 ## Filter people without studies ## Filter people older than 70 and without studies ## Filter people older than 70 or without studies
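The four filter headings above are left without code. A hedged sketch of what they could look like with boolean masks follows: the `'age'` column appears earlier in the notebook, but the name of the studies column (`'education'` here) and its "no studies" label are assumptions about this dataset and should be checked against `df.columns` and the column's unique values before use.

```
# Sketch only: column name 'education' and label 'No studies' are assumed.
mask_age = df['age'] > 70
mask_studies = df['education'] == 'No studies'

older_than_70 = df[mask_age]
without_studies = df[mask_studies]
older_and_without = df[mask_age & mask_studies]   # both conditions must hold
older_or_without = df[mask_age | mask_studies]    # either condition may hold
```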
github_jupyter
lista_bf = ['maria', 'pepe', 'alberto'] lista_bf[1] diccionario_bf = {'primera': 'maria', 'segundo': 'pepe', 'tercero': 'alberto'} diccionario_bf[1] diccionario_bf.keys() diccionario_bf['segundo'] maria = { 'altura': 1.78, 'peso': 54, 'edad': 18 } maria['edad'] juan = { 'altura': 1.87, 'peso': 76, 'edad': 24 } juan['edad'] diccionario_bf = { 'maria': { 'altura': 1.78, 'peso': 54, 'edad': 18 }, 'juan': { 'altura': 1.87, 'peso': 76, 'edad': 24 }, 'pepe': { 'altura': 1.84, 'peso': 94, 'edad': 33 } } diccionario_bf diccionario_bf['pepe'] diccionario_bf['pepe']['peso'] diccionario_bf import pandas as pd df = pd.DataFrame(diccionario_bf) df df['pepe'] df['pepe']['edad'] df = df.transpose() #! df df['pepe'] df.keys() df['edad'] df['edad']['pepe'] lista_bf lista_bf[1] diccionario_bf diccionario_bf['pepe'] diccionario_bf['pepe']['edad'] df df['edad'] df['edad']['pepe'] df = pd.read_excel('internet_usage_spain.xlsx', sheet_name=1, index_col=0) df.head() df[age] age df['age'] df['name'] df.head() #! df.keys() df.columns df['age'] df.head() #! df.index df df.age df. df.hist df.hist() df.describe() df.boxplot() df.size df.shape df.axes df. df.describe df.shape pandas.read_csv pandas.read_csv() pandas.read_csv. import pandas pandas.describe_option pandas.api pandas.array pandas.DataFrame pandas.describe_option df.head() #!
0.209389
0.96796
# Tabular data handling This module defines the main class to handle tabular data in the fastai library: [`TabularDataset`](/tabular.data.html#TabularDataset). As always, there is also a helper function to quickly get your data. To allow you to easily create a [`Learner`](/basic_train.html#Learner) for your data, it provides [`get_tabular_learner`](/tabular.data.html#get_tabular_learner). ``` from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai import * show_doc(TabularDataBunch, doc_string=False) ``` The best way to quickly get your data in a [`DataBunch`](/basic_data.html#DataBunch) suitable for tabular data is to organize it in two (or three) dataframes. One for training, one for validation, and if you have it, one for testing. Here we are interested in a subsample of the [adult dataset](https://archive.ics.uci.edu/ml/datasets/adult). ``` path = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(path/'adult.csv') valid_idx = range(len(df)-2000, len(df)) df.head() cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] dep_var = '>=50k' show_doc(TabularDataBunch.from_df, doc_string=False) ``` Creates a [`DataBunch`](/basic_data.html#DataBunch) in `path` from `train_df`, `valid_df` and optionally `test_df`. The dependent variable is `dep_var`, while the categorical and continuous variables are in the `cat_names` columns and `cont_names` columns respectively. If `cont_names` is None then we assume all variables that aren't dependent or categorical are continuous. The [`TabularTransform`](/tabular.transform.html#TabularTransform) in `tfms` are applied to the dataframes as preprocessing, then the categories are replaced by their codes+1 (leaving 0 for `nan`) and the continuous variables are normalized. You can pass the `stats` to use for that step. If `log_output` is True, the dependant variable is replaced by its log. Note that the transforms should be passed as `Callable`: the actual initialization with `cat_names` and `cont_names` is done inside. ``` procs = [FillMissing, Categorify, Normalize] data = TabularDataBunch.from_df(path, df, dep_var, valid_idx=valid_idx, procs=procs, cat_names=cat_names) ``` You can then easily create a [`Learner`](/basic_train.html#Learner) for this data with [`get_tabular_learner`](/tabular.data.html#get_tabular_learner). ``` show_doc(get_tabular_learner) ``` `emb_szs` is a `dict` mapping categorical column names to embedding sizes; you only need to pass sizes for columns where you want to override the default behaviour of the model. ``` show_doc(TabularList) ``` Basic class to create a list of inputs in `items` for tabular data. `cat_names` and `cont_names` are the names of the categorical and the continuous variables respectively. `processor` will be applied to the inputs or one will be created from the transforms in `procs`. ``` show_doc(TabularList.from_df) show_doc(TabularList.get_emb_szs) show_doc(TabularLine, doc_string=False) ``` An object that will contain the encoded `cats`, the continuous variables `conts`, the `classes` and the `names` of the columns. This is the basic input for a dataset dealing with tabular data. ``` show_doc(TabularLine.show_batch) show_doc(TabularProcessor) ``` Create a [`PreProcessor`](/data_block.html#PreProcessor) from `procs`. 
## Undocumented Methods - Methods moved below this line will intentionally be hidden ``` show_doc(TabularProcessor.process_one) show_doc(TabularList.new) show_doc(TabularList.get) show_doc(TabularProcessor.process) ``` ## New Methods - Please document or move to the undocumented section
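To make the `emb_szs` remark concrete, here is a minimal sketch of creating a learner from the `data` object built above. It assumes the fastai v1-era `get_tabular_learner` interface documented by the `show_doc` calls on this page (keyword names may differ in later releases), and the embedding size of 10 for `native-country` is an arbitrary illustrative override rather than a recommended value.

```
# Sketch only: assumes the fastai v1 API documented above.
learn = get_tabular_learner(data, layers=[200, 100],
                            emb_szs={'native-country': 10},  # override one column's default
                            metrics=accuracy)
learn.fit_one_cycle(1, 1e-2)
```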
github_jupyter
from fastai.gen_doc.nbdoc import * from fastai.tabular import * from fastai import * show_doc(TabularDataBunch, doc_string=False) path = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(path/'adult.csv') valid_idx = range(len(df)-2000, len(df)) df.head() cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] dep_var = '>=50k' show_doc(TabularDataBunch.from_df, doc_string=False) procs = [FillMissing, Categorify, Normalize] data = TabularDataBunch.from_df(path, df, dep_var, valid_idx=valid_idx, procs=procs, cat_names=cat_names) show_doc(get_tabular_learner) show_doc(TabularList) show_doc(TabularList.from_df) show_doc(TabularList.get_emb_szs) show_doc(TabularLine, doc_string=False) show_doc(TabularLine.show_batch) show_doc(TabularProcessor) show_doc(TabularProcessor.process_one) show_doc(TabularList.new) show_doc(TabularList.get) show_doc(TabularProcessor.process)
0.370225
0.990768
# Ex2 - Getting and Knowing your Data Check out [Chipotle Exercises Video Tutorial](https://www.youtube.com/watch?v=lpuYZ5EUyS8&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=2) to watch a data scientist go through the exercises This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. ### Step 1. Import the necessary libraries ``` import pandas as pd import numpy as np ``` ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). ### Step 3. Assign it to a variable called chipo. ``` url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv' chipo = pd.read_csv(url, sep = '\t') ``` ### Step 4. See the first 10 entries ``` chipo.head(10) ``` ### Step 5. What is the number of observations in the dataset? ``` # Solution 1 chipo.shape[0] # entries <= 4622 observations # Solution 2 chipo.info() # entries <= 4622 observations ``` ### Step 6. What is the number of columns in the dataset? ``` chipo.shape[1] ``` ### Step 7. Print the name of all the columns. ``` chipo.columns ``` ### Step 8. How is the dataset indexed? ``` chipo.index ``` ### Step 9. Which was the most-ordered item? ``` c = chipo.groupby('item_name') c = c.sum() c = c.sort_values(['quantity'], ascending=False) c.head(1) ``` ### Step 10. For the most-ordered item, how many items were ordered? ``` c = chipo.groupby('item_name') c = c.sum() c = c.sort_values(['quantity'], ascending=False) c.head(1) ``` ### Step 11. What was the most ordered item in the choice_description column? ``` c = chipo.groupby('choice_description').sum() c = c.sort_values(['quantity'], ascending=False) c.head(1) # Diet Coke 159 ``` ### Step 12. How many items were orderd in total? ``` total_items_orders = chipo.quantity.sum() total_items_orders ``` ### Step 13. Turn the item price into a float #### Step 13.a. Check the item price type ``` chipo.item_price.dtype ``` #### Step 13.b. Create a lambda function and change the type of item price ``` dollarizer = lambda x: float(x[1:-1]) chipo.item_price = chipo.item_price.apply(dollarizer) ``` #### Step 13.c. Check the item price type ``` chipo.item_price.dtype ``` ### Step 14. How much was the revenue for the period in the dataset? ``` revenue = (chipo['quantity']* chipo['item_price']).sum() print('Revenue was: $' + str(np.round(revenue,2))) ``` ### Step 15. How many orders were made in the period? ``` orders = chipo.order_id.value_counts().count() orders ``` ### Step 16. What is the average revenue amount per order? ``` # Solution 1 chipo['revenue'] = chipo['quantity'] * chipo['item_price'] order_grouped = chipo.groupby(by=['order_id']).sum() order_grouped.mean()['revenue'] # Solution 2 chipo.groupby(by=['order_id']).sum().mean()['revenue'] ``` ### Step 17. How many different items are sold? ``` chipo.item_name.value_counts().count() ```
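A few of the steps above can be phrased more directly with common pandas idioms. The equivalents below operate on the same `chipo` frame (after the `item_price` conversion in step 13) and are alternative phrasings rather than corrections of the exercise solutions.

```
# Steps 9-10: most-ordered item and its total quantity, without sorting the whole table
item_totals = chipo.groupby('item_name')['quantity'].sum()
print(item_totals.idxmax(), item_totals.max())   # prints the top item name and its quantity

# Step 14: revenue as a one-liner
revenue = (chipo['quantity'] * chipo['item_price']).sum()

# Step 16: average revenue per order without adding an intermediate column
avg_per_order = (chipo['quantity'] * chipo['item_price']).groupby(chipo['order_id']).sum().mean()
print(round(avg_per_order, 2))
```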
# First Trend Strategy ### Import Libraries ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import scipy.stats as st import statsmodels.stats.multitest as mt import arch.bootstrap as boot import pyalgotrade.barfeed.csvfeed as csvfeed import pyalgotrade.bar as bar import pyalgotrade.strategy as strategy import pyalgotrade.technical.ma as ma import pyalgotrade.broker as broker import pyalgotrade.stratanalyzer.returns as ret import pyalgotrade.plotter as plotter import datetime as dt import itertools import time ``` ### Create Strategy Class ``` class TrendStrategy1(strategy.BacktestingStrategy): # 2.1. Define Strategy Initialization Function def __init__(self, feed, instrument, nfastSMA, nslowSMA): super(TrendStrategy1, self).__init__(feed, 10000) self.position = None self.instrument = instrument self.setUseAdjustedValues(True) self.fastsma = ma.SMA(feed[instrument].getPriceDataSeries(), nfastSMA) self.slowsma = ma.SMA(feed[instrument].getPriceDataSeries(), nslowSMA) # 2.2. Define Get Technical Indicators Functions def getfastSMA(self): return self.fastsma def getslowSMA(self): return self.slowsma # 2.3. Define Overriding Strategy Functions # onEnterCanceled: Get notified when order submitted to enter a position was canceled and update position def onEnterCanceled(self, position): self.position = None # onExitOk: Get notified when order submitted to exit a position was filled and update position def onExitOk(self, position): self.position = None # onExitCanceled: Get notified when order submitted to exit a position was canceled and re-submit order def onExitCanceled(self, position): self.position.exitMarket() # 2.4. Define Trading Strategy Function # Trend-Following Strategy # Enter Long Order = Buy when Fast SMA > Slow SMA, # Exit Order = Sell when Fast SMA < Slow SMA def onBars(self, bars): if self.slowsma[-1] is None: return if self.position is None: if self.fastsma[-1] > self.slowsma[-1]: # 95% equity investment for difference between order day Close price and next day Open price # number of shares can also be a fixed quantity for all transactions (ex. self.shares = 10) self.shares = int(self.getBroker().getCash() * 0.95 / bars[self.instrument].getPrice()) self.position = self.enterLong(self.instrument, self.shares, goodTillCanceled=True) elif self.fastsma[-1] < self.slowsma[-1] and not self.position.exitActive(): self.position.exitMarket() ``` ### Define Run Strategy Function ``` def TrendStrategyRun1(nfastSMA, nslowSMA, chart): # 3.1. Create Instruments object with stock tickers instruments = ['SPY'] # 3.2. Load CSV Feed previously downloaded or read feed = csvfeed.GenericBarFeed(bar.Frequency.DAY) feed.addBarsFromCSV(instruments[0], './Advanced-Trading-Analysis-Data.txt', skipMalformedBars=True) # 3.3. Evaluate Strategy with CSV Feed and Technical Indicator Parameters trendStrategy1 = TrendStrategy1(feed, instruments[0], nfastSMA, nslowSMA) # 3.4. Set Strategy Commission trendStrategy1.getBroker().setCommission(broker.backtesting.FixedPerTrade(6)) # 3.5. Attach Strategy Trading Statistics Analyzers retAnalyzer = ret.Returns(maxLen=2518) trendStrategy1.attachAnalyzer(retAnalyzer) # 3.6. Attach Strategy Plotter plt = plotter.StrategyPlotter(trendStrategy1, plotPortfolio=False) plt.getInstrumentSubplot('SPY').addDataSeries('Fast SMA', trendStrategy1.getfastSMA()) plt.getInstrumentSubplot('SPY').addDataSeries('Slow SMA', trendStrategy1.getslowSMA()) # 3.7. Run Strategy trendStrategy1.run() # 3.8. 
Calculate Strategy Returns datesReturns = retAnalyzer.getReturns().getDateTimes()[:] dailyReturns = retAnalyzer.getReturns()[:] dailyReturns = pd.DataFrame(dailyReturns).set_index(pd.DatetimeIndex(datesReturns)) # 3.9. Plot Strategy if chart == True: plt.plot(fromDateTime=dt.datetime(2016, 1, 1), toDateTime=dt.datetime(2016, 12, 31)) return dailyReturns ``` ### Plot Strategy Example ``` TrendStrategyRun1(5, 20, True) ``` ### Do Strategy Parameters Optimization and Calculate Performance Metrics ##### Create Strategy Optimization Parameters Combinations ``` nfastSMA = (5, 15) nslowSMA = (20, 30) pool = [nfastSMA, nslowSMA] ``` ##### Calculate Benchmark Daily Returns ``` data = pd.read_csv('./Advanced-Trading-Analysis-Data.txt', index_col='Date Time', parse_dates=True) trend1DailyReturns = data['Adj Close'].pct_change(1) trend1DailyReturns[0] = 0 trend1DailyReturns = pd.DataFrame(trend1DailyReturns) ``` ##### Do Strategy Optimization ``` trend1StartOptimization = time.time() print('') print('== Strategy Parameters Optimization ==') print('') print('Parameters Combinations (nfastSMA, nslowSMA):') for n in itertools.product(*pool): print(n) trend1DailyReturns.insert(len(trend1DailyReturns.columns), n, TrendStrategyRun1(n[0], n[1], False)) trend1EndOptimization = time.time() trend1DailyReturns.columns = ['B&H', 'Tr1Ret1', 'Tr1Ret2', 'Tr1Ret3', 'Tr1Ret4'] print('') print('Optimization Running Time: ', round(trend1EndOptimization - trend1StartOptimization, 4), ' seconds') print('') ``` ##### Define Cumulative Returns and Performance Metrics Functions ``` def CumulativeReturns(dailyReturns): cumulativeReturns = np.cumprod(dailyReturns + 1) ** (252 / len(dailyReturns)) - 1 return cumulativeReturns def PerformanceMetrics(dailyReturns): annualizedReturn = (np.cumprod(dailyReturns + 1) ** (252 / len(dailyReturns)) - 1)[-1] annualizedStdDev = np.std(dailyReturns) * np.sqrt(252) annualizedSharpe = annualizedReturn / annualizedStdDev return annualizedReturn, annualizedStdDev, annualizedSharpe ``` ##### Chart Cumulative Returns Comparison ``` trend1CumulativeReturns = trend1DailyReturns.apply(CumulativeReturns, axis=0) plt.plot(trend1CumulativeReturns['B&H'], label='B&H') plt.plot(trend1CumulativeReturns['Tr1Ret1'], label='Tr1Ret1') plt.plot(trend1CumulativeReturns['Tr1Ret2'], label='Tr1Ret2') plt.plot(trend1CumulativeReturns['Tr1Ret3'], label='Tr1Ret3') plt.plot(trend1CumulativeReturns['Tr1Ret4'], label='Tr1Ret4') plt.title('Strategy Parameters Optimization Cumulative Returns') plt.legend(loc='upper left') plt.show() ``` ##### Calculate Performance Metrics and Print Summary Table ``` trend1PerformanceMetrics = trend1DailyReturns.apply(PerformanceMetrics, axis=0) trend1PerformanceSummary = [{'0': 'Annualized:', '1': 'B&H', '2': 'Tr1Ret1', '3': 'Tr1Ret2', '4': 'Tr1Ret3', '5': 'Tr1Ret4'}, {'0': 'Return', '1': np.round(trend1PerformanceMetrics[0][0], 4), '2': np.round(trend1PerformanceMetrics[1][0], 4), '3': np.round(trend1PerformanceMetrics[2][0], 4), '4': np.round(trend1PerformanceMetrics[3][0], 4), '5': np.round(trend1PerformanceMetrics[4][0], 4)}, {'0': 'Standard Deviation', '1': np.round(trend1PerformanceMetrics[0][1], 4), '2': np.round(trend1PerformanceMetrics[1][1], 4), '3': np.round(trend1PerformanceMetrics[2][1], 4), '4': np.round(trend1PerformanceMetrics[3][1], 4), '5': np.round(trend1PerformanceMetrics[4][1], 4)}, {'0': 'Sharpe Ratio (Rf=0%)', '1': np.round(trend1PerformanceMetrics[0][2], 4), '2': np.round(trend1PerformanceMetrics[1][2], 4), '3': np.round(trend1PerformanceMetrics[2][2], 4), '4': 
np.round(trend1PerformanceMetrics[3][2], 4), '5': np.round(trend1PerformanceMetrics[4][2], 4)}] trend1PerformanceSummary = pd.DataFrame(trend1PerformanceSummary) print('') print('== Strategy Parameters Optimization Performance Summary ==') print('') print(trend1PerformanceSummary) print('') ``` ### Do Multiple Hypothesis Testing P-Values Adjustments ##### Calculate Multiple Hypothesis Testing P-Values ``` trend1MultipleTTests = trend1DailyReturns.iloc[:, 1:5].apply(st.ttest_1samp, axis=0, popmean=0, nan_policy='omit') trend1MultiplePValues = (trend1MultipleTTests[0][1], trend1MultipleTTests[1][1], trend1MultipleTTests[2][1], trend1MultipleTTests[3][1]) ``` ##### Adjust Multiple Hypothesis Testing P-Values Calculations ``` trend1MultiplePValuesFWE = mt.multipletests(trend1MultiplePValues, alpha=0.05, method='bonferroni', is_sorted=False, returnsorted=False) trend1MultiplePValuesFDR = mt.multipletests(trend1MultiplePValues, alpha=0.05, method='fdr_bh', is_sorted=False, returnsorted=False) ``` ##### Print Multiple Hypothesis Testing P-Values Adjustment Summary Table ``` trend1MultiplePValuesSummary = [{'0': '', '1': 'Tr1Ret1', '2': 'Tr1Ret2', '3': 'Tr1Ret3', '4': 'Tr1Ret4'}, {'0': 'PValues', '1': np.round(trend1MultiplePValues[0], 4), '2': np.round(trend1MultiplePValues[1], 4), '3': np.round(trend1MultiplePValues[2], 4), '4': np.round(trend1MultiplePValues[3], 4)}, {'0': 'PValues FWE', '1': np.round(trend1MultiplePValuesFWE[1][0], 4), '2': np.round(trend1MultiplePValuesFWE[1][1], 4), '3': np.round(trend1MultiplePValuesFWE[1][2], 4), '4': np.round(trend1MultiplePValuesFWE[1][3], 4)}, {'0': 'PValues FDR', '1': np.round(trend1MultiplePValuesFDR[1][0], 4), '2': np.round(trend1MultiplePValuesFDR[1][1], 4), '3': np.round(trend1MultiplePValuesFDR[1][2], 4), '4': np.round(trend1MultiplePValuesFDR[1][3], 4)}] trend1MultiplePValuesSummary = pd.DataFrame(trend1MultiplePValuesSummary) print('') print('== Multiple Hypothesis Testing P-Values Adjustments ==') print('') print(trend1MultiplePValuesSummary) print('') ``` ### Do Individual Time Series Bootstrap P-Value Multiple Comparison Adjustment ##### Define Bootstrap Mean Function ``` def bmean(x): return x.mean(0) ``` ##### Do Individual Time Series Bootstrap ``` trend1StartBoot = time.time() print('') print('== Individual Time Series Bootstrap ==') print('') trend1Boot = boot.CircularBlockBootstrap(block_size=10, x=trend1DailyReturns.iloc[:, 4]) trend1BootMeans = trend1Boot.apply(func=bmean, reps=1000) trend1BootIntervals = trend1Boot.conf_int(func=bmean, reps=1000, method='percentile', size=0.95, tail='two') trend1EndBoot = time.time() print('') print('Bootstrap Running Time: ', round(trend1EndBoot - trend1StartBoot, 4), ' seconds') print('') ``` ##### Chart Individual Time Series Bootstrap Histogram ``` plt.hist(trend1BootMeans, bins=20, density=True, label='BootMeans') plt.title('Population Mean Probability Distribution Simulation') plt.axvline(x=np.mean(trend1DailyReturns.iloc[:, 4]), color='purple', linestyle='--', label='mean(Tr1Ret4)') plt.axvline(x=np.mean(trend1BootMeans), color='red', linestyle='--', label='mean(BootMeans)') plt.axvline(x=0, color='orange', linestyle='--') plt.axvline(x=trend1BootIntervals[0], color='green', linestyle='--', label='BootLowerCI') plt.axvline(x=trend1BootIntervals[1], color='green', linestyle='--', label='BootUpperCI') plt.ylabel('Density') plt.xlabel('Bin Edges') plt.legend(loc='upper right') plt.show() ``` ##### Calculate Individual Time Series Bootstrap P-Value ``` trend1BootPValue = 2 * 
min(np.mean(trend1BootMeans <= 0), np.mean(trend1BootMeans > 0)) ``` ##### Adjust Individual Time Series Bootstrap P-Value Calculation ``` trend1BootPValueFWE = 1 - (1 - trend1BootPValue) ** 4 print('') print('== Individual Time Series Bootstrap Hypothesis Testing ==') print('') print('Tr1Ret4 P-Value:', np.round(trend1BootPValue, 4)) print('Tr1Ret4 P-Value FWE:', np.round(trend1BootPValueFWE, 4)) ```
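The family-wise adjustment applied to the bootstrap p-value above, `1 - (1 - p)**4`, treats the four optimized strategy variants as four independent tests: if a single test has false-positive probability p, the probability of at least one false positive across the four tests is 1 - (1 - p)^4 (a Sidak-style correction), whereas `multipletests` with `method='bonferroni'` uses the slightly more conservative min(1, 4p). The snippet below is a small numeric sketch with a made-up p-value, only to illustrate the relationship between the two adjustments:

```python
import numpy as np
import statsmodels.stats.multitest as mt

p, m = 0.02, 4                   # hypothetical single-test p-value and number of strategy variants

print(1 - (1 - p) ** m)          # family-wise adjustment of the form used for the bootstrap p-value
print(min(1.0, p * m))           # Bonferroni equivalent

# multipletests applies the Bonferroni adjustment to a whole vector of p-values at once
print(mt.multipletests([0.02, 0.20, 0.04, 0.50], alpha=0.05, method='bonferroni')[1])
```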
To start, we'll develop a collection of data points that appear random, but that fit a known linear equation 𝑦=2𝑥+1 ### Load Libraries ``` import torch import torch.nn as nn import numpy as np import matplotlib.pyplot as plt ``` ### Create a column matrix of X values We can create tensors right away rather than convert from NumPy arrays. ``` X = torch.linspace(1, 50, 50).reshape(-1, 1) # Equivalent to X = torch.unsqueeze(torch.linspace(1,50,50), dim=1) #X ``` ### Create a "random" array of error values We want 50 random integer values that collectively cancel each other out. ``` torch.manual_seed(71) e = torch.randint(-8, 9, (50, 1), dtype=torch.float) print(e.sum()) ``` ### Create a column matrix of y values Here we'll set our own parameters of $\mathrm {weight} = 2,\; \mathrm {bias} = 1$, plus the error amount.<br><strong><tt>y</tt></strong> will have the same shape as <strong><tt>X</tt></strong> and <strong><tt>e</tt></strong> ``` y = 2*X + 1 + e print(y.shape) ``` ### Plot the results We have to convert tensors to NumPy arrays just for plotting. ``` plt.scatter(X.numpy(), y.numpy()) plt.ylabel('y') plt.xlabel('x'); ``` Note that when we created tensor $X$, we did <em>not</em> pass <tt>requires_grad=True</tt>. This means that $y$ doesn't have a gradient function, and <tt>y.backward()</tt> won't work. Since PyTorch is not tracking operations, it doesn't know the relationship between $X$ and $y$. ### Simple linear model As a quick demonstration we'll show how the built-in <tt>nn.Linear()</tt> model preselects weight and bias values at random. ``` torch.manual_seed(59) model = nn.Linear(in_features=1, out_features=1) print(model.weight) print(model.bias) ``` Without seeing any data, the model sets a random weight of 0.1060 and a bias of 0.9638. ## Model classes PyTorch lets us define models as object classes that can store multiple model layers. In upcoming sections we'll set up several neural network layers, and determine how each layer should perform its forward pass to the next layer. For now, though, we only need a single <tt>linear</tt> layer. ``` class Model(nn.Module): def __init__(self, in_features, out_features): super().__init__() self.linear = nn.Linear(in_features, out_features) def forward(self, x): y_pred = self.linear(x) return y_pred ``` <div class="alert alert-info"><strong>NOTE:</strong> The "Linear" model layer used here doesn't really refer to linear regression. Instead, it describes the type of neural network layer employed. Linear layers are also called "fully connected" or "dense" layers. Going forward our models may contain linear layers, convolutional layers, and more.</div> When <tt>Model</tt> is instantiated, we need to pass in the size (dimensions) of the incoming and outgoing features. For our purposes we'll use (1,1).<br>As above, we can see the initial hyperparameters. 
``` torch.manual_seed(59) model = Model(1, 1) print(model) print(f'model weights :- {model.linear.weight.item()}') print(f'model bias :- {model.linear.bias.item()}') for name, parameters in model.named_parameters(): print(name, "\t", parameters.item()) ``` <div class="alert alert-info"><strong>NOTE:</strong> In the above example we had our Model class accept arguments for the number of input and output features.<br>For simplicity we can hardcode them into the Model: <tt><font color=black> class Model(torch.nn.Module):<br> &nbsp;&nbsp;&nbsp;&nbsp;def \_\_init\_\_(self):<br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;super().\_\_init\_\_()<br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;self.linear = Linear(1,1)<br><br> model = Model() </font></tt><br><br> Alternatively we can use default arguments: <tt><font color=black> class Model(torch.nn.Module):<br> &nbsp;&nbsp;&nbsp;&nbsp;def \_\_init\_\_(self, in_dim=1, out_dim=1):<br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;super().\_\_init\_\_()<br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;self.linear = Linear(in_dim,out_dim)<br><br> model = Model()<br> <em>\# or</em><br> model = Model(i,o)</font></tt> </div> ``` x = torch.tensor([2.0]) print(model.forward(x)) # equivalent to print(model(x)) ``` which is confirmed with $f(x) = (0.1060)(2.0)+(0.9638) = 1.1758$ ### Plot the initial model We can plot the untrained model against our dataset to get an idea of our starting point ``` x1 = np.array([X.min(),X.max()]) print(x1) w1, b1 = model.linear.weight.item(), model.linear.bias.item() print(f'Initial weight: {w1:.8f}, Initial bias: {b1:.8f}') print() y1 = x1*w1 + b1 print(y1) plt.scatter(X.numpy(), y.numpy()) plt.plot(x1,y1,'r') plt.title('Initial Model') plt.ylabel('y') plt.xlabel('x'); ``` ## Set the loss function We could write our own function to apply a Mean Squared Error (MSE) that follows<br> $\begin{split}MSE &= \frac {1} {n} \sum_{i=1}^n {(y_i - \hat y_i)}^2 \\ &= \frac {1} {n} \sum_{i=1}^n {(y_i - (wx_i + b))}^2\end{split}$<br> Fortunately PyTorch has it built in.<br> <em>By convention, you'll see the variable name "criterion" used, but feel free to use something like "linear_loss_func" if that's clearer.</em> ``` criterion = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.001) ``` ### Train the model ``` epochs = 50 losses = [] for i in range(epochs): #i+=1 y_pred = model.forward(X) loss = criterion(y_pred, y) losses.append(loss) print(f'epoch: {i:2} loss: {loss.item():10.8f} weight: {model.linear.weight.item():10.8f} \ bias: {model.linear.bias.item():10.8f}') optimizer.zero_grad() loss.backward() optimizer.step() plt.plot(range(epochs), losses) plt.ylabel('Loss') plt.xlabel('epoch'); w1,b1 = model.linear.weight.item(), model.linear.bias.item() print(f'Current weight: {w1:.8f}, Current bias: {b1:.8f}') print() y1 = x1*w1 + b1 print(x1) print(y1) plt.scatter(X.numpy(), y.numpy()) plt.plot(x1,y1,'r') plt.title('Current Model') plt.ylabel('y') plt.xlabel('x'); ```
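One detail worth noting in the training loop: `losses.append(loss)` stores the full loss tensors, which still carry gradient history, and recent PyTorch/matplotlib versions refuse to convert such tensors for plotting. A minimal variation, assuming the same `model`, `criterion`, `optimizer`, `X` and `y` defined above, is to store plain Python floats instead:

```python
epochs = 50
losses = []

for i in range(epochs):
    y_pred = model(X)
    loss = criterion(y_pred, y)
    losses.append(loss.item())   # store a float, not the graph-attached tensor

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

plt.plot(range(epochs), losses)
plt.ylabel('Loss')
plt.xlabel('epoch');
```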
# Object Oriented Programming: introduction [Object-oriented programming](https://en.wikipedia.org/wiki/Object-oriented_programming) (OOP) is a standard feature of many modern programming languages. Learning the main concepts of OOP would occupy one entire lecture at least, and we won't try to cover all the details here. The specific objectives of these two OOP units are: - learn the basic OOP concepts syntax so that you are able to *understand* code and libraries making use of it - become familiar with certain semantics associated with OOP: classes, objects, attributes, methods, etc. - introduce simple examples where OOP is a useful paradigm, and try to raise your interest in its usage so that you can learn it by yourself when needed. This first OOP unit introduces the concept of "objects" in the Python language and shows you how to make objects on your own. Next week's unit will tell you what objects are useful for. *Copyright notice: this chapter is partly inspired from [RealPython's beginner tutorial](https://realpython.com/python3-object-oriented-programming/) on OOP.* <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Object-Oriented-Programming:-introduction" data-toc-modified-id="Object-Oriented-Programming:-introduction-23"><span class="toc-item-num">23&nbsp;&nbsp;</span>Object Oriented Programming: introduction</a></span><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-23.1"><span class="toc-item-num">23.1&nbsp;&nbsp;</span>Introduction</a></span></li><li><span><a href="#Classes-and-objects" data-toc-modified-id="Classes-and-objects-23.2"><span class="toc-item-num">23.2&nbsp;&nbsp;</span>Classes and objects</a></span></li><li><span><a href="#Class/instance-attributes" data-toc-modified-id="Class/instance-attributes-23.3"><span class="toc-item-num">23.3&nbsp;&nbsp;</span>Class/instance attributes</a></span></li><li><span><a href="#Instance-Methods" data-toc-modified-id="Instance-Methods-23.4"><span class="toc-item-num">23.4&nbsp;&nbsp;</span>Instance Methods</a></span></li><li><span><a href="#Extending-attributes:-the-@property-decorator" data-toc-modified-id="Extending-attributes:[email protected]"><span class="toc-item-num">23.5&nbsp;&nbsp;</span>Extending attributes: the @property decorator</a></span></li><li><span><a href="#Class-inheritance:-introduction" data-toc-modified-id="Class-inheritance:-introduction-23.6"><span class="toc-item-num">23.6&nbsp;&nbsp;</span>Class inheritance: introduction</a></span></li><li><span><a href="#Take-home-points" data-toc-modified-id="Take-home-points-23.7"><span class="toc-item-num">23.7&nbsp;&nbsp;</span>Take home points</a></span></li><li><span><a href="#What's-next?" data-toc-modified-id="What's-next?-23.8"><span class="toc-item-num">23.8&nbsp;&nbsp;</span>What's next?</a></span></li></ul></li></ul></div> ## Introduction As stated in the [OOP wikipedia page](https://en.wikipedia.org/wiki/Object-oriented_programming), OOP is a "paradigm" which might or might not be supported by a specific programming language. Although Python is an OOP language at its core, **it does not enforce its usage**. In fact, many of you will be able to write your master thesis and even your PhD thesis without *programming* in OOP on your own. You have, however, already made heavy use of Python objects (*everything* is an object in Python, remember?) 
and I find it very important that you are able to understand the basics of it in order to make better use of Python. In short, OOP is simply *another* way to structure your programs. Until now, you have written modules consisting mainly of functions, sometimes with a short ``__main__`` script which was itself calling one or more functions. OOP will add a new tool to your repertoire by allowing you to **bundle data and behaviors into individual objects**, possibly helping you to organize your code in a way that feels more natural and clear. Let's get started with some examples and new semantics! We will talk about the advantages and disadvantages of OOP in the following unit, once you are more familiar with its syntax. ## Classes and objects **Classes** are used to create new user-defined structures that contain information about something and that come with "services". Let's define a new class called ``Cat``: ``` class Cat: # Initializer def __init__(self, name, weight): self.name = name self.weight = weight ``` There are already a couple of new things in this code snippet: - first, the class name definition is happening at the very first line. As per [pep8](https://www.python.org/dev/peps/pep-0008/#class-names), class names in Python should use "CapWords" per convention - the class contains a "function" called ``__init__``, which indeed looks very much like a normal function. Here the ``__init__`` function has three positional arguments: ``self`` (which has a special meaning as we are going to see), ``name`` and ``weight``. These arguments are used to initialize the **attributes** of the same name. We'll go back to this in the next section. A class provides a new structure definition. It's a "blueprint" for how something should be defined, but it doesn't actually provide any real data content itself. To actually use the functionalities defined by the **class** you'll need to create a new **instance** of that class. **Instantiating** is a fancy term for creating a new, unique realization of a class (an **object**). Let's go for it: ``` a = Cat('Grumpy', 4) a ``` We just created a new instance of the class ``Cat`` and assigned it to the variable ``a``. An instance of a class is commonly called an **object** (this can be used as synonym for "instance"). The variable ``a`` stores an object (instance) of the class ``Cat``: ``` # Ask if a is an instance of Cat or not isinstance(a, Cat) ``` In fact, we just created a new datatype called ``Cat``: ``` type(a) ``` **Every new instance of a class is unique**, regardless of the values used to initialize it. Let's create a new Cat with the same name and weight: ``` b = Cat('Grumpy', 4) isinstance(a, Cat) ``` It is still a unique instance and is not a copy of ``a`` in any way: ``` a == b b is a ``` ## Class/instance attributes The cat's name and weight are called **instance attributes** and can be accessed with the dot syntax: ``` a.name a.weight ``` A common synonym for the term "attribute" is "**property**". The two terms are very close and you might find one or the other term depending on who writes about them. Properties in python are a special kind of attributes, but the difference is subtle and not relevant here. **Instance attributes** are specific to the created object. 
They are often defined at instantiation: ``` b = Cat('Tiger', 5) b.name ``` Classes can also define **class attributes**, which are tied to a class but not to a specific instance: ``` class Cat: # Class attribute language = 'Meow' # Initializer def __init__(self, name, weight): self.name = name self.weight = weight Cat.language a = Cat('Grumpy', 4) a.language ``` **Careful!** Class and instance attributes are not **immutable**. They can be changed from outside the class: ``` a.name = 'Roncheux' a.language = 'Miaou' a.name a.language ``` These changes are specific to the instance, and the class remains unchanged: ``` Cat.language ``` *In comparison to other OO languages, python is very "liberal" regarding attributes: some languages like Java would not allow to change attributes this way. In practice, attributes should not be changed by the users of a class. Unless they are documented as being "changeable", and in this case become "properties". More on this later.* ## Instance Methods If a class only had attributes, it would merely be a simple data structure. Classes become useful when they are adding "services" to the data they store. These services are called **methods**, and their syntax has similarities with a function definition: ``` class Cat: # Class attribute language = 'Meow' # Initializer def __init__(self, name, weight): self.name = name self.weight = weight # Method def say_name(self): print('{}, my name is {}!'.format(self.language, self.name)) ``` The biggest difference with functions is that a method is tied to a class instance: this is made clear by the ``self`` argument, present in the method definition but not used when calling the method: ``` a = Cat('Kitty', 4) a.say_name() b = Cat('Grumpy', 3) b.say_name() ``` The ``self`` variable is implicit in the call above, and refers to the instance of the class which is calling the method. It might sound a little complicated at first, but you'll get used to it: ``self`` is used to read and write instance attributes, and is the first argument to virtually any method defined in the class (there is one exception to this rule which we will ignore for now). At this point, you may have noticed similarities between the objects you used commonly in the climate lecture and the objects we just defined here. Let's make the analogy: ``` import pandas as pd a = pd.Series([1, 2, 3], name='data') # instantiating a class assert isinstance(a, pd.Series) # a is an instance of the Series class print(type(a)) # pandas.core.series.Series is a new datatype print(a.name) # name is an instance attribute print(a.mean()) # mean is an instance method ``` Are you confident about the meaning of all these terms? If not, I might have explained it in a way which is not the right one for you: you can use your google-skills to look for other tutorials. There are plenty! ## Extending attributes: the @property decorator We have said that attributes are often meant to be data describing an instance of a class. It is often the job of instance methods to initialize and update these attributes. 
Consider the following example: ``` class Cat: # Class attribute language = 'Meow' # Initializer def __init__(self, name, weight): self.name = name self.weight = weight # Method def eat_food(self, food_kg): self.weight += food_kg a = Cat('Grumpy', 4) print('Weight before eating: {} kg'.format(a.weight)) a.eat_food(0.2) print('Weight after eating: {} kg'.format(a.weight)) ``` This was a simplified but typical use for instance attributes: they will change in an object's lifetime according to specific events. Now let's suppose that you are working with scientists from the USA, and they'd like to know the cat's weight in pounds. One way to do so would be to compute it at instantiation: ``` class Cat: # Class attribute language = 'Meow' # Initializer def __init__(self, name, weight): self.name = name self.weight = weight self.weight_lbs = weight * 1 / 0.45359237 # Method def eat_food(self, food_kg): self.weight += food_kg a = Cat('Grumpy', 4) a.weight_lbs ``` There is an obvious drawback to this method however: what if the cat eats food? Its weight won't be updated! ``` a.eat_food(0.2) a.weight_lbs # this is a problem ``` A possible way to deal with the issue would be to compute the pound weight *on demand*, i.e. write a method to compute it: ``` class Cat: # Class attribute language = 'Meow' # Initializer def __init__(self, name, weight): self.name = name self.weight = weight # Method def eat_food(self, food_kg): self.weight += food_kg def get_weight_lbs(self): return self.weight * 1 / 0.45359237 a = Cat('Grumpy', 4) a.eat_food(0.2) a.get_weight_lbs() ``` This is already much better (and accurate), but it is somehow hiding the fact that the weight of a cat really is an attribute, no matter the unit. It should not be accessed with a ``get_`` method. This is where a new syntax comes in handy: ``` class Cat: # Class attribute language = 'Meow' # Initializer def __init__(self, name, weight): self.name = name self.weight = weight # Method def eat_food(self, food_kg): self.weight += food_kg @property def weight_lbs(self): return self.weight * 1 / 0.45359237 ``` ``weight_lbs`` looks like a method (it computes something), but only in the class definition. For the class instances, the method is "hidden" in an attribute: ``` a = Cat('Grumpy', 4) a.eat_food(0.2) a.weight_lbs # weight_lbs is an attribute! ``` This is a very useful pattern frequently used in python. The ``@`` syntax defines a "decorator", and you might learn about decorators in a more advanced python class. ## Class inheritance: introduction **Inheritance** is a core concept of OOP. It allows a **subclass** (also called "**child class**") to override or extend methods and attributes from a **base class** (also called "**parent class**"). In other words, child classes inherit all of the parent's attributes and behaviors but can also specify new behaviors or replace old ones. This is best shown with an example: let's make the ``Cat`` and ``Dog`` class inherit from the ``Pet`` class. 
``` class Pet: # Initializer def __init__(self, name, weight): self.name = name self.weight = weight # Method def eat_food(self, food_kg): self.weight += food_kg @property def weight_lbs(self): return self.weight * 1 / 0.45359237 def say_name_loudly(self): return self.say_name().upper() class Cat(Pet): # Class attribute language = 'Meow' # Method def say_name(self): return '{}, my name is {} and I am nice!'.format(self.language, self.name) class Dog(Pet): # Class attribute language = 'Woof' # Method def say_name(self): return '{}, my name is {} and I smell funny!'.format(self.language, self.name) ``` Let's advance through this example step by step. First, let's have a look at the ``Pet`` class. It is a standard class defined the exact same way as the previous ``Cat`` class. Therefore, it can be instantiated and will work as expected: ``` p = Pet('PetName', 10) p.weight_lbs ``` As discussed during the lecture, the functionality of the parent class ``Pet`` however is very general, and it is unlikely to be used alone (a pet isn't specific enough). We used this class to implement the general functionality supported by all pets: they have a name and a weight, regardless of their species. The ``Cat`` and ``Dog`` classes make use of this functionality by **inheriting** from the ``Pet`` **parent class**. This inheritance is formalized in the class definition ``class Cat(Pet)``. The code of the two **child classes** is remarkably simple: it adds a new functionality to the ones inherited from ``Pet``. For example: ``` c = Cat('Kitty', 4) c.say_name() ``` The ``Pet`` instance methods are still available: ``` c.eat_food(0.2) c.weight d = Dog('Milou', 8) d.say_name() ``` There is a pretty straightforward rule for the behavior of child classes instances: **when the called method or attribute is available at the child class level, it will be used** (even if also available at the parent class level: this is called **overriding**, and will be a topic for next week); **if not, use the parent class implementation**. This is exactly what happens in the code above: ``eat_food`` and ``weight`` are defined in the ``Pet`` class but are available for both ``Cat`` and ``Dog`` instances. ``say_name``, however, is a child class instance method and can't be used by ``Pet`` instances. So, what about the ``say_name_loudly`` method? Although available for ``Pet`` instances, calling it will raise an error: ``` p = Pet('PetName', 10) p.say_name_loudly() ``` However, since ``say_name_loudly`` is available to the child class instances, the method will work for them! ``` c = Cat('Kitty', 4) c.say_name_loudly() ``` This is a typical use case for class inheritance in OOP: it allows <a href="https://en.wikipedia.org/wiki/Inheritance_(object-oriented_programming)#Code_reuse">code re-use</a>. We will talk about more advanced use cases next week. ## Take home points - Python is an object oriented programming language but does not enforce the definition of classes in your own programs. However, a basic understanding of the core concepts of OOP is a strong asset and allows to make better use of Python's capabilities. - We defined a lot of new concepts today: classes, objects, instances, instance methods, instance attributes, class attributes, the @property decorator, inheritance... You will have to revise these concepts calmly and step by step, possibly by making use of external resources. The web has plenty of good beginner-level OOP tutorials, I recommend to have a look at at least one of them. ## What's next? 
Back to the [table of contents](00-Introduction.ipynb#ctoc), or [jump to this week's assignment](24-Assignment-08.ipynb).
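As a small optional extension of the `@property` section above, the pound weight can also be made writable with a setter. This is an illustrative sketch building on the `Cat` class defined earlier, not part of the lecture material:

```python
class Cat:
    language = 'Meow'

    def __init__(self, name, weight):
        self.name = name
        self.weight = weight        # stored in kg

    @property
    def weight_lbs(self):
        return self.weight / 0.45359237

    @weight_lbs.setter
    def weight_lbs(self, value):
        # writing to weight_lbs updates the underlying kg attribute
        self.weight = value * 0.45359237


a = Cat('Grumpy', 4)
a.weight_lbs = 10    # assign the weight in pounds...
print(a.weight)      # ...and the kg attribute is updated accordingly
```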
``` import cv2 import os import numpy as np from tqdm import tqdm import tensorflow as tf from sklearn.utils import shuffle from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Flatten from tensorflow.keras.applications.vgg16 import VGG16 np.random.seed(0) tf.compat.v1.random.set_random_seed(0) fakepath = '/kaggle/input/model-training-dataset/fake/' originalpath = "/kaggle/input/model-training-dataset/original/" fake = [] original = [] for name in tqdm(os.listdir(fakepath)): for frame in os.listdir(fakepath + name): fake.append(fakepath + name + '/' + frame) for name in tqdm(os.listdir(originalpath)): for frame in os.listdir(originalpath + name): original.append(originalpath + name + '/' + frame) y = np.ones(len(original)) x = np.zeros(len(fake)) label = np.concatenate([y, x]) names_list = original + fake new_data, new_label = shuffle(names_list, label, random_state = 0) train, test, y_train, y_test = new_data[:-100], new_data[-100:], new_label[:-100], new_label[-100:] def decode_img(img): img = tf.image.decode_jpeg(img, channels=3) img = tf.image.convert_image_dtype(img, tf.float32) img = tf.image.random_flip_left_right(img) img = tf.image.random_flip_up_down(img) img = tf.image.random_saturation(img, 1, 3) img = tf.image.random_brightness(img, 0.3) img = tf.image.resize(img, [75, 75]) return img def get_label(file_path): cat = tf.strings.split(file_path, '/')[4] if cat == b'fake': return 1 return 0 def process_path(file_path): label = get_label(file_path) img = tf.io.read_file(file_path) img = decode_img(img) return img, label num_threads, num_epochs, train_len = 5, 3000, len(train) train_ds = tf.data.Dataset.from_tensor_slices(train) train_ds = train_ds.map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE) train_ds = train_ds.shuffle(train_len) train_ds = train_ds.batch(64) train_ds = train_ds.prefetch(1) from tensorflow.keras.applications.inception_v3 import InceptionV3 model = InceptionV3(weights='imagenet', include_top=False, input_shape = (75, 75, 3)) model.summary() for layer in model.layers[:87]: layer.trainable = False lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=1e-1, decay_steps=10, decay_rate=0.9) opt = tf.keras.optimizers.Adam( learning_rate= lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False) tp = tf.keras.metrics.TruePositives(thresholds=None, name=None, dtype=None) tn = tf.keras.metrics.TrueNegatives(thresholds=None, name=None, dtype=None) fp = tf.keras.metrics.FalsePositives(thresholds=None, name=None, dtype=None) fn = tf.keras.metrics.FalseNegatives(thresholds=None, name=None, dtype=None) classifier = Sequential() classifier.add(model) classifier.add(Flatten()) classifier.add(Dense(1024, activation='relu')) classifier.add(Dense(1, activation = 'sigmoid')) classifier.compile(optimizer = opt, loss = 'binary_crossentropy', metrics = ['accuracy', tp,tn,fp,fn]) history = classifier.fit(train_ds, epochs = 20, shuffle=True, batch_size=64) print(history.history.keys()) import matplotlib.pyplot as plt # summarize history for accuracy plt.plot(history.history['accuracy']) # plt.plot(history.history['val_accuracy']) plt.title('Inception accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='lower right') plt.show() # summarize history for loss plt.plot(history.history['loss']) # plt.plot(history.history['val_loss']) plt.title('Inception loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper right') plt.show() ```
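The held-out lists `test` and `y_test` are created above but never used. A short sketch for scoring the trained classifier on them, reusing the same `process_path` pipeline (the batch size is an assumption; the labels are re-derived from the file paths, so `y_test` itself is not needed here):

```python
# Build an evaluation pipeline over the held-out file paths
test_ds = tf.data.Dataset.from_tensor_slices(test)
test_ds = test_ds.map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_ds = test_ds.batch(64)
test_ds = test_ds.prefetch(1)

# Returns loss, accuracy and the confusion-matrix counts listed in the metrics
results = classifier.evaluate(test_ds)
print(dict(zip(classifier.metrics_names, results)))
```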
``` import numpy as np from matplotlib import pyplot as plt import plotly.express as px import pandas as pd from a2c.a2c import A2C from a2c.callback import ProgressBarCallback from environments.continuous_teaching import ContinuousTeaching from human_agents import generate_agents from test_a2c import produce_rates def run_one_episode(env, policy, user=0): obs = env.reset_for_new_user(user) rewards = [] actions = [] env.penalty_coeff=0.0 while True: action = policy.act(obs) obs, reward, done, info = env.step(action) rewards.append(reward) actions.append(action[0]) if done: obs = env.reset() break return rewards, actions def teach_in_sessions(env, session_lengths): rewards = [] actions = [] models = [] for session_length in session_lengths: model = A2C(env, seed=123) models += [model] env.t_max = session_length iterations =1000000 check_freq = env.t_max with ProgressBarCallback(env, check_freq) as callback: model.learn(iterations, callback=callback) plt.plot([np.mean(r) for r in callback.hist_rewards]) plt.show() r, a = run_one_episode(env, model) rewards += [np.array(r)] actions += [np.array(a)] return rewards, actions, models session_lengths = [10, 20, 40, 70, 100, 200] n_users = 10 n_items = 140 forget_rates, repetition_rates = generate_agents(n_users, n_items) env = ContinuousTeaching( t_max=100, initial_forget_rates=forget_rates, initial_repetition_rates=repetition_rates, n_item=n_items, tau=0.9, delta_coeffs=np.array([3, 20]), n_coeffs=2, penalty_coeff=0.4 ) r, a, m=teach_in_sessions(env, session_lengths) aa = [] j = 2 for i in range(6): rewards, actions = run_one_episode(env, m[i], j) # print(rewards, actions) item_count = [actions.count(x)+1 for x in range(n_items)] shown = [bool(actions.count(x) > 0) for x in range(n_items)] df = pd.DataFrame(zip(env.all_forget_rates[j], env.all_repetition_rates[j], item_count, shown)) df.columns=['forget', 'repeat', 'counter', 'shown'] df['shown']=df['shown'].astype('str') fig = px.scatter(df, x="forget", y="repeat", size="counter", color='shown',log_x=True, #log_y=True, size_max=50, title='Number of occurrences, length: {}, user: {}'.format(session_lengths[i], j)) fig.show() %pip install -U kaleido item_count = [actions.count(x)+0.1 for x in range(n_items)] shown = [bool(actions.count(x) > 0) for x in range(n_items)] df = pd.DataFrame(zip(env.initial_forget_rates, env.initial_repetition_rates, item_count, shown)) df.columns=['forget', 'repeat', 'counter', 'shown'] df['shown']=df['shown'].astype('str') fig = px.scatter(df, x="forget", y="repeat", size="counter", color='shown', log_y=True, log_x=True, size_max=50, title='Number of occurrences only for shown ones, {}, {}'.format(session_lengths[i], j)) fig.show() ```
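The `kaleido` install above suggests the figures are meant to be exported as static images as well as shown inline. A minimal sketch, assuming a `fig` object like the ones created in the loop (the file name is illustrative):

```python
# Requires the kaleido package installed above
fig.write_image('occurrences_user_{}.png'.format(j), width=900, height=600)
```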
# Build machine learning workflows with Amazon SageMaker Processing and AWS Step Functions Data Science SDK

With Amazon SageMaker Processing, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform. A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.

The Step Functions SDK is an open source library that allows data scientists to easily create and execute machine learning workflows using AWS Step Functions and Amazon SageMaker. For more information, please see the following resources:

* [AWS Step Functions](https://aws.amazon.com/step-functions/)
* [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)
* [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io)

The SageMaker Processing step ([ProcessingStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/stable/sagemaker.html#stepfunctions.steps.sagemaker.ProcessingStep)) in the AWS Step Functions Data Science SDK allows machine learning engineers to integrate SageMaker Processing directly into their Step Functions workflows.

This notebook describes how to use the AWS Step Functions Data Science SDK to create a machine learning workflow that uses SageMaker Processing jobs to pre-process data, train a model, and evaluate the quality of that model. The high-level steps are:

1. Run a SageMaker processing job using the `ProcessingStep` of the AWS Step Functions Data Science SDK to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.
2. Run a training job using the `TrainingStep` of the AWS Step Functions Data Science SDK on the pre-processed training data to train a model.
3. Run a processing job on the pre-processed test data to evaluate the trained model's performance using the `ProcessingStep` of the AWS Step Functions Data Science SDK.

The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, turn the data into features that the training algorithm can use to train a binary classification model, and split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000` or less than `$50,000`. The dataset is heavily class imbalanced, with most records labeled as earning less than `$50,000`. We train the model using logistic regression.
## Setup ``` # Import the latest sagemaker, stepfunctions and boto3 SDKs import sys !{sys.executable} -m pip install --upgrade pip !{sys.executable} -m pip install -qU awscli boto3 "sagemaker==1.71.0" !{sys.executable} -m pip install -qU "stepfunctions==1.1.0" !{sys.executable} -m pip show sagemaker stepfunctions ``` ### Import the Required Modules ``` import io import logging import os import random import time import uuid import boto3 import stepfunctions from stepfunctions import steps from stepfunctions.inputs import ExecutionInput from stepfunctions.steps import ( Chain, ChoiceRule, ModelStep, ProcessingStep, TrainingStep, TransformStep, ) from stepfunctions.template import TrainingPipeline from stepfunctions.template.utils import replace_parameters_with_jsonpath from stepfunctions.workflow import Workflow import sagemaker from sagemaker import get_execution_role from sagemaker.amazon.amazon_estimator import get_image_uri from sagemaker.processing import ProcessingInput, ProcessingOutput from sagemaker.s3 import S3Uploader from sagemaker.sklearn.processing import SKLearnProcessor # SageMaker Session sagemaker_session = sagemaker.Session() region = sagemaker_session.boto_region_name # SageMaker Execution Role # You can use sagemaker.get_execution_role() if running inside sagemaker's notebook instance role = get_execution_role() ``` Next, we'll create fine-grained IAM roles for the Step Functions and SageMaker. The IAM roles grant the services permissions within your AWS environment. ## Add permissions to your notebook role in IAM The IAM role assumed by your notebook requires permission to create and run workflows in AWS Step Functions. If this notebook is running on a SageMaker notebook instance, do the following to provide IAM permissions to the notebook: 1. Open the Amazon [SageMaker console](https://console.aws.amazon.com/sagemaker/). 2. Select **Notebook instances** and choose the name of your notebook instance. 3. Under **Permissions and encryption** select the role ARN to view the role on the IAM console. 4. Copy and save the IAM role ARN for later use. 5. Choose **Attach policies** and search for `AWSStepFunctionsFullAccess`. 6. Select the check box next to `AWSStepFunctionsFullAccess` and choose **Attach policy**. If you are running this notebook outside of SageMaker, the SDK will use your configured AWS CLI configuration. For more information, see [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). Next, let's create an execution role in IAM for Step Functions. ## Create an Execution Role for Step Functions Your Step Functions workflow requires an IAM role to interact with other services in your AWS environment. 1. Go to the [IAM console](https://console.aws.amazon.com/iam/). 2. Select **Roles** and then **Create role**. 3. Under **Choose the service that will use this role** select **Step Functions**. 4. Choose **Next** until you can enter a **Role name**. 5. Enter a name such as `StepFunctionsWorkflowExecutionRole` and then select **Create role**. Next, attach a AWS Managed IAM policy to the role you created as per below steps. 1. Go to the [IAM console](https://console.aws.amazon.com/iam/). 2. Select **Roles** 3. Search for `StepFunctionsWorkflowExecutionRole` IAM Role 4. Under the **Permissions** tab, click **Attach policies** and then search for `CloudWatchEventsFullAccess` IAM Policy managed by AWS. 5. Click on `Attach Policy` Next, create and attach another new policy to the role you created. 
As a best practice, the following steps will attach a policy that only provides access to the specific resources and actions needed for this solution. 1. Under the **Permissions** tab, click **Attach policies** and then **Create policy**. 2. Enter the following in the **JSON** tab: ```json { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "events:PutTargets", "events:DescribeRule", "events:PutRule" ], "Resource": [ "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule", "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule", "arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule", "arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule", "arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule" ] }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": "iam:PassRole", "Resource": "NOTEBOOK_ROLE_ARN", "Condition": { "StringEquals": { "iam:PassedToService": "sagemaker.amazonaws.com" } } }, { "Sid": "VisualEditor2", "Effect": "Allow", "Action": [ "batch:DescribeJobs", "batch:SubmitJob", "batch:TerminateJob", "dynamodb:DeleteItem", "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem", "ecs:DescribeTasks", "ecs:RunTask", "ecs:StopTask", "glue:BatchStopJobRun", "glue:GetJobRun", "glue:GetJobRuns", "glue:StartJobRun", "lambda:InvokeFunction", "sagemaker:CreateEndpoint", "sagemaker:CreateEndpointConfig", "sagemaker:CreateHyperParameterTuningJob", "sagemaker:CreateModel", "sagemaker:CreateProcessingJob", "sagemaker:CreateTrainingJob", "sagemaker:CreateTransformJob", "sagemaker:DeleteEndpoint", "sagemaker:DeleteEndpointConfig", "sagemaker:DescribeHyperParameterTuningJob", "sagemaker:DescribeProcessingJob", "sagemaker:DescribeTrainingJob", "sagemaker:DescribeTransformJob", "sagemaker:ListProcessingJobs", "sagemaker:ListTags", "sagemaker:StopHyperParameterTuningJob", "sagemaker:StopProcessingJob", "sagemaker:StopTrainingJob", "sagemaker:StopTransformJob", "sagemaker:UpdateEndpoint", "sns:Publish", "sqs:SendMessage" ], "Resource": "*" } ] } ``` 3. Replace **NOTEBOOK_ROLE_ARN** with the ARN for your notebook that you created in the previous step in the above Policy. 4. Choose **Review policy** and give the policy a name such as `StepFunctionsWorkflowExecutionPolicy`. 5. Choose **Create policy**. 6. Select **Roles** and search for your `StepFunctionsWorkflowExecutionRole` role. 7. Under the **Permissions** tab, click **Attach policies**. 8. Search for your newly created `StepFunctionsWorkflowExecutionPolicy` policy and select the check box next to it. 9. Choose **Attach policy**. You will then be redirected to the details page for the role. 10. Copy the StepFunctionsWorkflowExecutionRole **Role ARN** at the top of the Summary. ``` # paste the StepFunctionsWorkflowExecutionRole ARN from above workflow_execution_role = "" ``` ### Create StepFunctions Workflow execution Input schema ``` # Generate unique names for Pre-Processing Job, Training Job, and Model Evaluation Job for the Step Functions Workflow training_job_name = "scikit-learn-training-{}".format( uuid.uuid1().hex ) # Each Training Job requires a unique name preprocessing_job_name = "scikit-learn-sm-preprocessing-{}".format( uuid.uuid1().hex ) # Each Preprocessing job requires a unique name, evaluation_job_name = "scikit-learn-sm-evaluation-{}".format( uuid.uuid1().hex ) # Each Evaluation Job requires a unique name # SageMaker expects unique names for each job, model and endpoint. 
# If these names are not unique the execution will fail. Pass these # dynamically for each execution using placeholders. execution_input = ExecutionInput( schema={ "PreprocessingJobName": str, "TrainingJobName": str, "EvaluationProcessingJobName": str, } ) ``` ## Data pre-processing and feature engineering Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 20 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`. ``` import pandas as pd input_data = "s3://sagemaker-sample-data-{}/processing/census/census-income.csv".format( region ) df = pd.read_csv(input_data, nrows=10) df.head(n=10) ``` To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided. ``` sklearn_processor = SKLearnProcessor( framework_version="0.20.0", role=role, instance_type="ml.m5.xlarge", instance_count=1, max_runtime_in_seconds=1200, ) ``` This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you * Remove duplicates and rows with conflicting data * transform the target `income` column into a column containing two labels. * transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them * scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` so they're suitable for training * encode the `education`, `major industry code`, `class of worker` so they're suitable for training * split the data into training and test datasets, and saves the training features and labels and test features and labels. Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model. 
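The column transformer at the heart of this script can be sanity-checked locally on a toy frame before paying for a processing job. A minimal sketch follows; note one version caveat: the `(columns, transformer)` tuple order used in `preprocessing.py` matches scikit-learn 0.20, the version pinned for the processing container, while scikit-learn 0.22+ expects `(transformer, columns)`, which is what this local sketch uses. The toy values are illustrative only.

```
# Toy illustration of the binning + scaling + one-hot pipeline used in preprocessing.py,
# written for a modern scikit-learn (transformer-first tuple order).
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import KBinsDiscretizer, OneHotEncoder, StandardScaler

toy = pd.DataFrame(
    {
        "age": [25, 40, 63],
        "capital gains": [0.0, 1500.0, 0.0],
        "education": ["HS", "BS", "MS"],
    }
)

preprocess = make_column_transformer(
    (KBinsDiscretizer(n_bins=3, encode="onehot-dense"), ["age"]),
    (StandardScaler(), ["capital gains"]),
    (OneHotEncoder(handle_unknown="ignore"), ["education"]),
)

# 3 age bins + 1 scaled column + 3 education categories -> 7 output columns
print(preprocess.fit_transform(toy).shape)
```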
``` %%writefile preprocessing.py import argparse import os import warnings import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer from sklearn.preprocessing import PolynomialFeatures from sklearn.compose import make_column_transformer from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action="ignore", category=DataConversionWarning) columns = [ "age", "education", "major industry code", "class of worker", "num persons worked for employer", "capital gains", "capital losses", "dividends from stocks", "income", ] class_labels = [" - 50000.", " 50000+."] def print_shape(df): negative_examples, positive_examples = np.bincount(df["income"]) print( "Data shape: {}, {} positive examples, {} negative examples".format( df.shape, positive_examples, negative_examples ) ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--train-test-split-ratio", type=float, default=0.3) args, _ = parser.parse_known_args() print("Received arguments {}".format(args)) input_data_path = os.path.join("/opt/ml/processing/input", "census-income.csv") print("Reading input data from {}".format(input_data_path)) df = pd.read_csv(input_data_path) df = pd.DataFrame(data=df, columns=columns) df.dropna(inplace=True) df.drop_duplicates(inplace=True) df.replace(class_labels, [0, 1], inplace=True) negative_examples, positive_examples = np.bincount(df["income"]) print( "Data after cleaning: {}, {} positive examples, {} negative examples".format( df.shape, positive_examples, negative_examples ) ) split_ratio = args.train_test_split_ratio print("Splitting data into train and test sets with ratio {}".format(split_ratio)) X_train, X_test, y_train, y_test = train_test_split( df.drop("income", axis=1), df["income"], test_size=split_ratio, random_state=0 ) preprocess = make_column_transformer( ( ["age", "num persons worked for employer"], KBinsDiscretizer(encode="onehot-dense", n_bins=10), ), ( ["capital gains", "capital losses", "dividends from stocks"], StandardScaler(), ), ( ["education", "major industry code", "class of worker"], OneHotEncoder(sparse=False), ), ) print("Running preprocessing and feature engineering transformations") train_features = preprocess.fit_transform(X_train) test_features = preprocess.transform(X_test) print("Train data shape after preprocessing: {}".format(train_features.shape)) print("Test data shape after preprocessing: {}".format(test_features.shape)) train_features_output_path = os.path.join( "/opt/ml/processing/train", "train_features.csv" ) train_labels_output_path = os.path.join( "/opt/ml/processing/train", "train_labels.csv" ) test_features_output_path = os.path.join( "/opt/ml/processing/test", "test_features.csv" ) test_labels_output_path = os.path.join("/opt/ml/processing/test", "test_labels.csv") print("Saving training features to {}".format(train_features_output_path)) pd.DataFrame(train_features).to_csv( train_features_output_path, header=False, index=False ) print("Saving test features to {}".format(test_features_output_path)) pd.DataFrame(test_features).to_csv( test_features_output_path, header=False, index=False ) print("Saving training labels to {}".format(train_labels_output_path)) y_train.to_csv(train_labels_output_path, header=False, index=False) print("Saving test labels to {}".format(test_labels_output_path)) y_test.to_csv(test_labels_output_path, header=False, index=False) ``` Upload the pre 
processing script. ``` PREPROCESSING_SCRIPT_LOCATION = "preprocessing.py" input_code = sagemaker_session.upload_data( PREPROCESSING_SCRIPT_LOCATION, bucket=sagemaker_session.default_bucket(), key_prefix="data/sklearn_processing/code", ) ``` S3 Locations of processing output and training data. ``` s3_bucket_base_uri = "{}{}".format("s3://", sagemaker_session.default_bucket()) output_data = "{}/{}".format(s3_bucket_base_uri, "data/sklearn_processing/output") preprocessed_training_data = "{}/{}".format(output_data, "train_data") ``` ### Create the `ProcessingStep` We will now create the [ProcessingStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/stable/sagemaker.html#stepfunctions.steps.sagemaker.ProcessingStep) that will launch a SageMaker Processing Job. This step will use the SKLearnProcessor as defined in the previous steps along with the inputs and outputs objects that are defined in the below steps. #### Create [ProcessingInputs](https://sagemaker.readthedocs.io/en/stable/api/training/processing.html#sagemaker.processing.ProcessingInput) and [ProcessingOutputs](https://sagemaker.readthedocs.io/en/stable/api/training/processing.html#sagemaker.processing.ProcessingOutput) objects for Inputs and Outputs respectively for the SageMaker Processing Job. ``` inputs = [ ProcessingInput( source=input_data, destination="/opt/ml/processing/input", input_name="input-1" ), ProcessingInput( source=input_code, destination="/opt/ml/processing/input/code", input_name="code", ), ] outputs = [ ProcessingOutput( source="/opt/ml/processing/train", destination="{}/{}".format(output_data,"train_data"), output_name="train_data", ), ProcessingOutput( source="/opt/ml/processing/test", destination="{}/{}".format(output_data, "test_data"), output_name="test_data", ), ] ``` #### Create the `ProcessingStep` ``` # preprocessing_job_name = generate_job_name() processing_step = ProcessingStep( "SageMaker pre-processing step", processor=sklearn_processor, job_name=execution_input["PreprocessingJobName"], inputs=inputs, outputs=outputs, container_arguments=["--train-test-split-ratio", "0.2"], container_entrypoint=["python3", "/opt/ml/processing/input/code/preprocessing.py"], ) ``` ## Training using the pre-processed data We create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`. This will be used to create a `TrainingStep` for the workflow. ``` from sagemaker.sklearn.estimator import SKLearn sklearn = SKLearn(entry_point="train.py", train_instance_type="ml.m5.xlarge", role=role) ``` The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads into a `model.tar.gz` file into S3 at the end of the training job. 
``` %%writefile train.py import os import pandas as pd from sklearn.linear_model import LogisticRegression from sklearn.externals import joblib if __name__ == "__main__": training_data_directory = "/opt/ml/input/data/train" train_features_data = os.path.join(training_data_directory, "train_features.csv") train_labels_data = os.path.join(training_data_directory, "train_labels.csv") print("Reading input data") X_train = pd.read_csv(train_features_data, header=None) y_train = pd.read_csv(train_labels_data, header=None) model = LogisticRegression(class_weight="balanced", solver="lbfgs") print("Training LR model") model.fit(X_train, y_train) model_output_directory = os.path.join("/opt/ml/model", "model.joblib") print("Saving model to {}".format(model_output_directory)) joblib.dump(model, model_output_directory) ``` ### Create the `TrainingStep` for the Workflow ``` training_step = steps.TrainingStep( "SageMaker Training Step", estimator=sklearn, data={"train": sagemaker.s3_input(preprocessed_training_data, content_type="csv")}, job_name=execution_input["TrainingJobName"], wait_for_completion=True, ) ``` ## Model Evaluation `evaluation.py` is the model evaluation script. Since the script also runs using scikit-learn as a dependency, run this using the `SKLearnProcessor` you created previously. This script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model. ``` %%writefile evaluation.py import json import os import tarfile import pandas as pd from sklearn.externals import joblib from sklearn.metrics import classification_report, roc_auc_score, accuracy_score if __name__ == "__main__": model_path = os.path.join("/opt/ml/processing/model", "model.tar.gz") print("Extracting model from path: {}".format(model_path)) with tarfile.open(model_path) as tar: tar.extractall(path=".") print("Loading model") model = joblib.load("model.joblib") print("Loading test input data") test_features_data = os.path.join("/opt/ml/processing/test", "test_features.csv") test_labels_data = os.path.join("/opt/ml/processing/test", "test_labels.csv") X_test = pd.read_csv(test_features_data, header=None) y_test = pd.read_csv(test_labels_data, header=None) predictions = model.predict(X_test) print("Creating classification evaluation report") report_dict = classification_report(y_test, predictions, output_dict=True) report_dict["accuracy"] = accuracy_score(y_test, predictions) report_dict["roc_auc"] = roc_auc_score(y_test, predictions) print("Classification report:\n{}".format(report_dict)) evaluation_output_path = os.path.join( "/opt/ml/processing/evaluation", "evaluation.json" ) print("Saving classification report to {}".format(evaluation_output_path)) with open(evaluation_output_path, "w") as f: f.write(json.dumps(report_dict)) MODELEVALUATION_SCRIPT_LOCATION = "evaluation.py" input_evaluation_code = sagemaker_session.upload_data( MODELEVALUATION_SCRIPT_LOCATION, bucket=sagemaker_session.default_bucket(), key_prefix="data/sklearn_processing/code", ) ``` Create input and output objects for Model Evaluation ProcessingStep. 
``` preprocessed_testing_data = "{}/{}".format(output_data, "test_data") model_data_s3_uri = "{}/{}/{}".format( s3_bucket_base_uri, training_job_name, "output/model.tar.gz" ) output_model_evaluation_s3_uri = "{}/{}/{}".format( s3_bucket_base_uri, training_job_name, "evaluation" ) inputs_evaluation = [ ProcessingInput( source=preprocessed_testing_data, destination="/opt/ml/processing/test", input_name="input-1", ), ProcessingInput( source=model_data_s3_uri, destination="/opt/ml/processing/model", input_name="input-2", ), ProcessingInput( source=input_evaluation_code, destination="/opt/ml/processing/input/code", input_name="code", ), ] outputs_evaluation = [ ProcessingOutput( source="/opt/ml/processing/evaluation", destination=output_model_evaluation_s3_uri, output_name="evaluation", ), ] model_evaluation_processor = SKLearnProcessor( framework_version="0.20.0", role=role, instance_type="ml.m5.xlarge", instance_count=1, max_runtime_in_seconds=1200, ) processing_evaluation_step = ProcessingStep( "SageMaker Processing Model Evaluation step", processor=model_evaluation_processor, job_name=execution_input["EvaluationProcessingJobName"], inputs=inputs_evaluation, outputs=outputs_evaluation, container_entrypoint=["python3", "/opt/ml/processing/input/code/evaluation.py"], ) ``` Create `Fail` state to mark the workflow failed in case any of the steps fail. ``` failed_state_sagemaker_processing_failure = stepfunctions.steps.states.Fail( "ML Workflow failed", cause="SageMakerProcessingJobFailed" ) ``` #### Add the Error handling in the workflow We will use the [Catch Block](https://aws-step-functions-data-science-sdk.readthedocs.io/en/stable/states.html#stepfunctions.steps.states.Catch) to perform error handling. If the Processing Job Step or Training Step fails, the flow will go into failure state. ``` catch_state_processing = stepfunctions.steps.states.Catch( error_equals=["States.TaskFailed"], next_step=failed_state_sagemaker_processing_failure, ) processing_step.add_catch(catch_state_processing) processing_evaluation_step.add_catch(catch_state_processing) training_step.add_catch(catch_state_processing) ``` ## Create and execute the `Workflow` ``` workflow_graph = Chain([processing_step, training_step, processing_evaluation_step]) branching_workflow = Workflow( name="SageMakerProcessingWorkflow", definition=workflow_graph, role=workflow_execution_role, ) branching_workflow.create() # Execute workflow execution = branching_workflow.execute( inputs={ "PreprocessingJobName": preprocessing_job_name, # Each pre processing job (SageMaker processing job) requires a unique name, "TrainingJobName": training_job_name, # Each Sagemaker Training job requires a unique name, "EvaluationProcessingJobName": evaluation_job_name, # Each SageMaker processing job requires a unique name, } ) execution_output = execution.get_output(wait=True) execution.render_progress() ``` ### Inspect the output of the Workflow execution Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report. 
``` workflow_execution_output_json = execution.get_output(wait=True) from sagemaker.s3 import S3Downloader import json evaluation_output_config = workflow_execution_output_json["ProcessingOutputConfig"] for output in evaluation_output_config["Outputs"]: if output["OutputName"] == "evaluation": evaluation_s3_uri = "{}/{}".format(output["S3Output"]["S3Uri"],"evaluation.json") break evaluation_output = S3Downloader.read_file(evaluation_s3_uri) evaluation_output_dict = json.loads(evaluation_output) print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4)) ``` ## Clean Up When you are done, make sure to clean up your AWS account by deleting resources you won't be reusing. Uncomment the code below and run the cell to delete the Step Function. ``` #branching_workflow.delete() ```
# Continuous Control - Version 4

---

Modifications:

1. Add gradient clipping: the gradients of the critic loss are clipped.
2. Change the soft-update period from 1 to 20 time steps.
3. Change the maximum number of time steps per episode from 300 to 1,000.
4. Change the learn period from 1 time step to 20 time steps. In each learning period, change the number of sampled updates from 1 to 10.
5. Change fc1_units and fc2_units to 400 and 300, respectively.
6. Change the buffer size from 1e5 to 1e6.
7. Change the learning rate from 1e-4 to 1e-3.
8. Add exploration decay: `epsilon_start = 1.0`, `epsilon_decay = 1e-6` (epsilon is decremented by `epsilon_decay`).
9. Reset the OUNoise process after every call to `learn()`.

(A minimal illustrative sketch of modifications 1, 2, 4, and 8 appears at the end of this notebook.)

You are welcome to use this coding environment to train your agent for the project. Follow the instructions below to get started!

### 1. Start the Environment

Run the next code cell to install a few packages. This line will take a few minutes to run!

```
!pip -q install ./python
```

The environments corresponding to both versions of the environment are already saved in the Workspace and can be accessed at the file paths provided below. Please select one of the two options below for loading the environment.

```
from unityagents import UnityEnvironment
import numpy as np

# select this option to load version 1 (with a single agent) of the environment
# env = UnityEnvironment(file_name='/data/Reacher_One_Linux_NoVis/Reacher_One_Linux_NoVis.x86_64')

# select this option to load version 2 (with 20 agents) of the environment
env = UnityEnvironment(file_name='/data/Reacher_Linux_NoVis/Reacher.x86_64')
```

Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.

```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

env.brain_names
```

### 2. Examine the State and Action Spaces

Run the code cell below to print some information about the environment.

```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]

# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)

# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)

# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```

### 3. Take Random Actions in the Environment

In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment. Note that **in this coding environment, you will not be able to watch the agents while they are training**, and you should set `train_mode=True` to restart the environment.
``` env_info = env.reset(train_mode=True)[brain_name] # reset the environment states = env_info.vector_observations # get the current state (for each agent) scores = np.zeros(num_agents) # initialize the score (for each agent) while True: actions = np.random.randn(num_agents, action_size) # select an action (for each agent) actions = np.clip(actions, -1, 1) # all actions between -1 and 1 env_info = env.step(actions)[brain_name] # send all actions to tne environment next_states = env_info.vector_observations # get next state (for each agent) rewards = env_info.rewards # get reward (for each agent) dones = env_info.local_done # see if episode finished scores += env_info.rewards # update the score (for each agent) states = next_states # roll over states to next time step if np.any(dones): # exit loop if episode finished break print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores))) ``` When finished, you can close the environment. ``` # env.close() ``` ### 4. It's Your Turn! Now it's your turn to train your own agent to solve the environment! A few **important notes**: - When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following: ```python env_info = env.reset(train_mode=True)[brain_name] ``` - To structure your work, you're welcome to work directly in this Jupyter notebook, or you might like to start over with a new file! You can see the list of files in the workspace by clicking on **_Jupyter_** in the top left corner of the notebook. - In this coding environment, you will not be able to watch the agents while they are training. However, **_after training the agents_**, you can download the saved model weights to watch the agents on your own machine! ``` %load_ext autoreload %autoreload 2 from utils.workspace_utils import active_session import pdb import matplotlib.pyplot as plt %matplotlib inline from collections import deque import numpy as np from datetime import datetime from utils import utils from unity_env_decorator import UnityEnvDecorator from agents.ddpg_agent_version_4 import DDPGAgentVersion4 from utils.utils import ScoreParcels version='DDPG_version_4' dir_logs='./logs/' dir_checkpoints='./checkpoints/' def DDPG(envDecorator, agent, n_episode=1000, max_t=1000, print_every=100, size_window=100): # Record accumulated reward for every episode scores_deque = deque(maxlen=size_window) scores = [] # Declare time stamp for total execution time time_total_start = datetime.now() for i_episode in range(1, n_episode+1): states = envDecorator.reset() agent.reset() # pdb.set_trace() score = np.zeros(envDecorator.num_agents) time_episode_start = datetime.now() for i_time_step in range(max_t): actions = agent.act(states) next_states, rewards, dones, _ = envDecorator.step(actions) agent.step(states, actions, rewards, next_states, dones, i_time_step) score += rewards states = next_states if np.any(dones): break score_mean = np.mean(score) scores.append(score_mean) scores_deque.append(score_mean) print('Episode {}\tScore: {:.2f}\tAverage Score: {:.2f}\tAbsolute Time={}\r'.format(i_episode, score_mean, np.mean(scores_deque), datetime.now() - time_total_start), end='') if i_episode % print_every == 0: print('Episode {}\tAverage Score: {:.2f}\tAverage Time={}\r'.format(i_episode, np.mean(scores_deque), datetime.now() - time_episode_start)) time_episode_start = datetime.now() print('Average Score: {:.2f}\tTotal Time={}'.format(np.mean(scores_deque), datetime.now() - time_total_start)) return scores with 
active_session(): # Decorator of unity environmet envDecorator = UnityEnvDecorator(env) agent = DDPGAgentVersion4(state_size=33, action_size=4, num_agents=envDecorator.num_agents, random_seed=0, lr_actor=1e-3, lr_critic=1e-3, fc1_units=400, fc2_units=300, buffer_size=int(1e6), max_norm=1.0, learn_period=20, learn_sampling_num=10, epsilon=1.0, epsilon_decay=1e-6) scores = DDPG(envDecorator, agent, n_episode=400, max_t=1000) utils.save_logs(scores, dir_logs, version) path_score = utils.log_path_name(dir_logs, version) score_parcels = [ScoreParcels('DDPG', path_score, 'r')] utils.plot_scores(score_parcels, size_window=100) # save models in the agent. (Agent needs to return dict with model-name pair) utils.save_agent(agent.model_dicts(), dir_checkpoints, version) ``` ### Plot with Raw Data ``` %load_ext autoreload %autoreload 2 %matplotlib inline import matplotlib.pyplot as plt from utils import utils from utils.utils import ScoreParcels score_parcels = [ScoreParcels('Version 3', './logs/log_DDPG_version_4.pickle', 'c'),] utils.plot_scores_v2(score_parcels, size_window=100, max_len=400, show_origin=True, show_episode_on_label=True) ```
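The agent itself is imported from `agents/ddpg_agent_version_4.py` and its implementation is not shown in this notebook. As a rough, illustrative PyTorch-style sketch (not the actual `DDPGAgentVersion4` code) of how modifications 1, 2, 4, and 8 from the list at the top fit together, using the hyperparameters passed above (`max_norm=1.0`, `learn_period=20`, `learn_sampling_num=10`, `epsilon_decay=1e-6`); the tiny networks and random tensors only stand in for real replay-buffer samples:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

state_size, action_size, batch = 33, 4, 8
gamma, tau, max_norm = 0.99, 1e-3, 1.0
epsilon, epsilon_decay = 1.0, 1e-6

def mlp(in_dim, out_dim):
    # fc1_units=400, fc2_units=300 as in modification 5
    return nn.Sequential(nn.Linear(in_dim, 400), nn.ReLU(),
                         nn.Linear(400, 300), nn.ReLU(),
                         nn.Linear(300, out_dim))

actor, actor_target = mlp(state_size, action_size), mlp(state_size, action_size)
critic, critic_target = mlp(state_size + action_size, 1), mlp(state_size + action_size, 1)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def soft_update(target, source, tau):
    # modification 2: blend target-network weights toward the local network
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.copy_(tau * s.data + (1.0 - tau) * t.data)

def learn(states, actions, rewards, next_states, dones):
    global epsilon
    # critic update with clipped gradients (modification 1)
    with torch.no_grad():
        next_actions = torch.tanh(actor_target(next_states))
        q_next = critic_target(torch.cat([next_states, next_actions], dim=1))
        q_targets = rewards + gamma * q_next * (1 - dones)
    q_expected = critic(torch.cat([states, actions], dim=1))
    critic_loss = F.mse_loss(q_expected, q_targets)
    critic_opt.zero_grad()
    critic_loss.backward()
    torch.nn.utils.clip_grad_norm_(critic.parameters(), max_norm)
    critic_opt.step()
    # actor update (unchanged by the modifications)
    actor_loss = -critic(torch.cat([states, torch.tanh(actor(states))], dim=1)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    # soft-update the targets and decay exploration (modifications 2 and 8)
    soft_update(critic_target, critic, tau)
    soft_update(actor_target, actor, tau)
    epsilon = max(epsilon - epsilon_decay, 0.0)

# modification 4: learn every 20 environment steps, sampling 10 mini-batches each time
for t_step in range(100):
    if t_step % 20 == 0:
        for _ in range(10):
            learn(torch.randn(batch, state_size), torch.rand(batch, action_size) * 2 - 1,
                  torch.randn(batch, 1), torch.randn(batch, state_size),
                  torch.zeros(batch, 1))
```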
# Explain Model Predictions with Amazon SageMaker Clarify

There are expanding business needs and legislative regulations that require explanations of _why_ a model made the decision it did. SageMaker Clarify uses SHAP to explain the contribution that each input feature makes to the final decision.

```
import boto3
import sagemaker
import pandas as pd
import numpy as np

sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name

sm = boto3.Session().client(service_name="sagemaker", region_name=region)

import matplotlib.pyplot as plt

%matplotlib inline
%config InlineBackend.figure_format='retina'
```

# Test data for explainability

We created test data in JSONLines format to match the model inputs.

```
test_data_explainability_path = "./data-clarify/test_data_explainability.jsonl"

!head -n 1 $test_data_explainability_path
```

### Upload the data

```
test_data_explainablity_s3_uri = sess.upload_data(
    bucket=bucket, key_prefix="bias/test_data_explainability", path=test_data_explainability_path
)
test_data_explainablity_s3_uri

!aws s3 ls $test_data_explainablity_s3_uri

%store test_data_explainablity_s3_uri
```

# List Pipeline Execution Steps

```
%store -r pipeline_name

print(pipeline_name)

%%time

import time
from pprint import pprint

executions_response = sm.list_pipeline_executions(PipelineName=pipeline_name)["PipelineExecutionSummaries"]
pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
print(pipeline_execution_status)

while pipeline_execution_status == "Executing":
    try:
        executions_response = sm.list_pipeline_executions(PipelineName=pipeline_name)["PipelineExecutionSummaries"]
        pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
    except Exception as e:
        print("Please wait...")
        time.sleep(30)

pprint(executions_response)

pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
print(pipeline_execution_status)

pipeline_execution_arn = executions_response[0]["PipelineExecutionArn"]
print(pipeline_execution_arn)

from pprint import pprint

steps = sm.list_pipeline_execution_steps(PipelineExecutionArn=pipeline_execution_arn)
pprint(steps)
```

# View Created Model

```
for execution_step in steps["PipelineExecutionSteps"]:
    if execution_step["StepName"] == "CreateModel":
        model_arn = execution_step["Metadata"]["Model"]["Arn"]
        break
print(model_arn)

pipeline_model_name = model_arn.split("/")[-1]
print(pipeline_model_name)
```

# Setup Model Explainability Analysis

```
from sagemaker import clarify

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role, instance_count=1, instance_type="ml.c5.2xlarge", sagemaker_session=sess
)
```

# Writing DataConfig and ModelConfig

A `DataConfig` object communicates some basic information about data I/O to Clarify. We specify where to find the input dataset, where to store the output, the target column (`label`), the header names, and the dataset type.

Similarly, the `ModelConfig` object communicates information about your trained model and `ModelPredictedLabelConfig` provides information on the format of your predictions.

**Note**: To avoid additional traffic to your production models, SageMaker Clarify sets up and tears down a dedicated endpoint when processing. `ModelConfig` specifies your preferred instance type and instance count used to run your model on during Clarify's processing.
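`ModelPredictedLabelConfig` is mentioned above but never constructed later in this notebook; the predicted-label key is instead passed directly as `model_scores` to `run_explainability`. For reference, a minimal sketch of how it could be configured, assuming the model's JSONLines response exposes the score under `predicted_label`:

```
# Optional: describe the prediction format explicitly instead of passing model_scores directly.
predictions_config = clarify.ModelPredictedLabelConfig(label="predicted_label")
```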
## DataConfig ``` explainability_report_prefix = "bias/explainability-report-{}".format(pipeline_model_name) explainability_output_path = "s3://{}/{}".format(bucket, explainability_report_prefix) explainability_data_config = clarify.DataConfig( s3_data_input_path=test_data_explainablity_s3_uri, s3_output_path=explainability_output_path, headers=["review_body", "product_category"], features="features", dataset_type="application/jsonlines", ) ``` ## ModelConfig ``` model_config = clarify.ModelConfig( model_name=pipeline_model_name, instance_type="ml.m5.4xlarge", instance_count=1, content_type="application/jsonlines", accept_type="application/jsonlines", content_template='{"features":$features}', ) ``` ## SHAPConfig ``` shap_config = clarify.SHAPConfig( baseline=[{"features": ["ok", "Digital_Software"]}], # [data.iloc[0].values.tolist()], num_samples=5, agg_method="mean_abs", ) ``` # Run Clarify Job ``` clarify_processor.run_explainability( model_config=model_config, model_scores="predicted_label", data_config=explainability_data_config, explainability_config=shap_config, wait=False, logs=False, ) run_explainability_job_name = clarify_processor.latest_job.job_name run_explainability_job_name from IPython.core.display import display, HTML display( HTML( '<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/{}">Processing Job</a></b>'.format( region, run_explainability_job_name ) ) ) from IPython.core.display import display, HTML display( HTML( '<b>Review <a target="blank" href="https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix">CloudWatch Logs</a> After About 5 Minutes</b>'.format( region, run_explainability_job_name ) ) ) from IPython.core.display import display, HTML display( HTML( '<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}?prefix={}/">S3 Output Data</a> After The Processing Job Has Completed</b>'.format( bucket, explainability_report_prefix ) ) ) from pprint import pprint running_processor = sagemaker.processing.ProcessingJob.from_processing_name( processing_job_name=run_explainability_job_name, sagemaker_session=sess ) processing_job_description = running_processor.describe() pprint(processing_job_description) running_processor.wait(logs=False) ``` # Download Report From S3 ``` !aws s3 ls $explainability_output_path/ !aws s3 cp --recursive $explainability_output_path ./explainability_report/ from IPython.core.display import display, HTML display(HTML('<b>Review <a target="blank" href="./explainability_report/report.html">Explainability Report</a></b>')) ``` # View the Explainability Report As with the bias report, you can view the explainability report in Studio under the experiments tab <img src="img/explainability_detail.gif"> The Model Insights tab contains direct links to the report and model insights. If you're not a Studio user yet, as with the Bias Report, you can access this report at the following S3 bucket. # Release Resources ``` %%html <p><b>Shutting down your kernel for this notebook to release resources.</b></p> <button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button> <script> try { els = document.getElementsByClassName("sm-command-button"); els[0].click(); } catch(err) { // NoOp } </script> ```
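Beyond the HTML report, the Clarify job also writes machine-readable output to the same S3 prefix. The sketch below assumes an `analysis.json` file is present in the local `./explainability_report/` directory created by the download step above; the exact JSON layout can vary across Clarify versions, so the code only inspects top-level keys rather than relying on a specific schema.

```
import json
from pathlib import Path

analysis_path = Path("./explainability_report/analysis.json")

if analysis_path.exists():
    with analysis_path.open() as f:
        analysis = json.load(f)
    # Show which sections the report contains (e.g. "explanations").
    print(list(analysis.keys()))
    # Preview the explanations section without assuming its exact structure.
    print(json.dumps(analysis.get("explanations", {}), indent=2)[:1000])
else:
    print(f"{analysis_path} not found - check that the S3 download above completed.")
```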
# Aligning Data to Darwin Core - Sampling Event with Measurement or Fact using Python Matt Biddle Adapted by Dylan Pugh November 9, 2020 # General information about this notebook This notebook was created for the IOOS DMAC Code Sprint Biological Data Session The data in this notebook were created specifically as an example and meant solely to be illustrative of the process for aligning data to the biological data standard - Darwin Core. These data should not be considered actually occurrences of species and any measurements are also contrived. This notebook is meant to provide a step by step process for taking original data and aligning it to Darwin Core This notebook is a python implementation of the R notebook [IOOS_DMAC_DataToDWC_Notebook_event.R](https://github.com/ioos/bio_data_guide/blob/master/Standardizing%20Marine%20Biological%20Data/datasets/example_script_with_fake_data/IOOS_DMAC_DataToDwC_Notebook_event.R) ``` import pandas as pd import pyworms # pip install git+git://github.com/iobis/pyworms.git import numpy as np import uuid import csv ``` Read in the raw data file. ``` url = "http://www.neracoos.org/erddap/tabledap/WBTS_CFIN_2004_2017.csv" df = pd.read_csv(url, header=[0]) df.columns ``` ## Input columns: 1. Cruise_Identification_Tag 2. CRUISE_ID 3. Station_ID 4. latitude 5. longitude 6. time 7. Cast 8. Net_Type 9. Mesh_Size 10. NET_DEPTH 11. STATION_DEPTH 12. COMMENT 13. Plankton_Net_Area 14. Volume_Filtered 15. Sample_Split 16. Sample_Dry_Weight 17. DW_G_M_2 18. Dilution_Factor 19. TOTAL_DILFACTOR_CFIN 20. Order 21. Calanus_finmarchicus_N 22. Calanus_finmarchicus_CI 23. Calanus_finmarchicus_CII 24. Calanus_finmarchicus_CIII 25. Calanus_finmarchicus_CIV 26. Calanus_finmarchicus_CV 27. Calanus_finmarchicus_F 28. Calanus_finmarchicus_M ## Mappings: **Event Table** | *Origin Term* | *DwC_term(s)* | *Notes* | |---------------------------|--------------------------------------------|------------------------| | Cruise_Identification_Tag | eventID | eventID | | CRUISE_ID | eventID | contained in eventID | | Station_ID | eventID | contained in eventID | | cast | eventID | contained in eventID | | latitude | decimalLatitude | | | longitude | decimalLongitude | | | STATION_DEPTH | minimumDepthInMeters, maximumDepthInMeters | | | time | eventDate | | | | geodeticDatum | added programatically | | | samplingProtocol | added manually | **Occurrence Table** The `Calanus_finmarchius_*` readings are split into individual records, with the following fields added: | *Origin Term* | *DwC_term(s)* | *Notes* | |-----------------------|--------------------------|-----------------------------------| | `Calanus_finmarchius_*` | individualCount | original value under each column | | | scientificName | derived from original column name | | | occurrenceStatus | added programatically | | | lifeStage | derived from original column name | | | sex | derived from original column name | | | acceptedname | programatic pyworms lookup | | | acceptedID | programatic pyworms lookup | | | scientificNameID | programatic pyworms lookup | | | kingdom | programatic pyworms lookup | | | phylum | programatic pyworms lookup | | | class | programatic pyworms lookup | | | order | programatic pyworms lookup | | | family | programatic pyworms lookup | | | genus | programatic pyworms lookup | | | scientificNameAuthorship | programatic pyworms lookup | | | taxonRank | programatic pyworms lookup | | | basisOfRecord | added programatically | **Measurement or Fact Table** Each entry in this table has the following fields: 1. 
measurementType 2. measurementTypeID 3. measurementValue 4. measurementUnit 5. measurementUnitID 6. measurementAccuracy 7. measurementDeterminedDate 8. measurementMethod 9. measurementRemark This table shows the mapping from the origin term to the BODC NERC vocabulary term: | *Origin Term* | *BODC NERC vocabulary/measurementTypeID* | *URI* | |----------------------|-----------------------------------------------------------------------|----------------------------------------------------------------------| | Net_Type | plankton net | [22](http://vocab.nerc.ac.uk/collection/L05/current/22/) | | Mesh_Size | Sampling net mesh size | [Q0100015](http://vocab.nerc.ac.uk/collection/Q01/current/Q0100015/) | | NET_DEPTH | Depth (spatial coordinate) of sampling event start | [DXPHPRST](http://vocab.nerc.ac.uk/collection/P01/current/DXPHPRST/) | | COMMENT | N/A (mapped to measurementRemark field above) | N/A | | Plankton_Net_Area | Sampling device aperture surface area | [Q0100017](http://vocab.nerc.ac.uk/collection/Q01/current/Q0100017/) | | Volume_Filtered | Volume | [VOL](http://vocab.nerc.ac.uk/collection/P25/current/VOL/) | | Sample_Split | N/A (information added to measurementRemark field above) | N/A | | Sample_Dry_Weight | Dry weight biomass | [ODRYBM01](http://vocab.nerc.ac.uk/collection/P01/current/ODRYBM01/) | | DW_G_M_2 | Dry weight biomass | [ODRYBM01](http://vocab.nerc.ac.uk/collection/P01/current/ODRYBM01) | | Dilution_Factor | ??? | ??? | | TOTAL_DILFACTOR_CFIN | ??? | ??? | First we need to to decide if we will provide an occurrence only version of the data or a sampling event with measurement or facts version of the data. Occurrence only is easier to create. It's only one file to produce. However, several pieces of information will be left out if we choose that option. If we choose to do sampling event with measurement or fact we'll be able to capture all of the data in the file creating a lossless version. Here we decide to use the sampling event option to include as much information as we can. First let's create the eventID and occurrenceID in the original file so that information can be reused for all necessary files down the line. Luckily, our data already has an appropriate eventID in the `Cruise_Identification_Tag` field, so we'll use that. ``` df['eventID'] = df['Cruise_Identification_Tag'] df['occurrenceID'] = uuid.uuid4() ``` # Event file We will need to create three separate files to comply with the sampling event format. We'll start with the event file but we only need to include the columns that are relevant to the event file. ``` event = df[['time', 'latitude', 'longitude', 'NET_DEPTH', 'STATION_DEPTH', 'eventID']].copy() ``` Next we need to rename any columns of data that match directly to Darwin Core. We know this based on our crosswalk spreadsheet CrosswalkToDarwinCore.csv ``` event['eventDate'] = event['time'] event['decimalLatitude'] = event['latitude'] event['decimalLongitude'] = event['longitude'] event['minimumDepthInMeters'] = event['NET_DEPTH'] event['maximumDepthInMeters'] = event['NET_DEPTH'] ``` Let's see how it looks: ``` event.head() ``` We will also have to add any missing required fields ``` # this is a guess event['geodeticDatum'] = 'EPSG:4326 WGS84' # this is found in the metadata event['samplingProtocol'] = 'Mesh net cast' ``` Then we'll remove any columns that we no longer need to clean things up a bit. ``` event.drop( columns=['latitude', 'longitude', 'NET_DEPTH', 'time'], inplace=True) ``` We have too many repeating rows of information. 
We can pare this down using `eventID`, which is a unique identifier for each sampling event in the data.

```
event.drop_duplicates(
    subset='eventID',
    inplace=True)

event.head(6)
```

Finally we write out the event file

```
event.to_csv(
    'data/processed/WBTS_CFIN_2004_2017_event_frompy.csv',
    header=True,
    index=False,)
```

# Occurrence file

Next we need to create the occurrence file. We start by examining the structure (columns) of the source data. The goal here is to assess what kind of conversion (if any) will be necessary for Darwin Core alignment.

```
df.head(10)
```

In this case, the `Calanus_finmarchicus` columns need to be converted into a more suitable format. We need to iterate through the existing data row by row - the goal is to create five new columns: `scientificName`, `lifeStage`, `sex`, `occurrenceStatus`, & `individualCount`.

We start by isolating the records that have valid data. We define the columns we want to check against as `target_data_columns`, and then create a new dataframe `calanus_records` by retaining only records where at least one of the columns has a value of NOT `0` AND NOT `NaN`. We also drop the units row, which contains unit information, to avoid confusing the parser.

```
target_data_columns = ['Calanus_finmarchicus_N', 'Calanus_finmarchicus_CI', 'Calanus_finmarchicus_CII',
                       'Calanus_finmarchicus_CIII', 'Calanus_finmarchicus_CIV', 'Calanus_finmarchicus_CV',
                       'Calanus_finmarchicus_F', 'Calanus_finmarchicus_M']

# drop units row from calanus records
calanus_records = df.iloc[1:, :]
```

The challenge is that, in its current form, each row actually represents between 0 and 8 discrete occurrences. This isn't suitable for Darwin Core, so we need to read each row, and then split its data into new records, each representing an occurrence event.

This is a little tricky, so we'll create a helper method `enumerate_row` which takes a row (a `pandas.Series` object in practice) and makes the appropriate transformations.

```
def enumerate_row(row, field):
    # expands rows which contain multiple observations into discrete records
    row_data = row[1]
    calanus_count = row_data[field]

    # convert to dict so we can mutate
    enumerated_row = row_data.to_dict()

    split_species = field.rsplit('_', 1)
    scientific_name = split_species[0].replace('_', ' ')
    life_stage = split_species[1]

    # add count of specified species as a new column
    enumerated_row['individualCount'] = calanus_count
    enumerated_row['scientificName'] = scientific_name
    enumerated_row['occurrenceStatus'] = 'present' if pd.to_numeric(calanus_count) > 0 and calanus_count != 'NaN' else 'absent'

    life_stage = field.rsplit('_', 1)[1]
    if life_stage == 'N':
        life_stage = 'Nauplius'

    enumerated_row['lifeStage'] = life_stage if life_stage != 'F' and life_stage != 'M' else 'adult'

    # this is consistent across records
    enumerated_row['basisOfRecord'] = 'HumanObservation'

    if life_stage == 'F':
        enumerated_row['sex'] = 'female'
    elif life_stage == 'M':
        enumerated_row['sex'] = 'male'
    else:
        enumerated_row['sex'] = 'NA'

    return enumerated_row
```

The next step is to loop through the target data. The top-level control variable is the list of the columns we wish to enumerate, so we will look for each target column in each row of the dataset.

*note*: This operation could easily become costly depending on the number of rows and target columns

```
enumerated_rows = []

# loop through target column list
for field in target_data_columns:
    # now enumerate each input row, extracting the values
    for row in calanus_records.iterrows():
        flipped_row = enumerate_row(row, field)

        # delete other calanus records from flipped row
        for k in target_data_columns:
            flipped_row.pop(k, None)

        enumerated_rows.append(flipped_row)
```

A little bit of clean up:

```
# now convert the list of dicts into a dataframe
output_frame = pd.DataFrame.from_dict(enumerated_rows)

# sort by time, ascending
output_frame.sort_values(by='time', ascending=True, inplace=True)
```

Now our data should be in a more suitable format, so we can proceed. We start by creating a new occurrence data frame with the relevant fields.

```
occurrence = output_frame[['scientificName', 'eventID', 'occurrenceID', 'individualCount',
                           'occurrenceStatus', 'lifeStage', 'sex']].copy()
```

## Taxonomic Name Matching

A requirement for OBIS is that all scientific names match to the World Register of Marine Species (WoRMS) and a scientificNameID is included. A scientificNameID looks like this "urn:lsid:marinespecies.org:taxname:275730" with the last digits after the colon being the WoRMS aphia ID. We'll need to go out to WoRMS to grab this information.

Create a lookup table of unique scientific names

```
lut_worms = pd.DataFrame(
    columns=['scientificName'],
    data=occurrence['scientificName'].unique())
```

Add the columns that we can grab information from WoRMS including the required scientificNameID.

```
headers = ['acceptedname', 'acceptedID', 'scientificNameID', 'kingdom', 'phylum', 'class', 'order',
           'family', 'genus', 'scientificNameAuthorship', 'taxonRank']

for head in headers:
    lut_worms[head] = ''
```

Taxonomic lookup using the library [pyworms](https://github.com/iobis/pyworms)

```
for index, row in lut_worms.iterrows():
    print('Searching for scientific name = %s' % row['scientificName'])
    resp = pyworms.aphiaRecordsByMatchNames(row['scientificName'])[0][0]
    lut_worms.loc[index, 'acceptedname'] = resp['valid_name']
    lut_worms.loc[index, 'acceptedID'] = resp['valid_AphiaID']
    lut_worms.loc[index, 'scientificNameID'] = resp['lsid']
    lut_worms.loc[index, 'kingdom'] = resp['kingdom']
    lut_worms.loc[index, 'phylum'] = resp['phylum']
    lut_worms.loc[index, 'class'] = resp['class']
    lut_worms.loc[index, 'order'] = resp['order']
    lut_worms.loc[index, 'family'] = resp['family']
    lut_worms.loc[index, 'genus'] = resp['genus']
    lut_worms.loc[index, 'scientificNameAuthorship'] = resp['authority']
    lut_worms.loc[index, 'taxonRank'] = resp['rank']
```

Merge the lookup table of unique scientific names back with the occurrence data.

```
occurrence = pd.merge(occurrence, lut_worms, how='left', on='scientificName')
```

Quick look at what we have before we write out the file

```
occurrence.head()
```

Write out the file.

```
# sort the rows by scientificName
occurrence.sort_values('scientificName', inplace=True)

# reorganize column order to be consistent with R example:
columns = ["scientificName", "eventID", "occurrenceID", "occurrenceStatus", "acceptedname", "acceptedID",
           "scientificNameID", "kingdom", "phylum", "class", "order", "family", "genus",
           "scientificNameAuthorship", "taxonRank"]

occurrence.to_csv(
    "data/processed/WBTS_CFIN_2004_2017_occurrence_frompy.csv",
    header=True,
    index=False,
    quoting=csv.QUOTE_ALL,
    columns=columns)
```

All done with occurrence!
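One thing to keep in mind: `df['occurrenceID'] = uuid.uuid4()` earlier in the notebook broadcasts a single UUID to every row, so every occurrence record ends up sharing the same identifier. If each occurrence is meant to be individually resolvable, a per-row identifier can be generated instead; a minimal sketch using the `uuid` module already imported above:

```
# Give every occurrence record its own identifier rather than one shared UUID.
occurrence['occurrenceID'] = [str(uuid.uuid4()) for _ in range(len(occurrence))]
```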
# Measurement Or Fact

The last file we need to create is the measurement or fact file. For this we need to combine all of the measurements or facts that we want to include, making sure to include IDs from the BODC NERC vocabulary where possible.

Now we'll manually map the remaining variables to the BODC NERC vocabulary when possible. For now we're mapping the following metadata for each field:

1. uri -> URL of the concept page on the NERC Vocabulary Server (NVS)
2. unit
3. unitId -> URL of the unit ID page on NVS
4. accuracy
5. type -> measurement type

```
df.columns

vocab_url_prefix = 'http://vocab.nerc.ac.uk/collection/'

column_mappings = {
    'Net_Type': {'uri': 'L05/current/22/', 'unit': '', 'unitID': '', 'accuracy': '', 'type': ''},
    'Mesh_Size': {'uri': 'Q01/current/Q0100015/', 'unit': 'microns', 'unitID': 'P06/current/UMIC/', 'accuracy': '', 'type': ''},
    'NET_DEPTH': {'uri': 'P01/current/DXPHPRST/', 'unit': 'm', 'unitID': 'P06/current/UPAA/', 'accuracy': '', 'type': ''},
    'Plankton_Net_Area': {'uri': 'Q01/current/Q0100017/', 'unit': 'm2', 'unitID': 'P06/current/UPAA/', 'accuracy': '', 'type': ''},
    'Volume_Filtered': {'uri': 'P25/current/VOL/', 'unit': 'm3', 'unitID': 'P06/current/UPAA/', 'accuracy': '', 'type': ''},
    'Sample_Dry_Weight': {'uri': 'P01/current/ODRYBM01/', 'unit': 'g', 'unitID': 'P06/current/UGRM/', 'accuracy': '', 'type': ''},
    'DW_G_M_2': {'uri': 'P01/current/ODRYBM01/', 'unit': 'g/m2', 'unitID': 'P06/current/UGMS/', 'accuracy': '', 'type': ''},
    'Dilution_Factor': {'uri': '', 'unit': 'ml', 'unitID': 'P06/current/VVML/', 'accuracy': '', 'type': ''},
    'TOTAL_DILFACTOR_CFIN': {'uri': '', 'unit': 'ml', 'unitID': 'P06/current/VVML/', 'accuracy': '', 'type': ''},
}
```

Now we'll loop through the mapping list and transform as needed.

```
frames_to_concat = []

for current_field in column_mappings:
    current_mapping = column_mappings.get(current_field)

    current_df = df[['eventID', current_field, 'time', 'COMMENT', 'Sample_Split']].copy()

    # drop units row here
    current_df = current_df.iloc[1:, :]

    current_df['occurrenceID'] = ''
    current_df['measurementType'] = current_mapping.get('type')
    current_df['measurementTypeID'] = vocab_url_prefix + current_mapping.get('uri')
    current_df['measurementValue'] = current_df[current_field]
    current_df['measurementUnit'] = current_mapping.get('unit')
    current_df['measurementUnitID'] = vocab_url_prefix + current_mapping.get('unitID') if current_mapping.get('unitID') else ''
    current_df['measurementAccuracy'] = current_mapping.get('accuracy')
    current_df['measurementDeterminedDate'] = current_df['time']

    current_df.drop(
        columns=[current_field, 'time'],
        inplace=True)

    frames_to_concat.append(current_df)
```

Concatenate all measurements or facts together.

```
measurementorfact = pd.concat(frames_to_concat)
```

Let's check to see what it looks like

```
measurementorfact.head(50)
```

Now we need to add in the remaining fields:

1. `measurementMethod`
2. `measurementRemark`

```
# this is a constant value as described in the metadata
measurementorfact['measurementMethod'] = 'Net used: 0.75 meter diameter single ring or a SEA-GEAR Model 9600 twin-ring, 200µm mesh'

# this is a constant value, PLUS anything in the 'COMMENT' field for a given occurrence
measurementorfact['measurementRemark'] = 'Note: no matching NERC vocabulary URI for sampling device. Comments: ' + df['COMMENT'].astype(str)

# drop COMMENT column as we don't need it
measurementorfact.drop(columns=['COMMENT'], inplace=True)
```

Write measurement or fact file

```
measurementorfact.to_csv('data/processed/WBTS_CFIN_2004_2017_mof_frompy.csv', index=False, header=True)
```
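Since the event, occurrence and measurement-or-fact files are written separately, a quick consistency check before publishing can catch events that lost their occurrences or measurements along the way. A minimal sketch, assuming the three CSVs written above are on disk:

```
import pandas as pd

event_ids = {
    name: set(pd.read_csv(path)['eventID'])
    for name, path in {
        'event': 'data/processed/WBTS_CFIN_2004_2017_event_frompy.csv',
        'occurrence': 'data/processed/WBTS_CFIN_2004_2017_occurrence_frompy.csv',
        'mof': 'data/processed/WBTS_CFIN_2004_2017_mof_frompy.csv',
    }.items()
}

# Every eventID referenced by the extension files should exist in the event core.
for name in ('occurrence', 'mof'):
    missing = event_ids[name] - event_ids['event']
    print(f"{name}: {len(missing)} eventIDs missing from the event file")
```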
``` # default_exp predict ``` # Predict >This module has functions to generate the burned areas predictions. ``` # export from fastai.vision import * import scipy.io as sio import sys from tqdm import tqdm import scipy.ndimage as ndimage from banet.core import * from banet.models import BA_Net # hide from nbdev.showdoc import show_doc from nbdev.export import notebook2script # export def open_mat(fn, slice_idx=None, *args, **kwargs): data = sio.loadmat(fn) data = np.array([data[r] for r in ['Red', 'NIR', 'MIR', 'FRP']]) data[np.isnan(data)] = 0 data[-1, ...] = np.log1p(data[-1,...]) data[np.isnan(data)] = 0 if slice_idx is not None: return data[:, slice_idx[0]:slice_idx[1], slice_idx[2]:slice_idx[3]] return data def crop(im, r, c, size=128): ''' crop image into a square of size sz, ''' sz = size out_sz = (sz, sz, im.shape[-1]) rs,cs,hs = im.shape tile = np.zeros(out_sz) if (r+sz > rs) and (c+sz > cs): tile[:rs-r, :cs-c, :] = im[r:, c:, :] elif (r+sz > rs): tile[:rs-r, :, :] = im[r:, c:c+sz, :] elif (c+sz > cs): tile[:, :cs-c, :] = im[r:r+sz ,c:, :] else: tile[...] = im[r:r+sz, c:c+sz, :] return tile def image2tiles(x, step=100): tiles = [] rr, cc, _ = x.shape for c in range(0, cc-1, step): for r in range(0, rr-1, step): img = crop(x, r, c) tiles.append(img) return np.array(tiles) def tiles2image(tiles, image_size, size=128, step=100): rr, cc, = image_size sz = size im = np.zeros(image_size) indicator = np.zeros_like(im).astype(float) k = 0 for c in range(0, cc-1, step): for r in range(0, rr-1, step): if (r+sz > rr) and (c+sz > cc): im[r:, c:] += tiles[k][:rr-r, :cc-c] indicator[r:, c:] += 1 elif (r+sz > rr): im[r:, c:c+sz] += tiles[k][:rr-r, :] indicator[r:, c:c+sz] += 1 elif (c+sz > cc): im[r:r+sz ,c:] += tiles[k][:, :cc-c] indicator[r:r+sz, c:] += 1 else: im[r:r+sz, c:c+sz] += tiles[k] indicator[r:r+sz, c:c+sz] += 1 k += 1 im /= indicator return im def get_preds(tiles, model, weights=None): if weights is not None: model.load_state_dict(weights) mu = tensor([0.2349, 0.3548, 0.1128, 0.0016]).view(1,4,1,1,1) std = tensor([0.1879, 0.1660, 0.0547, 0.0776]).view(1,4,1,1,1) with torch.no_grad(): data = [] for x in tqdm(tiles): model.eval() if torch.cuda.is_available(): model.cuda() x = (x[None]-mu)/std if torch.cuda.is_available(): x = x.cuda() out = model(x).sigmoid().float() data.append(out.cpu().squeeze().numpy()) return np.array(data) def predict_one(iop:InOutPath, times:list, weights_files:list, region:str, threshold=0.5, slice_idx=None, product='VIIRS750'): fname = lambda t : iop.src/f'{product}{region}_{t.strftime("%Y%m%d")}.mat' files = [fname(t) for t in times] im_size = open_mat(files[0], slice_idx=slice_idx).shape[1:] tiles = [] print('Loading data and generating tiles:') for file in tqdm(files): try: s = image2tiles(open_mat(file, slice_idx=slice_idx).transpose((1,2,0))).transpose((0,3,1,2)) except: warn(f'No data for {file}') s = np.zeros_like(s) tiles.append(s) tiles = np.array(tiles).transpose((1, 2, 0, 3, 4)) tiles = torch.from_numpy(tiles).float() preds_ens = [] for wf in weights_files: if torch.cuda.is_available(): weights = torch.load(wf) else: weights = torch.load(wf, map_location=torch.device('cpu')) if 'model' in weights: weights = weights['model'] print(f'Generating model predictions for {wf}:') preds = get_preds(tiles, model=BA_Net(4, 1, 64), weights=weights) preds = np.array([tiles2image(preds[:,i], im_size) for i in range(preds.shape[1])]) preds_ens.append(preds) preds = np.array(preds_ens).mean(0) return preds def predict_time(path:InOutPath, times:list, 
weight_files:list, region, threshold=0.05, save=True, max_size=2000, buffer=128, product='VIIRS750', output='data'): tstart, tend = times.min(), times.max() tstart = tstart + pd.Timedelta(days=32) tend = tend-pd.Timedelta(days=32) tstart = pd.Timestamp(f'{tstart.year}-{tstart.month}-01') tend = pd.Timestamp(f'{tend.year}-{tend.month}-01') ptimes = pd.date_range(tstart, tend, freq='MS') preds_all = [] si = [[max(0,j*max_size-buffer), (j+1)*max_size+buffer, max(0,i*max_size-buffer), (i+1)*max_size+buffer] for i in range(region.shape[1]//max_size+1) for j in range(region.shape[0]//max_size+1)] bas, bds = [], [] for i, split in progress_bar(enumerate(si), total=len(si)): print(f'Split {split}') preds_all = [] for time in ptimes: time_start = pd.Timestamp((time - pd.Timedelta(days=30)).strftime('%Y-%m-15')) # Day 15, previous month times = pd.date_range(time_start, periods=64, freq='D') preds = predict_one(path, times, weight_files, region.name, slice_idx=split, product=product) preds = preds[times.month == time.month] preds_all.append(preds) preds_all = np.concatenate(preds_all, axis=0) ba = preds_all.sum(0) ba[ba>1] = 1 ba[ba<threshold] = np.nan bd = preds_all.argmax(0) bd = bd.astype(float) bd[np.isnan(ba)] = np.nan #sio.savemat(path.dst/f'data_{i}.mat', {'burndate': bd, 'burnconf': ba}, do_compression=True) bas.append(ba) bds.append(bd) ba_all = np.zeros(region.shape) bd_all = np.zeros_like(ba_all) for i, split_idx in enumerate(si): ba_all[split_idx[0]:split_idx[1], split_idx[2]:split_idx[3]] = bas[i] bd_all[split_idx[0]:split_idx[1], split_idx[2]:split_idx[3]] = bds[i] if not save: return ba_all, bd_all sio.savemat(path.dst/f'{output}.mat', {'burndate': bd_all, 'burnconf': ba_all}, do_compression=True) def predict_month(iop, time, weight_files, region, threshold=0.5, save=True, slice_idx=None): time_start = pd.Timestamp((time - pd.Timedelta(days=30)).strftime('%Y-%m-15')) # Day 15, previous month times = pd.date_range(time_start, periods=64, freq='D') preds = predict_one(iop, times, weight_files, region, threshold=threshold, slice_idx=slice_idx) assert preds.shape[0] == len(times) preds = preds[times.month == time.month] ba = preds.sum(0) bd = preds.argmax(0) doy = np.asarray(pd.DatetimeIndex(times).dayofyear) bd = doy[bd].astype(float) bd[bd==doy[0]] = np.nan bd[ba<threshold] = np.nan ba[ba<threshold] = np.nan ba[ba>1] = 1 if not save: return ba, bd tstr = time.strftime('%Y%m') sio.savemat(iop.dst/f'ba_{region}_{tstr}.mat', {'burned': ba, 'date': bd}, do_compression=True) def predict_nrt(iop, time, weights_files, region, threshold=0.5, save=True): times = pd.date_range(time-pd.Timedelta(days=63), time, freq='D') preds = predict_one(iop, times, weights_files, region, threshold=threshold) assert preds.shape[0] == len(times) ba = preds.sum(0) bd = preds.argmax(0) doy = np.asarray(pd.DatetimeIndex(times).dayofyear) bd = doy[bd].astype(float) bd[bd==doy[0]] = np.nan bd[ba<threshold] = np.nan ba[ba<threshold] = np.nan ba[ba>1] = 1 if not save: return ba, bd tstr = time.strftime('%Y%m%d') sio.savemat(iop.dst/f'ba_{tstr}.mat', {'burned': ba, 'date': bd}, do_compression=True) def split_mask(mask, thr=0.5, thr_obj=1): labled, n_objs = ndimage.label(mask > thr) result = [] for i in range(n_objs): obj = (labled == i + 1).astype(int) if (obj.sum() > thr_obj): result.append(obj) return result # hide notebook2script() ```
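The tiling helpers above are easy to sanity-check in isolation: because `tiles2image` averages the overlapping contributions that `image2tiles` produced, a tile-and-reassemble round trip should reproduce the input almost exactly. A small sketch, assuming `image2tiles` and `tiles2image` from this module are in scope:

```
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((300, 250, 4))   # (rows, cols, channels), like the .mat inputs
tiles = image2tiles(img)          # default 128-pixel tiles with a 100-pixel step

# Reassemble channel by channel, since tiles2image works on 2D tiles.
rebuilt = np.stack(
    [tiles2image(tiles[..., c], img.shape[:2]) for c in range(img.shape[-1])],
    axis=-1,
)
print(np.allclose(rebuilt, img))  # expect True: overlaps are averaged back out
```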
# Lab 7

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import altair as alt

from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score

alt.themes.enable('opaque')
%matplotlib inline
```

In this lab we will use the same diabetes data seen in class.

```
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True, as_frame=True)
diabetes = pd.concat([diabetes_X, diabetes_y], axis=1)
diabetes.head()

diabetes.apply(np.linalg.norm)
```

## Question 1 (1 pt)

* Why does the sex column have those values?
* Which column are we trying to predict?
* Do you think it is necessary to scale or transform the data before starting the modelling?

__Answer:__

* Because the columns are already normalised.
* We want to predict the target column.
* No, since the previous line of code shows that this has already been done (the data is already normalised).

## Question 2 (1 pt)

Run two linear regressions using all of the _features_, the first including an intercept and the second without one. Then obtain the predictions in order to compute the mean squared error and the coefficient of determination of each of them.

```
from sklearn.linear_model import LinearRegression

regr_with_incerpet = LinearRegression(fit_intercept=True)
regr_with_incerpet.fit(diabetes_X, diabetes_y)
diabetes_y_pred_with_intercept = regr_with_incerpet.predict(diabetes_X)

# Coefficients
print(f"Coefficients: \n{regr_with_incerpet.coef_.T}\n")
# Intercept
print(f"Intercept: \n{regr_with_incerpet.intercept_}\n")
# Mean squared error
print(f"Mean squared error: {mean_squared_error(diabetes_y_pred_with_intercept, diabetes_y):.2f}\n")
# Coefficient of determination
print(f"Coefficient of determination: {r2_score(diabetes_y,diabetes_y_pred_with_intercept):.2f}")

regr_without_incerpet = LinearRegression(fit_intercept=False)
regr_without_incerpet.fit(diabetes_X, diabetes_y)
diabetes_y_pred_without_intercept = regr_without_incerpet.predict(diabetes_X)

# Coefficients
print(f"Coefficients: \n{regr_without_incerpet.coef_.T}\n")
# Mean squared error
print(f"Mean squared error: {mean_squared_error(diabetes_y_pred_without_intercept,diabetes_y):.2f}\n")
# Coefficient of determination
print(f"Coefficient of determination: {r2_score(diabetes_y,diabetes_y_pred_without_intercept):.2f}")
```

**Question: How good was the model fit?**

__Answer:__ In the second case the fit is very poor, even compared with the first one; concretely, a large error and $r^2 \sim -3$. The first fit is not great either ($r^2 \sim 0.5$). The only redeeming point is that it is better than the second one (without intercept).

## Question 3 (1 pt)

Run multiple linear regressions using a single _feature_ at a time. In each iteration:

- Create an array `X` with only one feature by filtering `X`.
- Create a linear regression model with an intercept.
- Fit the model.
- Generate a prediction with the model.
- Compute and print the metrics from the previous question.
```
for col in diabetes:
    X_i = diabetes[col].reset_index().set_index('index')
    regr_i = LinearRegression(fit_intercept=True)
    regr_i.fit(X_i, diabetes_y)
    diabetes_y_pred_i = regr_i.predict(X_i)

    print(f"Feature: {col}")
    print(f"\tCoefficients: { regr_i.coef_.T}")
    print(f"\tIntercept: {regr_i.intercept_}")
    print(f"\tMean squared error: { mean_squared_error(diabetes_y_pred_i, diabetes_y):.2f}")
    print(f"\tCoefficient of determination: {r2_score(diabetes_y,diabetes_y_pred_i):.2f}\n")
```

**Question: If you had to choose a single _feature_, which one would it be? Why?**

| Feature | Description |
| :------------- | :----------: |
| age | age in years|
| sex | sex |
| bmi | body mass index|
| bp | average blood pressure|
| s1 | tc, T-Cells (a type of white blood cells)|
| s2 | ldl, low-density lipoproteins|
| s3 | hdl, high-density lipoproteins|
| s4 | tch, thyroid stimulating hormone|
| s5 | ltg, lamotrigine|
| s6 | glu, blood sugar level|

**Answer:** bmi (body mass index) has the highest coefficient of determination, so I would choose that one. We could also consider s5: there is not much difference, and besides, the coefficients are fairly low overall. It also has similar errors compared with the others.

## Exercise 4 (1 pt)

With the feature chosen in exercise 3, produce the following plot:

- Scatter plot
- X axis: values of the chosen feature.
- Y axis: values of the column to predict (target).
- In red, draw the line corresponding to the linear regression (using `intercept_` and `coefs_`).
- Add a suitable title, axis names, etc.

You can use `matplotlib` or `altair`, whichever you prefer.

```
# we will use altair
import altair as alt
from vega_datasets import data

X_i = diabetes['bmi'].reset_index().set_index('index')
regr = linear_model.LinearRegression().fit(X_i, diabetes_y)
intercept_y = regr.intercept_
coefs_ = regr.coef_

points = alt.Chart(diabetes).mark_point(size=20).encode(
    x='bmi',
    y='target',
).properties(
    title='Regresion lineal bmi vs target'
)

x = np.linspace(-0.1, 0.2, 100)
source = pd.DataFrame({
    'x': x,
    'f(x)': x*coefs_+intercept_y
})

line = alt.Chart(source).mark_line().encode(
    x='x',
    y='f(x)',
    color=alt.value("red")
)

points + line
```
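Since the lab leans entirely on scikit-learn, it can be instructive to cross-check the single-feature fit against the closed-form least-squares solution: for simple linear regression the slope is $\mathrm{cov}(x, y)/\mathrm{var}(x)$ and the intercept is $\bar{y} - \text{slope}\cdot\bar{x}$. A minimal sketch, assuming `diabetes_X` and `diabetes_y` from the cells above are in scope:

```
import numpy as np

x = diabetes_X["bmi"].to_numpy()
y = diabetes_y.to_numpy()

# Closed-form simple linear regression for the 'bmi' feature.
slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
intercept = y.mean() - slope * x.mean()

# These should match regr.coef_ and regr.intercept_ from the plotting cell above.
print(f"slope={slope:.2f}, intercept={intercept:.2f}")
```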
## Tabular data handling This module defines the main class to handle tabular data in the fastai library: [`TabularDataBunch`](/tabular.data.html#TabularDataBunch). As always, there is also a helper function to quickly get your data. To allow you to easily create a [`Learner`](/basic_train.html#Learner) for your data, it provides [`tabular_learner`](/tabular.data.html#tabular_learner). ``` from fastai.gen_doc.nbdoc import * from fastai.tabular import * show_doc(TabularDataBunch) ``` The best way to quickly get your data in a [`DataBunch`](/basic_data.html#DataBunch) suitable for tabular data is to organize it in two (or three) dataframes. One for training, one for validation, and if you have it, one for testing. Here we are interested in a subsample of the [adult dataset](https://archive.ics.uci.edu/ml/datasets/adult). ``` path = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(path/'adult.csv') valid_idx = range(len(df)-2000, len(df)) df.head() cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country'] dep_var = 'salary' ``` The initialization of [`TabularDataBunch`](/tabular.data.html#TabularDataBunch) is the same as [`DataBunch`](/basic_data.html#DataBunch) so you really want to use the factory method instead. ``` show_doc(TabularDataBunch.from_df) ``` Optionally, use `test_df` for the test set. The dependent variable is `dep_var`, while the categorical and continuous variables are in the `cat_names` columns and `cont_names` columns respectively. If `cont_names` is None then we assume all variables that aren't dependent or categorical are continuous. The [`TabularProcessor`](/tabular.data.html#TabularProcessor) in `procs` are applied to the dataframes as preprocessing, then the categories are replaced by their codes+1 (leaving 0 for `nan`) and the continuous variables are normalized. Note that the [`TabularProcessor`](/tabular.data.html#TabularProcessor) should be passed as `Callable`: the actual initialization with `cat_names` and `cont_names` is done during the preprocessing. ``` procs = [FillMissing, Categorify, Normalize] data = TabularDataBunch.from_df(path, df, dep_var, valid_idx=valid_idx, procs=procs, cat_names=cat_names) ``` You can then easily create a [`Learner`](/basic_train.html#Learner) for this data with [`tabular_learner`](/tabular.data.html#tabular_learner). ``` show_doc(tabular_learner) ``` `emb_szs` is a `dict` mapping categorical column names to embedding sizes; you only need to pass sizes for columns where you want to override the default behaviour of the model. ``` show_doc(TabularList) ``` Basic class to create a list of inputs in `items` for tabular data. `cat_names` and `cont_names` are the names of the categorical and the continuous variables respectively. `processor` will be applied to the inputs or one will be created from the transforms in `procs`. ``` show_doc(TabularList.from_df) show_doc(TabularList.get_emb_szs) show_doc(TabularList.show_xys) show_doc(TabularList.show_xyzs) show_doc(TabularLine, doc_string=False) ``` An object that will contain the encoded `cats`, the continuous variables `conts`, the `classes` and the `names` of the columns. This is the basic input for a dataset dealing with tabular data. ``` show_doc(TabularProcessor) ``` Create a [`PreProcessor`](/data_block.html#PreProcessor) from `procs`. 
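To tie the pieces above together end to end, here is a minimal sketch of building and training a learner on the `DataBunch` created earlier. The layer sizes, the embedding size for `native-country`, and the single training epoch are illustrative choices rather than recommendations.

```
# Build a tabular learner on the adult-sample DataBunch from the cells above
# and train briefly; `accuracy` comes in with the fastai.tabular import.
learn = tabular_learner(data, layers=[200, 100], emb_szs={'native-country': 10}, metrics=accuracy)
learn.fit_one_cycle(1, 1e-2)
```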
## Undocumented Methods - Methods moved below this line will intentionally be hidden ``` show_doc(TabularProcessor.process_one) show_doc(TabularList.new) show_doc(TabularList.get) show_doc(TabularProcessor.process) show_doc(TabularList.reconstruct) ``` ## New Methods - Please document or move to the undocumented section
# Tutorial 04: Dzyaloshinskii-Moriya energy term

> Interactive online tutorial:
> [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ubermag/oommfc/master?filepath=docs%2Fipynb%2Findex.ipynb)

The Dzyaloshinskii-Moriya (DMI) energy density, depending on the crystallographic class, is computed as

$$\mathbf{w_\text{dmi}} = \left\{ \begin{array}{ll} D \mathbf{m} \cdot (\nabla \times \mathbf{m}), & \text{for}\,\,T(O) \\ D ( \mathbf{m} \cdot \nabla m_{z} - m_{z} \nabla \cdot \mathbf{m}), & \text{for}\,\,C_{nv} \\ D\mathbf{m} \cdot \left( \frac{\partial \mathbf{m}}{\partial x} \times \hat{x} - \frac{\partial \mathbf{m}}{\partial y} \times \hat{y} \right), & \text{for}\,\,D_{2d} \\ \end{array} \right. $$

where $\mathbf{m}$ is the normalised ($|\mathbf{m}|=1$) magnetisation, and $D$ is the DM energy constant. The DMI energy term tends to align neighbouring magnetic moments perpendicular to each other.

In `oommfc`, $\mathbf{m}$ is a part of the magnetisation field `system.m`. Therefore, only the DMI energy constant $D$ needs to be provided as an input parameter to uniquely define the DMI energy term. $D$ can be constant in space or spatially varying.

## Spatially constant $D$

Let us start by assembling a simple simulation where $D$ does not vary in space. The sample is a "one-dimensional" chain of magnetic moments. We are going to choose $C_{nv}$ as the crystallographic class.

```
import discretisedfield as df
import micromagneticmodel as mm
import oommfc as oc
%matplotlib inline

p1 = (-10e-9, 0, 0)
p2 = (10e-9, 1e-9, 1e-9)
cell = (1e-9, 1e-9, 1e-9)
region = df.Region(p1=p1, p2=p2)
mesh = df.Mesh(region=region, cell=cell)
```

The mesh is

```
mesh.k3d()
```

The system has a Hamiltonian, which consists of only the DMI energy term.

```
D = 1e-3  # Dzyaloshinskii-Moriya energy constant (J/m**2)
system = mm.System(name="dmi_constant_D")
system.energy = mm.DMI(D=D, crystalclass="Cnv")
```

We are going to minimise the system's energy using `oommfc.MinDriver` later. Therefore, we do not have to define the system's dynamics equation. Finally, we need to define the system's magnetisation (`system.m`). We are going to make it random with $M_\text{s}=8\times10^{5} \,\text{Am}^{-1}$.

```
import random

import discretisedfield as df

Ms = 8e5  # saturation magnetisation (A/m)

def m_fun(pos):
    """Return random 3d vectors for initial random magnetisation"""
    return [2 * random.random() - 1, 2 * random.random() - 1, 2 * random.random() - 1]

system.m = df.Field(mesh, dim=3, value=m_fun, norm=Ms)
```

The magnetisation we have set as the initial value looks like:

```
system.m.k3d_vectors(color_field=system.m.z)  # k3d plot
system.m.plane("y").mpl()  # matplotlib plot
```

Now, we can minimise the system's energy by using `oommfc.MinDriver`.

```
md = oc.MinDriver()
md.drive(system)
```

We expect that now all magnetic moments are aligned orthogonally to each other.

```
system.m.k3d_vectors(color_field=system.m.z)  # k3d plot
system.m.plane("y").mpl()  # matplotlib plot
```

## Spatially varying $D$

In the case of DMI, there is only one way a parameter can be made spatially varying: using a dictionary. In order to define a parameter using a dictionary, regions must be defined in the mesh. Regions are defined as a dictionary whose keys are strings and whose values are `discretisedfield.Region` objects, which take two corner points of the region as input parameters.
``` p1 = (-10e-9, 0, 0) p2 = (10e-9, 1e-9, 1e-9) cell = (1e-9, 1e-9, 1e-9) subregions = { "region1": df.Region(p1=(-10e-9, 0, 0), p2=(0, 1e-9, 1e-9)), "region2": df.Region(p1=(0, 0, 0), p2=(10e-9, 1e-9, 1e-9)), } region = df.Region(p1=p1, p2=p2) mesh = df.Mesh(region=region, cell=cell, subregions=subregions) ``` The regions we have defined are: ``` mesh.k3d_subregions() ``` Let us say there is no DMI energy ($D=0$) in region 1, whereas in region 2 $D=10^{-3} \,\text{Jm}^{-2}$. Unlike Zeeman and anisotropy energy terms, the DMI energy constant is defined between cells. Therefore, it is necessary to also define the value of $D$ between the two regions. This is achieved by adding another item to the dictionary with key `'region1:region2'`. The object `D` is now defined as a dictionary: ``` D = {"region1": 0, "region2": 1e-3, "region1:region2": 0.5e-3} ``` The system object is ``` system = mm.System(name="dmi_dict_D") system.energy = mm.DMI(D=D, crystalclass="Cnv") system.m = df.Field(mesh, dim=3, value=m_fun, norm=Ms) ``` Its initial (and random) magnetisation is ``` system.m.k3d_vectors(color_field=system.m.z) system.m.plane("y").mpl() ``` After we minimise the energy ``` md.drive(system) ``` The magnetisation is as we expected. The magnetisation remains random in region 1, and it is orthogonally aligned in region 2. ``` system.m.k3d_vectors(color_field=system.m.z) system.m.plane("y").mpl() ```
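As an optional numerical check (not part of the original tutorial), the relaxed state can also be inspected directly from the field values. The sketch below assumes `system.m.array` follows the usual `discretisedfield` layout of shape `(nx, ny, nz, 3)` for this one-dimensional chain.

```
# Minimal sketch: angle between neighbouring moments along x after relaxation.
import numpy as np

m = system.m.array
m = m / np.linalg.norm(m, axis=-1, keepdims=True)  # normalise the magnetisation vectors

# Dot products of neighbouring cells along the x direction
dots = np.sum(m[1:, 0, 0, :] * m[:-1, 0, 0, :], axis=-1)
angles = np.degrees(np.arccos(np.clip(dots, -1, 1)))
print(angles)  # in region 2 (non-zero D) we expect angles close to 90 degrees
```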
<center><em>[Embedded base64 image omitted]</em></center>

# **<center><font color='black'>K.J Somaiya College of Engineering</font></center>**

## <center><font color='red'>Engineering Final Year Project</font></center>

## <center>**<font color='purple'>InsureBuddy - An Insurance Recommender System</font></center>**

### **Author:**
### **Sujay Torvi**
### Co-Authors:
### 1. Krupen Shah
### 2.
Harsh Somaiya ### 3. Tirth Desai ### Copyright© 2020 Under MIT License ## **<font color='purple'>`Problem Statement: To process, analyse and mine the data for useful insights in insurance product recommendation and model them using various algorithms, and deploying them into an application which would provide the user with useful insurance product recommendations`</font>** ## **V. Model Analysis** ## **Source of Dataset:** ### **Zimnnat Insurance Recommendation Dataset** URL: https://zindi.africa/competitions/zimnat-insurance-recommendation-challenge <br> **<font color='red'>Important:</font>** **<font color='purple'>Since no metadata is given for this dataset we are free to remove and impute our own attributes</font>** ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline from collections import Counter import warnings warnings.filterwarnings('ignore') from google.colab import files u = files.upload() import zipfile with zipfile.ZipFile('/content/bayesian_nn.zip', 'r') as zip_ref: zip_ref.extractall('/content/bayesian_nn') from tensorflow import keras model = keras.models.load_model('/content/bayesian_nn') model.summary() import pickle !pip install scikit-multilearn import skmultilearn clf = pickle.load(open('/content/RFClassifier.sav', 'rb')) train = pd.read_excel('Training_Data_rebalanced.xlsx') train.head() train.sex.unique() train.marital_status.unique() train.age_group.unique() train.occupation_category_code.unique() train.Annual_Income.unique() ``` ### So there can be 2 * 2 * 4 * 5 * 6 = 480 unique inputs given to the model ``` list_1 = [] list_2 = [] for i in range(0,2): for j in range(0,2): for k in range(0,4): for l in range(0,5): for m in range(0,6): list_1 = [i,j,k,l,m] list_2.append(list_1) test_arr = np.array(list_2) p1 = [] p2 = [] p3 = [] p4 = [] p5 = [] y_pred = clf.predict(test_arr) for i in range(0,480): y_pred_ = y_pred.toarray() if(y_pred_[i][0] == 1): p1.append(list_2[i]) if(y_pred_[i][1] == 1): p2.append(list_2[i]) if(y_pred_[i][2] == 1): p3.append(list_2[i]) if(y_pred_[i][3] == 1): p4.append(list_2[i]) if(y_pred_[i][4] == 1): p5.append(list_2[i]) p1,p2,p3,p4,p5 = pd.DataFrame(p1),pd.DataFrame(p2),pd.DataFrame(p3),pd.DataFrame(p4),pd.DataFrame(p5) col = ['sex', 'marital_status', 'age_group',\ 'occupation_category_code', 'Annual_Income'] def rename_cols(df,col): df.columns = col rename_cols(p1,col) rename_cols(p2,col) rename_cols(p3,col) rename_cols(p4,col) rename_cols(p5,col) def apply_1(x): if(x == 0): return 'Female' else: return 'Male' def apply_2(x): if(x == 0): return 'Married' else: return 'Single' def apply_3(x): if(x == 0): return '25-40' elif(x == 1): return '41-60' elif(x == 2): return 'Above 60' elif(x == 3): return 'Below 25' def apply_4(x): if(x == 0): return 'Corporate Employee' elif(x == 1): return 'Enterpreneur' elif(x == 2): return 'Medical Professional' elif(x == 3): return 'Military Service' elif(x == 4): return 'Self Employed' def apply_5(x): if(x == 0): return '0-5 lac' elif(x == 1): return '10-20 lac' elif(x == 2): return '20-30 lac' elif(x == 3): return '30-40 lac' elif(x == 4): return '40-50 lac' elif(x == 5): return '5-10 lac' def my_apply(df): df.sex = df.sex.apply(apply_1) df.marital_status = df.marital_status.apply(apply_2) df.age_group = df.age_group.apply(apply_3) df.occupation_category_code = df.occupation_category_code.apply(apply_4) df.Annual_Income = df.Annual_Income.apply(apply_5) my_apply(p1) my_apply(p2) my_apply(p3) my_apply(p4) my_apply(p5) plt.rcParams["font.weight"] = "bold" 
plt.rcParams["axes.labelweight"] = "bold" def policy_plot(df,policyname): print('Plotting for {}'.format(policyname)) df.sex.value_counts().plot.bar(color = ['red','purple']) plt.title('Gender count distribution for {}'.format(policyname)) plt.show() df.marital_status.value_counts().plot.bar(color = ['orange','green']) plt.title('Maritial Status count distribution for {}'.format(policyname)) plt.show() df.age_group.value_counts().plot.bar(color = ['red','maroon','gray','orange']) plt.title('Age group count distribution for {}'.format(policyname)) plt.show() df.occupation_category_code.value_counts().plot.bar(color = ['blue','red','green','orange','yellow']) plt.title('Occupation Category count distribution for {}'.format(policyname)) plt.show() df.Annual_Income.value_counts().plot.bar(color = ['gray','yellow','green','red','maroon','pink']) plt.title('Income Range count distribution for {}'.format(policyname)) plt.show() policy_plot(p1,'Policy 1') policy_plot(p2,'Policy 2') policy_plot(p3,'Policy 3') policy_plot(p4,'Policy 4') policy_plot(p5,'Policy 5') ``` ## **Patterns that can be inferred from the above predictions** ### 1. Policy 1 and 2 is recommended to most type of users, this pattern was picked up by the model from the data, however policy 1 and 2 could be suitable for more younger generation of users, females who are married. ### 2. Policy 3 tends towards male users, who are single and who are in their middle(41-60) or old age(60+). This policy is also recommended to users in their good income bracket(10 lakhs +) ### 3. Policy 4 also tends towards male users but recommends policy equally to those who are single and who are married. This policy recommends **only** to middle age people i.e 41-60 and young (< 25 years age) users. Recommendation is for enterpreneurial persons and very high income ones(30-50 lakhs), and startup enterpreneurs(0-5 lakhs) ### 4. Policy 5 has similar characteristics to Policy 4 ``` ```
``` import os #coding=utf8 import sys import string from urllib.request import Request, urlopen from urllib.error import HTTPError from bs4 import BeautifulSoup import time import datetime import ssl import numpy as np import pandas as pd import traceback import csv ssl._create_default_https_context = ssl._create_unverified_context import json j = json.loads() j["parents"][0]["sha"] j for i in j["files"]: print(json.dumps(i)) def get_response(url): return json.loads(urlopen(Request(url,headers={'User-Agent': 'Mozilla/5.0'})).read()) url = "https://api.github.com/repos/torvalds/linux/commits/f2815633504b442ca0b0605c16bf3d88a3a0fcea" response = get_response(url) response["files"] response["files"][0] sss = '{"sha": "de1a0138317f482c028ce583c335501b14d9f917", "filename": "net/sctp/sm_statefuns.c", "status": "modified", "additions": 1, "deletions": 1, "changes": 2, "blob_url": "https://github.com/torvalds/linux/blob/f2815633504b442ca0b0605c16bf3d88a3a0fcea/net/sctp/sm_statefuns.c", "raw_url": "https://github.com/torvalds/linux/raw/f2815633504b442ca0b0605c16bf3d88a3a0fcea/net/sctp/sm_statefuns.c", "contents_url": "https://api.github.com/repos/torvalds/linux/contents/net/sctp/sm_statefuns.c?ref=f2815633504b442ca0b0605c16bf3d88a3a0fcea", "patch": "@@ -2082,7 +2082,7 @@ sctp_disposition_t sctp_sf_do_5_2_4_dupcook(struct net *net,\\n \\t}\\n \\n \\t/* Delete the tempory new association. */\\n-\\tsctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));\\n+\\tsctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC, SCTP_ASOC(new_asoc));\\n \\tsctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());\\n \\n \\t/* Restore association pointer to provide SCTP command interpeter"}'+"<_**next**_>"+'{"sha": "de1a0138317f482c028ce583c335501b14d9f917", "filename": "net/sctp/sm_statefuns.c", "status": "modified", "additions": 1, "deletions": 1, "changes": 2, "blob_url": "https://github.com/torvalds/linux/blob/f2815633504b442ca0b0605c16bf3d88a3a0fcea/net/sctp/sm_statefuns.c", "raw_url": "https://github.com/torvalds/linux/raw/f2815633504b442ca0b0605c16bf3d88a3a0fcea/net/sctp/sm_statefuns.c", "contents_url": "https://api.github.com/repos/torvalds/linux/contents/net/sctp/sm_statefuns.c?ref=f2815633504b442ca0b0605c16bf3d88a3a0fcea", "patch": "@@ -2082,7 +2082,7 @@ sctp_disposition_t sctp_sf_do_5_2_4_dupcook(struct net *net,\\n \\t}\\n \\n \\t/* Delete the tempory new association. 
*/\\n-\\tsctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));\\n+\\tsctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC, SCTP_ASOC(new_asoc));\\n \\tsctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());\\n \\n \\t/* Restore association pointer to provide SCTP command interpeter"}' for i in sss.split("<_**next**_>"): print(json.loads(i)["sha"]) files_changed = "" i= 0 for s in sss: if i <len(sss)-1: files_changed = str(files_changed)+str(s)+"," else: files_changed = str(files_changed)+str(s) i+=1 files_changed = "{"+ files_changed +"}" files_changed j = json.loads('{{"sha": "de1a0138317f482c028ce583c335501b14d9f917", "filename": "net/sctp/sm_statefuns.c", "status": "modified", "additions": 1, "deletions": 1, "changes": 2, "blob_url": "https://github.com/torvalds/linux/blob/f2815633504b442ca0b0605c16bf3d88a3a0fcea/net/sctp/sm_statefuns.c", "raw_url": "https://github.com/torvalds/linux/raw/f2815633504b442ca0b0605c16bf3d88a3a0fcea/net/sctp/sm_statefuns.c", "contents_url": "https://api.github.com/repos/torvalds/linux/contents/net/sctp/sm_statefuns.c?ref=f2815633504b442ca0b0605c16bf3d88a3a0fcea", "patch": "@@ -2082,7 +2082,7 @@ sctp_disposition_t sctp_sf_do_5_2_4_dupcook(struct net *net,\\n \\t}\\n \\n \\t/* Delete the tempory new association. */\\n-\\tsctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));\\n+\\tsctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC, SCTP_ASOC(new_asoc));\\n \\tsctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());\\n \\n \\t/* Restore association pointer to provide SCTP command interpeter"},{"sha": "de1a0138317f482c028ce583c335501b14d9f917", "filename": "net/sctp/sm_statefuns.c", "status": "modified", "additions": 1, "deletions": 1, "changes": 2, "blob_url": "https://github.com/torvalds/linux/blob/f2815633504b442ca0b0605c16bf3d88a3a0fcea/net/sctp/sm_statefuns.c", "raw_url": "https://github.com/torvalds/linux/raw/f2815633504b442ca0b0605c16bf3d88a3a0fcea/net/sctp/sm_statefuns.c", "contents_url": "https://api.github.com/repos/torvalds/linux/contents/net/sctp/sm_statefuns.c?ref=f2815633504b442ca0b0605c16bf3d88a3a0fcea", "patch": "@@ -2082,7 +2082,7 @@ sctp_disposition_t sctp_sf_do_5_2_4_dupcook(struct net *net,\\n \\t}\\n \\n \\t/* Delete the tempory new association. */\\n-\\tsctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));\\n+\\tsctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC, SCTP_ASOC(new_asoc));\\n \\tsctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());\\n \\n \\t/* Restore association pointer to provide SCTP command interpeter"}}') ```
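The string-concatenation experiments above are fragile (hand-built braces and custom separators do not produce valid JSON). A simpler approach — shown here only as a sketch of the apparent intent, not code from the notebook — is to keep the per-file entries as Python dictionaries and serialise the whole list at once with `json.dumps`.

```
# Minimal sketch: serialise the commit's file list as one valid JSON array and round-trip it.
import json

files_changed = response["files"]              # list of dicts returned by the commits API
files_changed_str = json.dumps(files_changed)  # a single, valid JSON array string

for entry in json.loads(files_changed_str):    # parse it back and read fields from each entry
    print(entry["sha"], entry["filename"])
```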
<a href="https://colab.research.google.com/github/Satwikram/Named-Entity-Recognition/blob/main/Named%20Entity%20Recognition%20using%20DL.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### Author: Satwik Ram K **Named Entity Recognition on Dataset Using NLP** Context: Annotated Corpus for Named Entity Recognition using GMB(Groningen Meaning Bank) corpus for entity classification with enhanced and popular features by Natural Language Processing applied to the data set. geo = Geographical Entity org = Organization per = Person gpe = Geopolitical Entity tim = Time indicator art = Artifact eve = Event nat = Natural Phenomenon ### Connecting to Kaggle ``` from google.colab import files files.upload() ! mkdir ~/.kaggle ! cp kaggle.json ~/.kaggle/ ! chmod 600 ~/.kaggle/kaggle.json ``` ### Downloading Dataset ``` !kaggle datasets download -d abhinavwalia95/entity-annotated-corpus !unzip /content/entity-annotated-corpus.zip ``` ### Importing Dependencies ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import os import re import math from bs4 import BeautifulSoup ``` ### Importing Dataset ``` dataset = pd.read_csv("/content/ner_dataset.csv", encoding = "latin1") dataset.head() ``` ### Basic Info ``` dataset.info() dataset.describe() ``` ### Checking for Null Values ``` dataset.isna().sum() ``` ### Dropping Unwanted Columns ``` dataset.columns columns = ['Sentence #', 'POS'] dataset.drop(columns = columns, axis = 1, inplace = True) dataset.head() ``` ### Plotting Different Sentiment ``` plt.figure(figsize = (10, 5) ) sns.countplot(dataset['Tag'], label = 'Count') ``` ### Getting Unique Values ``` dataset['Tag'].nunique() len(dataset) dataset['Tag'].value_counts() 2**20 dataset['Word'] ``` ### Cleaning The Dataset if noise exits ``` def clean_data(word): word = BeautifulSoup(word, 'lxml').get_text() word = re.sub(r"@[A-Za-z0-9]+", "", word) word = re.sub(r"https?://[A-Za-z0-9./]", "", word) word = re.sub(r"[^A-Za-z]", "", word) word = re.sub(r" +", '', word) return word from tqdm import tqdm dataset['Word'][0] for i in tqdm(range(len(dataset))): dataset['Word'][i] = clean_data(dataset['Word'][i]) dataset.to_csv("NER_Cleaned.csv", index = None, header = True) ``` ### Loading cleaned file ``` ner_dataset = pd.read_csv("/content/NER_Cleaned.csv") ner_dataset.head() len(ner_dataset) ``` ### Checking for NaN Values After Cleaning ``` ner_dataset.isna().sum() ``` ### Dropping NaN Values ``` ner_dataset.dropna(inplace = True) ner_dataset.isna().sum() len(ner_dataset) 2**20 ``` ### Tokenization ``` import tensorflow_datasets as tfds tokenizer = tfds.deprecated.text.SubwordTextEncoder.build_from_corpus( ner_dataset['Word'], target_vocab_size = 2**20 ) ``` ### Saving The Tokenizer ``` tokenizer.save_to_file('ner_tokenizer') ner_dataset['Word'] ``` ### Loading Tokenizer ``` encoder = tfds.deprecated.text.SubwordTextEncoder.load_from_file("ner_tokenizer") ``` **Testing Tokenizer** ``` ids = encoder.encode("hello world") text = encoder.decode([23108, 32928, 32849, 138]) print(ids) print(text) cleaned_data = ner_dataset.copy() cleaned_data.head() ``` ### Tokenizing the Dataset ``` cleaned_data['Word'][2] from tqdm import tqdm data_input = [encoder.encode(sentence) for sentence in cleaned_data['Word']] data_input[100] ``` ### MAX LEN ``` MAX_LEN = max([len(sentence) for sentence in data_input]) MAX_LEN for sentence in data_input: if len(sentence) == MAX_LEN: print(sentence) ``` ### Lets Check it out 
``` encoder.decode([19440, 22734, 22559]) for sentence in ner_dataset['Word']: if sentence == 'wwcelebritiesforcharityorgrafflesnetrafflemaincfm': print(sentence) ``` ### Padding ``` import tensorflow as tf from tensorflow import keras data_input = tf.keras.preprocessing.sequence.pad_sequences( data_input, value = 0, padding = 'post', maxlen = MAX_LEN ) data_input[0] ``` ### Taking X and Y ``` X = [] y = [] data_labels = [tag for tag in cleaned_data['Tag']] len(data_labels) len(data_input) ``` ###Creating DataFrame ``` encoder.decode(data_input[120]) data_labels[120] list_data = data_input.tolist() list_data[0] ner_df = pd.DataFrame({'Vectors': list_data, 'Tags': data_labels}) ner_df.head() ner_df.to_csv('cleaned_vectors.csv') ``` Taking X and Y ``` ner_df['Tags'] = pd.Categorical(ner_df['Tags']) ner_df['Tags'].dtype cat = dict(enumerate(ner_df['Tags'].cat.categories)) cat import joblib joblib.dump(cat, 'cat.pickle') ner_df1 = ner_df.copy() ner_df1.head() ner_df1['Tags'] = pd.Categorical(ner_df['Tags']).codes ner_df1['Tags'].head() X = data_input X y = ner_df1['Tags'] y.nunique() y = tf.keras.utils.to_categorical(y, num_classes = 17) y[0] len(X) * 0.2 ``` ### Splitting Data into Train and Test ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2 , random_state = 0) print(len(X_train)) print(len(X_test)) print(len(y_train)) print(len(y_test)) X_train = np.array(X_train) y_train = np.array(y_train) X_test = np.array(X_test) y_test = np.array(y_test) y_train[0] type(X_train[0]) from tensorflow.keras.models import * from tensorflow.keras.layers import * from tensorflow.keras.callbacks import * ``` ### Config ``` VOCAB_SIZE = encoder.vocab_size print(VOCAB_SIZE) ``` ### Building Model ``` model = Sequential() model.add(Embedding(VOCAB_SIZE, 128)) model.add(Bidirectional(LSTM(128, return_sequences = True))) model.add(Dropout(0.5)) model.add(Bidirectional(LSTM(128, return_sequences = True))) model.add(Bidirectional(LSTM(128))) model.add(Dense(128, activation = 'relu')) model.add(Dropout(0.5)) model.add(tf.keras.layers.Dense(17, activation = 'softmax')) ``` ### Compiling the model ``` model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy']) model.summary() ``` ### Callbacks ``` from livelossplot.tf_keras import PlotLossesCallback from livelossplot import PlotLossesKeras checkpoint = ModelCheckpoint('ner.h5', monitor = 'val_loss', save_best_only = True, verbose = 1) earlystopping = EarlyStopping( monitor = 'val_loss', verbose = 1, restore_best_weights = True, patience = 5) reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience = 3, min_lr=0.001, verbose = 1) def scheduler(epoch, lr): if epoch < 10: return lr else: return lr * tf.math.exp(-0.1) lr_schedule = tf.keras.callbacks.LearningRateScheduler(scheduler, verbose = 1) callbacks = [checkpoint, earlystopping, reduce_lr, lr_schedule, PlotLossesKeras()] ``` ### Training the model! 
``` history = model.fit(X_train, y_train, epochs = 25, batch_size = 1024, validation_split = 0.2, callbacks = callbacks) ``` ### Loading the model ``` loaded_model = load_model('/content/ner.h5') model.evaluate(X_test, y_test) ``` ### Making Prediction ``` y_pred = loaded_model.predict(X_test) y_pred[0] a = np.argmax(y_pred[0]) a result = [] for i in range(len(y_pred)): result.append(np.argmax(y_pred[i])) cat[result[11]] cat[np.argmax(y_test[11])] y_actual = [] for i in range(len(y_test)): y_actual.append(np.argmax(y_test[i])) y_prediction = result cat[y_actual[0]] len(y_prediction) len(y_actual) Prediction = [] Actual = [] for i in range(len(X_test)): Prediction.append(cat[y_prediction[i]]) Actual.append(cat[y_actual[i]]) ``` ### Accurcay Score ``` from sklearn.metrics import * accuracy_score(y_actual, y_prediction) ``` Creating Result Data Frame ``` ner_results = pd.DataFrame({'Actual Ground Truth': Actual, 'Bi-Directional LSTM Model Prediction': Prediction}) ner_results.head() ner_results.to_csv('NER_Results.csv', index = False) ner_results.tail() ``` ### Testing on Sentences ``` test_sentence = "How are you Bruce, Hope you are doing good in Bangalore!" def convert(dataset): dataset = dataset.replace(',', "") dataset = dataset.replace('.', "") dataset = dataset.replace('!', "") return dataset test_sentence = convert(test_sentence) test_sentence test_sentence = test_sentence.split() test_sentence test_sentence_encoded = [] encoder.encode(test_sentence[0]) for i in range(len(test_sentence)): test_sentence_encoded.append(encoder.encode(test_sentence[i])) test_sentence_encoded test_result = [] len(test_sentence_encoded) for i in range(len(test_sentence_encoded)): print(i) test_result.append(cat[np.argmax(loaded_model.predict(test_sentence_encoded[i]))]) test_result test_sentence test_result_df = pd.DataFrame({'Word': test_sentence, 'Prediction': test_result}) test_result_df ```
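The word-by-word test above feeds variable-length token lists straight into the model. A padded variant, closer to how the training batches were prepared, might look like the following sketch; it assumes the `encoder`, `loaded_model`, `cat` and `MAX_LEN` objects from the cells above, and the example words are arbitrary.

```
# Minimal sketch: encode and pad a single word to MAX_LEN before predicting its tag.
import numpy as np
import tensorflow as tf

def predict_tag(word):
    encoded = encoder.encode(word)
    padded = tf.keras.preprocessing.sequence.pad_sequences(
        [encoded], value=0, padding="post", maxlen=MAX_LEN)
    probs = loaded_model.predict(padded)
    return cat[int(np.argmax(probs[0]))]

for word in ["Bruce", "Bangalore", "Monday"]:
    print(word, "->", predict_tag(word))
```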
# LAB 1a: Exploring natality dataset. **Learning Objectives** 1. Use BigQuery to explore natality dataset 1. Use Cloud AI Platform Notebooks to plot data explorations ## Introduction In this notebook, we will explore the natality dataset before we begin model development and training to predict the weight of a baby before it is born. We will use BigQuery to explore the data and use Cloud AI Platform Notebooks to plot data explorations. ## Load necessary libraries Check that the Google BigQuery library is installed and if not, install it. ``` %%bash sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \ sudo pip install google-cloud-bigquery==1.6.1 from google.cloud import bigquery ``` ## The source dataset Our dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset. The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. <h2> Explore data </h2> The data is natality data (record of births in the US). The goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data. We'll first create a SQL query using the natality data after the year 2000. ``` query = """ SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, FARM_FINGERPRINT( CONCAT( CAST(YEAR AS STRING), CAST(month AS STRING) ) ) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 """ ``` Let's create a BigQuery client that we can use throughout the notebook. ``` bq = bigquery.Client() ``` Let's now examine the result of a BiqQuery call in a Pandas DataFrame using our newly created client. ``` df = bq.query(query + " LIMIT 100").to_dataframe() df.head() ``` First, let's get the set of all valid column names in the natality dataset. We can do this by accessing the `INFORMATION_SCHEMA` for the table from the dataset. ``` # Query to get all column names within table schema sql = """ SELECT column_name FROM publicdata.samples.INFORMATION_SCHEMA.COLUMNS WHERE table_name = "natality" """ # Send query through BigQuery client and store output to a dataframe valid_columns_df = bq.query(sql).to_dataframe() # Convert column names in dataframe to a set valid_columns_set = valid_columns_df["column_name"].tolist() ``` We can print our valid columns set to see all of the possible columns we have available in the dataset. Of course, you could also find this information by going to the `Schema` tab when selecting the table in the [BigQuery UI](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table). ``` print(valid_columns_set) ``` Let's write a query to find the unique values for each of the columns and the count of those values. This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value. 
``` def get_distinct_values(valid_columns_set, column_name): """Gets distinct value statistics of BigQuery data column. Args: valid_columns_set: set, the set of all possible valid column names in table. column_name: str, name of column in BigQuery. Returns: Dataframe of unique values, their counts, and averages. """ assert column_name in valid_columns_set, ( "{column_name} is not a valid column_name".format( column_name=column_name)) sql = """ SELECT {column_name}, COUNT(1) AS num_babies, AVG(weight_pounds) AS avg_wt FROM publicdata.samples.natality WHERE year > 2000 GROUP BY {column_name} """.format(column_name=column_name) return bq.query(sql).to_dataframe() def plot_distinct_values(valid_columns_set, column_name, logy=False): """Plots distinct value statistics of BigQuery data column. Args: valid_columns_set: set, the set of all possible valid column names in table. column_name: str, name of column in BigQuery. logy: bool, if plotting counts in log scale or not. """ df = get_distinct_values(valid_columns_set, column_name) df = df.sort_values(column_name) df.plot( x=column_name, y="num_babies", logy=logy, kind="bar", figsize=(12, 5)) df.plot(x=column_name, y="avg_wt", kind="bar", figsize=(12, 5)) ``` Make a bar plot to see `is_male` with `avg_wt` linearly scaled and `num_babies` logarithmically scaled. ``` plot_distinct_values(valid_columns_set, column_name="is_male", logy=False) ``` Make a bar plot to see `mother_age` with `avg_wt` linearly scaled and `num_babies` linearly scaled. ``` plot_distinct_values(valid_columns_set, column_name="mother_age", logy=False) ``` Make a bar plot to see `plurality` with `avg_wt` linearly scaled and `num_babies` logarithmically scaled. ``` plot_distinct_values(valid_columns_set, column_name="plurality", logy=True) ``` Make a bar plot to see `gestation_weeks` with `avg_wt` linearly scaled and `num_babies` logarithmically scaled. ``` plot_distinct_values( valid_columns_set, column_name="gestation_weeks", logy=True) ``` All these factors seem to play a part in the baby's weight. Male babies are heavier on average than female babies. Teenaged and older moms tend to have lower-weight babies. Twins, triplets, etc. are lower weight than single births. Preemies weigh in lower as do babies born to single moms. In addition, it is important to check whether you have enough data (number of babies) for each input value. Otherwise, the model prediction against input values that doesn't have enough data may not be reliable. <p> In the next notebooks, we will develop a machine learning model to combine all of these factors to come up with a prediction of a baby's weight. ## Lab Summary: In this lab, we used BigQuery to explore the data and used Cloud AI Platform Notebooks to plot data explorations. Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
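Returning to the `hashmonth` field computed earlier in this lab: although the lab stops at exploration, that hash is what makes a repeatable train/eval split possible later. The sketch below is an assumption about how such a split could be drawn (it is not something this notebook runs), wrapping the earlier `query` in a subquery so the hash column can be filtered on.

```
# Minimal sketch: repeatable split on the year-month hash, so records from the same
# month always land on the same side of the split.
train_query = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 4)) < 3"
eval_query = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 4)) = 3"

train_df = bq.query(train_query + " LIMIT 10000").to_dataframe()
eval_df = bq.query(eval_query + " LIMIT 10000").to_dataframe()
print(len(train_df), len(eval_df))
```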
# Tools sandbox In this notebook we showcase the `tools.py` library. ## corr(data, significance=False, decimals=3) Generates a correlation matrix with p values and sample size, just like SPSS. It takes the following parameters: - data: pandas.DataFrame to calculate correlations - significance: bool that determines whether to include asterisks to correlations. - decimals: int used to round values. Returns a pandas.DataFrame with a correlation matrix. ``` from tools import corr # generate toy data import pandas as pd import numpy as np from random import randint from scipy.stats import pearsonr from scipy.optimize import minimize n = 60 data = pd.DataFrame({'a': [randint(1, 42) for _ in range(n)]}) def fun_a(x): if np.std(x) >= n: return np.std(x) return abs(0.5 - pearsonr(data['a'], x)[0]) def fun_b(x): if np.std(x) >= n: return np.std(x) return abs(0.3 - pearsonr(data['b'], x)[0]) data['b'] = minimize(fun_a, [randint(1, 42) for _ in range(n)], method = 'SLSQP', bounds = [(1, 42) for _ in range(n)]).x data['c'] = minimize(fun_b, [randint(1, 42) for _ in range(n)], method = 'SLSQP', bounds = [(1, 42) for _ in range(n)]).x data['d'] = abs(minimize(lambda x: abs(0.3 - pearsonr(data['a'], x)[0]), np.random.uniform(1, 42, n)).x) data = data.astype(np.int32) data corr(data) # make it harder with a missing value and a column with strings data2 = data.copy() data2['a'][4] = np.nan # data2['b'][7] = '-' # be careful, non-numerical columns won't show be compared data2['e'] = ('a '*60).split() data2 corr(data2, significance=True, decimals=2) ``` ## pie(values, labels=None, title='', slices=None, percent_only=False, explode=True, color='white') Useful pie plot with matplotlib. It has the following parameters: - values (mandatory): `list` or `pandas.Series()` of unique values. Using `value_counts()` is highly recommended. - labels: if `None`, indices of `values` will be used. - title: header for the plot. - slices: set a number of slices, to avoid clutter. - percent_only: if False it will show count and percent. - explode: slightly separate the slice with the highest value. - color: text color. ``` from tools import pie import pandas as pd ex = ['hi', 'hi', 'ho', 'ho', 'ho', 'he', 'he', 1] ex = ['hi', 'hi', 'ho', 'ho', 'ho', 'he', 'he'] # ex = [2, 5, 6, 7, 2, 6, 4, 3, 6, 1, 7, 8, 3, 5, 6, 1, 6, 2, 4, 7, 2, 6, 2, 7, 7, 3, 6] ex = pd.Series(ex).value_counts() # works best with value counts ex pie(ex, title='Example') ```
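Since `tools.py` itself is not shown in this notebook, it may help to sketch how a `corr`-style helper could be built with pandas and scipy. This is only an illustrative sketch under my own assumptions (pairwise-complete observations, a 0.05 threshold for the asterisk, a made-up name `corr_sketch`); it is not the actual `tools.corr` implementation.

```
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def corr_sketch(data, significance=False, decimals=3):
    """Pairwise Pearson correlations with optional significance asterisks."""
    cols = data.select_dtypes(include=np.number).columns   # skip non-numerical columns
    out = pd.DataFrame(index=cols, columns=cols, dtype=object)
    for a in cols:
        for b in cols:
            if a == b:
                out.loc[a, b] = 1.0
                continue
            pair = data[[a, b]].dropna()        # pairwise-complete observations
            r, p = pearsonr(pair[a], pair[b])
            cell = round(r, decimals)
            if significance and p < 0.05:
                cell = f"{cell}*"               # flag significant correlations
            out.loc[a, b] = cell
    return out
```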
## Preprocessing ``` # Import our dependencies from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import pandas as pd import tensorflow as tf import keras_tuner as kt # Import and read the charity_data.csv. import pandas as pd application_df = pd.read_csv("Resources/charity_data.csv") application_df.head() # Drop the non-beneficial ID columns, 'EIN' and 'NAME'. application_df.drop(['EIN','NAME'], axis=1, inplace=True) application_df application_df.shape ``` #### What variable(s) are considered the target(s) for your model? - IS_SUCCESSFUL #### What variable(s) are considered the feature(s) for your model? - There are 10 features in total as shown above throough the .shape function (which includes our target) ``` # Determine the number of unique values in each column. application_df.nunique() # Look at APPLICATION_TYPE value counts for binning type_count = application_df['APPLICATION_TYPE'].value_counts() type_count # Choose a cutoff value and create a list of application types to be replaced # use the variable name `application_types_to_replace` application_types_to_replace = list(type_count[type_count < 528].index) # Replace in dataframe for app in application_types_to_replace: application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other") # Check to make sure binning was successful application_df['APPLICATION_TYPE'].value_counts() # Look at CLASSIFICATION value counts for binning classification_count = application_df['CLASSIFICATION'].value_counts() classification_count # You may find it helpful to look at CLASSIFICATION value counts >1 classification_count_1 = classification_count[classification_count > 1] classification_count_1 # Choose a cutoff value and create a list of classifications to be replaced # use the variable name `classifications_to_replace` classifications_to_replace = list(classification_count[classification_count < 850].index) # Replace in dataframe for cls in classifications_to_replace: application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other") # Check to make sure binning was successful application_df['CLASSIFICATION'].value_counts() # Convert categorical data to numeric with `pd.get_dummies` numeric_data_df = pd.get_dummies(application_df) numeric_data_df.head() numeric_data_df.shape # Split our preprocessed data into our features and target arrays y = numeric_data_df['IS_SUCCESSFUL'] X = numeric_data_df.drop(['IS_SUCCESSFUL'], axis=1) # Split the preprocessed data into a training and testing dataset X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) # Create a StandardScaler instances scaler = StandardScaler() # Fit the StandardScaler X_scaler = scaler.fit(X_train) # Scale the data X_train_scaled = X_scaler.transform(X_train) X_test_scaled = X_scaler.transform(X_test) ``` ## Compile, Train and Evaluate the Model ``` number_input_features = len(X_train_scaled[0]) # Create a method that creates a new Sequential model with hyperparameter options def create_model(hp): nn = tf.keras.models.Sequential() # Allow kerastuner to decide which activation function to use in hidden layers activation = hp.Choice('activation',['relu','tanh']) # Allow kerastuner to decide number of neurons in first layer nn.add(tf.keras.layers.Dense(units=hp.Int('first_units', min_value=1, max_value=30, step=5), activation=activation, input_dim=number_input_features)) # Allow kerastuner to decide number of hidden layers and neurons in hidden layers for i in 
range(hp.Int('num_layers', 1, 5)): nn.add(tf.keras.layers.Dense(units=hp.Int('units_' + str(i), min_value=1, max_value=30, step=5), activation=activation)) nn.add(tf.keras.layers.Dense(units=1, activation="sigmoid")) # Compile the model nn.compile(loss="binary_crossentropy", optimizer='adam', metrics=["accuracy"]) return nn tuner = kt.Hyperband( create_model, objective="val_accuracy", max_epochs=20, hyperband_iterations=2, project_name = 'optimized_model') #limit running of models to stop after 5 epochs with no improvement stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5) # Run the kerastuner search for best hyperparameters tuner.search(X_train_scaled,y_train,epochs=20,validation_data=(X_test_scaled,y_test), callbacks=[stop_early]) # Get top 3 model hyperparameters and print the values top_hyper = tuner.get_best_hyperparameters(3) for param in top_hyper: print(param.values) # Evaluate the top 3 models against the test dataset top_model = tuner.get_best_models(3) for model in top_model: model_loss, model_accuracy = model.evaluate(X_test_scaled,y_test,verbose=2) print(f"Loss: {model_loss}, Accuracy: {model_accuracy}") # Export our model to HDF5 file model.save('Models/AlphabetSoupCharity_Optimization_2.h5') ```
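One possible follow-up, sketched here rather than taken from the original notebook: the loop above exports the last of the three evaluated models, so it may be preferable to rebuild and retrain the single best configuration before saving. `get_best_hyperparameters()` and `tuner.hypermodel.build()` are standard Keras Tuner calls; the epoch count and output filename are assumptions.

```
# Sketch: retrain the single best configuration from scratch, then export it.
best_hps = tuner.get_best_hyperparameters(1)[0]
best_nn = tuner.hypermodel.build(best_hps)        # fresh model built from the winning hyperparameters

best_nn.fit(
    X_train_scaled, y_train,
    epochs=20,                                    # illustrative; match the tuning budget as needed
    validation_data=(X_test_scaled, y_test),
    callbacks=[stop_early])

model_loss, model_accuracy = best_nn.evaluate(X_test_scaled, y_test, verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")

best_nn.save('Models/AlphabetSoupCharity_Optimization_best.h5')   # hypothetical filename
```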
# Data Manipulation with Python ## Learning Goals - Construct list and dictionary comprehensions - Extract data from nested data structures - Write functions to transform data ## Lists ### List Methods Make sure you're comfortable with the following list methods: - `.append()`: adds the input element to the end of a list - `.pop()`: removes and returns the element with input index from the list - `.extend()`: adds the elements in the input iterable to the end of a list - `.index()`: returns the first place in a list where the argument is found - `.remove()`: removes element by value - `.count()`: returns the number of occurrences of the input element in a list Question: What's the difference between `.remove()` and `del`? <details> <summary> Answer here </summary> .remove() removes an element by value;<br/> del removes an element by position ### List Comprehension List comprehension is a handy way of generating a new list from existing iterables. Suppose I start with a simple list. ``` primes = [2, 3, 5, 7, 11, 13, 17, 19] ``` What I want now to do is to build a new list that comprises doubles of primes. I can do this with list comprehension! The syntax is: `[ f(x) for x in <iterable> if <condition>]` ``` prime_doubles = [x*2 for x in primes] prime_triples = [x*3 for x in primes] prime_doubles ``` #### Aside: List Comprehensions Vs. `for`-Loops Yes, I could do the same work with `for`-loops: ``` prime_doubles2 = [] for prime in primes: prime_doubles2.append(prime*2) prime_doubles2 ``` But list comprehensions are more efficient: The syntax is simpler, and they're also faster. Also, you'll see them in other people's code, so you'll have to know how to work with them! #### End of Aside I can use list comprehension to build a list from objects other than lists: ``` my_dict = dict(zip(range(5), 'aeiou')) [v for k, v in my_dict.items() if k % 4 == 0] names = ['Alan Turing', 'Charles Babbage', 'Ada Lovelace', 'Anita Borg', 'Steve Wozniak', 'Andrew Ng'] splits = [name.split() for name in names] [name1[0]+'. '+name2[0]+'.' for (name1, name2) in splits] ``` ### Exercises 1. Use a list comprehension to extract the odd numbers from this set: ``` nums = set(range(1000)) odd = [num for num in nums if num%2!=0] odd ``` <details> <summary>Answer </summary> <code>[num for num in nums if num % 2 == 1]</code> </details> 2. Use a list comprehension to take the first character of each string from the following list of words. ``` words = ['carbon', 'osmium', 'mercury', 'potassium', 'rhenium', 'einsteinium', 'hydrogen', 'erbium', 'nitrogen', 'sulfur', 'iodine', 'oxygen', 'niobium'] first_char = [word[0] for word in words] first_char ``` <details> <summary>Answer </summary> <code>[word[0] for word in words]</code> </details> 3. Use a list comprehension to build a list of all the names that start with 'R' from the following list. Add a '?' to the end of each name. ``` names = ['Randy', 'Robert', 'Alex', 'Ranjit', 'Charlie', 'Richard', 'Ravdeep', 'Vimal', 'Wu', 'Nelson'] Rnames= [name+"?" for name in names if name[0]=="R"] Rnames ``` <details> <summary>Answer </summary> <code>[name+'?' 
for name in names if name[0] == 'R']</code> </details> ## Dictionaries ### Dictionary Methods Make sure you're comfortable with the following dictionary methods: - `.keys()`: returns an array of the dictionary's keys - `.values()`: returns an array of the dictionary's values - `.items()`: returns an array of key-value tuples ### Dictionary Comprehension Much like list comprehension, I can use dictionary comprehension to build dictionaries from existing iterables. ``` my_dict = {'who': 'flatiron school', 'what': 'data science', 'when': 'now', 'where': 'here', 'why': '$', 'how': 'python'} ``` Remember that the `.items()` method will return a collection of doubles: ``` my_dict.items() ``` So I can use a pair of variables to range over it: ``` {k: v + '!' for k, v in my_dict.items() if k.startswith('w')} ``` The same thing works for any collections of doubles: ``` {k**2: v**2 for k, v in [(0, 1), (2, 3), (4, 5)]} ``` #### `zip` Remember that `zip` is a handy way of pairing up two or more iterables: ``` dict(zip(range(5), ['apple', 'orange', 'banana', 'lime', 'blueberry'])) tuple(zip(range(1, 5), 'a'*4, 'b'*4, 'c'*4, 'd'*4, 'e'*4)) ``` #### Dictionary Comprehension Using `zip` ``` {k: v for k, v in zip(range(5), range(0, 10, 2))} scores = [.858, .873, .868] {'model' + str(j+1): scores[j] for j in range(3)} ``` ### Exercises 1. Use a dictionary comprehension to pair up the countries in the first list with their corresponding capitals in the second list: ``` list1 = ['USA', 'France', 'Canada', 'Thailand'] list2 = ['Washington', 'Paris', 'Ottawa', 'Bangkok'] {a: b for a, b in zip(list1, list2)} ``` <details> <summary>Answer </summary> <code>{country: capital for (country, capital) in zip(list1, list2)}</code> </details> 2. Use a dictionary comprehension to make each of the characters in the following list a key with the value 'fictional character'. ``` chars = ['Pinocchio', 'Gilgamesh', 'Kumar Patel', 'Toby Flenderson'] {name: "Fictional Character" for name in chars} ``` <details> <summary>Answer</summary> <code>{char: 'fictional character' for char in chars}</code> </details> ## Nesting Just as we can put lists and dictionaries inside of other lists and dictionaries, we can also put comprehensions inside of other comprehensions. ``` lists = [['morning', 'afternoon', 'night'], ['read', 'code', 'sleep']] [[item[0] for item in small_list] for small_list in lists] ``` ### Nested Structures It will be well worth your while to practice accessing data in complex structures. Consider the following: ``` customers = { 'bill': {'purchases': {'movies': ['Terminator', 'Elf'], 'books': []}, 'id': 1}, 'dolph': {'purchases': {'movies': ['It Happened One Night'], 'books': ['The Far Side Gallery']}, 'id': 2}, 'pat': {'purchases': {'movies': [], 'books': ['Seinfeld and Philosophy', 'I Am a Bunny']}, 'id': 3} } ``` **Q**: How would we access 'I Am a Bunny'? 
<br/> **A**: The outermost "layer" has a name: 'customers', and that object is a dictionary: <br/> `customers` <br/> The key we are interested in is 'pat', since that's where 'I Am a Bunny' is located: <br/> `customers['pat']` <br/> The value corresponding to the key 'pat' is also a dictionary, and in this "lower-down" dictionary, the key we are interested in is 'purchases': <br/> `customers['pat']['purchases']` <br/> The value corresponding to the key 'purchases' is yet another dictionary, and here the key of interest is `books`: <br/> `customers['pat']['purchases']['books']` <br/> The value corresponding to the key 'books' is a list, and 'I Am a Bunny' is the second element in that list: <br/> `customers['pat']['purchases']['books'][1]` ``` customers['pat']['purchases']['books'][1] ``` ### Exercises 1. From the list below, make a list of dictionaries where the key is the person's name and the value is the person's home phone number. ``` phone_nos = [{'name': 'greg', 'nums': {'home': 1234567, 'work': 7654321}}, {'name': 'max', 'nums': {'home': 9876543, 'work': 1010001}}, {'name': 'erin', 'nums': {'home': 3333333, 'work': 4444444}}, {'name': 'joél', 'nums': {'home': 2222222, 'work': 5555555}}, {'name': 'sean', 'nums': {'home': 9999999, 'work': 8888888}}] [{item["name"]:item["nums"]["home"]} for item in phone_nos] ``` <details> <summary>Answer</summary> <code>[{item['name']: item['nums']['home']} for item in phone_nos]</code> </details> 2. From the customers dictionary above, build a dictionary where the customers' names are the keys and the movies they've bought are the values. ``` {customer: customers[customer]["purchases"]["movies"] for customer in customers.keys()} ``` <details> <summary>Answer</summary> <code>{customer: customers[customer]['purchases']['movies'] for customer in customers.keys()}</code> </details> ## Functions This aspect of Python is _incredibly_ useful! Writing your own functions can save you a TON of work - by _automating_ it. ### Creating Functions The first line will read: 'def' + _your function's name_ + '( )' + ':' Any arguments to the function will go in the parentheses. Let's try building a function that will automate the task of finding how many times a given number can be evenly divided by 2. ``` # Let's code it! def factor_of_two(num): count = 0 while num%2==0: num = num/2 count+= 1 return count ``` ### Calling Functions To _call_ a function, simply type its name, along with any necessary arguments in parentheses. ``` # Let's call it! factor_of_two(4) factor_of_two(8) ``` ### Default Argument Values Sometimes we'll want the argument(s) of our function to have default values. ``` def cheers(person='aaron', job='data scientist', age=30): return f'Hooray for {person}. You\'re a {job} and you\'re {str(age)}!' cheers('greg', 'scientist', 130) cheers('cristian', 'git enthusiast', 93) cheers() ``` ### Exercises 1. Build a function that will return $2^n$ for an input $n$. ``` def power_of_two(exp): return 2**exp power_of_two(3) ``` <details> <summary>Answer</summary> <code> def expo(n): return 2**n</code> </details> 2. Build a function that will take in a list of phone numbers as strings and return the same as integers, removing any parentheses ('(' and ')'), hyphens ('-'), and spaces. 
```
def phoneno(numbers):
    special_chars = ["(", ")", "-", " "]
    integers = []
    for n in numbers:
        for char in special_chars:
            n = n.replace(char, "")   # strip each unwanted character
        integers.append(int(n))
    return integers

phoneno(["(123)123-1230", "(234)456-9847"])
```

<details>
<summary>Answer</summary>
<code>
def int_phone(string):
    return int(string.replace('(', '').replace(')', '').replace('-', '').replace(' ', ''))</code>
</details>

3. Build a function that returns the mode of a list of numbers.

<details>
<summary>Answer</summary>
<code>
def mode(lst):
    counts = {num: lst.count(num) for num in lst}
    return [num for num in counts.keys() if counts[num] == max(counts.values())]</code>
</details>

```
def mode(numbers):
    counts = {num: numbers.count(num) for num in numbers}   # count occurrences of each value
    return [num for num in counts if counts[num] == max(counts.values())]

mode([1, 2, 2, 1, 2, 3, 3, 3, 3, 1, 1, 1, 4])
```
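As an optional aside (not part of the lesson), the standard library's `collections.Counter` gives a shorter route to the mode exercise:

```
from collections import Counter

def mode_counter(numbers):
    counts = Counter(numbers)                    # value -> number of occurrences
    top = max(counts.values())
    return [num for num, c in counts.items() if c == top]

mode_counter([1, 2, 2, 1, 2, 3, 3, 3, 3, 1, 1, 1, 4])   # [1]; ties would return several values
```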
``` import requests import json import pandas as pd import xmltodict import os import pickle from copy import copy import re import validators import datetime import string import click from collections import Counter import dateutil.parser import isaid_helpers # SDC Stuff def get_raw_sdc_docs(limit=1000): offset = 0 sdc_data = list() while True: sdc_url = f"https://4un8324n3h.execute-api.us-west-2.amazonaws.com/prodchs/search?size={limit}&from={offset}" r_sdc = requests.get(sdc_url).json() if r_sdc["hits"]: sdc_data.extend([i["_source"] for i in r_sdc["hits"]]) offset += limit else: break return sdc_data def sdc_dataset(sdc_record): if "identifier" not in sdc_record: return dataset = { "sdc_internal_id": sdc_record["identifier"], "name": sdc_record["title"], "description": sdc_record["description"], "source": "USGS Science Data Catalog", "source_reference": "https://data.usgs.gov/catalog/" } if "landingPage" in sdc_record: dataset["url"] = sdc_record["landingPage"] if "modified" in sdc_record: dataset["last_updated"] = sdc_record["modified"] return dataset def sdc_terms(sdc_record): viable_terms = list() if "identifier" not in sdc_record: return viable_terms rel_stub = { "sdc_internal_id": sdc_record["identifier"], "reference": f"https://data.usgs.gov/datacatalog/data/{sdc_record['identifier']}", "date_qualifier": None } if "modified" in sdc_record: try: rel_stub["date_qualifier"] = str(dateutil.parser.parse(sdc_record["modified"]).isoformat()) except: rel_stub["date_qualifier"] = str(datetime.datetime.strptime(sdc_record["modified"], "%Y%m%d").isoformat()) terms = list() if "placeKeyword"in sdc_record: terms.extend([ { "entity_type": "Location", "declared_term_source": None, "rel_type": "ADDRESSES_PLACE", "term": i.strip() } for i in sdc_record["placeKeyword"] ]) if "usgsThesaurusKeyword"in sdc_record: terms.extend([ { "entity_type": "DefinedSubjectMatter", "declared_term_source": "USGS Thesaurus", "rel_type": "ADDRESSES_SUBJECT", "term": i.strip() } for i in sdc_record["usgsThesaurusKeyword"] ]) if "otherKeyword"in sdc_record: terms.extend([ { "entity_type": "UndefinedSubjectMatter", "declared_term_source": None, "rel_type": "ADDRESSES_SUBJECT", "term": i.strip() } for i in sdc_record["otherKeyword"] ]) for term in terms: check_term = term["term"].strip() if len(check_term) == 0: continue if len(term["term"]) == 1: continue term.update(rel_stub) viable_terms.append(term) return viable_terms def graphable_datasets_from_sdc(sdc_cache, return_format="list"): sdc_graphable_datasets = list() for record in sdc_cache: dataset = sdc_dataset(record) if dataset is not None: sdc_graphable_datasets.append(dataset) if return_format == "list": return sdc_graphable_datasets elif return_format == "dataframe": return pd.DataFrame(sdc_graphable_datasets) def graphable_places_from_sdc( sdc_cache, valid_terms=None, return_format="list" ): sdc_graphable_places = list() for record in sdc_cache: sdc_graphable_places.extend(sdc_terms(record)) if valid_terms is not None: sdc_graphable_places = [ i for i in sdc_graphable_places if i["term"] in valid_terms ] if return_format == "list": return sdc_graphable_places elif return_format == "dataframe": return pd.DataFrame(sdc_graphable_places) def sdc_contacts(sdc_record, return_format="list"): viable_contacts = list() if "identifier" not in sdc_record: return viable_contacts rel_stub = { "sdc_internal_id": sdc_record["identifier"], "reference": f"https://data.usgs.gov/datacatalog/data/{sdc_record['identifier']}" } if "modified" in sdc_record: 
rel_stub["date_qualifier"] = sdc_record["modified"] if "metadataContact" in sdc_record and "hasEmail" in sdc_record["metadataContact"]: metadata_contact = copy(rel_stub) metadata_contact["rel_type"] = "METADATA_CONTACT" metadata_contact["entity_type"] = "Person" metadata_contact["email"] = sdc_record["metadataContact"]["hasEmail"].split(":")[-1].strip() viable_contacts.append(metadata_contact) if "contactPoint" in sdc_record and "hasEmail" in sdc_record["contactPoint"]: poc_contact = copy(rel_stub) poc_contact["rel_type"] = "POINT_OF_CONTACT" poc_contact["entity_type"] = "Person" poc_contact["email"] = sdc_record["contactPoint"]["hasEmail"].split(":")[-1].strip() viable_contacts.append(poc_contact) if "authors" in sdc_record and isinstance(sdc_record["authors"], list): for author_record in [i for i in sdc_record["authors"] if "orcid" in i and i["orcid"]]: author_contact = copy(rel_stub) author_contact["rel_type"] = "AUTHOR_OF" author_contact["entity_type"] = "Person" author_contact["orcid"] = author_record["orcid"] viable_contacts.append(author_contact) if return_format == "list": return viable_contacts elif return_format == "dataframe": return pd.DataFrame(viable_contacts) def graphable_contacts_from_sdc( sdc_cache, return_format="list" ): sdc_graphable_contacts = list() for record in sdc_cache: sdc_graphable_contacts.extend(sdc_contacts(record)) if return_format == "list": return sdc_graphable_contacts elif return_format == "dataframe": return pd.DataFrame(sdc_graphable_contacts) %%time if click.confirm('Do you really want to proceed with rebuilding the local SDC cache from source?', default=True): sdc_cache = get_raw_sdc_docs() pickle.dump(sdc_cache, open(isaid_helpers.f_raw_sdc, "wb")) print(isaid_helpers.f_raw_sdc, "CREATED", datetime.datetime.fromtimestamp(os.path.getmtime(isaid_helpers.f_raw_sdc))) else: sdc_cache = pickle.load(open(isaid_helpers.f_raw_sdc, "rb")) print("sdc_cache available in local memory") %%time graphable_datasets_from_sdc( sdc_cache=sdc_cache, return_format="dataframe" ).to_csv(isaid_helpers.f_graphable_sdc, index=False) print( isaid_helpers.f_graphable_sdc, "CREATED", datetime.datetime.fromtimestamp(os.path.getmtime(isaid_helpers.f_graphable_sdc)) ) reference_terms = pickle.load(open(isaid_helpers.f_ner_reference, "rb")) display(Counter([i["source"] for i in reference_terms])) display(Counter(i['concept_label'] for i in reference_terms if "concept_label" in i)) all_sdc_terms = list() for record in sdc_cache: all_sdc_terms.extend(sdc_terms(record)) %%time usgs_thesaurus_terms_in_sdc = list(set([i["term"] for i in all_sdc_terms if i["declared_term_source"] == "USGS Thesaurus"])) usgs_thesaurus_terms_in_sdc.sort() usgs_thesaurus_terms_in_source = [i["label"] for i in reference_terms if i["source"] == "USGS Thesaurus"] verified_thesaurus_terms_in_sdc = [i for i in usgs_thesaurus_terms_in_sdc if i in usgs_thesaurus_terms_in_source] graphable_usgs_thesaurus_linked_datasets = list() for thesaurus_item in [i for i in reference_terms if i["source"] == "USGS Thesaurus" and i["label"] in verified_thesaurus_terms_in_sdc]: for sdc_term in [i for i in all_sdc_terms if i["declared_term_source"] == "USGS Thesaurus" and i["term"] == thesaurus_item["label"]]: graphable_usgs_thesaurus_linked_datasets.append({ "sdc_internal_id": sdc_term["sdc_internal_id"], "date_qualifier": sdc_term["date_qualifier"], "reference": sdc_term["reference"], "DefinedSubjectMatter_url": thesaurus_item["url"], "DefinedSubjectMatter_name": thesaurus_item["label"], "DefinedSubjectMatter_source": 
thesaurus_item["source"],
                "DefinedSubjectMatter_source_reference": thesaurus_item["source_reference"],
                "DefinedSubjectMatter_concept_label": thesaurus_item["concept_label"],
                "DefinedSubjectMatter_description": thesaurus_item["description"],
            })

pd.DataFrame(
    graphable_usgs_thesaurus_linked_datasets
).to_csv(
    isaid_helpers.f_graphable_sdc_rels_usgs_thesaurus,
    index=False
)
print(
    isaid_helpers.f_graphable_sdc_rels_usgs_thesaurus,
    "CREATED",
    datetime.datetime.fromtimestamp(os.path.getmtime(isaid_helpers.f_graphable_sdc_rels_usgs_thesaurus))
)

%%time
place_terms_in_sdc = list(set([i["term"] for i in all_sdc_terms if i["entity_type"] == "Location"]))
place_terms_in_sdc.sort()

place_concept_labels = [
    'USGS_COMMON_GEOGRAPHIC_AREAS',
    'SOVEREIGN_STATE',
    'US_STATE',
    'SEA_OR_OCEAN',
    'GEOLOGIC_FAULT',
    'NAMED_VOLCANO',
    'NATIONAL_PARK',
    'NATIONAL_MONUMENT',
    'NATIONAL_FOREST',
    'WILD_AND_SCENIC_RIVER',
    'US_TERRITORY',
    'US_COUNTY'
]

place_terms_in_source = list(set([
    i["label"] for i in reference_terms
    if i["concept_label"] in place_concept_labels
    and not i["label"].isnumeric()
]))

verified_place_terms_in_sdc = [i for i in place_terms_in_sdc if i in place_terms_in_source]
verified_place_terms_in_sdc.sort()

graphable_place_linked_datasets = list()
for found_term in verified_place_terms_in_sdc:
    place_term = next((i for i in reference_terms if i["concept_label"] in place_concept_labels and i["label"] == found_term), None)
    if place_term is not None:
        for sdc_term in [i for i in all_sdc_terms if i["entity_type"] == "Location" and i["term"] == found_term]:
            graphable_place_linked_datasets.append({
                "sdc_internal_id": sdc_term["sdc_internal_id"],
                "date_qualifier": sdc_term["date_qualifier"],
                "reference": sdc_term["reference"],
                "DefinedSubjectMatter_name": place_term["label"],
                "DefinedSubjectMatter_source": place_term["source"],
                "DefinedSubjectMatter_source_reference": place_term["source_reference"],
                "DefinedSubjectMatter_concept_label": place_term["concept_label"],
                "DefinedSubjectMatter_url": place_term["url"] if "url" in place_term else place_term["identifier"],
                "DefinedSubjectMatter_description": place_term["description"] if "description" in place_term else None
            })

pd.DataFrame(
    graphable_place_linked_datasets
).to_csv(
    isaid_helpers.f_graphable_sdc_rels_places,
    index=False
)
print(
    isaid_helpers.f_graphable_sdc_rels_places,
    "CREATED",
    datetime.datetime.fromtimestamp(os.path.getmtime(isaid_helpers.f_graphable_sdc_rels_places))
)

%%time
df_graphable_contacts = graphable_contacts_from_sdc(
    sdc_cache=sdc_cache,
    return_format="dataframe"
)

# Write each relationship type to the file named for it
df_graphable_contacts.loc[df_graphable_contacts.rel_type == "POINT_OF_CONTACT"].to_csv(isaid_helpers.f_graphable_sdc_rels_poc, index=False)
print(
    isaid_helpers.f_graphable_sdc_rels_poc,
    "CREATED",
    datetime.datetime.fromtimestamp(os.path.getmtime(isaid_helpers.f_graphable_sdc_rels_poc))
)

df_graphable_contacts.loc[df_graphable_contacts.rel_type == "METADATA_CONTACT"].to_csv(isaid_helpers.f_graphable_sdc_rels_md, index=False)
print(
    isaid_helpers.f_graphable_sdc_rels_md,
    "CREATED",
    datetime.datetime.fromtimestamp(os.path.getmtime(isaid_helpers.f_graphable_sdc_rels_md))
)

df_graphable_contacts.loc[df_graphable_contacts.rel_type == "AUTHOR_OF"].to_csv(isaid_helpers.f_graphable_sdc_rels_author, index=False)
print(
    isaid_helpers.f_graphable_sdc_rels_author,
    "CREATED",
    datetime.datetime.fromtimestamp(os.path.getmtime(isaid_helpers.f_graphable_sdc_rels_author))
)

pd.read_csv(isaid_helpers.f_graphable_sdc_rels_places)
```
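The matching cells above rescan `all_sdc_terms` and `reference_terms` with nested comprehensions for every term. A possible refinement, sketched under my own naming (`terms_by_label` and `sdc_terms_by_term` are illustrative helpers, not part of the workflow above), is to index both collections by label once and then join on the shared keys:

```
from collections import defaultdict

# Sketch: index both collections by label/term once, then join on the keys.
terms_by_label = defaultdict(list)
for ref in reference_terms:
    terms_by_label[ref["label"]].append(ref)

sdc_terms_by_term = defaultdict(list)
for sdc_term in all_sdc_terms:
    sdc_terms_by_term[sdc_term["term"]].append(sdc_term)

# A label-keyed join is then a single pass over the shared keys
shared_labels = set(terms_by_label) & set(sdc_terms_by_term)
print(len(shared_labels), "labels appear in both collections")
```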
```
import os.path
import wget
import os
import psycopg2
from sqlalchemy import create_engine
import io
import re
import pandas as pd

# Set working directory
os.chdir('/home/james/kungfauxpandas/data/synpuf/synpufetch')

# Fetch html files
pth = 'https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/SynPUFs/'
for sampnum in range(21):
    if sampnum >= 1:
        thisfile = 'DESample' + str(sampnum).zfill(2) + '.html'
        thisurl = os.path.join(pth, thisfile)
        #print(thisurl)
        filename = wget.download(thisurl)
        print('Fetched ', filename)

# Parse files to get data urls
import re

ff = [f for f in os.listdir('.') if f.endswith('.html')]  # the html pages fetched above

allfiles = []
for fl in ff:
    txt = open(fl, 'r').read()
    m = re.findall('<a href.*?zip.*?/a>', txt)
    #print([xx+'\n' for xx in m])
    allfiles += m

niceurls = []
localpath = 'https://www.cms.gov/'
for fff in allfiles:
    #print(fff)
    if fff.startswith('<a href="http://'):
        niceurls.append(fff.split('<a href="')[1].split('.zip')[0] + '.zip')
    else:
        niceurls.append(localpath + fff.split('<a href="')[1].split('.zip')[0] + '.zip')
#print(niceurls)

for nu in niceurls:
    try:
        print('Fetching ', nu.split('/')[-1])
        filename = wget.download(nu)
    except Exception as ee:
        print('Failed. ' + str(ee))

print(niceurls[0])

# Extract Files
import zipfile

for thisfile in os.listdir('.'):
    if thisfile.endswith('.zip'):
        print('Unzipping ', thisfile)
        zipper = zipfile.ZipFile(thisfile, 'r')
        zipper.extractall('.')
        zipper.close()

ff

ff

def get_table_name(fns):
    if fns.endswith('.csv'):
        m = re.match(r"(.*20\d{2})_(\w+)_(\d{1,2}\w?\.csv)", fns)
        return m.group(2)

def fast_upload(df, engine, tblname):
    # engine=create_engine('postgresql+psycopg2://username:password@host:port/database')
    # engine = create_engine('postgresql+psycopg2://sympuf:D2V1!@localhost:5432/sympuf')
    conn = engine.raw_connection()
    cur = conn.cursor()
    output = io.StringIO()
    df.to_csv(output, sep='\t', header=False, index=False)
    output.seek(0)
    contents = output.getvalue()
    cur.copy_from(output, tblname, null="")  # null values become ''
    conn.commit()
    conn.close()

engine = create_engine('postgresql+psycopg2://sympuf:D2V1!@localhost:5432/sympuf')

ff = os.listdir('.')
start_offset = 93
for fn in enumerate(ff[start_offset:]):
    f = fn[1]
    this_n = start_offset + fn[0] - 1
    if f.endswith('.csv'):
        thistblname = get_table_name(f).lower()
        try:
            print('Reading', f, 'File #', this_n)
            df = pd.read_csv(f)
            print('Adding', f, 'to database')
            df.iloc[0:0].to_sql(thistblname, engine, if_exists='append', index=False)  # create empty table if it does not exist
            print('Pushing rows to', thistblname)
            fast_upload(df, engine, thistblname)
        except Exception as e:
            print('Error inserting file', f, ' ', e)

#ff.index('DE1_0_2008_to_2010_Carrier_Claims_Sample_10A.csv')
#ff.index('DE1_0_2008_to_2010_Carrier_Claims_Sample_11B.csv')
#ff.index('DE1_0_2008_to_2010_Prescription_Drug_Events_Sample_4.csv')

ff[-1]
```
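The load loop above reads each SynPUF CSV fully into memory before uploading. As a sketch of an alternative (reusing `get_table_name()` and `fast_upload()` defined above; the chunk size and the helper name `chunked_upload` are my own illustrative choices), pandas can stream the file in chunks:

```
# Sketch: stream large SynPUF CSVs in chunks instead of loading each file whole.
def chunked_upload(path, engine, chunksize=100_000):
    tblname = get_table_name(path).lower()
    for i, chunk in enumerate(pd.read_csv(path, chunksize=chunksize)):
        if i == 0:
            # create the (empty) table from the first chunk's schema
            chunk.iloc[0:0].to_sql(tblname, engine, if_exists='append', index=False)
        fast_upload(chunk, engine, tblname)
        print('Pushed chunk', i, 'to', tblname)
```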
<h1>The Fišer problem from a bird's-eye view</h1>

<div style="width:70%; margin-top: 1em;">
<p>Motto:</p>
<p><i>Science is built of facts the way a house is built of bricks: but an accumulation of facts is no more science than a pile of bricks is a house.</i>
<div style="float: right;">Henri Poincaré</div>
</p>
</div>

<h2>The big picture:</h2>
<ul>
<li>We have measured data,</li>
<li>a relationship the data should satisfy,</li>
<li>a method that carries out the fit</li>
<li>and visualizes the results.</li>
</ul>
<p>
These pieces are meaningless on their own; only by putting them together do we get something useful.
</p>
<p>Another example is vectors: data + the operations on them.</p>

<h2>Class, object</h2>
<pre>
class jmeno:
    def __init__(self,..):
        self... = ...

    def funkce(self):
        self.promenna = ...
#end class
</pre>

```
class fis:
    def __init__(self, x):   # initialization
        self.x = x           # known only inside fis

    def fun(self):           # inside fis, methods take self
        print("self.x={0}".format(self.x))   # self.x is the x stored on the instance; otherwise x would only exist inside fun()

c = fis(123)
c.fun()

d = fis(['a','b'])
d.fun()
```

<h3>Properties:</h3>
<ul>
<li>Bundling different logical objects into a single object.</li>
<li>Separation of logical parts (keeping the chaos down).</li>
<li>Easier handling of large systems.</li>
<li>"... we attack with the dot operator ..." (Ziki)</li>
</ul>

<h2>The basic Fišer class</h2>

```
import pandas
import scipy.optimize

class Fisher:
    def __init__(self, filename):
        self.popt = []
        self.pcov = []
        # load the data
        data = pandas.read_csv(filename)
        # relationship C = A * B
        self.T = data['T']
        self.C = data['A'] * data['B']

    def f(self, x, C0, C1):
        return C0 * x + C1

    def fit(self):
        self.popt, self.pcov = scipy.optimize.curve_fit(self.f, self.T, self.C)
        print("Best fit line parameters: C0={0:.3f}, C1={1:.3f}".format(self.popt[0], self.popt[1]))

# Solving the Fišer problem
problem = Fisher('data.csv')
problem.fit()
```

<h2>A modified Fišer class</h2>

```
class FisherModified(Fisher):
    def f(self, x, C0, C1):
        return C0 * x + C1 - 30

problem = FisherModified('data.csv')
problem.fit()
```

<h2>The Fišer class with a plot</h2>

```
import matplotlib.pyplot as plt
%matplotlib inline

# matplotlib initialization
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 12, 7
plt.rcParams['axes.labelsize'] = 25
plt.rcParams['axes.titlesize'] = 25
plt.rcParams['font.size'] = 25
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 12
plt.rcParams['legend.fontsize'] = 25

class FisherGraph(Fisher):
    def __init__(self, filename):
        # initialize Fisher
        Fisher.__init__(self, filename)

    def graph(self, xlim, xlabel, ylabel, title):
        if len(self.popt) == 0:
            print("Variable popt is undefined. Run .fit() first of all.")
            return
        y = self.f(self.T, self.popt[0], self.popt[1])   # y = C0 * x + C1
        plt.clf()
        plt.cla()
        plt.plot(self.T, self.C, 'o', label='data')
        plt.plot(self.T, y, label=r'$y = C_0 \, x + C_1$')
        plt.legend(loc='best')
        plt.xlim(xlim)
        plt.xlabel(xlabel)
        plt.ylabel(ylabel)
        plt.title(title)

problem = FisherGraph('data.csv')
problem.fit()
problem.graph([14.5, 21.5], "$T$", "$C(T)$", "Fisher's problem solved !")
problem.graph([10, 25], "$T_7$", "$C^{(1)}(T_7)$", "Problem due Fisher solved !")
```

<p>... now step boldly out on the path ...</p>
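A small optional aside (not in the original notebook): in Python 3 the parent initializer can also be called through `super()`, so the subclass does not have to repeat the base-class name. A minimal sketch of `FisherGraph.__init__` written that way:

```
class FisherGraph(Fisher):
    def __init__(self, filename):
        # same effect as Fisher.__init__(self, filename), without hard-coding the parent class
        super().__init__(filename)
```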
github_jupyter
class fis:

    def __init__(self,x):   # initialization
        self.x = x          # known only inside fis

    def fun(self):          # self refers to the object inside fis
        print("self.x={0}".format(self.x))   # self.x means x inside fis; otherwise it would exist only inside fun()

c = fis(123)
c.fun()

d = fis(['a','b'])
d.fun()

import pandas
import scipy.optimize

class Fisher:

    def __init__(self,filename):
        self.popt = []
        self.pcov = []
        # load the data
        data = pandas.read_csv(filename)
        # relationship C = A * B
        self.T = data['T']
        self.C = data['A'] * data['B']

    def f(self,x,C0,C1):
        return C0 * x + C1

    def fit(self):
        self.popt, self.pcov = scipy.optimize.curve_fit(self.f, self.T, self.C)
        print("Best fit line parameters: C0={0:.3f}, C1={1:.3f}".format(self.popt[0],self.popt[1]))

# Solving Fisher's problem
problem = Fisher('data.csv')
problem.fit()

class FisherModified(Fisher):

    def f(self,x,C0,C1):
        return C0 * x + C1 - 30

problem = FisherModified('data.csv')
problem.fit()

import matplotlib.pyplot as plt
%matplotlib inline

# matplotlib initialization
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 12, 7
plt.rcParams['axes.labelsize'] = 25
plt.rcParams['axes.titlesize'] = 25
plt.rcParams['font.size'] = 25
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 12
plt.rcParams['legend.fontsize'] = 25

class FisherGraph(Fisher):

    def __init__(self,filename):
        # initialize Fisher
        Fisher.__init__(self,filename)

    def graph(self,xlim,xlabel,ylabel,title):
        if len(self.popt) == 0:
            print("Variable popt is undefined. Run .fit() first of all.")
            return

        y = self.f(self.T, self.popt[0], self.popt[1])   # y = C0 * x + C1

        plt.clf()
        plt.cla()
        plt.plot(self.T, self.C, 'o', label='data')
        plt.plot(self.T, y, label=r'$y = C_0 \, x + C_1$')
        plt.legend(loc='best')
        plt.xlim(xlim)
        plt.xlabel(xlabel)
        plt.ylabel(ylabel)
        plt.title(title)

problem = FisherGraph('data.csv')
problem.fit()
problem.graph([14.5, 21.5],"$T$","$C(T)$","Fisher's problem solved !")
problem.graph([10, 25],"$T_7$","$C^{(1)}(T_7)$","Problem due Fisher solved !")
0.529507
0.842345
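The `FisherModified` example above shows the main payoff of the class design: a subclass overrides only the model function `f()`, while the data handling and `fit()` are inherited. The sketch below repeats that pattern on synthetic data so it runs without `data.csv`; the class and variable names are illustrative and not from the original notebook.

```
import numpy as np
import scipy.optimize

class LineFit:

    def __init__(self, x, y):
        self.x = np.asarray(x, dtype=float)
        self.y = np.asarray(y, dtype=float)
        self.popt = []

    def f(self, x, C0, C1):
        return C0 * x + C1                  # the model; subclasses may override this

    def fit(self):
        self.popt, self.pcov = scipy.optimize.curve_fit(self.f, self.x, self.y)
        print("C0={0:.3f}, C1={1:.3f}".format(self.popt[0], self.popt[1]))

class QuadraticFit(LineFit):

    def f(self, x, C0, C1):
        return C0 * x**2 + C1               # only the model changes, fit() is inherited

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)   # synthetic "measured" data
LineFit(x, y).fit()
QuadraticFit(x, y).fit()
```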
``` import googlemaps import psycopg2 import pandas as pd import copy import mysql.connector from mysql.connector import Error from mysql.connector import errorcode dwh_connection = mysql.connector.connect(host='', db='', user='', password = '') def check_exist_address(address, connection): cursor = connection.cursor() cursor.execute(f"Select count(id) from geocode where address='{address}'") res = cursor.fetchall() if res[0][0]==0: return False return True def get_geocode_from_db(address, connection): cursor = connection.cursor() cursor.execute(f"select lat, lng from geocode where address='{address}'") res = cursor.fetchall() return {"lat": float(res[0][0]), "lng":float(res[0][1])} def insert_to_geocode_db(address, lat, lng, connection): try: cursor = connection.cursor() sql_insert_query = f""" INSERT INTO `geocode` (`address`, `lat`, `lng`) VALUES ('{address}',{lat},{lng})""" result = cursor.execute(sql_insert_query) connection.commit() # print ("Record inserted successfully into python_users table") except mysql.connector.Error as error : connection.rollback() # print("Failed to insert into MySQL table {}".format(error)) def get_geocode(address, connection): if check_exist_address(address, connection): geocode = get_geocode_from_db(address, connection) return geocode else: geocode = gmaps.geocode(address) if len(geocode) != 0: geocode = geocode[0]['geometry']['location'] insert_to_geocode_db(address, geocode['lat'], geocode['lng'], connection) return {'lat':geocode['lat'],'lng':geocode['lng']} def add_geocode(row): row['geocode'] = get_geocode(row['address'], dwh_connection) return row gmaps_key = 'AIzaSyBeOFG9fxq9MEcCWRURGhONT0AltVBOEVY' gmaps = googlemaps.Client(key=gmaps_key) shippo_connection = psycopg2.connect(host="", database = "", user= "", password = "") possible_shipper_locations_sql = """Select assignee, case when type = 'PICKUP' then pickup_full_address else deliver_full_address end "address" from public.transportation_tasks where created_at > current_date - interval '2 days' and status in ('IN_PROCESS','NEW')""" possible_shipper_locations = pd.read_sql_query(possible_shipper_locations_sql, shippo_connection) print('Get shipper possible address Done') ## clean address possible_shipper_locations.address = possible_shipper_locations.address.str.strip() possible_shipper_locations.address = possible_shipper_locations.address.str.replace("'",' ') possible_shipper_locations = possible_shipper_locations.apply(add_geocode, axis=1) shipper_locations = dict(possible_shipper_locations.groupby("assignedTo")['geocode'].apply(list)) print("Add Geocode Done") with open("./data/shipper_locations.txt",'w+') as file: for shipper in shipper_locations: file.write(f'{shipper}\t{shipper_locations[shipper]}\n') print("Successful") ```
github_jupyter
import googlemaps import psycopg2 import pandas as pd import copy import mysql.connector from mysql.connector import Error from mysql.connector import errorcode dwh_connection = mysql.connector.connect(host='', db='', user='', password = '') def check_exist_address(address, connection): cursor = connection.cursor() cursor.execute(f"Select count(id) from geocode where address='{address}'") res = cursor.fetchall() if res[0][0]==0: return False return True def get_geocode_from_db(address, connection): cursor = connection.cursor() cursor.execute(f"select lat, lng from geocode where address='{address}'") res = cursor.fetchall() return {"lat": float(res[0][0]), "lng":float(res[0][1])} def insert_to_geocode_db(address, lat, lng, connection): try: cursor = connection.cursor() sql_insert_query = f""" INSERT INTO `geocode` (`address`, `lat`, `lng`) VALUES ('{address}',{lat},{lng})""" result = cursor.execute(sql_insert_query) connection.commit() # print ("Record inserted successfully into python_users table") except mysql.connector.Error as error : connection.rollback() # print("Failed to insert into MySQL table {}".format(error)) def get_geocode(address, connection): if check_exist_address(address, connection): geocode = get_geocode_from_db(address, connection) return geocode else: geocode = gmaps.geocode(address) if len(geocode) != 0: geocode = geocode[0]['geometry']['location'] insert_to_geocode_db(address, geocode['lat'], geocode['lng'], connection) return {'lat':geocode['lat'],'lng':geocode['lng']} def add_geocode(row): row['geocode'] = get_geocode(row['address'], dwh_connection) return row gmaps_key = 'AIzaSyBeOFG9fxq9MEcCWRURGhONT0AltVBOEVY' gmaps = googlemaps.Client(key=gmaps_key) shippo_connection = psycopg2.connect(host="", database = "", user= "", password = "") possible_shipper_locations_sql = """Select assignee, case when type = 'PICKUP' then pickup_full_address else deliver_full_address end "address" from public.transportation_tasks where created_at > current_date - interval '2 days' and status in ('IN_PROCESS','NEW')""" possible_shipper_locations = pd.read_sql_query(possible_shipper_locations_sql, shippo_connection) print('Get shipper possible address Done') ## clean address possible_shipper_locations.address = possible_shipper_locations.address.str.strip() possible_shipper_locations.address = possible_shipper_locations.address.str.replace("'",' ') possible_shipper_locations = possible_shipper_locations.apply(add_geocode, axis=1) shipper_locations = dict(possible_shipper_locations.groupby("assignedTo")['geocode'].apply(list)) print("Add Geocode Done") with open("./data/shipper_locations.txt",'w+') as file: for shipper in shipper_locations: file.write(f'{shipper}\t{shipper_locations[shipper]}\n') print("Successful")
0.25488
0.200401
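The cache functions above interpolate the address directly into the SQL with f-strings, which is why apostrophes have to be stripped from the addresses and which leaves the statements open to injection. Below is a sketch of the same read-through cache written with parameterized queries; the connection settings are placeholders, and unlike the original it simply returns `None` when the Geocoding API has no result instead of failing.

```
import mysql.connector

def get_cached_geocode(connection, address):
    cursor = connection.cursor()
    cursor.execute("SELECT lat, lng FROM geocode WHERE address = %s", (address,))
    row = cursor.fetchone()
    cursor.close()
    if row is None:
        return None
    return {"lat": float(row[0]), "lng": float(row[1])}

def cache_geocode(connection, address, lat, lng):
    cursor = connection.cursor()
    cursor.execute("INSERT INTO geocode (address, lat, lng) VALUES (%s, %s, %s)",
                   (address, lat, lng))
    connection.commit()
    cursor.close()

def get_geocode_safe(connection, gmaps_client, address):
    cached = get_cached_geocode(connection, address)
    if cached is not None:
        return cached
    results = gmaps_client.geocode(address)          # fall through to the API on a cache miss
    if not results:
        return None                                  # no match: return None instead of raising
    location = results[0]['geometry']['location']
    cache_geocode(connection, address, location['lat'], location['lng'])
    return {"lat": location['lat'], "lng": location['lng']}

# usage, with placeholder credentials:
# conn = mysql.connector.connect(host='...', db='...', user='...', password='...')
# get_geocode_safe(conn, gmaps, '1600 Amphitheatre Parkway, Mountain View, CA')
```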
# PYTHON Pandas - Groupby Herhangi bir groupby işlemi, özgün nesne üzerinde aşağıdaki işlemlerden birini içerir. Bunlar: - Nesneyi bölme - Bir işlev uygulama - Sonuçları birleştirmek Birçok durumda, verileri kümelere bölüyoruz ve her bir alt kümeye bazı işlevler uyguluyoruz. Uygula işlevinde, aşağıdaki işlemleri gerçekleştirebiliriz. - Aggregation(toplama): Özet istatistiği hesaplar - Transformation(dönüşüm): Gruba özgü bazı işlemleri gerçekleştirir - Filtration(filtre): Verileri bazı koşullarla atar. Şimdi bir DataFrame nesnesi oluşturalım ve üzerindeki tüm işlemleri gerçekleştirelim. ``` import pandas as pd data = { 'Kisiler': ['Furkan','Kemal','Osman','Ayse','Zeynep','Merve'], 'Yas': [21,24,32,23,15,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1987,1996,2004,1992], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df ``` ## Verileri Gruplara Bölme Bir nesneyi bölmenin birçok yolu vardır. Bunlar: - obj.groupby('key') - obj.groupby(['key1','key2']) - obj.groupby(key,axis=1) Şimdi gruplama nesnelerinin DataFrame nesnesine nasıl uygulanabileceğini görelim. **Örnek** ``` import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df.groupby('Takım') ``` ## Grupları Görüntüleme ``` import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df.groupby('Takım').groups ``` **Örnek** ``` import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df.groupby(['Takım','Yas']).groups ``` ## Gruplar Arasında Yineleme Groupby nesnesi el ile, itertools'a benzer nesne boyunca yineleyebiliriz. ``` import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') for name,group in grouped: print(name) print(group) print('\n') ``` ## Grup Seçmek Get_group () yöntemini kullanarak, tek bir grup seçebiliriz. ``` import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') grouped.get_group(1998) ``` ## Aggregations(toplama) Toplanmış bir işlev, her grup için tek bir toplanmış değer döndürür. Groupby nesnesi oluşturulduktan sonra, gruplanmış veriler üzerinde birkaç toplama işlemi gerçekleştirilebilir. 
``` import pandas as pd import numpy as np data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') grouped['Numaralari'].agg(np.mean) ``` ### Aynı anda birden fazla toplama işlevi uygulamak Gruplandırılmış serilerle, toplama yapmak için bir liste veya işlev dict'i de geçirebilir ve çıktı olarak DataFrame oluşturabilirsiniz. ``` import pandas as pd import numpy as np data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') grouped['Numaralari'].agg([np.sum,np.mean,np.std]) ``` ## Transformations (Dönüşüm) Bir grup veya sütun dönüştürme, gruplandırılmış aynı boyutta dizinlenmiş bir nesne döndürür. Bu nedenle, dönüşüm, bir grup yığınının boyutuyla aynı olan bir sonuç döndürmelidir. ``` import pandas as pd import numpy as np data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') score = lambda x: (x-x.mean()) / x.std()*10 grouped.transform(score) ``` ## Filtration (filtre) Filtrasyon, tanımlanmış bir ölçütteki verileri filtreler ve verilerin alt kümesini döndürür. filter() işlevi verileri filtrelemek için kullanılır. ``` import pandas as pd import numpy as np data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df.groupby('Takım').filter(lambda x: len(x) >= 2) ```
github_jupyter
import pandas as pd data = { 'Kisiler': ['Furkan','Kemal','Osman','Ayse','Zeynep','Merve'], 'Yas': [21,24,32,23,15,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1987,1996,2004,1992], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df.groupby('Takım') import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df.groupby('Takım').groups import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df.groupby(['Takım','Yas']).groups import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') for name,group in grouped: print(name) print(group) print('\n') import pandas as pd data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') grouped.get_group(1998) import pandas as pd import numpy as np data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') grouped['Numaralari'].agg(np.mean) import pandas as pd import numpy as np data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') grouped['Numaralari'].agg([np.sum,np.mean,np.std]) import pandas as pd import numpy as np data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) grouped = df.groupby('Dyili') score = lambda x: (x-x.mean()) / x.std()*10 grouped.transform(score) import pandas as pd import numpy as np data = { 'Takım': ['Furkan','Osman','Osman','Merve','Furkan','Merve'], 'Yas': [21,27,25,25,21,27], 'Grup': [1,3,2,3,1,2], 'Dyili': [1998,1995,1996,1996,1995,1998], 'Numaralari': [753,268,257,478,469,135] } df = pd.DataFrame(data) df.groupby('Takım').filter(lambda x: len(x) >= 2)
0.173078
0.920825
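The aggregation section above mentions passing a list or a dict of functions to `agg`. The short example below, on a made-up frame with the same columns, shows the per-column dict form and the equivalent named-aggregation form that yields flat, explicitly named output columns.

```
import pandas as pd

df = pd.DataFrame({
    'Takım': ['Furkan', 'Osman', 'Osman', 'Merve', 'Furkan', 'Merve'],
    'Yas': [21, 27, 25, 25, 21, 27],
    'Numaralari': [753, 268, 257, 478, 469, 135],
})

# dict form: a different aggregation recipe for each column
print(df.groupby('Takım').agg({'Yas': 'mean', 'Numaralari': ['min', 'max']}))

# named aggregation: flat, explicitly named output columns
print(df.groupby('Takım').agg(
    Yas_ortalama=('Yas', 'mean'),
    Numara_max=('Numaralari', 'max'),
))
```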
## Keras MNIST Fashion Save Model Example Single fully connected hidden layer exported for prediction on device with tensor/io. Exported using the keras `model.save` api. Based on https://www.tensorflow.org/tutorials/keras/classification ``` import os import numpy as np import tensorflow as tf import tensorflow_hub as hub from tensorflow.keras import layers import PIL.Image as Image import matplotlib.pylab as plt %matplotlib inline def enable_memory_growth(): physical_devices = tf.config.experimental.list_physical_devices('GPU') try: tf.config.experimental.set_memory_growth(physical_devices[0], True) # tf.config.gpu.set_per_process_memory_growth(True) # tf.config.gpu.set_per_process_memory_fraction(0.75) except: print('Invalid device or cannot modify virtual devices once initialized.') if "TF_GPU_GROWTH" in os.environ: print("Enabling GPU memory growth") enable_memory_growth() ``` ## Fashion MNIST ``` fashion_mnist = tf.keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() class_names = [ 'T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot' ] plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() train_images = train_images / 255.0 test_images = test_images / 255.0 plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() ``` ## Model ``` def make_model(): model = tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(10) ]) return model model = make_model() model.summary() model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit(train_images, train_labels, epochs=10) test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print('\nTest accuracy:', test_acc) probability_model = tf.keras.Sequential([ model, tf.keras.layers.Softmax() ]) predictions = probability_model.predict(test_images) predictions[0] np.argmax(predictions[0]) def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array, true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array, true_label[i] plt.grid(False) plt.xticks(range(10)) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions[i], test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions[i], test_labels) plt.show() # Plot the first X test images, their predicted labels, and the true labels. # Color correct predictions in blue and incorrect predictions in red. 
num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions[i], test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions[i], test_labels) plt.tight_layout() plt.show() ``` ## Export with model.save ``` PATH = 'tmp/keras-mnist-fashion-save-model' ! rm -r 'tmp/keras-mnist-fashion-save-model' model.save(PATH, save_format='tf') ``` ### Results ``` ! saved_model_cli show --all --dir tmp/keras-mnist-fashion-save-model/ ``` ### Tensor/IO Note in the corresponding model.json that the name and shape of the inputs and outputs matches the values you see in the signature definition. Take special care to note that the name is taken from the layer's name and not from the key in the inputs or outputs dictionary: ``` inputs['flatten_input'] tensor_info: dtype: DT_FLOAT shape: (-1, 28, 28) name: serving_default_flatten_input:0 outputs['dense_1'] tensor_info: dtype: DT_FLOAT shape: (-1, 10) name: StatefulPartitionedCall:0 ``` ``` ! cat model.json ``` I can execute this model on device with Tensor/IO, although I'm not sure if I need to initialize variables differently given the `__saved_model_init_op` listed above. I believe I use a session global variables initializer in the C++ code.
github_jupyter
import os import numpy as np import tensorflow as tf import tensorflow_hub as hub from tensorflow.keras import layers import PIL.Image as Image import matplotlib.pylab as plt %matplotlib inline def enable_memory_growth(): physical_devices = tf.config.experimental.list_physical_devices('GPU') try: tf.config.experimental.set_memory_growth(physical_devices[0], True) # tf.config.gpu.set_per_process_memory_growth(True) # tf.config.gpu.set_per_process_memory_fraction(0.75) except: print('Invalid device or cannot modify virtual devices once initialized.') if "TF_GPU_GROWTH" in os.environ: print("Enabling GPU memory growth") enable_memory_growth() fashion_mnist = tf.keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() class_names = [ 'T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot' ] plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() train_images = train_images / 255.0 test_images = test_images / 255.0 plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() def make_model(): model = tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(10) ]) return model model = make_model() model.summary() model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit(train_images, train_labels, epochs=10) test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print('\nTest accuracy:', test_acc) probability_model = tf.keras.Sequential([ model, tf.keras.layers.Softmax() ]) predictions = probability_model.predict(test_images) predictions[0] np.argmax(predictions[0]) def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array, true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array, true_label[i] plt.grid(False) plt.xticks(range(10)) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions[i], test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions[i], test_labels) plt.show() # Plot the first X test images, their predicted labels, and the true labels. # Color correct predictions in blue and incorrect predictions in red. num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions[i], test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions[i], test_labels) plt.tight_layout() plt.show() PATH = 'tmp/keras-mnist-fashion-save-model' ! 
rm -r 'tmp/keras-mnist-fashion-save-model' model.save(PATH, save_format='tf') ! saved_model_cli show --all --dir tmp/keras-mnist-fashion-save-model/ inputs['flatten_input'] tensor_info: dtype: DT_FLOAT shape: (-1, 28, 28) name: serving_default_flatten_input:0 outputs['dense_1'] tensor_info: dtype: DT_FLOAT shape: (-1, 10) name: StatefulPartitionedCall:0 ! cat model.json
0.732879
0.922273
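Before wiring the export into Tensor/IO, one way to sanity-check it is to reload the SavedModel in Python and call the serving signature directly; on the Python side `tf.saved_model.load` restores the variables itself, so no separate initializer op has to be run here. The path and the `flatten_input`/`dense_1` names below follow the `saved_model_cli` output quoted above, and the random batch is just a stand-in for a scaled test image.

```
import numpy as np
import tensorflow as tf

loaded = tf.saved_model.load('tmp/keras-mnist-fashion-save-model')
infer = loaded.signatures['serving_default']

batch = np.random.rand(1, 28, 28).astype(np.float32)    # stand-in for a scaled test image
outputs = infer(flatten_input=tf.constant(batch))

logits = outputs['dense_1'].numpy()                     # shape (1, 10), matching the signature above
print('predicted class:', int(np.argmax(logits, axis=1)[0]))
```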
# Multi-layer Perceptron (MLP) Neural Network Implementation in Padasip - Basic Examples This tutorial explains how to use MLP through several examples. Lets start with importing Padasip. In the following examples we will also use numpy and matplotlib. ``` import numpy as np import matplotlib.pylab as plt import padasip as pa %matplotlib inline plt.style.use('ggplot') # nicer plots np.random.seed(52102) # always use the same random seed to make results comparable ``` ## Classification According to a Truth Table This task is strongly artificial, because if you know the full truth table of a function, you do not need any classificator. However, it is good simple example for understanding how to MLP can be used. Let us consider a discrete function described by following table <table style="width:80%"> <tr> <td># of input combination</td> <td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td> <td>8</td><td>9</td><td>10</td><td>11</td><td>12</td><td>13</td><td>14</td><td>15</td> </tr> <tr> <td>Input $x_1$</td> <td>0</td><td>1</td><td>0</td><td>1</td><td>0</td><td>1</td><td>0</td><td>1</td> <td>0</td><td>1</td><td>0</td><td>1</td><td>0</td><td>1</td><td>0</td><td>1</td> </tr> <tr> <td>Input $x_2$</td> <td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>1</td> <td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>1</td> </tr> <tr> <td>Input $x_3$</td> <td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>1</td> <td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>1</td> </tr> <tr> <td>Input $x_4$</td> <td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td> <td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td> </tr> <tr> <td>Output - Target $d$</td> <td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td> <td>1</td><td>0</td><td>1</td><td>0</td><td>1</td><td>1</td><td>1</td><td>0</td> </tr> </table> The task is to train the MLP, that it will produce correct value $\tilde y(k) = d(k)$ every time, when we pass another input vector $\textbf{x}(k) = [x_1(k), x_2(k), x_3(k), x_4(k)]$ to the network. Now, how to create the MLP neural network with padasip: ``` nn = pa.ann.NetworkMLP([5,6], 5, outputs=1, activation="tanh") ``` where: * the first argument (value `[5, 6]`) stands for amount of nodes in hidden layers. The first layer has 5 nodes, and the second layer has 6 nodes. If you use [3, 10, 3] instead, you would get three layers with 3, 10 and 3 nodes. * the second argument (value 5) stands for number of inputs (features). * the kwarg `outputs=1` says that we want just one output node (it is possible to have more) * and the kwarg `activation="tanh"` stands for activation function what we want to use. In this case it is hyperbolic tangens. 
And the full working example: ``` # data creation x = np.array([ [0,0,0,0], [1,0,0,0], [0,1,0,0], [1,1,0,0], [0,0,1,0], [1,0,1,0], [0,1,1,0], [1,1,1,0], [0,0,0,1], [1,0,0,1], [0,1,0,1], [1,1,0,1], [0,0,1,1], [1,0,1,1], [0,1,1,1], [1,1,1,1] ]) d = np.array([0,1,1,0,0,1,0,0,1,0,1,0,1,1,1,0]) N = len(d) n = 4 # creation of neural network (again) nn = pa.ann.NetworkMLP([5,6], n, outputs=1, activation="tanh") # training e, mse = nn.train(x, d, epochs=200) # see how it works (validation) y = nn.run(x[-1000:]) # display of the result plt.figure(figsize=(13,12)) plt.subplot(311) plt.plot(e) plt.title("Error during training"); plt.ylabel("Error"); plt.xlabel("Number of iteration") plt.subplot(312) plt.plot(10*np.log10(mse)) plt.title("10 times logarithm of mean-square-error (MSE) during training"); plt.ylabel("MSE [dB]"); plt.xlabel("Number of epoch") plt.subplot(313) plt.plot(d, label="Target") plt.plot(y, label="MLP output") plt.title("The final result"); plt.ylabel("Value"); plt.xlabel("# of input combination") plt.legend(); plt.tight_layout(); plt.show() ``` Note, that after just only 200 epochs (200 times 16 iterations), and we obtained a pretty impressive result! # Time series prediction Discrete-time Mackey-Glass chaotic time serie according to following equation: $d(k+1) = p_1 \cdot d(k) + \frac{\large{p_2 \cdot d(k-p_3)}}{\large{p_2 + d^{\large{p_5}}(k-p_4)}}$ Part of the generated data we will use for training (in multiple epochs), and another part for validation - one run with no MLP update. See following code and figure. ``` N = 3000 p1 = 0.2; p2 = 0.8; p3 = 0.9; p4 = 20; p5 = 10.0 d = np.zeros(N) d[0] = 0.1 for k in range(0,N-1): d[k+1] = (p3*d[k]) + ( (p1*d[k-p4]) / (p2 + ( d[k-p4]**p5)) ) plt.figure(figsize=(13,5)) plt.plot(range(N-2000), d[:-2000], label="Not used") plt.plot(range(N-2000, N-1000), d[-2000:-1000], label="Training") plt.plot(range(N-1000, N), d[-1000:], label="Validation (no adapt)") plt.legend(); plt.tight_layout(); plt.show() ``` And now the full working example ``` # data creation N = 3000 p1 = 0.2; p2 = 0.8; p3 = 0.9; p4 = 20; p5 = 10.0 d = np.zeros(N) d[0] = 0.1 for k in range(0,N-1): d[k+1] = (p3*d[k]) + ( (p1*d[k-p4]) / (p2 + ( d[k-p4]**p5)) ) # data normalization d = (d - d.mean()) / d.std() # input forming from historic values n = 30 x = pa.input_from_history(d, n)[:-1] d = d[n:] N = len(d) # creation of new neural network nn = pa.ann.NetworkMLP([10,20,10], n, outputs=1, activation="sigmoid") # training e, mse = nn.train(x[1000:2000], d[1000:2000], epochs=300) # see how it works (validation) y = nn.run(x[-1000:]) # result display plt.figure(figsize=(13,6)) plt.subplot(211) plt.plot(10*np.log10(mse)) plt.title("10 times logarithm of mean-square-error (MSE) during training"); plt.ylabel("MSE [dB]"); plt.xlabel("Number of epoch") plt.subplot(212) plt.plot(d[-1000:], label="Target") plt.plot(y, label="MLP output") plt.title("The final result"); plt.ylabel("Value"); plt.xlabel("# of input combination") plt.legend(); plt.tight_layout(); plt.show() ``` The prediction output is pretty good, if we consider the fact, that the used time series is produced by chaotic system. And it is possible to achieve even better result, if you increase the number of epochs for training. # MLP as a Real-time Predictor It is possible and simple to use MLP sample after sample to track the output of system with changing dynamics. Problem is, that MLP is learning speed is low - in comparison with adaptive filters. 
Because of that reason, you can really struggle to train the MLP, or even to be able to follow the changes in the process you want to predict. In this tutorial we will use a really simple example. Let us consider a system described as follows $d(k) = a_1 x_1(k) + a_2 x_2(k) + a_3 x_3(k)$ where * $a_i$ is unknown parameter of the system * $x_i$ is input of the system (random variable with zero mean and 0.5 standard deviation) In this example we can measure all three $x_n$, and we need to find the weights of MLP to replace the system. Here is how to get it done: ``` def measure_x(): # this is your measurement of the process inputs (3 values) x = np.random.normal(0, 0.5, 3) return x def measure_d(x): # this is your measurement of the system output - your target d = 0.8*x[0] + 0.2*x[1] - 1.*x[2] return d # creation of new neural network nn = pa.ann.NetworkMLP([20,20], 3, outputs=1, activation="sigmoid") # run for N samples N = 1000 e = np.zeros(N) for k in range(N): x = measure_x() y = nn.predict(x) # do the stuff with predicted value # ... # when possible, measure what was the real value of output and update MLP d = measure_d(x) e[k] = nn.update(d) plt.figure(figsize=(13,5)) plt.plot(e) plt.title("Error of prediction"); plt.ylabel("Error"); plt.xlabel("Number of iteration") plt.tight_layout(); plt.show() ``` # Final notes * If your MLP is unstable (overflow error or similar), then normalize your data or use the `sigmoid` as activation function rather than `tanh`. * If the learning is too slow, try to reduce the amount of layers or nodes. * Beware of overtraining. If your training error is still decreasing, it does not automatically mean that also your testing error is decreasing.
github_jupyter
import numpy as np import matplotlib.pylab as plt import padasip as pa %matplotlib inline plt.style.use('ggplot') # nicer plots np.random.seed(52102) # always use the same random seed to make results comparable nn = pa.ann.NetworkMLP([5,6], 5, outputs=1, activation="tanh") # data creation x = np.array([ [0,0,0,0], [1,0,0,0], [0,1,0,0], [1,1,0,0], [0,0,1,0], [1,0,1,0], [0,1,1,0], [1,1,1,0], [0,0,0,1], [1,0,0,1], [0,1,0,1], [1,1,0,1], [0,0,1,1], [1,0,1,1], [0,1,1,1], [1,1,1,1] ]) d = np.array([0,1,1,0,0,1,0,0,1,0,1,0,1,1,1,0]) N = len(d) n = 4 # creation of neural network (again) nn = pa.ann.NetworkMLP([5,6], n, outputs=1, activation="tanh") # training e, mse = nn.train(x, d, epochs=200) # see how it works (validation) y = nn.run(x[-1000:]) # display of the result plt.figure(figsize=(13,12)) plt.subplot(311) plt.plot(e) plt.title("Error during training"); plt.ylabel("Error"); plt.xlabel("Number of iteration") plt.subplot(312) plt.plot(10*np.log10(mse)) plt.title("10 times logarithm of mean-square-error (MSE) during training"); plt.ylabel("MSE [dB]"); plt.xlabel("Number of epoch") plt.subplot(313) plt.plot(d, label="Target") plt.plot(y, label="MLP output") plt.title("The final result"); plt.ylabel("Value"); plt.xlabel("# of input combination") plt.legend(); plt.tight_layout(); plt.show() N = 3000 p1 = 0.2; p2 = 0.8; p3 = 0.9; p4 = 20; p5 = 10.0 d = np.zeros(N) d[0] = 0.1 for k in range(0,N-1): d[k+1] = (p3*d[k]) + ( (p1*d[k-p4]) / (p2 + ( d[k-p4]**p5)) ) plt.figure(figsize=(13,5)) plt.plot(range(N-2000), d[:-2000], label="Not used") plt.plot(range(N-2000, N-1000), d[-2000:-1000], label="Training") plt.plot(range(N-1000, N), d[-1000:], label="Validation (no adapt)") plt.legend(); plt.tight_layout(); plt.show() # data creation N = 3000 p1 = 0.2; p2 = 0.8; p3 = 0.9; p4 = 20; p5 = 10.0 d = np.zeros(N) d[0] = 0.1 for k in range(0,N-1): d[k+1] = (p3*d[k]) + ( (p1*d[k-p4]) / (p2 + ( d[k-p4]**p5)) ) # data normalization d = (d - d.mean()) / d.std() # input forming from historic values n = 30 x = pa.input_from_history(d, n)[:-1] d = d[n:] N = len(d) # creation of new neural network nn = pa.ann.NetworkMLP([10,20,10], n, outputs=1, activation="sigmoid") # training e, mse = nn.train(x[1000:2000], d[1000:2000], epochs=300) # see how it works (validation) y = nn.run(x[-1000:]) # result display plt.figure(figsize=(13,6)) plt.subplot(211) plt.plot(10*np.log10(mse)) plt.title("10 times logarithm of mean-square-error (MSE) during training"); plt.ylabel("MSE [dB]"); plt.xlabel("Number of epoch") plt.subplot(212) plt.plot(d[-1000:], label="Target") plt.plot(y, label="MLP output") plt.title("The final result"); plt.ylabel("Value"); plt.xlabel("# of input combination") plt.legend(); plt.tight_layout(); plt.show() def measure_x(): # this is your measurement of the process inputs (3 values) x = np.random.normal(0, 0.5, 3) return x def measure_d(x): # this is your measurement of the system output - your target d = 0.8*x[0] + 0.2*x[1] - 1.*x[2] return d # creation of new neural network nn = pa.ann.NetworkMLP([20,20], 3, outputs=1, activation="sigmoid") # run for N samples N = 1000 e = np.zeros(N) for k in range(N): x = measure_x() y = nn.predict(x) # do the stuff with predicted value # ... # when possible, measure what was the real value of output and update MLP d = measure_d(x) e[k] = nn.update(d) plt.figure(figsize=(13,5)) plt.plot(e) plt.title("Error of prediction"); plt.ylabel("Error"); plt.xlabel("Number of iteration") plt.tight_layout(); plt.show()
0.621196
0.973695
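The closing notes above point out that MLP training is slow compared with adaptive filters. As a point of comparison, here is a minimal sketch that tracks the same three-input linear system with padasip's NLMS filter; the step size `mu` is an arbitrary choice, and the data generation mirrors the `measure_x`/`measure_d` functions above.

```
import numpy as np
import matplotlib.pylab as plt
import padasip as pa

np.random.seed(52102)
N = 1000
x = np.random.normal(0, 0.5, (N, 3))
d = 0.8*x[:, 0] + 0.2*x[:, 1] - 1.*x[:, 2]      # same unknown system as in measure_d above

nlms = pa.filters.FilterNLMS(n=3, mu=0.5)        # mu is an arbitrary choice here
y, e, w = nlms.run(d, x)

plt.figure(figsize=(13,5))
plt.plot(e)
plt.title("NLMS error of prediction"); plt.ylabel("Error"); plt.xlabel("Number of iteration")
plt.tight_layout(); plt.show()
```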
# Materialien zu <i>zufall</i> Autor: Holger Böttcher - [email protected] ## Aufgaben K3 - Übungsklausur 3 <br> <i>Die Aufgabe wurde entnommen aus<br> <br> A. Müller<br> Wahrscheinlichkeitsrechnung und Statistik<br> Grundkurs<br> Stark Verlag 1997<br> S. 74 Klausur 3<br> </i><br> Bei der Einstellung für einen gehobenen Posten gibt es viele Bewerber, so dass sich<br> die Kandidaten einigen Tests unterziehen müssen. 1. Zuerst wird überprüft, ob die Bewerber gesundheitsmäßig geeignet sind (Ereig-<br> nis $G$). 90% der Bewerber sind geeignet. Ein Mediziner stuft 95% der Bewerber<br> richtig ein (Ereignis $R$). 3% der Bewerber sind geeignet, aber als ungeeignet <br> eingestuft. <br><br> a) Erstellen Sie eine vollständige Vierfeldertafel und überprüfen Sie, ob der Me-<br> $\quad$diziner ein besonders gutes Auge für ungeeignete Bewerber besitzt.<br> <br> b) Mit welcher Wahrscheinlichkeit ist ein beliebig herausgegriffener Bewerber<br> <br> $\quad\quad$(1) geeignet und richtig eingestuft,<br> $\quad\quad$(2) weder geeignet noch richtig eingestuft,<br> $\quad\quad$(3) enweder geeignet oder richtig eingestuft?<br> <br> 2. Die Reaktionsfähigkeit wird so überprüft, dass ein Bewerber beim Eintreten ei-<br> nes optischen Signals auf einem Monitor innerhalb einer bestimmten Zeitspan-<br> ne einen Knopf drücken muss ($Treffer$). Wegen der kurz bemessenen Zeitspan-<br> ne gelingt dies einem Kandidaten nur mit einer Wahrscheinlichkeit von 20% <br><br> a) Mit welcher Wahrscheinlichkeit hat er<br> <br> $\quad\quad$(1) bei 10 Versuchen 3 Treffer (Ereignis $E_1$),<br> $\quad\quad$(2) beim zehnten Versuch den dritten Treffer (Ereignis $E_2$)?<br> <br> b) Begründen Sie, dass die Ereignisse $E_1$ und $E_2$ stochastisch abhängig sind.<br> <br> c) Wie viele Versuche muss er mindestens ausführen, um mit einer Wahrschein-<br> $\;\;\,$ lichkeit von 95% mindestens einmal zu treffen?<br> <br> 3. Beim Wissenstest werden in Form eines Multiple-Choice-Tests 20 Fragen ge-<br> stellt, wobei zu jeder Frage vier Antwortmöglichkeiten angeboten werden. Es ist <br> nur bekannt, dass von den vier Antworten mindestens eine rrichtig und minde-<br> stens eine falsch ist. <br><br> a) Mit welcher Wahrscheinlichkeit erhält man bei einer einzelnen Frage die rich-<br> $\quad$tige Antwort durch reines Raten?<br> <br> b) Ein Kandidat hat sich überhaupt nicht vorbereitet und muss raten. Mit wel-<br> $\quad$cher Wahrscheinlichkeit gibt er<br> <br> $\quad\quad$(1) keine richtige Antwort,<br> $\quad\quad$(2) genau fünf richtige Antworten,<br> $\quad\quad$(3) mindestens eine falsche Antwort? <br><br> ``` %run zufall/start ``` ### Zu 1. 
### a) <br><br> Die durch Ergänzung der Angaben (es sind alles Prozentwerte) erhaltene Vier-<br> feldertafel ist ( nichtG steht für $\overline{G}$, nichtR für $\overline{R}$) ``` t = VT([87,3,8,2], ['R', 'nichtR', 'G', 'nichtG']) t.ausg ``` Die entsprechenden Wahrscheinlichkeiten sind ``` t.wahrsch ``` Ein gutes Auge für ungeeignete Bewerber bedeutet - die Ereignisse $\overline{G}$ und $R$ sind stochastisch abhängig<br> - der Anteil der richtig eingestuften unter den ungeeigneten Bewerbern ist größer <br> $\;\;$als der Anteil der richtig eingestuften unter den geeigneten (das sind bedingte <br> $\;\;$Wahrscheinlichkeiten) <div style='font-family:roman; font-size:120%'> $P(\overline{G} \cap R) = 0.08$, $\; P(\overline{G} \cdot P(R) = 0.2\cdot 0.95 = 0.095 \quad \rightarrow$ stochastich abhängig<br> $P_{\overline{G}} = \dfrac{P(\overline{G} \cap R)}{P(\overline{G})} = \dfrac{0.08}{0.1} = 0.8 \quad\rightarrow$ der Prüfer hat ein besseres Auge für geeig-<br> nete Bewerber </div> ### b) <div style='font-family:roman; font-size:120%'> $P_1 = 0.87$<br> $P_2 = 0.02$<br> $P_3 = P(G \cap \overline{R})+P(\overline{G} \cap R) = 0.03+0.08 = 0.11$ ### Zu 2. ### a) ``` v1 = BK(10, 0.2) v1.P(3, d=4) # Ergebnis zu (1) v2 = BK(9, 0.2) # bei den ersten 9 Versuchen müssen 2 Treffer sein, dann # noch einer v2.P(2) * 0.2 # Ergebnis zu (2) ``` ### b) Da $E_2 \subset E_1$ gilt, ist $P(E_1 \cap E_2) = P(E_2)$, also $P(E_1 \cap E_2) \ne P(E_2) \cdot P(E_1)$, da $P(E_1) \ne 1)$ ### c) Wegen $P(\text{mindestens 1 Treffer})= 1- P(\text{kein Treffer})$ in $n$ Versuchen muss die fol-<br> gende Ungleichung gelöst werden (per Hand erfolgt das über das Logarithmieren der <br> Ungleichung) ``` 1-0.8^n > 0.95 löse(1-0.8^n > 0.95, set=ja) ``` ### Zu 3 ### a) Erhalten der möglichen Antwortkombinatioen für eine Frage - entweder durch <br> manuelle Auflistung oder wie folgt ($R$ - richtig, $F$ - falsch) ``` R, F = symbols('R F') A = kombinationen([R, F], 4, ja, ja, l=ja) # das sind alle möglichen A # Selektion der Kombinationen, die die Bedingungen erfüllen B = [x for x in A if (anzahl(R)(x) >= 1 and anzahl(F)(x) >= 1)] B ``` Die 14 Kombinationen sind alle gleich wahrscheinlich; durch Auszählen kann ermittelt <br> werden, dass die $i$. Antwort in 7 der 14 Möglichkeiten $R$ ist, sonst $F \quad (i=1,2,3,4)$: ``` anzahl(B) C = [x for x in B if x[1] == R] # (am Beispiel der 1. Antwort) D = [x for x in B if x[1] == F] anzahl(C), anzahl(D) ``` Mit Wahrscheinlichkeit $\dfrac{1}{4}$ wählt der Bewerber eine der 4 Antworten aus, mit Wahr-<br> scheinlichkeit $\dfrac{1}{2}$ ist in der (zufälligen) Antwortkombination diese Antwort richtig, <br> die geforderte Wahrscheinlichkeit ist<br><br> $\dfrac{1}{4} \cdot \dfrac{1}{2} = \dfrac{1}{8}$ ### b) ``` vv = BK(20, 1/8) vv.P(0, d=4) # Ergebnis zu (1) vv.P(5, d=4) # Ergebnis zu (2) vv.P( X <= 19, d=4) # Ergebnis zu (3) ``` Die 1 ergibt sich im Rahmen der eingestellten Genauigkeit: ``` vv.P( X <= 19) ```
github_jupyter
%run zufall/start t = VT([87,3,8,2], ['R', 'nichtR', 'G', 'nichtG']) t.ausg t.wahrsch v1 = BK(10, 0.2) v1.P(3, d=4) # Ergebnis zu (1) v2 = BK(9, 0.2) # bei den ersten 9 Versuchen müssen 2 Treffer sein, dann # noch einer v2.P(2) * 0.2 # Ergebnis zu (2) 1-0.8^n > 0.95 löse(1-0.8^n > 0.95, set=ja) R, F = symbols('R F') A = kombinationen([R, F], 4, ja, ja, l=ja) # das sind alle möglichen A # Selektion der Kombinationen, die die Bedingungen erfüllen B = [x for x in A if (anzahl(R)(x) >= 1 and anzahl(F)(x) >= 1)] B anzahl(B) C = [x for x in B if x[1] == R] # (am Beispiel der 1. Antwort) D = [x for x in B if x[1] == F] anzahl(C), anzahl(D) vv = BK(20, 1/8) vv.P(0, d=4) # Ergebnis zu (1) vv.P(5, d=4) # Ergebnis zu (2) vv.P( X <= 19, d=4) # Ergebnis zu (3) vv.P( X <= 19)
0.161949
0.756335
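The solutions above rely on the course's own `zufall` helpers (`BK`, `VT`, `löse`); if those are not available, the same binomial quantities can be cross-checked with `scipy.stats`, as sketched below.

```
from math import ceil, log

from scipy.stats import binom

# 2a) three hits in ten tries with p = 0.2, and the third hit exactly on the tenth try
print(round(binom.pmf(3, 10, 0.2), 4))
print(round(binom.pmf(2, 9, 0.2) * 0.2, 4))

# 2c) smallest n with P(at least one hit) = 1 - 0.8**n > 0.95
print(ceil(log(0.05) / log(0.8)))

# 3b) twenty questions with success probability 1/8 each
p = 1/8
print(round(binom.pmf(0, 20, p), 4))     # no correct answer
print(round(binom.pmf(5, 20, p), 4))     # exactly five correct answers
print(round(binom.cdf(19, 20, p), 4))    # at least one wrong answer
```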
# DNSC 6212 Data Management Final Project -- Group 17 # Part 1 - Selection (30 points) Identify and describe your dataset, its source, and what appeals to you about it. Acquire the data and perform an initial exploration to determine which themes you wish to explore. Describe the questions you want to be able to answer with the data, any concerns you have about the data, and any challenges you expect to have to overcome. ## Dataset and questions We wanted to analyze the distribution of emergency incidents happened in modern cities and how fire departments performed. We chose a dataset detailing emergency incidents in New York City because New York City is the most famous large mordern city and the data was easily accessible. We found the dataset, [Fire Incident Dispatch Data](https://data.cityofnewyork.us/Public-Safety/Fire-Incident-Dispatch-Data/8m42-w767), through the New York City Open Data website. Since the data are from 2012 to 2018, with about 2.75 million records, we only filter the data in year 2016 to analyze. The dataset provides information on the date of the emergencies, the location of the emergencies, the severity of the emergencies, and the response time of the emergencies, among other variables. For example, we can identify the location of the emergencies through the borough or neighborhood of New York City, along with drilling down to the exact zip code and even address of the emergencies. We can determine the severity of the emergencies through the Highest Alarm Level column, which provides a scale of 1-7 alarms where the severity of the increases as the number of alarms increases. Additionally, we also have information on response time to the emergencies from the local fire department. This provides us an opportunity to identify and analyze the how certain factors might affect the response time. The richness of this dataset is appealing because it provides several potential avenues for analysis. For example, we can identify geographical areas that might have a higher propensity of emergency incidents. Furthermore, we can also identify geographical areas that have a higher propensity for severe emergencies because these are the types of emergencies that can cause severe physical and emotional consequences. Finally, we may also want to identify areas, such as zip codes, that have a higher propensity to show long response times because coupling the likelihood of a severe emergencies with a long response time can lead to tragic events. By identifying this information to the Fire Department of New York City, we can help prevent these tragic events from occurring in the future. In addition to researching these ideas, we also want to analyze how certain weather conditions might affect the likelihood and severity of an emergency, especially fire. So, we also plan to include weather data from Kaggle, [Historical Hourly Weather Data 2012-2017](https://www.kaggle.com/selfishgene/historical-hourly-weather-data#weather_description.csv), to provide an even richer analysis of scenarios that we might expect to cause tragic events. For example, we expect warm, dry, and windy weathers to provide the most likely conditions for a severe fire. By combining this weather information with the incident data, we can make a thorough recommendation to the New York City Fire Department to help prevent a tragic emergencies that can cause the loss of life, property, and emotional well-being. 
## Obtain the `fire_incidents` data ``` !wget -O fire_incidents.csv https://s3.amazonaws.com/2018-istm6212-group17/2016_Fire_Incident_Dispatch_Data.csv ``` ## Examine the data ***Number of records*** ``` !wc -l fire_incidents.csv !csvstat --count fire_incidents.csv ``` Both show that there are total 585522 records. ***Variable names*** ``` !csvcut -n fire_incidents.csv ``` ***Basic statistics of each variable*** ``` !head -n 10000 fire_incidents.csv | csvstat --snifflimit 0 ``` Based on our initial research ideas and basic exploratory analysis, we have identified a few key variables that will play an important role in our analysis. These variables include the Alarm Box Borough, Zip Code, Highest Alarm Level, Incident Classification Group, and various response time metrics. Since the dataset came from a valid source and also provided a data dictionary, we identified that the Alarm Box Borough represents where the neighorbood where the incident alarm rang, Zip Code represents the Zip Code where the incident occurred, Highest Alarm Level provides a proxy for the severity of the incident, and Incident Classification Group provides a categorical filing of the incident, such as Medical Emergencies, Medical First Aid, and Fires. The various response time metrics include dispatch time, travel time, and full response time for the New York City Fire Department. These response time metrics provide a quality option for representing our Facts within the relational database design. ## Schema ***The schema of our original dataset*** ``` from IPython.display import Image Image(url = "https://s3.amazonaws.com/2018-istm6212-group17/original_dataset.png", width = 320, height = 500) ``` ***Based on the dataset, we will build a star schema like this*** ``` from IPython.display import Image Image(url = "https://s3.amazonaws.com/2018-istm6212-group17/star_schema_original.png") ``` We considered multiple options for the fact table. For example, we had the option of choosing either response time metrics or other options, such as Engines or Ladders assigned by incident. If we needed to include these facts, we could have implemented a multiple fact table design. These additional facts may provide a proxy for the severity of an incident, however, the data also provides the Highest Alarm Level variable that can be used to measure the severity of the incident. Additionally, we are more interested in understanding the factors that might affect response times, so we decided to not include the additional facts and multiple fact table design. # Part 2 - Wrangling (35 points) Based on what you found above, wrangle the data into a format suitable for analysis. This may involve cleaning, filtering, merging, and modeling steps, any and all of which are valid for this project. Describe your process as you proceed, and document any scripts, databases, or other models you develop. Be specific about any key decisions to modify or remove data, how you overcame any challenges, and all assumptions you make about the meaning of variables and their values. Verify that your wrangling steps have succeeded (for example, if you loaded the data into a dimensional model, ensure that the fact table contains the right number of records). 
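The Part 2 brief above asks for verification that the wrangling steps succeeded. One cheap check, once the load in the cells below has run, is to compare the row count of the source CSV with the row count of the fact table; the database, user, and table names here follow those later cells and should be treated as placeholders, and any rows dropped intentionally during cleaning will show up as a small, explainable difference.

```
import csv

import psycopg2

with open('fire_incidents.csv', newline='') as fh:
    csv_rows = sum(1 for _ in csv.reader(fh)) - 1          # subtract the header row

conn = psycopg2.connect(dbname='proj4_group17', user='student')
with conn, conn.cursor() as cur:
    cur.execute('SELECT COUNT(*) FROM rspns_time_facts;')
    table_rows = cur.fetchone()[0]
conn.close()

print(f'CSV rows: {csv_rows}, fact table rows: {table_rows}, difference: {csv_rows - table_rows}')
```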
## Create and connect to a new database ``` %load_ext sql !dropdb -U student proj4_group17 !createdb -U student proj4_group17 %sql postgresql://student@/proj4_group17 ``` ## Create a temporary table `incidents` ***Create a temporary table `incidents`*** ``` %%sql DROP TABLE IF EXISTS incidents; CREATE TABLE incidents ( id NUMERIC NOT NULL, incident_datetime TIMESTAMP NOT NULL, alarm_box_borough VARCHAR(30) NOT NULL, alarm_box_number INTEGER NOT NULL, alarm_box_location VARCHAR(500) NOT NULL, incident_borough VARCHAR(30) NOT NULL, zipcode INTEGER, police_precinct INTEGER, city_council_district INTEGER, community_district INTEGER, community_school_district INTEGER, congressional_district INTEGER, alarm_source_description_tx VARCHAR(30) NOT NULL, alarm_level_index_description VARCHAR(100) NOT NULL, highest_alarm_level VARCHAR(30) NOT NULL, incident_classification VARCHAR(100) NOT NULL, incident_classification_group VARCHAR(30) NOT NULL, dispatch_rspns_s_qy INTEGER, first_assignment_datetime TIMESTAMP, first_activation_datetime TIMESTAMP, first_on_scene_datetime TIMESTAMP, incident_close_datetime TIMESTAMP, valid_dispatch_rspns_time_indc BOOLEAN, valid_incident_rspns_time_indc BOOLEAN, incident_rspns_s_qy INTEGER, incident_travel_s_qy INTEGER, engines_assigned_quantity INTEGER, ladders_assigned_quantity INTEGER, other_units_assigned_quantity INTEGER ); ``` ***Load the data using `COPY` command*** ``` !cp fire_incidents.csv /tmp/fire_incidents.csv %%sql COPY incidents FROM '/tmp/fire_incidents.csv' CSV HEADER; ``` ***Check the number of records in this table*** ``` %%sql SELECT COUNT(*) FROM incidents; ``` ***Take a look at the data*** ``` %%sql SELECT * FROM incidents LIMIT 5; ``` ## Wrangle the data `dispatch_response_seconds_qy` = `first_assignment_datetime` - `incident_datetime` <br> `incident_travel_tm_seconds_qy` = `first_on_scene_datetime` - `first_assignment_datetime` <br> `incident_response_seconds_qy` = `first_on_scene_datetime` - `incident_datetime` = `dispatch_response_seconds_qy` + `incident_travel_s_qy` Based on the provided data, we identified multiple response time metrics, quantified in seconds, as potential facts for our Fact table. These response time metrics have different formulations that can be identified through other variables in the dataset. For example, the dispatch response time is a function of the first assignment time and the incident time. Then, the travel time is a function of first responder on scene time less the first assignment time. <br> From the basic statistics of each columns, we notice that there are 0s in those three time variables, which are irrational. After checking the dataset, we find the reason is that there are null values in `first_assignment_datetime` and `first_on_scene_datetime`. Thus, these 0s should have been NULLs as well, which is a data quality issue. 
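Since the three stored duration columns are supposed to follow the formulas above, a quick probe against the freshly loaded `incidents` staging table is to recompute them from the raw timestamps and count the rows that disagree; the zero-instead-of-NULL rows discussed above surface here as mismatches. This sketch runs the check through SQLAlchemy/pandas against the same database.

```
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('postgresql://student@/proj4_group17')

check_sql = """
SELECT
    COUNT(*) FILTER (WHERE dispatch_rspns_s_qy IS DISTINCT FROM
        CAST(EXTRACT(EPOCH FROM first_assignment_datetime - incident_datetime) AS INTEGER))
        AS dispatch_mismatch,
    COUNT(*) FILTER (WHERE incident_travel_s_qy IS DISTINCT FROM
        CAST(EXTRACT(EPOCH FROM first_on_scene_datetime - first_assignment_datetime) AS INTEGER))
        AS travel_mismatch,
    COUNT(*) FILTER (WHERE incident_rspns_s_qy IS DISTINCT FROM
        CAST(EXTRACT(EPOCH FROM first_on_scene_datetime - incident_datetime) AS INTEGER))
        AS response_mismatch
FROM incidents;
"""

# Rows where one side is NULL and the other is not count as mismatches, which is exactly
# how the 0-vs-NULL quality issue shows up; off-by-one counts can also appear if the source
# rounded the seconds differently than CAST(... AS INTEGER).
print(pd.read_sql(check_sql, engine))
```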
***Identify the number of records with 0 in `dispatch_reponse_seconds_qy`, `incident_travel_tm_seconds_qy` or `incident_response_seconds_qy`*** ``` %%sql SELECT COUNT(*) FROM incidents WHERE (dispatch_rspns_s_qy = 0 OR incident_rspns_s_qy = 0 OR incident_travel_s_qy = 0); ``` ***Identify the number of records with null values in `first_assignment_datetime` or `first_on_scene_datetime`*** ``` %%sql SELECT COUNT(*) FROM incidents WHERE first_assignment_datetime IS NULL OR first_on_scene_datetime IS NULL; ``` ***Note that there are 3 records with 0s in time variables but no null value in datetime variable*** ``` %%sql SELECT id, incident_datetime, first_assignment_datetime, first_activation_datetime, first_on_scene_datetime, dispatch_rspns_s_qy, incident_rspns_s_qy, incident_travel_s_qy FROM incidents WHERE (dispatch_rspns_s_qy = 0 OR incident_rspns_s_qy = 0 OR incident_travel_s_qy = 0) AND first_assignment_datetime IS NOT NULL AND first_on_scene_datetime IS NOT NULL LIMIT 10; ``` Notice that for the first and second records above, the `first_assignment_datetime` and `first_assignment_datetime` are the same. And for the last one, the `first_assignment_datetime` is later than `first_assignment_datetime`, which is irrational as well. We consider those three records as wrong. Since there are only three wrong records, we decide to drop them. ***Drop those three wrong records*** ``` %%sql DELETE FROM incidents WHERE id IN (1604708820140290, 1610936140120350, 161682184012); ``` ***Turn `0`s in the columns about time (dispatch, response & travel) to `NULL`s*** ``` %%sql UPDATE incidents SET dispatch_rspns_s_qy = NULL WHERE dispatch_rspns_s_qy = 0; UPDATE incidents SET incident_rspns_s_qy = NULL WHERE incident_rspns_s_qy = 0; UPDATE incidents SET incident_travel_s_qy = NULL WHERE incident_travel_s_qy = 0; ``` ***Take a look at the wrangled data*** ``` %%sql SELECT * FROM incidents ORDER BY incident_datetime LIMIT 5; ``` ## Create the fact and dimension tables ### Create the `rspns_time_facts` table ***Create the `rspns_time_facts` table*** ``` %%sql DROP TABLE IF EXISTS rspns_time_facts; CREATE TABLE rspns_time_facts AS (SELECT * FROM incidents ORDER BY incident_datetime); ``` ***Drop the useless columns*** ``` %%sql ALTER TABLE rspns_time_facts DROP COLUMN engines_assigned_quantity, DROP COLUMN ladders_assigned_quantity, DROP COLUMN other_units_assigned_quantity; ``` ***Take a look at the `rspns_time_facts` table*** ``` %%sql SELECT * FROM rspns_time_facts LIMIT 5; ``` ### Drop the temporary table ``` %%sql DROP TABLE IF EXISTS incidents; ``` ### Create the `alarm_box` dimension table ***Create the `alarm_box` dimension table*** ``` %%sql DROP TABLE IF EXISTS alarm_box; CREATE TABLE alarm_box ( key SERIAL PRIMARY KEY, borough VARCHAR(30) NOT NULL, number INTEGER NOT NULL, location VARCHAR(500) NOT NULL, zipcode INTEGER, police_precinct INTEGER, city_council_district INTEGER, community_district INTEGER, community_school_district INTEGER, congressional_district INTEGER ); ``` ***Populate the dimension table with unique values*** ``` %%sql INSERT INTO alarm_box (borough, number, location, zipcode, police_precinct, city_council_district, community_district, community_school_district, congressional_district) SELECT DISTINCT alarm_box_borough AS borough, alarm_box_number AS number, alarm_box_location AS location, zipcode, police_precinct, city_council_district, community_district, community_school_district, congressional_district FROM rspns_time_facts ORDER BY zipcode, number; ``` ***Take a look at the 
`alarm_box` dimension table*** ``` %%sql SELECT * FROM alarm_box LIMIT 5; %%sql SELECT COUNT(*) FROM alarm_box; ``` ***Add a foreign key column to the fact table that references `alarm_box` dimension table*** ``` %%sql ALTER TABLE rspns_time_facts ADD COLUMN alarm_box_key INTEGER, ADD CONSTRAINT fk_alarm_box_key FOREIGN KEY (alarm_box_key) REFERENCES alarm_box (key); ``` ***Create an index on all columns in `alarm_box` to improve the performance of queries*** ``` %%sql DROP INDEX IF EXISTS idx_alarm_box; CREATE INDEX idx_alarm_box ON alarm_box (borough, number, location, zipcode, police_precinct, city_council_district, community_district, community_school_district, congressional_district); ``` ***Populate `alarm_box_key` with correct values*** ``` %%sql UPDATE rspns_time_facts SET alarm_box_key = alarm_box.key FROM alarm_box WHERE rspns_time_facts.alarm_box_borough = alarm_box.borough AND rspns_time_facts.alarm_box_number = alarm_box.number AND rspns_time_facts.alarm_box_location = alarm_box.location AND ((rspns_time_facts.zipcode IS NULL AND alarm_box.zipcode IS NULL) OR rspns_time_facts.zipcode = alarm_box.zipcode) AND ((rspns_time_facts.police_precinct IS NULL AND alarm_box.police_precinct IS NULL) OR rspns_time_facts.police_precinct = alarm_box.police_precinct) AND ((rspns_time_facts.city_council_district IS NULL AND alarm_box.city_council_district IS NULL) OR rspns_time_facts.city_council_district = alarm_box.city_council_district) AND ((rspns_time_facts.community_district IS NULL AND alarm_box.community_district IS NULL) OR rspns_time_facts.community_district = alarm_box.community_district) AND ((rspns_time_facts.community_school_district IS NULL AND alarm_box.community_school_district IS NULL) OR rspns_time_facts.community_school_district = alarm_box.community_school_district) AND ((rspns_time_facts.congressional_district IS NULL AND alarm_box.congressional_district IS NULL) OR rspns_time_facts.congressional_district = alarm_box.congressional_district); ``` ***Check the foreign key column*** ``` %%sql SELECT id, alarm_box_key FROM rspns_time_facts LIMIT 5; %%sql SELECT COUNT(*) AS alarm_box_key_not_null_count FROM rspns_time_facts WHERE alarm_box_key IS NOT NULL; %%sql SELECT COUNT(*) AS alarm_box_key_null_count FROM rspns_time_facts WHERE alarm_box_key IS NULL; ``` ### Create the `alarm` dimension table ***Create the `alarm` dimension table*** ``` %%sql DROP TABLE IF EXISTS alarm; CREATE TABLE alarm ( key SERIAL PRIMARY KEY, source VARCHAR(30) NOT NULL, level_index VARCHAR(100) NOT NULL, highest_level VARCHAR(30) NOT NULL ); ``` ***Populate the dimension table with unique values*** ``` %%sql INSERT INTO alarm (source, level_index, highest_level) SELECT DISTINCT alarm_source_description_tx AS source, alarm_level_index_description AS level_index, highest_alarm_level AS highest_level FROM rspns_time_facts ORDER BY source, level_index, highest_level; ``` ***Take a look at the `alarm` dimension table*** ``` %%sql SELECT * FROM alarm LIMIT 5; %%sql SELECT COUNT(*) FROM alarm; ``` ***Add a foreign key column to the fact table that references `alarm` dimension table*** ``` %%sql ALTER TABLE rspns_time_facts ADD COLUMN alarm_key INTEGER, ADD CONSTRAINT fk_alarm_key FOREIGN KEY (alarm_key) REFERENCES alarm (key); ``` ***Create an index on all columns in `alarm` to improve the performance of queries*** ``` %%sql DROP INDEX IF EXISTS idx_alarm; CREATE INDEX idx_alarm ON alarm (source, level_index, highest_level); ``` ***Populate `alarm_key` with correct values*** ``` %%sql UPDATE 
rspns_time_facts SET alarm_key = alarm.key FROM alarm WHERE rspns_time_facts.alarm_source_description_tx = alarm.source AND rspns_time_facts.alarm_level_index_description = alarm.level_index AND rspns_time_facts.highest_alarm_level = alarm.highest_level; ``` ***Check the foreign key column*** ``` %%sql SELECT id, alarm_key FROM rspns_time_facts LIMIT 5; %%sql SELECT COUNT(*) AS alarm_key_not_null_count FROM rspns_time_facts WHERE alarm_key IS NOT NULL; %%sql SELECT COUNT(*) AS alarm_key_null_count FROM rspns_time_facts WHERE alarm_key IS NULL; ``` ### Create the `incident_class` dimension table ***Create the `incident_class` dimension table*** ``` %%sql DROP TABLE IF EXISTS incident_class; CREATE TABLE incident_class ( key SERIAL PRIMARY KEY, class_des VARCHAR(100) NOT NULL, group_des VARCHAR(30) NOT NULL ); ``` ***Populate the dimension table with unique values*** ``` %%sql INSERT INTO incident_class (class_des, group_des) SELECT DISTINCT incident_classification AS class_des, incident_classification_group AS group_des FROM rspns_time_facts ORDER BY group_des, class_des; ``` ***Take a look at the `incident_class` dimension table*** ``` %%sql SELECT * FROM incident_class LIMIT 5; %%sql SELECT COUNT(*) FROM incident_class; ``` ***Add a foreign key column to the fact table that references `incident_class` dimension table*** ``` %%sql ALTER TABLE rspns_time_facts ADD COLUMN incident_class_key INTEGER, ADD CONSTRAINT fk_incident_class_key FOREIGN KEY (incident_class_key) REFERENCES incident_class (key); ``` ***Create an index on all columns in `incident_class` to improve the performance of queries*** ``` %%sql DROP INDEX IF EXISTS idx_incident_class; CREATE INDEX idx_incident_class ON incident_class (class_des, group_des); ``` ***Populate `incident_class_key` with correct values*** ``` %%sql UPDATE rspns_time_facts SET incident_class_key = incident_class.key FROM incident_class WHERE rspns_time_facts.incident_classification = incident_class.class_des AND rspns_time_facts.incident_classification_group = incident_class.group_des; ``` ***Check the foreign key column*** ``` %%sql SELECT id, incident_class_key FROM rspns_time_facts LIMIT 5; %%sql SELECT COUNT(*) AS incident_class_key_not_null_count FROM rspns_time_facts WHERE incident_class_key IS NOT NULL; %%sql SELECT COUNT(*) AS incident_class_key_null_count FROM rspns_time_facts WHERE incident_class_key IS NULL; ``` ### Create the `valid_rspns` dimension table ***Create the `valid_rspns` dimension table*** ``` %%sql DROP TABLE IF EXISTS valid_rspns; CREATE TABLE valid_rspns ( key SERIAL PRIMARY KEY, valid_dispatch_rspns BOOLEAN, valid_incident_rspns BOOLEAN ); ``` ***Populate the dimension table with unique values*** ``` %%sql INSERT INTO valid_rspns (valid_dispatch_rspns, valid_incident_rspns) SELECT DISTINCT valid_dispatch_rspns_time_indc AS valid_dispatch_rspns, valid_incident_rspns_time_indc AS valid_incident_rspns FROM rspns_time_facts ORDER BY valid_dispatch_rspns, valid_incident_rspns; ``` ***Take a look at the `valid_rspns` dimension table*** ``` %%sql SELECT * FROM valid_rspns LIMIT 10; %%sql SELECT COUNT(*) FROM valid_rspns; ``` ***Add a foreign key column to the fact table that references `valid_rspns` dimension table*** ``` %%sql ALTER TABLE rspns_time_facts ADD COLUMN valid_rspns_key INTEGER, ADD CONSTRAINT fk_valid_rspns_key FOREIGN KEY (valid_rspns_key) REFERENCES valid_rspns (key); ``` ***Create an index on all columns in `valid_rspns` to improve the performance of queries*** ``` %%sql DROP INDEX IF EXISTS idx_valid_rspns;
CREATE INDEX idx_valid_rspns ON valid_rspns (valid_dispatch_rspns, valid_incident_rspns); ``` ***Populate `valid_rspns_key` with correct values*** ``` %%sql UPDATE rspns_time_facts SET valid_rspns_key = valid_rspns.key FROM valid_rspns WHERE rspns_time_facts.valid_dispatch_rspns_time_indc = valid_rspns.valid_dispatch_rspns AND rspns_time_facts.valid_incident_rspns_time_indc = valid_rspns.valid_incident_rspns; ``` ***Check the foreign key column*** ``` %%sql SELECT id, valid_rspns_key FROM rspns_time_facts LIMIT 5; %%sql SELECT COUNT(*) AS valid_rspns_key_not_null_count FROM rspns_time_facts WHERE valid_rspns_key IS NOT NULL; %%sql SELECT COUNT(*) AS valid_rspns_key_null_count FROM rspns_time_facts WHERE valid_rspns_key IS NULL; ``` ### Create the `hour` dimension table ***Create the `hour` dimension table*** ``` %%sql DROP TABLE IF EXISTS hour; CREATE TABLE hour ( key SERIAL PRIMARY KEY, hour CHAR(19), day CHAR(10), year INTEGER, quarter_of_year INTEGER, month_of_year_str VARCHAR(12), month_of_year INTEGER, day_of_month INTEGER, day_of_week_str CHAR(9), day_of_week INTEGER, is_weekend BOOLEAN, hour_of_day INTEGER ); ``` ***Check the values in `incident_datetime` and `incident_close_datetime`*** ``` %%sql SELECT COUNT(*) FROM (SELECT DISTINCT TO_CHAR(incident_datetime, 'YYYY-MM-DD HH24:00:00') AS hour, TO_CHAR(incident_datetime, 'YYYY-MM-DD') AS day, CAST(TO_CHAR(incident_datetime, 'YYYY') AS INTEGER) AS year, CAST(TO_CHAR(incident_datetime, 'Q') AS INTEGER) AS quarter_of_year, TO_CHAR(incident_datetime, 'Month') AS month_of_year_str, CAST(TO_CHAR(incident_datetime, 'MM') AS INTEGER) AS month_of_year, CAST(TO_CHAR(incident_datetime, 'DD') AS INTEGER) AS day_of_month, TO_CHAR(incident_datetime, 'Day') AS day_of_week_str, CAST(TO_CHAR(incident_datetime, 'D') AS INTEGER) AS day_of_week, CASE WHEN CAST(TO_CHAR(incident_datetime, 'D') AS INTEGER) IN (1, 7) THEN True ELSE False END AS is_weekend, CAST(TO_CHAR(incident_datetime, 'HH24') AS INTEGER) AS hour_of_day FROM rspns_time_facts) AS T; %%sql SELECT COUNT(*) FROM (SELECT DISTINCT TO_CHAR(incident_close_datetime, 'YYYY-MM-DD HH24:00:00') AS hour, TO_CHAR(incident_close_datetime, 'YYYY-MM-DD') AS day, CAST(TO_CHAR(incident_close_datetime, 'YYYY') AS INTEGER) AS year, CAST(TO_CHAR(incident_close_datetime, 'Q') AS INTEGER) AS quarter_of_year, TO_CHAR(incident_close_datetime, 'Month') AS month_of_year_str, CAST(TO_CHAR(incident_close_datetime, 'MM') AS INTEGER) AS month_of_year, CAST(TO_CHAR(incident_close_datetime, 'DD') AS INTEGER) AS day_of_month, TO_CHAR(incident_close_datetime, 'Day') AS day_of_week_str, CAST(TO_CHAR(incident_close_datetime, 'D') AS INTEGER) AS day_of_week, CASE WHEN CAST(TO_CHAR(incident_close_datetime, 'D') AS INTEGER) IN (1, 7) THEN True ELSE False END AS is_weekend, CAST(TO_CHAR(incident_close_datetime, 'HH24') AS INTEGER) AS hour_of_day FROM rspns_time_facts) AS T; ``` ***Populate the dimension table with unique values*** ``` %%sql INSERT INTO hour (hour, day, year, quarter_of_year, month_of_year_str, month_of_year, day_of_month, day_of_week_str, day_of_week, is_weekend, hour_of_day) SELECT DISTINCT TO_CHAR(incident_datetime, 'YYYY-MM-DD HH24:00:00') AS hour, TO_CHAR(incident_datetime, 'YYYY-MM-DD') AS day, CAST(TO_CHAR(incident_datetime, 'YYYY') AS INTEGER) AS year, CAST(TO_CHAR(incident_datetime, 'Q') AS INTEGER) AS quarter_of_year, TO_CHAR(incident_datetime, 'Month') AS month_of_year_str, CAST(TO_CHAR(incident_datetime, 'MM') AS INTEGER) AS month_of_year, CAST(TO_CHAR(incident_datetime, 'DD') AS INTEGER) AS 
day_of_month, TO_CHAR(incident_datetime, 'Day') AS day_of_week_str, CAST(TO_CHAR(incident_datetime, 'D') AS INTEGER) AS day_of_week, CASE WHEN CAST(TO_CHAR(incident_datetime, 'D') AS INTEGER) IN (1, 7) THEN True ELSE False END AS is_weekend, CAST(TO_CHAR(incident_datetime, 'HH24') AS INTEGER) AS hour_of_day FROM rspns_time_facts UNION SELECT DISTINCT TO_CHAR(incident_close_datetime, 'YYYY-MM-DD HH24:00:00') AS hour, TO_CHAR(incident_close_datetime, 'YYYY-MM-DD') AS day, CAST(TO_CHAR(incident_close_datetime, 'YYYY') AS INTEGER) AS year, CAST(TO_CHAR(incident_close_datetime, 'Q') AS INTEGER) AS quarter_of_year, TO_CHAR(incident_close_datetime, 'Month') AS month_of_year_str, CAST(TO_CHAR(incident_close_datetime, 'MM') AS INTEGER) AS month_of_year, CAST(TO_CHAR(incident_close_datetime, 'DD') AS INTEGER) AS day_of_month, TO_CHAR(incident_close_datetime, 'Day') AS day_of_week_str, CAST(TO_CHAR(incident_close_datetime, 'D') AS INTEGER) AS day_of_week, CASE WHEN CAST(TO_CHAR(incident_close_datetime, 'D') AS INTEGER) IN (1, 7) THEN True ELSE False END AS is_weekend, CAST(TO_CHAR(incident_close_datetime, 'HH24') AS INTEGER) AS hour_of_day FROM rspns_time_facts ORDER BY year, month_of_year, day_of_month, hour_of_day; ``` ***Take a look at the `hour` dimension table*** ``` %%sql SELECT * FROM hour LIMIT 5; ``` ***Add foreign key columns to the fact table that references `hour` dimension table*** ``` %%sql ALTER TABLE rspns_time_facts ADD COLUMN incident_hour_key INTEGER, ADD CONSTRAINT fk_incident_hour_key FOREIGN KEY (incident_hour_key) REFERENCES hour (key); ALTER TABLE rspns_time_facts ADD COLUMN incident_close_hour_key INTEGER, ADD CONSTRAINT fk_incident_close_hour_key FOREIGN KEY (incident_close_hour_key) REFERENCES hour (key); ``` ***Populate foreign key columns with correct values*** ``` %%sql UPDATE rspns_time_facts SET incident_hour_key = hour.key FROM hour WHERE TO_CHAR(rspns_time_facts.incident_datetime, 'YYYY-MM-DD HH24:00:00') = hour.hour; %%sql UPDATE rspns_time_facts SET incident_close_hour_key = hour.key FROM hour WHERE TO_CHAR(rspns_time_facts.incident_close_datetime, 'YYYY-MM-DD HH24:00:00') = hour.hour; ``` ***Check the foreign key column*** ``` %%sql SELECT id, incident_hour_key, incident_close_hour_key FROM rspns_time_facts LIMIT 5; %%sql SELECT (SELECT COUNT(*) FROM rspns_time_facts WHERE incident_hour_key IS NOT NULL) AS incident_hour_key_not_null_count, (SELECT COUNT(*) FROM rspns_time_facts WHERE incident_close_hour_key IS NOT NULL) AS incident_close_hour_key_not_null_count; %%sql SELECT (SELECT COUNT(*) FROM rspns_time_facts WHERE incident_hour_key IS NULL) AS incident_hour_key_null_count, (SELECT COUNT(*) FROM rspns_time_facts WHERE incident_close_hour_key IS NULL) AS incident_close_hour_key_null_count; ``` ### Drop the useless columns ``` %%sql ALTER TABLE rspns_time_facts DROP COLUMN incident_datetime, DROP COLUMN alarm_box_borough, DROP COLUMN alarm_box_number, DROP COLUMN alarm_box_location, DROP COLUMN incident_borough, DROP COLUMN zipcode, DROP COLUMN police_precinct, DROP COLUMN city_council_district, DROP COLUMN community_district, DROP COLUMN community_school_district, DROP COLUMN congressional_district, DROP COLUMN alarm_source_description_tx, DROP COLUMN alarm_level_index_description, DROP COLUMN highest_alarm_level, DROP COLUMN incident_classification, DROP COLUMN incident_classification_group, DROP COLUMN first_assignment_datetime, DROP COLUMN first_activation_datetime, DROP COLUMN first_on_scene_datetime, DROP COLUMN incident_close_datetime, DROP COLUMN 
valid_dispatch_rspns_time_indc, DROP COLUMN valid_incident_rspns_time_indc; %%sql SELECT * FROM rspns_time_facts LIMIT 10; ``` # Part 3 - Analysis (35 points) Explore and analyze your data in its wrangled form. Follow through on the themes you identified in Part 1 with queries or scripts that answer the questions you had in mind. Be clear about the answers you discover, discussing them and whether the results match your expectations. Include charts or other visuals that support your analysis. You may use Tableau, matplotlib, ggplot, or other tools we have not covered in class for visualization (and only for visualization), but be sure to export images from those tools and to include any images properly in your notebook writeup and slides. ## Show Tables in our database ``` %%sql \dt ``` ## Connect the database with Python for Visualization ``` import psycopg2 import geocoder import matplotlib.pyplot as plt import pandas import numpy as np conn = psycopg2.connect("dbname='proj4_group17' user='student'") c = conn.cursor() ``` ## What does the incident distribution by borough look like? ***Check the data we want using SQL*** ``` %%sql SELECT alarm_box.borough, count(*) FROM alarm_box, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND incident_class.key = rspns_time_facts.incident_class_key GROUP BY alarm_box.borough; ``` ***Discussion*** Brooklyn has the most incidents, and Manhattan has the second-highest number of incidents. Since a visual display can be more appealing than a regular table, we created a bar chart to display the distribution. ***Visualize the data using Python*** ``` sql = ''' SELECT alarm_box.borough, count(*) FROM alarm_box, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND incident_class.key = rspns_time_facts.incident_class_key GROUP BY alarm_box.borough; ''' c.execute(sql) rows = c.fetchall() plt.barh([i[0] for i in rows], [i[1] for i in rows]) plt.xlabel("total incidents") plt.ylabel("borough") plt.title("Total No. of Incidents for Each Borough") plt.show() ``` ## How is each type of incident distributed for each borough? ***Check the data we want using SQL*** ``` %%sql SELECT alarm_box.borough, incident_class.group_des, count(*) FROM alarm_box, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND incident_class.key = rspns_time_facts.incident_class_key GROUP BY alarm_box.borough, incident_class.group_des ORDER BY alarm_box.borough, incident_class.group_des; ``` ***Discussion*** We can see that NonMedical Emergencies and Medical Emergencies occur frequently in all boroughs. Contrary to our original expectations, the fire department responds not only to fire incidents but also to many medical and nonmedical emergencies.
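As a quick check on this observation, the share of each incident group within each borough can also be computed directly in SQL. The query below is only a sketch that reuses the same joins as above; the percentage column and its alias `pct_of_borough` are our own additions and appear nowhere else in the project.

```
%%sql
-- Sketch: share of each incident group within each borough
SELECT alarm_box.borough,
       incident_class.group_des,
       COUNT(*) AS incidents,
       ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (PARTITION BY alarm_box.borough), 2) AS pct_of_borough
FROM alarm_box, incident_class, rspns_time_facts
WHERE alarm_box.key = rspns_time_facts.alarm_box_key
  AND incident_class.key = rspns_time_facts.incident_class_key
GROUP BY alarm_box.borough, incident_class.group_des
ORDER BY alarm_box.borough, pct_of_borough DESC;
```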
***Visualize the data using Python*** ``` sql = ''' SELECT alarm_box.borough, incident_class.group_des, count(*) FROM alarm_box, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND incident_class.key = rspns_time_facts.incident_class_key GROUP BY alarm_box.borough, incident_class.group_des ORDER BY alarm_box.borough, incident_class.group_des; ''' c.execute(sql) rows = c.fetchall() subrows1 = [i for i in rows if i[1] == 'Medical Emergencies'] subrows2 = [i for i in rows if i[1] == 'Medical MFAs'] subrows3 = [i for i in rows if i[1] == 'NonMedical Emergencies'] subrows4 = [i for i in rows if i[1] == 'NonMedical MFAs'] subrows5 = [i for i in rows if i[1] == 'NonStructural Fires'] subrows6 = [i for i in rows if i[1] == 'Structural Fires'] ind = np.array([3*i for i in range(5)]) width = 0.4 fig, ax = plt.subplots() ax.bar(ind,[i[2] for i in subrows1], width, label='Medical Emergencies') ax.bar(ind + width,[i[2] for i in subrows2], width, label='Medical MFAs') ax.bar(ind + 2*width,[i[2] for i in subrows3], width, label='NonMedical Emergencies') ax.bar(ind + 3*width,[i[2] for i in subrows4], width, label='NonMedical MFAs') ax.bar(ind + 4*width,[i[2] for i in subrows5], width, label='NonStructural Fires') ax.bar(ind + 5*width,[i[2] for i in subrows6], width, label='Structural Fires') ax.set_ylabel('number of incidents') ax.set_title('Incidents by Type for Each Borough') ax.set_xticks(ind + width) ax.set_xticklabels(('Bronx', 'Brooklyn', 'Manhattan', 'Queens', 'R/S Island')) ax.legend() plt.show() ``` ## What does the fire incident distribution by borough look like? Since the dataset contains a high number of incidents that do not involve fires, we filtered out the non-fire incidents to focus more specifically on the information we wanted to analyze. ***Check the data we want using SQL*** ``` %%sql SELECT alarm_box.borough, count(*) FROM alarm_box, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND incident_class.key = rspns_time_facts.incident_class_key AND incident_class.group_des LIKE '%Fire%' GROUP BY alarm_box.borough; ``` ***Discussion*** Based on the table, we can see that Brooklyn has the most fire incidents. As before, we also show the distribution in a bar chart. ***Visualize the data using Python*** ``` sql = ''' SELECT alarm_box.borough, count(*) FROM alarm_box, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND incident_class.key = rspns_time_facts.incident_class_key AND incident_class.group_des LIKE '%Fire%' GROUP BY alarm_box.borough; ''' c.execute(sql) rows = c.fetchall() plt.barh([i[0] for i in rows], [i[1] for i in rows]) plt.xlabel("total fire incidents") plt.ylabel("borough") plt.title("Total No. of Fire Incidents for Each Borough") plt.show() ``` ## How is fire incident severity distributed for each borough?
***Check the data we want using SQL*** ``` %%sql SELECT alarm_box.borough, alarm.highest_level, count(*) FROM alarm_box, alarm, rspns_time_facts, incident_class WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND alarm.key = rspns_time_facts.alarm_key AND incident_class.key = rspns_time_facts.incident_class_key AND incident_class.group_des LIKE '%Fires%' GROUP BY alarm_box.borough, alarm.highest_level ORDER BY alarm_box.borough, alarm.highest_level; ``` ***Discussion*** First Alarm fire incidents far outnumber the others, which skews the data and makes the other bars too small if we plot everything together in one bar chart. Therefore, we visualize the First Alarm incidents separately. We can also see from the table above that fourth, fifth, and fifth-or-higher alarm fires did not occur in every borough, and their counts are very small compared to the others, so we exclude them from the bar charts. In addition, Brooklyn has the most Seventh Alarm fire incidents, the most severe fires represented in the data; the reason might simply be the large number of incidents that occur in Brooklyn. The situation in Queens is also concerning: it has only the third-most fire incidents but the second-most severe fires. ***Visualize the data using Python*** ``` sql = ''' SELECT alarm_box.borough, alarm.highest_level, count(*) FROM alarm_box, alarm, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND alarm.key = rspns_time_facts.alarm_key GROUP BY alarm_box.borough, alarm.highest_level ORDER BY alarm_box.borough, alarm.highest_level; ''' c.execute(sql) rows = c.fetchall() subrows = [i for i in rows if i[1] == 'First Alarm'] plt.barh([i[0] for i in subrows], [i[2] for i in subrows]) plt.xlabel("number of incidents") plt.ylabel("borough") plt.title("First Alarm Incident Distribution") plt.show() subrows3 = [i for i in rows if i[1] == 'Second Alarm'] subrows4 = [i for i in rows if i[1] == 'Third Alarm'] subrows8 = [i for i in rows if i[1] == 'Seventh Alarm '] ind = np.array([2*i for i in range(5)]) width = 0.4 fig, ax = plt.subplots() ax.bar(ind + width,[i[2] for i in subrows3], width, label='Second Alarm') ax.bar(ind + 2*width,[i[2] for i in subrows4], width, label='Third Alarm') ax.bar(ind + 3*width,[i[2] for i in subrows8], width, label='Seventh Alarm', color = 'red') ax.set_ylabel('number of incidents') ax.set_title('Highest Level Alarm Distribution for Each Borough') ax.set_xticks(ind + width) ax.set_xticklabels(('Bronx', 'Brooklyn', 'Manhattan', 'Queens', 'R/S Island')) ax.legend() plt.show() ``` ***Discussion*** After filtering out the small First Alarm fires, the mid-level Fifth Alarm fires, and the ambiguous All Hands Working category, we can see that Seventh Alarm fires occur much more frequently than the milder Second and Third Alarm fires. This should concern the New York City Fire Department, because Seventh Alarm fires present a much more dangerous scenario that can lead to significant hardship for those affected by these severe fires.
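The chart above is built from all incidents, so as a cross-check it is worth rolling the fire-only counts up across boroughs as well. The query below is a sketch: it reuses the existing dimension tables, and the `'All Hands Working'` label is taken from the discussion above and assumed to match the stored value exactly.

```
%%sql
-- Sketch: citywide fire incidents per highest alarm level, excluding the ambiguous category
SELECT alarm.highest_level, COUNT(*) AS fire_incidents
FROM rspns_time_facts
JOIN alarm ON alarm.key = rspns_time_facts.alarm_key
JOIN incident_class ON incident_class.key = rspns_time_facts.incident_class_key
WHERE incident_class.group_des LIKE '%Fire%'
  AND alarm.highest_level <> 'All Hands Working'
GROUP BY alarm.highest_level
ORDER BY fire_incidents DESC;
```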
## Incident by Quarter of Year for each Borough ***Check the data we want using SQL*** ``` %%sql SELECT alarm_box.borough, hour.quarter_of_year, count(*) FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key GROUP BY alarm_box.borough, hour.quarter_of_year ORDER BY hour.quarter_of_year, alarm_box.borough; ``` ***Discussion*** Overall, we can see that the Richmond/Staten Island borough shows fewer incidents than the other boroughs. For each borough, the number of incidents is not significantly affected by the quarter of the year. ***Visualize the data using Python*** ``` sql = ''' SELECT alarm_box.borough, hour.quarter_of_year, count(*) FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key GROUP BY alarm_box.borough, hour.quarter_of_year ORDER BY hour.quarter_of_year, alarm_box.borough; ''' c.execute(sql) rows = c.fetchall() subrows1 = [i for i in rows if i[1] == 1] subrows2 = [i for i in rows if i[1] == 2] subrows3 = [i for i in rows if i[1] == 3] subrows4 = [i for i in rows if i[1] == 4] ind = np.array([2*i for i in range(5)]) width = 0.4 fig, ax = plt.subplots() ax.bar(ind,[i[2] for i in subrows1], width, label='1st Quarter') ax.bar(ind + width,[i[2] for i in subrows2], width, label='2nd Quarter') ax.bar(ind + 2*width,[i[2] for i in subrows3], width, label='3rd Quarter') ax.bar(ind + 3*width,[i[2] for i in subrows4], width, label='4th Quarter') ax.set_ylabel('number of incidents') ax.set_title('Incidents by Quarter of Year for Each Borough') ax.set_xticks(ind + width) ax.set_xticklabels(('Bronx', 'Brooklyn', 'Manhattan', 'Queens', 'R/S Island')) ax.legend() plt.show() ``` ***Discussion*** Again, the visual display provides a more intuitive and easily digestible representation of the data. ## Incident distribution by month ***Check the data we want using SQL*** ``` %%sql SELECT hour.month_of_year, count(*) FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ``` ***Discussion*** When analyzing all incident types, the frequency of events does not differ much throughout the year, and the counts by month appear to follow a roughly uniform distribution. ***Visualize the data using Python*** ``` sql = ''' SELECT hour.month_of_year, count(*) FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows = c.fetchall() plt.bar([i[0] for i in rows], [i[1] for i in rows]) plt.xlabel("month") plt.ylabel("incidents") plt.title("Incidents by Month") plt.show() ``` ***Discussion*** The visualization clearly displays the approximately uniform distribution of incidents by month.
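One way to quantify how close to uniform the monthly counts are is to compare their spread with their mean. The query below is a sketch; the coefficient-of-variation column is our own summary statistic, and a value of only a few percent would support the near-uniform reading of the bar chart.

```
%%sql
-- Sketch: spread of monthly incident counts
SELECT MIN(cnt) AS min_month,
       MAX(cnt) AS max_month,
       ROUND(AVG(cnt), 0) AS avg_month,
       ROUND(100.0 * stddev(cnt) / AVG(cnt), 2) AS coef_of_variation_pct
FROM (SELECT hour.month_of_year, COUNT(*) AS cnt
      FROM rspns_time_facts
      JOIN hour ON hour.key = rspns_time_facts.incident_hour_key
      GROUP BY hour.month_of_year) AS monthly;
```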
### Fire incident distribution by month ***Check the data we want using SQL*** ``` %%sql SELECT hour.month_of_year, count(*) FROM alarm_box, hour, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND incident_class.key = rspns_time_facts.incident_class_key AND incident_class.group_des LIKE '%Fire%' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ``` ***Visualize the data using Python*** ``` sql = ''' SELECT hour.month_of_year, count(*) FROM alarm_box, hour, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND incident_class.key = rspns_time_facts.incident_class_key AND incident_class.group_des LIKE '%Fire%' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows = c.fetchall() plt.bar([i[0] for i in rows], [i[1] for i in rows]) plt.xlabel("month") plt.ylabel("fire incidents") plt.title("Fire Incidents by Month") plt.show() ``` ***Discussion*** Contrary to our original hypothesis, fires actually appear to occur less frequently in the summer months than in the winter months. We suspect that the increased frequency in the winter months is attributable to the colder temperatures forcing people to use their heaters at a higher rate. Additionally, other variables, such as weather data, may provide more information on how temperature, humidity, and wind speed might affect the likelihood of a fire. Finally, because fires appear to occur more frequently in the colder months, we want to analyze how response times behave by borough and how they might be affected by weather. ## How does the Fire Department in each borough perform? ### Average, min, max incident response time for each borough ***Check the data we want using SQL*** ``` %%sql SELECT alarm_box.borough, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2), ROUND(stddev(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) FROM alarm_box, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key GROUP BY alarm_box.borough; ``` ***Visualize the data using Python*** ``` sql = ''' SELECT alarm_box.borough, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2), ROUND(stddev(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) FROM alarm_box, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key GROUP BY alarm_box.borough; ''' c.execute(sql) rows = c.fetchall() plt.barh([i[0] for i in rows], [i[3] for i in rows]) plt.xlabel("average response time") plt.ylabel("borough") plt.title("Average Response Time by Borough") plt.show() ``` ### Average, min, max incident response time by month for Richmond/Staten Island ***Check the data we want using SQL*** ``` %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'RICHMOND / STATEN ISLAND' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ``` ### Average, min, max incident response time by month for Queens ***Check the data we want using SQL*** ``` %%sql SELECT
hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'QUEENS' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ``` ### Average, min, max incident response time by month for Manhattan ***Check the data we want using SQL*** ``` %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'MANHATTAN' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ``` ### Average, min, max incident response time by month for Brooklyn ***Check the data we want using SQL*** ``` %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'BROOKLYN' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ``` ### Average, min, max incident response time by month for Bronx ***Check the data we want using SQL*** ``` %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'BRONX' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ``` ### Visualize all of the above tables together. 
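Before plotting, note that the five per-borough tables above can also be fetched in a single grouped query. The cell below is a sketch of that equivalent alternative; the Python cell that follows keeps the separate per-borough queries so that it mirrors the tables above.

```
%%sql
-- Sketch: average response and travel time per borough per month in one query
SELECT alarm_box.borough,
       hour.month_of_year,
       ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns,
       ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel
FROM alarm_box, hour, rspns_time_facts
WHERE alarm_box.key = rspns_time_facts.alarm_box_key
  AND hour.key = rspns_time_facts.incident_hour_key
GROUP BY alarm_box.borough, hour.month_of_year
ORDER BY alarm_box.borough, hour.month_of_year;
```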
``` sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'RICHMOND / STATEN ISLAND' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows1 = c.fetchall() sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'QUEENS' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows2 = c.fetchall() sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'MANHATTAN' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows3 = c.fetchall() sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'BROOKLYN' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows4 = c.fetchall() sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'BRONX' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows5 = c.fetchall() ``` ***Average response time by borough by month*** ``` plt.plot([i[0] for i in rows1], [i[3] for i in rows1], label = "RICHMOND / STATEN ISLAND") plt.plot([i[0] for i in rows2], [i[3] for i in rows2], label = "QUEENS") plt.plot([i[0] for i in rows3], [i[3] for i in rows3], label = "MANHATTAN") plt.plot([i[0] for i in rows4], [i[3] for i in rows4], label = "BROOKLYN") plt.plot([i[0] for i in rows5], [i[3] for i in rows5], label = "BRONX") plt.xlabel("Month") plt.ylabel("average response time") plt.title("Average Response Time by Borough by Month") plt.legend(loc='best') plt.show() ``` ***Average travel time by borough by month*** ``` plt.plot([i[0] for i in rows1], [i[4] for i in rows1], label = "RICHMOND / STATEN ISLAND") plt.plot([i[0] for i in rows2], [i[4] for i in rows2], label =
"QUEENS") plt.plot([i[0] for i in rows3], [i[4] for i in rows3], label = "MANHATTAN") plt.plot([i[0] for i in rows4], [i[4] for i in rows4], label = "BROOKLYN") plt.plot([i[0] for i in rows5], [i[4] for i in rows5], label = "BRONX") plt.xlabel("Month") plt.ylabel("average travel time") plt.title("Average Travel Time by Borough by Month") plt.legend(loc='best') plt.show() ``` ***Discussion*** Through our analysis, we identified that Brooklyn has the most incidents among 5 boroughs. However, it also has the overall fastest response time and travel time. Thus, we can conclude that Fire Department in Brooklyn is doing a very good job in responding to incidents and arriving to scenes in a timely manner. Manhattan is the major borough of New York City. It has the second most incidents happened there but generally the most response time. While the major part of the response time is the travel time, and Manhattan has generally the most travel time, we conclude that the reason might be the traffic problem in Manhattan -- there are always traffic jams!!! ## Disconnect the database ``` c.close() conn.close() ``` ## Visualization with Tableau ***Heat map of severe fires by zip code in Brooklyn and Queens*** ``` from IPython.display import Image Image(url = "https://s3.amazonaws.com/2018-istm6212-group17/severe_fires_by_zip.png") ``` It seems that more severe fires happened at the junction of Queens and Brooklyn. There are five ZIPCODE areas with many severe fires. # Bonus - Augment (10 points) Sometimes the most value can be gained from one dataset when it is studied alongside data drawn from other sources. Identify and describe at least one additional data source that can complement your analysis. Pull this additional data into your chosen environment and explore at least one more theme you are able to further analyze that depends upon a combination of data from both sources. ## About the complementary data For the bonus section, we decided to add weather data, [Historical Hourly Weather Data 2012-2017](https://www.kaggle.com/selfishgene/historical-hourly-weather-data#weather_description.csv), into the database. We want to include this data in our analysis because the winter months showed a higher frequency of fires and we expect that temperature, humidity, and wind speed may be important variables for better understanding the frequency of fires. Additionally, weather may play an important role for response times, with quick response times allowing the Fire Department to mitigate the risk of severe fires. 
## Obtain the `weather` data ``` !wget -O weather.zip https://s3.amazonaws.com/2018-istm6212-group17/historical-hourly-weather-data.zip !unzip -o weather.zip -d ./weather ``` ## Filter and combine the data we want ``` !grep -E '^2016' weather/weather_description.csv | csvcut -c1,29 > description.txt !grep -E '^2016' weather/temperature.csv | csvcut -c29 > temperature.txt !grep -E '^2016' weather/humidity.csv | csvcut -c29 > humidity.txt !grep -E '^2016' weather/pressure.csv | csvcut -c29 > pressure.txt !grep -E '^2016' weather/wind_speed.csv | csvcut -c29 > wind_speed.txt !wc -l *.txt !paste -d ',' description.txt temperature.txt humidity.txt pressure.txt wind_speed.txt > 2016_nyc_weather.csv !sed -i 1i'hour,description,temperature,humidity,pressure,wind_speed' 2016_nyc_weather.csv ``` ## Examine the data ``` !wc -l 2016_nyc_weather.csv !csvstat --count 2016_nyc_weather.csv !csvcut -n 2016_nyc_weather.csv !csvstat 2016_nyc_weather.csv ``` ## Schema ***With the weather data, our final star schema is as follows*** ``` from IPython.display import Image Image(url = "https://s3.amazonaws.com/2018-istm6212-group17/star_schema_augmented.png") ``` ## Create the `weather` dimension table ***Create the `weather` dimension table*** ``` %%sql DROP TABLE IF EXISTS weather; CREATE TABLE weather ( key SERIAL PRIMARY KEY, hour VARCHAR(20) NOT NULL, description VARCHAR(50) NOT NULL, temperature FLOAT NOT NULL, humidity FLOAT NOT NULL, pressure FLOAT NOT NULL, wind_speed FLOAT NOT NULL ); ``` ***Load the data using the `COPY` command*** ``` !cp 2016_nyc_weather.csv /tmp/2016_nyc_weather.csv %%sql COPY weather (hour, description, temperature, humidity, pressure, wind_speed) FROM '/tmp/2016_nyc_weather.csv' CSV HEADER; ``` ***Change the unit of temperature from Kelvin to Celsius*** ``` %%sql UPDATE weather SET temperature = ROUND((temperature - 273.15)::numeric, 2); ``` ***Take a look at the `weather` dimension table*** ``` %%sql SELECT * FROM weather ORDER BY key LIMIT 5; %%sql SELECT COUNT(*) FROM weather; ``` ***Add a foreign key column to the fact table that references `weather` dimension table*** ``` %%sql ALTER TABLE rspns_time_facts ADD COLUMN weather_key INTEGER, ADD CONSTRAINT fk_weather_key FOREIGN KEY (weather_key) REFERENCES weather (key); ``` ***Populate `weather_key` with correct values*** ``` %%sql UPDATE rspns_time_facts SET weather_key = weather.key FROM weather, hour WHERE rspns_time_facts.incident_hour_key = hour.key AND hour.hour = weather.hour; ``` ***Check the foreign key column*** ``` %%sql SELECT id, weather_key FROM rspns_time_facts LIMIT 5; %%sql SELECT COUNT(*) AS weather_key_not_null_count FROM rspns_time_facts WHERE weather_key IS NOT NULL; %%sql SELECT COUNT(*) AS weather_key_null_count FROM rspns_time_facts WHERE weather_key IS NULL; ``` ***Take a look at the final `rspns_time_facts` table*** ``` %%sql SELECT * FROM rspns_time_facts LIMIT 10; ``` ## Analysis with the weather data ### Show Tables in our database ``` %%sql \dt ``` ### Connect the database with Python for Visualization ``` import psycopg2 import geocoder import matplotlib.pyplot as plt import pandas import numpy as np conn = psycopg2.connect("dbname='proj4_group17' user='student'") c = conn.cursor() ``` ### Distribution of temperature values of fire incidents ***Check the data we want using SQL*** ``` %%sql SELECT rspns_time_facts.id, weather.temperature FROM weather, rspns_time_facts, incident_class WHERE rspns_time_facts.weather_key = weather.key AND rspns_time_facts.incident_class_key = incident_class.key AND
incident_class.group_des LIKE '%Fire%' LIMIT 10; ``` ***Visualize the data using Python*** ``` sql = ''' SELECT weather.temperature FROM weather, rspns_time_facts, incident_class WHERE rspns_time_facts.weather_key = weather.key AND rspns_time_facts.incident_class_key = incident_class.key AND incident_class.group_des LIKE '%Fire%'; ''' c.execute(sql) rows = c.fetchall() plt.hist([i[0] for i in rows], bins = 50) plt.xlabel("temperature") plt.ylabel("frequency") plt.title("Temperature Histogram") plt.show() ``` ***Discussion*** Based on the temperature plot, we can see peaks at certain temperatures, such as between 0 and 5 degrees Celsius, that coincide with the higher frequency of fires in the winter months. ### Distribution of humidity values of fire incidents ***Check the data we want using SQL*** ``` %%sql SELECT rspns_time_facts.id, weather.humidity FROM weather, rspns_time_facts, incident_class WHERE rspns_time_facts.weather_key = weather.key AND rspns_time_facts.incident_class_key = incident_class.key AND incident_class.group_des LIKE '%Fire%' LIMIT 10; ``` ***Visualize the data using Python*** ``` sql = ''' SELECT weather.humidity FROM weather, rspns_time_facts, incident_class WHERE rspns_time_facts.weather_key = weather.key AND rspns_time_facts.incident_class_key = incident_class.key AND incident_class.group_des LIKE '%Fire%'; ''' c.execute(sql) rows = c.fetchall() plt.hist([i[0] for i in rows]) plt.xlabel("humidity") plt.ylabel("frequency") plt.title("Humidity Histogram") plt.show() ``` ***Discussion*** Based on the humidity plot, we actually see a negatively skewed distribution. We find this odd because we expect drier air to lead to more fires. However, the high levels of humidity, defined here as between 80% and 100%, may represent severe weather scenarios, such as heavy rain and even snow storms, and these severe storms may be part of the cause of additional fires. ### Average travel time under different weather conditions ***Check the data we want using SQL*** ``` %%sql SELECT weather.description, ROUND(AVG(incident_travel_s_qy)::numeric, 2) FROM rspns_time_facts JOIN weather ON rspns_time_facts.weather_key = weather.key GROUP BY weather.description ORDER BY AVG(incident_travel_s_qy) DESC; ``` ***Visualize the data using Python*** ``` sql = ''' SELECT weather.description, ROUND(AVG(incident_travel_s_qy)::numeric, 2) FROM rspns_time_facts JOIN weather ON rspns_time_facts.weather_key = weather.key GROUP BY weather.description ORDER BY AVG(incident_travel_s_qy); ''' c.execute(sql) rows = c.fetchall() plt.barh([i[0] for i in rows], [i[1] for i in rows]) plt.xlabel("travel time") plt.ylabel("weather") plt.title("Travel time vs Weather Situation") plt.show() ``` ***Discussion*** Finally, we plotted the average travel time by weather description and found a few interesting results. For example, heavy snow appears to significantly increase travel times across the boroughs. We therefore recommend that the New York City Fire Department take note that the increased frequency of fires in the winter months, paired with the difficulty of responding quickly in heavy snow, has the potential to lead to severe fires with serious consequences, including the loss of property, life, and emotional well-being. ## Disconnect the database ``` c.close() conn.close() ``` **Disclaimer** All members worked through the project individually from start to finish and met to discuss the findings. Everyone contributed substantially to the work.
rspns_time_facts.incident_hour_key GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows = c.fetchall() plt.bar([i[0] for i in rows], [i[1] for i in rows]) plt.xlabel("month") plt.ylabel("incidents") plt.title("Incidents by Month") plt.show() %%sql SELECT hour.month_of_year, count(*) FROM alarm_box, hour, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND incident_class.key = rspns_time_facts.incident_class_key AND incident_class.group_des LIKE '%Fire%' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; sql = ''' SELECT hour.month_of_year, count(*) FROM alarm_box, hour, incident_class, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND incident_class.key = rspns_time_facts.incident_class_key AND incident_class.group_des LIKE '%Fire%' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows = c.fetchall() plt.bar([i[0] for i in rows], [i[1] for i in rows]) plt.xlabel("month") plt.ylabel("fire incidents") plt.title("Fire Incidents by Month") plt.show() %%sql SELECT alarm_box.borough, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2), ROUND(stddev(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) FROM alarm_box, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key GROUP BY alarm_box.borough; sql = ''' SELECT alarm_box.borough, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2), ROUND(stddev(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) FROM alarm_box, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key GROUP BY alarm_box.borough; ''' c.execute(sql) rows = c.fetchall() plt.barh([i[0] for i in rows], [i[3] for i in rows]) plt.xlabel("average response time") plt.ylabel("borough") plt.title("Average Response Time by Borough") plt.show() %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'RICHMOND / STATEN ISLAND' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'QUEENS' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'MANHATTAN' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), 
ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'BROOKLYN' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; %%sql SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'BRONX' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'RICHMOND / STATEN ISLAND' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows1 = c.fetchall() sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'QUEENS' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows2 = c.fetchall() sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'MANHATTAN' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows3 = c.fetchall() sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'BROOKLYN' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows4 = c.fetchall() sql = ''' SELECT hour.month_of_year, min(rspns_time_facts.incident_rspns_s_qy), max(rspns_time_facts.incident_rspns_s_qy), ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns, ROUND(avg(rspns_time_facts.incident_travel_s_qy)::numeric, 2) AS avg_travel FROM alarm_box, hour, rspns_time_facts WHERE alarm_box.key = rspns_time_facts.alarm_box_key AND hour.key = rspns_time_facts.incident_hour_key AND alarm_box.borough = 'BRONX' GROUP BY hour.month_of_year ORDER BY hour.month_of_year; ''' c.execute(sql) rows5 = c.fetchall() plt.plot([i[0] for i in rows1], [i[3] for i in rows1], label = "RICHMOND / STATEN 
ISLAND") plt.plot([i[0] for i in rows2], [i[3] for i in rows2], label = "QUEENS") plt.plot([i[0] for i in rows3], [i[3] for i in rows3], label = "MANHATTAN") plt.plot([i[0] for i in rows4], [i[3] for i in rows4], label = "BROOKLYN") plt.plot([i[0] for i in rows5], [i[3] for i in rows5], label = "BRONX") plt.xlabel("Month") plt.ylabel("average response time") plt.title("Average Response Time by Borough by Month") plt.legend(loc='best') plt.show() plt.plot([i[0] for i in rows1], [i[4] for i in rows1], label = "RICHMOND / STATEN ISLAND") plt.plot([i[0] for i in rows2], [i[4] for i in rows2], label = "QUEENS") plt.plot([i[0] for i in rows3], [i[4] for i in rows3], label = "MANHATTAN") plt.plot([i[0] for i in rows4], [i[4] for i in rows4], label = "BROOKLYN") plt.plot([i[0] for i in rows5], [i[4] for i in rows5], label = "BRONX") plt.xlabel("Month") plt.ylabel("average travel time") plt.title("Average Travel Time by Borough by Month") plt.legend(loc='best') plt.show() c.close() conn.close() from IPython.display import Image Image(url = "https://s3.amazonaws.com/2018-istm6212-group17/severe_fires_by_zip.png") !wget -O weather.zip https://s3.amazonaws.com/2018-istm6212-group17/historical-hourly-weather-data.zip !unzip -o weather.zip -d ./weather !grep -E '^2016' weather/weather_description.csv | csvcut -c1,29 > description.txt !grep -E '^2016' weather/temperature.csv | csvcut -c29 > temperature.txt !grep -E '^2016' weather/humidity.csv | csvcut -c29 > humidity.txt !grep -E '^2016' weather/pressure.csv | csvcut -c29 > pressure.txt !grep -E '^2016' weather/wind_speed.csv | csvcut -c29 > wind_speed.txt !wc -l *.txt !paste -d ',' description.txt temperature.txt humidity.txt pressure.txt wind_speed.txt > 2016_nyc_weather.csv !sed -i 1i'hour,description,temperature,humidity,pressure,wind_speed' 2016_nyc_weather.csv !wc -l 2016_nyc_weather.csv !csvstat --count 2016_nyc_weather.csv !csvcut -n 2016_nyc_weather.csv !csvstat 2016_nyc_weather.csv from IPython.display import Image Image(url = "https://s3.amazonaws.com/2018-istm6212-group17/star_schema_augmented.png") %%sql DROP TABLE IF EXISTS weather; CREATE TABLE weather ( key SERIAL PRIMARY KEY, hour VARCHAR(20) NOT NULL, description VARCHAR(50) NOT NULL, temperature FLOAT NOT NULL, humidity FLOAT NOT NULL, pressure FLOAT NOT NULL, wind_speed FLOAT NOT NULL ); !cp 2016_nyc_weather.csv /tmp/2016_nyc_weather.csv %%sql COPY weather (hour, description, temperature, humidity, pressure, wind_speed) FROM '/tmp/2016_nyc_weather.csv' CSV HEADER; %%sql UPDATE weather SET temperature = ROUND((temperature - 272.15)::numeric, 2); %%sql SELECT * FROM weather ORDER BY key LIMIT 5; %%sql SELECT COUNT(*) FROM weather; %%sql ALTER TABLE rspns_time_facts ADD COLUMN weather_key INTEGER, ADD CONSTRAINT fk_weather_key FOREIGN KEY (weather_key) REFERENCES weather (key); %%sql UPDATE rspns_time_facts SET weather_key = weather.key FROM weather, hour WHERE rspns_time_facts.incident_hour_key = hour.key AND hour.hour = weather.hour; %%sql SELECT id, weather_key FROM rspns_time_facts LIMIT 5; %%sql SELECT COUNT(*) AS weather_key_not_null_count FROM rspns_time_facts WHERE weather_key IS NOT NULL; %%sql SELECT COUNT(*) AS weather_key_null_count FROM rspns_time_facts WHERE weather_key IS NULL; %%sql SELECT * FROM rspns_time_facts LIMIT 10; %%sql \dt import psycopg2 import geocoder import matplotlib.pyplot as plt import pandas import numpy as np conn = psycopg2.connect("dbname='proj4_group17' user='student'") c = conn.cursor() %%sql SELECT rspns_time_facts.id, weather.temperature FROM 
weather, rspns_time_facts, incident_class WHERE rspns_time_facts.weather_key = weather.key AND rspns_time_facts.incident_class_key = incident_class.key AND incident_class.group_des LIKE '%Fire%' LIMIT 10; sql = ''' SELECT weather.temperature FROM weather, rspns_time_facts, incident_class WHERE rspns_time_facts.weather_key = weather.key AND rspns_time_facts.incident_class_key = incident_class.key AND incident_class.group_des LIKE '%Fire%'; ''' c.execute(sql) rows = c.fetchall() plt.hist([i[0] for i in rows], bins = 50) plt.xlabel("temperature") plt.ylabel("frequency") plt.title("Temperature Histogram") plt.show() %%sql SELECT rspns_time_facts.id, weather.humidity FROM weather, rspns_time_facts, incident_class WHERE rspns_time_facts.weather_key = weather.key AND rspns_time_facts.incident_class_key = incident_class.key AND incident_class.group_des LIKE '%Fire%' LIMIT 10; sql = ''' SELECT weather.humidity FROM weather, rspns_time_facts, incident_class WHERE rspns_time_facts.weather_key = weather.key AND rspns_time_facts.incident_class_key = incident_class.key AND incident_class.group_des LIKE '%Fire%'; ''' c.execute(sql) rows = c.fetchall() plt.hist([i[0] for i in rows]) plt.xlabel("humidity") plt.ylabel("frequency") plt.title("Humidity Histogram") plt.show() %%sql SELECT weather.description, ROUND(AVG(incident_travel_s_qy)::numeric, 2) FROM rspns_time_facts JOIN weather ON rspns_time_facts.weather_key = weather.key GROUP BY weather.description ORDER BY AVG(incident_travel_s_qy) DESC; sql = ''' SELECT weather.description, ROUND(AVG(incident_travel_s_qy)::numeric, 2) FROM rspns_time_facts JOIN weather ON rspns_time_facts.weather_key = weather.key GROUP BY weather.description ORDER BY AVG(incident_travel_s_qy); ''' c.execute(sql) rows = c.fetchall() plt.barh([i[0] for i in rows], [i[1] for i in rows]) plt.xlabel("travel time") plt.ylabel("weather") plt.title("Travel time vs Weather Situation") plt.show() c.close() conn.close()
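As a side note on the per-borough monthly response-time analysis above: the five borough-specific queries differ only in their WHERE filter, so the same data can be pulled with a single query grouped by borough and month. The sketch below is a minimal illustration of that idea and is not part of the original analysis; it assumes the same database, connection string, and alarm_box/hour/rspns_time_facts tables used above, and the use of pandas.read_sql and DataFrame.pivot is a convenience chosen here.

```
# Minimal sketch (assumption: same proj4_group17 database and tables as above):
# fetch average response time per borough and month in one query, then pivot and plot.
import psycopg2
import pandas as pd
import matplotlib.pyplot as plt

conn = psycopg2.connect("dbname='proj4_group17' user='student'")
sql = '''
SELECT alarm_box.borough, hour.month_of_year,
       ROUND(avg(rspns_time_facts.incident_rspns_s_qy)::numeric, 2) AS avg_rspns
FROM alarm_box, hour, rspns_time_facts
WHERE alarm_box.key = rspns_time_facts.alarm_box_key
  AND hour.key = rspns_time_facts.incident_hour_key
GROUP BY alarm_box.borough, hour.month_of_year
ORDER BY alarm_box.borough, hour.month_of_year;
'''
df = pd.read_sql(sql, conn)  # one round trip instead of five borough-specific queries
conn.close()

# Pivot so each borough becomes a column and months the index, then plot one line per borough.
pivot = df.pivot(index='month_of_year', columns='borough', values='avg_rspns').astype(float)
pivot.plot(marker='.')
plt.xlabel("Month")
plt.ylabel("average response time")
plt.title("Average Response Time by Borough by Month")
plt.legend(loc='best')
plt.show()
```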
<a href="https://colab.research.google.com/github/Yadukrishnan1/Fellow-Placement/blob/main/fellow_placement.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### **Customer Succes Analysis for a Career Accelerator (Startup stage)** --- An **online career accelerator** operates by helping prospective job seekers succesfully obtain a job in their industry role of choice. It is important for their business to determine if a new addition to their customers base could generate revenue by getting placed at the end of their service. This is also important to add reputation and to market their success. 1. **Business Question** What are the characteristic of a succesful(revenue generating) fellow (customer). How long do they take to guarantee the revenue (Time to get placed)? 2. **DS/ML Framing** The first part of the question is a binary classification with two classes; 'placed' and 'not placed'. The second question is a regression problem. # Importing all the required packages in Python ``` # Packages for EDA import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from scipy import stats import warnings import time import pickle # Scikit-learn for ML prediction and modelling from sklearn.preprocessing import StandardScaler import sklearn from sklearn.metrics import mean_squared_error, f1_score, roc_curve, auc, roc_auc_score, confusion_matrix from sklearn.model_selection import train_test_split from sklearn import tree from mpl_toolkits import mplot3d from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, LabelEncoder from sklearn.linear_model import LinearRegression, LogisticRegression, LogisticRegressionCV from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import cross_val_score, GridSearchCV, train_test_split, KFold, RandomizedSearchCV from xgboost.sklearn import XGBRegressor, XGBClassifier import xgboost as xgb from os import path warnings.filterwarnings('ignore') %matplotlib inline # Loading the data cust_success=pd.read_excel('Data_Pathrise.xlsx') ``` # Exploratory Data Analysis What are the variables (features) in our dataset? Which are the Predictor Variables and which is the Target Variables? TV: 'placed' PV: All others Categorical variables such as 'program_duration_days', 'professional_experience', 'length_of_job_search' are ordinal attributes. All the rest of the categorical features are nominal attributes. One-hot-encoding is to be carried out for all the nominal attributes. The testing set is selected by using the 'cust_success status' which are either 'Active' or 'Break'. The training set will be the Pathrise data without the testing set. A validation set should be made by doing a split of the training dataset. Analysis should be two steps: 1) Determine what are the characteristics of fellows being placed. For that eliminate the 'program_duration_days' columns. Then see which columns contributes to it. 2) After determining the fellows, who get placed, use regression to see how long do they take to get placed. # Other considerations A) Check if balancing the training data set is needed. B) Drop columns that seemingly have no effect such as 'id'. 'Race' and 'gender' might be affecting the prediction, but they are understandably complicated, so dropping them for training seems reasonable. 'placed' column should be removed in training as it's the target variable. 
C) Anomaly detection needs to be studied. For example, limit the 'number of applications' to less than 200. D) Do the encoders for categories in first cust_success data. Then add missing values with mean/median. ``` cust_success.head() cust_success.describe() # Columns/feature names cust_success.columns # Missing values exploration cust_success.isnull().sum() # Let's check the comparison for the "Average Number of Days for Placement" features as a function of the "tracks". # Let's also keep cust_success as the original dataframe for a reference and use a generic name such as df for all the treatments. df=cust_success df_1 = df[df['placed'] == 1] df_1.groupby('primary_track').agg({'program_duration_days':'mean'}).sort_values(by='program_duration_days', ascending=False).plot(kind='bar', figsize=(20,10)) plt.title ('Average Number of Days for Placement for Primary Tracks', fontsize=15, fontweight="bold") plt.xlabel("Primary Track", fontsize=15) plt.ylabel("Number of Days", fontsize=15) plt.show() df_1 = df[df['placed'] == 1] df_1.groupby('highest_level_of_education').agg({'program_duration_days':'mean'}).sort_values(by='program_duration_days', ascending=False).plot(kind='bar',figsize=(20,10)) plt.title ('Average Number of Days for Placement for Levels of Education', fontsize=20, fontweight="bold") plt.xlabel("Level of Education", fontsize=15) plt.ylabel("Number of Days", fontsize=15) plt.show() df_1 = df[df['placed'] == 1] df_0 = df[df['placed'] == 0] fig, ax = plt.subplots(1,2, figsize = (10, 7)) ax[0].set_title('Number of Applications for \nPlaced Fellows', fontsize = 15) ax[0].boxplot(df_1['number_of_applications']) ax[0].set_ylabel('No. of Applications') ax[1].set_title('Number of Applications for \nnon-Placed Fellows', fontsize = 15) ax[1].boxplot(df_0['number_of_applications']) ax[1].set_ylabel('No. of Applications') plt.show() df_1 = df[df['placed'] == 1] df_0 = df[df['placed'] == 0] fig, ax = plt.subplots(1,2, figsize = (10, 7)) ax[0].set_title('Number of Interviews for \nPlaced Fellows', fontsize = 15) ax[0].boxplot(df_1['number_of_interviews']) ax[0].set_ylabel('No. of Applications') ax[1].set_title('Number of Interviews for \nnon-Placed Fellows', fontsize = 15) ax[1].boxplot(df_0['number_of_interviews']) ax[1].set_ylabel('No. of Interviews') plt.show() fig, ax = plt.subplots(figsize = (15, 7)) plt.suptitle('Distribution of Gender') df.groupby('gender').size().to_frame().plot.bar(legend = False, ax = ax, rot = 0) plt.ylabel('Frequency') plt.show() fig, ax = plt.subplots(figsize = (20, 7)) plt.suptitle('Distribution of Race') df.groupby('race').size().to_frame().plot.bar(legend = False, ax = ax, rot = 90) plt.ylabel('Frequency') plt.show() fig, ax = plt.subplots(figsize = (20, 7)) plt.suptitle('Distribution of Pathrise Status') df.groupby('pathrise_status').size().to_frame().plot.bar(legend = False, ax = ax, rot = 0) plt.ylabel('Frequency') plt.show() fig, ax = plt.subplots(figsize = (20, 7)) plt.suptitle('Distribution of Employment Status') df.groupby('employment_status ').size().to_frame().plot.bar(legend = False, ax = ax, rot = 0) plt.ylabel('Frequency') plt.show() fig, ax = plt.subplots(figsize = (20, 7)) plt.suptitle('Distribution of Highest Level of Education') df.groupby('highest_level_of_education').size().to_frame().plot.bar(legend = False, ax = ax, rot = 90) plt.ylabel('Frequency') plt.show() # Let's look at how many fellows placed at a company. placed_df = pd.DataFrame({'Placed at a Company?': ['Yes', 'No'],'No. 
of Fellows': [df['placed'].sum(), df.shape[0] - df['placed'].sum()], }) placed_df.set_index('Placed at a Company?', inplace = True) fig, ax = plt.subplots(figsize = (10, 7)) fig.suptitle('Number of Fellows Placed', fontsize = 15) plt.bar(x = placed_df.index, height = 'No. of Fellows', data = placed_df) plt.xlabel('Placed at a Company?') plt.ylabel('Frequency') plt.show() ``` # In the first approach of model building and prediction, we drop all the missing values to see how the prediction goes. ``` ori_len=df.shape[0] # Need to drop rows containing NA for Categorical data df = df[df['employment_status '].notna()] df = df[df['highest_level_of_education'].notna()] df = df[df['length_of_job_search'].notna()] df = df[df['gender'].notna()] df = df[df['race'].notna()] df = df[df['work_authorization_status'].notna()] df = df[df['biggest_challenge_in_search'].notna()] df = df[df['cohort_tag'].notna()] df = df[df['professional_experience'].notna()] miss_len=df.shape[0] print("We lose almost {0}% of data by dropping the missing values.".format(np.around((ori_len-miss_len)/ori_len*100))) df.fillna(df.median(), inplace = True) # Median is more meaningful than mean in Skewed data cat_df = df.select_dtypes(include=['object']).copy() # Encoding the columns enc_make = OrdinalEncoder() cat_df_transformed = enc_make.fit_transform(cat_df) for i,j in enumerate(cat_df.columns): cat_df[j] = cat_df_transformed.transpose()[i] df_1 = df.copy() # Adding converted labels to df for i in df_1.columns: if i in cat_df.columns: df_1[i] = cat_df[i] df_1.head() from sklearn.cluster import KMeans from sklearn.metrics import classification_report, confusion_matrix, precision_score, recall_score, f1_score, roc_curve, roc_auc_score X = df_1.drop(columns = ['placed','pathrise_status', 'id']) y = df_1['placed'] x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0) kmeans = KMeans(n_clusters=2, n_init=25, max_iter=100, random_state=6) # 2 clusters because it's binary classification kmeans.fit(x_train) pred = kmeans.predict(x_test) print(classification_report(y_test, pred)) clf = DecisionTreeClassifier() clf.fit(x_train, y_train) pred = clf.predict(x_test) print(classification_report(y_test, pred)) # Making a parameter grid using gridsearchcv param_grid = {'criterion': ['gini', 'entropy'], 'min_samples_split': [2, 4, 6, 8, 10], 'min_samples_leaf': [1, 3, 5, 7, 9], 'max_leaf_nodes': [2, 5, 10, 20]} grid = GridSearchCV(clf, param_grid, cv=10, scoring='accuracy') grid.fit(x_train, y_train) pred = grid.best_estimator_.predict(x_test) print(classification_report(y_test, pred)) ``` # Dimensionality reduction using PCA ``` from sklearn import decomposition from sklearn.decomposition import PCA pca = decomposition.PCA() #n_components=5 pca.fit(x_train) x_train = pca.transform(x_train) x_test = pca.transform(x_test) # Feature scaling from sklearn.preprocessing import StandardScaler sc = StandardScaler() x_train = sc.fit_transform(x_train) x_test = sc.transform(x_test) from sklearn import tree from sklearn.metrics import f1_score from sklearn.metrics import confusion_matrix from sklearn.metrics import plot_confusion_matrix from sklearn.metrics import precision_recall_fscore_support from sklearn.metrics import r2_score clf = tree.DecisionTreeClassifier() clf = clf.fit(x_train, y_train) print("Accuracy (in %):", clf.score(x_test, y_test)) y_pred = clf.predict(x_test) print('F1 score', f1_score(y_test, y_pred)) plot_confusion_matrix(clf, x_test, y_test) print(precision_recall_fscore_support(y_test, y_pred, 
average='binary')) plt.show() ``` # Decision Tree model ``` clf = DecisionTreeClassifier() cv_score = cross_val_score(clf, x_train, y_train, scoring = 'accuracy', cv = 5, n_jobs = -1, verbose = 0) plt.plot(cv_score, 'd-',label='Decision Tree Classifier') plt.xlabel('# of folds of cross validation') plt.ylabel('Accuracy') plt.ylim(0,0.9) plt.legend() plt.show() ``` # Logistic regression model ``` clf=LogisticRegression() cv_score = cross_val_score(clf, x_train, y_train, scoring = 'accuracy', cv = 5, n_jobs = -1, verbose = 0) plt.plot(cv_score, 'go-',label='Logistic Regression Classifier') plt.xlabel('# of folds of cross validation') plt.ylabel('Accuracy') plt.ylim(0,0.9) plt.legend() plt.show() ``` # Random Forest Classifier ``` clf = RandomForestClassifier(n_estimators=40, random_state=0) cv_score = cross_val_score(clf, x_train, y_train, scoring = 'accuracy', cv = 5, n_jobs = -1, verbose = 0) plt.plot(cv_score, 'rd-',label='RF Classifier') plt.xlabel('# of folds of cross validation') plt.ylabel('Accuracy') plt.ylim(0,0.9) plt.legend() plt.show() ``` # K-NN Classifier ``` from sklearn.neighbors import KNeighborsClassifier clf = KNeighborsClassifier(n_neighbors=6) cv_score = cross_val_score(clf, x_train, y_train, scoring = 'accuracy', cv = 5, n_jobs = -1, verbose = 0) plt.plot(cv_score, 'd-',label='k-NN Classifier') plt.xlabel('run # of the cross validation') plt.ylabel('Accuracy') plt.ylim(0,0.9) plt.legend() plt.show() from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report, confusion_matrix, accuracy_score accu=[] for i in np.arange(10, 240, 10): classifier = RandomForestClassifier(n_estimators=i, random_state=0) classifier.fit(x_train, y_train) y_pred = classifier.predict(x_test) accu.append(accuracy_score(y_test, y_pred)) plt.plot(np.arange(10, 240, 10), accu, '.-') plt.xlabel('# of Trees') plt.ylabel('Accuracy') plt.show() ``` # Task - 2 ## Determine how fast do fellows get placed ``` # Removing the outliers in 'number_of_applications': 400 and 1000 applications are way above 95%. cust_success_2=cust_success.drop(columns=['id', 'cohort_tag', 'professional_experience', 'length_of_job_search']) print('Before removing missing values, shape:', cust_success_2.shape) cust_success_2.dropna(inplace=True) print('After removing missing values, shape:', cust_success_2.shape) cust_success_2_train=cust_success_2[(cust_success_2['cust_success_status']=='Placed') & (cust_success_2['placed']==1)].drop(columns=['cust_success_status', 'placed']) cust_success_2_train.head() # See the statistics of the target variable print(cust_success_2_train['program_duration_days'].describe()) sns.distplot(cust_success_2_train['program_duration_days']) plt.show() # Check skewness and the curtosis print("Skewness: %f" % cust_success_2_train['program_duration_days'].skew()) print("Kurtosis: %f" % cust_success_2_train['program_duration_days'].kurt()) # The most experienced gets placed faster. var = 'professional_experience_ord' data = pd.concat([cust_success_2_train['program_duration_days'], cust_success_2_train[var]], axis=1) f, ax = plt.subplots(figsize=(8, 6)) fig = sns.boxplot(x=var, y='program_duration_days', data=data) fig.axis(ymin=0, ymax=600); # SWE has the lowest mean. Higher variance for PSO. var = 'primary_track' data = pd.concat([cust_success_2_train['program_duration_days'], cust_success_2_train[var]], axis=1) f, ax = plt.subplots(figsize=(8, 6)) fig = sns.boxplot(x=var, y='program_duration_days', data=data) # SWE has the lowest mean. Higher variance for PSO. 
var = 'highest_level_of_education' data = pd.concat([cust_success_2_train['program_duration_days'], cust_success_2_train[var]], axis=1) f, ax = plt.subplots(figsize=(18, 6)) fig = sns.boxplot(x=var, y='program_duration_days', data=data) #correlation matrix corrmat = cust_success_2_train.corr() f, ax = plt.subplots(figsize=(12, 9)) sns.heatmap(corrmat, vmax=0.2, square=True); cust_success_2_train.columns # Stratified K-fold cross-validation code features = list(train_df.columns[1:101]) train_oof = np.zeros((250000,)) skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42) for fold, (train_idx, valid_idx) in enumerate(skf.split(train_df[features], train_df['loss'])): X_train, X_valid = train_df.iloc[train_idx], train_df.iloc[valid_idx] y_train = X_train['loss'] y_valid = X_valid['loss'] X_train = X_train.drop('loss', axis=1) X_valid = X_valid.drop('loss', axis=1) model = LinearRegression() model = model.fit(X_train, y_train) temp_oof = model.predict(X_valid) train_oof[valid_idx] = temp_oof print(f'Fold {fold} RMSE: ', mean_squared_error(y_valid, temp_oof, squared=False)) print(f'OOF Accuracy: ', mean_squared_error(train_df['loss'], train_oof, squared=False)) ``` # Training and testing data for Task - 2 ``` X_train2=cust_success_2_train.drop(columns='program_duration_days') y_train2=cust_success_2_train['program_duration_days'] # We use 'train_test_split' from Sckit_learn to create a training and validation datasets. X_train2, X_test2, y_train2, y_test2 = train_test_split(X_train2, y_train2, test_size = 0.2, random_state = 0) ``` # Linear regression model ``` from sklearn.linear_model import LinearRegression reg = LinearRegression().fit(X_train2, y_train2) y_pred2 = reg.predict(X_test2) plt.plot(np.array(y_pred2),'.', label='Predicted') plt.plot(np.array(y_test2),'.', label='Ground Truth') plt.xlabel('Candidates') plt.ylabel('Program Duration (Days)') plt.title('Linear Regression') plt.legend() plt.ylim(0,400) plt.show() print(reg.score(X_train2, y_train2)) from sklearn.linear_model import LogisticRegression clf = LogisticRegression(random_state=0, max_iter=500).fit(X_train2, y_train2) y_pred2 = clf.predict(X_test2) plt.plot(np.array(y_pred2),'.', label='Predicted') plt.plot(np.array(y_test2),'.', label='Ground Truth') plt.xlabel('Candidates') plt.ylabel('Program Duration (Days)') plt.title('Logistic Regression') plt.legend() plt.ylim(0,400) plt.show() print(clf.score(X_train2, y_train2)) import numpy as np from sklearn.linear_model import SGDClassifier from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline # Always scale the input. The most convenient way is to use a pipeline. clf = make_pipeline(StandardScaler(), SGDClassifier(max_iter=1000, tol=1e-3)) clf.fit(X_train2, y_train2) y_pred2=clf.predict((X_test2)) plt.plot(np.array(y_pred2),'.', label='Predicted') plt.plot(np.array(y_test2),'.', label='Ground Truth') plt.xlabel('Candidates') plt.ylabel('Program Duration (Days)') plt.title('SGDC Regression') plt.legend() plt.ylim(0,400) plt.show() print(clf.score(X_train2, y_train2)) ```
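Since 'program_duration_days' is a continuous target, Task 2 can also be modelled with a regression estimator rather than the classifiers above. The following is a minimal sketch under that framing, assuming the numeric X_train2/X_test2/y_train2/y_test2 split created earlier; SGDRegressor and the RMSE/R^2 reporting are choices made here, not part of the original notebook.

```
# Minimal sketch (assumes X_train2, X_test2, y_train2, y_test2 from the split above):
# model program_duration_days as a continuous target with a scaled linear regressor.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, r2_score

reg = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, tol=1e-3, random_state=0))
reg.fit(X_train2, y_train2)
y_pred2 = reg.predict(X_test2)

# Report error in days instead of classification accuracy.
print('RMSE (days):', np.sqrt(mean_squared_error(y_test2, y_pred2)))
print('R^2:', r2_score(y_test2, y_pred2))
```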
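The EDA notes earlier call for one-hot encoding the nominal attributes, while the modelling cells above label-encode every categorical column with OrdinalEncoder. A minimal sketch of the one-hot variant is shown below; it assumes the cleaned dataframe df from the Task 1 preparation, and both the exact ordinal column list and the use of pd.get_dummies are assumptions made here.

```
# Minimal sketch of the one-hot encoding mentioned in the EDA notes
# (assumes the cleaned dataframe `df` from the Task 1 preparation above).
import pandas as pd

# Per the EDA notes, these categorical columns are ordinal; the remaining object columns are nominal.
ordinal_cols = ['professional_experience', 'length_of_job_search']
nominal_cols = [c for c in df.select_dtypes(include=['object']).columns if c not in ordinal_cols]

# One-hot encode only the nominal attributes; drop_first avoids a redundant dummy per feature.
df_onehot = pd.get_dummies(df, columns=nominal_cols, drop_first=True)
print(df_onehot.shape)
```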