Schema:
  hexsha: string, length 40
  size: int64, 6 to 14.9M
  ext: string, 1 distinct value
  lang: string, 1 distinct value
  max_stars_repo_path: string, length 6 to 260
  max_stars_repo_name: string, length 6 to 119
  max_stars_repo_head_hexsha: string, length 40 to 41
  max_stars_repo_licenses: sequence
  max_stars_count: int64, 1 to 191k
  max_stars_repo_stars_event_min_datetime: string, length 24
  max_stars_repo_stars_event_max_datetime: string, length 24
  max_issues_repo_path: string, length 6 to 260
  max_issues_repo_name: string, length 6 to 119
  max_issues_repo_head_hexsha: string, length 40 to 41
  max_issues_repo_licenses: sequence
  max_issues_count: int64, 1 to 67k
  max_issues_repo_issues_event_min_datetime: string, length 24
  max_issues_repo_issues_event_max_datetime: string, length 24
  max_forks_repo_path: string, length 6 to 260
  max_forks_repo_name: string, length 6 to 119
  max_forks_repo_head_hexsha: string, length 40 to 41
  max_forks_repo_licenses: sequence
  max_forks_count: int64, 1 to 105k
  max_forks_repo_forks_event_min_datetime: string, length 24
  max_forks_repo_forks_event_max_datetime: string, length 24
  avg_line_length: float64, 2 to 1.04M
  max_line_length: int64, 2 to 11.2M
  alphanum_fraction: float64, 0 to 1
  cells: sequence
  cell_types: sequence
  cell_type_groups: sequence
Row 1:
  hexsha: e7d4bf91ffbe8f8aa8d1ada36666be3adefd400e
  size: 1,061
  ext: ipynb
  lang: Jupyter Notebook
  max_stars_repo_path: src/Untitled1.ipynb
  max_stars_repo_name: dgustave/Novelist-Brokers
  max_stars_repo_head_hexsha: e3ed4ab7592f8b3eb1289531d5864a3ff5ff1abe
  max_stars_repo_licenses: [ "MIT" ]
  max_stars_count: 1
  max_stars_repo_stars_event_min_datetime: 2020-11-18T00:28:52.000Z
  max_stars_repo_stars_event_max_datetime: 2020-11-18T00:28:52.000Z
  max_issues_repo_path: src/Untitled1.ipynb
  max_issues_repo_name: dgustave/Novelist-Brokers
  max_issues_repo_head_hexsha: e3ed4ab7592f8b3eb1289531d5864a3ff5ff1abe
  max_issues_repo_licenses: [ "MIT" ]
  max_issues_count: null
  max_issues_repo_issues_event_min_datetime: null
  max_issues_repo_issues_event_max_datetime: null
  max_forks_repo_path: src/Untitled1.ipynb
  max_forks_repo_name: dgustave/Novelist-Brokers
  max_forks_repo_head_hexsha: e3ed4ab7592f8b3eb1289531d5864a3ff5ff1abe
  max_forks_repo_licenses: [ "MIT" ]
  max_forks_count: 1
  max_forks_repo_forks_event_min_datetime: 2021-02-07T17:52:29.000Z
  max_forks_repo_forks_event_max_datetime: 2021-02-07T17:52:29.000Z
  avg_line_length: 17.112903
  max_line_length: 65
  alphanum_fraction: 0.484449
  cells / cell_types / cell_type_groups:
[ [ [ "import requests", "_____no_output_____" ], [ "requests.get(\"http://127.0.0.1:5000/API/all_data\").json()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
Row 2:
  hexsha: e7d4c77f7e3237b4f2717c7c6b71354e316d2d96
  size: 22,029
  ext: ipynb
  lang: Jupyter Notebook
  max_stars_repo_path: _notebooks/public_schools/EDA.ipynb
  max_stars_repo_name: Hevia/blog
  max_stars_repo_head_hexsha: cc6815a641c0cac131c21ed528f117b4ffbcecc8
  max_stars_repo_licenses: [ "Apache-2.0" ]
  max_stars_count: null
  max_stars_repo_stars_event_min_datetime: null
  max_stars_repo_stars_event_max_datetime: null
  max_issues_repo_path: _notebooks/public_schools/EDA.ipynb
  max_issues_repo_name: Hevia/blog
  max_issues_repo_head_hexsha: cc6815a641c0cac131c21ed528f117b4ffbcecc8
  max_issues_repo_licenses: [ "Apache-2.0" ]
  max_issues_count: 3
  max_issues_repo_issues_event_min_datetime: 2021-05-20T22:57:39.000Z
  max_issues_repo_issues_event_max_datetime: 2022-02-26T10:20:26.000Z
  max_forks_repo_path: _notebooks/public_schools/EDA.ipynb
  max_forks_repo_name: Hevia/blog
  max_forks_repo_head_hexsha: cc6815a641c0cac131c21ed528f117b4ffbcecc8
  max_forks_repo_licenses: [ "Apache-2.0" ]
  max_forks_count: null
  max_forks_repo_forks_event_min_datetime: null
  max_forks_repo_forks_event_max_datetime: null
  avg_line_length: 34.966667
  max_line_length: 131
  alphanum_fraction: 0.376504
  cells / cell_types / cell_type_groups:
[ [ [ "Data can be found: https://data-seattlecitygis.opendata.arcgis.com/search?collection=Dataset&modified=2021-01-01%2C2021-10-13", "_____no_output_____" ] ], [ [ "import geopandas as gpd", "_____no_output_____" ], [ "gpd.datasets.available", "_____no_output_____" ], [ "world = gpd.read_file(\n gpd.datasets.get_path('naturalearth_lowres')\n)", "_____no_output_____" ], [ "seattle_zoning_2035 = gpd.read_file('Future_Land_Use__2035.shp')\nseattle_zoning_2035.head()", "_____no_output_____" ], [ "education_centers = gpd.read_file('data/Environmental_Education_Centers.shp')\neducation_centers", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
Row 3:
  hexsha: e7d4cf08b0dac61ca64c174069d9882a73f6cf74
  size: 588,618
  ext: ipynb
  lang: Jupyter Notebook
  max_stars_repo_path: courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
  max_stars_repo_name: Best-Cloud-Practice-for-Data-Science/training-data-analyst
  max_stars_repo_head_hexsha: a89d259a6ab5621962562eb4c78d8413c1914836
  max_stars_repo_licenses: [ "Apache-2.0" ]
  max_stars_count: null
  max_stars_repo_stars_event_min_datetime: null
  max_stars_repo_stars_event_max_datetime: null
  max_issues_repo_path: courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
  max_issues_repo_name: Best-Cloud-Practice-for-Data-Science/training-data-analyst
  max_issues_repo_head_hexsha: a89d259a6ab5621962562eb4c78d8413c1914836
  max_issues_repo_licenses: [ "Apache-2.0" ]
  max_issues_count: null
  max_issues_repo_issues_event_min_datetime: null
  max_issues_repo_issues_event_max_datetime: null
  max_forks_repo_path: courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/adv_logistic_reg_TF2.0.ipynb
  max_forks_repo_name: Best-Cloud-Practice-for-Data-Science/training-data-analyst
  max_forks_repo_head_hexsha: a89d259a6ab5621962562eb4c78d8413c1914836
  max_forks_repo_licenses: [ "Apache-2.0" ]
  max_forks_count: null
  max_forks_repo_forks_event_min_datetime: null
  max_forks_repo_forks_event_max_datetime: null
  avg_line_length: 214.902519
  max_line_length: 59,836
  alphanum_fraction: 0.890567
  cells:
[ [ [ "# Advanced Logistic Regression in TensorFlow 2.0 \n\n\n\n## Learning Objectives\n\n1. Load a CSV file using Pandas\n2. Create train, validation, and test sets\n3. Define and train a model using Keras (including setting class weights)\n4. Evaluate the model using various metrics (including precision and recall)\n5. Try common techniques for dealing with imbalanced data:\n Class weighting and\n Oversampling\n\n\n\n## Introduction \nThis lab how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data. \n\nPENDING LINK UPDATE: Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://training-data-analyst/courses/machine_learning/deepdive2/image_classification/labs/5_fashion_mnist_class.ipynb) -- try to complete that notebook first before reviewing this solution notebook.", "_____no_output_____" ], [ "Start by importing the necessary libraries for this lab.", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nfrom tensorflow import keras\n\nimport os\nimport tempfile\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nimport sklearn\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\n\nprint(\"TensorFlow version: \",tf.version.VERSION)", "TensorFlow version: 2.1.0\n" ] ], [ [ "In the next cell, we're going to customize our Matplot lib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine. ", "_____no_output_____" ] ], [ [ "mpl.rcParams['figure.figsize'] = (12, 10)\ncolors = plt.rcParams['axes.prop_cycle'].by_key()['color']", "_____no_output_____" ] ], [ [ "## Data processing and exploration", "_____no_output_____" ], [ "### Download the Kaggle Credit Card Fraud data set\n\nPandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.\n\nNote: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. 
More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project", "_____no_output_____" ] ], [ [ "file = tf.keras.utils\nraw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')\nraw_df.head()", "_____no_output_____" ] ], [ [ "Now, let's view the statistics of the raw dataframe.", "_____no_output_____" ] ], [ [ "raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()", "_____no_output_____" ] ], [ [ "### Examine the class label imbalance\n\nLet's look at the dataset imbalance:", "_____no_output_____" ] ], [ [ "neg, pos = np.bincount(raw_df['Class'])\ntotal = neg + pos\nprint('Examples:\\n Total: {}\\n Positive: {} ({:.2f}% of total)\\n'.format(\n total, pos, 100 * pos / total))", "Examples:\n Total: 284807\n Positive: 492 (0.17% of total)\n\n" ] ], [ [ "This shows the small fraction of positive samples.", "_____no_output_____" ], [ "### Clean, split and normalize the data\n\nThe raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.", "_____no_output_____" ] ], [ [ "cleaned_df = raw_df.copy()\n\n# You don't want the `Time` column.\ncleaned_df.pop('Time')\n\n# The `Amount` column covers a huge range. Convert to log-space.\neps=0.001 # 0 => 0.1¢\ncleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)", "_____no_output_____" ] ], [ [ "Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.", "_____no_output_____" ] ], [ [ "# Use a utility from sklearn to split and shuffle our dataset.\ntrain_df, test_df = train_test_split(cleaned_df, test_size=0.2)\ntrain_df, val_df = train_test_split(train_df, test_size=0.2)\n\n# Form np arrays of labels and features.\ntrain_labels = np.array(train_df.pop('Class'))\nbool_train_labels = train_labels != 0\nval_labels = np.array(val_df.pop('Class'))\ntest_labels = np.array(test_df.pop('Class'))\n\ntrain_features = np.array(train_df)\nval_features = np.array(val_df)\ntest_features = np.array(test_df)", "_____no_output_____" ] ], [ [ "Normalize the input features using the sklearn StandardScaler.\nThis will set the mean to 0 and standard deviation to 1.\n\nNote: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets. 
", "_____no_output_____" ] ], [ [ "scaler = StandardScaler()\ntrain_features = scaler.fit_transform(train_features)\n\nval_features = scaler.transform(val_features)\ntest_features = scaler.transform(test_features)\n\ntrain_features = np.clip(train_features, -5, 5)\nval_features = np.clip(val_features, -5, 5)\ntest_features = np.clip(test_features, -5, 5)\n\n\nprint('Training labels shape:', train_labels.shape)\nprint('Validation labels shape:', val_labels.shape)\nprint('Test labels shape:', test_labels.shape)\n\nprint('Training features shape:', train_features.shape)\nprint('Validation features shape:', val_features.shape)\nprint('Test features shape:', test_features.shape)\n", "Training labels shape: (182276,)\nValidation labels shape: (45569,)\nTest labels shape: (56962,)\nTraining features shape: (182276, 29)\nValidation features shape: (45569, 29)\nTest features shape: (56962, 29)\n" ] ], [ [ "Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way to implement them as layers, and attach them to your model before export.\n", "_____no_output_____" ], [ "### Look at the data distribution\n\nNext compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:\n\n* Do these distributions make sense? \n * Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.\n* Can you see the difference between the ditributions?\n * Yes the positive examples contain a much higher rate of extreme values.", "_____no_output_____" ] ], [ [ "pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)\nneg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)\n\nsns.jointplot(pos_df['V5'], pos_df['V6'],\n kind='hex', xlim = (-5,5), ylim = (-5,5))\nplt.suptitle(\"Positive distribution\")\n\nsns.jointplot(neg_df['V5'], neg_df['V6'],\n kind='hex', xlim = (-5,5), ylim = (-5,5))\n_ = plt.suptitle(\"Negative distribution\")", "_____no_output_____" ] ], [ [ "## Define the model and metrics\n\nDefine a function that creates a simple neural network with a densly connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/#dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent: ", "_____no_output_____" ] ], [ [ "METRICS = [\n keras.metrics.TruePositives(name='tp'),\n keras.metrics.FalsePositives(name='fp'),\n keras.metrics.TrueNegatives(name='tn'),\n keras.metrics.FalseNegatives(name='fn'), \n keras.metrics.BinaryAccuracy(name='accuracy'),\n keras.metrics.Precision(name='precision'),\n keras.metrics.Recall(name='recall'),\n keras.metrics.AUC(name='auc'),\n]\n\ndef make_model(metrics = METRICS, output_bias=None):\n if output_bias is not None:\n output_bias = tf.keras.initializers.Constant(output_bias)\n model = keras.Sequential([\n keras.layers.Dense(\n 16, activation='relu',\n input_shape=(train_features.shape[-1],)),\n keras.layers.Dropout(0.5),\n keras.layers.Dense(1, activation='sigmoid',\n bias_initializer=output_bias),\n ])\n\n model.compile(\n optimizer=keras.optimizers.Adam(lr=1e-3),\n loss=keras.losses.BinaryCrossentropy(),\n metrics=metrics)\n\n return model", "_____no_output_____" ] ], [ [ "### Understanding useful metrics\n\nNotice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.\n\n\n\n* 
**False** negatives and **false** positives are samples that were **incorrectly** classified\n* **True** negatives and **true** positives are samples that were **correctly** classified\n* **Accuracy** is the percentage of examples correctly classified\n> $\\frac{\\text{true samples}}{\\text{total samples}}$\n* **Precision** is the percentage of **predicted** positives that were correctly classified\n> $\\frac{\\text{true positives}}{\\text{true positives + false positives}}$\n* **Recall** is the percentage of **actual** positives that were correctly classified\n> $\\frac{\\text{true positives}}{\\text{true positives + false negatives}}$\n* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than than a random negative sample.\n\nNote: Accuracy is not a helpful metric for this task. You can 99.8%+ accuracy on this task by predicting False all the time. \n\nRead more:\n* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)\n* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)\n* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)\n* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)", "_____no_output_____" ], [ "## Baseline model", "_____no_output_____" ], [ "### Build the model\n\nNow create and train your model using the function that was defined earlier. Notice that the model is fit using a larger than default batch size of 2048, this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size was too small, they would likely have no fraudulent transactions to learn from.\n\n\nNote: this model will not handle the class imbalance well. You will improve it later in this tutorial.", "_____no_output_____" ] ], [ [ "EPOCHS = 100\nBATCH_SIZE = 2048\n\nearly_stopping = tf.keras.callbacks.EarlyStopping(\n monitor='val_auc', \n verbose=1,\n patience=10,\n mode='max',\n restore_best_weights=True)", "_____no_output_____" ], [ "model = make_model()\nmodel.summary()", "Model: \"sequential_8\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_16 (Dense) (None, 16) 480 \n_________________________________________________________________\ndropout_8 (Dropout) (None, 16) 0 \n_________________________________________________________________\ndense_17 (Dense) (None, 1) 17 \n=================================================================\nTotal params: 497\nTrainable params: 497\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "Test run the model:", "_____no_output_____" ] ], [ [ "model.predict(train_features[:10])", "_____no_output_____" ] ], [ [ "### Optional: Set the correct initial bias.", "_____no_output_____" ], [ "These are initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: \"init well\"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). 
This can help with initial convergence.", "_____no_output_____" ], [ "With the default bias initialization the loss should be about `math.log(2) = 0.69314` ", "_____no_output_____" ] ], [ [ "results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)\nprint(\"Loss: {:0.4f}\".format(results[0]))", "Loss: 1.7441\n" ] ], [ [ "The correct bias to set can be derived from:\n\n$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$\n$$ b_0 = -log_e(1/p_0 - 1) $$\n$$ b_0 = log_e(pos/neg)$$", "_____no_output_____" ] ], [ [ "initial_bias = np.log([pos/neg])\ninitial_bias", "_____no_output_____" ] ], [ [ "Set that as the initial bias, and the model will give much more reasonable initial guesses. \n\nIt should be near: `pos/total = 0.0018`", "_____no_output_____" ] ], [ [ "model = make_model(output_bias = initial_bias)\nmodel.predict(train_features[:10])", "_____no_output_____" ] ], [ [ "With this initialization the initial loss should be approximately:\n\n$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$", "_____no_output_____" ] ], [ [ "results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)\nprint(\"Loss: {:0.4f}\".format(results[0]))", "Loss: 0.0275\n" ] ], [ [ "This initial loss is about 50 times less than if would have been with naive initilization.\n\nThis way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.", "_____no_output_____" ], [ "### Checkpoint the initial weights\n\nTo make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.", "_____no_output_____" ] ], [ [ "initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')\nmodel.save_weights(initial_weights)", "_____no_output_____" ] ], [ [ "### Confirm that the bias fix helps\n\nBefore moving on, confirm quick that the careful bias initialization actually helped.\n\nTrain the model for 20 epochs, with and without this careful initialization, and compare the losses: ", "_____no_output_____" ] ], [ [ "model = make_model()\nmodel.load_weights(initial_weights)\nmodel.layers[-1].bias.assign([0.0])\nzero_bias_history = model.fit(\n train_features,\n train_labels,\n batch_size=BATCH_SIZE,\n epochs=20,\n validation_data=(val_features, val_labels), \n verbose=0)", "_____no_output_____" ], [ "model = make_model()\nmodel.load_weights(initial_weights)\ncareful_bias_history = model.fit(\n train_features,\n train_labels,\n batch_size=BATCH_SIZE,\n epochs=20,\n validation_data=(val_features, val_labels), \n verbose=0)", "_____no_output_____" ], [ "def plot_loss(history, label, n):\n # Use a log scale to show the wide range of values.\n plt.semilogy(history.epoch, history.history['loss'],\n color=colors[n], label='Train '+label)\n plt.semilogy(history.epoch, history.history['val_loss'],\n color=colors[n], label='Val '+label,\n linestyle=\"--\")\n plt.xlabel('Epoch')\n plt.ylabel('Loss')\n \n plt.legend()", "_____no_output_____" ], [ "plot_loss(zero_bias_history, \"Zero Bias\", 0)\nplot_loss(careful_bias_history, \"Careful Bias\", 1)", "_____no_output_____" ] ], [ [ "The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. 
", "_____no_output_____" ], [ "### Train the model", "_____no_output_____" ] ], [ [ "model = make_model()\nmodel.load_weights(initial_weights)\nbaseline_history = model.fit(\n train_features,\n train_labels,\n batch_size=BATCH_SIZE,\n epochs=EPOCHS,\n callbacks = [early_stopping],\n validation_data=(val_features, val_labels))", "Train on 182276 samples, validate on 45569 samples\nEpoch 1/100\n182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377\nEpoch 2/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422\nEpoch 3/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382\nEpoch 4/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387\nEpoch 5/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390\nEpoch 6/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391\nEpoch 7/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392\nEpoch 8/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - 
val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332\nEpoch 9/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332\nEpoch 10/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332\nEpoch 11/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331\nEpoch 12/100\n169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch.\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332\nEpoch 00012: early stopping\n" ] ], [ [ "### Check training history\nIn this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).\n\nAdditionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.", "_____no_output_____" ] ], [ [ "def plot_metrics(history):\n metrics = ['loss', 'auc', 'precision', 'recall']\n for n, metric in enumerate(metrics):\n name = metric.replace(\"_\",\" \").capitalize()\n plt.subplot(2,2,n+1)\n plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')\n plt.plot(history.epoch, history.history['val_'+metric],\n color=colors[0], linestyle=\"--\", label='Val')\n plt.xlabel('Epoch')\n plt.ylabel(name)\n if metric == 'loss':\n plt.ylim([0, plt.ylim()[1]])\n elif metric == 'auc':\n plt.ylim([0.8,1])\n else:\n plt.ylim([0,1])\n\n plt.legend()\n", "_____no_output_____" ], [ "plot_metrics(baseline_history)", "_____no_output_____" ] ], [ [ "Note: That the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.", "_____no_output_____" ], [ "### Evaluate metrics\n\nYou can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/#confusion_matrix) to summarize the actual vs. 
predicted labels where the X axis is the predicted label and the Y axis is the actual label.", "_____no_output_____" ] ], [ [ "train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)\ntest_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)", "_____no_output_____" ], [ "def plot_cm(labels, predictions, p=0.5):\n cm = confusion_matrix(labels, predictions > p)\n plt.figure(figsize=(5,5))\n sns.heatmap(cm, annot=True, fmt=\"d\")\n plt.title('Confusion matrix @{:.2f}'.format(p))\n plt.ylabel('Actual label')\n plt.xlabel('Predicted label')\n\n print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])\n print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])\n print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])\n print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])\n print('Total Fraudulent Transactions: ', np.sum(cm[1]))", "_____no_output_____" ] ], [ [ "Evaluate your model on the test dataset and display the results for the metrics you created above.", "_____no_output_____" ] ], [ [ "baseline_results = model.evaluate(test_features, test_labels,\n batch_size=BATCH_SIZE, verbose=0)\nfor name, value in zip(model.metrics_names, baseline_results):\n print(name, ': ', value)\nprint()\n\nplot_cm(test_labels, test_predictions_baseline)", "loss : 0.005941324691873794\ntp : 55.0\nfp : 12.0\ntn : 56845.0\nfn : 50.0\naccuracy : 0.99891156\nprecision : 0.8208955\nrecall : 0.52380955\nauc : 0.9390888\n\nLegitimate Transactions Detected (True Negatives): 56845\nLegitimate Transactions Incorrectly Detected (False Positives): 12\nFraudulent Transactions Missed (False Negatives): 50\nFraudulent Transactions Detected (True Positives): 55\nTotal Fraudulent Transactions: 105\n" ] ], [ [ "If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.", "_____no_output_____" ], [ "### Plot the ROC\n\nNow plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). 
This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.", "_____no_output_____" ] ], [ [ "def plot_roc(name, labels, predictions, **kwargs):\n fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)\n\n plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)\n plt.xlabel('False positives [%]')\n plt.ylabel('True positives [%]')\n plt.xlim([-0.5,20])\n plt.ylim([80,100.5])\n plt.grid(True)\n ax = plt.gca()\n ax.set_aspect('equal')", "_____no_output_____" ], [ "plot_roc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_roc(\"Test Baseline\", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')\nplt.legend(loc='lower right')", "_____no_output_____" ] ], [ [ "It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.", "_____no_output_____" ], [ "## Class weights", "_____no_output_____" ], [ "### Calculate class weights\n\nThe goal is to identify fradulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to \"pay more attention\" to examples from an under-represented class.", "_____no_output_____" ] ], [ [ "# Scaling by total/2 helps keep the loss to a similar magnitude.\n# The sum of the weights of all examples stays the same.\nweight_for_0 = (1 / neg)*(total)/2.0 \nweight_for_1 = (1 / pos)*(total)/2.0\n\nclass_weight = {0: weight_for_0, 1: weight_for_1}\n\nprint('Weight for class 0: {:.2f}'.format(weight_for_0))\nprint('Weight for class 1: {:.2f}'.format(weight_for_1))", "Weight for class 0: 0.50\nWeight for class 1: 289.44\n" ] ], [ [ "### Train a model with class weights\n\nNow try re-training and evaluating the model with class weights to see how that affects the predictions.\n\nNote: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. 
Also note that because of the weighting, the total losses are not comparable between the two models.", "_____no_output_____" ] ], [ [ "weighted_model = make_model()\nweighted_model.load_weights(initial_weights)\n\nweighted_history = weighted_model.fit(\n train_features,\n train_labels,\n batch_size=BATCH_SIZE,\n epochs=EPOCHS,\n callbacks = [early_stopping],\n validation_data=(val_features, val_labels),\n # The class weights go here\n class_weight=class_weight) ", "WARNING:tensorflow:sample_weight modes were coerced from\n ...\n to \n ['...']\nWARNING:tensorflow:sample_weight modes were coerced from\n ...\n to \n ['...']\nTrain on 182276 samples, validate on 45569 samples\nEpoch 1/100\n182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492\nEpoch 2/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605\nEpoch 3/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669\nEpoch 4/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709\nEpoch 5/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725\nEpoch 6/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728\nEpoch 7/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739\nEpoch 8/100\n182276/182276 
[==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769\nEpoch 9/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750\nEpoch 10/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761\nEpoch 11/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768\nEpoch 12/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771\nEpoch 13/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772\nEpoch 14/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781\nEpoch 15/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781\nEpoch 16/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - 
val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784\nEpoch 17/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785\nEpoch 18/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786\nEpoch 19/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788\nEpoch 20/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797\nEpoch 21/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797\nEpoch 22/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777\nEpoch 23/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779\nEpoch 24/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785\nEpoch 25/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - 
auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786\nEpoch 26/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788\nEpoch 27/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786\nEpoch 28/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785\nEpoch 29/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784\nEpoch 30/100\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785\nEpoch 31/100\n178176/182276 [============================>.] 
- ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch.\n182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788\nEpoch 00031: early stopping\n" ] ], [ [ "### Check training history", "_____no_output_____" ] ], [ [ "plot_metrics(weighted_history)", "_____no_output_____" ] ], [ [ "### Evaluate metrics", "_____no_output_____" ] ], [ [ "train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)\ntest_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)", "_____no_output_____" ], [ "weighted_results = weighted_model.evaluate(test_features, test_labels,\n batch_size=BATCH_SIZE, verbose=0)\nfor name, value in zip(weighted_model.metrics_names, weighted_results):\n print(name, ': ', value)\nprint()\n\nplot_cm(test_labels, test_predictions_weighted)", "loss : 0.06950428275801711\ntp : 94.0\nfp : 905.0\ntn : 55952.0\nfn : 11.0\naccuracy : 0.9839191\nprecision : 0.0940941\nrecall : 0.8952381\nauc : 0.9844724\n\nLegitimate Transactions Detected (True Negatives): 55952\nLegitimate Transactions Incorrectly Detected (False Positives): 905\nFraudulent Transactions Missed (False Negatives): 11\nFraudulent Transactions Detected (True Positives): 94\nTotal Fraudulent Transactions: 105\n" ] ], [ [ "Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). 
Carefully consider the trade offs between these different types of errors for your application.", "_____no_output_____" ], [ "### Plot the ROC", "_____no_output_____" ] ], [ [ "plot_roc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_roc(\"Test Baseline\", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')\n\nplot_roc(\"Train Weighted\", train_labels, train_predictions_weighted, color=colors[1])\nplot_roc(\"Test Weighted\", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')\n\n\nplt.legend(loc='lower right')", "_____no_output_____" ] ], [ [ "## Oversampling", "_____no_output_____" ], [ "### Oversample the minority class\n\nA related approach would be to resample the dataset by oversampling the minority class.", "_____no_output_____" ] ], [ [ "pos_features = train_features[bool_train_labels]\nneg_features = train_features[~bool_train_labels]\n\npos_labels = train_labels[bool_train_labels]\nneg_labels = train_labels[~bool_train_labels]", "_____no_output_____" ] ], [ [ "#### Using NumPy\n\nYou can balance the dataset manually by choosing the right number of random \nindices from the positive examples:", "_____no_output_____" ] ], [ [ "ids = np.arange(len(pos_features))\nchoices = np.random.choice(ids, len(neg_features))\n\nres_pos_features = pos_features[choices]\nres_pos_labels = pos_labels[choices]\n\nres_pos_features.shape", "_____no_output_____" ], [ "resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)\nresampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)\n\norder = np.arange(len(resampled_labels))\nnp.random.shuffle(order)\nresampled_features = resampled_features[order]\nresampled_labels = resampled_labels[order]\n\nresampled_features.shape", "_____no_output_____" ] ], [ [ "#### Using `tf.data`", "_____no_output_____" ], [ "If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.", "_____no_output_____" ] ], [ [ "BUFFER_SIZE = 100000\n\ndef make_ds(features, labels):\n ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()\n ds = ds.shuffle(BUFFER_SIZE).repeat()\n return ds\n\npos_ds = make_ds(pos_features, pos_labels)\nneg_ds = make_ds(neg_features, neg_labels)", "_____no_output_____" ] ], [ [ "Each dataset provides `(feature, label)` pairs:", "_____no_output_____" ] ], [ [ "for features, label in pos_ds.take(1):\n print(\"Features:\\n\", features.numpy())\n print()\n print(\"Label: \", label.numpy())", "Features:\n [-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304\n -5. 2.86676917 -4.9308611 -5. 3.58555137 -5.\n 1.51535494 -5. 0.01049775 -5. -5. -5.\n 2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978\n -0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003]\n\nLabel: 1\n" ] ], [ [ "Merge the two together using `experimental.sample_from_datasets`:", "_____no_output_____" ] ], [ [ "resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])\nresampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)", "_____no_output_____" ], [ "for features, label in resampled_ds.take(1):\n print(label.numpy().mean())", "0.48974609375\n" ] ], [ [ "To use this dataset, you'll need the number of steps per epoch.\n\nThe definition of \"epoch\" in this case is less clear. 
Say it's the number of batches required to see each negative example once:", "_____no_output_____" ] ], [ [ "resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)\nresampled_steps_per_epoch", "_____no_output_____" ] ], [ [ "### Train on the oversampled data\n\nNow try training the model with the resampled data set instead of using class weights to see how these methods compare.\n\nNote: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps. ", "_____no_output_____" ] ], [ [ "resampled_model = make_model()\nresampled_model.load_weights(initial_weights)\n\n# Reset the bias to zero, since this dataset is balanced.\noutput_layer = resampled_model.layers[-1] \noutput_layer.bias.assign([0])\n\nval_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()\nval_ds = val_ds.batch(BATCH_SIZE).prefetch(2) \n\nresampled_history = resampled_model.fit(\n resampled_ds,\n epochs=EPOCHS,\n steps_per_epoch=resampled_steps_per_epoch,\n callbacks = [early_stopping],\n validation_data=val_ds)", "Train for 278.0 steps, validate for 23 steps\nEpoch 1/100\n278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799\nEpoch 2/100\n278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779\nEpoch 3/100\n278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778\nEpoch 4/100\n278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783\nEpoch 5/100\n278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762\nEpoch 6/100\n278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - 
val_recall: 0.9036 - val_auc: 0.9748\nEpoch 7/100\n278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742\nEpoch 8/100\n278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713\nEpoch 9/100\n278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713\nEpoch 10/100\n278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717\nEpoch 11/100\n276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch.\n278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637\nEpoch 00011: early stopping\n" ] ], [ [ "If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.\n\nBut when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight. \n\nThis smoother gradient signal makes it easier to train the model.", "_____no_output_____" ], [ "### Check training history\n\nNote that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data. ", "_____no_output_____" ] ], [ [ "plot_metrics(resampled_history )", "_____no_output_____" ] ], [ [ "### Re-train\n", "_____no_output_____" ], [ "Because training is easier on the balanced data, the above training procedure may overfit quickly. 
\n\nSo break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.", "_____no_output_____" ] ], [ [ "resampled_model = make_model()\nresampled_model.load_weights(initial_weights)\n\n# Reset the bias to zero, since this dataset is balanced.\noutput_layer = resampled_model.layers[-1] \noutput_layer.bias.assign([0])\n\nresampled_history = resampled_model.fit(\n resampled_ds,\n # These are not real epochs\n steps_per_epoch = 20,\n epochs=10*EPOCHS,\n callbacks = [early_stopping],\n validation_data=(val_ds))", "Train for 20 steps, validate for 23 steps\nEpoch 1/1000\n20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425\nEpoch 2/1000\n20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580\nEpoch 3/1000\n20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660\nEpoch 4/1000\n20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713\nEpoch 5/1000\n20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753\nEpoch 6/1000\n20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773\nEpoch 7/1000\n20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787\nEpoch 8/1000\n20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 
1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794\nEpoch 9/1000\n20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799\nEpoch 10/1000\n20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802\nEpoch 11/1000\n20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805\nEpoch 12/1000\n20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804\nEpoch 13/1000\n20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802\nEpoch 14/1000\n20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803\nEpoch 15/1000\n20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797\nEpoch 16/1000\n20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796\nEpoch 17/1000\n20/20 [==============================] - 1s 38ms/step - loss: 
0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794\nEpoch 18/1000\n20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793\nEpoch 19/1000\n20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791\nEpoch 20/1000\n20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788\nEpoch 21/1000\n19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch.\n20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785\nEpoch 00021: early stopping\n" ] ], [ [ "### Re-check training history", "_____no_output_____" ] ], [ [ "plot_metrics(resampled_history)", "_____no_output_____" ] ], [ [ "### Evaluate metrics", "_____no_output_____" ] ], [ [ "train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)\ntest_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)", "_____no_output_____" ], [ "resampled_results = resampled_model.evaluate(test_features, test_labels,\n batch_size=BATCH_SIZE, verbose=0)\nfor name, value in zip(resampled_model.metrics_names, resampled_results):\n print(name, ': ', value)\nprint()\n\nplot_cm(test_labels, test_predictions_resampled)", "loss : 0.3960801533448772\ntp : 99.0\nfp : 5892.0\ntn : 50965.0\nfn : 6.0\naccuracy : 0.8964573\nprecision : 0.016524788\nrecall : 0.94285715\nauc : 0.9804354\n\nLegitimate Transactions Detected (True Negatives): 50965\nLegitimate Transactions Incorrectly Detected (False Positives): 5892\nFraudulent Transactions Missed (False Negatives): 6\nFraudulent Transactions Detected (True Positives): 99\nTotal Fraudulent Transactions: 105\n" ] ], [ [ "### Plot the ROC", "_____no_output_____" ] ], [ [ "plot_roc(\"Train Baseline\", train_labels, train_predictions_baseline, color=colors[0])\nplot_roc(\"Test Baseline\", test_labels, 
test_predictions_baseline, color=colors[0], linestyle='--')\n\nplot_roc(\"Train Weighted\", train_labels, train_predictions_weighted, color=colors[1])\nplot_roc(\"Test Weighted\", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')\n\nplot_roc(\"Train Resampled\", train_labels, train_predictions_resampled, color=colors[2])\nplot_roc(\"Test Resampled\", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')\nplt.legend(loc='lower right')", "_____no_output_____" ] ], [ [ "## Applying this tutorial to your problem\n\nImbalanced data classification is an inherantly difficult task since there are so few samples to learn from. You should always start with the data first and do your best to collect as many samples as possible and give substantial thought to what features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of your problem and the trade offs between different types of errors.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d4dca03d68b00b72ae7a5dd23c37a6f0ad7262
382,676
ipynb
Jupyter Notebook
02_body/chapter3/images/simulation_confined_Brownian_motion/.ipynb_checkpoints/maximal_tau-checkpoint.ipynb
eXpensia/Confined-Brownian-Motion
bd0eb6dea929727ea081dae060a7d1aa32efafd1
[ "MIT" ]
null
null
null
02_body/chapter3/images/simulation_confined_Brownian_motion/.ipynb_checkpoints/maximal_tau-checkpoint.ipynb
eXpensia/Confined-Brownian-Motion
bd0eb6dea929727ea081dae060a7d1aa32efafd1
[ "MIT" ]
null
null
null
02_body/chapter3/images/simulation_confined_Brownian_motion/.ipynb_checkpoints/maximal_tau-checkpoint.ipynb
eXpensia/Confined-Brownian-Motion
bd0eb6dea929727ea081dae060a7d1aa32efafd1
[ "MIT" ]
null
null
null
821.193133
134,364
0.952072
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\n### Just some matplotlib tweaks\nimport matplotlib as mpl\n\nmpl.rcParams[\"xtick.direction\"] = \"in\"\nmpl.rcParams[\"ytick.direction\"] = \"in\"\nmpl.rcParams[\"lines.markeredgecolor\"] = \"k\"\nmpl.rcParams[\"lines.markeredgewidth\"] = 1.5\nmpl.rcParams[\"figure.dpi\"] = 200\nfrom matplotlib import rc\nrc('font', family='serif')\nrc('text', usetex=True)\nrc('xtick', labelsize='medium')\nrc('ytick', labelsize='medium')\nrc(\"axes\", labelsize = \"large\")\ndef cm2inch(value):\n return value/2.54", "_____no_output_____" ], [ "z = np.linspace(10e-9, 5e-6, 10000)\na = 1.5e-6\nD0 = 4e-21 / (6*np.pi * 0.001 * a)\n#taking alpha = 1\nv_noise = 2*D0 * a * (2*a**2 + 12 *a * z + 21 * z** 2) / (2*a**2 + 9*a*z + z**2)**2\n", "_____no_output_____" ], [ "plt.figure(figsize=( cm2inch(16),cm2inch(8)))\nplt.plot(z*1e6, v_noise*1e6)", "_____no_output_____" ], [ "def eta_z(z):\n return 0.001 * (6*z**2 + 9 * a * z + 2 * a**2)/(6*z**2 + 2*a*z)\n\ndef gamma(z):\n return 6 * np.pi * eta_z(z) * a\n\nlb = 500e-9\nld = 50e-9\nB = 4 #kt unit\ndef F_z(z):\n return - 4e-21 * (-1/ld * B *np.exp(-z/ld) + 1/lb)", "_____no_output_____" ], [ "v_deterministic = 1/gamma(z) * F_z(z)", "_____no_output_____" ], [ "plt.figure(figsize=( cm2inch(16),cm2inch(8)))\nplt.plot(z*1e6, v_deterministic*1e6, label = \"$v_\\mathrm{d}$\")\nplt.plot(z*1e6, v_noise*1e6, label=\"$v_\\mathrm{noise}$\")\nplt.plot(z*1e6, v_noise*1e6 + v_deterministic*1e6, color = \"black\", label = \"$\\\\bar{v}_\\mathrm{d}$\")\nplt.ylabel(\"$v$ ($\\\\mathrm{\\\\mu m.s^{-1}}$)\")\nplt.xlabel(\"$z$ ($\\\\mathrm{\\\\mu m}$)\")\nplt.legend(frameon=False)\nplt.tight_layout()", "_____no_output_____" ], [ "vtot = v_noise*1e6 + v_deterministic*1e6", "_____no_output_____" ], [ "vtot_gradient =np.abs(1/vtot* np.gradient(vtot, np.mean(np.diff(z))))", "_____no_output_____" ], [ "np.mean(np.diff(z))", "_____no_output_____" ], [ "D_z = 4e-21 / gamma(z)\nD_z_gradient = np.abs( 1/ D_z * np.gradient(D_z, np.mean(np.diff(z))))", "_____no_output_____" ], [ "plt.semilogx(z*1e6, vtot_gradient, label = \"Drifts gradient\")\nplt.semilogx(z*1e6, D_z_gradient, label = \"Diffusion gradient\")\nplt.ylabel(\"gradients\")\nplt.xlabel(\"$z$ ($\\mu$m)\")\nplt.legend()", "_____no_output_____" ], [ "D0 = 4e-21 / (6 * np.pi * 0.001 * 1.5e-6)\na = 1.5e-6\ndef tau_max(ld, B, z):\n lb = 500e-9\n return a / (2*D0) * np.power((1/(B/ld + 1/lb) + z),2) / z\n\ndef min_tau_max(ld, B):\n lb = 500e-9\n return 2* a / D0 / (B/ld - 1/lb)\n", "_____no_output_____" ], [ "lb = 500e-9\nB = 10\nz = np.linspace(1e-9, 100e-9, 10000)\n\nfor i in [20e-9, 30e-9, 75e-9, 100e-9]:\n plt.semilogx(z*1e6, tau_max(i, B, z), label= \"$\\ell_\\mathrm{D} =$ \" + str(np.round(i*1e9))[:-2] + \" nm\")\n plt.plot(1/(B/i - 1/lb)* 1e6, min_tau_max(i, B), \"o\", markersize = 2, color = \"b\")\n\nlds = np.linspace(20e-9, 300e-9)\nplt.plot(1/(B/lds - 1/lb)* 1e6, min_tau_max(lds, B), color = \"black\")\n\nplt.ylabel(\"$\\\\tau_\\mathrm{max}$\")\nplt.xlabel(\"$z$ ($\\mu$m)\")\n\n\nplt.legend(frameon = False)\n\n", "_____no_output_____" ], [ "min_tau_max(i, B)", "_____no_output_____" ], [ "fig = plt.figure(figsize = (cm2inch(16),cm2inch(9)))\ngs = fig.add_gridspec(1, 2)\n\nfig.add_subplot(gs[0, 0])\n\nplt.loglog(z*1e6, vtot_gradient, label = \"$\\\\frac{1}{\\\\bar{v}_\\\\mathrm{d}} \\\\frac{\\\\partial \\\\bar{v}_\\\\mathrm{d} }{\\partial z}$\")\nplt.plot(z*1e6, D_z_gradient, label = \"$\\\\frac{1}{D_\\\\bot} \\\\frac{\\\\partial D_\\\\bot }{\\partial 
z}$\")\nplt.ylabel(\"relative variations (m$^{-1}$)\")\nplt.xlabel(\"$z$ ($\\mu$m)\")\nplt.legend(frameon = False)\n\n#plt.text(0.05, -1e6, \"a)\", fontsize=20)\n\nfig.add_subplot(gs[0, 1])\n\nld = 500e-9\nB = 10\nz = np.linspace(1e-9, 100e-9, 10000)\n\nfor i in [20e-9, 30e-9, 75e-9, 100e-9]:\n plt.semilogx(z*1e6, tau_max(i, B, z), label= \"$\\ell_\\mathrm{D} =$ \" + str(np.round(i*1e9))[:-2] + \" nm\")\n plt.plot(1/(B/i - 1/lb)* 1e6, min_tau_max(i, B), \"o\", markersize = 2, color = \"b\")\n\nlds = np.linspace(20e-9, 100e-9)\nplt.plot(1/(B/lds- 1/lb)* 1e6, min_tau_max(lds, B), color = \"black\")\n\nplt.ylabel(\"$\\\\tau_\\mathrm{max}$\")\nplt.xlabel(\"$z$ ($\\mu$m)\")\n\nplt.text(0.05, 0.06, \"b)\", fontsize=20)\n\nplt.legend(frameon = False)\n\nplt.tight_layout()\n\nplt.savefig(\"maximal_tau.pdf\")\n", "_____no_output_____" ], [ "min_tau_max(20e-9, B)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d4df56fd88cd5a73e53c71e010c0a1a10d981d
31,495
ipynb
Jupyter Notebook
Analyse the Federal Aviation Authority Dataset using Pandas/WORK DONE/Pandas - Assignment 01.ipynb
NeoWist/aiengineer-simplylearn-projects
6b0c2413c4882e8c711918b4541b6de1a5237f2e
[ "MIT" ]
null
null
null
Analyse the Federal Aviation Authority Dataset using Pandas/WORK DONE/Pandas - Assignment 01.ipynb
NeoWist/aiengineer-simplylearn-projects
6b0c2413c4882e8c711918b4541b6de1a5237f2e
[ "MIT" ]
null
null
null
Analyse the Federal Aviation Authority Dataset using Pandas/WORK DONE/Pandas - Assignment 01.ipynb
NeoWist/aiengineer-simplylearn-projects
6b0c2413c4882e8c711918b4541b6de1a5237f2e
[ "MIT" ]
null
null
null
32.435633
224
0.42067
[ [ [ "<img src=\"http://cfs22.simplicdn.net/ice9/new_logo.svgz \"/>\n\n# Assignment 01: Evaluate the FAA Dataset\n\n*The comments/sections provided are your cues to perform the assignment. You don't need to limit yourself to the number of rows/cells provided. You can add additional rows in each section to add more lines of code.*\n\n*If at any point in time you need help on solving this assignment, view our demo video to understand the different steps of the code.*\n\n**Happy coding!**\n\n* * *", "_____no_output_____" ], [ "#### 1: VIew and import the dataset", "_____no_output_____" ] ], [ [ "#Import necessary libraries\nimport pandas as pd", "_____no_output_____" ], [ "#Import the FAA (Federal Aviation Authority) dataset\ndf_faa_dataset = pd.read_csv(\"D:/COURSES/Artificial Intellegence Engineer/Data Analytics With Python/Analyse the Federal Aviation Authority Dataset using Pandas/WORK DONE/faa_ai_prelim.csv\")", "_____no_output_____" ] ], [ [ "#### 2: View and understand the dataset", "_____no_output_____" ] ], [ [ "#View the dataset shape\ndf_faa_dataset.shape", "_____no_output_____" ], [ "#View the first five observations\ndf_faa_dataset.head()", "_____no_output_____" ], [ "#View all the columns present in the dataset\ndf_faa_dataset.columns", "_____no_output_____" ] ], [ [ "#### 3: Extract the following attributes from the dataset:\n1. Aircraft make name\n2. State name\n3. Aircraft model name\n4. Text information\n5. Flight phase\n6. Event description type\n7. Fatal flag", "_____no_output_____" ] ], [ [ "#Create a new dataframe with only the required columns\ndf_analyze_dataset = df_faa_dataset[['LOC_STATE_NAME', 'RMK_TEXT', 'EVENT_TYPE_DESC', 'ACFT_MAKE_NAME',\n 'ACFT_MODEL_NAME', 'FLT_PHASE', 'FATAL_FLAG']]", "_____no_output_____" ], [ "#View the type of the object\ntype(df_analyze_dataset)", "_____no_output_____" ], [ "#Check if the dataframe contains all the required attributes\ndf_analyze_dataset.head()", "_____no_output_____" ] ], [ [ "#### 4. Clean the dataset and replace the fatal flag NaN with “No”", "_____no_output_____" ] ], [ [ "#Replace all Fatal Flag missing values with the required output\ndf_analyze_dataset['FATAL_FLAG'].fillna(value=\"No\",inplace=True)", "C:\\Users\\amalp\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pandas\\core\\generic.py:6392: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return self._update_inplace(result)\n" ], [ "#Verify if the missing values are replaced\ndf_analyze_dataset.head()", "_____no_output_____" ], [ "#Check the number of observations\ndf_analyze_dataset.shape", "_____no_output_____" ] ], [ [ "#### 5. Remove all the observations where aircraft names are not available", "_____no_output_____" ] ], [ [ "#Drop the unwanted values/observations from the dataset\ndf_final_dataset = df_analyze_dataset.dropna(subset=['ACFT_MAKE_NAME'])", "_____no_output_____" ] ], [ [ "#### 6. 
Find the aircraft types and their occurrences in the dataset", "_____no_output_____" ] ], [ [ "#Check the number of observations now to compare it with the original dataset and see how many values have been dropped\ndf_final_dataset.shape", "_____no_output_____" ], [ "#Group the dataset by aircraft name\naircraftType = df_final_dataset.groupby('ACFT_MAKE_NAME')", "_____no_output_____" ], [ "#View the number of times each aircraft type appears in the dataset (Hint: use the size() method)\naircraftType.size()", "_____no_output_____" ] ], [ [ "#### 7: Display the observations where fatal flag is “Yes”", "_____no_output_____" ] ], [ [ "#Group the dataset by fatal flag\nfatalAccedents = df_final_dataset.groupby('FATAL_FLAG')", "_____no_output_____" ], [ "#View the total number of fatal and non-fatal accidents\nfatalAccedents.size()", "_____no_output_____" ], [ "#Create a new dataframe to view only the fatal accidents (Fatal Flag values = Yes)\naccidents_with_fatality = fatalAccedents.get_group('Yes')", "_____no_output_____" ], [ "accidents_with_fatality.head()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7d4eb063338f6a8d28c9d0b3771eae0e22d208e
89,953
ipynb
Jupyter Notebook
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
c9d86b27b0185cc82624b01ed76653dbc12554a3
[ "MIT" ]
null
null
null
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
c9d86b27b0185cc82624b01ed76653dbc12554a3
[ "MIT" ]
null
null
null
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
c9d86b27b0185cc82624b01ed76653dbc12554a3
[ "MIT" ]
null
null
null
43.858118
16,156
0.736207
[ [ [ "<center>\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png\" width=\"300\" alt=\"cognitiveclass.ai logo\" />\n</center>\n\n# 1D Numpy in Python\n\nEstimated time needed: **30** minutes\n\n## Objectives\n\nAfter completing this lab you will be able to:\n\n- Import and use `numpy` library\n- Perform operations with `numpy`\n", "_____no_output_____" ], [ "<h2>Table of Contents</h2>\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <ul>\n <li><a href=\"pre\">Preparation</a></li>\n <li>\n <a href=\"numpy\">What is Numpy?</a>\n <ul>\n <li><a href=\"type\">Type</a></li>\n <li><a href=\"val\">Assign Value</a></li>\n <li><a href=\"slice\">Slicing</a></li>\n <li><a href=\"list\">Assign Value with List</a></li>\n <li><a href=\"other\">Other Attributes</a></li>\n </ul>\n </li>\n <li>\n <a href=\"op\">Numpy Array Operations</a>\n <ul>\n <li><a href=\"add\">Array Addition</a></li>\n <li><a href=\"multi\">Array Multiplication</a></li>\n <li><a href=\"prod\">Product of Two Numpy Arrays</a></li>\n <li><a href=\"dot\">Dot Product</a></li>\n <li><a href=\"cons\">Adding Constant to a Numpy Array</a></li>\n </ul>\n </li>\n <li><a href=\"math\">Mathematical Functions</a></li>\n <li><a href=\"lin\">Linspace</a></li>\n </ul>\n\n</div>\n\n<hr>\n", "_____no_output_____" ], [ "<h2 id=\"pre\">Preparation</h2>\n", "_____no_output_____" ] ], [ [ "# Import the libraries\n\nimport time \nimport sys\nimport numpy as np \n\nimport matplotlib.pyplot as plt\n%matplotlib inline ", "_____no_output_____" ], [ "# Plotting functions\n\ndef Plotvec1(u, z, v):\n \n ax = plt.axes()\n ax.arrow(0, 0, *u, head_width=0.05, color='r', head_length=0.1)\n plt.text(*(u + 0.1), 'u')\n \n ax.arrow(0, 0, *v, head_width=0.05, color='b', head_length=0.1)\n plt.text(*(v + 0.1), 'v')\n ax.arrow(0, 0, *z, head_width=0.05, head_length=0.1)\n plt.text(*(z + 0.1), 'z')\n plt.ylim(-2, 2)\n plt.xlim(-2, 2)\n\ndef Plotvec2(a,b):\n ax = plt.axes()\n ax.arrow(0, 0, *a, head_width=0.05, color ='r', head_length=0.1)\n plt.text(*(a + 0.1), 'a')\n ax.arrow(0, 0, *b, head_width=0.05, color ='b', head_length=0.1)\n plt.text(*(b + 0.1), 'b')\n plt.ylim(-2, 2)\n plt.xlim(-2, 2)", "_____no_output_____" ] ], [ [ "Create a Python List as follows:\n", "_____no_output_____" ] ], [ [ "# Create a python list\n\na = [\"0\", 1, \"two\", \"3\", 4]", "_____no_output_____" ] ], [ [ "We can access the data via an index:\n", "_____no_output_____" ], [ "<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumOneList.png\" width=\"660\" />\n", "_____no_output_____" ], [ "We can access each element using a square bracket as follows: \n", "_____no_output_____" ] ], [ [ "# Print each element\n\nprint(\"a[0]:\", a[0])\nprint(\"a[1]:\", a[1])\nprint(\"a[2]:\", a[2])\nprint(\"a[3]:\", a[3])\nprint(\"a[4]:\", a[4])", "a[0]: 0\na[1]: 1\na[2]: two\na[3]: 3\na[4]: 4\n" ] ], [ [ "<hr>\n", "_____no_output_____" ], [ "<h2 id=\"numpy\">What is Numpy?</h2>\n", "_____no_output_____" ], [ "A numpy array is similar to a list. It's usually fixed in size and each element is of the same type. 
We can cast a list to a numpy array by first importing numpy: \n", "_____no_output_____" ] ], [ [ "# import numpy library\n\nimport numpy as np ", "_____no_output_____" ] ], [ [ " We then cast the list as follows:\n", "_____no_output_____" ] ], [ [ "# Create a numpy array\n\na = np.array([0, 1, 2, 3, 4])\na", "_____no_output_____" ] ], [ [ "Each element is of the same type, in this case integers: \n", "_____no_output_____" ], [ "<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumOneNp.png\" width=\"500\" />\n", "_____no_output_____" ], [ " As with lists, we can access each element via a square bracket:\n", "_____no_output_____" ] ], [ [ "# Print each element\n\nprint(\"a[0]:\", a[0])\nprint(\"a[1]:\", a[1])\nprint(\"a[2]:\", a[2])\nprint(\"a[3]:\", a[3])\nprint(\"a[4]:\", a[4])", "a[0]: 0\na[1]: 1\na[2]: 2\na[3]: 3\na[4]: 4\n" ] ], [ [ "<h3 id=\"type\">Type</h3>\n", "_____no_output_____" ], [ "If we check the type of the array we get <b>numpy.ndarray</b>:\n", "_____no_output_____" ] ], [ [ "# Check the type of the array\n\ntype(a)", "_____no_output_____" ] ], [ [ "As numpy arrays contain data of the same type, we can use the attribute \"dtype\" to obtain the Data-type of the array’s elements. In this case a 64-bit integer: \n", "_____no_output_____" ] ], [ [ "# Check the type of the values stored in numpy array\n\na.dtype", "_____no_output_____" ] ], [ [ "We can create a numpy array with real numbers:\n", "_____no_output_____" ] ], [ [ "# Create a numpy array\n\nb = np.array([3.1, 11.02, 6.2, 213.2, 5.2])", "_____no_output_____" ] ], [ [ "When we check the type of the array we get <b>numpy.ndarray</b>:\n", "_____no_output_____" ] ], [ [ "# Check the type of array\n\ntype(b)", "_____no_output_____" ] ], [ [ "If we examine the attribute <code>dtype</code> we see float 64, as the elements are not integers: \n", "_____no_output_____" ] ], [ [ "# Check the value type\n\nb.dtype", "_____no_output_____" ] ], [ [ "<h3 id=\"val\">Assign value</h3>\n", "_____no_output_____" ], [ "We can change the value of the array, consider the array <code>c</code>:\n", "_____no_output_____" ] ], [ [ "# Create numpy array\n\nc = np.array([20, 1, 2, 3, 4])\nc", "_____no_output_____" ] ], [ [ "We can change the first element of the array to 100 as follows:\n", "_____no_output_____" ] ], [ [ "# Assign the first element to 100\n\nc[0] = 100\nc", "_____no_output_____" ] ], [ [ "We can change the 5th element of the array to 0 as follows:\n", "_____no_output_____" ] ], [ [ "# Assign the 5th element to 0\n\nc[4] = 0\nc", "_____no_output_____" ] ], [ [ "<h3 id=\"slice\">Slicing</h3>\n", "_____no_output_____" ], [ "Like lists, we can slice the numpy array, and we can select the elements from 1 to 3 and assign it to a new numpy array <code>d</code> as follows:\n", "_____no_output_____" ] ], [ [ "# Slicing the numpy array\n\nd = c[1:4]\nd", "_____no_output_____" ] ], [ [ "We can assign the corresponding indexes to new values as follows: \n", "_____no_output_____" ] ], [ [ "# Set the fourth element and fifth element to 300 and 400\n\nc[3:5] = 300, 400\nc", "_____no_output_____" ] ], [ [ "<h3 id=\"list\">Assign Value with List</h3>\n", "_____no_output_____" ], [ "Similarly, we can use a list to select a specific index.\nThe list ' select ' contains several values:\n", "_____no_output_____" ] ], [ [ "# Create the index list\n\nselect = [0, 2, 3]", "_____no_output_____" ] ], [ [ "We can use the list as an argument in the brackets. 
The output is the elements corresponding to the particular index:\n", "_____no_output_____" ] ], [ [ "# Use List to select elements\n\nd = c[select]\nd", "_____no_output_____" ] ], [ [ "We can assign the specified elements to a new value. For example, we can assign the values to 100 000 as follows:\n", "_____no_output_____" ] ], [ [ "# Assign the specified elements to new value\n\nc[select] = 100000\nc", "_____no_output_____" ] ], [ [ "<h3 id=\"other\">Other Attributes</h3>\n", "_____no_output_____" ], [ "Let's review some basic array attributes using the array <code>a</code>:\n", "_____no_output_____" ] ], [ [ "# Create a numpy array\n\na = np.array([0, 1, 2, 3, 4])\na", "_____no_output_____" ] ], [ [ "The attribute <code>size</code> is the number of elements in the array:\n", "_____no_output_____" ] ], [ [ "# Get the size of numpy array\n\na.size", "_____no_output_____" ] ], [ [ "The next two attributes will make more sense when we get to higher dimensions but let's review them. The attribute <code>ndim</code> represents the number of array dimensions or the rank of the array, in this case, one:\n", "_____no_output_____" ] ], [ [ "# Get the number of dimensions of numpy array\n\na.ndim", "_____no_output_____" ] ], [ [ "The attribute <code>shape</code> is a tuple of integers indicating the size of the array in each dimension:\n", "_____no_output_____" ] ], [ [ "# Get the shape/size of numpy array\n\na.shape", "_____no_output_____" ], [ "# Create a numpy array\n\na = np.array([1, -1, 1, -1])", "_____no_output_____" ], [ "# Get the mean of numpy array\n\nmean = a.mean()\nmean", "_____no_output_____" ], [ "# Get the standard deviation of numpy array\n\nstandard_deviation=a.std()\nstandard_deviation", "_____no_output_____" ], [ "# Create a numpy array\n\nb = np.array([-1, 2, 3, 4, 5])\nb", "_____no_output_____" ], [ "# Get the biggest value in the numpy array\n\nmax_b = b.max()\nmax_b", "_____no_output_____" ], [ "# Get the smallest value in the numpy array\n\nmin_b = b.min()\nmin_b", "_____no_output_____" ] ], [ [ "<hr>\n", "_____no_output_____" ], [ "<h2 id=\"op\">Numpy Array Operations</h2>\n", "_____no_output_____" ], [ "<h3 id=\"add\">Array Addition</h3>\n", "_____no_output_____" ], [ "Consider the numpy array <code>u</code>:\n", "_____no_output_____" ] ], [ [ "u = np.array([1, 0])\nu", "_____no_output_____" ] ], [ [ "Consider the numpy array <code>v</code>:\n", "_____no_output_____" ] ], [ [ "v = np.array([0, 1])\nv", "_____no_output_____" ] ], [ [ "We can add the two arrays and assign it to z:\n", "_____no_output_____" ] ], [ [ "# Numpy Array Addition\n\nz = u + v\nz", "_____no_output_____" ] ], [ [ " The operation is equivalent to vector addition:\n", "_____no_output_____" ] ], [ [ "# Plot numpy arrays\n\nPlotvec1(u, z, v)", "_____no_output_____" ] ], [ [ "<h3 id=\"multi\">Array Multiplication</h3>\n", "_____no_output_____" ], [ "Consider the vector numpy array <code>y</code>:\n", "_____no_output_____" ] ], [ [ "# Create a numpy array\n\ny = np.array([1, 2])\ny", "_____no_output_____" ] ], [ [ "We can multiply every element in the array by 2:\n", "_____no_output_____" ] ], [ [ "# Numpy Array Multiplication\n\nz = 2 * y\nz", "_____no_output_____" ] ], [ [ " This is equivalent to multiplying a vector by a scaler: \n", "_____no_output_____" ], [ "<h3 id=\"prod\">Product of Two Numpy Arrays</h3>\n", "_____no_output_____" ], [ "Consider the following array <code>u</code>:\n", "_____no_output_____" ] ], [ [ "# Create a numpy array\n\nu = np.array([1, 2])\nu", "_____no_output_____" ] ], [ [ 
"Consider the following array <code>v</code>:\n", "_____no_output_____" ] ], [ [ "# Create a numpy array\n\nv = np.array([3, 2])\nv", "_____no_output_____" ] ], [ [ " The product of the two numpy arrays <code>u</code> and <code>v</code> is given by:\n", "_____no_output_____" ] ], [ [ "# Calculate the production of two numpy arrays\n\nz = u * v\nz", "_____no_output_____" ] ], [ [ "<h3 id=\"dot\">Dot Product</h3>\n", "_____no_output_____" ], [ "The dot product of the two numpy arrays <code>u</code> and <code>v</code> is given by:\n", "_____no_output_____" ] ], [ [ "# Calculate the dot product\n\nnp.dot(u, v)", "_____no_output_____" ] ], [ [ "<h3 id=\"cons\">Adding Constant to a Numpy Array</h3>\n", "_____no_output_____" ], [ "Consider the following array: \n", "_____no_output_____" ] ], [ [ "# Create a constant to numpy array\n\nu = np.array([1, 2, 3, -1]) \nu", "_____no_output_____" ] ], [ [ "Adding the constant 1 to each element in the array:\n", "_____no_output_____" ] ], [ [ "# Add the constant to array\n\nu + 1", "_____no_output_____" ] ], [ [ " The process is summarised in the following animation:\n", "_____no_output_____" ], [ "<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumOneAdd.gif\" width=\"500\" />\n", "_____no_output_____" ], [ "<hr>\n", "_____no_output_____" ], [ "<h2 id=\"math\">Mathematical Functions</h2>\n", "_____no_output_____" ], [ " We can access the value of <code>pi</code> in numpy as follows :\n", "_____no_output_____" ] ], [ [ "# The value of pi\n\nnp.pi", "_____no_output_____" ] ], [ [ " We can create the following numpy array in Radians:\n", "_____no_output_____" ] ], [ [ "# Create the numpy array in radians\n\nx = np.array([0, np.pi/2 , np.pi])\nx", "_____no_output_____" ] ], [ [ "We can apply the function <code>sin</code> to the array <code>x</code> and assign the values to the array <code>y</code>; this applies the sine function to each element in the array: \n", "_____no_output_____" ] ], [ [ "# Calculate the sin of each elements\n\ny = np.sin(x)\ny", "_____no_output_____" ] ], [ [ "<hr>\n", "_____no_output_____" ], [ "<h2 id=\"lin\">Linspace</h2>\n", "_____no_output_____" ], [ " A useful function for plotting mathematical functions is <code>linspace</code>. Linspace returns evenly spaced numbers over a specified interval. We specify the starting point of the sequence and the ending point of the sequence. 
The parameter \"num\" indicates the Number of samples to generate, in this case 5:\n", "_____no_output_____" ] ], [ [ "# Makeup a numpy array within [-2, 2] and 5 elements\n\nnp.linspace(-2, 2, num=5)", "_____no_output_____" ] ], [ [ "If we change the parameter <code>num</code> to 9, we get 9 evenly spaced numbers over the interval from -2 to 2: \n", "_____no_output_____" ] ], [ [ "# Makeup a numpy array within [-2, 2] and 9 elements\n\nnp.linspace(-2, 2, num=9)", "_____no_output_____" ] ], [ [ "We can use the function <code>linspace</code> to generate 100 evenly spaced samples from the interval 0 to 2π: \n", "_____no_output_____" ] ], [ [ "# Makeup a numpy array within [0, 2π] and 100 elements \n\nx = np.linspace(0, 2*np.pi, num=100)", "_____no_output_____" ] ], [ [ "We can apply the sine function to each element in the array <code>x</code> and assign it to the array <code>y</code>: \n", "_____no_output_____" ] ], [ [ "# Calculate the sine of x list\n\ny = np.sin(x)", "_____no_output_____" ], [ "# Plot the result\n\nplt.plot(x, y)", "_____no_output_____" ] ], [ [ "<hr>\n", "_____no_output_____" ], [ "<h2 id=\"quiz\">Quiz on 1D Numpy Array</h2>\n", "_____no_output_____" ], [ "Implement the following vector subtraction in numpy: u-v\n", "_____no_output_____" ] ], [ [ "# Write your code below and press Shift+Enter to execute\n\nu = np.array([1, 0])\nv = np.array([0, 1])\nu-v", "_____no_output_____" ] ], [ [ "<details><summary>Click here for the solution</summary>\n\n```python\nu - v\n```\n\n</details>\n", "_____no_output_____" ], [ "<hr>\n", "_____no_output_____" ], [ "Multiply the numpy array z with -2:\n", "_____no_output_____" ] ], [ [ "# Write your code below and press Shift+Enter to execute\n\nz = np.array([2, 4])\n-2*z", "_____no_output_____" ] ], [ [ "<details><summary>Click here for the solution</summary>\n\n```python\n-2 * z\n```\n\n</details>\n", "_____no_output_____" ], [ "<hr>\n", "_____no_output_____" ], [ "Consider the list <code>[1, 2, 3, 4, 5]</code> and <code>[1, 0, 1, 0, 1]</code>, and cast both lists to a numpy array then multiply them together:\n", "_____no_output_____" ] ], [ [ "# Write your code below and press Shift+Enter to execute\na = np.array([1, 2, 3, 4, 5])\nb = np.array([1, 0, 1, 0, 1])\na*b", "_____no_output_____" ] ], [ [ "<details><summary>Click here for the solution</summary>\n\n```python\na = np.array([1, 2, 3, 4, 5])\nb = np.array([1, 0, 1, 0, 1])\na * b\n```\n\n</details>\n", "_____no_output_____" ], [ "<hr>\n", "_____no_output_____" ], [ "Convert the list <code>[-1, 1]</code> and <code>[1, 1]</code> to numpy arrays <code>a</code> and <code>b</code>. Then, plot the arrays as vectors using the fuction <code>Plotvec2</code> and find the dot product:\n", "_____no_output_____" ] ], [ [ "# Write your code below and press Shift+Enter to execute\na = np.array([-1, 1])\nb = np.array([1, 1])\nPlotvec2(a, b)\nprint(\"The dot product is\", np.dot(a,b))", "The dot product is 0\n" ] ], [ [ "<details><summary>Click here for the solution</summary>\n\n```python\na = np.array([-1, 1])\nb = np.array([1, 1])\nPlotvec2(a, b)\nprint(\"The dot product is\", np.dot(a,b))\n\n```\n\n</details>\n", "_____no_output_____" ], [ "<hr>\n", "_____no_output_____" ], [ "Convert the list <code>[1, 0]</code> and <code>[0, 1]</code> to numpy arrays <code>a</code> and <code>b</code>. 
Then, plot the arrays as vectors using the function <code>Plotvec2</code> and find the dot product:\n", "_____no_output_____" ] ], [ [ "# Write your code below and press Shift+Enter to execute\na = np.array([1, 0])\nb = np.array([0, 1])\nPlotvec2(a, b)\nprint(\"The dot product is\", np.dot(a,b))", "The dot product is 0\n" ] ], [ [ "<details><summary>Click here for the solution</summary>\n\n```python\na = np.array([1, 0])\nb = np.array([0, 1])\nPlotvec2(a, b)\nprint(\"The dot product is\", np.dot(a, b))\n\n```\n\n</details>\n", "_____no_output_____" ], [ "<hr>\n", "_____no_output_____" ], [ "Convert the list <code>[1, 1]</code> and <code>[0, 1]</code> to numpy arrays <code>a</code> and <code>b</code>. Then plot the arrays as vectors using the fuction <code>Plotvec2</code> and find the dot product:\n", "_____no_output_____" ] ], [ [ "# Write your code below and press Shift+Enter to execute\na = np.array([1, 1])\nb = np.array([0, 1])\nPlotvec2(a, b)\nprint(\"The dot product is\", np.dot(a,b))", "The dot product is 1\n" ] ], [ [ "<details><summary>Click here for the solution</summary>\n\n```python\na = np.array([1, 1])\nb = np.array([0, 1])\nPlotvec2(a, b)\nprint(\"The dot product is\", np.dot(a, b))\nprint(\"The dot product is\", np.dot(a, b))\n\n```\n\n</details>\n \n", "_____no_output_____" ], [ "<hr>\n", "_____no_output_____" ], [ "Why are the results of the dot product for <code>[-1, 1]</code> and <code>[1, 1]</code> and the dot product for <code>[1, 0]</code> and <code>[0, 1]</code> zero, but not zero for the dot product for <code>[1, 1]</code> and <code>[0, 1]</code>? <p><i>Hint: Study the corresponding figures, pay attention to the direction the arrows are pointing to.</i></p>\n", "_____no_output_____" ] ], [ [ "# Write your code below and press Shift+Enter to execute\n", "_____no_output_____" ] ], [ [ "<details><summary>Click here for the solution</summary>\n\n```python\nThe vectors used for question 4 and 5 are perpendicular. As a result, the dot product is zero. \n\n```\n\n</details>\n \n", "_____no_output_____" ], [ "<hr>\n<h2>The last exercise!</h2>\n<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href=\"https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/\" target=\"_blank\">this article</a> to learn how to share your work.\n<hr>\n", "_____no_output_____" ], [ "## Author\n\n<a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\" target=\"_blank\">Joseph Santarcangelo</a>\n\n## Other contributors\n\n<a href=\"www.linkedin.com/in/jiahui-mavis-zhou-a4537814a\">Mavis Zhou</a>\n\n## Change Log\n\n| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n| ----------------- | ------- | ---------- | ---------------------------------- |\n| 2020-08-26 | 2.0 | Lavanya | Moved lab to course repo in GitLab |\n| | | | |\n| | | | |\n\n<hr/>\n\n## <h3 align=\"center\"> © IBM Corporation 2020. All rights reserved. <h3/>\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e7d4ed8eea350961444d75d60c5f2f682f8c9c2e
56,661
ipynb
Jupyter Notebook
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
b67fc1a7f64f4ded85821b4ece779521724d5d55
[ "MIT" ]
null
null
null
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
b67fc1a7f64f4ded85821b4ece779521724d5d55
[ "MIT" ]
null
null
null
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
b67fc1a7f64f4ded85821b4ece779521724d5d55
[ "MIT" ]
null
null
null
86.242009
17,782
0.829671
[ [ [ "# Semi-Monocoque Theory: corrective solutions", "_____no_output_____" ] ], [ [ "from pint import UnitRegistry\nimport sympy\nimport networkx as nx\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sys\n%matplotlib inline\nfrom IPython.display import display", "_____no_output_____" ] ], [ [ "Import **Section** class, which contains all calculations", "_____no_output_____" ] ], [ [ "from Section import Section", "_____no_output_____" ] ], [ [ "Initialization of **sympy** symbolic tool and **pint** for dimension analysis (not really implemented rn as not directly compatible with sympy)", "_____no_output_____" ] ], [ [ "ureg = UnitRegistry()\nsympy.init_printing()", "_____no_output_____" ] ], [ [ "Define **sympy** parameters used for geometric description of sections", "_____no_output_____" ] ], [ [ "A, A0, t, t0, a, b, h, L, E, G = sympy.symbols('A A_0 t t_0 a b h L E G', positive=True)", "_____no_output_____" ] ], [ [ "We also define numerical values for each **symbol** in order to plot scaled section and perform calculations", "_____no_output_____" ] ], [ [ "values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \\\n (b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter), \\\n (t, 0.8 *ureg.millimeter),(E, 72e3 * ureg.MPa), (G, 27e3 * ureg.MPa)]\ndatav = [(v[0],v[1].magnitude) for v in values]", "_____no_output_____" ] ], [ [ "# First example: Simple rectangular symmetric section", "_____no_output_____" ], [ "Define graph describing the section:\n\n1) **stringers** are **nodes** with parameters:\n- **x** coordinate\n- **y** coordinate\n- **Area**\n\n2) **panels** are **oriented edges** with parameters:\n- **thickness**\n- **lenght** which is automatically calculated", "_____no_output_____" ] ], [ [ "stringers = {1:[(2*a,h),A],\n 2:[(a,h),A],\n 3:[(sympy.Integer(0),h),A],\n 4:[(sympy.Integer(0),sympy.Integer(0)),A],\n 5:[(2*a,sympy.Integer(0)),A]}\n #5:[(sympy.Rational(1,2)*a,h),A]}\n\npanels = {(1,2):t,\n (2,3):t,\n (3,4):t,\n (4,5):t,\n (5,1):t}", "_____no_output_____" ] ], [ [ "Define section and perform first calculations", "_____no_output_____" ] ], [ [ "S1 = Section(stringers, panels)", "_____no_output_____" ], [ "S1.cycles", "_____no_output_____" ] ], [ [ "## Plot of **S1** section in original reference frame", "_____no_output_____" ], [ "Define a dictionary of coordinates used by **Networkx** to plot section as a Directed graph.\nNote that arrows are actually just thicker stubs", "_____no_output_____" ] ], [ [ "start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }", "_____no_output_____" ], [ "plt.figure(figsize=(12,8),dpi=300)\nnx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos)\nplt.arrow(0,0,20,0)\nplt.arrow(0,0,0,20)\n#plt.text(0,0, 'CG', fontsize=24)\nplt.axis('equal')\nplt.title(\"Section in starting reference Frame\",fontsize=16);", "_____no_output_____" ] ], [ [ "## Plot of **S1** section in inertial reference Frame", "_____no_output_____" ], [ "Section is plotted wrt **center of gravity** and rotated (if necessary) so that *x* and *y* are principal axes.\n**Center of Gravity** and **Shear Center** are drawn", "_____no_output_____" ] ], [ [ "positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }", "_____no_output_____" ], [ "x_ct, y_ct = S1.ct.subs(datav)\n\nplt.figure(figsize=(12,8),dpi=300)\nnx.draw(S1.g,with_labels=True, 
pos=positions)\nplt.plot([0],[0],'o',ms=12,label='CG')\nplt.plot([x_ct],[y_ct],'^',ms=12, label='SC')\n#plt.text(0,0, 'CG', fontsize=24)\n#plt.text(x_ct,y_ct, 'SC', fontsize=24)\nplt.legend(loc='lower right', shadow=True)\nplt.axis('equal')\nplt.title(\"Section in pricipal reference Frame\",fontsize=16);", "_____no_output_____" ] ], [ [ "Compute **L** matrix: with 5 nodes we expect 2 **dofs**, one with _symmetric load_ and one with _antisymmetric load_", "_____no_output_____" ] ], [ [ "S1.compute_L()", "_____no_output_____" ], [ "S1.L", "_____no_output_____" ] ], [ [ "Compute **H** matrix", "_____no_output_____" ] ], [ [ "S1.compute_H()", "_____no_output_____" ], [ "S1.H", "_____no_output_____" ] ], [ [ "Compute $\\tilde{K}$ and $\\tilde{M}$ as:\n\n$$\\tilde{K} = L^T \\cdot \\left[ \\frac{A}{A_0} \\right] \\cdot L$$\n$$\\tilde{M} = H^T \\cdot \\left[ \\frac{l}{l_0}\\frac{t_0}{t} \\right] \\cdot L$$", "_____no_output_____" ] ], [ [ "S1.compute_KM(A,h,t)", "_____no_output_____" ], [ "S1.Ktilde", "_____no_output_____" ], [ "S1.Mtilde", "_____no_output_____" ] ], [ [ "Compute **eigenvalues** and **eigenvectors** as:\n\n$$\\left| \\mathbf{I} \\cdot \\beta^2 - \\mathbf{\\tilde{K}}^{-1} \\cdot \\mathbf{\\tilde{M}} \\right| = 0$$\n\nWe substitute some numerical values to simplify the expressions", "_____no_output_____" ] ], [ [ "sol_data = (S1.Ktilde.inv()*(S1.Mtilde.subs(datav))).eigenvects()", "_____no_output_____" ] ], [ [ "**Eigenvalues** correspond to $\\beta^2$", "_____no_output_____" ] ], [ [ "β2 = [sol[0] for sol in sol_data]\nβ2", "_____no_output_____" ] ], [ [ "**Eigenvectors** are orthogonal as expected", "_____no_output_____" ] ], [ [ "X = [sol[2][0] for sol in sol_data]\nX", "_____no_output_____" ] ], [ [ "From $\\beta_i^2$ we compute:\n$$\\lambda_i = \\sqrt{\\frac{E A_0 l_0}{G t_0} \\beta_i^2}$$\n\nsubstuting numerical values", "_____no_output_____" ] ], [ [ "λ = [sympy.N(sympy.sqrt(E*A*h/(G*t)*βi).subs(datav)) for βi in β2]\nλ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d4f881dc4d07e59374270debfbcb7b5c679ffe
108,186
ipynb
Jupyter Notebook
notebooks/eda/businesses.ipynb
metinsenturk/semantic-analysis
9dd673ed249b4c0f24b8b9d5eb7349a9fdfd7f4f
[ "MIT" ]
null
null
null
notebooks/eda/businesses.ipynb
metinsenturk/semantic-analysis
9dd673ed249b4c0f24b8b9d5eb7349a9fdfd7f4f
[ "MIT" ]
null
null
null
notebooks/eda/businesses.ipynb
metinsenturk/semantic-analysis
9dd673ed249b4c0f24b8b9d5eb7349a9fdfd7f4f
[ "MIT" ]
1
2019-10-23T16:16:28.000Z
2019-10-23T16:16:28.000Z
84.520313
24,224
0.752482
[ [ [ "import os\nimport json\n\nimport numpy as np\nimport pandas as pd\nfrom pandas.io.json import json_normalize \n\nfile_path = \"../../data/raw/yp_competitors.json\"\nif os.path.exists(file_path): \n dataset = pd.read_json(file_path, orient='columns') \nelse: \n print(\"file not yet created\")\n \ndataset.head()", "file not yet created\n" ], [ "dataset2 = pd.read_csv('../../data/raw/yp_competitors.csv')\ndataset2[dataset2.location_state == 'HI'].alias.to_csv('../../data/raw/hi_competitors.csv', index=False)", "_____no_output_____" ], [ "# restaurants, 'seafood', 'desserts', 'vegetarian', \n# if i['alias'] in ['restaurants', 'seafood', 'desserts', 'vegetarian']\ncateg = dataset.categories.apply(lambda x: [i['alias'] for i in x])\n\ncateg2 = categ.apply(\n lambda x: \n x \n if len([i for i in x if i in ['restaurants', 'seafood', 'desserts', 'vegetarian']]) > 0 \n else []\n)\n\ncateg2[:10]", "_____no_output_____" ], [ "len([i for i in categ2 if len(i) > 0])", "_____no_output_____" ], [ "# fixing columns\n# transactions \ndataset.transactions = dataset.transactions.apply(lambda x: ','.join(x))\n\n# category\npd_categories_alias = dataset.categories.apply(lambda x: ', '.join([i['alias'] for i in x]))\npd_categories_title = dataset.categories.apply(lambda x: ', '.join([i['title'] for i in x]))\npd_categories_alias.name = 'category_' + 'alias'\npd_categories_title.name = 'category_' + 'title'", "_____no_output_____" ], [ "# expanding json columns\n# coordinate\npd_coordinates = dataset.coordinates.apply(pd.Series)\npd_coordinates.columns = 'coordinate_' + pd_coordinates.columns\n\n# location\npd_location = dataset.location.apply(pd.Series)\npd_location.display_address = pd_location.display_address.apply(lambda x: ','.join(x))\npd_location.columns = 'location_' + pd_location.columns", "_____no_output_____" ], [ "# merging all in one\ndataset2 = pd.concat(\n [dataset, pd_categories_alias, pd_categories_title, pd_coordinates, pd_location], \n axis=1\n).drop(['categories', 'location', 'coordinates'], axis=1)\ndataset2.columns\ndataset = dataset2", "_____no_output_____" ], [ "# from JSON to CSV\ndataset2.to_csv('../../data/raw/yp_competitors.csv', index=False)\npd.read_csv('../../data/raw/yp_competitors.csv').head()", "_____no_output_____" ] ], [ [ "## Examining Total Category List", "_____no_output_____" ] ], [ [ "# restaurants categories in the dataset: all\nprint(sorted(set(', '.join(dataset.category_alias).split(', '))))", "['acaibowls', 'accessories', 'active', 'afghani', 'airportlounges', 'amateursportsteams', 'amusementparks', 'antiques', 'aquariums', 'aquariumservices', 'arabian', 'arcades', 'argentine', 'artclasses', 'artmuseums', 'artsandcrafts', 'asianfusion', 'australian', 'bagels', 'bakeries', 'banks', 'bars', 'baseballfields', 'basketballcourts', 'basque', 'bbq', 'beachequipmentrental', 'beaches', 'beer_and_wine', 'beerbar', 'beergardens', 'bike_repair_maintenance', 'bikerentals', 'bikes', 'biketours', 'boatcharters', 'boating', 'boattours', 'bookstores', 'bootcamps', 'bowling', 'brasseries', 'brazilian', 'breakfast_brunch', 'breweries', 'brewpubs', 'british', 'bubbletea', 'buffets', 'burgers', 'burmese', 'butcher', 'cafes', 'cafeteria', 'cajun', 'cakeshop', 'campgrounds', 'candy', 'cantonese', 'caribbean', 'casinos', 'catering', 'cheese', 'cheesesteaks', 'chicken_wings', 'chickenshop', 'childrensmuseums', 'chinese', 'chocolate', 'cideries', 'cigarbars', 'climbing', 'clubcrawl', 'cocktailbars', 'coffee', 'coffeeroasteries', 'collegeuniv', 'colombian', 'comedyclubs', 'comfortfood', 
'convenience', 'conveyorsushi', 'cookingclasses', 'cosmetics', 'countryclubs', 'countrydancehalls', 'couriers', 'creperies', 'cuban', 'cupcakes', 'customcakes', 'czech', 'danceclubs', 'daycamps', 'delis', 'desserts', 'dimsum', 'diners', 'dinnertheater', 'discgolf', 'divebars', 'diving', 'diyfood', 'dog_parks', 'donuts', 'drugstores', 'education', 'educationservices', 'empanadas', 'eventplanning', 'fabricstores', 'falafel', 'farmersmarket', 'festivals', 'filipino', 'fishing', 'fishnchips', 'fitness', 'fleamarkets', 'flowers', 'fondue', 'food', 'food_court', 'fooddeliveryservices', 'foodstands', 'foodtours', 'foodtrucks', 'football', 'french', 'galleries', 'gardens', 'gastropubs', 'gaybars', 'gelato', 'german', 'giftshops', 'gluten_free', 'golf', 'golfequipment', 'golflessons', 'gourmet', 'greek', 'grocery', 'halal', 'hauntedhouses', 'hawaiian', 'headshops', 'healthmarkets', 'healthtrainers', 'herbsandspices', 'hiking', 'himalayan', 'hindu_temples', 'hkcafe', 'hobbyshops', 'homestaging', 'honey', 'hookah_bars', 'horsebackriding', 'hostels', 'hotdog', 'hotdogs', 'hotels', 'hotpot', 'icecream', 'importedfood', 'indpak', 'internetcafe', 'intlgrocery', 'irish', 'irish_pubs', 'italian', 'izakaya', 'japacurry', 'japanese', 'jazzandblues', 'jewelry', 'juicebars', 'karaoke', 'kebab', 'kids_activities', 'kombucha', 'korean', 'kosher', 'lakes', 'landmarks', 'laotian', 'lasertag', 'latin', 'lebanese', 'localflavor', 'localservices', 'lounges', 'macarons', 'markets', 'massage', 'massage_therapy', 'matchmakers', 'meats', 'medcenters', 'media', 'meditationcenters', 'mediterranean', 'menscloth', 'mexican', 'mideastern', 'modern_european', 'mongolian', 'mountainbiking', 'museums', 'musicvenues', 'newamerican', 'newmexican', 'nightlife', 'nonprofit', 'noodles', 'oliveoil', 'organic_stores', 'outdoorgear', 'outdoormovies', 'paddleboarding', 'paintyourownpottery', 'pakistani', 'panasian', 'parks', 'partysupplies', 'pastashops', 'patiocoverings', 'persian', 'personalchefs', 'peruvian', 'pets', 'pettingzoos', 'pizza', 'playgrounds', 'poke', 'polish', 'polynesian', 'poolhalls', 'poolservice', 'popcorn', 'popuprestaurants', 'portuguese', 'publicart', 'publicservicesgovt', 'pubs', 'puertorican', 'rafting', 'ramen', 'raw_food', 'recreation', 'religiousitems', 'resorts', 'restaurants', 'rock_climbing', 'russian', 'salad', 'salvadoran', 'sandwiches', 'scottish', 'scuba', 'seafood', 'seafoodmarkets', 'servicestations', 'shanghainese', 'shavedice', 'shopping', 'shoppingcenters', 'sicilian', 'singaporean', 'skate_parks', 'skiresorts', 'sledding', 'smokehouse', 'snorkeling', 'social_clubs', 'soulfood', 'soup', 'southern', 'souvenirs', 'spanish', 'speakeasies', 'specialtyschools', 'spiritual_shop', 'sportgoods', 'sports_clubs', 'sportsbars', 'sportswear', 'srilankan', 'stationery', 'steak', 'streetvendors', 'summer_camps', 'surfing', 'surfshop', 'sushi', 'swimmingpools', 'szechuan', 'tacos', 'taiwanese', 'tapas', 'tapasmallplates', 'tcm', 'tea', 'tennis', 'teppanyaki', 'tex-mex', 'thai', 'theater', 'themedcafes', 'tikibars', 'tobaccoshops', 'tours', 'toys', 'tradamerican', 'trampoline', 'travelagents', 'travelservices', 'tubing', 'turkish', 'ukrainian', 'unofficialyelpevents', 'vacation_rentals', 'vapeshops', 'vegan', 'vegetarian', 'venues', 'vietnamese', 'vitaminssupplements', 'waffles', 'walkingtours', 'waterstores', 'wedding_planning', 'whiskeybars', 'wholesale_stores', 'wine_bars', 'womenscloth', 'wraps', 'yelpevents', 'yoga', 'zipline', 'zoos']\n" ], [ "yelp_categories = 
pd.read_json(\"../../data/raw/categories.json\")\n\n# select only one parent categories\nyelp_categories = yelp_categories[yelp_categories.parents.apply(lambda x: len(x) == 1) == True]\nres_list = yelp_categories[yelp_categories.parents.apply(lambda x: len([i for i in x if i in 'restaurants']) > 0)]\nres_list = res_list.alias\nprint(set(res_list))", "{'newamerican', 'nightfood', 'salad', 'bistros', 'beergarden', 'popuprestaurants', 'freiduria', 'schnitzel', 'eastern_european', 'beisl', 'british', 'korean', 'guamanian', 'russian', 'turkish', 'uzbek', 'pakistani', 'delis', 'hungarian', 'singaporean', 'tex-mex', 'latin', 'honduran', 'portuguese', 'scottish', 'african', 'diners', 'brasseries', 'czechslovakian', 'breakfast_brunch', 'yugoslav', 'cafeteria', 'international', 'newmexican', 'trattorie', 'cajun', 'signature_cuisine', 'swabian', 'cafes', 'norwegian', 'tapas', 'flatbread', 'cheesesteaks', 'iberian', 'kurdish', 'fischbroetchen', 'raw_food', 'parma', 'halal', 'peruvian', 'comfortfood', 'israeli', 'argentine', 'taiwanese', 'danish', 'waffles', 'dumplings', 'irish', 'french', 'asianfusion', 'filipino', 'tapasmallplates', 'fishnchips', 'bavarian', 'greek', 'japanese', 'himalayan', 'vegetarian', 'newzealand', 'foodstands', 'ukrainian', 'malaysian', 'hkcafe', 'moroccan', 'catalan', 'galician', 'meatballs', 'somali', 'traditional_swedish', 'bbq', 'newcanadian', 'persian', 'gamemeat', 'australian', 'laos', 'potatoes', 'fondue', 'gluten_free', 'cambodian', 'lyonnais', 'sandwiches', 'kopitiam', 'seafood', 'soulfood', 'wraps', 'milkbars', 'venison', 'giblets', 'beerhall', 'vegan', 'hawaiian', 'italian', 'swedish', 'tradamerican', 'currysausage', 'asturian', 'hotdog', 'arabian', 'jewish', 'steak', 'pizza', 'nicaraguan', 'slovakian', 'hotpot', 'tavolacalda', 'panasian', 'eritrean', 'pfcomercial', 'czech', 'dinnertheater', 'brazilian', 'burmese', 'creperies', 'burgers', 'kosher', 'corsican', 'polynesian', 'canteen', 'rotisserie_chicken', 'serbocroatian', 'georgian', 'wok', 'food_court', 'srilankan', 'ethiopian', 'sud_ouest', 'vietnamese', 'blacksea', 'caribbean', 'german', 'kebab', 'southern', 'cuban', 'cypriot', 'mideastern', 'laotian', 'belgian', 'nikkei', 'noodles', 'bangladeshi', 'bulgarian', 'gastropubs', 'hotdogs', 'opensandwiches', 'chicken_wings', 'indonesian', 'basque', 'sushi', 'thai', 'norcinerie', 'oriental', 'swissfood', 'austrian', 'mongolian', 'romanian', 'heuriger', 'soup', 'chinese', 'poutineries', 'armenian', 'island_pub', 'baguettes', 'chickenshop', 'mediterranean', 'supperclubs', 'buffets', 'syrian', 'modern_australian', 'afghani', 'chilean', 'indpak', 'riceshop', 'pita', 'spanish', 'andalusian', 'tabernas', 'mexican', 'pubfood', 'scandinavian', 'modern_european', 'polish'}\n" ], [ "'arabian' in res_list.values", "_____no_output_____" ], [ "dataset.category_alias.head()", "_____no_output_____" ], [ "df_res_list = dataset[dataset.category_alias.apply(lambda x: len([i for i in x.split(', ') if i in res_list.values]) > 0)]\nlen(df_res_list)", "_____no_output_____" ], [ "df_res_list.head()", "_____no_output_____" ], [ "from itertools import chain\nset(chain(*[i.split(',') for i in set(dataset.transactions)]))", "_____no_output_____" ], [ "df_transactions = dataset[dataset.transactions.apply(lambda x: len(x) > 0)]\nlen(df_transactions)", "_____no_output_____" ], [ "df_transactions_not = dataset[dataset.transactions.apply(lambda x: len(x) == 0)]\nlen(df_transactions_not)", "_____no_output_____" ], [ "# restaurants? 
[BUSINESSES with TRANSACTION]\nprint(set(', '.join(df_transactions.category_alias).split(', ')))", "{'newamerican', 'salad', 'breweries', 'british', 'korean', 'turkish', 'russian', 'pakistani', 'delis', 'singaporean', 'tex-mex', 'latin', 'cupcakes', 'seafoodmarkets', 'teppanyaki', 'bagels', 'diners', 'lounges', 'brasseries', 'oliveoil', 'breakfast_brunch', 'poke', 'newmexican', 'healthmarkets', 'shavedice', 'bakeries', 'cajun', 'tea', 'cafes', 'tapas', 'gelato', 'cocktailbars', 'cheesesteaks', 'raw_food', 'peruvian', 'halal', 'comfortfood', 'argentine', 'meats', 'taiwanese', 'icecream', 'waffles', 'wine_bars', 'izakaya', 'irish', 'beer_and_wine', 'french', 'asianfusion', 'tapasmallplates', 'bubbletea', 'sportsbars', 'fishnchips', 'greek', 'catering', 'japanese', 'vegetarian', 'himalayan', 'divebars', 'foodstands', 'vapeshops', 'hkcafe', 'szechuan', 'bbq', 'persian', 'australian', 'poolhalls', 'customcakes', 'gluten_free', 'sandwiches', 'seafood', 'wraps', 'vegan', 'italian', 'hawaiian', 'tradamerican', 'tacos', 'donuts', 'acaibowls', 'steak', 'pizza', 'lebanese', 'cakeshop', 'sicilian', 'hotpot', 'japacurry', 'panasian', 'dinnertheater', 'brazilian', 'creperies', 'burgers', 'grocery', 'beerbar', 'salvadoran', 'vitaminssupplements', 'desserts', 'diyfood', 'vietnamese', 'german', 'caribbean', 'southern', 'cuban', 'irish_pubs', 'mideastern', 'foodtrucks', 'laotian', 'pastashops', 'empanadas', 'countrydancehalls', 'noodles', 'tikibars', 'gastropubs', 'hotdogs', 'chicken_wings', 'sushi', 'basque', 'thai', 'ramen', 'hookah_bars', 'venues', 'musicvenues', 'pubs', 'candy', 'whiskeybars', 'internetcafe', 'coffee', 'falafel', 'bars', 'soup', 'beergardens', 'chinese', 'chickenshop', 'mediterranean', 'buffets', 'dimsum', 'afghani', 'indpak', 'importedfood', 'spanish', 'mexican', 'juicebars', 'modern_european', 'polish', 'eventplanning', 'organic_stores'}\n" ], [ "# not restaurants? 
[BUSINESSES w/o TRANSACTION]\nprint(set(', '.join(df_transactions_not.category_alias).split(', ')))", "{'newamerican', 'poolservice', 'salad', 'museums', 'breweries', 'golflessons', 'stationery', 'countryclubs', 'yoga', 'education', 'airportlounges', 'playgrounds', 'waterstores', 'drugstores', 'popuprestaurants', 'specialtyschools', 'arcades', 'meditationcenters', 'beachequipmentrental', 'rafting', 'toys', 'british', 'markets', 'korean', 'turkish', 'trampoline', 'russian', 'pakistani', 'delis', 'healthtrainers', 'summer_camps', 'outdoormovies', 'tex-mex', 'nightlife', 'herbsandspices', 'localservices', 'artsandcrafts', 'latin', 'travelagents', 'cupcakes', 'daycamps', 'seafoodmarkets', 'shanghainese', 'portuguese', 'gaybars', 'horsebackriding', 'teppanyaki', 'travelservices', 'mountainbiking', 'outdoorgear', 'scottish', 'active', 'bagels', 'lounges', 'diners', 'sledding', 'yelpevents', 'breakfast_brunch', 'sports_clubs', 'chocolate', 'pets', 'cafeteria', 'fooddeliveryservices', 'beaches', 'poke', 'discgolf', 'newmexican', 'souvenirs', 'healthmarkets', 'shavedice', 'bakeries', 'giftshops', 'cajun', 'hobbyshops', 'unofficialyelpevents', 'basque', 'medcenters', 'menscloth', 'popcorn', 'tea', 'intlgrocery', 'cafes', 'cideries', 'gelato', 'tapas', 'cocktailbars', 'cheesesteaks', 'publicservicesgovt', 'shoppingcenters', 'hindu_temples', 'raw_food', 'peruvian', 'halal', 'biketours', 'comfortfood', 'scuba', 'gardens', 'massage', 'argentine', 'meats', 'tours', 'taiwanese', 'icecream', 'waffles', 'wine_bars', 'karaoke', 'foodtours', 'casinos', 'bike_repair_maintenance', 'artmuseums', 'izakaya', 'irish', 'bikes', 'womenscloth', 'beer_and_wine', 'theater', 'french', 'asianfusion', 'filipino', 'tapasmallplates', 'butcher', 'bubbletea', 'sportsbars', 'restaurants', 'greek', 'catering', 'japanese', 'vegetarian', 'fishnchips', 'divebars', 'foodstands', 'dog_parks', 'fishing', 'cantonese', 'ukrainian', 'szechuan', 'campgrounds', 'smokehouse', 'social_clubs', 'bbq', 'golf', 'amusementparks', 'football', 'food', 'cheese', 'persian', 'lasertag', 'conveyorsushi', 'australian', 'boattours', 'poolhalls', 'fondue', 'customcakes', 'gluten_free', 'sandwiches', 'recreation', 'seafood', 'comedyclubs', 'parks', 'soulfood', 'rock_climbing', 'boatcharters', 'wraps', 'galleries', 'media', 'skiresorts', 'sportswear', 'vegan', 'hawaiian', 'tacos', 'tradamerican', 'italian', 'donuts', 'skate_parks', 'coffeeroasteries', 'spiritual_shop', 'educationservices', 'hotdog', 'acaibowls', 'arabian', 'steak', 'vacation_rentals', 'pizza', 'streetvendors', 'cigarbars', 'lakes', 'cakeshop', 'flowers', 'colombian', 'macarons', 'hotpot', 'homestaging', 'sicilian', 'surfshop', 'festivals', 'japacurry', 'swimmingpools', 'servicestations', 'cookingclasses', 'brewpubs', 'jewelry', 'partysupplies', 'panasian', 'fleamarkets', 'dinnertheater', 'golfequipment', 'czech', 'resorts', 'creperies', 'burgers', 'brazilian', 'kosher', 'polynesian', 'sportgoods', 'headshops', 'burmese', 'grocery', 'paddleboarding', 'clubcrawl', 'hotels', 'surfing', 'publicart', 'gourmet', 'food_court', 'srilankan', 'beerbar', 'themedcafes', 'fabricstores', 'salvadoran', 'hostels', 'fitness', 'walkingtours', 'bootcamps', 'aquariumservices', 'desserts', 'diyfood', 'vietnamese', 'german', 'caribbean', 'southern', 'kebab', 'religiousitems', 'cuban', 'irish_pubs', 'mideastern', 'foodtrucks', 'pastashops', 'tcm', 'eventplanning', 'countrydancehalls', 'snorkeling', 'localflavor', 'bookstores', 'noodles', 'tikibars', 'danceclubs', 'gastropubs', 'hotdogs', 'farmersmarket', 
'bikerentals', 'banks', 'landmarks', 'chicken_wings', 'sushi', 'couriers', 'tubing', 'matchmakers', 'thai', 'boating', 'kombucha', 'ramen', 'diving', 'speakeasies', 'patiocoverings', 'wholesale_stores', 'honey', 'wedding_planning', 'hookah_bars', 'bowling', 'venues', 'mongolian', 'zoos', 'musicvenues', 'candy', 'whiskeybars', 'pubs', 'artclasses', 'antiques', 'paintyourownpottery', 'internetcafe', 'shopping', 'coffee', 'falafel', 'baseballfields', 'accessories', 'kids_activities', 'bars', 'massage_therapy', 'hiking', 'soup', 'beergardens', 'chinese', 'amateursportsteams', 'nonprofit', 'chickenshop', 'mediterranean', 'aquariums', 'puertorican', 'buffets', 'cosmetics', 'jazzandblues', 'zipline', 'dimsum', 'indpak', 'tobaccoshops', 'climbing', 'hauntedhouses', 'importedfood', 'childrensmuseums', 'convenience', 'basketballcourts', 'spanish', 'mexican', 'tennis', 'juicebars', 'modern_european', 'collegeuniv', 'pettingzoos', 'organic_stores', 'personalchefs'}\n" ], [ "print(f\"{len(dataset)}\")\nprint(f\"{len(dataset.alias.unique())}\")", "6664\n6664\n" ], [ "dataset[dataset.alias == \"kimos-maui-lahaina\"]\ndataset[(dataset.dist_to_alias == \"kimos-maui-lahaina\") & (dataset.distance < 50)]\n", "_____no_output_____" ], [ "pd_location.columns = 'loc_' + pd_location.columns\npd_location.columns", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\ndataset.review_count.plot()\nplt.show()", "_____no_output_____" ], [ "yelp_branches = [\n 'kimos-maui-lahaina',\n 'sunnyside-tahoe-city-2',\n 'dukes-huntington-beach-huntington-beach-2',\n 'dukes-la-jolla-la-jolla',\n 'dukes-malibu-malibu-2',\n 'dukes-beach-house-lahaina',\n 'dukes-kauai-lihue-3',\n 'dukes-waikiki-honolulu-2',\n 'hula-grill-waikiki-honolulu-3',\n 'hula-grill-kaanapali-lahaina-2',\n 'keokis-paradise-koloa',\n 'leilanis-lahaina-2'\n]\n[i for i in dataset.alias.values if i in yelp_branches]", "_____no_output_____" ] ], [ [ "## Exploratory Data Analysis", "_____no_output_____" ] ], [ [ "print(dataset.loc[dataset.alias.isin(yelp_branches)].rating.sum())\nprint(dataset.loc[dataset.alias.isin(yelp_branches)].rating.mean())", "47.5\n3.9583333333333335\n" ], [ "len(dataset)", "_____no_output_____" ], [ "dataset.is_closed[dataset.is_closed == True].count()", "_____no_output_____" ], [ "dataset.price.value_counts()", "_____no_output_____" ], [ "print(f\"sum : {dataset.review_count.sum()}\")\nprint(f\"mean: {dataset.review_count.mean()}\")", "sum : 1416734\nmean: 212.5951380552221\n" ], [ "print(f\"sum : {dataset.rating.sum()}\")\nprint(f\"mean: {dataset.rating.mean()}\")", "sum : 25611.5\nmean: 3.8432623049219687\n" ], [ "dataset.loc[dataset.alias.isin(yelp_branches)].price.value_counts()", "_____no_output_____" ], [ "print(dataset.loc[dataset.alias.isin(yelp_branches)].review_count.sum())\nprint(dataset.loc[dataset.alias.isin(yelp_branches)].review_count.mean())", "26365\n2197.0833333333335\n" ], [ "import math\n\n\ndef distance(origin, destination):\n \"\"\"\n Calculate the Haversine distance.\n\n Parameters\n ----------\n origin : tuple of float\n (lat, long)\n destination : tuple of float\n (lat, long)\n\n Returns\n -------\n distance_in_km : float\n\n Examples\n --------\n >>> origin = (48.1372, 11.5756) # Munich\n >>> destination = (52.5186, 13.4083) # Berlin\n >>> round(distance(origin, destination), 1)\n 504.2\n \"\"\"\n lat1, lon1 = origin\n lat2, lon2 = destination\n radius = 6371 # km\n\n dlat = math.radians(lat2 - lat1)\n dlon = math.radians(lon2 - lon1)\n a = (math.sin(dlat / 2) * math.sin(dlat / 2) +\n 
math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *\n math.sin(dlon / 2) * math.sin(dlon / 2))\n c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))\n d = radius * c\n return d", "_____no_output_____" ], [ "origin = dataset.iloc[0].coordinate_latitude, dataset.iloc[0].coordinate_longitude\ndestination = dataset.iloc[4].coordinate_latitude, dataset.iloc[4].coordinate_longitude\ndistance(origin, destination) * 1000", "_____no_output_____" ], [ "bins = [0, 100, 500, 1000, 2000, 3000, 5000, 10000, 20000]\nlbls = [1, 2, 3, 4, 5, 6, 7, 8, 9]\npd_bins = pd.cut(dataset.review_count, bins, lbls).value_counts()\npd_bins.plot(title='Binned Review Count').tick_params(axis='x', labelrotation=45)", "_____no_output_____" ], [ "bins = [0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5]\npd_bins = pd.cut(dataset.rating, bins).value_counts()\npd_bins.plot(title='Binned Rating').tick_params(axis='x', labelrotation=45)", "_____no_output_____" ], [ "pd_bins", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d4ff5a3cbdf286e5286fc5546ad8b667d04306
958
ipynb
Jupyter Notebook
PartI/Chapter8/ch8.ipynb
aimplabs/rkm-mls-2021
9d4dac607c8cd1e95d9dc323c0f7d43405f607e9
[ "MIT" ]
1
2021-06-14T14:15:31.000Z
2021-06-14T14:15:31.000Z
PartI/Chapter8/ch8.ipynb
aimplabs/rkm-mls-2021
9d4dac607c8cd1e95d9dc323c0f7d43405f607e9
[ "MIT" ]
null
null
null
PartI/Chapter8/ch8.ipynb
aimplabs/rkm-mls-2021
9d4dac607c8cd1e95d9dc323c0f7d43405f607e9
[ "MIT" ]
null
null
null
17.418182
51
0.510438
[ [ [ "# Introduction to Natural Language Processing", "_____no_output_____" ], [ "## Review Questions\n\n1. Vector space model\n\n2. TF-IDF \n\n3. Use of `nltk` library \n\n4. Classification and sentiment analysis", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown" ] ]
e7d503c97afbee0a9fafcd0ae53e584f8f9271d5
11,496
ipynb
Jupyter Notebook
scatterplotgenerator_Pilyugin.ipynb
kadglass/Metallicity_gradients
66678d2fc9c83144fa5612dabde18b9d944e5429
[ "BSD-3-Clause" ]
null
null
null
scatterplotgenerator_Pilyugin.ipynb
kadglass/Metallicity_gradients
66678d2fc9c83144fa5612dabde18b9d944e5429
[ "BSD-3-Clause" ]
null
null
null
scatterplotgenerator_Pilyugin.ipynb
kadglass/Metallicity_gradients
66678d2fc9c83144fa5612dabde18b9d944e5429
[ "BSD-3-Clause" ]
1
2021-06-10T21:36:55.000Z
2021-06-10T21:36:55.000Z
44.3861
239
0.598295
[ [ [ "# Nate Brunacini, [email protected]\n# Supervisor: Kelly A. Douglass\n# This file includes methods to find the gradient (slope of the trend line) of the 3D (or \"R\") metallicities of \n# each spaxel in a MaNGA galaxy and to create a scatter plot of those gradient values.", "_____no_output_____" ], [ "# Import packages\nfrom astropy.io import fits\nimport deproject_spaxel as dps\nimport numpy as np\nimport math\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom astropy.table import Table\nfrom scipy.stats import linregress\n\nimport marvin\nfrom marvin.tools.maps import Maps", "/home/nbrunaci/.local/lib/python3.9/site-packages/marvin/core/exceptions.py:50: UserWarning: cannot initiate Sentry error reporting: [Errno 6] No such device or address.\n warnings.warn('cannot initiate Sentry error reporting: {0}.'.format(str(ee)),\n\u001b[0;34m[INFO]: \u001b[0mNo release version set. Setting default to DR15\n\u001b[1;33m[WARNING]: \u001b[0m\u001b[0;39mpath /home/nbrunaci/sas/mangawork/manga/spectro/redux/v2_4_3/drpall-v2_4_3.fits cannot be found. Setting drpall to None.\u001b[0m \u001b[0;36m(MarvinUserWarning)\u001b[0m\n\u001b[1;33m[WARNING]: \u001b[0m\u001b[0;39mpath /home/nbrunaci/sas/mangawork/manga/spectro/analysis/v2_4_3/2.2.1/dapall-v2_4_3-2.2.1.fits cannot be found. Setting dapall to None.\u001b[0m \u001b[0;36m(MarvinUserWarning)\u001b[0m\n" ], [ "# Takes in plateifu and table of kinematic center data, returns coordinates of kinematic center of galaxy\ndef getKinematicCenter(plateifu,c_table):\n plate, ifu = plateifu.split('-')\n bool_index = np.logical_and(c_table['MaNGA_plate'] == int(plate), c_table['MaNGA_IFU'] == int(ifu))\n x_coord = c_table['x0_map'][bool_index].data[0]\n y_coord = c_table['y0_map'][bool_index].data[0]\n return (y_coord,x_coord)\n \n# x0_map,y0_map: pass in as (y,x); same as (row,column)\n\n# Returns coordinates of photometric center of the galaxy with the given plateifu\ndef getPhotometricCenter(plateifu):\n maps = Maps(plateifu)\n# print(maps.datamodel)\n gfluxmap = maps['spx_mflux']\n center = np.unravel_index(np.argmax(gfluxmap.data),gfluxmap.shape)\n return center", "_____no_output_____" ], [ "# Takes in plateifu, data from drpall file, and table of kinematic centers, generates lists of normalized radius from galactic center and metallicity values, and outputs them in a dictionary\ndef radius_lists(plateifu,drp,c_table):\n with fits.open('MetallicityFITS_Pilyugin/Pilyugin_'+plateifu+'.fits', mode='update') as hdul:\n index = np.where(drp['PLATEIFU'] == plateifu)[0][0]# Index of galaxy with the given plateifu; there is only one value but it is nested, hence the [0][0]\n rot_angle = drp['NSA_ELPETRO_PHI'][index] * math.pi/180# Rotation angle; converted from degrees to radians\n inc_angle = np.arccos(drp['NSA_ELPETRO_BA'][index])#math.pi/2.0 - math.asin(drp['NSA_ELPETRO_BA'][index])# Inclination angle; converted from axis ratio to angle in radians\n re = drp['NSA_ELPETRO_TH50_R'][index]# 50% light radius in SDSS r-band (in arcsec)\n \n # Get the kinematic center of the galaxy; if there is none in the data file, use photometric center\n center = getKinematicCenter(plateifu,c_table)\n if center == -99.0:# No kinematic center if value is -99\n center = getPhotometricCenter(plateifu)\n \n #Arrays of values to be plotted\n radii_R = []# List of normalized radii between each spaxel and the galactic center for spaxels with R metallicity values\n R = []# List of R metallicity values excluding those at masked spaxels\n # Add points to lists\n for row in 
range(hdul[1].shape[1]):\n for col in range(hdul[1].shape[0]):\n # Calcuate deprojected radius for the spaxel\n coords = (row,col)\n rad_spax,_ = dps.deproject_spaxel(coords,center,rot_angle,inc_angle)#Radius in units of spaxels\n rad_arcsec = rad_spax * 0.5# Radius in arcseconds\n rad_normalized = rad_arcsec/re\n # Add normalized radius and metallicity values to lists if not masked at that spaxel\n if not hdul[3].data[row][col]:# Removes masked values\n radii_R.append(rad_normalized)\n R.append(hdul[1].data[row][col])\n return {\n 'radii_R': radii_R,\n 'R': R,\n 'r50':re\n }", "_____no_output_____" ], [ "# Takes in dictionary of radius and metallicity lists such as that output by the radius_lists function and outputs the parameters of the line of best fit\ndef calculate_fits(r_lists):\n # Not sure whether the r, p, and se values are needed. There is also an intercept_stderr value but that must be \n # accessed as an attribute of the returned objected (as in results = linregress(x,y) then results.intercept_stderr)\n# slope_N2, intercept_N2, r_N2, p_N2, se_N2 = linregress(r_lists['radii_N2'], r_lists['N2'])\n# slope_O3N2, intercept_O3N2, r_N2, p_N2, se_N2 = linregress(r_lists['radii_O3N2'], r_lists['O3N2'])\n# slope_N2O2, intercept_N2O2, r_N2, p_N2, se_N2 = linregress(r_lists['radii_N2O2'], r_lists['N2O2'])\n R_params = linregress(r_lists['radii_R'], r_lists['R'])\n return {\n # To access individual paramters, use (for example) N2_params.slope, .intercept, .rvalue, .pvalue, .stderr,\n # .intercept_stderr\n 'R_params': R_params,\n 'r50':r_lists['r50']\n }", "_____no_output_____" ], [ "# Takes in output from radius_lists and calculate_fits functions as well as plateifu and plots scatter plots (metallicity \n# versus normalized radius) with lines of best fit\ndef scatterplots(r_lists,fit_params,plateifu):\n fig, plots = plt.subplots(1)\n fig.set_figheight(5)\n fig.set_figwidth(5)\n plots.plot(r_lists['radii_R'],r_lists['R'],'.')\n plots.set_title('3D Metallicity vs. Normalized Radius')\n plots.set_ylabel('Metallicity')\n plots.set_xlabel('r / r_e')\n x_R = np.linspace(min(r_lists['radii_R']),max(r_lists['radii_R']))#(0.0,1.6)\n y_R = fit_params['R_params'].slope * x_R + fit_params['R_params'].intercept\n plots.plot(x_R,y_R,'-r')\n plt.savefig('Pilyugin_Galaxy_ScatterPlots/'+plateifu+'ScatterPlot_R')\n plt.close()", "_____no_output_____" ], [ "# Wrapper function to call the above functions all at once. 
Takes in plateifu, data from drpall file, and table of kinematic \n# centers, calculates the parameters of the line of best fit of the normalized radius versus metallicity \n# data, and creates scatter plots\ndef find_gradient(plateifu,drp,c_table):\n r_lists = radius_lists(plateifu,drp,c_table)\n trend = calculate_fits(r_lists)\n scatterplots(r_lists,trend,plateifu)\n return trend", "_____no_output_____" ], [ "# # Calling the functions\n# with fits.open('drpall-v2_4_3.fits', memmap=True) as drpall:\n# c_table = Table.read('DRP-master_file_vflag_BB_smooth1p85_mapFit_N2O2_HIdr2_noWords_v5.txt',format='ascii.commented_header')\n# find_gradient('9487-12701',drpall[1].data,c_table)#('9487-12701',drpall[1].data,c_table)#('8335-12701')#('7443-12705')\n# # plt.savefig('PosterMaps/Scatter_8335-12701')", "\u001b[1;33m[WARNING]: \u001b[0m\u001b[0;39mOverflowError converting to FloatType in column avg_alpha, possibly resulting in degraded precision.\u001b[0m \u001b[0;36m(AstropyWarning)\u001b[0m\n\u001b[1;33m[WARNING]: \u001b[0m\u001b[0;39mOverflowError converting to FloatType in column pos_alpha, possibly resulting in degraded precision.\u001b[0m \u001b[0;36m(AstropyWarning)\u001b[0m\n\u001b[1;33m[WARNING]: \u001b[0m\u001b[0;39mOverflowError converting to FloatType in column neg_alpha, possibly resulting in degraded precision.\u001b[0m \u001b[0;36m(AstropyWarning)\u001b[0m\n" ], [ "# with fits.open('MetallicityFITS/Brown_7992-12705.fits', mode='update') as hdul:\n# print(hdul.info())", "Filename: MetallicityFITS/Brown_7992-12705.fits\nNo. Name Ver Type Cards Dimensions Format\n 0 PRIMARY 1 PrimaryHDU 5 (0,) \n 1 N2_METALLICITY 1 ImageHDU 8 (74, 74) float64 \n 2 O3N2_METALLICITY 1 ImageHDU 8 (74, 74) float64 \n 3 N2O2_METALLICITY 1 ImageHDU 8 (74, 74) float64 \n 4 N2_IVAR 1 ImageHDU 8 (74, 74) float64 \n 5 O3N2_IVAR 1 ImageHDU 8 (74, 74) float64 \n 6 N2O2_IVAR 1 ImageHDU 8 (74, 74) float64 \n 7 N2_MASK 1 ImageHDU 8 (74, 74) int32 \n 8 O3N2_MASK 1 ImageHDU 8 (74, 74) int32 \n 9 N2O2_MASK 1 ImageHDU 8 (74, 74) int32 \nNone\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d5043afdc85a6c2c19b8cf172c85af0bd6086c
9,584
ipynb
Jupyter Notebook
scikit-learn/01.Introduction_to_Machine_Learning.ipynb
9465565598/ThaparWorkshopANN
1e6aa9002bb4b3c85ddeb554538f77c8958627e3
[ "MIT" ]
16
2019-06-19T05:43:01.000Z
2020-12-01T13:24:55.000Z
scikit-learn/01.Introduction_to_Machine_Learning.ipynb
9465565598/ThaparWorkshopANN
1e6aa9002bb4b3c85ddeb554538f77c8958627e3
[ "MIT" ]
null
null
null
scikit-learn/01.Introduction_to_Machine_Learning.ipynb
9465565598/ThaparWorkshopANN
1e6aa9002bb4b3c85ddeb554538f77c8958627e3
[ "MIT" ]
37
2019-06-17T11:53:13.000Z
2020-06-02T13:05:31.000Z
48.40404
534
0.65025
[ [ [ "# Scikit-learn Tutorial", "_____no_output_____" ], [ "# Introduction to Machine Learning in Python", "_____no_output_____" ], [ "## What is Machine Learning?", "_____no_output_____" ], [ "Machine learning is the process of extracting knowledge from data automatically, usually with the goal of making predictions on new, unseen data. A classical example is a spam filter, for which the user keeps labeling incoming mails as either spam or not spam. A machine learning algorithm then \"learns\" a predictive model from data that distinguishes spam from normal emails, a model which can predict for new emails whether they are spam or not. \n\nCentral to machine learning is the concept of **automating decision making** from data **without the user specifying explicit rules** how this decision should be made.\n\nFor the case of emails, the user doesn't provide a list of words or characteristics that make an email spam. Instead, the user provides examples of spam and non-spam emails that are labeled as such.\n\nThe second central concept is **generalization**. The goal of a machine learning model is to predict on new, previously unseen data. In a real-world application, we are not interested in marking an already labeled email as spam or not. Instead, we want to make the user's life easier by automatically classifying new incoming mail.", "_____no_output_____" ], [ "<img src=\"figures/supervised_workflow.svg\" width=\"100%\">", "_____no_output_____" ], [ "The data is presented to the algorithm usually as a two-dimensional array (or matrix) of numbers. Each data point (also known as a *sample* or *training instance*) that we want to either learn from or make a decision on is represented as a list of numbers, a so-called feature vector, and its containing features represent the properties of this point. \n\nLater, we will work with a popular dataset called *Iris* -- among many other datasets. Iris, a classic benchmark dataset in the field of machine learning, contains the measurements of 150 iris flowers from 3 different species: Iris-Setosa, Iris-Versicolor, and Iris-Virginica. \n\n\n\n<table style=\"width:100%\">\n <tr>\n <th>Species</th>\n <th>Image</th>\n </tr>\n <tr>\n <td>Iris Setosa</td>\n <td><img src=\"figures/iris_setosa.jpg\" width=\"80%\"></td>\n </tr>\n <tr>\n <td>Iris Versicolor</td>\n <td><img src=\"figures/iris_versicolor.jpg\" width=\"80%\"></td>\n </tr>\n <tr>\n <td>Iris Virginica</td>\n <td><img src=\"figures/iris_virginica.jpg\" width=\"80%\"></td>\n </tr>\n</table>\n\n\n\n\n\nWe represent each flower sample as one row in our data array, and the columns (features) represent the flower measurements in centimeters. 
For instance, we can represent this Iris dataset, consisting of 150 samples and 4 features, a 2-dimensional array or matrix $\\mathbb{R}^{150 \\times 4}$ in the following format:\n\n\n$$\\mathbf{X} = \\begin{bmatrix}\n x_{1}^{(1)} & x_{2}^{(1)} & x_{3}^{(1)} & \\dots & x_{4}^{(1)} \\\\\n x_{1}^{(2)} & x_{2}^{(2)} & x_{3}^{(2)} & \\dots & x_{4}^{(2)} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n x_{1}^{(150)} & x_{2}^{(150)} & x_{3}^{(150)} & \\dots & x_{4}^{(150)}\n\\end{bmatrix}.\n$$\n\n(The superscript denotes the *i*th row, and the subscript denotes the *j*th feature, respectively.", "_____no_output_____" ], [ "There are two kinds of machine learning we will talk about today: ***supervised learning*** and ***unsupervised learning***.", "_____no_output_____" ], [ "### Supervised Learning: Classification and regression\n\nIn **Supervised Learning**, we have a dataset consisting of both input features and a desired output, such as in the spam / no-spam example.\nThe task is to construct a model (or program) which is able to predict the desired output of an unseen object\ngiven the set of features.\n\nSome more complicated examples are:\n\n- Given a multicolor image of an object through a telescope, determine\n whether that object is a star, a quasar, or a galaxy.\n- Given a photograph of a person, identify the person in the photo.\n- Given a list of movies a person has watched and their personal rating\n of the movie, recommend a list of movies they would like.\n- Given a persons age, education and position, infer their salary\n\nWhat these tasks have in common is that there is one or more unknown\nquantities associated with the object which needs to be determined from other\nobserved quantities.\n\nSupervised learning is further broken down into two categories, **classification** and **regression**:\n\n- **In classification, the label is discrete**, such as \"spam\" or \"no spam\". In other words, it provides a clear-cut distinction between categories. Furthermore, it is important to note that class labels are nominal, not ordinal variables. Nominal and ordinal variables are both subcategories of categorical variable. Ordinal variables imply an order, for example, T-shirt sizes \"XL > L > M > S\". On the contrary, nominal variables don't imply an order, for example, we (usually) can't assume \"orange > blue > green\".\n\n\n- **In regression, the label is continuous**, that is a float output. For example,\nin astronomy, the task of determining whether an object is a star, a galaxy, or a quasar is a\nclassification problem: the label is from three distinct categories. On the other hand, we might\nwish to estimate the age of an object based on such observations: this would be a regression problem,\nbecause the label (age) is a continuous quantity.\n\nIn supervised learning, there is always a distinction between a **training set** for which the desired outcome is given, and a **test set** for which the desired outcome needs to be inferred. 
The learning model fits the predictive model to the training set, and we use the test set to evaluate its generalization performance.\n", "_____no_output_____" ], [ "### Unsupervised Learning\n\nIn **Unsupervised Learning** there is no desired output associated with the data.\nInstead, we are interested in extracting some form of knowledge or model from the given data.\nIn a sense, you can think of unsupervised learning as a means of discovering labels from the data itself.\nUnsupervised learning is often harder to understand and to evaluate.\n\nUnsupervised learning comprises tasks such as *dimensionality reduction*, *clustering*, and\n*density estimation*. For example, in the iris data discussed above, we can used unsupervised\nmethods to determine combinations of the measurements which best display the structure of the\ndata. As we’ll see below, such a projection of the data can be used to visualize the\nfour-dimensional dataset in two dimensions. Some more involved unsupervised learning problems are:\n\n- Given detailed observations of distant galaxies, determine which features or combinations of\n features summarize best the information.\n- Given a mixture of two sound sources (for example, a person talking over some music),\n separate the two (this is called the [blind source separation](http://en.wikipedia.org/wiki/Blind_signal_separation) problem).\n- Given a video, isolate a moving object and categorize in relation to other moving objects which have been seen.\n- Given a large collection of news articles, find recurring topics inside these articles.\n- Given a collection of images, cluster similar images together (for example to group them when visualizing a collection)\n\nSometimes the two may even be combined: e.g. unsupervised learning can be used to find useful\nfeatures in heterogeneous data, and then these features can be used within a supervised\nframework.", "_____no_output_____" ], [ "### (simplified) Machine learning taxonomy\n\n<img src=\"figures/ml_taxonomy.png\" width=\"80%\">", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7d50d20c488167720cf48b10527f8309f498b84
38,138
ipynb
Jupyter Notebook
Prod_ML_IDS_model.ipynb
sooualil/atlas-plugin-sample
845eed46d7ae71e3d3aa04daeb3b730173eee73e
[ "BSD-3-Clause" ]
null
null
null
Prod_ML_IDS_model.ipynb
sooualil/atlas-plugin-sample
845eed46d7ae71e3d3aa04daeb3b730173eee73e
[ "BSD-3-Clause" ]
null
null
null
Prod_ML_IDS_model.ipynb
sooualil/atlas-plugin-sample
845eed46d7ae71e3d3aa04daeb3b730173eee73e
[ "BSD-3-Clause" ]
null
null
null
35.34569
314
0.369317
[ [ [ "columns_to_encode = ['protocol', 'application_name', 'application_category_name', 'content_type']", "_____no_output_____" ], [ "additional_columns = ['udps.num_pkts_up_to_128_bytes', 'udps.num_pkts_128_to_256_bytes',\n 'udps.num_pkts_256_to_512_bytes', 'udps.num_pkts_512_to_1024_bytes',\n 'udps.num_pkts_1024_to_1514_bytes', 'udps.min_ttl', 'udps.max_ttl',\n 'udps.min_ip_pkt_len', 'udps.max_ip_pkt_len', 'udps.src2dst_flags',\n 'udps.dst2src_flags', 'udps.tcp_flags', 'udps.tcp_win_max_in',\n 'udps.tcp_win_max_out', 'udps.icmp_type', 'udps.icmp_v4_type',\n 'udps.dns_query_id', 'udps.dns_query_type', 'udps.dns_ttl_answer',\n 'udps.ftp_command_ret_code', 'udps.retransmitted_in_packets',\n 'udps.retransmitted_out_packets', 'udps.retransmitted_in_bytes',\n 'udps.retransmitted_out_bytes', 'udps.src_to_dst_second_bytes',\n 'udps.dst_to_src_second_bytes', 'udps.src_to_dst_avg_throughput',\n 'udps.dst_to_src_avg_throughput', 'udps.src_to_dst_second_bytes2',\n 'udps.dst_to_src_second_bytes2', 'udps.src_to_dst_avg_throughput2',\n 'udps.dst_to_src_avg_throughput2']", "_____no_output_____" ], [ "columns_to_delete = ['udps.bidirectional_pkts']", "_____no_output_____" ], [ "import pandas as pd\nfrom sklearn import preprocessing\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom joblib import load\nimport os", "_____no_output_____" ] ], [ [ "# With additional features", "_____no_output_____" ] ], [ [ "models_path = \"/home/soufiane.oualil/lustre/data_sec-um6p-st-sccs-6sevvl76uja/IDS/prod_ai_models/ML/full_binary_0/\"\n\n", "_____no_output_____" ], [ "dataset = \"UNSW-NB15\"", "_____no_output_____" ], [ "dataset_path = f\"/home/soufiane.oualil/lustre/data_sec-um6p-st-sccs-6sevvl76uja/IDS/preprocessed_datasets/{dataset}/flow_features/multi/\"\n\n", "_____no_output_____" ], [ "data = pd.read_pickle(f'{dataset_path}/test.p')", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "y = data['Attack'].values", "_____no_output_____" ], [ "data = data.drop(columns_to_delete + ['Attack'], axis = 1)", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "for column in columns_to_encode:\n le = load(f'{models_path}encoders/{column}.joblib') \n data[column] = le.transform(data[column])\n ", "/home/soufiane.oualil/.conda/envs/atlas/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator LabelEncoder from version 0.23.2 when using version 1.0.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:\nhttps://scikit-learn.org/stable/modules/model_persistence.html#security-maintainability-limitations\n warnings.warn(\n" ], [ "le_labels = load(f'{models_path}/encoders/attack_encoder.joblib')\n", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "clf = load(f'{models_path}/clf_model.joblib') ", "/home/soufiane.oualil/.conda/envs/atlas/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator ExtraTreeClassifier from version 0.23.2 when using version 1.0.2. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:\nhttps://scikit-learn.org/stable/modules/model_persistence.html#security-maintainability-limitations\n warnings.warn(\n/home/soufiane.oualil/.conda/envs/atlas/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator ExtraTreesClassifier from version 0.23.2 when using version 1.0.2. This might lead to breaking code or invalid results. Use at your own risk. 
For more info please refer to:\nhttps://scikit-learn.org/stable/modules/model_persistence.html#security-maintainability-limitations\n warnings.warn(\n" ], [ "preds = clf.predict(data.values)", "_____no_output_____" ], [ "preds_labels = le_labels.inverse_transform(preds)", "_____no_output_____" ], [ "print(preds_labels[:20])", "['Benign' 'Benign' 'Benign' 'Malign' 'Benign' 'Benign' 'Benign' 'Benign'\n 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign'\n 'Benign' 'Benign' 'Benign' 'Benign']\n" ] ], [ [ "# Without additional features", "_____no_output_____" ] ], [ [ "models_path = \"/home/abdellah.elmekki/lustre/data_sec-um6p-st-sccs-6sevvl76uja/IDS/prod_ai_models/ML/full_binary_1/\"\n", "_____no_output_____" ], [ "data = pd.read_pickle(f'{dataset_path}/test.p')", "_____no_output_____" ], [ "y = data['Attack'].values", "_____no_output_____" ], [ "data = data.drop(columns_to_delete + additional_columns + ['Attack'], axis = 1)", "_____no_output_____" ], [ "for column in columns_to_encode:\n le = load(f'{models_path}/encoders/{column}.joblib') \n data[column] = le.transform(data[column])\n ", "_____no_output_____" ], [ "le_labels = load(f'{models_path}/encoders/attack_encoder.joblib')\n", "_____no_output_____" ], [ "clf = load(f'{models_path}/clf_model.joblib') ", "_____no_output_____" ], [ "preds = clf.predict(data.values)", "_____no_output_____" ], [ "print(preds_labels[:20])", "['Benign' 'Benign' 'Benign' 'Malign' 'Benign' 'Benign' 'Benign' 'Benign'\n 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign'\n 'Benign' 'Benign' 'Benign' 'Benign']\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d52a9f98e828de170cac366b4908fc8e57ca84
5,804
ipynb
Jupyter Notebook
notebooks/test-trino-access.ipynb
os-climate/data-platform-demo
99dfeecc7058479a9f9989efb7a77327b4cd8a22
[ "FTL" ]
null
null
null
notebooks/test-trino-access.ipynb
os-climate/data-platform-demo
99dfeecc7058479a9f9989efb7a77327b4cd8a22
[ "FTL" ]
39
2021-09-09T21:42:19.000Z
2022-03-21T15:30:08.000Z
notebooks/test-trino-access.ipynb
os-climate/data-platform-demo
99dfeecc7058479a9f9989efb7a77327b4cd8a22
[ "FTL" ]
2
2021-09-16T18:25:23.000Z
2021-09-30T22:07:22.000Z
31.715847
140
0.605272
[ [ [ "## Install python libraries\n\nThe following cell can be used to ensure that the python libraries used\nin this test notebook are installed.\nThese may be pre-installed in future notebook images.\nOnce this cell has been run, it need not be re-run unless you have restarted your jupyter server.", "_____no_output_____" ] ], [ [ "# Install the library dependencies used in this notebook\n# (comment this out if you prefer to not re-run this cell)\n%pip install trino python-dotenv", "Requirement already satisfied: trino in /opt/app-root/lib/python3.8/site-packages (0.306.0)\nRequirement already satisfied: python-dotenv in /opt/app-root/lib/python3.8/site-packages (0.19.1)\nRequirement already satisfied: requests in /opt/app-root/lib/python3.8/site-packages (from trino) (2.25.1)\nRequirement already satisfied: chardet<5,>=3.0.2 in /opt/app-root/lib/python3.8/site-packages (from requests->trino) (4.0.0)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/app-root/lib/python3.8/site-packages (from requests->trino) (1.26.4)\nRequirement already satisfied: certifi>=2017.4.17 in /opt/app-root/lib/python3.8/site-packages (from requests->trino) (2020.12.5)\nRequirement already satisfied: idna<3,>=2.5 in /opt/app-root/lib/python3.8/site-packages (from requests->trino) (2.10)\n\u001b[33mWARNING: You are using pip version 21.1; however, version 21.3 is available.\nYou should consider upgrading via the '/opt/app-root/bin/python3.8 -m pip install --upgrade pip' command.\u001b[0m\nNote: you may need to restart the kernel to use updated packages.\n" ] ], [ [ "## Loading credentials\n\nThe following cell finds a `credentials.env` file at the jupyter \"home\" (top level) directory.\n\nValues in this `dotenv` file are loaded into the `os.environ` table,\nas if they were regular environment variables.\n\nCredentials are stored in `dotenv` files so that they can be referred to by standard\nenvironment variable names, and do not appear in notebooks or other code,\nwhich would be a security leak.", "_____no_output_____" ] ], [ [ "from dotenv import dotenv_values, load_dotenv\nimport os\nimport pathlib\n\ndotenv_dir = os.environ.get('CREDENTIAL_DOTENV_DIR', os.environ.get('PWD', '/opt/app-root/src'))\ndotenv_path = pathlib.Path(dotenv_dir) / 'credentials.env'\nif os.path.exists(dotenv_path):\n load_dotenv(dotenv_path=dotenv_path,override=True)", "_____no_output_____" ] ], [ [ "## Connect to trino\n\nThe following cell creates a trino api connection.\n\nIt assumes that your `credentials.env` file has been edited so that\n`TRINO_PASSWD` has a JWT token obtained from:\nhttps://das-odh-trino.apps.odh-cl1.apps.os-climate.org/\n\nYour `TRINO_USER` value should be your github username.", "_____no_output_____" ] ], [ [ "import trino\nconn = trino.dbapi.connect(\n host=os.environ['TRINO_HOST'],\n port=int(os.environ['TRINO_PORT']),\n user=os.environ['TRINO_USER'],\n http_scheme='https',\n auth=trino.auth.JWTAuthentication(os.environ['TRINO_PASSWD']),\n verify=True,\n)\ncur = conn.cursor()", "_____no_output_____" ] ], [ [ "## Test your trino connection\n\nThis cell shows all the catalogs visible to you.\nIf your trino api connection initialized correctly above,\nthis `show catalogs` command should always succeed.", "_____no_output_____" ] ], [ [ "cur.execute('show catalogs')\ncur.fetchall()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d55699de2d4ee8df707746667ab1230a3d3626
368,643
ipynb
Jupyter Notebook
codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb
tvml/fo2021
a401b826bb3c71ddffb979a19f8e4ddcb0c14d77
[ "MIT" ]
null
null
null
codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb
tvml/fo2021
a401b826bb3c71ddffb979a19f8e4ddcb0c14d77
[ "MIT" ]
null
null
null
codici/.ipynb_checkpoints/gda-lin-sk-cv-checkpoint.ipynb
tvml/fo2021
a401b826bb3c71ddffb979a19f8e4ddcb0c14d77
[ "MIT" ]
null
null
null
834.033937
109,494
0.941013
[ [ [ "Gaussian discriminant analysis con stessa matrice di covarianza per le distribuzioni delle due classi e conseguente separatore lineare. Implementata in scikit-learn. Valutazione con cross validation. ", "_____no_output_____" ] ], [ [ "import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline", "_____no_output_____" ], [ "import pandas as pd\nimport numpy as np\nimport scipy.stats as st\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.model_selection import cross_val_score\nimport sklearn.metrics as mt", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\n\nplt.style.use('fivethirtyeight')\n\nplt.rcParams['font.family'] = 'sans-serif'\nplt.rcParams['font.serif'] = 'Ubuntu'\nplt.rcParams['font.monospace'] = 'Ubuntu Mono'\nplt.rcParams['font.size'] = 10\nplt.rcParams['axes.labelsize'] = 10\nplt.rcParams['axes.labelweight'] = 'bold'\nplt.rcParams['axes.titlesize'] = 10\nplt.rcParams['xtick.labelsize'] = 8\nplt.rcParams['ytick.labelsize'] = 8\nplt.rcParams['legend.fontsize'] = 10\nplt.rcParams['figure.titlesize'] = 12\nplt.rcParams['image.cmap'] = 'jet'\nplt.rcParams['image.interpolation'] = 'none'\nplt.rcParams['figure.figsize'] = (16, 8)\nplt.rcParams['lines.linewidth'] = 2\nplt.rcParams['lines.markersize'] = 8\n\ncolors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', \n'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', \n'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']\n\ncmap = mcolors.LinearSegmentedColormap.from_list(\"\", [\"#82cafc\", \"#069af3\", \"#0485d1\", colors[0], colors[8]])", "_____no_output_____" ] ], [ [ "Leggiamo i dati da un file csv in un dataframe pandas. I dati hanno 3 valori: i primi due corrispondono alle features e sono assegnati alle colonne x1 e x2 del dataframe; il terzo è il valore target, assegnato alla colonna t. 
Vengono poi creati una matrice X delle features e un vettore target t", "_____no_output_____" ] ], [ [ "# legge i dati in dataframe pandas\ndata = pd.read_csv(\"../../data/ex2data1.txt\", header= None,delimiter=',', names=['x1','x2','t'])\n\n# calcola dimensione dei dati\nn = len(data)\nn0 = len(data[data.t==0])\n\n# calcola dimensionalità delle features\nfeatures = data.columns\nnfeatures = len(features)-1\n\nX = np.array(data[features[:-1]])\nt = np.array(data['t'])\n", "_____no_output_____" ] ], [ [ "Visualizza il dataset.", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(16,8))\nax = fig.gca()\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, color=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.title('Dataset', fontsize=12)\nplt.show()", "_____no_output_____" ] ], [ [ "Definisce un classificatore basato su GDA quadratica ed effettua il training sul dataset.", "_____no_output_____" ] ], [ [ "clf = LinearDiscriminantAnalysis(store_covariance=True)\nclf.fit(X, t)", "_____no_output_____" ] ], [ [ "Definiamo la griglia 100x100 da utilizzare per la visualizzazione delle varie distribuzioni.", "_____no_output_____" ] ], [ [ "# insieme delle ascisse dei punti\nu = np.linspace(min(X[:,0]), max(X[:,0]), 100)\n# insieme delle ordinate dei punti\nv = np.linspace(min(X[:,1]), max(X[:,1]), 100)\n# deriva i punti della griglia: il punto in posizione i,j nella griglia ha ascissa U(i,j) e ordinata V(i,j)\nU, V = np.meshgrid(u, v)", "_____no_output_____" ] ], [ [ "Calcola sui punti della griglia le probabilità delle classi $p(x|C_0), p(x|C_1)$ e le probabilità a posteriori delle classi $p(C_0|x), p(C_1|x)$", "_____no_output_____" ] ], [ [ "# probabilità a posteriori delle due distribuzioni sulla griglia\nZ = clf.predict_proba(np.c_[U.ravel(), V.ravel()])\npp0 = Z[:, 0].reshape(U.shape)\npp1 = Z[:, 1].reshape(V.shape)\n# rapporto tra le probabilità a posteriori delle classi per tutti i punti della griglia\nz=pp0/pp1 \n\n# probabilità per le due classi sulla griglia\nmu0 = clf.means_[0]\nmu1 = clf.means_[1]\nsigma = clf.covariance_\nvf0=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu0,sigma))\nvf1=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu1,sigma))\np0=vf0(U,V)\np1=vf1(U,V)", "_____no_output_____" ] ], [ [ "Visualizzazione della distribuzione di $p(x|C_0)$", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(16,8))\nax = fig.gca()\n# inserisce una rappresentazione della probabilità della classe C0 sotto forma di heatmap\nimshow_handle = plt.imshow(p0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)\nplt.contour(U, V, p0, linewidths=[.7], colors=[colors[6]])\n# rappresenta i punti del dataset\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\n# rappresenta la media della distribuzione\nax.scatter(mu0[0], mu0[1], s=150,c=colors[3], marker='*', alpha=1)\n# inserisce titoli, etc.\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(u.min(), u.max())\nplt.ylim(v.min(), v.max())\nplt.title('Distribuzione di $p(x|C_0)$', fontsize=12)\nplt.show()", "_____no_output_____" ] ], [ [ "Visualizzazione della distribuzione di $p(x|C1)$", "_____no_output_____" ] ], [ [ "fig 
= plt.figure(figsize=(16,8))\nax = fig.gca()\n# inserisce una rappresentazione della probabilità della classe C0 sotto forma di heatmap\nimshow_handle = plt.imshow(p1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)\nplt.contour(U, V, p1, linewidths=[.7], colors=[colors[6]])\n# rappresenta i punti del dataset\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\n# rappresenta la media della distribuzione\nax.scatter(mu1[0], mu1[1], s=150,c=colors[3], marker='*', alpha=1)\n# inserisce titoli, etc.\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(u.min(), u.max())\nplt.ylim(v.min(), v.max())\nplt.title('Distribuzione di $p(x|C_1)$', fontsize=12)\nplt.show()", "_____no_output_____" ] ], [ [ "Visualizzazione di $p(C_0|x)$", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(8,8))\nax = fig.gca()\nimshow_handle = plt.imshow(pp0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\nplt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1])\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(u.min(), u.max())\nplt.ylim(v.min(), v.max())\nplt.title(\"Distribuzione di $p(C_0|x)$\", fontsize=12)\nplt.show()", "_____no_output_____" ] ], [ [ "Visualizzazione di $p(C_1|x)$", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize=(8,8))\nax = fig.gca()\nimshow_handle = plt.imshow(pp1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\nplt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1])\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(u.min(), u.max())\nplt.ylim(v.min(), v.max())\nplt.title(\"Distribuzione di $p(C_1|x)$\", fontsize=12)\nplt.show()", "_____no_output_____" ] ], [ [ "Applica la cross validation (5-fold) per calcolare l'accuracy effettuando la media sui 5 valori restituiti.", "_____no_output_____" ] ], [ [ "print(\"Accuracy: {0:5.3f}\".format(cross_val_score(clf, X, t, cv=5, scoring='accuracy').mean()))", "Accuracy: 0.870\n" ] ] ]
[ "raw", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "raw" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d55d8f6354cc6ad7c88e990294cdc9550ed043
16,832
ipynb
Jupyter Notebook
_notebooks/08-final-thoughts.ipynb
ljcolling/bayes2022
c4c731349717db42d762d2205b3ff801ae6a3e73
[ "MIT" ]
1
2022-02-20T22:15:12.000Z
2022-02-20T22:15:12.000Z
_notebooks/08-final-thoughts.ipynb
ljcolling/bayes2022
c4c731349717db42d762d2205b3ff801ae6a3e73
[ "MIT" ]
null
null
null
_notebooks/08-final-thoughts.ipynb
ljcolling/bayes2022
c4c731349717db42d762d2205b3ff801ae6a3e73
[ "MIT" ]
null
null
null
34.281059
89
0.58971
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7d56146e90f2d90ac5df91a4547873f758de429
1,530
ipynb
Jupyter Notebook
docs/_sources/Scratchpad.ipynb
sea7aero/pyhab
36d7eb4bef4403bb4b9ec3503f63a666dca42550
[ "MIT" ]
null
null
null
docs/_sources/Scratchpad.ipynb
sea7aero/pyhab
36d7eb4bef4403bb4b9ec3503f63a666dca42550
[ "MIT" ]
null
null
null
docs/_sources/Scratchpad.ipynb
sea7aero/pyhab
36d7eb4bef4403bb4b9ec3503f63a666dca42550
[ "MIT" ]
null
null
null
18
61
0.484314
[ [ [ "from sympy import *", "_____no_output_____" ], [ "import pylatex as p", "_____no_output_____" ] ], [ [ "$$\n w_{t+1} = (1 + r_{t+1}) s(w_t) + y_{t+1}\n$$ (my_other_label)", "_____no_output_____" ], [ "- A link to an equation directive: {eq}`my_label`\n- A link to a dollar math block: {eq}`my_other_label`\n", "_____no_output_____" ], [ "::::{important}\n:::{note}\nThis text is **standard** _Markdown_\n:::\n::::\n", "_____no_output_____" ] ] ]
[ "code", "markdown" ]
[ [ "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
e7d56738f71043199d5f140f01dac81a9575e8f5
106,343
ipynb
Jupyter Notebook
notebooks-pt/Tutorial - Parte 04.ipynb
dnasc/savime-notebooks
1e99c9adc4a3818f0822f27037398f85470a42b7
[ "MIT" ]
1
2020-02-19T13:28:34.000Z
2020-02-19T13:28:34.000Z
notebooks-pt/Tutorial - Parte 04.ipynb
dnasc/savime-notebooks
1e99c9adc4a3818f0822f27037398f85470a42b7
[ "MIT" ]
null
null
null
notebooks-pt/Tutorial - Parte 04.ipynb
dnasc/savime-notebooks
1e99c9adc4a3818f0822f27037398f85470a42b7
[ "MIT" ]
1
2020-06-23T13:05:58.000Z
2020-06-23T13:05:58.000Z
70.147098
378
0.796451
[ [ [ "# Parte 04\n\nNessa parte os modelos criados anteriormente serão utilizados para realizar predições. Para isso, eles devem ser\nregistrados no TFX. Para efetuar as predições, os dados utilizados no treinamento desses modelos serão inseridos\nno SAVIME, o qual ficará encarregado de enviar e receber os dados para/de TFX. ", "_____no_output_____" ] ], [ [ "import os\nimport sys\n\n# Necessário mudar o diretório de trabalho para o nível mais acima\nif not 'notebooks' in os.listdir('.'):\n current_dir = os.path.abspath(os.getcwd())\n parent_dir = os.path.dirname(current_dir)\n os.chdir(parent_dir)\n\n# Inserir aqui o caminho do arquivo de dados: um json contendo informações a respeito \n# da partição de x e y utilizada na parte 01.\ndata_fp = 'saved_models_arima/data.json'\n\n# Configuração do host e porta em que o SAVIME está escutando\nsavime_host = '127.0.0.1'\nsavime_port = 65000\n\n# Configuração TFX\ntfx_host = 'localhost'\ntfx_port = 8501\n\n# Diretório de dados\ndata_dir = 'data'\n\n# Local do array de temperaturas\ndataset_path = os.path.join(data_dir, 'tiny-dataset.hdf5')", "_____no_output_____" ], [ "%load_ext autoreload\n%autoreload 2\n%matplotlib agg\n\nfrom IPython.display import HTML\n\n\nimport json\nimport h5py\nimport numpy as np\nimport seaborn as sns\nimport tensorflow as tf\n\nfrom src.animation import animate_heat_map\nfrom src.predictor_consumer import PredictionConsumer\n\n# Savime imports\nimport pysavime\nfrom pysavime.util.converter import DataVariableBlockConverter\n\nsns.set_context('notebook')\nsns.set_style('whitegrid')\nsns.set_palette(sns.color_palette(\"Paired\"))\n\ntf.get_logger().setLevel('ERROR')\n\nwith open(data_fp, 'r') as _in:\n data = json.load(_in)", "_____no_output_____" ] ], [ [ "A primeira etapa a ser realizada é converter os dados para um formato processável para o SAVIME.", "_____no_output_____" ] ], [ [ "with h5py.File(dataset_path, 'r') as in_:\n array = in_['real'][...]\n \n# Especificar dimensões\ntime_series = ('time_series', range(array.shape[0]))\ntime_step = ('time_step', range(array.shape[1]))\npos_x = ('pos_x', range(array.shape[2]))\npos_y = ('pos_y', range(array.shape[3]))\n\n# Remover última dimensão espúria\nsqueezed_array = np.squeeze(array, axis=-1)\n\n# Salvar array\ntemperatura_data_fp = os.path.join(data_dir, 'temperatura.data')\nsqueezed_array.ravel().astype('float64').tofile(temperatura_data_fp)", "_____no_output_____" ] ], [ [ "Também é nessário fazer a divisão do conjunto de dados de entrada em x e y. Como dito na parte anterior, cada série temporal possuí 10 instantes de tempo. Além disso, os modelos foram treinados a prever o décimo instante de tempo a partir dos nove anteriores. A critério de exemplo, selecionamos abaixo um grupo de séries temporais para realizar a predição de temperatura.", "_____no_output_____" ] ], [ [ "# Seleciona-se apenas um grupo para predição. 
\nchosen_model_name = data['model']\nchosen_group_ix = 0\nx = squeezed_array[[chosen_group_ix], :-1]\ny = squeezed_array[[chosen_group_ix], 1:]\n\npc = PredictionConsumer(host=tfx_host, port=tfx_port, model_name=chosen_model_name)\ny_hat = pc.predict(x)", "_____no_output_____" ], [ "anim_y = animate_heat_map(np.squeeze(y,axis=0))\nanim_y_html = anim_y.to_html5_video()\nanim_yhat = animate_heat_map(np.squeeze(y_hat,axis=0))\nanim_yhat_html = anim_yhat.to_html5_video()\nHTML(f'<div style=\"float: left;\"> {anim_y_html} </div><div style=\"float: left;\"> {anim_yhat_html} </div>')", "_____no_output_____" ], [ "num_models = 25\nnum_groups, num_time_steps, num_pos_x, num_pos_y = squeezed_array.shape \n\n# Define o dataset com as temperaturas a ser registrado no SAVIME.\ndataset = pysavime.define.file_dataset('temperature_data', temperatura_data_fp, 'double')\nprint('- Dataset CREATE query:', dataset.create_query_str())\n\n# Define o esquema do tar\ngroup_dim = pysavime.define.implicit_tar_dimension('group', 'int32', 0, num_groups - 1)\ntime_step_dim = pysavime.define.implicit_tar_dimension('time_step', 'int32', 0, num_time_steps - 1)\npos_x_dim = pysavime.define.implicit_tar_dimension('pos_x', 'int32', 0, num_pos_x - 1)\npos_y_dim = pysavime.define.implicit_tar_dimension('pos_y', 'int32', 0, num_pos_y - 1)\ntemperature = pysavime.define.tar_attribute('temperature', 'double')\n\ndims = [group_dim, time_step_dim, pos_x_dim, pos_y_dim]\nattributes = [temperature]\ntar = pysavime.define.tar('temperatures_tar', dims, attributes)\nprint('- Tar CREATE query:', tar.create_query_str())\n\n# Define o subtar único responsável por registrar o dataset no tar criado anteriormente.\ngroup_dim_sub = pysavime.define.ordered_subtar_dimension(group_dim, 0, num_groups - 1, True)\ntime_step_dim_sub = pysavime.define.ordered_subtar_dimension(time_step_dim, 0, num_time_steps - 1, True)\npos_x_dim_sub = pysavime.define.ordered_subtar_dimension(pos_x_dim, 0, num_pos_x - 1, True)\npos_y_dim_sub = pysavime.define.ordered_subtar_dimension(pos_y_dim, 0, num_pos_y - 1, True)\ntemperature_sub = pysavime.define.subtar_attribute(temperature, dataset)\n\nsubtar_dims = [group_dim_sub, time_step_dim_sub, pos_x_dim_sub, pos_y_dim_sub]\nsubtar_attrs = [temperature_sub]\nsubtar = pysavime.define.subtar(tar, subtar_dims, subtar_attrs)\nprint('- SubTar LOAD query', subtar.load_query_str())", "- Dataset CREATE query: CREATE_DATASET(\"temperature_data:double:1\", \"@data/temperatura.data\");\n- Tar CREATE query: CREATE_TAR(\"temperatures_tar\", \"*\", \"implicit, group, int32, 0, 399, 1 | implicit, time_step, int32, 0, 9, 1 | implicit, pos_x, int32, 0, 34, 1 | implicit, pos_y, int32, 0, 39, 1\", \"temperature, double: 1\");\n- SubTar LOAD query LOAD_SUBTAR(\"temperatures_tar\", \"ordered, group, 0,399 | ordered, time_step, 0,9 | ordered, pos_x, 0,34 | ordered, pos_y, 0,39\", \"temperature, temperature_data\")\n" ], [ "with pysavime.Client(host='127.0.0.1', port=65000, raise_silent_error=True) as client:\n client.execute(pysavime.operator.create(dataset))\n client.execute(pysavime.operator.create(tar))\n client.execute(pysavime.operator.load(subtar))", "_____no_output_____" ] ], [ [ "Abaixo verificamos se os dados foram corretamente registrados no SAVIME.", "_____no_output_____" ] ], [ [ "with pysavime.Client(host=savime_host, port=savime_port, raise_silent_error=True) as client:\n response = client.execute(pysavime.operator.select(tar))[0]\n \nis_the_same = 
np.isclose(response.attrs['temperature'].reshape(squeezed_array.shape),squeezed_array).all()\nprint('Checagem:', is_the_same)", "Checagem: True\n" ] ], [ [ "O próximo passo é executar o comando PREDICT.", "_____no_output_____" ] ], [ [ "# Vamos selecionar apenas os 9 primeiros instantes de tempo\ncmd = pysavime.operator.subset(tar, time_step_dim.name, 0, 8)\n\n# Definir as dimensões de entrada e saída do nosso modelo\ninput_dims_spec = [(group_dim.name, num_groups),\n (time_step_dim.name, num_time_steps - 1),\n (pos_x_dim.name, num_pos_x),\n (pos_y_dim.name, num_pos_y)]\n\noutput_dims_spec = [(\"time\", 9)]\n\nregister_cmd = pysavime.operator.register_model(model_identifier=chosen_model_name, \n input_dim_specification=input_dims_spec, \n output_dim_specification=output_dims_spec,\n attribute_specification=[temperature.name])\n\npredict_cmd = pysavime.operator.predict(tar=cmd, model_identifier=chosen_model_name)\n\nprint(register_cmd)\nprint(predict_cmd)", "REGISTER_MODEL(arima_25, \"group-400|time_step-9|pos_x-35|pos_y-40\", \"time-9\", \"temperature\")\nPREDICT(SUBSET(temperatures_tar, time_step, 0, 8), arima_25)\n" ], [ "with pysavime.Client(host=savime_host, port=savime_port, raise_silent_error=True) as client:\n client.execute(register_cmd) \n response = client.execute(predict_cmd)[0]", "_____no_output_____" ], [ "pandas_converter = DataVariableBlockConverter('pandas')\npandas_converter(response)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7d5690cd288f476aada5c50e7aa39defbf09a6b
94,309
ipynb
Jupyter Notebook
Convolution_model_Application_v1a.ipynb
Yfyangd/Deep_Learning
815321fde22a1c11e6a818a840f80aec6cb343de
[ "MIT" ]
null
null
null
Convolution_model_Application_v1a.ipynb
Yfyangd/Deep_Learning
815321fde22a1c11e6a818a840f80aec6cb343de
[ "MIT" ]
null
null
null
Convolution_model_Application_v1a.ipynb
Yfyangd/Deep_Learning
815321fde22a1c11e6a818a840f80aec6cb343de
[ "MIT" ]
2
2022-02-14T05:12:59.000Z
2022-02-21T16:04:03.000Z
95.357937
18,850
0.793498
[ [ [ "# Convolutional Neural Networks: Application\n\nWelcome to Course 4's second assignment! In this notebook, you will:\n\n- Implement helper functions that you will use when implementing a TensorFlow model\n- Implement a fully functioning ConvNet using TensorFlow \n\n**After this assignment you will be able to:**\n\n- Build and train a ConvNet in TensorFlow for a classification problem \n\nWe assume here that you are already familiar with TensorFlow. If you are not, please refer the *TensorFlow Tutorial* of the third week of Course 2 (\"*Improving deep neural networks*\").", "_____no_output_____" ], [ "### <font color='darkblue'> Updates to Assignment <font>\n\n#### If you were working on a previous version\n* The current notebook filename is version \"1a\". \n* You can find your work in the file directory as version \"1\".\n* To view the file directory, go to the menu \"File->Open\", and this will open a new tab that shows the file directory.\n\n#### List of Updates\n* `initialize_parameters`: added details about tf.get_variable, `eval`. Clarified test case.\n* Added explanations for the kernel (filter) stride values, max pooling, and flatten functions.\n* Added details about softmax cross entropy with logits.\n* Added instructions for creating the Adam Optimizer.\n* Added explanation of how to evaluate tensors (optimizer and cost).\n* `forward_propagation`: clarified instructions, use \"F\" to store \"flatten\" layer.\n* Updated print statements and 'expected output' for easier visual comparisons.\n* Many thanks to Kevin P. Brown (mentor for the deep learning specialization) for his suggestions on the assignments in this course!", "_____no_output_____" ], [ "## 1.0 - TensorFlow model\n\nIn the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. \n\nAs usual, we will start by loading in the packages. ", "_____no_output_____" ] ], [ [ "import math\nimport numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nimport scipy\nfrom PIL import Image\nfrom scipy import ndimage\nimport tensorflow as tf\nfrom tensorflow.python.framework import ops\nfrom cnn_utils import *\n\n%matplotlib inline\nnp.random.seed(1)", "_____no_output_____" ] ], [ [ "Run the next cell to load the \"SIGNS\" dataset you are going to use.", "_____no_output_____" ] ], [ [ "# Loading the data (signs)\nX_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()", "_____no_output_____" ] ], [ [ "As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.\n\n<img src=\"images/SIGNS.png\" style=\"width:800px;height:300px;\">\n\nThe next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples. ", "_____no_output_____" ] ], [ [ "# Example of a picture\nindex = 6\nplt.imshow(X_train_orig[index])\nprint (\"y = \" + str(np.squeeze(Y_train_orig[:, index])))", "y = 2\n" ] ], [ [ "In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.\n\nTo get started, let's examine the shapes of your data. 
", "_____no_output_____" ] ], [ [ "X_train = X_train_orig/255.\nX_test = X_test_orig/255.\nY_train = convert_to_one_hot(Y_train_orig, 6).T\nY_test = convert_to_one_hot(Y_test_orig, 6).T\nprint (\"number of training examples = \" + str(X_train.shape[0]))\nprint (\"number of test examples = \" + str(X_test.shape[0]))\nprint (\"X_train shape: \" + str(X_train.shape))\nprint (\"Y_train shape: \" + str(Y_train.shape))\nprint (\"X_test shape: \" + str(X_test.shape))\nprint (\"Y_test shape: \" + str(Y_test.shape))\nconv_layers = {}", "number of training examples = 1080\nnumber of test examples = 120\nX_train shape: (1080, 64, 64, 3)\nY_train shape: (1080, 6)\nX_test shape: (120, 64, 64, 3)\nY_test shape: (120, 6)\n" ] ], [ [ "### 1.1 - Create placeholders\n\nTensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.\n\n**Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use \"None\" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint: search for the tf.placeholder documentation\"](https://www.tensorflow.org/api_docs/python/tf/placeholder).", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: create_placeholders\n\ndef create_placeholders(n_H0, n_W0, n_C0, n_y):\n \"\"\"\n Creates the placeholders for the tensorflow session.\n \n Arguments:\n n_H0 -- scalar, height of an input image\n n_W0 -- scalar, width of an input image\n n_C0 -- scalar, number of channels of the input\n n_y -- scalar, number of classes\n \n Returns:\n X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype \"float\"\n Y -- placeholder for the input labels, of shape [None, n_y] and dtype \"float\"\n \"\"\"\n\n ### START CODE HERE ### (≈2 lines)\n X = tf.placeholder(tf.float32, [None, n_H0, n_W0, n_C0])\n Y = tf.placeholder(tf.float32, [None, n_y])\n ### END CODE HERE ###\n \n return X, Y", "_____no_output_____" ], [ "X, Y = create_placeholders(64, 64, 3, 6)\nprint (\"X = \" + str(X))\nprint (\"Y = \" + str(Y))", "X = Tensor(\"Placeholder:0\", shape=(?, 64, 64, 3), dtype=float32)\nY = Tensor(\"Placeholder_1:0\", shape=(?, 6), dtype=float32)\n" ] ], [ [ "**Expected Output**\n\n<table> \n<tr>\n<td>\n X = Tensor(\"Placeholder:0\", shape=(?, 64, 64, 3), dtype=float32)\n\n</td>\n</tr>\n<tr>\n<td>\n Y = Tensor(\"Placeholder_1:0\", shape=(?, 6), dtype=float32)\n\n</td>\n</tr>\n</table>", "_____no_output_____" ], [ "### 1.2 - Initialize parameters\n\nYou will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.\n\n**Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:\n```python\nW = tf.get_variable(\"W\", [1,2,3,4], initializer = ...)\n```\n#### tf.get_variable()\n[Search for the tf.get_variable documentation](https://www.tensorflow.org/api_docs/python/tf/get_variable). 
Notice that the documentation says:\n```\nGets an existing variable with these parameters or create a new one.\n```\nSo we can use this function to create a tensorflow variable with the specified name, but if the variables already exist, it will get the existing variable with that same name.\n", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters():\n \"\"\"\n Initializes weight parameters to build a neural network with tensorflow. The shapes are:\n W1 : [4, 4, 3, 8]\n W2 : [2, 2, 8, 16]\n Note that we will hard code the shape values in the function to make the grading simpler.\n Normally, functions should take values as inputs rather than hard coding.\n Returns:\n parameters -- a dictionary of tensors containing W1, W2\n \"\"\"\n \n tf.set_random_seed(1) # so that your \"random\" numbers match ours\n \n ### START CODE HERE ### (approx. 2 lines of code)\n W1 = tf.get_variable(\"W1\", [4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n W2 = tf.get_variable(\"W2\", [2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n ### END CODE HERE ###\n\n parameters = {\"W1\": W1,\n \"W2\": W2}\n \n return parameters", "_____no_output_____" ], [ "tf.reset_default_graph()\nwith tf.Session() as sess_test:\n parameters = initialize_parameters()\n init = tf.global_variables_initializer()\n sess_test.run(init)\n print(\"W1[1,1,1] = \\n\" + str(parameters[\"W1\"].eval()[1,1,1]))\n print(\"W1.shape: \" + str(parameters[\"W1\"].shape))\n print(\"\\n\")\n print(\"W2[1,1,1] = \\n\" + str(parameters[\"W2\"].eval()[1,1,1]))\n print(\"W2.shape: \" + str(parameters[\"W2\"].shape))", "W1[1,1,1] = \n[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394\n -0.06847463 0.05245192]\nW1.shape: (4, 4, 3, 8)\n\n\nW2[1,1,1] = \n[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058\n -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228\n -0.22779644 -0.1601823 -0.16117483 -0.10286498]\nW2.shape: (2, 2, 8, 16)\n" ] ], [ [ "** Expected Output:**\n\n```\nW1[1,1,1] = \n[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394\n -0.06847463 0.05245192]\nW1.shape: (4, 4, 3, 8)\n\n\nW2[1,1,1] = \n[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058\n -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228\n -0.22779644 -0.1601823 -0.16117483 -0.10286498]\nW2.shape: (2, 2, 8, 16)\n```", "_____no_output_____" ], [ "### 1.3 - Forward propagation\n\nIn TensorFlow, there are built-in functions that implement the convolution steps for you.\n\n- **tf.nn.conv2d(X,W, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W$, this function convolves $W$'s filters on X. The third parameter ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). Normally, you'll choose a stride of 1 for the number of examples (the first value) and for the channels (the fourth value), which is why we wrote the value as `[1,s,s,1]`. You can read the full documentation on [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d).\n\n- **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. For max pooling, we usually operate on a single example at a time and a single channel at a time. So the first and fourth value in `[1,f,f,1]` are both 1. 
You can read the full documentation on [max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool).\n\n- **tf.nn.relu(Z):** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu).\n\n- **tf.contrib.layers.flatten(P)**: given a tensor \"P\", this function takes each training (or test) example in the batch and flattens it into a 1D vector. \n * If a tensor P has the shape (m,h,w,c), where m is the number of examples (the batch size), it returns a flattened tensor with shape (batch_size, k), where $k=h \\times w \\times c$. \"k\" equals the product of all the dimension sizes other than the first dimension.\n * For example, given a tensor with dimensions [100,2,3,4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [flatten](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten).\n\n- **tf.contrib.layers.fully_connected(F, num_outputs):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [full_connected](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected).\n\nIn the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.\n\n\n#### Window, kernel, filter\nThe words \"window\", \"kernel\", and \"filter\" are used to refer to the same thing. This is why the parameter `ksize` refers to \"kernel size\", and we use `(f,f)` to refer to the filter size. Both \"kernel\" and \"filter\" refer to the \"window.\"", "_____no_output_____" ], [ "**Exercise**\n\nImplement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above. \n\nIn detail, we will use the following parameters for all the steps:\n - Conv2D: stride 1, padding is \"SAME\"\n - ReLU\n - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is \"SAME\"\n - Conv2D: stride 1, padding is \"SAME\"\n - ReLU\n - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is \"SAME\"\n - Flatten the previous output.\n - FULLYCONNECTED (FC) layer: Apply a fully connected layer without an non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost. ", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(X, parameters):\n \"\"\"\n Implements the forward propagation for the model:\n CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED\n \n Note that for simplicity and grading purposes, we'll hard-code some values\n such as the stride and kernel (filter) sizes. 
\n Normally, functions should take these values as function parameters.\n \n Arguments:\n X -- input dataset placeholder, of shape (input size, number of examples)\n parameters -- python dictionary containing your parameters \"W1\", \"W2\"\n the shapes are given in initialize_parameters\n\n Returns:\n Z3 -- the output of the last LINEAR unit\n \"\"\"\n \n # Retrieve the parameters from the dictionary \"parameters\" \n W1 = parameters['W1']\n W2 = parameters['W2']\n \n ### START CODE HERE ###\n # CONV2D: stride of 1, padding 'SAME'\n Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')\n # RELU\n A1 = tf.nn.relu(Z1)\n # MAXPOOL: window 8x8, stride 8, padding 'SAME'\n P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding='SAME')\n # CONV2D: filters W2, stride 1, padding 'SAME'\n Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME')\n # RELU\n A2 = tf.nn.relu(Z2)\n # MAXPOOL: window 4x4, stride 4, padding 'SAME'\n P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding='SAME')\n # FLATTEN\n F = tf.contrib.layers.flatten(P2)\n # FULLY-CONNECTED without non-linear activation function (not not call softmax).\n # 6 neurons in output layer. Hint: one of the arguments should be \"activation_fn=None\" \n Z3 = tf.contrib.layers.fully_connected(F, 6, activation_fn=None)\n ### END CODE HERE ###\n\n return Z3", "_____no_output_____" ], [ "tf.reset_default_graph()\n\nwith tf.Session() as sess:\n np.random.seed(1)\n X, Y = create_placeholders(64, 64, 3, 6)\n parameters = initialize_parameters()\n Z3 = forward_propagation(X, parameters)\n init = tf.global_variables_initializer()\n sess.run(init)\n a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})\n print(\"Z3 = \\n\" + str(a))", "Z3 = \n[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064]\n [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]\n" ] ], [ [ "**Expected Output**:\n\n```\nZ3 = \n[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064]\n [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]\n```", "_____no_output_____" ], [ "### 1.4 - Compute cost\n\nImplement the compute cost function below. Remember that the cost function helps the neural network see how much the model's predictions differ from the correct labels. By adjusting the weights of the network to reduce the cost, the neural network can improve its predictions.\n\nYou might find these two functions helpful: \n\n- **tf.nn.softmax_cross_entropy_with_logits(logits = Z, labels = Y):** computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [softmax_cross_entropy_with_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits).\n- **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to calculate the sum of the losses over all the examples to get the overall cost. You can check the full documentation [reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean).\n\n#### Details on softmax_cross_entropy_with_logits (optional reading)\n* Softmax is used to format outputs so that they can be used for classification. 
It assigns a value between 0 and 1 for each category, where the sum of all prediction values (across all possible categories) equals 1.\n* Cross Entropy is compares the model's predicted classifications with the actual labels and results in a numerical value representing the \"loss\" of the model's predictions.\n* \"Logits\" are the result of multiplying the weights and adding the biases. Logits are passed through an activation function (such as a relu), and the result is called the \"activation.\"\n* The function is named `softmax_cross_entropy_with_logits` takes logits as input (and not activations); then uses the model to predict using softmax, and then compares the predictions with the true labels using cross entropy. These are done with a single function to optimize the calculations.\n\n** Exercise**: Compute the cost below using the function above.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: compute_cost \n\ndef compute_cost(Z3, Y):\n \"\"\"\n Computes the cost\n \n Arguments:\n Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6)\n Y -- \"true\" labels vector placeholder, same shape as Z3\n \n Returns:\n cost - Tensor of the cost function\n \"\"\"\n \n ### START CODE HERE ### (1 line of code)\n cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y))\n ### END CODE HERE ###\n \n return cost", "_____no_output_____" ], [ "tf.reset_default_graph()\n\nwith tf.Session() as sess:\n np.random.seed(1)\n X, Y = create_placeholders(64, 64, 3, 6)\n parameters = initialize_parameters()\n Z3 = forward_propagation(X, parameters)\n cost = compute_cost(Z3, Y)\n init = tf.global_variables_initializer()\n sess.run(init)\n a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})\n print(\"cost = \" + str(a))", "cost = 2.91034\n" ] ], [ [ "**Expected Output**: \n```\ncost = 2.91034\n```", "_____no_output_____" ], [ "## 1.5 Model \n\nFinally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. \n\n**Exercise**: Complete the function below. \n\nThe model below should:\n\n- create placeholders\n- initialize parameters\n- forward propagate\n- compute the cost\n- create an optimizer\n\nFinally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer)", "_____no_output_____" ], [ "#### Adam Optimizer\nYou can use `tf.train.AdamOptimizer(learning_rate = ...)` to create the optimizer. The optimizer has a `minimize(loss=...)` function that you'll call to set the cost function that the optimizer will minimize.\n\nFor details, check out the documentation for [Adam Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)", "_____no_output_____" ], [ "#### Random mini batches\nIf you took course 2 of the deep learning specialization, you implemented `random_mini_batches()` in the \"Optimization\" programming assignment. This function returns a list of mini-batches. 
It is already implemented in the `cnn_utils.py` file and imported here, so you can call it like this:\n```Python\nminibatches = random_mini_batches(X, Y, mini_batch_size = 64, seed = 0)\n```\n(You will want to choose the correct variable names when you use it in your code).", "_____no_output_____" ], [ "#### Evaluating the optimizer and cost\n\nWithin a loop, for each mini-batch, you'll use the `tf.Session` object (named `sess`) to feed a mini-batch of inputs and labels into the neural network and evaluate the tensors for the optimizer as well as the cost. Remember that we built a graph data structure and need to feed it inputs and labels and use `sess.run()` in order to get values for the optimizer and cost.\n\nYou'll use this kind of syntax:\n```\noutput_for_var1, output_for_var2 = sess.run(\n fetches=[var1, var2],\n feed_dict={var_inputs: the_batch_of_inputs,\n var_labels: the_batch_of_labels}\n )\n```\n* Notice that `sess.run` takes its first argument `fetches` as a list of objects that you want it to evaluate (in this case, we want to evaluate the optimizer and the cost). \n* It also takes a dictionary for the `feed_dict` parameter. \n* The keys are the `tf.placeholder` variables that we created in the `create_placeholders` function above. \n* The values are the variables holding the actual numpy arrays for each mini-batch. \n* The sess.run outputs a tuple of the evaluated tensors, in the same order as the list given to `fetches`. \n\nFor more information on how to use sess.run, see the documentation [tf.Sesssion#run](https://www.tensorflow.org/api_docs/python/tf/Session#run) documentation.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: model\n\ndef model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,\n num_epochs = 100, minibatch_size = 64, print_cost = True):\n \"\"\"\n Implements a three-layer ConvNet in Tensorflow:\n CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED\n \n Arguments:\n X_train -- training set, of shape (None, 64, 64, 3)\n Y_train -- test set, of shape (None, n_y = 6)\n X_test -- training set, of shape (None, 64, 64, 3)\n Y_test -- test set, of shape (None, n_y = 6)\n learning_rate -- learning rate of the optimization\n num_epochs -- number of epochs of the optimization loop\n minibatch_size -- size of a minibatch\n print_cost -- True to print the cost every 100 epochs\n \n Returns:\n train_accuracy -- real number, accuracy on the train set (X_train)\n test_accuracy -- real number, testing accuracy on the test set (X_test)\n parameters -- parameters learnt by the model. 
They can then be used to predict.\n \"\"\"\n \n ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables\n tf.set_random_seed(1) # to keep results consistent (tensorflow seed)\n seed = 3 # to keep results consistent (numpy seed)\n (m, n_H0, n_W0, n_C0) = X_train.shape \n n_y = Y_train.shape[1] \n costs = [] # To keep track of the cost\n \n # Create Placeholders of the correct shape\n ### START CODE HERE ### (1 line)\n X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)\n ### END CODE HERE ###\n\n # Initialize parameters\n ### START CODE HERE ### (1 line)\n parameters = initialize_parameters()\n ### END CODE HERE ###\n \n # Forward propagation: Build the forward propagation in the tensorflow graph\n ### START CODE HERE ### (1 line)\n Z3 = forward_propagation(X, parameters)\n ### END CODE HERE ###\n \n # Cost function: Add cost function to tensorflow graph\n ### START CODE HERE ### (1 line)\n cost = compute_cost(Z3, Y)\n ### END CODE HERE ###\n \n # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.\n ### START CODE HERE ### (1 line)\n optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\n ### END CODE HERE ###\n \n # Initialize all the variables globally\n init = tf.global_variables_initializer()\n \n # Start the session to compute the tensorflow graph\n with tf.Session() as sess:\n \n # Run the initialization\n sess.run(init)\n \n # Do the training loop\n for epoch in range(num_epochs):\n\n minibatch_cost = 0.\n num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set\n seed = seed + 1\n minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)\n\n for minibatch in minibatches:\n\n # Select a minibatch\n (minibatch_X, minibatch_Y) = minibatch\n \"\"\"\n # IMPORTANT: The line that runs the graph on a minibatch.\n # Run the session to execute the optimizer and the cost.\n # The feedict should contain a minibatch for (X,Y).\n \"\"\"\n ### START CODE HERE ### (1 line)\n _ , temp_cost = sess.run([optimizer, cost], feed_dict={X:minibatch_X, Y:minibatch_Y})\n ### END CODE HERE ###\n \n minibatch_cost += temp_cost / num_minibatches\n \n\n # Print the cost every epoch\n if print_cost == True and epoch % 5 == 0:\n print (\"Cost after epoch %i: %f\" % (epoch, minibatch_cost))\n if print_cost == True and epoch % 1 == 0:\n costs.append(minibatch_cost)\n \n \n # plot the cost\n plt.plot(np.squeeze(costs))\n plt.ylabel('cost')\n plt.xlabel('iterations (per tens)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n\n # Calculate the correct predictions\n predict_op = tf.argmax(Z3, 1)\n correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))\n \n # Calculate accuracy on the test set\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n print(accuracy)\n train_accuracy = accuracy.eval({X: X_train, Y: Y_train})\n test_accuracy = accuracy.eval({X: X_test, Y: Y_test})\n print(\"Train Accuracy:\", train_accuracy)\n print(\"Test Accuracy:\", test_accuracy)\n \n return train_accuracy, test_accuracy, parameters", "_____no_output_____" ] ], [ [ "Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. 
If not, stop the cell and go back to your code!", "_____no_output_____" ] ], [ [ "_, _, parameters = model(X_train, Y_train, X_test, Y_test)", "Cost after epoch 0: 1.917929\nCost after epoch 5: 1.506757\nCost after epoch 10: 0.955359\nCost after epoch 15: 0.845802\nCost after epoch 20: 0.701174\nCost after epoch 25: 0.571977\nCost after epoch 30: 0.518435\nCost after epoch 35: 0.495806\nCost after epoch 40: 0.429827\nCost after epoch 45: 0.407291\nCost after epoch 50: 0.366394\nCost after epoch 55: 0.376922\nCost after epoch 60: 0.299491\nCost after epoch 65: 0.338870\nCost after epoch 70: 0.316400\nCost after epoch 75: 0.310413\nCost after epoch 80: 0.249549\nCost after epoch 85: 0.243457\nCost after epoch 90: 0.200031\nCost after epoch 95: 0.175452\n" ] ], [ [ "**Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.\n\n<table> \n<tr>\n <td> \n **Cost after epoch 0 =**\n </td>\n\n <td> \n 1.917929\n </td> \n</tr>\n<tr>\n <td> \n **Cost after epoch 5 =**\n </td>\n\n <td> \n 1.506757\n </td> \n</tr>\n<tr>\n <td> \n **Train Accuracy =**\n </td>\n\n <td> \n 0.940741\n </td> \n</tr> \n\n<tr>\n <td> \n **Test Accuracy =**\n </td>\n\n <td> \n 0.783333\n </td> \n</tr> \n</table>", "_____no_output_____" ], [ "Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance). \n\nOnce again, here's a thumbs up for your work! ", "_____no_output_____" ] ], [ [ "fname = \"images/thumbs_up.jpg\"\nimage = np.array(ndimage.imread(fname, flatten=False))\nmy_image = scipy.misc.imresize(image, size=(64,64))\nplt.imshow(my_image)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e7d57680d126dc174184b09d5b23559179e08cb2
10,610
ipynb
Jupyter Notebook
corpus/database_demo.ipynb
alexeyqu/zadolbali_corpus
dd12dc915106948ccce26562b3f139913123f867
[ "MIT" ]
null
null
null
corpus/database_demo.ipynb
alexeyqu/zadolbali_corpus
dd12dc915106948ccce26562b3f139913123f867
[ "MIT" ]
null
null
null
corpus/database_demo.ipynb
alexeyqu/zadolbali_corpus
dd12dc915106948ccce26562b3f139913123f867
[ "MIT" ]
null
null
null
52.009804
1,560
0.652498
[ [ [ "from sqlalchemy import create_engine, MetaData, Table\nfrom sqlalchemy.orm import mapper, sessionmaker", "_____no_output_____" ], [ "class Story(object):\n pass\n\nclass PosTagEntry(object):\n pass\n \ndef loadSession():\n dbPath = '../corpus/stories.sqlite'\n engine = create_engine('sqlite:///%s' % dbPath, echo=True)\n \n metadata = MetaData(engine)\n\n bookmarks = Table('stories', metadata, autoload=True)\n mapper(Story, bookmarks)\n \n bookmarks = Table('pos_tags', metadata, autoload=True)\n mapper(PosTagEntry, bookmarks)\n \n Session = sessionmaker(bind=engine)\n session = Session()\n return session\n\nsession = loadSession()", "2017-11-22 15:17:33,669 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1\n2017-11-22 15:17:33,669 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,671 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1\n2017-11-22 15:17:33,672 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,673 INFO sqlalchemy.engine.base.Engine PRAGMA table_info(\"stories\")\n2017-11-22 15:17:33,673 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,676 INFO sqlalchemy.engine.base.Engine SELECT sql FROM (SELECT * FROM sqlite_master UNION ALL SELECT * FROM sqlite_temp_master) WHERE name = 'stories' AND type = 'table'\n2017-11-22 15:17:33,678 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,680 INFO sqlalchemy.engine.base.Engine PRAGMA foreign_key_list(\"stories\")\n2017-11-22 15:17:33,681 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,682 INFO sqlalchemy.engine.base.Engine SELECT sql FROM (SELECT * FROM sqlite_master UNION ALL SELECT * FROM sqlite_temp_master) WHERE name = 'stories' AND type = 'table'\n2017-11-22 15:17:33,682 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,685 INFO sqlalchemy.engine.base.Engine PRAGMA index_list(\"stories\")\n2017-11-22 15:17:33,685 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,686 INFO sqlalchemy.engine.base.Engine PRAGMA index_list(\"stories\")\n2017-11-22 15:17:33,686 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,687 INFO sqlalchemy.engine.base.Engine SELECT sql FROM (SELECT * FROM sqlite_master UNION ALL SELECT * FROM sqlite_temp_master) WHERE name = 'stories' AND type = 'table'\n2017-11-22 15:17:33,688 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,693 INFO sqlalchemy.engine.base.Engine PRAGMA table_info(\"pos_tags\")\n2017-11-22 15:17:33,694 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,695 INFO sqlalchemy.engine.base.Engine SELECT sql FROM (SELECT * FROM sqlite_master UNION ALL SELECT * FROM sqlite_temp_master) WHERE name = 'pos_tags' AND type = 'table'\n2017-11-22 15:17:33,696 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,697 INFO sqlalchemy.engine.base.Engine PRAGMA foreign_key_list(\"pos_tags\")\n2017-11-22 15:17:33,697 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,699 INFO sqlalchemy.engine.base.Engine SELECT sql FROM (SELECT * FROM sqlite_master UNION ALL SELECT * FROM sqlite_temp_master) WHERE name = 'pos_tags' AND type = 'table'\n2017-11-22 15:17:33,699 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,700 INFO sqlalchemy.engine.base.Engine PRAGMA index_list(\"pos_tags\")\n2017-11-22 15:17:33,700 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,701 INFO sqlalchemy.engine.base.Engine PRAGMA index_list(\"pos_tags\")\n2017-11-22 15:17:33,702 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 
15:17:33,704 INFO sqlalchemy.engine.base.Engine PRAGMA index_info(\"sqlite_autoindex_pos_tags_1\")\n2017-11-22 15:17:33,705 INFO sqlalchemy.engine.base.Engine ()\n2017-11-22 15:17:33,707 INFO sqlalchemy.engine.base.Engine SELECT sql FROM (SELECT * FROM sqlite_master UNION ALL SELECT * FROM sqlite_temp_master) WHERE name = 'pos_tags' AND type = 'table'\n2017-11-22 15:17:33,708 INFO sqlalchemy.engine.base.Engine ()\n" ], [ "stories = session.query(Story).all()\nprint(len(stories))\nprint(dir(stories[0]))", "2017-11-22 15:17:33,809 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)\n2017-11-22 15:17:33,811 INFO sqlalchemy.engine.base.Engine SELECT stories.id AS stories_id, stories.title AS stories_title, stories.published AS stories_published, stories.tags AS stories_tags, stories.text AS stories_text, stories.likes AS stories_likes, stories.hrefs AS stories_hrefs, stories.url AS stories_url \nFROM stories\n2017-11-22 15:17:33,815 INFO sqlalchemy.engine.base.Engine ()\n23558\n['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_sa_class_manager', '_sa_instance_state', 'hrefs', 'id', 'likes', 'published', 'tags', 'text', 'title', 'url']\n" ], [ "print(stories[0].text)", "Работаю в провинциальном городе в магазине отделочных материалов и сантехники.Заходит к нам на днях надменная пергидролевая дева, покрытая слоем штукатурки толщиной в палец. Собирается с мыслями, напускает на себя важный вид и обращается ко мне:Дева, медленно и с видом опытного сантехника: Молодой человек, у вас ванны железные есть?Я: Нет, у нас только акрил. Металлических нет.Дева: Молодой человек, я не спрашиваю металлические, я спрашиваю железные!Я: Извините, железных тоже нет.Дева презрительно смотрит на меня, бурчит что-то себе под нос, и, виляя бедрами, уходит. Смотрим в окно. Выходит. Подходит к побитой жизнью шестерке, деловито садится на переднее сиденье, подзывает торопливо курящего поодаль водителя.Дева, возмущенно: Понабрали крестьян, металлические ванны от железных не отличают!Водитель, тяжело вздохнув, затаптывает окурок, занимает свое место, и экипаж отправляется дальше, на поиски волшебной неметаллической ванны из железа.\n" ], [ "# too slow\n#pos_tags = session.query(PosTagEntry).all()\n#print(len(pos_tags))\n#print(dir(pos_tags[0]))", "_____no_output_____" ], [ "import scripts.get_tagged_text as get_tagged", "_____no_output_____" ], [ "print(get_tagged.get_tagged_story(2, session, Story, PosTagEntry))", "2017-11-22 15:17:34,735 INFO sqlalchemy.engine.base.Engine SELECT pos_tags.id AS pos_tags_id, pos_tags.story_id AS pos_tags_story_id, pos_tags.tag AS pos_tags_tag, pos_tags.start AS pos_tags_start, pos_tags.\"end\" AS pos_tags_end \nFROM pos_tags \nWHERE pos_tags.story_id = ?\n2017-11-22 15:17:34,736 INFO sqlalchemy.engine.base.Engine (2,)\nРаботаю<V> в<PR> провинциальном<A=m> городе<S> в<PR> магазине<S> отделочных<A=pl> материалов<S> и<CONJ> сантехники<S>.Заходит<V> к<PR> нам<S-PRO> на<PR> днях<S> надменная<A=f> пергидролевая<A=f> дева<S>, покрытая<V> слоем<S> штукатурки<S> толщиной<S> в<PR> палец<S>. 
Собирается<V> с<PR> мыслями<S>, напускает<V> на<PR> себя<S-PRO=acc> важный<A=m> вид<S> и<CONJ> обращается<V> ко<PR> мне<S-PRO>:Дева<S>, медленно<ADV> и<CONJ> с<PR> видом<S> опытного<A=m> сантехника<S>: Молодой<A=m> человек<S>, у<PR> вас<S-PRO> ванны<A=pl> железные<A=pl> есть<V>?Я<S-PRO>: Нет<PART>, у<PR> нас<S-PRO> только<PART> акрил<V>. Металлических<A=pl> нет<PRAEDIC>.Дева<S>: Молодой<A=sg> человек<S>, я<S-PRO> не<PART> спрашиваю<V> металлические<A=pl>, я<S-PRO> спрашиваю<V> железные<A=pl>!Я<S-PRO>: Извините<V>, железных<A=pl> тоже<PART> нет<PRAEDIC>.Дева<S> презрительно<ADV> смотрит<V> на<PR> меня<S-PRO>, бурчит<V> что<CONJ>-то<S-PRO> себе<S-PRO=dat> под<PR> нос<S>, и<CONJ>, виляя<V> бедрами<S>, уходит<V>. Смотрим<V> в<PR> окно<S>. Выходит<V>. Подходит<V> к<PR> побитой<A=f> жизнью<S> шестерке<S>, деловито<ADV> садится<V> на<PR> переднее<A=n> сиденье<S>, подзывает<V> торопливо<ADV> курящего<V> поодаль<S> водителя<S>.Дева<S>, возмущенно<ADV>: Понабрали<V> крестьян<S>, металлические<A=pl> ванны<S> от<PR> железных<A=pl> не<PART> отличают<V>!Водитель<S>, тяжело<ADV> вздохнув<V>, затаптывает<V> окурок<S>, занимает<V> свое<A=n> место<S>, и<CONJ> экипаж<S> отправляется<V> дальше<ADV=comp>, на<PR> поиски<S> волшебной<A=f> неметаллической<A=f> ванны<S> из<PR> железа<S>.\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7d58fdf7cddb5e97b4b14761a6a5068fd46482d
22,425
ipynb
Jupyter Notebook
data/GenesisGrabber.ipynb
yoki31/visualize
178bae9f7defd37e263bbec50599a883af909f27
[ "MIT" ]
null
null
null
data/GenesisGrabber.ipynb
yoki31/visualize
178bae9f7defd37e263bbec50599a883af909f27
[ "MIT" ]
null
null
null
data/GenesisGrabber.ipynb
yoki31/visualize
178bae9f7defd37e263bbec50599a883af909f27
[ "MIT" ]
null
null
null
28.314394
259
0.501672
[ [ [ "# Imports", "_____no_output_____" ] ], [ [ "import requests\nimport getpass\nimport pickle\nimport io\nimport time\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport itertools", "_____no_output_____" ] ], [ [ "# Login\nhttps://www.statistikdaten.bayern.de/genesis/online?Menu=Anmeldung#abreadcrumb", "_____no_output_____" ] ], [ [ "username = input()", "_____no_output_____" ], [ "password = getpass.getpass()", "_____no_output_____" ] ], [ [ "## Test login", "_____no_output_____" ] ], [ [ "class GenesisApi:\n \n def __init__(self, username, password, polling_rate=5):\n self.username = username\n self.password = password\n self.polling_rate = polling_rate\n \n self.__base_url = 'https://www.statistikdaten.bayern.de/genesisWS/rest/2020/'\n \n self.__base_params = {\n 'username': username,\n 'password': password,\n 'language': 'de'\n }\n \n self.__default_table_params = self.__base_params.copy()\n self.__default_table_params.update({\n 'name': '',\n 'area': 'all',\n 'compress': 'false',\n 'transpose': 'false',\n 'startyear': '',\n 'endyear': '',\n 'timeslices': '',\n 'regionalvariable': '',\n 'regionalkey': '',\n 'classifyingkey1': '',\n 'classifyingvariable2': '',\n 'classifyingkey2': '',\n 'classifyingvariable3': '',\n 'classifyingkey3': '',\n 'job': 'true'\n })\n \n self.__default_jobs_params = self.__base_params.copy()\n self.__default_jobs_params.update({\n 'selection': '',\n 'searchcriterion': 'code',\n 'sortcriterion': 'code',\n 'type': 'all',\n 'area': 'all',\n 'pagelength': '100'\n })\n \n self.__default_result_params = self.__base_params.copy()\n self.__default_result_params.update({\n 'name': '',\n 'area': 'all',\n 'compress': 'false'\n })\n \n def check_login(self):\n response = requests.get(self.__base_url + 'helloworld/logincheck', params=self.__base_params)\n b'{\"Status\":\"Sie wurden erfolgreich an- und abgemeldet!\",\"Username\":\"GB3U65P838\"}'\n try:\n return response.json()['Status'] == 'Sie wurden erfolgreich an- und abgemeldet!'\n except Exception as e:\n return False\n\n\n def get_table(self, name, startyear=''):\n startyear = str(startyear)\n \n params = self.__default_table_params.copy()\n params['name'] = name\n params['startyear'] = startyear\n \n response = requests.get(self.__base_url + 'data/table', params=params)\n \n data = response.json()\n code = data['Status']['Code']\n if (code == 0): # Success\n return data\n elif (code == 99): # Table is too big a job has been created\n print('Table is too big, created a job.')\n result_name = data['Status']['Content'].split(':', 1)[1][1:]\n return self.get_job_result(result_name)\n else:\n params['password'] = '***'\n print('Error requesting ' + name + ' with params:', params, 'response:', data)\n return data\n \n def is_job_ready(self, name):\n params = self.__default_jobs_params.copy()\n params['selection'] = 'Werteabruf ' + name\n \n response = requests.get(self.__base_url + 'catalogue/jobs', params=params)\n try:\n return response.json()['List'][0]['State'] == 'Fertig'\n except Exception as e:\n return False\n \n def delete_job_result(self, name):\n params = self.__default_result_params.copy()\n params['name'] = name\n response = requests.get(self.__base_url + 'profile/removeResult', params=params)\n return response \n \n def get_job_result(self, name):\n params = self.__default_result_params.copy()\n params['name'] = name\n \n while(not self.is_job_ready(name)):\n print('Data is not ready waiting ' + str(self.polling_rate) + ' seconds longer.')\n 
time.sleep(self.polling_rate)\n \n response = requests.get(self.__base_url + 'data/result', params=params)\n self.delete_job_result(name)\n return response.json()\n", "_____no_output_____" ], [ "genesis = GenesisApi(username, password)\ngenesis.check_login()", "_____no_output_____" ] ], [ [ "# Download data\n\nNote: This takes a long time", "_____no_output_____" ] ], [ [ "responses_demographic = {}\n\nfor year in range(1980, 2020 + 1):\n print('Requesting table for the year ' + str(year))\n response = genesis.get_table('12411-003r', year)\n print('Got data')\n responses_demographic[str(year)] = response", "_____no_output_____" ], [ "responses_area = {}\n\n# 33111-201r 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2009, 2010, 2011, 2012, 2013\n# 33111-101r 2011 - 2015\n# 33111-001r 2014 - 2020\n\nfor year in [1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2009, 2010, 2011, 2012, 2013]:\n print('Requesting table for the year ' + str(year))\n response = genesis.get_table('33111-201r', year)\n print('Got data')\n responses_area[str(year)] = response\n\nfor year in range(2014, 2020 + 1):\n print('Requesting table for the year ' + str(year))\n response = genesis.get_table('33111-001r', year)\n print('Got data')\n responses_area[str(year)] = response", "_____no_output_____" ] ], [ [ "# Convert to DataFrame", "_____no_output_____" ] ], [ [ "def convert_to_dataframe(response, start_at_line, date_line, header_line):\n raw_content = response['Object']['Content']\n content = raw_content.split('\\n', start_at_line)\n date = content[date_line].split(';',1)[0]\n csv = io.StringIO(content[header_line] + '\\n' + content[start_at_line].split('\\n__________', 1)[0])\n df = pd.read_csv(csv, ';')\n df['date'] = pd.to_datetime(date, format='%d.%m.%Y')\n return df", "_____no_output_____" ] ], [ [ "## Demographic", "_____no_output_____" ] ], [ [ "dfs = list()\nfor year, response in responses_demographic.items():\n df = convert_to_dataframe(response, start_at_line=6, date_line=4, header_line=5)\n dfs.append(df)\n\ndf_demographic = pd.concat(dfs, axis=0, ignore_index=True)\n\ncolumn_names = df_demographic.columns.values\ncolumn_names[0] = 'AGS'\ncolumn_names[1] = 'Gemeinde'\ndf_demographic.columns = column_names\n\ndf_demographic['Gemeinde'] = df_demographic['Gemeinde'].str.strip()\ndf_demographic['Insgesamt'] = pd.to_numeric(df_demographic['Insgesamt'], errors='coerce')\ndf_demographic['männlich'] = pd.to_numeric(df_demographic['männlich'], errors='coerce')\ndf_demographic['weiblich'] = pd.to_numeric(df_demographic['weiblich'], errors='coerce')", "_____no_output_____" ], [ "df_demographic\n# TODO Filter regierungsbezirke\n# TODO Filter male and female", "_____no_output_____" ] ], [ [ "## Area", "_____no_output_____" ] ], [ [ "dfs = list()\nfor year, response in responses_area.items():\n df = convert_to_dataframe(response, start_at_line=10, date_line=5, header_line=8)\n\n column_names = df.columns.values\n column_names[0] = 'AGS'\n column_names[1] = 'Gemeinde'\n df.columns = column_names\n\n for column_name in column_names[2: len(column_names) - 1]:\n df[column_name] = pd.to_numeric(df[column_name].str.replace(',', '.'), errors='coerce')\n\n df['Gemeinde'] = df['Gemeinde'].str.strip()\n \n dfs.append(df)\n\ndf_area = pd.concat(dfs, axis=0, ignore_index=True)", "_____no_output_____" ], [ "df_area", "_____no_output_____" ], [ "# TODO Map old area codes to new ones\n# TODO Map area codes to sealed and non-sealed\n# TODO Filter regierungsbezirke", "_____no_output_____" ] ], [ [ "## Combined", "_____no_output_____" ] 
], [ [ "df_all = pd.merge(df_area, df_demographic, how='left', on=['AGS', 'Gemeinde', 'date'])\ndf_all.rename(columns={'Insgesamt_x':'Insgesamt Fläche', 'Insgesamt_y':'Insgesamt Bewohner'}, inplace=True)\ndf_all", "_____no_output_____" ] ], [ [ "# Save and load data", "_____no_output_____" ] ], [ [ "df_demographic.to_pickle('df_demographic.pickle')\ndf_area.to_pickle('df_area.pickle')\n\nwith open('responses_demographic.pickle', 'wb') as f:\n pickle.dump(responses_demographic, f, pickle.HIGHEST_PROTOCOL)\n\nwith open('responses_area.pickle', 'wb') as f:\n pickle.dump(responses_area, f, pickle.HIGHEST_PROTOCOL)", "_____no_output_____" ], [ "df_demographic = pd.read_pickle('df_demographic.pickle')\ndf_area = pd.read_pickle('df_area.pickle')\n\nwith open('responses_demographic.pickle', 'rb') as f:\n responses_demographic = pickle.load(f)\n \nwith open('responses_area.pickle', 'rb') as f:\n responses_area = pickle.load(f)", "_____no_output_____" ] ], [ [ "## Categorize", "_____no_output_____" ] ], [ [ "categories = {\n \"living\": [\n \"Wohnen\",\n \"11000 Wohnbaufläche\",\n ],\n\n \"industry\": [\n \"Gewerbe, Industrie\",\n \"Betriebsfläche (ohne Abbauland)\",\n \"Abbauland\",\n \"12100 Industrie und Gewerbe\",\n \"12200 Handel und Dienstleistung\",\n \"12300 Versorgungsanlage\",\n \"12400 Entsorgung\",\n \"13000 Halde\",\n \"14000 Bergbaubetrieb\",\n \"15000 Tagebau, Grube, Steinbruch\",\n ],\n\n \"transport_infrastructure\": [\n \"Straße, Weg, Platz\",\n \"sonstige Verkehrsfläche\",\n \"21000 Straßenverkehr\",\n \"22000 Weg\",\n \"23000 Platz\",\n \"24000 Bahnverkehr\",\n \"25000 Flugverkehr\",\n \"26000 Schiffsverkehr\",\n \"42000 Hafenbecken\",\n ],\n\n \"nature_and_water\": [\n \"Moor\",\n \"Landwirtschaftsfläche (ohne Moor, Heide)\",\n \"Grünanlage\",\n \"Heide\",\n \"Waldfläche\",\n \"Wasserfläche\",\n \"Unland\",\n \"18400 Grünanlage\",\n \"31100 Ackerland\",\n \"31200 Grünland\",\n \"31300 Gartenland\",\n \"31400 Weingarten\",\n \"31500 Obstplantage\",\n \"32000 Wald\",\n \"33000 Gehölz\",\n \"34000 Heide\",\n \"35000 Moor\",\n \"36000 Sumpf\",\n \"37000 Unland, Vegetationslose Fläche\",\n \"41000 Fließgewässer\",\n \"43000 Stehendes Gewässer\",\n ],\n\n \"miscellaneous\": [\n \"Flächen anderer Nutzung (ohne Unland, Friedhof)\",\n \"sonstige Erholungsfläche\",\n \"sonstige Gebäude- und Freifläche\",\n \"Friedhof\",\n \"16000 Fläche gemischter Nutzung\",\n \"17000 Fläche besonderer funktionaler Prägung\",\n \"18100 Sportanlage\",\n \"18200 Freizeitanlage\",\n \"19000 Friedhof\",\n \"18300 Erholungsfläche\",\n ]\n}", "_____no_output_____" ], [ "# Check if we classified all columns and used each only once\nall_columns = set(df_area.columns)\n\nfor l in categories.values():\n all_columns = all_columns - set(l)\n \nall_columns = all_columns - set(['AGS', 'Gemeinde', 'Insgesamt', 'date'])\n\nif (len(all_columns) != 0):\n print (\"The categories\", all_columns, \"have not yet been categorized.\")\n\nfor ((name1, l1), (name2, l2)) in itertools.combinations(categories.items(), 2):\n if (not set(l1).isdisjoint(l2)):\n print(name1, \"and\", name2, \"contain the same category.\")", "_____no_output_____" ], [ "for (name, category) in categories.items():\n df_area[name] = df_area.loc[:,category].sum(axis=1)\n df_area.drop(category, axis=1, inplace=True)\n df_area[name + '_percent'] = df_area[name] / df_area['Insgesamt']", "_____no_output_____" ], [ "used_areas = [\n \"living\",\n \"industry\",\n \"transport_infrastructure\"\n]\ndf_area['used_area'] = 0\nfor name in used_areas:\n 
df_area['used_area'] = df_area['used_area'] + df_area.loc[:,used_areas].sum(axis=1)\n\ndf_area['used_area_percent'] = df_area['used_area'] / df_area['Insgesamt']", "_____no_output_____" ] ], [ [ "## Rename columns", "_____no_output_____" ] ], [ [ "df_area.rename(columns={\"Insgesamt\": \"total\", \"Gemeinde\": \"municipality\"}, inplace=True)", "_____no_output_____" ] ], [ [ "## Filter unused municipalities", "_____no_output_____" ] ], [ [ "df_area = df_area[df_area[\"AGS\"] <= 9999]", "_____no_output_____" ] ], [ [ "## Merge demographic data", "_____no_output_____" ] ], [ [ "df_area", "_____no_output_____" ], [ "df_demographic.drop([\"männlich\", \"weiblich\"], axis=1, inplace=True)\ndf_demographic.rename(columns={\"Gemeinde\": \"municipality\", \"Insgesamt\": \"demographic\"}, inplace=True)", "_____no_output_____" ], [ "df_area = pd.merge(df_area, df_demographic, how='left', on=['AGS', 'municipality', 'date'])", "_____no_output_____" ] ], [ [ "## Export to JSON", "_____no_output_____" ] ], [ [ "df_area", "_____no_output_____" ], [ "df_export = df_area.copy()\ndf_export['date'] = df_export['date'].dt.strftime('%d.%m.%Y')\n\nwith open(\"data.json\", \"w\", encoding=\"utf-8\") as f:\n df_export.to_json(f, orient=\"records\", force_ascii=False)", "_____no_output_____" ] ], [ [ "# Basic graphs", "_____no_output_____" ] ], [ [ "f, ax = plt.subplots(figsize=(7, 7))\nax.set(yscale=\"log\")\ng = sns.lineplot(data=df_demographic[(df_demographic['Gemeinde']=='Friedberg, St') | (df_demographic['Gemeinde']=='Augsburg (Krfr.St)') | (df_demographic['Gemeinde']=='Garmisch-Partenkirchen, M')], style='Gemeinde', x='date', y='Insgesamt', ax=ax)\ng.set_title('Einwohner')\ng.set(ylim=(1, None))\ng\n#sns.lineplot(data=df_demographic, style='Gemeinde', x='date', y='Insgesamt', ax=ax)#, ylim=(0,300000))", "_____no_output_____" ], [ "f, ax = plt.subplots(figsize=(7, 7))\n#ax.set(yscale=\"log\")\ng = sns.lineplot(data=df_area[(df_area['municipality']=='Friedberg, St') | (df_area['municipality']=='Augsburg (Krfr.St)') | (df_area['municipality']=='Garmisch-Partenkirchen, M')], style='municipality', x='date', y='nature_and_water_percent', ax=ax)\ng.set_title('Natur und Wasserflächen')\n#g.set(ylim=(0, None))\ng", "_____no_output_____" ], [ "gem = ['Bayern']#, 'Oberbayern', 'Schwaben']\nsize = 10\nf, axs = plt.subplots(len(gem), 1, figsize=(size*3, len(gem)*size*3))\n\n#df_area_2 = df_area_2[df_area_2['date'] > pd.to_datetime(\"1.1.2010\", format='%d.%m.%Y')]\n\nfor i in range(0, len(gem)):\n g = df_area[(df_area['municipality']==gem[i])].plot.area(\n x='date', \n y=['living_percent', 'industry_percent', 'transport_infrastructure_percent', 'nature_and_water_percent', 'miscellaneous_percent'], \n stacked=True, \n ax=(axs if len(gem) == 1 else axs[i]))\n g.set_title('Flächen in ' + gem[i])\n g.set(ylim=(0, None))\n\nplt.savefig('flächen.jpg')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7d59282f3970466c31d89a56f73ebdc4b0fb53d
227,388
ipynb
Jupyter Notebook
BNN Model.ipynb
mdtycho/Zar-Currency-Prediction-Model
30884cf108717daea1d7b26c2525a9fc83f8fd51
[ "MIT" ]
null
null
null
BNN Model.ipynb
mdtycho/Zar-Currency-Prediction-Model
30884cf108717daea1d7b26c2525a9fc83f8fd51
[ "MIT" ]
null
null
null
BNN Model.ipynb
mdtycho/Zar-Currency-Prediction-Model
30884cf108717daea1d7b26c2525a9fc83f8fd51
[ "MIT" ]
null
null
null
47.402126
13,864
0.515137
[ [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nimport tensorflow as tf\nfrom edward.models import Categorical, Normal\nimport edward as ed\nimport pandas as pd\n\nimport warnings\nwarnings.filterwarnings('ignore')", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n" ] ], [ [ "## Import data", "_____no_output_____" ] ], [ [ "pd.set_option('display.max_rows', 500)\npd.set_option('display.max_columns', 38)", "_____no_output_____" ], [ "df = pd.read_csv('zar_dataset.csv')", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 4986 entries, 0 to 4985\nData columns (total 37 columns):\nRSI 4986 non-null float64\nTSI 4986 non-null float64\nATR 4986 non-null float64\nBHBI 4986 non-null float64\nBBL 4986 non-null float64\nBBH 4986 non-null float64\nBLBI 4986 non-null float64\nBBMAVG 4986 non-null float64\nDCH 4986 non-null float64\nDCHI 4986 non-null float64\nDCL 4986 non-null float64\nDCLI 4986 non-null float64\nKCC 4986 non-null float64\nKCH 4986 non-null float64\nKCL 4986 non-null float64\nADX 4986 non-null float64\nADXI 4986 non-null int64\nADXN 4986 non-null float64\nADXP 4986 non-null float64\nCCI 4986 non-null float64\nDPO 4986 non-null float64\nSEMA 4986 non-null float64\nLEMA 4986 non-null float64\nIchimoku 4986 non-null float64\nIchimoku_b 4986 non-null float64\nKST 4986 non-null float64\nKST_SIG 4986 non-null float64\nMACD 4986 non-null float64\nMACD_DIFF 4986 non-null float64\nMACD_SIG 4986 non-null float64\nMI 4986 non-null float64\nTRIX 4986 non-null float64\nVIN 4986 non-null float64\nVIP 4986 non-null float64\nCR 4986 non-null float64\nDR 4986 non-null float64\ntarget_return 4986 non-null float64\ndtypes: float64(36), int64(1)\nmemory usage: 1.4 MB\n" ], [ "df.head()", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ] ], [ [ "## Create Labels For Data, Classification and Regression Labels", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import StandardScaler\n\nfrom sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "scaler = StandardScaler()", "_____no_output_____" ], [ "df['target_clf'] = df['target_return'].apply(lambda x: float(x/abs(x)) if x!=0 else -1)", "_____no_output_____" ], [ "df['target_clf'] = df['target_clf'].apply(lambda x: x if x==1 else 0)", "_____no_output_____" ], [ "df.rename(columns = {'target_return':'target_reg'}, inplace = True)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "X = df[['RSI', 'TSI', 'ATR', 'BHBI', 'BBL', 'BBH', 'BLBI', 'BBMAVG', 'DCH',\n 'DCHI', 'DCL', 'DCLI', 'KCC', 'KCH', 'KCL', 'ADX', 'ADXI', 'ADXN',\n 'ADXP', 'CCI', 'DPO', 'SEMA', 'LEMA', 'Ichimoku', 'Ichimoku_b', 'KST',\n 'KST_SIG', 'MACD', 'MACD_DIFF', 'MACD_SIG', 'MI', 'TRIX', 'VIN', 'VIP',\n 'CR', 'DR']].as_matrix()", "C:\\Users\\Spare\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:5: FutureWarning: Method .as_matrix will be removed in a future version. 
Use .values instead.\n \"\"\"\n" ], [ "y_cl = df['target_clf'].as_matrix()", "C:\\Users\\Spare\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "X_train, X_test, y_train, y_test = train_test_split(X, y_cl, test_size=0.20)", "_____no_output_____" ] ], [ [ "## Function for Feeding Data In Batches", "_____no_output_____" ] ], [ [ "def next_batch(num, data, labels):\n    '''\n    Return a total of `num` random samples and labels. \n    '''\n    idx = np.arange(0 , len(data))\n    np.random.shuffle(idx)\n    idx = idx[:num]\n    data_shuffle = [data[ i] for i in idx]\n    labels_shuffle = [labels[ i] for i in idx]\n\n    return np.asarray(data_shuffle), np.asarray(labels_shuffle)", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ] ], [ [ "## Build Bayesian Model", "_____no_output_____" ] ], [ [ "N = 100 # number of rows in a minibatch.\nD = 36 # number of features.\nK = 2 # number of classes.", "_____no_output_____" ], [ "# Create a placeholder to hold the data (in minibatches) in a TensorFlow graph.\nx = tf.placeholder(tf.float32, [None, D])\n# Normal(0,1) priors for the variables. Note that the syntax assumes TensorFlow 1.1.\nw = Normal(loc=tf.zeros([D, K]), scale=tf.ones([D, K]))\nb = Normal(loc=tf.zeros(K), scale=tf.ones(K))\n# Categorical likelihood for classification.\ny = Categorical(tf.matmul(x,w)+b)", "_____no_output_____" ], [ "# Construct the q(w) and q(b). In this case we assume Normal distributions.\nqw = Normal(loc=tf.Variable(tf.random_normal([D, K])),\n            scale=tf.nn.softplus(tf.Variable(tf.random_normal([D, K]))))\nqb = Normal(loc=tf.Variable(tf.random_normal([K])),\n            scale=tf.nn.softplus(tf.Variable(tf.random_normal([K]))))", "_____no_output_____" ], [ "# We use a placeholder for the labels in anticipation of the training data.\ny_ph = tf.placeholder(tf.int32, [N])\n# Define the VI inference technique, i.e. minimise the KL divergence between q and p.\ninference = ed.KLqp({w: qw, b: qb}, data={y:y_ph})", "_____no_output_____" ], [ "# Initialise the inference variables\ninference.initialize(n_iter=5000, n_print=100, scale={y: float(X_train.shape[0]) / N})", "_____no_output_____" ], [ "# We will use an interactive session.\nsess = tf.InteractiveSession()\n# Initialise all the variables in the session.\ntf.global_variables_initializer().run()", "_____no_output_____" ], [ "# Let the training begin. We load the data in minibatches and update the VI inference using each new batch.\nfor _ in range(inference.n_iter):\n    X_batch, Y_batch = next_batch(N, X_train, y_train)\n    # TensorFlow method gives the label data in a one-hot vector format. 
We convert that into a single label.\n #Y_batch = np.argmax(Y_batch,axis=1)\n info_dict = inference.update(feed_dict={x: X_batch, y_ph: Y_batch})\n inference.print_progress(info_dict)", "5000/5000 [100%] ██████████████████████████████ Elapsed: 13s | Loss: 2932.863\n" ], [ "X_test = X_test.astype(np.float32)", "_____no_output_____" ], [ "X_test.dtype", "_____no_output_____" ], [ "# Generate samples the posterior and store them.\nn_samples = 200\nprob_lst = []\nsamples = []\nw_samples = []\nb_samples = []\nfor _ in range(n_samples):\n w_samp = qw.sample()\n b_samp = qb.sample()\n w_samples.append(w_samp)\n b_samples.append(b_samp)\n # Also compue the probabiliy of each class for each (w,b) sample.\n prob = tf.nn.softmax(tf.matmul( X_test,w_samp ) + b_samp)\n prob_lst.append(prob.eval())\n sample = tf.concat([tf.reshape(w_samp,[-1]),b_samp],0)\n samples.append(sample.eval())", "_____no_output_____" ] ], [ [ "## Compute The Accuracy Distribution For The Bayesian Neural Net", "_____no_output_____" ] ], [ [ "# Compute the accuracy of the model. \n# For each sample we compute the predicted class and compare with the test labels.\n# Predicted class is defined as the one which as maximum proability.\n# We perform this test for each (w,b) in the posterior giving us a set of accuracies\n# Finally we make a histogram of accuracies for the test data.\nfig, axes = plt.subplots(figsize = (15, 8))\naccy_test = []\nfor prob in prob_lst:\n y_trn_prd = np.argmax(prob,axis=1).astype(np.float32)\n acc = (y_trn_prd == y_test).mean()*100\n accy_test.append(acc)\n\naxes.hist(accy_test)\naxes.set_title(\"Histogram of prediction accuracies on the test data\")\naxes.set_xlabel(\"Accuracy\")# Compute the accuracy of the model. \n# For each sample we compute the predicted class and compare with the test labels.\n# Predicted class is defined as the one which as maximum proability.\n# We perform this test for each (w,b) in the posterior giving us a set of accuracies\n# Finally we make a histogram of accuracies for the test data.\nfig, axes = plt.subplots(figsize = (15, 8))\naccy_test = []\nfor prob in prob_lst:\n y_trn_prd = np.argmax(prob,axis=1).astype(np.float32)\n acc = (y_trn_prd == y_test).mean()*100\n accy_test.append(acc)\n\naxes.hist(accy_test)\naxes.set_title(\"Histogram of prediction accuracies on the test data\")\naxes.set_xlabel(\"Accuracy\")\naxes.set_ylabel(\"Frequency\")\n\nfig.savefig('accuracy_plot.png')\naxes.set_ylabel(\"Frequency\")\n\nfig.savefig('accuracy_plot.png')", "_____no_output_____" ], [ "# Here we compute the mean of probabilties for each class for all the (w,b) samples.\n# We then use the class with maximum of the mean proabilities as the prediction. \n# In other words, we have used (w,b) samples to construct a set of models and\n# used their combined outputs to make the predcitions.\nY_pred = np.argmax(np.mean(prob_lst,axis=0),axis=1)\nprint(\"accuracy in predicting the test data = \", (Y_pred == y_test).mean()*100)", "accuracy in predicting the test data = 91.28256513026052\n" ], [ "# Load the first row from the test data and its label.\ntest_row = X_test[2]\ntest_label = y_test[2]\nprint('truth = ',test_label)", "truth = 0.0\n" ], [ "# Now the check what the model perdicts for each (w,b) sample from the posterior. 
This may take a few seconds...\nsing_img_probs = []\nfor w_samp,b_samp in zip(w_samples,b_samples):\n prob = tf.nn.softmax(tf.matmul( X_test[2:3],w_samp ) + b_samp)\n sing_img_probs.append(prob.eval())", "_____no_output_____" ], [ "# Create a histogram of these predictions.\nfig, axes = plt.subplots(figsize = (15, 8))\naxes.hist(np.argmax(sing_img_probs,axis=2),bins = range(3))\naxes.set_xticks(np.arange(0,2))\naxes.set_xlim(0,2)\naxes.set_xlabel(\"Accuracy of the prediction of the test row\")\naxes.set_ylabel(\"Frequency\")", "_____no_output_____" ], [ "y_test.mean()", "_____no_output_____" ] ], [ [ "# Test The Model On Brazil Data, We Should Get A wider Distribution Of Accuracies", "_____no_output_____" ], [ "## Brazil", "_____no_output_____" ] ], [ [ "brazil = pd.read_csv('brazil_clean.csv')", "_____no_output_____" ], [ "brazil.head()", "_____no_output_____" ], [ "brazil.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 7616 entries, 0 to 7615\nData columns (total 39 columns):\nDate 7616 non-null object\nRSI 7616 non-null float64\nTSI 7616 non-null float64\nATR 7616 non-null float64\nBBH 7616 non-null float64\nBHBI 7616 non-null float64\nBBL 7616 non-null float64\nBLBI 7616 non-null float64\nBBMAVG 7616 non-null float64\nDCH 7616 non-null float64\nDCHI 7616 non-null float64\nDCL 7616 non-null float64\nDCLI 7616 non-null float64\nKCC 7616 non-null float64\nKCH 7616 non-null float64\nKCL 7616 non-null float64\nADX 7616 non-null float64\nADXI 7616 non-null int64\nADXN 7616 non-null float64\nADXP 7616 non-null float64\nCCI 7616 non-null float64\nDPO 7616 non-null float64\nSEMA 7616 non-null float64\nLEMA 7616 non-null float64\nIchimoku 7616 non-null float64\nIchimoku_b 7616 non-null float64\nKST 7616 non-null float64\nKST_SIG 7616 non-null float64\nMACD 7616 non-null float64\nMACD_DIFF 7616 non-null float64\nMACD_SIG 7616 non-null float64\nMI 7616 non-null float64\nTRIX 7616 non-null float64\nVIN 7616 non-null float64\nVIP 7616 non-null float64\nCR 7616 non-null float64\nDR 7616 non-null float64\ntarget_return 7616 non-null float64\ntarget_cl 7616 non-null float64\ndtypes: float64(37), int64(1), object(1)\nmemory usage: 2.3+ MB\n" ], [ "brazil['Date'] = pd.to_datetime(brazil['Date'])", "_____no_output_____" ], [ "brazil.head()", "_____no_output_____" ], [ "brazil.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 7616 entries, 0 to 7615\nData columns (total 39 columns):\nDate 7616 non-null datetime64[ns]\nRSI 7616 non-null float64\nTSI 7616 non-null float64\nATR 7616 non-null float64\nBBH 7616 non-null float64\nBHBI 7616 non-null float64\nBBL 7616 non-null float64\nBLBI 7616 non-null float64\nBBMAVG 7616 non-null float64\nDCH 7616 non-null float64\nDCHI 7616 non-null float64\nDCL 7616 non-null float64\nDCLI 7616 non-null float64\nKCC 7616 non-null float64\nKCH 7616 non-null float64\nKCL 7616 non-null float64\nADX 7616 non-null float64\nADXI 7616 non-null int64\nADXN 7616 non-null float64\nADXP 7616 non-null float64\nCCI 7616 non-null float64\nDPO 7616 non-null float64\nSEMA 7616 non-null float64\nLEMA 7616 non-null float64\nIchimoku 7616 non-null float64\nIchimoku_b 7616 non-null float64\nKST 7616 non-null float64\nKST_SIG 7616 non-null float64\nMACD 7616 non-null float64\nMACD_DIFF 7616 non-null float64\nMACD_SIG 7616 non-null float64\nMI 7616 non-null float64\nTRIX 7616 non-null float64\nVIN 7616 non-null float64\nVIP 7616 non-null float64\nCR 7616 non-null float64\nDR 7616 non-null float64\ntarget_return 7616 non-null float64\ntarget_cl 7616 non-null float64\ndtypes: 
datetime64[ns](1), float64(37), int64(1)\nmemory usage: 2.3 MB\n" ], [ "brazil['target_cl'] = brazil['target_cl'].apply(lambda x: x if x==1 else 0)", "_____no_output_____" ], [ "brazil.rename(columns = {'target_return':'target_reg'}, inplace = True)", "_____no_output_____" ], [ "brazil.head()", "_____no_output_____" ], [ "brazil.describe()", "_____no_output_____" ], [ "brazil.set_index('Date', inplace = True)", "_____no_output_____" ], [ "brazil.head()", "_____no_output_____" ], [ "X_br = brazil[['RSI', 'TSI', 'ATR', 'BHBI', 'BBL', 'BBH', 'BLBI', 'BBMAVG', 'DCH',\n 'DCHI', 'DCL', 'DCLI', 'KCC', 'KCH', 'KCL', 'ADX', 'ADXI', 'ADXN',\n 'ADXP', 'CCI', 'DPO', 'SEMA', 'LEMA', 'Ichimoku', 'Ichimoku_b', 'KST',\n 'KST_SIG', 'MACD', 'MACD_DIFF', 'MACD_SIG', 'MI', 'TRIX', 'VIN', 'VIP',\n 'CR', 'DR']].as_matrix()", "C:\\Users\\Spare\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:5: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.\n \"\"\"\n" ], [ "y_cl_br = brazil['target_cl'].as_matrix()", "C:\\Users\\Spare\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "X_br = X_br.astype(np.float32)", "_____no_output_____" ], [ "prob_lst_bra = []\n\nfor w_samp, b_samp in zip(w_samples, b_samples):\n # Also compue the probabiliy of each class for each (w,b) sample.\n prob = tf.nn.softmax(tf.matmul( X_br,w_samp ) + b_samp)\n prob_lst_bra.append(prob.eval())", "_____no_output_____" ], [ "# Compute the accuracy of the model. \n# For each sample we compute the predicted class and compare with the test labels.\n# Predicted class is defined as the one which as maximum proability.\n# We perform this test for each (w,b) in the posterior giving us a set of accuracies\n# Finally we make a histogram of accuracies for the test data.\nfig, axes = plt.subplots(figsize = (15, 8))\naccy_test = []\nfor prob in prob_lst_bra:\n y_trn_prd = np.argmax(prob,axis=1).astype(np.float32)\n acc = (y_trn_prd == y_cl_br).mean()*100\n accy_test.append(acc)\n\naxes.hist(accy_test)\naxes.set_title(\"Histogram of prediction accuracies on the brazil data\")\naxes.set_xlabel(\"Accuracy\")# Compute the accuracy of the model. \naxes.set_ylabel(\"Frequency\")\n\nfig.savefig('accuracy_plot_bra.png')", "_____no_output_____" ], [ "# For each sample we compute the predicted class and compare with the test labels.\n# Predicted class is defined as the one which as maximum proability.\n# We perform this test for each (w,b) in the posterior giving us a set of accuracies\n# Finally we make a histogram of accuracies for the test data.\nfig, axes = plt.subplots(figsize = (15, 8))\naccy_test = []\nfor prob in prob_lst:\n y_trn_prd = np.argmax(prob,axis=1).astype(np.float32)\n acc = (y_trn_prd == y_test).mean()*100\n accy_test.append(acc)\n\naxes.hist(accy_test)\naxes.set_title(\"Histogram of prediction accuracies on the test data\")\naxes.set_xlabel(\"Accuracy\")\naxes.set_ylabel(\"Frequency\")\n", "_____no_output_____" ] ], [ [ "## Conclusion:\n\n<p>We got good results considering the fact that this model</p>", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7d5972d8fdf89142260eae190ead7e9d791bdc6
11,993
ipynb
Jupyter Notebook
jupyter/tutorial/02_train_your_first_model.ipynb
dandansamax/djl
3c4262de4c60922a2f7ce22fd7c686cff62c24f8
[ "Apache-2.0" ]
null
null
null
jupyter/tutorial/02_train_your_first_model.ipynb
dandansamax/djl
3c4262de4c60922a2f7ce22fd7c686cff62c24f8
[ "Apache-2.0" ]
null
null
null
jupyter/tutorial/02_train_your_first_model.ipynb
dandansamax/djl
3c4262de4c60922a2f7ce22fd7c686cff62c24f8
[ "Apache-2.0" ]
null
null
null
49.151639
657
0.676728
[ [ [ "# Train your first model\n\nThis is the second of our [beginner tutorial series](https://github.com/deepjavalibrary/djl/tree/master/jupyter/tutorial) that will take you through creating, training, and running inference on a neural network. In this tutorial, you will learn how to train an image classification model that can recognize handwritten digits.\n\n## Preparation\n\nThis tutorial requires the installation of the Java Jupyter Kernel. To install the kernel, see the [Jupyter README](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md).", "_____no_output_____" ] ], [ [ "// Add the snapshot repository to get the DJL snapshot artifacts\n// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/\n\n// Add the maven dependencies\n%maven ai.djl:api:0.17.0\n%maven ai.djl:basicdataset:0.17.0\n%maven ai.djl:model-zoo:0.17.0\n%maven ai.djl.mxnet:mxnet-engine:0.17.0\n%maven org.slf4j:slf4j-simple:1.7.32", "_____no_output_____" ], [ "import java.nio.file.*;\n\nimport ai.djl.*;\nimport ai.djl.basicdataset.cv.classification.Mnist;\nimport ai.djl.ndarray.types.*;\nimport ai.djl.training.*;\nimport ai.djl.training.dataset.*;\nimport ai.djl.training.initializer.*;\nimport ai.djl.training.loss.*;\nimport ai.djl.training.listener.*;\nimport ai.djl.training.evaluator.*;\nimport ai.djl.training.optimizer.*;\nimport ai.djl.training.util.*;\nimport ai.djl.basicmodelzoo.cv.classification.*;\nimport ai.djl.basicmodelzoo.basic.*;", "_____no_output_____" ] ], [ [ "# Step 1: Prepare MNIST dataset for training\n\nIn order to train, you must create a [Dataset class](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/dataset/Dataset.html) to contain your training data. A dataset is a collection of sample input/output pairs for the function represented by your neural network. Each single input/output is represented by a [Record](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/dataset/Record.html). Each record could have multiple arrays of inputs or outputs such as an image question and answer dataset where the input is both an image and a question about the image while the output is the answer to the question.\n\nBecause data learning is highly parallelizable, training is often done not with a single record at a time, but a [Batch](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/dataset/Batch.html). This can lead to significant performance gains, especially when working with images\n\n## Sampler\n\nThen, we must decide the parameters for loading data from the dataset. The only parameter we need for MNIST is the choice of [Sampler](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/dataset/Sampler.html). The sampler decides which and how many element from datasets are part of each batch when iterating through it. We will have it randomly shuffle the elements for the batch and use a batchSize of 32. The batchSize is usually the largest power of 2 that fits within memory.", "_____no_output_____" ] ], [ [ "int batchSize = 32;\nMnist mnist = Mnist.builder().setSampling(batchSize, true).build();\nmnist.prepare(new ProgressBar());", "_____no_output_____" ] ], [ [ "# Step 2: Create your Model\n\nNext we will build a model. A [Model](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/Model.html) contains a neural network [Block](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/nn/Block.html) along with additional artifacts used for the training process. 
It possesses additional information about the inputs, outputs, shapes, and data types you will use. Generally, you will use the Model once you have fully completed your Block.\n\nIn this part of the tutorial, we will use the built-in Multilayer Perceptron Block from the Model Zoo. To learn how to build it from scratch, see the previous tutorial: [Create Your First Network](01_create_your_first_network.ipynb).\n\nBecause images in the MNIST dataset are 28x28 grayscale images, we will create an MLP block with 28 x 28 input. The output will be 10 because there are 10 possible classes (0 to 9) each image could be. For the hidden layers, we have chosen `new int[] {128, 64}` by experimenting with different values.", "_____no_output_____" ] ], [ [ "Model model = Model.newInstance(\"mlp\");\nmodel.setBlock(new Mlp(28 * 28, 10, new int[] {128, 64}));", "_____no_output_____" ] ], [ [ "# Step 3: Create a Trainer\n\nNow, you can create a [`Trainer`](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/Trainer.html) to train your model. The trainer is the main class to orchestrate the training process. Usually, they will be opened using a try-with-resources and closed after training is over.\n\nThe trainer takes an existing model and attempts to optimize the parameters inside the model's Block to best match the dataset. Most optimization is based upon [Stochastic Gradient Descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) (SGD).\n\n## Step 3.1: Setup your training configurations\n\nBefore you create your trainer, we we will need a [training configuration](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/DefaultTrainingConfig.html) that describes how to train your model.\n\nThe following are a few common items you may need to configure your training:\n\n* **REQUIRED** [`Loss`](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/loss/Loss.html) function: A loss function is used to measure how well our model matches the dataset. Because the lower value of the function is better, it's called the \"loss\" function. The Loss is the only required argument to the model\n* [`Evaluator`](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/evaluator/Evaluator.html) function: An evaluator function is also used to measure how well our model matches the dataset. Unlike the loss, they are only there for people to look at and are not used for optimizing the model. Since many losses are not as intuitive, adding other evaluators such as Accuracy can help to understand how your model is doing. If you know of any useful evaluators, we recommend adding them.\n* [`Training Listeners`](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/repository/zoo/ZooModel.html): The training listener adds additional functionality to the training process through a listener interface. This can include showing training progress, stopping early if training becomes undefined, or recording performance metrics. We offer several easy sets of [default listeners](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/repository/zoo/ZooModel.html).\n\nYou can also configure other options such as the Device, Initializer, and Optimizer. 
See [more details](https://javadoc.io/static/ai.djl/api/0.17.0/index.html?ai/djl/training/TrainingConfig.html).", "_____no_output_____" ] ], [ [ "DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss())\n //softmaxCrossEntropyLoss is a standard loss for classification problems\n .addEvaluator(new Accuracy()) // Use accuracy so we humans can understand how accurate the model is\n .addTrainingListeners(TrainingListener.Defaults.logging());\n\n// Now that we have our training configuration, we should create a new trainer for our model\nTrainer trainer = model.newTrainer(config);", "_____no_output_____" ] ], [ [ "# Step 5: Initialize Training\n\nBefore training your model, you have to initialize all of the parameters with starting values. You can use the trainer for this initialization by passing in the input shape.\n\n* The first axis of the input shape is the batch size. This won't impact the parameter initialization, so you can use 1 here.\n* The second axis of the input shape of the MLP - the number of pixels in the input image.", "_____no_output_____" ] ], [ [ "trainer.initialize(new Shape(1, 28 * 28));", "_____no_output_____" ] ], [ [ "# Step 6: Train your model\n\nNow, we can train the model.\n\nWhen training, it is usually organized into epochs where each epoch trains the model on each item in the dataset once. It is slightly faster than training randomly.\n\nThen, we will use the EasyTrain to, as the name promises, make the training easy. If you want to see more details about how the training loop works, see [the EasyTrain class](https://github.com/deepjavalibrary/djl/blob/0.9/api/src/main/java/ai/djl/training/EasyTrain.java) or [read our Dive into Deep Learning book](https://d2l.djl.ai).", "_____no_output_____" ] ], [ [ "// Deep learning is typically trained in epochs where each epoch trains the model on each item in the dataset once.\nint epoch = 2;\n\nEasyTrain.fit(trainer, epoch, mnist, null);", "_____no_output_____" ] ], [ [ "# Step 7: Save your model\n\nOnce your model is trained, you should save it so that it can be reloaded later. You can also add metadata to it such as training accuracy, number of epochs trained, etc that can be used when loading the model or when examining it.", "_____no_output_____" ] ], [ [ "Path modelDir = Paths.get(\"build/mlp\");\nFiles.createDirectories(modelDir);\n\nmodel.setProperty(\"Epoch\", String.valueOf(epoch));\n\nmodel.save(modelDir, \"mlp\");\n\nmodel", "_____no_output_____" ] ], [ [ "# Summary\n\nNow, you've successfully trained a model that can recognize handwritten digits. You'll learn how to apply this model in the next chapter: [Run image classification with your model](03_image_classification_with_your_model.ipynb).\n\nYou can find the complete source code for this tutorial in the [examples project](https://github.com/deepjavalibrary/djl/blob/master/examples/src/main/java/ai/djl/examples/training/TrainMnist.java).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d59d83148eb8f5002cb101400671d2605d2e4a
11,349
ipynb
Jupyter Notebook
random_forest/color.ipynb
den8972/228
7e2d425a3305d3cdb9dc013936e8af4d76765620
[ "MIT" ]
null
null
null
random_forest/color.ipynb
den8972/228
7e2d425a3305d3cdb9dc013936e8af4d76765620
[ "MIT" ]
null
null
null
random_forest/color.ipynb
den8972/228
7e2d425a3305d3cdb9dc013936e8af4d76765620
[ "MIT" ]
null
null
null
30.183511
106
0.503657
[ [ [ "import os\nimport cv2\nimport numpy as np\nimport mrcnn.model as modellib\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nfrom skimage.io import imread\nfrom skimage.color import gray2rgb\nfrom mrcnn import utils\nfrom sklearn.ensemble import RandomForestClassifier\nfrom samples.coco import coco\n%matplotlib inline ", "Using TensorFlow backend.\n" ], [ "# where model and COCO weights are stored\nMODEL_PATH = \"./logs\"\nMODEL_WEIGHTS_PATH = \"./mask_rcnn_coco.h5\"\n# where data set is\ndata_root = \"../CNNs/CUB_200_2011/CUB_200_2011/\"\n# root dir to save images after background removal\ndest = \"./res\"\n\nif not os.path.exists(MODEL_WEIGHTS_PATH):\n utils.download_trained_weights(MODEL_WEIGHTS_PATH)\n\n \nclass InferenceConfig(coco.CocoConfig):\n GPU_COUNT = 1\n IMAGES_PER_GPU = 1 ", "_____no_output_____" ], [ "config = InferenceConfig()\n# Create MaskCNN model in inference mode and load weights\nmodel = modellib.MaskRCNN(mode=\"inference\", model_dir=MODEL_PATH, config=config)\nmodel.load_weights(MODEL_WEIGHTS_PATH, by_name=True)", "_____no_output_____" ], [ "# remove background from image\nmode = [\"random_forest_train\", \"random_forest_test\"]\n\n# class id for bird in COCO\nbird_id = 15\n\nfor m in mode:\n if not os.path.exists(os.path.join(dest, m)):\n os.makedirs(os.path.join(dest, m))\n\nfor m in mode:\n src_dir = os.path.join(data_root, m)\n for img in os.listdir(src_dir):\n file_name = img.split('.')[0]\n # read in one image\n image_input = imread(os.path.join(src_dir, img))\n # convert to RGB\n if image_input.ndim == 2:\n image_input = gray2rgb(image_input)\n results = model.detect([image_input])\n\n\n # unpack inference results \n result = results[0]\n masks = result['masks'].astype(np.uint8)\n scores = result['scores']\n rois = result['rois']\n \n # get result index for all regions identified as bird\n idxes = [i for i in range(len(result['class_ids'])) if result['class_ids'][i] == bird_id]\n \n # keep the one with highest confidence\n if len(idxes) > 0:\n idx = idxes[0]\n max_score = scores[idx]\n\n for index in idxes:\n if scores[index] > max_score:\n max_score = scores[index]\n idx = index\n \n # bounding box for the max\n (y1, x1, y2, x2) = rois[idx] \n width = x2 - x1\n height = y2 - y1\n \n # get mask for the region containing a bird\n bitmap = masks[:,:,idx] \n bitmap[bitmap > 0] = 255 \n bitmap = np.tile(bitmap[:, :, None], [1, 1, 3])\n \n # save result\n path_output_image = f'{dest}/{m}/{img}'\n image_output = Image.fromarray(np.bitwise_and(image_input, bitmap))\n image_output.save(path_output_image)", "_____no_output_____" ], [ "# extraact RGB or HSV color feature from image\ndef extract_hist_feature(img, rgb=True, verbose=False):\n src = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) if not rgb else img\n chans = cv2.split(src)\n colors = (\"b\", \"g\", \"r\")\n name = ['Blue', 'Green', 'Red'] if rgb else [\"Hue\", \"Saturation\", \"Value\"]\n\n features = []\n \n \n # loop over the image channels\n i = 1\n for (chan, color) in zip(chans, colors):\n # create a histogram for the current channel and\n # concatenate the resulting histograms for each\n # channel\n hist = cv2.calcHist([chan], [0], None, [10], [10, 210])\n hist = hist.reshape(-1)\n hist += 1e-5\n features.extend(hist / hist.sum())\n if verbose:\n plt.figure(figsize=(10, 3)) \n plt.subplot(1, 3, i)\n plt.title(f'{name[i - 1]} Histogram')\n plt.xlabel(\"Value\")\n plt.ylabel(\"# of Pixels\")\n plt.xticks([10 + x * 22 for x in range(10)])\n plt.bar([21 + x * 22 for x in range(10)], hist, width=22, 
color = color)\n i += 1\n plt.tight_layout()\n return features", "_____no_output_____" ], [ "# mapping from image_id to image class\nimg_id2class = {}\nclass_file = os.path.join(data_root, 'image_class_labels.txt')\nwith open(class_file, 'r', encoding='utf-8') as f:\n for line in f:\n img_id, class_id = map(int, line.rstrip().split(' '))\n img_id2class[img_id] = class_id", "_____no_output_____" ] ], [ [ "# HSV feature", "_____no_output_____" ] ], [ [ "train_x = []\ntrain_y = []\ntrain_dir = os.path.join(dest, \"random_forest_train\")\nfor img in os.listdir(train_dir):\n img_id = int(img.split(\".\")[0])\n train_y.append(int(img_id2class[img_id]))\n train_x.append(extract_hist_feature(cv2.imread(os.path.join(train_dir, img)), rgb=False))", "_____no_output_____" ], [ "classifier = RandomForestClassifier()\nclassifier.fit(train_x, train_y)", "_____no_output_____" ], [ "test_x = []\ntest_y = []\ntest_dir = os.path.join(dest, \"random_forest_test\")\nfor img in os.listdir(test_dir):\n img_id = int(img.split(\".\")[0])\n test_y.append(int(img_id2class[img_id]))\n test_x.append(extract_hist_feature(cv2.imread(os.path.join(test_dir, img)), rgb=False))", "_____no_output_____" ], [ "classifier.score(test_x, test_y)", "_____no_output_____" ] ], [ [ "# RGB feature", "_____no_output_____" ] ], [ [ "train_x = []\ntrain_y = []\ntrain_dir = os.path.join(dest, \"random_forest_train\")\nfor img in os.listdir(train_dir):\n img_id = int(img.split(\".\")[0])\n train_y.append(int(img_id2class[img_id]))\n train_x.append(extract_hist_feature(cv2.imread(os.path.join(train_dir, img)), rgb=True))", "_____no_output_____" ], [ "classifier = RandomForestClassifier()\nclassifier.fit(train_x, train_y)", "_____no_output_____" ], [ "test_x = []\ntest_y = []\ntest_dir = os.path.join(dest, \"random_forest_test\")\nfor img in os.listdir(test_dir):\n img_id = int(img.split(\".\")[0])\n test_y.append(int(img_id2class[img_id]))\n test_x.append(extract_hist_feature(cv2.imread(os.path.join(test_dir, img)), rgb=True))", "_____no_output_____" ], [ "classifier.score(test_x, test_y)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7d5b1053637d625f19a8a129476dea25b98401d
68,833
ipynb
Jupyter Notebook
w6-midterm/.ipynb_checkpoints/Trial_eda_notebook-checkpoint.ipynb
bmskarate/lighthouseMain
b2434f14f1378b89085d59f896c44eda5f74eecc
[ "MIT" ]
null
null
null
w6-midterm/.ipynb_checkpoints/Trial_eda_notebook-checkpoint.ipynb
bmskarate/lighthouseMain
b2434f14f1378b89085d59f896c44eda5f74eecc
[ "MIT" ]
null
null
null
w6-midterm/.ipynb_checkpoints/Trial_eda_notebook-checkpoint.ipynb
bmskarate/lighthouseMain
b2434f14f1378b89085d59f896c44eda5f74eecc
[ "MIT" ]
null
null
null
74.333693
9,504
0.78121
[ [ [ "import pandas as pd\npd.set_option(\"display.max_columns\", None)\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "df_nov_dec = pd.read_csv(\"data/flights_2018_nov_dec_raw.csv\")\ndf_jan = pd.read_csv(\"data/flights_2018_jan_raw.csv\")", "/Users/louisrossi/opt/anaconda3/envs/ml/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3441: DtypeWarning: Columns (25) have mixed types.Specify dtype option on import or set low_memory=False.\n exec(code_obj, self.user_global_ns, self.user_ns)\n" ], [ "df = pd.concat([df_nov_dec, df_jan]).reset_index().drop(columns=[\"index\"])", "_____no_output_____" ], [ "df_ = df.sample(frac = 0.05)", "_____no_output_____" ], [ "df.dtypes", "_____no_output_____" ], [ "def missing(x):\n n_missing = x.isnull().sum().sort_values(ascending=False)\n p_missing = (x.isnull().sum()/x.isnull().count()).sort_values(ascending=False)\n missing_ = pd.concat([n_missing, p_missing],axis=1, keys = ['number','percent'])\n return missing_", "_____no_output_____" ], [ "missing(df_)", "_____no_output_____" ], [ "from scipy import stats\nimport seaborn as sns\narr_delay = df_.arr_delay", "_____no_output_____" ], [ "stats.kstest(arr_delay,stats.norm.cdf)", "_____no_output_____" ], [ "stats.shapiro(arr_delay)", "/Users/louisrossi/opt/anaconda3/envs/ml/lib/python3.8/site-packages/scipy/stats/morestats.py:1760: UserWarning: p-value may not be accurate for N > 5000.\n warnings.warn(\"p-value may not be accurate for N > 5000.\")\n" ], [ "sample = df.sample(frac=0.05)\nstats.shapiro(sample['arr_delay'])\n\n#fail to reject the null hypothesis that data is normally dist", "_____no_output_____" ], [ "sns.histplot(arr_delay)\nplt.xlim(-300, 300)", "_____no_output_____" ], [ "import datetime as dt\nfrom datetime import date\nfrom datetime import time", "_____no_output_____" ], [ "df['fl_date'] = pd.to_datetime(df['fl_date'])", "_____no_output_____" ], [ "type(df.fl_date[0])", "_____no_output_____" ], [ "df['month'] = df['fl_date'].dt.month\ndf['month'].head()", "_____no_output_____" ], [ "monthly_count = df.groupby(['month'])['arr_delay'].count()", "_____no_output_____" ], [ "monthly_count = pd.DataFrame(monthly_count)\nmonthly_count", "_____no_output_____" ], [ "sns.barplot(x= monthly_count.index,y=monthly_count['arr_delay'])", "_____no_output_____" ], [ "monthly_avg = df.groupby(['month'])['arr_delay'].mean()\nsns.barplot(x=df['month'],y=df['arr_delay'])", "_____no_output_____" ], [ "task4a = df.groupby(['dep_time'])['taxi_out'].count()\nsns.histplot(task4a)", "_____no_output_____" ], [ "task4a_count = pd.DataFrame(task4a)\nsns.barplot(x=task4a_count.index,y=task4a_count['taxi_out'])", "_____no_output_____" ], [ "sns.barplot(x=df['dep_time'],y=df['taxi_out'], ci=None)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d5bce90c6245b2204194e949add512f6b9d229
971
ipynb
Jupyter Notebook
docs/ipynb/how-to-cite.ipynb
gamdow/oommfc
de33ae2a8348ca78d9e16fe18bc562393703c215
[ "BSD-3-Clause" ]
null
null
null
docs/ipynb/how-to-cite.ipynb
gamdow/oommfc
de33ae2a8348ca78d9e16fe18bc562393703c215
[ "BSD-3-Clause" ]
null
null
null
docs/ipynb/how-to-cite.ipynb
gamdow/oommfc
de33ae2a8348ca78d9e16fe18bc562393703c215
[ "BSD-3-Clause" ]
null
null
null
26.243243
214
0.602472
[ [ [ "# How to cite us\n\nIf you use JOOMMF in your research, apart from acknowledging OOMMF (http://math.nist.gov/oommf/oommf_cites.html) please acknowledge our interface JOOMMF by citing the following paper:\n\nBeg, M., Pepper, R. A., & Fangohr, H. (2017). User interfaces for computational science: A domain specific language for OOMMF embedded in Python. *AIP Advances* **7**, 56025. https://doi.org/10.1063/1.4977225", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
e7d5c2341ed5b65a8dcb2b7017fcff759ecc93e5
57,879
ipynb
Jupyter Notebook
transfer-learning/Transfer_Learning_Solution.ipynb
freedomkwok/deep-learning
f7c2bce1d9416f5db969860cdd64b20e4b2305c6
[ "MIT" ]
null
null
null
transfer-learning/Transfer_Learning_Solution.ipynb
freedomkwok/deep-learning
f7c2bce1d9416f5db969860cdd64b20e4b2305c6
[ "MIT" ]
null
null
null
transfer-learning/Transfer_Learning_Solution.ipynb
freedomkwok/deep-learning
f7c2bce1d9416f5db969860cdd64b20e4b2305c6
[ "MIT" ]
null
null
null
63.04902
2,481
0.658788
[ [ [ "# Transfer Learning\n\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using [VGGNet](https://arxiv.org/pdf/1409.1556.pdf) trained on the [ImageNet dataset](http://www.image-net.org/) as a feature extractor. Below is a diagram of the VGGNet architecture.\n\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\n\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\n\nYou can read more about transfer learning from [the CS231n course notes](http://cs231n.github.io/transfer-learning/#tf).\n\n## Pretrained VGGNet\n\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. \n\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. ", "_____no_output_____" ] ], [ [ "from tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices())\n\nprint(device_lib)", "[name: \"/device:CPU:0\"\ndevice_type: \"CPU\"\nmemory_limit: 268435456\nlocality {\n}\nincarnation: 16373191140287572975\n, name: \"/device:GPU:0\"\ndevice_type: \"GPU\"\nmemory_limit: 6740156088\nlocality {\n bus_id: 1\n}\nincarnation: 5624907406584000031\nphysical_device_desc: \"device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1\"\n]\n<module 'tensorflow.python.client.device_lib' from 'C:\\\\Program Files\\\\Anaconda3\\\\envs\\\\trs\\\\lib\\\\site-packages\\\\tensorflow\\\\python\\\\client\\\\device_lib.py'>\n" ], [ "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Parameter file already exists!\n" ] ], [ [ "## Flower power\n\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. 
This dataset comes from the [TensorFlow inception tutorial](https://www.tensorflow.org/tutorials/image_retraining).", "_____no_output_____" ] ], [ [ "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "_____no_output_____" ] ], [ [ "## ConvNet Codes\n\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.\n\nHere we're using the `vgg16` module from `tensorflow_vgg`. The network takes images of size $244 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from [the source code](https://github.com/machrisaa/tensorflow-vgg/blob/master/vgg16.py):\n\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\n\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\n\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\n\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\n\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\n\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\n\nSo what we want are the values of the first fully connected layer, after being ReLUd (`self.relu6`). To build the network, we use\n\n```\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n```\n\nThis creates the `vgg` object, then builds the graph with `vgg.build(input_)`. 
Then to get the values from the layer,\n\n```\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)\n```", "_____no_output_____" ] ], [ [ "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils", "_____no_output_____" ], [ "data_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "_____no_output_____" ] ], [ [ "Below I'm running images through the VGG network in batches.", "_____no_output_____" ] ], [ [ "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n print(img.shape)\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n images = np.concatenate(batch)\n \n feed_dict = {input_: images}\n print(images.shape)\n codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n print(codes.shape)\n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))", "C:\\Users\\freed\\havardclass\\Udacity Nano\\deep-learning\\transfer-learning\\tensorflow_vgg\\vgg16.npy\nnpy file loaded\nbuild model started\nbuild model finished: 0s\nStarting daisy images\n(224, 224, 3)\n(224, 224, 3)\n(224, 224, 3)\n(224, 224, 3)\n(224, 224, 3)\n(224, 224, 3)\n(224, 224, 3)\n(224, 224, 3)\n(224, 224, 3)\n(224, 224, 3)\n(10, 224, 224, 3)\n" ], [ "codes.shape", "_____no_output_____" ], [ "# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "_____no_output_____" ] ], [ [ "## Building the Classifier\n\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "_____no_output_____" ] ], [ [ "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "_____no_output_____" ] ], [ [ "### Data prep\n\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\n> **Exercise:** From scikit-learn, use [LabelBinarizer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html) to create one-hot encoded vectors from the labels. 
", "_____no_output_____" ] ], [ [ "codes", "_____no_output_____" ], [ "from sklearn.preprocessing import LabelBinarizer\n\nlb = LabelBinarizer()\nlb.fit(labels)\n\nlabels_vecs = lb.transform(labels)", "_____no_output_____" ], [ "labels_vecs", "_____no_output_____" ] ], [ [ "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use [`StratifiedShuffleSplit`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) from scikit-learn.\n\nYou can create the splitter like so:\n```\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\n```\nThen split the data with \n```\nsplitter = ss.split(x, y)\n```\n\n`ss.split` returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use `next(splitter)` to get the indices. Be sure to read the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) and the [user guide](http://scikit-learn.org/stable/modules/cross_validation.html#random-permutations-cross-validation-a-k-a-shuffle-split).\n\n> **Exercise:** Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import StratifiedShuffleSplit\n\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\n\ntrain_idx, val_idx = next(ss.split(codes, labels_vecs))\n\nhalf_val_len = int(len(val_idx)/2)\nval_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]\n\ntrain_x, train_y = codes[train_idx], labels_vecs[train_idx]\nval_x, val_y = codes[val_idx], labels_vecs[val_idx]\ntest_x, test_y = codes[test_idx], labels_vecs[test_idx]", "_____no_output_____" ], [ "print(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "Train shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\n" ] ], [ [ "If you did it right, you should see these sizes for the training sets:\n\n```\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\n```", "_____no_output_____" ], [ "### Classifier layers\n\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\n> **Exercise:** With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. 
Use the cross entropy to calculate the cost.", "_____no_output_____" ] ], [ [ "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\nfc = tf.contrib.layers.fully_connected(inputs_, 256)\n \nlogits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)\ncost = tf.reduce_mean(cross_entropy)\n\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "_____no_output_____" ] ], [ [ "### Batches!\n\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.", "_____no_output_____" ] ], [ [ "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "_____no_output_____" ] ], [ [ "### Training\n\nHere, we'll train the network.\n\n> **Exercise:** So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.", "_____no_output_____" ] ], [ [ "epochs = 10\niteration = 0\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in get_batches(train_x, train_y):\n feed = {inputs_: x,\n labels_: y}\n loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e+1, epochs),\n \"Iteration: {}\".format(iteration),\n \"Training loss: {:.5f}\".format(loss))\n iteration += 1\n \n if iteration % 5 == 0:\n feed = {inputs_: val_x,\n labels_: val_y}\n val_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Validation Acc: {:.4f}\".format(val_acc))\n saver.save(sess, \"checkpoints/flowers.ckpt\")", "_____no_output_____" ] ], [ [ "### Testing\n\nBelow you see the test accuracy. 
You can also see the predictions returned for images.", "_____no_output_____" ] ], [ [ "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))", "_____no_output_____" ], [ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "_____no_output_____" ] ], [ [ "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "_____no_output_____" ] ], [ [ "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)", "_____no_output_____" ], [ "# Run this cell if you don't have a vgg graph built\nwith tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)", "_____no_output_____" ], [ "with tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()", "_____no_output_____" ], [ "plt.imshow(test_img)", "_____no_output_____" ], [ "plt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7d5debc054c0f66141f5230cf857316811a569b
2,520
ipynb
Jupyter Notebook
setup-sm.ipynb
aws-samples/aws-panorama-immersion-day
be8e53402024e884454ae04307793f68d4a8273f
[ "MIT-0" ]
null
null
null
setup-sm.ipynb
aws-samples/aws-panorama-immersion-day
be8e53402024e884454ae04307793f68d4a8273f
[ "MIT-0" ]
null
null
null
setup-sm.ipynb
aws-samples/aws-panorama-immersion-day
be8e53402024e884454ae04307793f68d4a8273f
[ "MIT-0" ]
null
null
null
20.322581
145
0.540873
[ [ [ "### Set up AWS Panorama development environment on SageMaker Notebook\n\nThis notebook installs dependencies required for AWS Panorama application development. Run following cells just once, before starting labs.", "_____no_output_____" ] ], [ [ "%pip install panoramacli\n%pip install mxnet\n%pip install gluoncv", "_____no_output_____" ], [ "!./scripts/install-docker.sh", "_____no_output_____" ], [ "# for CPU build\n!./scripts/install-dlr.sh\n\n# for p2/p3/g4 instance, we could use pre-built package to skip long building time\n#%pip install https://neo-ai-dlr-release.s3-us-west-2.amazonaws.com/v1.10.0/gpu/dlr-1.10.0-py3-none-any.whl\n", "_____no_output_____" ], [ "!./scripts/install-glibc-sm.sh", "_____no_output_____" ], [ "!./scripts/create-opt-aws-panorama.sh", "_____no_output_____" ], [ "!./scripts/install-videos.sh", "_____no_output_____" ] ], [ [ "#### Environment setting up has completed\n\nOpen a notebooks under \"labs/\".", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7d5e1c4dca52732e2d10dcf1ae8adbf10d72f78
718,669
ipynb
Jupyter Notebook
TF-transfer.ipynb
HAXRD/TF2vsPTH
88794f974932d41d8615bdbd29925e4a95375cf0
[ "MIT" ]
null
null
null
TF-transfer.ipynb
HAXRD/TF2vsPTH
88794f974932d41d8615bdbd29925e4a95375cf0
[ "MIT" ]
null
null
null
TF-transfer.ipynb
HAXRD/TF2vsPTH
88794f974932d41d8615bdbd29925e4a95375cf0
[ "MIT" ]
null
null
null
890.54399
627,900
0.944727
[ [ [ "# Transfer learning with Tensorflow\n在这个Notebook当中,我们将介绍如何实用Tensorflow框架实现迁移学习。简单的来说,迁移学习就是利用已经训练好的模型中学习到的特征(Features),再根据用户需要添加额外的网络层,进行快速的针对新的特定的数据集的模型训练。由于这样生成的模型大部分的模型参数已经训练好并且已经学习到一定数量的hidden features,在提供新的数据集的时候再进行训练就能有效利用已学习到的知识来进行预测。\n\n本notebook所使用的预训练好的模型是MobileNet V2,其具体的原理就留给负责这部分的同学在后续具体介绍。我们当前只需要了解其大致的网络结构即可(结构如下图)。\n\n![MobileNet-v2-1](./imgs/mobilenet_v2_1.png)\n\n![MobileNet-v2-2](./imgs/mobilenet_v2_2.png)\n\nMobileNet论文:https://arxiv.org/abs/1801.04381\n\n我们使用的是“猫猫狗狗”数据集,具体的样例我们会在下文的数据部分展示。\n\n下面是大纲:\n\n1. 数据集 & 构造模型输入流\n2. 组合模型\n - 载入预训练好的模型及其参数\n - 在预训练好的模型的基础上添加额外的分类层\n3. 训练模型\n4. 测试模型\n\n参考文献:https://tensorflow.google.cn/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory\n", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport tensorflow as tf\n\nfrom tensorflow.keras.preprocessing import image_dataset_from_directory", "_____no_output_____" ] ], [ [ "## 1. 数据预处理\n### 数据集分组\n在这里我们数据集已经按照路径分为Train集和Validation集。", "_____no_output_____" ] ], [ [ "PATH = os.path.join('./data', 'cats_and_dogs_filtered')\n\ntrain_dir = os.path.join(PATH, 'train')\nvalidation_dir = os.path.join(PATH, 'validation')\n\nBATCH_SIZE = 32\nIMG_SIZE = (160, 160)\n\ntrain_dataset = image_dataset_from_directory(train_dir,\n shuffle=True,\n batch_size=BATCH_SIZE,\n image_size=IMG_SIZE)\nvalidation_dataset = image_dataset_from_directory(validation_dir,\n shuffle=True,\n batch_size=BATCH_SIZE,\n image_size=IMG_SIZE)\n", "Found 2000 files belonging to 2 classes.\nFound 1000 files belonging to 2 classes.\n" ] ], [ [ "### 查看数据集中的样本", "_____no_output_____" ] ], [ [ "class_names = train_dataset.class_names\n\nplt.figure(figsize=(10, 10))\nfor images, labels in train_dataset.take(1):\n for i in range(9):\n ax = plt.subplot(3, 3, i+1)\n plt.imshow(images[i].numpy().astype(\"uint8\"))\n plt.title(class_names[labels[i]])\n plt.axis(\"off\")", "_____no_output_____" ] ], [ [ "### 将图片像素值归一化", "_____no_output_____" ] ], [ [ "rescale_input = tf.keras.layers.experimental.preprocessing.Rescaling(1./255, offset=0)", "_____no_output_____" ] ], [ [ "## 2.组合模型\n### 载入预训练好的模型(MobileNet-v2)", "_____no_output_____" ] ], [ [ "IMG_SHAPE = IMG_SIZE + (3,)\nbase_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,\n include_top=False,\n weights='imagenet')", "_____no_output_____" ], [ "image_batch, label_batch = next(iter(train_dataset))\nfeature_batch = base_model(image_batch)\nprint(feature_batch.shape)", "(32, 5, 5, 1280)\n" ] ], [ [ "### 特征提取(Feature Extraction)", "_____no_output_____" ] ], [ [ "# Freeze the MobileNet\nbase_model.trainable = False\nbase_model.summary()", "Model: \"mobilenetv2_1.00_160\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, 160, 160, 3) 0 \n__________________________________________________________________________________________________\nConv1 (Conv2D) (None, 80, 80, 32) 864 input_1[0][0] \n__________________________________________________________________________________________________\nbn_Conv1 (BatchNormalization) (None, 80, 80, 32) 128 Conv1[0][0] \n__________________________________________________________________________________________________\nConv1_relu (ReLU) (None, 80, 80, 32) 0 bn_Conv1[0][0] 
\n__________________________________________________________________________________________________\nexpanded_conv_depthwise (Depthw (None, 80, 80, 32) 288 Conv1_relu[0][0] \n__________________________________________________________________________________________________\nexpanded_conv_depthwise_BN (Bat (None, 80, 80, 32) 128 expanded_conv_depthwise[0][0] \n__________________________________________________________________________________________________\nexpanded_conv_depthwise_relu (R (None, 80, 80, 32) 0 expanded_conv_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nexpanded_conv_project (Conv2D) (None, 80, 80, 16) 512 expanded_conv_depthwise_relu[0][0\n__________________________________________________________________________________________________\nexpanded_conv_project_BN (Batch (None, 80, 80, 16) 64 expanded_conv_project[0][0] \n__________________________________________________________________________________________________\nblock_1_expand (Conv2D) (None, 80, 80, 96) 1536 expanded_conv_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_1_expand_BN (BatchNormali (None, 80, 80, 96) 384 block_1_expand[0][0] \n__________________________________________________________________________________________________\nblock_1_expand_relu (ReLU) (None, 80, 80, 96) 0 block_1_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_1_pad (ZeroPadding2D) (None, 81, 81, 96) 0 block_1_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_1_depthwise (DepthwiseCon (None, 40, 40, 96) 864 block_1_pad[0][0] \n__________________________________________________________________________________________________\nblock_1_depthwise_BN (BatchNorm (None, 40, 40, 96) 384 block_1_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_1_depthwise_relu (ReLU) (None, 40, 40, 96) 0 block_1_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_1_project (Conv2D) (None, 40, 40, 24) 2304 block_1_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_1_project_BN (BatchNormal (None, 40, 40, 24) 96 block_1_project[0][0] \n__________________________________________________________________________________________________\nblock_2_expand (Conv2D) (None, 40, 40, 144) 3456 block_1_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_2_expand_BN (BatchNormali (None, 40, 40, 144) 576 block_2_expand[0][0] \n__________________________________________________________________________________________________\nblock_2_expand_relu (ReLU) (None, 40, 40, 144) 0 block_2_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_2_depthwise (DepthwiseCon (None, 40, 40, 144) 1296 block_2_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_2_depthwise_BN (BatchNorm (None, 40, 40, 144) 576 block_2_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_2_depthwise_relu (ReLU) 
(None, 40, 40, 144) 0 block_2_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_2_project (Conv2D) (None, 40, 40, 24) 3456 block_2_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_2_project_BN (BatchNormal (None, 40, 40, 24) 96 block_2_project[0][0] \n__________________________________________________________________________________________________\nblock_2_add (Add) (None, 40, 40, 24) 0 block_1_project_BN[0][0] \n block_2_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_3_expand (Conv2D) (None, 40, 40, 144) 3456 block_2_add[0][0] \n__________________________________________________________________________________________________\nblock_3_expand_BN (BatchNormali (None, 40, 40, 144) 576 block_3_expand[0][0] \n__________________________________________________________________________________________________\nblock_3_expand_relu (ReLU) (None, 40, 40, 144) 0 block_3_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_3_pad (ZeroPadding2D) (None, 41, 41, 144) 0 block_3_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_3_depthwise (DepthwiseCon (None, 20, 20, 144) 1296 block_3_pad[0][0] \n__________________________________________________________________________________________________\nblock_3_depthwise_BN (BatchNorm (None, 20, 20, 144) 576 block_3_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_3_depthwise_relu (ReLU) (None, 20, 20, 144) 0 block_3_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_3_project (Conv2D) (None, 20, 20, 32) 4608 block_3_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_3_project_BN (BatchNormal (None, 20, 20, 32) 128 block_3_project[0][0] \n__________________________________________________________________________________________________\nblock_4_expand (Conv2D) (None, 20, 20, 192) 6144 block_3_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_4_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_4_expand[0][0] \n__________________________________________________________________________________________________\nblock_4_expand_relu (ReLU) (None, 20, 20, 192) 0 block_4_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_4_depthwise (DepthwiseCon (None, 20, 20, 192) 1728 block_4_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_4_depthwise_BN (BatchNorm (None, 20, 20, 192) 768 block_4_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_4_depthwise_relu (ReLU) (None, 20, 20, 192) 0 block_4_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_4_project (Conv2D) (None, 20, 20, 32) 6144 block_4_depthwise_relu[0][0] 
\n__________________________________________________________________________________________________\nblock_4_project_BN (BatchNormal (None, 20, 20, 32) 128 block_4_project[0][0] \n__________________________________________________________________________________________________\nblock_4_add (Add) (None, 20, 20, 32) 0 block_3_project_BN[0][0] \n block_4_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_5_expand (Conv2D) (None, 20, 20, 192) 6144 block_4_add[0][0] \n__________________________________________________________________________________________________\nblock_5_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_5_expand[0][0] \n__________________________________________________________________________________________________\nblock_5_expand_relu (ReLU) (None, 20, 20, 192) 0 block_5_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_5_depthwise (DepthwiseCon (None, 20, 20, 192) 1728 block_5_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_5_depthwise_BN (BatchNorm (None, 20, 20, 192) 768 block_5_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_5_depthwise_relu (ReLU) (None, 20, 20, 192) 0 block_5_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_5_project (Conv2D) (None, 20, 20, 32) 6144 block_5_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_5_project_BN (BatchNormal (None, 20, 20, 32) 128 block_5_project[0][0] \n__________________________________________________________________________________________________\nblock_5_add (Add) (None, 20, 20, 32) 0 block_4_add[0][0] \n block_5_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_6_expand (Conv2D) (None, 20, 20, 192) 6144 block_5_add[0][0] \n__________________________________________________________________________________________________\nblock_6_expand_BN (BatchNormali (None, 20, 20, 192) 768 block_6_expand[0][0] \n__________________________________________________________________________________________________\nblock_6_expand_relu (ReLU) (None, 20, 20, 192) 0 block_6_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_6_pad (ZeroPadding2D) (None, 21, 21, 192) 0 block_6_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_6_depthwise (DepthwiseCon (None, 10, 10, 192) 1728 block_6_pad[0][0] \n__________________________________________________________________________________________________\nblock_6_depthwise_BN (BatchNorm (None, 10, 10, 192) 768 block_6_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_6_depthwise_relu (ReLU) (None, 10, 10, 192) 0 block_6_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_6_project (Conv2D) (None, 10, 10, 64) 12288 block_6_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_6_project_BN (BatchNormal (None, 
10, 10, 64) 256 block_6_project[0][0] \n__________________________________________________________________________________________________\nblock_7_expand (Conv2D) (None, 10, 10, 384) 24576 block_6_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_7_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_7_expand[0][0] \n__________________________________________________________________________________________________\nblock_7_expand_relu (ReLU) (None, 10, 10, 384) 0 block_7_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_7_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_7_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_7_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_7_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_7_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_7_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_7_project (Conv2D) (None, 10, 10, 64) 24576 block_7_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_7_project_BN (BatchNormal (None, 10, 10, 64) 256 block_7_project[0][0] \n__________________________________________________________________________________________________\nblock_7_add (Add) (None, 10, 10, 64) 0 block_6_project_BN[0][0] \n block_7_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_8_expand (Conv2D) (None, 10, 10, 384) 24576 block_7_add[0][0] \n__________________________________________________________________________________________________\nblock_8_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_8_expand[0][0] \n__________________________________________________________________________________________________\nblock_8_expand_relu (ReLU) (None, 10, 10, 384) 0 block_8_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_8_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_8_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_8_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_8_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_8_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_8_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_8_project (Conv2D) (None, 10, 10, 64) 24576 block_8_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_8_project_BN (BatchNormal (None, 10, 10, 64) 256 block_8_project[0][0] \n__________________________________________________________________________________________________\nblock_8_add (Add) (None, 10, 10, 64) 0 block_7_add[0][0] \n block_8_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_9_expand (Conv2D) (None, 10, 10, 384) 24576 block_8_add[0][0] 
\n__________________________________________________________________________________________________\nblock_9_expand_BN (BatchNormali (None, 10, 10, 384) 1536 block_9_expand[0][0] \n__________________________________________________________________________________________________\nblock_9_expand_relu (ReLU) (None, 10, 10, 384) 0 block_9_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_9_depthwise (DepthwiseCon (None, 10, 10, 384) 3456 block_9_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_9_depthwise_BN (BatchNorm (None, 10, 10, 384) 1536 block_9_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_9_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_9_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_9_project (Conv2D) (None, 10, 10, 64) 24576 block_9_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_9_project_BN (BatchNormal (None, 10, 10, 64) 256 block_9_project[0][0] \n__________________________________________________________________________________________________\nblock_9_add (Add) (None, 10, 10, 64) 0 block_8_add[0][0] \n block_9_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_10_expand (Conv2D) (None, 10, 10, 384) 24576 block_9_add[0][0] \n__________________________________________________________________________________________________\nblock_10_expand_BN (BatchNormal (None, 10, 10, 384) 1536 block_10_expand[0][0] \n__________________________________________________________________________________________________\nblock_10_expand_relu (ReLU) (None, 10, 10, 384) 0 block_10_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_10_depthwise (DepthwiseCo (None, 10, 10, 384) 3456 block_10_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_10_depthwise_BN (BatchNor (None, 10, 10, 384) 1536 block_10_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_10_depthwise_relu (ReLU) (None, 10, 10, 384) 0 block_10_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_10_project (Conv2D) (None, 10, 10, 96) 36864 block_10_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_10_project_BN (BatchNorma (None, 10, 10, 96) 384 block_10_project[0][0] \n__________________________________________________________________________________________________\nblock_11_expand (Conv2D) (None, 10, 10, 576) 55296 block_10_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_11_expand_BN (BatchNormal (None, 10, 10, 576) 2304 block_11_expand[0][0] \n__________________________________________________________________________________________________\nblock_11_expand_relu (ReLU) (None, 10, 10, 576) 0 block_11_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_11_depthwise 
(DepthwiseCo (None, 10, 10, 576) 5184 block_11_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_11_depthwise_BN (BatchNor (None, 10, 10, 576) 2304 block_11_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_11_depthwise_relu (ReLU) (None, 10, 10, 576) 0 block_11_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_11_project (Conv2D) (None, 10, 10, 96) 55296 block_11_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_11_project_BN (BatchNorma (None, 10, 10, 96) 384 block_11_project[0][0] \n__________________________________________________________________________________________________\nblock_11_add (Add) (None, 10, 10, 96) 0 block_10_project_BN[0][0] \n block_11_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_12_expand (Conv2D) (None, 10, 10, 576) 55296 block_11_add[0][0] \n__________________________________________________________________________________________________\nblock_12_expand_BN (BatchNormal (None, 10, 10, 576) 2304 block_12_expand[0][0] \n__________________________________________________________________________________________________\nblock_12_expand_relu (ReLU) (None, 10, 10, 576) 0 block_12_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_12_depthwise (DepthwiseCo (None, 10, 10, 576) 5184 block_12_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_12_depthwise_BN (BatchNor (None, 10, 10, 576) 2304 block_12_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_12_depthwise_relu (ReLU) (None, 10, 10, 576) 0 block_12_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_12_project (Conv2D) (None, 10, 10, 96) 55296 block_12_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_12_project_BN (BatchNorma (None, 10, 10, 96) 384 block_12_project[0][0] \n__________________________________________________________________________________________________\nblock_12_add (Add) (None, 10, 10, 96) 0 block_11_add[0][0] \n block_12_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_13_expand (Conv2D) (None, 10, 10, 576) 55296 block_12_add[0][0] \n__________________________________________________________________________________________________\nblock_13_expand_BN (BatchNormal (None, 10, 10, 576) 2304 block_13_expand[0][0] \n__________________________________________________________________________________________________\nblock_13_expand_relu (ReLU) (None, 10, 10, 576) 0 block_13_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_13_pad (ZeroPadding2D) (None, 11, 11, 576) 0 block_13_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_13_depthwise (DepthwiseCo (None, 5, 5, 576) 5184 block_13_pad[0][0] 
\n__________________________________________________________________________________________________\nblock_13_depthwise_BN (BatchNor (None, 5, 5, 576) 2304 block_13_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_13_depthwise_relu (ReLU) (None, 5, 5, 576) 0 block_13_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_13_project (Conv2D) (None, 5, 5, 160) 92160 block_13_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_13_project_BN (BatchNorma (None, 5, 5, 160) 640 block_13_project[0][0] \n__________________________________________________________________________________________________\nblock_14_expand (Conv2D) (None, 5, 5, 960) 153600 block_13_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_14_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_14_expand[0][0] \n__________________________________________________________________________________________________\nblock_14_expand_relu (ReLU) (None, 5, 5, 960) 0 block_14_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_14_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_14_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_14_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_14_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_14_depthwise_relu (ReLU) (None, 5, 5, 960) 0 block_14_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_14_project (Conv2D) (None, 5, 5, 160) 153600 block_14_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_14_project_BN (BatchNorma (None, 5, 5, 160) 640 block_14_project[0][0] \n__________________________________________________________________________________________________\nblock_14_add (Add) (None, 5, 5, 160) 0 block_13_project_BN[0][0] \n block_14_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_15_expand (Conv2D) (None, 5, 5, 960) 153600 block_14_add[0][0] \n__________________________________________________________________________________________________\nblock_15_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_15_expand[0][0] \n__________________________________________________________________________________________________\nblock_15_expand_relu (ReLU) (None, 5, 5, 960) 0 block_15_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_15_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_15_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_15_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_15_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_15_depthwise_relu (ReLU) (None, 5, 5, 960) 0 block_15_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_15_project 
(Conv2D) (None, 5, 5, 160) 153600 block_15_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_15_project_BN (BatchNorma (None, 5, 5, 160) 640 block_15_project[0][0] \n__________________________________________________________________________________________________\nblock_15_add (Add) (None, 5, 5, 160) 0 block_14_add[0][0] \n block_15_project_BN[0][0] \n__________________________________________________________________________________________________\nblock_16_expand (Conv2D) (None, 5, 5, 960) 153600 block_15_add[0][0] \n__________________________________________________________________________________________________\nblock_16_expand_BN (BatchNormal (None, 5, 5, 960) 3840 block_16_expand[0][0] \n__________________________________________________________________________________________________\nblock_16_expand_relu (ReLU) (None, 5, 5, 960) 0 block_16_expand_BN[0][0] \n__________________________________________________________________________________________________\nblock_16_depthwise (DepthwiseCo (None, 5, 5, 960) 8640 block_16_expand_relu[0][0] \n__________________________________________________________________________________________________\nblock_16_depthwise_BN (BatchNor (None, 5, 5, 960) 3840 block_16_depthwise[0][0] \n__________________________________________________________________________________________________\nblock_16_depthwise_relu (ReLU) (None, 5, 5, 960) 0 block_16_depthwise_BN[0][0] \n__________________________________________________________________________________________________\nblock_16_project (Conv2D) (None, 5, 5, 320) 307200 block_16_depthwise_relu[0][0] \n__________________________________________________________________________________________________\nblock_16_project_BN (BatchNorma (None, 5, 5, 320) 1280 block_16_project[0][0] \n__________________________________________________________________________________________________\nConv_1 (Conv2D) (None, 5, 5, 1280) 409600 block_16_project_BN[0][0] \n__________________________________________________________________________________________________\nConv_1_bn (BatchNormalization) (None, 5, 5, 1280) 5120 Conv_1[0][0] \n__________________________________________________________________________________________________\nout_relu (ReLU) (None, 5, 5, 1280) 0 Conv_1_bn[0][0] \n==================================================================================================\nTotal params: 2,257,984\nTrainable params: 0\nNon-trainable params: 2,257,984\n__________________________________________________________________________________________________\n" ], [ "global_average_layer = tf.keras.layers.GlobalAveragePooling2D()\nfeature_batch_average = global_average_layer(feature_batch)\nprint(feature_batch_average.shape)", "(32, 1280)\n" ], [ "prediction_layer = tf.keras.layers.Dense(1)\nprediction_batch = prediction_layer(feature_batch_average)\nprint(prediction_batch.shape)", "(32, 1)\n" ] ], [ [ "### 将上述的各个层集合成新的 Model", "_____no_output_____" ] ], [ [ "inputs = tf.keras.Input(shape=(160, 160, 3))\nx = rescale_input(inputs)\nx = base_model(x, training=False)\nx = tf.keras.layers.Dropout(0.2)(x)\noutputs = prediction_layer(x)\nmodel = tf.keras.Model(inputs, outputs)", "_____no_output_____" ] ], [ [ "### 编译模型", "_____no_output_____" ] ], [ [ "lr = 0.0001\nmodel.compile(optimizer=tf.keras.optimizers.Adam(lr=lr),\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics='accuracy')", "_____no_output_____" ], [ "model.summary()", "Model: 
\"model_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_3 (InputLayer) [(None, 160, 160, 3)] 0 \n_________________________________________________________________\nrescaling (Rescaling) (None, 160, 160, 3) 0 \n_________________________________________________________________\nmobilenetv2_1.00_160 (Functi (None, 5, 5, 1280) 2257984 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 5, 5, 1280) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 5, 5, 1) 1281 \n=================================================================\nTotal params: 2,259,265\nTrainable params: 1,281\nNon-trainable params: 2,257,984\n_________________________________________________________________\n" ] ], [ [ "## 3.训练模型", "_____no_output_____" ], [ "### 迁移学习后的Validation准确率", "_____no_output_____" ] ], [ [ "epochs = 10\nhistory = model.fit(train_dataset,\n epochs=initial_epochs,\n validation_data=validation_dataset)", "Epoch 1/10\n63/63 [==============================] - 33s 501ms/step - loss: 0.6727 - accuracy: 0.5925 - val_loss: 0.5018 - val_accuracy: 0.7060\nEpoch 2/10\n63/63 [==============================] - 23s 360ms/step - loss: 0.4419 - accuracy: 0.7635 - val_loss: 0.3507 - val_accuracy: 0.8390\nEpoch 3/10\n63/63 [==============================] - 19s 294ms/step - loss: 0.3292 - accuracy: 0.8435 - val_loss: 0.2682 - val_accuracy: 0.9110\nEpoch 4/10\n63/63 [==============================] - 19s 295ms/step - loss: 0.2573 - accuracy: 0.9000 - val_loss: 0.2208 - val_accuracy: 0.9310\nEpoch 5/10\n63/63 [==============================] - 19s 296ms/step - loss: 0.2187 - accuracy: 0.9180 - val_loss: 0.1895 - val_accuracy: 0.9430\nEpoch 6/10\n63/63 [==============================] - 20s 315ms/step - loss: 0.1942 - accuracy: 0.9280 - val_loss: 0.1679 - val_accuracy: 0.9510\nEpoch 7/10\n63/63 [==============================] - 19s 307ms/step - loss: 0.1683 - accuracy: 0.9360 - val_loss: 0.1523 - val_accuracy: 0.9540\nEpoch 8/10\n63/63 [==============================] - 19s 292ms/step - loss: 0.1510 - accuracy: 0.9485 - val_loss: 0.1406 - val_accuracy: 0.9550\nEpoch 9/10\n63/63 [==============================] - 19s 300ms/step - loss: 0.1393 - accuracy: 0.9495 - val_loss: 0.1314 - val_accuracy: 0.9590\nEpoch 10/10\n63/63 [==============================] - 18s 290ms/step - loss: 0.1318 - accuracy: 0.9570 - val_loss: 0.1239 - val_accuracy: 0.9600\n" ] ], [ [ "### 学习曲线", "_____no_output_____" ] ], [ [ "acc = history.history['accuracy']\nval_acc = history.history['val_accuracy']\n\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nplt.figure(figsize=(8, 8))\nplt.subplot(2, 1, 1)\nplt.plot(acc, label='Training Accuracy')\nplt.plot(val_acc, label='Validation Accuracy')\nplt.legend(loc='lower right')\nplt.ylabel('Accuracy')\nplt.ylim([min(plt.ylim()),1])\nplt.title('Training and Validation Accuracy')\n\nplt.subplot(2, 1, 2)\nplt.plot(loss, label='Training Loss')\nplt.plot(val_loss, label='Validation Loss')\nplt.legend(loc='upper right')\nplt.ylabel('Cross Entropy')\nplt.ylim([0,1.0])\nplt.title('Training and Validation Loss')\nplt.xlabel('epoch')\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d5e70d979cfa23f32fcfff805e64b391e75cb3
357,185
ipynb
Jupyter Notebook
Titanic/titanic.ipynb
hikaruya8/kaggle
f0111ce67d9822f435b3b5503b180320e40d1dd2
[ "MIT" ]
null
null
null
Titanic/titanic.ipynb
hikaruya8/kaggle
f0111ce67d9822f435b3b5503b180320e40d1dd2
[ "MIT" ]
null
null
null
Titanic/titanic.ipynb
hikaruya8/kaggle
f0111ce67d9822f435b3b5503b180320e40d1dd2
[ "MIT" ]
null
null
null
47.59928
17,204
0.39421
[ [ [ "import pandas as pd", "_____no_output_____" ], [ "import numpy as np", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "import seaborn as sns", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "train = pd.read_csv(\"data/train.csv\", header = 0)", "_____no_output_____" ], [ "test = pd.read_csv(\"data/test.csv\", header= 0)", "_____no_output_____" ], [ "train.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 891 entries, 0 to 890\nData columns (total 12 columns):\nPassengerId 891 non-null int64\nSurvived 891 non-null int64\nPclass 891 non-null int64\nName 891 non-null object\nSex 891 non-null object\nAge 714 non-null float64\nSibSp 891 non-null int64\nParch 891 non-null int64\nTicket 891 non-null object\nFare 891 non-null float64\nCabin 204 non-null object\nEmbarked 889 non-null object\ndtypes: float64(2), int64(5), object(5)\nmemory usage: 83.6+ KB\n" ], [ "train.head(5)", "_____no_output_____" ], [ "train.Embarked = train.Embarked.replace ([\"C\", \"Q\", \"S\"], [0, 1, 2])", "_____no_output_____" ], [ "train[\"Embarked\"].fillna(train.Embarked.mean(), inplace=True)", "_____no_output_____" ], [ "train.Sex = train.Sex.replace(['male', 'female'], [0, 1])", "_____no_output_____" ], [ "train[\"Age\"].fillna(train.Age.mean(), inplace=True)", "_____no_output_____" ], [ "train", "_____no_output_____" ], [ "corrmat = train.corr()", "_____no_output_____" ], [ "corrmat", "_____no_output_____" ], [ "f, ax = plt.subplots(figsize=(12, 9))", "_____no_output_____" ], [ "sns.heatmap(corrmat, vmax=0.8, square=True)", "_____no_output_____" ], [ "split_data = []", "_____no_output_____" ], [ "for survived in [0,1]:\n split_data.append(train[train.Survived == survived])", "_____no_output_____" ], [ "split_data", "_____no_output_____" ], [ "temp = [i[\"Pclass\"].dropna() for i in split_data]", "_____no_output_____" ], [ "print(temp)", "[0 3\n4 3\n5 3\n6 1\n7 3\n12 3\n13 3\n14 3\n16 3\n18 3\n20 2\n24 3\n26 3\n27 1\n29 3\n30 1\n33 2\n34 1\n35 1\n37 3\n38 3\n40 3\n41 2\n42 3\n45 3\n46 3\n48 3\n49 3\n50 3\n51 3\n ..\n844 3\n845 3\n846 3\n847 3\n848 2\n850 3\n851 3\n852 3\n854 2\n859 3\n860 3\n861 2\n863 3\n864 2\n867 1\n868 3\n870 3\n872 1\n873 3\n876 3\n877 3\n878 3\n881 3\n882 3\n883 2\n884 3\n885 3\n886 2\n888 3\n890 3\nName: Pclass, Length: 549, dtype: int64, 1 1\n2 3\n3 1\n8 3\n9 2\n10 3\n11 1\n15 2\n17 2\n19 3\n21 2\n22 3\n23 1\n25 3\n28 3\n31 1\n32 3\n36 3\n39 3\n43 2\n44 3\n47 3\n52 1\n53 2\n55 1\n56 2\n58 2\n61 1\n65 3\n66 2\n ..\n809 1\n820 1\n821 3\n823 3\n827 2\n828 3\n829 1\n830 3\n831 2\n835 1\n838 3\n839 1\n842 1\n849 1\n853 1\n855 3\n856 1\n857 1\n858 3\n862 1\n865 2\n866 2\n869 3\n871 1\n874 2\n875 3\n879 1\n880 2\n887 1\n889 1\nName: Pclass, Length: 342, dtype: int64]\n" ], [ "plt.hist(temp, histtype=\"barstacked\", bins=3)", "/Users/yamadahikaru/.pyenv/versions/anaconda3-5.0.1/lib/python3.6/site-packages/numpy/core/fromnumeric.py:57: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) 
instead\n return getattr(obj, method)(*args, **kwds)\n" ], [ "plt.show()", "_____no_output_____" ], [ "temp = [i[\"Age\"].dropna() for i in split_data]", "_____no_output_____" ], [ "plt.hist(temp, histtype=\"barstacked\", bins=16)", "/Users/yamadahikaru/.pyenv/versions/anaconda3-5.0.1/lib/python3.6/site-packages/numpy/core/fromnumeric.py:57: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead\n return getattr(obj, method)(*args, **kwds)\n" ], [ "plt.show()", "_____no_output_____" ], [ "train", "_____no_output_____" ], [ "train.dtypes", "_____no_output_____" ], [ "train = train.replace(\"male\", 0).replace(\"female\", 1)", "_____no_output_____" ], [ "train.dtypes", "_____no_output_____" ], [ "train", "_____no_output_____" ], [ "corrmat = train.corr()", "_____no_output_____" ], [ "corrmat", "_____no_output_____" ], [ "sns.heatmap(corrmat, vmax=0.8, square=True)", "_____no_output_____" ], [ "combine_salutation = [train]", "_____no_output_____" ], [ "combine_salutation", "_____no_output_____" ], [ "for train in combine_salutation: \n train['Salutation'] = train.Name.str.extract(' ([A-Za-z]+).', expand=False) \nfor train in combine_salutation: \n train['Salutation'] = train['Salutation'].replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')\n train['Salutation'] = train['Salutation'].replace('Mlle', 'Miss')\n train['Salutation'] = train['Salutation'].replace('Ms', 'Miss')\n train['Salutation'] = train['Salutation'].replace('Mme', 'Mrs')\n del train['Name']\nSalutation_mapping = {\"Mr\": 1, \"Miss\": 2, \"Mrs\": 3, \"Master\": 4, \"Rare\": 5} \nfor train in combine_salutation: \n train['Salutation'] = train['Salutation'].map(Salutation_mapping) \n train['Salutation'] = train['Salutation'].fillna(0)", "_____no_output_____" ], [ "combine_salutation", "_____no_output_____" ], [ "combine_tickets = combine_salutation", "_____no_output_____" ], [ "combine_tickets", "_____no_output_____" ], [ "for train in combine_tickets:\n train['Ticket_Lett'] = train['Ticket'].apply(lambda x: str(x)[0])\n train['Ticket_Lett'] = train['Ticket_Lett'].apply(lambda x: str(x)) \n train['Ticket_Lett'] = np.where((train['Ticket_Lett']).isin(['1', '2', '3', 'S', 'P', 'C', 'A']), train['Ticket_Lett'], np.where((train['Ticket_Lett']).isin(['W', '4', '7', '6', 'L', '5', '8']), '0','0')) \n train['Ticket_Len'] = train['Ticket'].apply(lambda x: len(x)) \n del train['Ticket'] \ntrain['Ticket_Lett']=train['Ticket_Lett'].replace(\"1\",1).replace(\"2\",2).replace(\"3\",3).replace(\"0\",0).replace(\"S\",3).replace(\"P\",1).replace(\"C\",2).replace(\"A\",0)\n", "_____no_output_____" ], [ "combine_tickets", "_____no_output_____" ], [ "combine_cabin = combine_tickets", "_____no_output_____" ], [ "for train in combine_cabin: \n train['Cabin_Lett'] = train['Cabin'].apply(lambda x: str(x)[0]) \n train['Cabin_Lett'] = train['Cabin_Lett'].apply(lambda x: str(x)) \n train['Cabin_Lett'] = np.where((train['Cabin_Lett']).isin([ 'F', 'E', 'D', 'C', 'B', 'A']),train['Cabin_Lett'], np.where((train['Cabin_Lett']).isin(['W', '4', '7', '6', 'L', '5', '8']), '0','0'))\ndel train['Cabin'] \ntrain['Cabin_Lett']=train['Cabin_Lett'].replace(\"A\",1).replace(\"B\",2).replace(\"C\",1).replace(\"0\",0).replace(\"D\",2).replace(\"E\",2).replace(\"F\",1)", "_____no_output_____" ], [ "train[\"Familysize\"] = train[\"SibSp\"] + train[\"Parch\"] + 1", "_____no_output_____" ], [ "train_data = train.values\nxs = train_data[:, 2:] # Pclass以降の変数\ny = 
train_data[:, 1] # 正解データ", "_____no_output_____" ], [ "xs", "_____no_output_____" ], [ "y", "_____no_output_____" ], [ "test.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 418 entries, 0 to 417\nData columns (total 11 columns):\nPassengerId 418 non-null int64\nPclass 418 non-null int64\nName 418 non-null object\nSex 418 non-null object\nAge 332 non-null float64\nSibSp 418 non-null int64\nParch 418 non-null int64\nTicket 418 non-null object\nFare 417 non-null float64\nCabin 91 non-null object\nEmbarked 418 non-null object\ndtypes: float64(2), int64(4), object(5)\nmemory usage: 36.0+ KB\n" ], [ "test['Age'].fillna(train.Age.mean(), inplace = True)", "_____no_output_____" ], [ "test[\"Fare\"].fillna(train.Fare.mean(), inplace = True)", "_____no_output_____" ], [ "test.Name = test.Name.replace(\"male\",0).replace(\"female\",1)", "_____no_output_____" ], [ "test.Embarked = test.Embarked.replace ([\"C\", \"Q\", \"S\"], [0, 1, 2])", "_____no_output_____" ], [ "test.Sex = test.Sex.replace(['male', 'female'], [0, 1])", "_____no_output_____" ], [ "test", "_____no_output_____" ], [ "combine = [test]\nfor test in combine:\n test['Salutation'] = test.Name.str.extract(' ([A-Za-z]+)\\.', expand=False)\nfor test in combine:\n test['Salutation'] = test['Salutation'].replace(['Lady', 'Countess','Capt', 'Col',\\\n 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')\n\n test['Salutation'] = test['Salutation'].replace('Mlle', 'Miss')\n test['Salutation'] = test['Salutation'].replace('Ms', 'Miss')\n test['Salutation'] = test['Salutation'].replace('Mme', 'Mrs')\n del test['Name']\nSalutation_mapping = {\"Mr\": 1, \"Miss\": 2, \"Mrs\": 3, \"Master\": 4, \"Rare\": 5}\n\nfor test in combine:\n test['Salutation'] = test['Salutation'].map(Salutation_mapping)\n test['Salutation'] = test['Salutation'].fillna(0)\n\nfor test in combine:\n test['Ticket_Lett'] = test['Ticket'].apply(lambda x: str(x)[0])\n test['Ticket_Lett'] = test['Ticket_Lett'].apply(lambda x: str(x))\n test['Ticket_Lett'] = np.where((test['Ticket_Lett']).isin(['1', '2', '3', 'S', 'P', 'C', 'A']), test['Ticket_Lett'],\n np.where((test['Ticket_Lett']).isin(['W', '4', '7', '6', 'L', '5', '8']),\n '0', '0'))\n test['Ticket_Len'] = test['Ticket'].apply(lambda x: len(x))\n del test['Ticket']\ntest['Ticket_Lett']=test['Ticket_Lett'].replace(\"1\",1).replace(\"2\",2).replace(\"3\",3).replace(\"0\",0).replace(\"S\",3).replace(\"P\",1).replace(\"C\",2).replace(\"A\",0) \n\nfor test in combine:\n test['Cabin_Lett'] = test['Cabin'].apply(lambda x: str(x)[0])\n test['Cabin_Lett'] = test['Cabin_Lett'].apply(lambda x: str(x))\n test['Cabin_Lett'] = np.where((test['Cabin_Lett']).isin(['T', 'H', 'G', 'F', 'E', 'D', 'C', 'B', 'A']),test['Cabin_Lett'],\n np.where((test['Cabin_Lett']).isin(['W', '4', '7', '6', 'L', '5', '8']),\n '0','0')) \n del test['Cabin']\ntest['Cabin_Lett']=test['Cabin_Lett'].replace(\"A\",1).replace(\"B\",2).replace(\"C\",1).replace(\"0\",0).replace(\"D\",2).replace(\"E\",2).replace(\"F\",1).replace(\"G\",1) \n\ntest[\"FamilySize\"] = train[\"SibSp\"] + train[\"Parch\"] + 1\n \ntest_data = test.values\nxs_test = test_data[:, 1:]", "_____no_output_____" ], [ "train.head(3)", "_____no_output_____" ], [ "test.head(3)", "_____no_output_____" ], [ "from sklearn.ensemble import RandomForestClassifier", "_____no_output_____" ], [ "model = RandomForestClassifier()", "_____no_output_____" ], [ "model.fit(xs, y)", "_____no_output_____" ], [ "Y_pred = model.predict(xs_test)", "_____no_output_____" ], [ "import csv\nwith 
open(\"predict_result_data.csv\", \"w\") as f:\n writer = csv.writer(f, lineterminator='\\n')\n writer.writerow([\"PassengerId\", \"Survived\"])\n for pid, survived in zip(test_data[:,0].astype(int), Y_pred.astype(int)):\n writer.writerow([pid, survived])", "_____no_output_____" ], [ "jupyter nbconvert --to python test.ipynb", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d5f97ddb9dceb96a4c490238bb938d5a425cdd
7,605
ipynb
Jupyter Notebook
7.16.ipynb
liangfhaott3/python
e6651cd3399cfb9bc7b1e1aa0f612583f1510d45
[ "Apache-2.0" ]
null
null
null
7.16.ipynb
liangfhaott3/python
e6651cd3399cfb9bc7b1e1aa0f612583f1510d45
[ "Apache-2.0" ]
null
null
null
7.16.ipynb
liangfhaott3/python
e6651cd3399cfb9bc7b1e1aa0f612583f1510d45
[ "Apache-2.0" ]
null
null
null
17.205882
74
0.434845
[ [ [ "# 基本程序设计\n- 一切代码输入,请使用英文输入法", "_____no_output_____" ], [ "## 编写一个简单的程序\n- 圆公式面积: area = radius \\* radius \\* 3.1415", "_____no_output_____" ], [ "### 在Python里面不需要定义数据的类型", "_____no_output_____" ], [ "## 控制台的读取与输入\n- input 输入进去的是字符串\n- eval", "_____no_output_____" ], [ "- 在jupyter用shift + tab 键可以跳出解释文档", "_____no_output_____" ], [ "## 变量命名的规范\n- 由字母、数字、下划线构成\n- 不能以数字开头 \\*\n- 标识符不能是关键词(实际上是可以强制改变的,但是对于代码规范而言是极其不适合)\n- 可以是任意长度\n- 驼峰式命名", "_____no_output_____" ], [ "## 变量、赋值语句和赋值表达式\n- 变量: 通俗理解为可以变化的量\n- x = 2 \\* x + 1 在数学中是一个方程,而在语言中它是一个表达式\n- test = test + 1 \\* 变量在赋值之前必须有值", "_____no_output_____" ], [ "## 同时赋值\nvar1, var2,var3... = exp1,exp2,exp3...", "_____no_output_____" ], [ "## 定义常量\n- 常量:表示一种定值标识符,适合于多次使用的场景。比如PI\n- 注意:在其他低级语言中如果定义了常量,那么,该常量是不可以被改变的,但是在Python中一切皆对象,常量也是可以被改变的", "_____no_output_____" ], [ "## 数值数据类型和运算符\n- 在Python中有两种数值类型(int 和 float)适用于加减乘除、模、幂次\n<img src = \"../Photo/01.jpg\"></img>", "_____no_output_____" ], [ "## 运算符 /、//、**", "_____no_output_____" ], [ "## 运算符 %", "_____no_output_____" ], [ "## EP:\n- 25/4 多少,如果要将其转变为整数该怎么改写\n- 输入一个数字判断是奇数还是偶数\n- 进阶: 输入一个秒数,写一个程序将其转换成分和秒:例如500秒等于8分20秒\n- 进阶: 如果今天是星期六,那么10天以后是星期几? 提示:每个星期的第0天是星期天", "_____no_output_____" ], [ "## 科学计数法\n- 1.234e+2\n- 1.234e-2", "_____no_output_____" ], [ "## 计算表达式和运算优先级\n<img src = \"../Photo/02.png\"></img>\n<img src = \"../Photo/03.png\"></img>", "_____no_output_____" ], [ "## 增强型赋值运算\n<img src = \"../Photo/04.png\"></img>", "_____no_output_____" ], [ "## 类型转换\n- float -> int\n- 四舍五入 round", "_____no_output_____" ], [ "## EP:\n- 如果一个年营业税为0.06%,那么对于197.55e+2的年收入,需要交税为多少?(结果保留2为小数)\n- 必须使用科学计数法", "_____no_output_____" ], [ "# Project\n- 用Python写一个贷款计算器程序:输入的是月供(monthlyPayment) 输出的是总还款数(totalpayment)\n![](../Photo/05.png)", "_____no_output_____" ], [ "# Homework\n- 1\n<img src=\"../Photo/06.png\"></img>", "_____no_output_____" ] ], [ [ "C=input()\nF=float((9/5))*float(C)+32\nprint(F)", "43\n109.4\n" ] ], [ [ "- 2\n<img src=\"../Photo/07.png\"></img>", "_____no_output_____" ] ], [ [ "r=input()\nh=input()\nS=float(r)**2*3.14\nV=float(S)*float(h)\nprint('底面积为:%.2f'%S)\nprint('体积为:%.2f'%V)", "5.5\n12\n底面积为:94.98\n体积为:1139.82\n" ] ], [ [ "- 3\n<img src=\"../Photo/08.png\"></img>", "_____no_output_____" ] ], [ [ "feet=input()\nmeters=float(feet)*0.305\nprint(meters)", "16.5\n5.0325\n" ] ], [ [ "- 4\n<img src=\"../Photo/10.png\"></img>", "_____no_output_____" ] ], [ [ "k=input()\nt1=input()\nt2=input()\nq=float(k)*(float(t2)-float(t1))*4184\nprint(q)", "55.5\n3.5\n10.5\n1625484.0\n" ] ], [ [ "- 5\n<img src=\"../Photo/11.png\"></img>", "_____no_output_____" ] ], [ [ "ce=input()\nnll=input()\nlx=float(ce)*(float(nll)/1200)\nprint('%.5f'%lx)", "1000\n3.5\n2.91667\n" ] ], [ [ "- 6\n<img src=\"../Photo/12.png\"></img>", "_____no_output_____" ] ], [ [ "v0=input()\nv1=input()\nt=input()\npj=(float(v1)-float(v0))/float(t)\nprint('%.4f'%pj)", "5.5\n50.9\n4.5\n10.0889\n" ] ], [ [ "- 7 进阶\n<img src=\"../Photo/13.png\"></img>", "_____no_output_____" ] ], [ [ "amout=input()\nmoney=0\nfor i in range(6):\n money=(float(amout)+float(money))*(1+0.00417)\nprint('%.2f'%money)\n ", "100\n608.82\n" ] ], [ [ "- 8 进阶\n<img src=\"../Photo/14.png\"></img>", "_____no_output_____" ] ], [ [ "number=int(input())\nif (number>1000)or (number<=0):\n print('false')\nelse:\n gewei=number%10\n shiwei=(number//10)%10\n baiwei=number//100\n sum=gewei+shiwei+baiwei\n print(sum)\n", "999\n27\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d5fd293509f7cc6b10d10e68f2f967cd178626
233,767
ipynb
Jupyter Notebook
experiments/tuned_1v2/oracle.run1/trials/2/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
experiments/tuned_1v2/oracle.run1/trials/2/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
experiments/tuned_1v2/oracle.run1/trials/2/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
108.325765
75,528
0.803634
[ [ [ "# PTN Template\nThis notebook serves as a template for single dataset PTN experiments \nIt can be run on its own by setting STANDALONE to True (do a find for \"STANDALONE\" to see where) \nBut it is intended to be executed as part of a *papermill.py script. See any of the \nexperimentes with a papermill script to get started with that workflow. ", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\n \nimport os, json, sys, time, random\nimport numpy as np\nimport torch\nfrom torch.optim import Adam\nfrom easydict import EasyDict\nimport matplotlib.pyplot as plt\n\nfrom steves_models.steves_ptn import Steves_Prototypical_Network\n\nfrom steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper\nfrom steves_utils.iterable_aggregator import Iterable_Aggregator\nfrom steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig\nfrom steves_utils.torch_sequential_builder import build_sequential\nfrom steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader\nfrom steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)\nfrom steves_utils.PTN.utils import independent_accuracy_assesment\n\nfrom steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory\n\nfrom steves_utils.ptn_do_report import (\n get_loss_curve,\n get_results_table,\n get_parameters_table,\n get_domain_accuracies,\n)\n\nfrom steves_utils.transforms import get_chained_transform", "_____no_output_____" ] ], [ [ "# Required Parameters\nThese are allowed parameters, not defaults\nEach of these values need to be present in the injected parameters (the notebook will raise an exception if they are not present)\n\nPapermill uses the cell tag \"parameters\" to inject the real parameters below this cell.\nEnable tags to see what I mean", "_____no_output_____" ] ], [ [ "required_parameters = {\n \"experiment_name\",\n \"lr\",\n \"device\",\n \"seed\",\n \"dataset_seed\",\n \"labels_source\",\n \"labels_target\",\n \"domains_source\",\n \"domains_target\",\n \"num_examples_per_domain_per_label_source\",\n \"num_examples_per_domain_per_label_target\",\n \"n_shot\",\n \"n_way\",\n \"n_query\",\n \"train_k_factor\",\n \"val_k_factor\",\n \"test_k_factor\",\n \"n_epoch\",\n \"patience\",\n \"criteria_for_best\",\n \"x_transforms_source\",\n \"x_transforms_target\",\n \"episode_transforms_source\",\n \"episode_transforms_target\",\n \"pickle_name\",\n \"x_net\",\n \"NUM_LOGS_PER_EPOCH\",\n \"BEST_MODEL_PATH\",\n \"torch_default_dtype\"\n}", "_____no_output_____" ], [ "\n\nstandalone_parameters = {}\nstandalone_parameters[\"experiment_name\"] = \"STANDALONE PTN\"\nstandalone_parameters[\"lr\"] = 0.0001\nstandalone_parameters[\"device\"] = \"cuda\"\n\nstandalone_parameters[\"seed\"] = 1337\nstandalone_parameters[\"dataset_seed\"] = 1337\n\n\nstandalone_parameters[\"num_examples_per_domain_per_label_source\"]=100\nstandalone_parameters[\"num_examples_per_domain_per_label_target\"]=100\n\nstandalone_parameters[\"n_shot\"] = 3\nstandalone_parameters[\"n_query\"] = 2\nstandalone_parameters[\"train_k_factor\"] = 1\nstandalone_parameters[\"val_k_factor\"] = 2\nstandalone_parameters[\"test_k_factor\"] = 2\n\n\nstandalone_parameters[\"n_epoch\"] = 100\n\nstandalone_parameters[\"patience\"] = 10\nstandalone_parameters[\"criteria_for_best\"] = \"target_accuracy\"\n\nstandalone_parameters[\"x_transforms_source\"] = [\"unit_power\"]\nstandalone_parameters[\"x_transforms_target\"] = 
[\"unit_power\"]\nstandalone_parameters[\"episode_transforms_source\"] = []\nstandalone_parameters[\"episode_transforms_target\"] = []\n\nstandalone_parameters[\"torch_default_dtype\"] = \"torch.float32\" \n\n\n\nstandalone_parameters[\"x_net\"] = [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\":[-1, 1, 2, 256]}},\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":1, \"out_channels\":256, \"kernel_size\":(1,7), \"bias\":False, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":256, \"out_channels\":80, \"kernel_size\":(2,7), \"bias\":True, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 80*256, \"out_features\": 256}}, # 80 units per IQ pair\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n]\n\n# Parameters relevant to results\n# These parameters will basically never need to change\nstandalone_parameters[\"NUM_LOGS_PER_EPOCH\"] = 10\nstandalone_parameters[\"BEST_MODEL_PATH\"] = \"./best_model.pth\"\n\n# uncomment for CORES dataset\nfrom steves_utils.CORES.utils import (\n ALL_NODES,\n ALL_NODES_MINIMUM_1000_EXAMPLES,\n ALL_DAYS\n)\n\n\nstandalone_parameters[\"labels_source\"] = ALL_NODES\nstandalone_parameters[\"labels_target\"] = ALL_NODES\n\nstandalone_parameters[\"domains_source\"] = [1]\nstandalone_parameters[\"domains_target\"] = [2,3,4,5]\n\nstandalone_parameters[\"pickle_name\"] = \"cores.stratified_ds.2022A.pkl\"\n\n\n# Uncomment these for ORACLE dataset\n# from steves_utils.ORACLE.utils_v2 import (\n# ALL_DISTANCES_FEET,\n# ALL_RUNS,\n# ALL_SERIAL_NUMBERS,\n# )\n# standalone_parameters[\"labels_source\"] = ALL_SERIAL_NUMBERS\n# standalone_parameters[\"labels_target\"] = ALL_SERIAL_NUMBERS\n# standalone_parameters[\"domains_source\"] = [8,20, 38,50]\n# standalone_parameters[\"domains_target\"] = [14, 26, 32, 44, 56]\n# standalone_parameters[\"pickle_name\"] = \"oracle.frame_indexed.stratified_ds.2022A.pkl\"\n# standalone_parameters[\"num_examples_per_domain_per_label_source\"]=1000\n# standalone_parameters[\"num_examples_per_domain_per_label_target\"]=1000\n\n# Uncomment these for Metahan dataset\n# standalone_parameters[\"labels_source\"] = list(range(19))\n# standalone_parameters[\"labels_target\"] = list(range(19))\n# standalone_parameters[\"domains_source\"] = [0]\n# standalone_parameters[\"domains_target\"] = [1]\n# standalone_parameters[\"pickle_name\"] = \"metehan.stratified_ds.2022A.pkl\"\n# standalone_parameters[\"n_way\"] = len(standalone_parameters[\"labels_source\"])\n# standalone_parameters[\"num_examples_per_domain_per_label_source\"]=200\n# standalone_parameters[\"num_examples_per_domain_per_label_target\"]=100\n\n\nstandalone_parameters[\"n_way\"] = len(standalone_parameters[\"labels_source\"])", "_____no_output_____" ], [ "# Parameters\nparameters = {\n \"experiment_name\": \"tuned_1v2:oracle.run1\",\n \"device\": \"cuda\",\n \"lr\": 0.0001,\n \"labels_source\": [\n \"3123D52\",\n \"3123D65\",\n \"3123D79\",\n \"3123D80\",\n \"3123D54\",\n \"3123D70\",\n \"3123D7B\",\n \"3123D89\",\n \"3123D58\",\n \"3123D76\",\n \"3123D7D\",\n \"3123EFE\",\n 
\"3123D64\",\n \"3123D78\",\n \"3123D7E\",\n \"3124E4A\",\n ],\n \"labels_target\": [\n \"3123D52\",\n \"3123D65\",\n \"3123D79\",\n \"3123D80\",\n \"3123D54\",\n \"3123D70\",\n \"3123D7B\",\n \"3123D89\",\n \"3123D58\",\n \"3123D76\",\n \"3123D7D\",\n \"3123EFE\",\n \"3123D64\",\n \"3123D78\",\n \"3123D7E\",\n \"3124E4A\",\n ],\n \"episode_transforms_source\": [],\n \"episode_transforms_target\": [],\n \"domains_source\": [8, 32, 50],\n \"domains_target\": [14, 20, 26, 38, 44],\n \"num_examples_per_domain_per_label_source\": -1,\n \"num_examples_per_domain_per_label_target\": -1,\n \"n_shot\": 3,\n \"n_way\": 16,\n \"n_query\": 2,\n \"train_k_factor\": 3,\n \"val_k_factor\": 2,\n \"test_k_factor\": 2,\n \"torch_default_dtype\": \"torch.float32\",\n \"n_epoch\": 50,\n \"patience\": 3,\n \"criteria_for_best\": \"target_accuracy\",\n \"x_net\": [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\": [-1, 1, 2, 256]}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 1,\n \"out_channels\": 256,\n \"kernel_size\": [1, 7],\n \"bias\": False,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 256}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 256,\n \"out_channels\": 80,\n \"kernel_size\": [2, 7],\n \"bias\": True,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 20480, \"out_features\": 256}},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\": 256}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n ],\n \"NUM_LOGS_PER_EPOCH\": 10,\n \"BEST_MODEL_PATH\": \"./best_model.pth\",\n \"pickle_name\": \"oracle.Run1_10kExamples_stratified_ds.2022A.pkl\",\n \"x_transforms_source\": [\"unit_mag\"],\n \"x_transforms_target\": [\"unit_mag\"],\n \"dataset_seed\": 1337,\n \"seed\": 1337,\n}\n", "_____no_output_____" ], [ "# Set this to True if you want to run this template directly\nSTANDALONE = False\nif STANDALONE:\n print(\"parameters not injected, running with standalone_parameters\")\n parameters = standalone_parameters\n\nif not 'parameters' in locals() and not 'parameters' in globals():\n raise Exception(\"Parameter injection failed\")\n\n#Use an easy dict for all the parameters\np = EasyDict(parameters)\n\nsupplied_keys = set(p.keys())\n\nif supplied_keys != required_parameters:\n print(\"Parameters are incorrect\")\n if len(supplied_keys - required_parameters)>0: print(\"Shouldn't have:\", str(supplied_keys - required_parameters))\n if len(required_parameters - supplied_keys)>0: print(\"Need to have:\", str(required_parameters - supplied_keys))\n raise RuntimeError(\"Parameters are incorrect\")\n\n", "_____no_output_____" ], [ "###################################\n# Set the RNGs and make it all deterministic\n###################################\nnp.random.seed(p.seed)\nrandom.seed(p.seed)\ntorch.manual_seed(p.seed)\n\ntorch.use_deterministic_algorithms(True) ", "_____no_output_____" ], [ "###########################################\n# The stratified datasets honor this\n###########################################\ntorch.set_default_dtype(eval(p.torch_default_dtype))", "_____no_output_____" ], [ "###################################\n# Build the network(s)\n# Note: 
It's critical to do this AFTER setting the RNG\n# (This is due to the randomized initial weights)\n###################################\nx_net = build_sequential(p.x_net)", "_____no_output_____" ], [ "start_time_secs = time.time()", "_____no_output_____" ], [ "###################################\n# Build the dataset\n###################################\n\nif p.x_transforms_source == []: x_transform_source = None\nelse: x_transform_source = get_chained_transform(p.x_transforms_source) \n\nif p.x_transforms_target == []: x_transform_target = None\nelse: x_transform_target = get_chained_transform(p.x_transforms_target)\n\nif p.episode_transforms_source == []: episode_transform_source = None\nelse: raise Exception(\"episode_transform_source not implemented\")\n\nif p.episode_transforms_target == []: episode_transform_target = None\nelse: raise Exception(\"episode_transform_target not implemented\")\n\n\neaf_source = Episodic_Accessor_Factory(\n labels=p.labels_source,\n domains=p.domains_source,\n num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,\n iterator_seed=p.seed,\n dataset_seed=p.dataset_seed,\n n_shot=p.n_shot,\n n_way=p.n_way,\n n_query=p.n_query,\n train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),\n pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),\n x_transform_func=x_transform_source,\n example_transform_func=episode_transform_source,\n \n)\ntrain_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()\n\n\neaf_target = Episodic_Accessor_Factory(\n labels=p.labels_target,\n domains=p.domains_target,\n num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,\n iterator_seed=p.seed,\n dataset_seed=p.dataset_seed,\n n_shot=p.n_shot,\n n_way=p.n_way,\n n_query=p.n_query,\n train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),\n pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),\n x_transform_func=x_transform_target,\n example_transform_func=episode_transform_target,\n)\ntrain_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()\n\n\ntransform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only\n\ntrain_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)\nval_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)\ntest_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)\n\ntrain_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)\nval_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)\ntest_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)\n\ndatasets = EasyDict({\n \"source\": {\n \"original\": {\"train\":train_original_source, \"val\":val_original_source, \"test\":test_original_source},\n \"processed\": {\"train\":train_processed_source, \"val\":val_processed_source, \"test\":test_processed_source}\n },\n \"target\": {\n \"original\": {\"train\":train_original_target, \"val\":val_original_target, \"test\":test_original_target},\n \"processed\": {\"train\":train_processed_target, \"val\":val_processed_target, \"test\":test_processed_target}\n },\n})", "_____no_output_____" ], [ "# Some quick unit tests on the data\nfrom steves_utils.transforms import get_average_power, get_average_magnitude\n\nq_x, 
q_y, s_x, s_y, truth = next(iter(train_processed_source))\n\nassert q_x.dtype == eval(p.torch_default_dtype)\nassert s_x.dtype == eval(p.torch_default_dtype)\n\nprint(\"Visually inspect these to see if they line up with expected values given the transforms\")\nprint('x_transforms_source', p.x_transforms_source)\nprint('x_transforms_target', p.x_transforms_target)\nprint(\"Average magnitude, source:\", get_average_magnitude(q_x[0].numpy()))\nprint(\"Average power, source:\", get_average_power(q_x[0].numpy()))\n\nq_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))\nprint(\"Average magnitude, target:\", get_average_magnitude(q_x[0].numpy()))\nprint(\"Average power, target:\", get_average_power(q_x[0].numpy()))\n", "Visually inspect these to see if they line up with expected values given the transforms\nx_transforms_source ['unit_mag']\nx_transforms_target ['unit_mag']\nAverage magnitude, source: 1.0\nAverage power, source: 1.3161097\n" ], [ "###################################\n# Build the model\n###################################\nmodel = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))\noptimizer = Adam(params=model.parameters(), lr=p.lr)", "(2, 256)\n" ], [ "###################################\n# train\n###################################\njig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)\n\njig.train(\n train_iterable=datasets.source.processed.train,\n source_val_iterable=datasets.source.processed.val,\n target_val_iterable=datasets.target.processed.val,\n num_epochs=p.n_epoch,\n num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,\n patience=p.patience,\n optimizer=optimizer,\n criteria_for_best=p.criteria_for_best,\n)", "epoch: 1, [batch: 1 / 12600], examples_per_second: 25.5044, train_label_loss: 2.8021, \n" ], [ "total_experiment_time_secs = time.time() - start_time_secs", "_____no_output_____" ], [ "###################################\n# Evaluate the model\n###################################\nsource_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)\ntarget_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)\n\nsource_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)\ntarget_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)\n\nhistory = jig.get_history()\n\ntotal_epochs_trained = len(history[\"epoch_indices\"])\n\nval_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))\n\nconfusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)\nper_domain_accuracy = per_domain_accuracy_from_confusion(confusion)\n\n# Add a key to per_domain_accuracy for if it was a source domain\nfor domain, accuracy in per_domain_accuracy.items():\n per_domain_accuracy[domain] = {\n \"accuracy\": accuracy,\n \"source?\": domain in p.domains_source\n }\n\n# Do an independent accuracy assesment JUST TO BE SURE!\n# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)\n# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)\n# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)\n# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)\n\n# assert(_source_test_label_accuracy == source_test_label_accuracy)\n# assert(_target_test_label_accuracy == target_test_label_accuracy)\n# 
assert(_source_val_label_accuracy == source_val_label_accuracy)\n# assert(_target_val_label_accuracy == target_val_label_accuracy)\n\nexperiment = {\n \"experiment_name\": p.experiment_name,\n \"parameters\": dict(p),\n \"results\": {\n \"source_test_label_accuracy\": source_test_label_accuracy,\n \"source_test_label_loss\": source_test_label_loss,\n \"target_test_label_accuracy\": target_test_label_accuracy,\n \"target_test_label_loss\": target_test_label_loss,\n \"source_val_label_accuracy\": source_val_label_accuracy,\n \"source_val_label_loss\": source_val_label_loss,\n \"target_val_label_accuracy\": target_val_label_accuracy,\n \"target_val_label_loss\": target_val_label_loss,\n \"total_epochs_trained\": total_epochs_trained,\n \"total_experiment_time_secs\": total_experiment_time_secs,\n \"confusion\": confusion,\n \"per_domain_accuracy\": per_domain_accuracy,\n },\n \"history\": history,\n \"dataset_metrics\": get_dataset_metrics(datasets, \"ptn\"),\n}", "_____no_output_____" ], [ "ax = get_loss_curve(experiment)\nplt.show()", "_____no_output_____" ], [ "get_results_table(experiment)", "_____no_output_____" ], [ "get_domain_accuracies(experiment)", "_____no_output_____" ], [ "print(\"Source Test Label Accuracy:\", experiment[\"results\"][\"source_test_label_accuracy\"], \"Target Test Label Accuracy:\", experiment[\"results\"][\"target_test_label_accuracy\"])\nprint(\"Source Val Label Accuracy:\", experiment[\"results\"][\"source_val_label_accuracy\"], \"Target Val Label Accuracy:\", experiment[\"results\"][\"target_val_label_accuracy\"])", "Source Test Label Accuracy: 0.6930208333333333 Target Test Label Accuracy: 0.6151145833333334\nSource Val Label Accuracy: 0.69234375 Target Val Label Accuracy: 0.6133020833333334\n" ], [ "json.dumps(experiment)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d609257e1f05a69369216759e7bc1a5a16ab60
14,204
ipynb
Jupyter Notebook
Kara/.ipynb_checkpoints/grabbing_tweets_june_14-checkpoint.ipynb
rglukins/stock-tweet
bca5ce2d32d21ff6cb4feb393ec95167c2fe3a2e
[ "MIT" ]
null
null
null
Kara/.ipynb_checkpoints/grabbing_tweets_june_14-checkpoint.ipynb
rglukins/stock-tweet
bca5ce2d32d21ff6cb4feb393ec95167c2fe3a2e
[ "MIT" ]
null
null
null
Kara/.ipynb_checkpoints/grabbing_tweets_june_14-checkpoint.ipynb
rglukins/stock-tweet
bca5ce2d32d21ff6cb4feb393ec95167c2fe3a2e
[ "MIT" ]
1
2018-12-05T19:02:14.000Z
2018-12-05T19:02:14.000Z
24.963093
108
0.482047
[ [ [ "# Dependencies\nimport tweepy\nimport json\nimport numpy as np\nfrom config2 import consumer_key, consumer_secret, access_token, access_token_secret", "_____no_output_____" ], [ "\n", "_____no_output_____" ], [ "# Twitter API Keys\nconsumer_key = consumer_key\nconsumer_secret = consumer_secret\naccess_token = access_token\naccess_token_secret = access_token_secret", "_____no_output_____" ], [ "# Setup Tweepy API Authentication\nauth = tweepy.OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_token_secret)\napi = tweepy.API(auth, parser=tweepy.parsers.JSONParser())", "_____no_output_____" ], [ "target_term = '@facebook'", "_____no_output_____" ], [ "# Lists to hold sentiments\n# Variables for holding sentiments\ncompound_list = []\npositive_list = []\nnegative_list = []\nneutral_list = []\n\nfor x in range(1, 20):\n\n # Get all tweets from home feed\n public_tweets = api.user_timeline(target_term, page=x)\n\n # Loop through all tweets\n for tweet in public_tweets:\n\n # Run Vader Analysis on each tweet\n results = analyzer.polarity_scores(tweet[\"text\"])\n compound = results[\"compound\"]\n pos = results[\"pos\"]\n neu = results[\"neu\"]\n neg = results[\"neg\"]\n\n # Add each value to the appropriate list\n compound_list.append(compound)\n positive_list.append(pos)\n negative_list.append(neg)\n neutral_list.append(neu)", "_____no_output_____" ], [ "print(\nsum(compound_list)/len(compound_list),\nsum(positive_list)/len(positive_list),\nsum(negative_list)/len(negative_list),\nsum(neutral_list)/len(neutral_list))", "0.15938236842105247 0.1086210526315789 0.042015789473684235 0.8493921052631583\n" ], [ "sentiments = {\n 'Average Compounded': sum(compound_list) / len(compound_list),\n 'Average Negative': sum(negative_list) / len(negative_list),\n 'Average Positive': sum(positive_list) / len(positive_list),\n 'Average Neutral': sum(neutral_list) / len(neutral_list)\n}\n", "_____no_output_____" ], [ "sentiments", "_____no_output_____" ], [ "len(compound_list)", "_____no_output_____" ] ], [ [ "## mentions of facebook", "_____no_output_____" ] ], [ [ "# Search for all tweets\n# public_tweets = api.search(target_term, count=300, result_type=\"recent\")", "_____no_output_____" ], [ "# Twitter API Keys\nconsumer_key = consumer_key\nconsumer_secret = consumer_secret\naccess_token = access_token\naccess_token_secret = access_token_secret\n\n# Setup Tweepy API Authentication\nauth = tweepy.OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_token_secret)\napi = tweepy.API(auth, parser=tweepy.parsers.JSONParser())\n\n", "_____no_output_____" ], [ "print(target_term)", "@facebook\n" ], [ "# Loop through all public_tweets\n\n\nfb_tweets = []\n\ndate = []\noldest_tweet = None\nfor x in range(1,20):\n public_tweets = api.search(target_term, count=100, result_type=\"recent\", max_id=oldest_tweet)\n\n for tweet in public_tweets['statuses']:\n tweet_id = tweet[\"id\"]\n tweet_author = tweet[\"user\"][\"screen_name\"]\n tweet_text = tweet[\"text\"]\n\n fb_tweets.append(tweet['text'])\n date.append(tweet['created_at'])\n \n oldest_tweet = tweet_id - 1\n\nprint(len(fb_tweets))", "1892\n" ], [ "compound_list = []\npositive_list = []\nnegative_list = []\nneutral_list = []\n\nfor tweet in fb_tweets:\n\n # Run Vader Analysis on each tweet\n results = analyzer.polarity_scores(tweet)\n compound = results[\"compound\"]\n pos = results[\"pos\"]\n neu = results[\"neu\"]\n neg = results[\"neg\"]\n\n # Add each value to the appropriate list\n 
compound_list.append(compound)\n positive_list.append(pos)\n negative_list.append(neg)\n neutral_list.append(neu)\n", "_____no_output_____" ], [ "sentiments = {\n 'Average Compounded': sum(compound_list) / len(compound_list),\n 'Average Negative': sum(negative_list) / len(negative_list),\n 'Average Positive': sum(positive_list) / len(positive_list),\n 'Average Neutral': sum(neutral_list) / len(neutral_list)\n}\n", "_____no_output_____" ], [ "sentiments", "_____no_output_____" ], [ "len(fb_tweets)", "_____no_output_____" ], [ "date[1891]", "_____no_output_____" ], [ "june_14 = {\n 'Text': fb_tweets,\n 'Compounded': compound_list,\n 'Negative': negative_list,\n 'Positive': positive_list,\n 'Neutral': neutral_list,\n 'Date': date\n}", "_____no_output_____" ], [ "import pandas as pd", "_____no_output_____" ], [ "tweets_june_14_df = pd.DataFrame(june_14)", "_____no_output_____" ], [ "tweets_june_14_df.head()", "_____no_output_____" ], [ "tweets_june_14_df.to_csv('tweets_june_14.csv')", "_____no_output_____" ], [ "print(date[0])\nprint(date[1891])", "Fri Jun 15 00:10:34 +0000 2018\nThu Jun 14 18:50:53 +0000 2018\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d60e0bf4b601036a1cb0a95058eb13caaf66f1
31,807
ipynb
Jupyter Notebook
Wind_tunnel.ipynb
AcharyaRakesh/WindTunnel
4bcdb9a0c14570bc7c8fee5b03cbc9d11207999c
[ "MIT" ]
null
null
null
Wind_tunnel.ipynb
AcharyaRakesh/WindTunnel
4bcdb9a0c14570bc7c8fee5b03cbc9d11207999c
[ "MIT" ]
null
null
null
Wind_tunnel.ipynb
AcharyaRakesh/WindTunnel
4bcdb9a0c14570bc7c8fee5b03cbc9d11207999c
[ "MIT" ]
null
null
null
40.883033
12,844
0.614613
[ [ [ "### 1.Importing the Relevant Libraries", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport pandas as pd \nimport matplotlib.pyplot as plt\n\n\nfrom sklearn.linear_model import Ridge, Lasso, LinearRegression\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn import metrics\nfrom sklearn.ensemble import RandomForestRegressor\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler", "_____no_output_____" ] ], [ [ "### 2.Reading Data", "_____no_output_____" ] ], [ [ "df = pd.read_csv(\"WindTunnel.csv\")\ndf", "_____no_output_____" ], [ "df = pd.read_csv(\"WindTunnel.csv\")\nplt.xlabel('Freqency')\nplt.ylabel('Velocity')\nplt.plot(df.Freqency,df.Velocity)\n", "_____no_output_____" ], [ "reg = linear_model.LinearRegression()\nreg.fit(df[[\"Freqency\"]],df.Velocity)\n", "_____no_output_____" ], [ "reg.predict([[65.0]])", "_____no_output_____" ], [ "L=0.2\nr=0.1\nv=3.45\nA=0.0323\n\n\nCL = (2*L)/(r*(v**2)*A)\nCL", "_____no_output_____" ], [ "df = pd.read_csv(\"WindTunnel1.csv\")", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "df1 = df.copy()", "_____no_output_____" ], [ "df1[\"Coef_lift\"] = (2*df['Lift'])/(df['Dencity']*df['Area']*(df['Velocity']**2))", "_____no_output_____" ], [ "df1", "_____no_output_____" ], [ "X = df1.drop(['Coef_lift'],axis='columns')", "_____no_output_____" ], [ "y = df1.Coef_lift", "_____no_output_____" ] ], [ [ "### Data Pre-process ", "_____no_output_____" ] ], [ [ "\nX_train, X_valid, y_train, y_valid = train_test_split(X,y,test_size = 0.2,random_state = 10)", "_____no_output_____" ], [ "\n\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_valid)\n", "_____no_output_____" ], [ "algos = [LinearRegression(), Ridge(), Lasso(),\n KNeighborsRegressor(), DecisionTreeRegressor(),RandomForestRegressor()]\n\nnames = ['Linear Regression', 'Ridge Regression', 'Lasso Regression',\n 'K Neighbors Regressor', 'Decision Tree Regressor', 'RandomForestRegressor']\n\nrmse_list = []\n", "_____no_output_____" ], [ "for name in algos:\n model = name\n model.fit(X_train,y_train)\n y_pred = model.predict(X_valid)\n MSE= metrics.mean_squared_error(y_valid,y_pred)\n rmse = np.sqrt(MSE)\n rmse_list.append(rmse)", "C:\\Users\\hp\\.conda\\envs\\new\\lib\\site-packages\\sklearn\\ensemble\\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n" ], [ "evaluation = pd.DataFrame({'Model': names,\n 'RMSE': rmse_list})\n\nevaluation", "_____no_output_____" ] ], [ [ "### Building Model\n ", "_____no_output_____" ] ], [ [ "\nclf = Ridge()\nclf.fit(X_train,y_train)", "_____no_output_____" ], [ "clf.score(X_test,y_test)", "_____no_output_____" ], [ "clf.predict([[6.8,0.74,0.0323,0.27]])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7d61659678bb46bcafbd4134fee6d2b590b5ffa
1,570
ipynb
Jupyter Notebook
Lesson 02/Exercise_16_NER_with_spacy.ipynb
TrainingByPackt/Deep-Learning-for-Natural-Language-Processing
d9183b2a01fef044963e7ad967c6373b3887f0d1
[ "MIT" ]
29
2019-05-15T22:57:56.000Z
2022-03-17T02:11:33.000Z
Lesson 02/Exercise_16_NER_with_spacy.ipynb
TrainingByPackt/Deep-Learning-for-Natural-Language-Processing
d9183b2a01fef044963e7ad967c6373b3887f0d1
[ "MIT" ]
1
2021-02-07T22:52:55.000Z
2021-07-12T06:10:50.000Z
Lesson 02/Exercise_16_NER_with_spacy.ipynb
TrainingByPackt/Deep-Learning-for-Natural-Language-Processing
d9183b2a01fef044963e7ad967c6373b3887f0d1
[ "MIT" ]
42
2019-02-17T23:04:07.000Z
2022-01-16T05:47:32.000Z
21.805556
105
0.444586
[ [ [ "doc = nlp(u\"Shubhangi visited the Taj Mahal after taking a SpiceJet flight from Pune.\")", "_____no_output_____" ], [ "for ent in doc.ents:\n print(ent.text, ent.label_)\n", "_____no_output_____" ], [ "doc1 = nlp(u\"Shubhangi Hora visited the Taj Mahal after taking a SpiceJet flight from Pune.\")", "_____no_output_____" ], [ "for ent in doc1.ents:\n print(ent.text, ent.label_)\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e7d6215af10ae591273aac1f7cc1a6f00656f942
237,550
ipynb
Jupyter Notebook
TCI_2_PA.ipynb
poclab-web/fine-tuning-horac
450ee7e4fa43aa024ea3feb4e600a5a84ba4c04e
[ "BSD-3-Clause" ]
null
null
null
TCI_2_PA.ipynb
poclab-web/fine-tuning-horac
450ee7e4fa43aa024ea3feb4e600a5a84ba4c04e
[ "BSD-3-Clause" ]
null
null
null
TCI_2_PA.ipynb
poclab-web/fine-tuning-horac
450ee7e4fa43aa024ea3feb4e600a5a84ba4c04e
[ "BSD-3-Clause" ]
null
null
null
74.771797
17,830
0.61875
[ [ [ "from google.colab import drive\ndrive.mount('/content/drive')", "Mounted at /content/drive\n" ], [ "!wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh\n!chmod +x Miniconda3-latest-Linux-x86_64.sh\n!bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local\n!conda install -q -y -c rdkit rdkit python=3.7", "--2022-01-26 18:33:03-- https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh\nResolving repo.continuum.io (repo.continuum.io)... 104.18.200.79, 104.18.201.79, 2606:4700::6812:c94f, ...\nConnecting to repo.continuum.io (repo.continuum.io)|104.18.200.79|:443... connected.\nHTTP request sent, awaiting response... 301 Moved Permanently\nLocation: https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh [following]\n--2022-01-26 18:33:04-- https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh\nResolving repo.anaconda.com (repo.anaconda.com)... 104.16.131.3, 104.16.130.3, 2606:4700::6810:8203, ...\nConnecting to repo.anaconda.com (repo.anaconda.com)|104.16.131.3|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 66709754 (64M) [application/x-sh]\nSaving to: ‘Miniconda3-latest-Linux-x86_64.sh’\n\nMiniconda3-latest-L 100%[===================>] 63.62M 75.1MB/s in 0.8s \n\n2022-01-26 18:33:04 (75.1 MB/s) - ‘Miniconda3-latest-Linux-x86_64.sh’ saved [66709754/66709754]\n\nPREFIX=/usr/local\nUnpacking payload ...\nCollecting package metadata (current_repodata.json): - \b\b\\ \b\b| \b\bdone\nSolving environment: - \b\b\\ \b\b| \b\bdone\n\n## Package Plan ##\n\n environment location: /usr/local\n\n added / updated specs:\n - _libgcc_mutex==0.1=main\n - _openmp_mutex==4.5=1_gnu\n - brotlipy==0.7.0=py39h27cfd23_1003\n - ca-certificates==2021.7.5=h06a4308_1\n - certifi==2021.5.30=py39h06a4308_0\n - cffi==1.14.6=py39h400218f_0\n - chardet==4.0.0=py39h06a4308_1003\n - conda-package-handling==1.7.3=py39h27cfd23_1\n - conda==4.10.3=py39h06a4308_0\n - cryptography==3.4.7=py39hd23ed53_0\n - idna==2.10=pyhd3eb1b0_0\n - ld_impl_linux-64==2.35.1=h7274673_9\n - libffi==3.3=he6710b0_2\n - libgcc-ng==9.3.0=h5101ec6_17\n - libgomp==9.3.0=h5101ec6_17\n - libstdcxx-ng==9.3.0=hd4cf53a_17\n - ncurses==6.2=he6710b0_1\n - openssl==1.1.1k=h27cfd23_0\n - pip==21.1.3=py39h06a4308_0\n - pycosat==0.6.3=py39h27cfd23_0\n - pycparser==2.20=py_2\n - pyopenssl==20.0.1=pyhd3eb1b0_1\n - pysocks==1.7.1=py39h06a4308_0\n - python==3.9.5=h12debd9_4\n - readline==8.1=h27cfd23_0\n - requests==2.25.1=pyhd3eb1b0_0\n - ruamel_yaml==0.15.100=py39h27cfd23_0\n - setuptools==52.0.0=py39h06a4308_0\n - six==1.16.0=pyhd3eb1b0_0\n - sqlite==3.36.0=hc218d9a_0\n - tk==8.6.10=hbc83047_0\n - tqdm==4.61.2=pyhd3eb1b0_1\n - tzdata==2021a=h52ac0ba_0\n - urllib3==1.26.6=pyhd3eb1b0_1\n - wheel==0.36.2=pyhd3eb1b0_0\n - xz==5.2.5=h7b6447c_0\n - yaml==0.2.5=h7b6447c_0\n - zlib==1.2.11=h7b6447c_3\n\n\nThe following NEW packages will be INSTALLED:\n\n _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main\n _openmp_mutex pkgs/main/linux-64::_openmp_mutex-4.5-1_gnu\n brotlipy pkgs/main/linux-64::brotlipy-0.7.0-py39h27cfd23_1003\n ca-certificates pkgs/main/linux-64::ca-certificates-2021.7.5-h06a4308_1\n certifi pkgs/main/linux-64::certifi-2021.5.30-py39h06a4308_0\n cffi pkgs/main/linux-64::cffi-1.14.6-py39h400218f_0\n chardet pkgs/main/linux-64::chardet-4.0.0-py39h06a4308_1003\n conda pkgs/main/linux-64::conda-4.10.3-py39h06a4308_0\n conda-package-han~ pkgs/main/linux-64::conda-package-handling-1.7.3-py39h27cfd23_1\n cryptography 
pkgs/main/linux-64::cryptography-3.4.7-py39hd23ed53_0\n idna pkgs/main/noarch::idna-2.10-pyhd3eb1b0_0\n ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.35.1-h7274673_9\n libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2\n libgcc-ng pkgs/main/linux-64::libgcc-ng-9.3.0-h5101ec6_17\n libgomp pkgs/main/linux-64::libgomp-9.3.0-h5101ec6_17\n libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.3.0-hd4cf53a_17\n ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_1\n openssl pkgs/main/linux-64::openssl-1.1.1k-h27cfd23_0\n pip pkgs/main/linux-64::pip-21.1.3-py39h06a4308_0\n pycosat pkgs/main/linux-64::pycosat-0.6.3-py39h27cfd23_0\n pycparser pkgs/main/noarch::pycparser-2.20-py_2\n pyopenssl pkgs/main/noarch::pyopenssl-20.0.1-pyhd3eb1b0_1\n pysocks pkgs/main/linux-64::pysocks-1.7.1-py39h06a4308_0\n python pkgs/main/linux-64::python-3.9.5-h12debd9_4\n readline pkgs/main/linux-64::readline-8.1-h27cfd23_0\n requests pkgs/main/noarch::requests-2.25.1-pyhd3eb1b0_0\n ruamel_yaml pkgs/main/linux-64::ruamel_yaml-0.15.100-py39h27cfd23_0\n setuptools pkgs/main/linux-64::setuptools-52.0.0-py39h06a4308_0\n six pkgs/main/noarch::six-1.16.0-pyhd3eb1b0_0\n sqlite pkgs/main/linux-64::sqlite-3.36.0-hc218d9a_0\n tk pkgs/main/linux-64::tk-8.6.10-hbc83047_0\n tqdm pkgs/main/noarch::tqdm-4.61.2-pyhd3eb1b0_1\n tzdata pkgs/main/noarch::tzdata-2021a-h52ac0ba_0\n urllib3 pkgs/main/noarch::urllib3-1.26.6-pyhd3eb1b0_1\n wheel pkgs/main/noarch::wheel-0.36.2-pyhd3eb1b0_0\n xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0\n yaml pkgs/main/linux-64::yaml-0.2.5-h7b6447c_0\n zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3\n\n\nPreparing transaction: - \b\b\\ \b\b| \b\b/ \b\bdone\nExecuting transaction: \\ \b\b| \b\b/ \b\b- \b\b\\ \b\b| \b\b/ \b\b- \b\b\\ \b\b| \b\b/ \b\b- \b\b\\ \b\b| \b\bdone\ninstallation finished.\nWARNING:\n You currently have a PYTHONPATH environment variable set. This may cause\n unexpected behavior when running the Python interpreter in Miniconda3.\n For best results, please verify that your PYTHONPATH only points to\n directories of packages that are compatible with the Python interpreter\n in Miniconda3: /usr/local\nCollecting package metadata (current_repodata.json): ...working... done\nSolving environment: ...working... 
done\n\n## Package Plan ##\n\n environment location: /usr/local\n\n added / updated specs:\n - python=3.7\n - rdkit\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n blas-1.0 | mkl 6 KB\n bottleneck-1.3.2 | py37heb32a55_1 125 KB\n brotlipy-0.7.0 |py37h27cfd23_1003 320 KB\n bzip2-1.0.8 | h7b6447c_0 78 KB\n ca-certificates-2021.10.26 | h06a4308_2 115 KB\n cairo-1.16.0 | hf32fb01_1 1.0 MB\n certifi-2021.10.8 | py37h06a4308_2 151 KB\n cffi-1.15.0 | py37hd667e15_1 222 KB\n chardet-4.0.0 |py37h06a4308_1003 195 KB\n conda-4.11.0 | py37h06a4308_0 14.4 MB\n conda-package-handling-1.7.3| py37h27cfd23_1 881 KB\n cryptography-36.0.0 | py37h9ce1e76_0 1.3 MB\n fontconfig-2.13.1 | h6c09931_0 250 KB\n freetype-2.11.0 | h70c0345_0 618 KB\n giflib-5.2.1 | h7b6447c_0 78 KB\n glib-2.69.1 | h4ff587b_1 1.7 MB\n icu-58.2 | he6710b0_3 10.5 MB\n intel-openmp-2021.4.0 | h06a4308_3561 4.2 MB\n jpeg-9d | h7f8727e_0 232 KB\n lcms2-2.12 | h3be6417_0 312 KB\n libboost-1.73.0 | h3ff78a5_11 13.9 MB\n libpng-1.6.37 | hbc83047_0 278 KB\n libtiff-4.2.0 | h85742a9_0 502 KB\n libuuid-1.0.3 | h7f8727e_2 17 KB\n libwebp-1.2.0 | h89dd481_0 493 KB\n libwebp-base-1.2.0 | h27cfd23_0 437 KB\n libxcb-1.14 | h7b6447c_0 505 KB\n libxml2-2.9.12 | h03d6c58_0 1.2 MB\n lz4-c-1.9.3 | h295c915_1 185 KB\n mkl-2021.4.0 | h06a4308_640 142.6 MB\n mkl-service-2.4.0 | py37h7f8727e_0 56 KB\n mkl_fft-1.3.1 | py37hd3c417c_0 172 KB\n mkl_random-1.2.2 | py37h51133e4_0 287 KB\n numexpr-2.8.1 | py37h6abb31d_0 123 KB\n numpy-1.21.2 | py37h20f2e39_0 23 KB\n numpy-base-1.21.2 | py37h79a1101_0 4.8 MB\n olefile-0.46 | py37_0 50 KB\n openssl-1.1.1m | h7f8727e_0 2.5 MB\n packaging-21.3 | pyhd3eb1b0_0 36 KB\n pandas-1.3.5 | py37h8c16a72_0 9.3 MB\n pcre-8.45 | h295c915_0 207 KB\n pillow-8.4.0 | py37h5aabda8_0 644 KB\n pip-21.2.2 | py37h06a4308_0 1.8 MB\n pixman-0.40.0 | h7f8727e_1 373 KB\n py-boost-1.73.0 | py37ha9443f7_11 204 KB\n pycosat-0.6.3 | py37h27cfd23_0 81 KB\n pyparsing-3.0.4 | pyhd3eb1b0_0 81 KB\n pysocks-1.7.1 | py37_1 27 KB\n python-3.7.11 | h12debd9_0 45.3 MB\n python-dateutil-2.8.2 | pyhd3eb1b0_0 233 KB\n pytz-2021.3 | pyhd3eb1b0_0 171 KB\n rdkit-2020.09.1.0 | py37hd50e099_1 25.8 MB rdkit\n ruamel_yaml-0.15.100 | py37h27cfd23_0 253 KB\n setuptools-58.0.4 | py37h06a4308_0 775 KB\n zstd-1.4.9 | haebb681_0 480 KB\n ------------------------------------------------------------\n Total: 290.5 MB\n\nThe following NEW packages will be INSTALLED:\n\n blas pkgs/main/linux-64::blas-1.0-mkl\n bottleneck pkgs/main/linux-64::bottleneck-1.3.2-py37heb32a55_1\n bzip2 pkgs/main/linux-64::bzip2-1.0.8-h7b6447c_0\n cairo pkgs/main/linux-64::cairo-1.16.0-hf32fb01_1\n fontconfig pkgs/main/linux-64::fontconfig-2.13.1-h6c09931_0\n freetype pkgs/main/linux-64::freetype-2.11.0-h70c0345_0\n giflib pkgs/main/linux-64::giflib-5.2.1-h7b6447c_0\n glib pkgs/main/linux-64::glib-2.69.1-h4ff587b_1\n icu pkgs/main/linux-64::icu-58.2-he6710b0_3\n intel-openmp pkgs/main/linux-64::intel-openmp-2021.4.0-h06a4308_3561\n jpeg pkgs/main/linux-64::jpeg-9d-h7f8727e_0\n lcms2 pkgs/main/linux-64::lcms2-2.12-h3be6417_0\n libboost pkgs/main/linux-64::libboost-1.73.0-h3ff78a5_11\n libpng pkgs/main/linux-64::libpng-1.6.37-hbc83047_0\n libtiff pkgs/main/linux-64::libtiff-4.2.0-h85742a9_0\n libuuid pkgs/main/linux-64::libuuid-1.0.3-h7f8727e_2\n libwebp pkgs/main/linux-64::libwebp-1.2.0-h89dd481_0\n libwebp-base pkgs/main/linux-64::libwebp-base-1.2.0-h27cfd23_0\n libxcb pkgs/main/linux-64::libxcb-1.14-h7b6447c_0\n libxml2 
pkgs/main/linux-64::libxml2-2.9.12-h03d6c58_0\n lz4-c pkgs/main/linux-64::lz4-c-1.9.3-h295c915_1\n mkl pkgs/main/linux-64::mkl-2021.4.0-h06a4308_640\n mkl-service pkgs/main/linux-64::mkl-service-2.4.0-py37h7f8727e_0\n mkl_fft pkgs/main/linux-64::mkl_fft-1.3.1-py37hd3c417c_0\n mkl_random pkgs/main/linux-64::mkl_random-1.2.2-py37h51133e4_0\n numexpr pkgs/main/linux-64::numexpr-2.8.1-py37h6abb31d_0\n numpy pkgs/main/linux-64::numpy-1.21.2-py37h20f2e39_0\n numpy-base pkgs/main/linux-64::numpy-base-1.21.2-py37h79a1101_0\n olefile pkgs/main/linux-64::olefile-0.46-py37_0\n packaging pkgs/main/noarch::packaging-21.3-pyhd3eb1b0_0\n pandas pkgs/main/linux-64::pandas-1.3.5-py37h8c16a72_0\n pcre pkgs/main/linux-64::pcre-8.45-h295c915_0\n pillow pkgs/main/linux-64::pillow-8.4.0-py37h5aabda8_0\n pixman pkgs/main/linux-64::pixman-0.40.0-h7f8727e_1\n py-boost pkgs/main/linux-64::py-boost-1.73.0-py37ha9443f7_11\n pyparsing pkgs/main/noarch::pyparsing-3.0.4-pyhd3eb1b0_0\n python-dateutil pkgs/main/noarch::python-dateutil-2.8.2-pyhd3eb1b0_0\n pytz pkgs/main/noarch::pytz-2021.3-pyhd3eb1b0_0\n rdkit rdkit/linux-64::rdkit-2020.09.1.0-py37hd50e099_1\n zstd pkgs/main/linux-64::zstd-1.4.9-haebb681_0\n\nThe following packages will be UPDATED:\n\n ca-certificates 2021.7.5-h06a4308_1 --> 2021.10.26-h06a4308_2\n certifi 2021.5.30-py39h06a4308_0 --> 2021.10.8-py37h06a4308_2\n cffi 1.14.6-py39h400218f_0 --> 1.15.0-py37hd667e15_1\n conda 4.10.3-py39h06a4308_0 --> 4.11.0-py37h06a4308_0\n cryptography 3.4.7-py39hd23ed53_0 --> 36.0.0-py37h9ce1e76_0\n openssl 1.1.1k-h27cfd23_0 --> 1.1.1m-h7f8727e_0\n pip 21.1.3-py39h06a4308_0 --> 21.2.2-py37h06a4308_0\n pysocks 1.7.1-py39h06a4308_0 --> 1.7.1-py37_1\n setuptools 52.0.0-py39h06a4308_0 --> 58.0.4-py37h06a4308_0\n\nThe following packages will be DOWNGRADED:\n\n brotlipy 0.7.0-py39h27cfd23_1003 --> 0.7.0-py37h27cfd23_1003\n chardet 4.0.0-py39h06a4308_1003 --> 4.0.0-py37h06a4308_1003\n conda-package-han~ 1.7.3-py39h27cfd23_1 --> 1.7.3-py37h27cfd23_1\n pycosat 0.6.3-py39h27cfd23_0 --> 0.6.3-py37h27cfd23_0\n python 3.9.5-h12debd9_4 --> 3.7.11-h12debd9_0\n ruamel_yaml 0.15.100-py39h27cfd23_0 --> 0.15.100-py37h27cfd23_0\n\n\nPreparing transaction: ...working... done\nVerifying transaction: ...working... done\nExecuting transaction: ...working... 
done\n" ], [ "!python -c \"import site; print (site.getsitepackages())\"", "['/usr/local/lib/python3.7/site-packages']\n" ], [ "import sys\nimport pprint\npprint.pprint(sys.path)\nsys.path.append('/usr/local/lib/python3.7/site-packages/')\nfrom rdkit import rdBase\nprint(rdBase.rdkitVersion)", "['',\n '/content',\n '/env/python',\n '/usr/lib/python37.zip',\n '/usr/lib/python3.7',\n '/usr/lib/python3.7/lib-dynload',\n '/usr/local/lib/python3.7/dist-packages',\n '/usr/lib/python3/dist-packages',\n '/usr/local/lib/python3.7/dist-packages/IPython/extensions',\n '/root/.ipython']\n2020.09.1\n" ], [ "import pandas as pd\nimport matplotlib.pyplot as plt\nfrom pandas.plotting import scatter_matrix\n\ndata = pd.read_csv(\"/content/drive/MyDrive/Colab Notebooks/lab/BDE_t.csv\")\ndata.head()", "_____no_output_____" ], [ "from rdkit import Chem\nfrom rdkit.Chem import Draw\nmols = [Chem.MolFromSmiles(smile) for smile in data['SMILES']]", "_____no_output_____" ], [ "from rdkit.Chem import AllChem\nimport numpy as np\nfingerprints = []\nsafe = []\nfor mol_idx, mol in enumerate(mols):\n try:\n fingerprint = [x for x in AllChem.GetMorganFingerprintAsBitVect(mol, 2, 2**11)]\n fingerprints.append(fingerprint)\n safe.append(mol_idx)\n except:\n print(\"Error\", mol_idx)\n continue\nfingerprints = np.array(fingerprints)\nprint(fingerprints.shape)\npd.DataFrame(fingerprints).head()", "(3567, 2048)\n" ], [ "data_y=data\ndata_y=data_y.drop(\"SMILES\", axis=1)\ndata_y=data_y.drop(\"IP\", axis=1)\ndata_y=data_y.drop(\"PDE\", axis=1)\ndata_y=data_y.drop(\"BDE\", axis=1)\ndata_y=data_y.drop(\"ETE\", axis=1)", "_____no_output_____" ], [ "data_y", "_____no_output_____" ], [ "X=fingerprints\nX = np.array(X, dtype = np.float32)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/X_TCI_2_PA.npy\", X)\nprint(X)", "[[0. 1. 0. ... 0. 0. 0.]\n [0. 1. 0. ... 0. 0. 0.]\n [0. 1. 0. ... 0. 0. 0.]\n ...\n [0. 0. 0. ... 0. 0. 0.]\n [0. 0. 0. ... 0. 0. 0.]\n [0. 0. 0. ... 0. 0. 
0.]]\n" ], [ "\nfrom rdkit import Chem\nfrom rdkit.Chem.Draw import IPythonConsole\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn import model_selection\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation\nY=data_y\n\nY = np.array(Y, dtype = np.float32)\n\nAve=Y.mean(axis=0)\nStd=Y.std(ddof=1,axis=0)\n\nY=(Y - Ave) / Std\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/Y_TCI_2_PA.npy\", Y)\nprint(Y)\nprint(Ave)\nprint(Std)", "[[-0.44942018]\n [-0.4200536 ]\n [-0.79657614]\n ...\n [ 1.6851792 ]\n [ 2.1974144 ]\n [-0.84641206]]\n[272.58444]\n[17.763977]\n" ], [ "X_train, X_test, y_train, y_test = model_selection.train_test_split(X, Y, test_size=0.20, random_state=42)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/X_train_TCI_2_PA.npy\", X_train)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/X_test_TCI_2_PA.npy\", X_test)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/y_train_TCI_2_PA.npy\", y_train)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/y_test_TCI_2_PA.npy\", y_test)", "_____no_output_____" ], [ "import numpy as np\nfrom sklearn import model_selection\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import r2_score\nfrom keras.models import Sequential\nfrom keras.layers import Activation, Dense\nfrom keras.wrappers.scikit_learn import KerasRegressor\nfrom tensorflow import keras\nfrom tensorflow.keras.optimizers import Adam\nX_train = np.load(\"/content/drive/MyDrive/Colab Notebooks/lab/X_train_TCI_2_PA.npy\")\ny_train = np.load(\"/content/drive/MyDrive/Colab Notebooks/lab/y_train_TCI_2_PA.npy\")\nX_test = np.load(\"/content/drive/MyDrive/Colab Notebooks/lab/X_test_TCI_2_PA.npy\")\ny_test = np.load(\"/content/drive/MyDrive/Colab Notebooks/lab/y_test_TCI_2_PA.npy\")\n\nfrom keras.layers import Input, Dense, Dropout\nfrom keras.models import Model\n\ninputs = Input(shape=(2**11,))\n\nx = Dense(2**10, activation='tanh', kernel_regularizer=keras.regularizers.l2(0.001))(inputs)\nx = Dropout(.2)(x)\nx = Dense(2**5, activation='tanh', kernel_regularizer=keras.regularizers.l2(0.001))(x)\nx = Dropout(.2)(x)\nx = Dense(2**2, activation='tanh', kernel_regularizer=keras.regularizers.l2(0.001))(x)\nx = Dropout(.2)(x)\npredictions = Dense(1, activation=\"linear\")(x)\n\nearly_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)\n\nmodel = Model(inputs=inputs, outputs=predictions)\n\nmodel.compile(Adam(lr=1e-3), loss=\"mean_squared_error\")\n\nhistory = model.fit(X_train, y_train, batch_size=128, epochs=2000, validation_data=(X_test, y_test),callbacks=[early_stop])", "/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.\n super(Adam, self).__init__(name, **kwargs)\n" ], [ "import matplotlib.pyplot as plt\nprint('2乗誤差の平均',model.evaluate(X_test, y_test))\n\ntrain_acc = history.history['loss']\ntest_acc = history.history['val_loss']\n\nx = np.arange(len(train_acc))\nplt.plot(x, train_acc, label = 'train mse')\nplt.plot(x, test_acc, label = 'test mse')\nplt.ylim(0, 1)\nplt.legend()", "23/23 [==============================] - 0s 5ms/step - loss: 0.3172\n2乗誤差の平均 0.3171910047531128\n" ], [ "y_pred = model.predict(X_test)\ny_train_pred = model.predict(X_train)\n\nYYY=y_test.transpose()\nyyy=y_pred.transpose()\nplt.scatter(YYY[0], yyy[0], c='r', marker='s',label=\"ALL\")\nplt.legend()\nplt.plot([-3,3],[-3,3])", "_____no_output_____" ], [ "model.save('/content/drive/MyDrive/Colab 
Notebooks/lab/model_TCI_2_PA.h5')", "_____no_output_____" ], [ "from keras.layers import Input, Dense, Dropout\nfrom keras.models import Model\nfrom keras.models import load_model\nmodel =load_model('/content/drive/MyDrive/Colab Notebooks/lab/model_TCI_2_PA.h5')", "_____no_output_____" ], [ "import numpy as np\nfrom sklearn import model_selection\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import r2_score\nfrom keras.models import Sequential\nfrom keras.layers import Activation, Dense\nfrom keras.wrappers.scikit_learn import KerasRegressor\nfrom tensorflow import keras\nfrom tensorflow.keras.optimizers import Adam\nX_train = np.load(\"/content/drive/MyDrive/Colab Notebooks/lab/X_train_TCI_2_PA.npy\")\ny_train = np.load(\"/content/drive/MyDrive/Colab Notebooks/lab/y_train_TCI_2_PA.npy\")\nX_test = np.load(\"/content/drive/MyDrive/Colab Notebooks/lab/X_test_TCI_2_PA.npy\")\ny_test = np.load(\"/content/drive/MyDrive/Colab Notebooks/lab/y_test_TCI_2_PA.npy\")", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\ny_pred = model.predict(X_test)\ny_pred_t = model.predict(X_train)\n#Y_test=y_test*Std+Ave\n#Y_pred=y_pred*Std+Ave\nplt.scatter(y_test, y_pred, c='r', marker='s',label=\"ALL\")\nplt.scatter(y_train, y_pred_t, c='b', marker='s',label=\"ALL\")\nplt.legend()\nplt.plot([-4,4],[-4,4])", "_____no_output_____" ], [ "Ave=Ave.reshape(1,1)\nStd=Std.reshape(1,1)\n\nYYY=y_test*Std+Ave\nyyy=y_pred*Std+Ave\nYYYY=y_train*Std+Ave\nyyyy=y_pred_t*Std+Ave\n\nYYY=YYY.transpose()\nyyy=yyy.transpose()\nYYYY=YYYY.transpose()\nyyyy=yyyy.transpose()", "_____no_output_____" ], [ "plt.scatter(YYY[0], yyy[0], c='r', marker='s',label=\"test\")\nplt.scatter(YYYY[0], yyyy[0], c='b', marker='s',label=\"train\")\nplt.xlabel('calculated Value (kcal/mol)')\nplt.ylabel('predicted Value (kcal/mol)')\nplt.rcParams['xtick.direction'] = 'out'\nplt.rcParams['ytick.direction'] = 'out'\nplt.plot([60,110],[60,110])\nplt.legend()\n\nplt.savefig('/content/drive/MyDrive/Colab Notebooks/lab/PA.png', transparent=True)", "_____no_output_____" ], [ "import tensorflow as tf\nfrom tensorflow import keras\nfrom keras.models import load_model\nbase_model =load_model('/content/drive/MyDrive/Colab Notebooks/lab/model_TCI_2_PA.h5')", "_____no_output_____" ], [ "base_model.summary()", "Model: \"model\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_1 (InputLayer) [(None, 2048)] 0 \n \n dense (Dense) (None, 1024) 2098176 \n \n dropout (Dropout) (None, 1024) 0 \n \n dense_1 (Dense) (None, 32) 32800 \n \n dropout_1 (Dropout) (None, 32) 0 \n \n dense_2 (Dense) (None, 4) 132 \n \n dropout_2 (Dropout) (None, 4) 0 \n \n dense_3 (Dense) (None, 1) 5 \n \n=================================================================\nTotal params: 2,131,113\nTrainable params: 2,131,113\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "import pandas as pd\nimport matplotlib.pyplot as plt\nfrom pandas.plotting import scatter_matrix\n\ndt = pd.read_csv(\"/content/drive/MyDrive/Colab Notebooks/lab/H_ORAC.csv\")\ndt.head()\n\nfrom rdkit import Chem\nfrom rdkit.Chem import Draw\nmols = [Chem.MolFromSmiles(smile) for smile in dt['SMILES']]", "_____no_output_____" ], [ "from rdkit.Chem import AllChem\nimport numpy as np\nfingerprints = []\nsafe = []\nfor mol_idx, mol in enumerate(mols):\n try:\n fingerprint = [x for x in 
AllChem.GetMorganFingerprintAsBitVect(mol, 2, 2**11)]\n fingerprints.append(fingerprint)\n safe.append(mol_idx)\n except:\n print(\"Error\", mol_idx)\n continue\nfingerprints = np.array(fingerprints)\nprint(fingerprints.shape)\npd.DataFrame(fingerprints).head()", "(70, 2048)\n" ], [ "X2=fingerprints\nX2 = np.array(X2, dtype = np.float32)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/X_2d_ORAC.npy\", X2)\nprint(X2)", "[[0. 0. 0. ... 0. 0. 0.]\n [0. 0. 0. ... 0. 0. 0.]\n [0. 0. 0. ... 0. 0. 0.]\n ...\n [0. 1. 0. ... 0. 0. 0.]\n [0. 1. 0. ... 0. 0. 0.]\n [0. 0. 0. ... 0. 0. 0.]]\n" ], [ "\nfrom rdkit import Chem\nfrom rdkit.Chem.Draw import IPythonConsole\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn import model_selection\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation, Input, Dropout\nY2=[y for y in dt['H_ORAC']]\n\nY2 = np.array(Y2, dtype = np.float32)\n\nAve=Y2.mean()\nStd=Y2.std(ddof=1)\n\nY2=(Y2 - Ave )/ Std\n\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/Y_2d_ORAC.npy\", Y2)\nprint(Y2)", "[-0.5434038 -0.1577336 -0.45749253 -0.2201499 -0.53286856 0.21718943\n 0.517175 0.3992576 0.96539456 -0.06432275 4.858367 0.71591735\n 0.86406463 -0.3478773 0.12073161 0.54230046 0.29801032 0.2953972\n 0.8034653 -0.92614245 -0.235798 0.6265299 -0.80607396 -0.5244356\n -0.61346394 -0.62144846 -0.61873865 -0.31283364 -1.0512096 -1.0507561\n -1.0488456 0.26605016 0.05566086 -0.92539907 -0.03932004 -0.99443716\n -0.28263178 -0.77516484 -0.559626 -0.6305682 -0.45141768 -0.3848276\n -0.42543682 -0.5228378 2.7227314 -0.78508466 -1.0499865 -0.02662846\n -0.7496714 1.2863741 1.7340133 -0.47654933 1.5442306 -0.13065857\n -0.8465681 0.31667623 -0.7957791 -0.66367626 0.6136809 1.1964297\n 1.6072878 1.611745 0.02019791 -0.14124039 -0.8637594 -0.9887596\n 0.99622774 -0.4986037 -0.91868335 -0.13418403]\n" ], [ "from keras.models import Model\nfrom tensorflow import keras\nimport numpy as np\nfrom sklearn import model_selection\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import r2_score\nfrom keras.layers import Activation, Dense, Dropout\nfrom keras.wrappers.scikit_learn import KerasRegressor\nfrom tensorflow.keras.optimizers import Adam\n\nxx = base_model.output\nxx = Dense(2**5, activation='tanh', kernel_regularizer=keras.regularizers.l2(0.001))(xx)\nxx = Dropout(.2)(xx)\nxx = Dense(2**2, activation='tanh', kernel_regularizer=keras.regularizers.l2(0.001))(xx)\nxx = Dropout(.2)(xx)\n\npredictions = Dense(1, activation='linear')(xx)\n\nmodel2 = Model(inputs=base_model.input, outputs=predictions)\nmodel2.layers.pop(-9)\n\nmodel2.compile(Adam(lr=1e-3), loss=\"mean_squared_error\")\n\nmodel2.summary()", "Model: \"model_1\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_1 (InputLayer) [(None, 2048)] 0 \n \n dense (Dense) (None, 1024) 2098176 \n \n dropout (Dropout) (None, 1024) 0 \n \n dense_1 (Dense) (None, 32) 32800 \n \n dropout_1 (Dropout) (None, 32) 0 \n \n dense_2 (Dense) (None, 4) 132 \n \n dropout_2 (Dropout) (None, 4) 0 \n \n dense_3 (Dense) (None, 1) 5 \n \n dense_4 (Dense) (None, 32) 64 \n \n dropout_3 (Dropout) (None, 32) 0 \n \n dense_5 (Dense) (None, 4) 132 \n \n dropout_4 (Dropout) (None, 4) 0 \n \n dense_6 (Dense) (None, 1) 5 \n \n=================================================================\nTotal params: 2,131,314\nTrainable params: 2,131,314\nNon-trainable 
params: 0\n_________________________________________________________________\n" ], [ "X2_train, X2_test, y2_train, y2_test = model_selection.train_test_split(X2, Y2, test_size=0.20, random_state=20211208)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/X2_train.npy\", X2_train)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/X2_test.npy\", X2_test)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/y2_train.npy\", y2_train)\nnp.save(\"/content/drive/MyDrive/Colab Notebooks/lab/y2_test.npy\", y2_test)\nearly_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)\n\nhistory2 = model2.fit(X2_train, y2_train, batch_size=128, epochs=2000, validation_data=(X2_test, y2_test),callbacks=[early_stop])\nimport matplotlib.pyplot as plt\nprint('2乗誤差の平均',model2.evaluate(X2_test, y2_test))\n\ntrain_acc = history2.history['loss']\ntest_acc = history2.history['val_loss']\n\nx = np.arange(len(train_acc))\nplt.plot(x, train_acc, label = 'train mse')\nplt.plot(x, test_acc, label = 'val mse')\nplt.ylim(0, 3)\nplt.legend()", "Epoch 1/2000\n1/1 [==============================] - 1s 915ms/step - loss: 1.2907 - val_loss: 0.5613\nEpoch 2/2000\n1/1 [==============================] - 0s 31ms/step - loss: 1.2758 - val_loss: 0.5125\nEpoch 3/2000\n1/1 [==============================] - 0s 37ms/step - loss: 1.1736 - val_loss: 0.4677\nEpoch 4/2000\n1/1 [==============================] - 0s 50ms/step - loss: 1.0804 - val_loss: 0.4314\nEpoch 5/2000\n1/1 [==============================] - 0s 29ms/step - loss: 1.0882 - val_loss: 0.3815\nEpoch 6/2000\n1/1 [==============================] - 0s 33ms/step - loss: 0.9999 - val_loss: 0.3417\nEpoch 7/2000\n1/1 [==============================] - 0s 30ms/step - loss: 1.0112 - val_loss: 0.2949\nEpoch 8/2000\n1/1 [==============================] - 0s 35ms/step - loss: 0.9857 - val_loss: 0.2556\nEpoch 9/2000\n1/1 [==============================] - 0s 45ms/step - loss: 0.9533 - val_loss: 0.2202\nEpoch 10/2000\n1/1 [==============================] - 0s 28ms/step - loss: 0.7460 - val_loss: 0.2054\nEpoch 11/2000\n1/1 [==============================] - 0s 29ms/step - loss: 0.7974 - val_loss: 0.2107\nEpoch 12/2000\n1/1 [==============================] - 0s 30ms/step - loss: 0.8484 - val_loss: 0.2287\nEpoch 13/2000\n1/1 [==============================] - 0s 33ms/step - loss: 0.7074 - val_loss: 0.2658\nEpoch 14/2000\n1/1 [==============================] - 0s 29ms/step - loss: 0.8329 - val_loss: 0.2851\nEpoch 15/2000\n1/1 [==============================] - 0s 27ms/step - loss: 0.7247 - val_loss: 0.3062\nEpoch 16/2000\n1/1 [==============================] - 0s 26ms/step - loss: 0.5538 - val_loss: 0.3363\nEpoch 17/2000\n1/1 [==============================] - 0s 30ms/step - loss: 0.9251 - val_loss: 0.3391\nEpoch 18/2000\n1/1 [==============================] - 0s 26ms/step - loss: 0.6963 - val_loss: 0.3446\nEpoch 19/2000\n1/1 [==============================] - 0s 27ms/step - loss: 0.6832 - val_loss: 0.3702\nEpoch 20/2000\n1/1 [==============================] - 0s 27ms/step - loss: 0.5793 - val_loss: 0.4033\n1/1 [==============================] - 0s 19ms/step - loss: 0.4033\n2乗誤差の平均 0.403340607881546\n" ], [ "\nY2_pred_t = model2.predict(X2_test)\nY2_train_pred_t = model2.predict(X2_train)\nplt.scatter(y2_test, Y2_pred_t, c='r', marker='s',label=\"test\")\nplt.scatter(y2_train, Y2_train_pred_t, c='b', marker='s',label=\"train\")\nplt.legend()\nplt.plot([-1,5],[-1,5])", "_____no_output_____" ], [ 
"y3_test=y2_test*Std+Ave\nY3_train_pred_t=Y2_train_pred_t*Std+Ave\ny3_train=y2_train*Std+Ave\nY3_pred_t=Y2_pred_t*Std+Ave\nplt.scatter(y3_test, Y3_pred_t, c='r', marker='s',label=\"test\")\nplt.scatter(y3_train, Y3_train_pred_t, c='b', marker='s',label=\"train\")\nplt.xlabel('Ground Truth (mol(TE)/mol)')\nplt.ylabel('predicted Value (mol(TE)/mol)')\nplt.rcParams['xtick.direction'] = 'out'\nplt.rcParams['ytick.direction'] = 'out'\nplt.plot([0,12],[0,12])\nplt.legend()", "_____no_output_____" ], [ "from sklearn.metrics import mean_squared_error\nfrom math import sqrt\nprint(mean_squared_error(y3_test, Y3_pred_t))\nprint(mean_squared_error(y3_train, Y3_train_pred_t))", "1.3653971\n1.5537233\n" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d625f9a42fde4ef31c0582da6cbc89c533a8ea
4,165
ipynb
Jupyter Notebook
makaan_webscraping.ipynb
akhila-sakinala/akhila-sakinala.github.io
eba4c47d8ed0b560eb6341ec87204d3f68ed5238
[ "Apache-2.0" ]
null
null
null
makaan_webscraping.ipynb
akhila-sakinala/akhila-sakinala.github.io
eba4c47d8ed0b560eb6341ec87204d3f68ed5238
[ "Apache-2.0" ]
null
null
null
makaan_webscraping.ipynb
akhila-sakinala/akhila-sakinala.github.io
eba4c47d8ed0b560eb6341ec87204d3f68ed5238
[ "Apache-2.0" ]
null
null
null
22.63587
258
0.40096
[ [ [ "<a href=\"https://colab.research.google.com/github/akhila-sakinala/akhila-sakinala.github.io/blob/master/makaan_webscraping.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "## Import packages", "_____no_output_____" ] ], [ [ "# import packages\nimport requests\nfrom bs4 import BeautifulSoup\n", "_____no_output_____" ], [ "url = \"https://www.makaan.com/hyderabad-residential-property/rent-property-in-hyderabad-city\"\n\nresponse = requests.get(url)\nsoup = BeautifulSoup(response.text,\"html.parser\")", "_____no_output_____" ], [ "s_tag = soup.find_all('span',attrs={'class' : 'seller-type'})", "_____no_output_____" ], [ "for each_owner in s_tag:\n print(each_owner.text)", "OWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\nOWNER\n" ], [ "s_val = soup.find_all('a',attrs={'class' : 'typelink'})", "_____no_output_____" ], [ "for price in s_val:\n print(price.span.text)", "2 \n1 \n2 \n3 \n2 \n1 \n3 \n1 \n2 \n4 \n2 \n2 \n2 \n6 \n3 \n3 \n1 \n2 \n1 \n2 \n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7d636790f87fdd85d98bd0d8e3c8b5fafd9d127
91,565
ipynb
Jupyter Notebook
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
35c8ccb8ba62e819cbff05f08a2d792d8ece34db
[ "Apache-2.0" ]
2
2017-02-23T16:07:19.000Z
2017-02-25T16:38:38.000Z
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
35c8ccb8ba62e819cbff05f08a2d792d8ece34db
[ "Apache-2.0" ]
null
null
null
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
35c8ccb8ba62e819cbff05f08a2d792d8ece34db
[ "Apache-2.0" ]
null
null
null
36.936265
2,685
0.569945
[ [ [ "# Enter State Farm", "_____no_output_____" ] ], [ [ "from theano.sandbox import cuda\ncuda.use('gpu0')", "_____no_output_____" ], [ "%matplotlib inline\nfrom __future__ import print_function, division\npath = \"data/state/\"\n#path = \"data/state/sample/\"\nimport utils; reload(utils)\nfrom utils import *\nfrom IPython.display import FileLink", "Using Theano backend.\n" ], [ "batch_size=64", "_____no_output_____" ] ], [ [ "## Setup batches", "_____no_output_____" ] ], [ [ "batches = get_batches(path+'train', batch_size=batch_size)\nval_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)", "Found 22424 images belonging to 10 classes.\n" ], [ "(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)", "Found 18946 images belonging to 10 classes.\nFound 3478 images belonging to 10 classes.\nFound 79726 images belonging to 1 classes.\n" ] ], [ [ "Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)", "_____no_output_____" ] ], [ [ "trn = get_data(path+'train')\nval = get_data(path+'valid')", "Found 18946 images belonging to 10 classes.\n" ], [ "save_array(path+'results/val.dat', val)\nsave_array(path+'results/trn.dat', trn)", "_____no_output_____" ], [ "val = load_array(path+'results/val.dat')\ntrn = load_array(path+'results/trn.dat')", "_____no_output_____" ] ], [ [ "## Re-run sample experiments on full dataset", "_____no_output_____" ], [ "We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models.", "_____no_output_____" ], [ "### Single conv layer", "_____no_output_____" ] ], [ [ "def conv1(batches):\n model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Convolution2D(32,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Convolution2D(64,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D((3,3)),\n Flatten(),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dense(10, activation='softmax')\n ])\n\n model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])\n model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n model.optimizer.lr = 0.001\n model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)\n return model", "_____no_output_____" ], [ "model = conv1(batches)", "Epoch 1/2\n18946/18946 [==============================] - 114s - loss: 0.2273 - acc: 0.9405 - val_loss: 2.4946 - val_acc: 0.2826\nEpoch 2/2\n18946/18946 [==============================] - 114s - loss: 0.0120 - acc: 0.9990 - val_loss: 1.5872 - val_acc: 0.5253\nEpoch 1/4\n18946/18946 [==============================] - 114s - loss: 0.0093 - acc: 0.9992 - val_loss: 1.4836 - val_acc: 0.5825\nEpoch 2/4\n18946/18946 [==============================] - 114s - loss: 0.0032 - acc: 1.0000 - val_loss: 1.3142 - val_acc: 0.6162\nEpoch 3/4\n18946/18946 [==============================] - 114s - loss: 0.0035 - acc: 0.9996 - val_loss: 1.5061 - val_acc: 0.5771\nEpoch 4/4\n18946/18946 [==============================] - 114s - loss: 0.0036 - acc: 0.9997 - 
val_loss: 1.4528 - val_acc: 0.5808\n" ] ], [ [ "Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.", "_____no_output_____" ], [ "### Data augmentation", "_____no_output_____" ] ], [ [ "gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)", "Found 18946 images belonging to 10 classes.\n" ], [ "model = conv1(batches)", "Epoch 1/2\n18946/18946 [==============================] - 114s - loss: 1.2804 - acc: 0.5891 - val_loss: 2.0614 - val_acc: 0.3407\nEpoch 2/2\n18946/18946 [==============================] - 114s - loss: 0.6716 - acc: 0.7916 - val_loss: 1.3377 - val_acc: 0.6208\nEpoch 1/4\n18946/18946 [==============================] - 115s - loss: 0.4787 - acc: 0.8594 - val_loss: 1.2230 - val_acc: 0.6228\nEpoch 2/4\n18946/18946 [==============================] - 114s - loss: 0.3724 - acc: 0.8931 - val_loss: 1.3030 - val_acc: 0.6282\nEpoch 3/4\n18946/18946 [==============================] - 114s - loss: 0.3086 - acc: 0.9162 - val_loss: 1.1986 - val_acc: 0.7119\nEpoch 4/4\n18946/18946 [==============================] - 114s - loss: 0.2612 - acc: 0.9283 - val_loss: 1.4794 - val_acc: 0.5799\n" ], [ "model.optimizer.lr = 0.0001\nmodel.fit_generator(batches, batches.nb_sample, nb_epoch=15, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Epoch 1/15\n18946/18946 [==============================] - 114s - loss: 0.2391 - acc: 0.9361 - val_loss: 1.2511 - val_acc: 0.6886\nEpoch 2/15\n18946/18946 [==============================] - 114s - loss: 0.2075 - acc: 0.9430 - val_loss: 1.1327 - val_acc: 0.7294\nEpoch 3/15\n18946/18946 [==============================] - 114s - loss: 0.1800 - acc: 0.9529 - val_loss: 1.1099 - val_acc: 0.7294\nEpoch 4/15\n18946/18946 [==============================] - 114s - loss: 0.1675 - acc: 0.9557 - val_loss: 1.0660 - val_acc: 0.7363\nEpoch 5/15\n18946/18946 [==============================] - 114s - loss: 0.1432 - acc: 0.9625 - val_loss: 1.1585 - val_acc: 0.7073\nEpoch 6/15\n18946/18946 [==============================] - 114s - loss: 0.1358 - acc: 0.9627 - val_loss: 1.1389 - val_acc: 0.6947\nEpoch 7/15\n18946/18946 [==============================] - 114s - loss: 0.1283 - acc: 0.9665 - val_loss: 1.1329 - val_acc: 0.7369\nEpoch 8/15\n18946/18946 [==============================] - 114s - loss: 0.1180 - acc: 0.9686 - val_loss: 1.1817 - val_acc: 0.7194\nEpoch 9/15\n18946/18946 [==============================] - 114s - loss: 0.1137 - acc: 0.9704 - val_loss: 1.0923 - val_acc: 0.7142\nEpoch 10/15\n18946/18946 [==============================] - 114s - loss: 0.1076 - acc: 0.9720 - val_loss: 1.0983 - val_acc: 0.7358\nEpoch 11/15\n18946/18946 [==============================] - 114s - loss: 0.1032 - acc: 0.9736 - val_loss: 1.0206 - val_acc: 0.7458\nEpoch 12/15\n18946/18946 [==============================] - 114s - loss: 0.0956 - acc: 0.9740 - val_loss: 0.9039 - val_acc: 0.7809\nEpoch 13/15\n18946/18946 [==============================] - 114s - loss: 0.0962 - acc: 0.9740 - val_loss: 1.3386 - val_acc: 0.6587\nEpoch 14/15\n18946/18946 [==============================] - 114s - loss: 0.0892 - acc: 0.9777 - val_loss: 1.1150 - val_acc: 0.7470\nEpoch 15/15\n18946/18946 [==============================] - 114s - loss: 0.0886 - acc: 0.9773 - val_loss: 1.9190 - val_acc: 0.5802\n" ] ], [ 
[ "I'm shocked by *how* good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation.", "_____no_output_____" ], [ "### Four conv/pooling pairs + dropout", "_____no_output_____" ], [ "Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.", "_____no_output_____" ] ], [ [ "gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nbatches = get_batches(path+'train', gen_t, batch_size=batch_size)", "Found 18946 images belonging to 10 classes.\n" ], [ "model = Sequential([\n BatchNormalization(axis=1, input_shape=(3,224,224)),\n Convolution2D(32,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Convolution2D(64,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Convolution2D(128,3,3, activation='relu'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Flatten(),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(200, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(10, activation='softmax')\n ])", "_____no_output_____" ], [ "model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ], [ "model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Epoch 1/2\n18946/18946 [==============================] - 159s - loss: 2.6578 - acc: 0.2492 - val_loss: 1.8681 - val_acc: 0.3844\nEpoch 2/2\n18946/18946 [==============================] - 158s - loss: 1.8098 - acc: 0.4334 - val_loss: 1.3152 - val_acc: 0.5670\n" ], [ "model.optimizer.lr=0.001", "_____no_output_____" ], [ "model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Epoch 1/10\n18946/18946 [==============================] - 159s - loss: 1.4232 - acc: 0.5405 - val_loss: 1.0877 - val_acc: 0.6452\nEpoch 2/10\n18946/18946 [==============================] - 159s - loss: 1.1155 - acc: 0.6346 - val_loss: 1.2730 - val_acc: 0.6878\nEpoch 3/10\n18946/18946 [==============================] - 159s - loss: 0.9043 - acc: 0.7025 - val_loss: 1.1393 - val_acc: 0.6354\nEpoch 4/10\n18946/18946 [==============================] - 159s - loss: 0.7444 - acc: 0.7529 - val_loss: 1.1037 - val_acc: 0.7087\nEpoch 5/10\n18946/18946 [==============================] - 159s - loss: 0.6299 - acc: 0.7955 - val_loss: 0.9123 - val_acc: 0.7455\nEpoch 6/10\n18946/18946 [==============================] - 159s - loss: 0.5220 - acc: 0.8275 - val_loss: 1.0418 - val_acc: 0.7484\nEpoch 7/10\n18946/18946 [==============================] - 159s - loss: 0.4686 - acc: 0.8495 - val_loss: 1.2907 - val_acc: 0.6599\nEpoch 8/10\n18946/18946 [==============================] - 159s - loss: 0.4190 - acc: 0.8653 - val_loss: 1.1321 - val_acc: 0.6906\nEpoch 9/10\n18946/18946 [==============================] - 159s - loss: 0.3735 - acc: 0.8802 - val_loss: 1.1235 - val_acc: 0.7458\nEpoch 10/10\n18946/18946 [==============================] - 159s - loss: 0.3226 - acc: 0.8969 - val_loss: 1.2040 - val_acc: 0.7343\n" ], [ "model.optimizer.lr=0.00001", "_____no_output_____" ], [ 
"model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches, \n nb_val_samples=val_batches.nb_sample)", "Epoch 1/10\n18946/18946 [==============================] - 159s - loss: 0.3183 - acc: 0.8976 - val_loss: 1.0359 - val_acc: 0.7688\nEpoch 2/10\n18946/18946 [==============================] - 158s - loss: 0.2788 - acc: 0.9109 - val_loss: 1.5806 - val_acc: 0.6705\nEpoch 3/10\n18946/18946 [==============================] - 158s - loss: 0.2810 - acc: 0.9124 - val_loss: 0.9836 - val_acc: 0.7887\nEpoch 4/10\n18946/18946 [==============================] - 158s - loss: 0.2403 - acc: 0.9244 - val_loss: 1.1832 - val_acc: 0.7493\nEpoch 5/10\n18946/18946 [==============================] - 159s - loss: 0.2195 - acc: 0.9303 - val_loss: 1.1524 - val_acc: 0.7510\nEpoch 6/10\n18946/18946 [==============================] - 159s - loss: 0.2085 - acc: 0.9359 - val_loss: 1.2245 - val_acc: 0.7415\nEpoch 7/10\n18946/18946 [==============================] - 158s - loss: 0.1961 - acc: 0.9399 - val_loss: 1.1232 - val_acc: 0.7654\nEpoch 8/10\n18946/18946 [==============================] - 158s - loss: 0.1851 - acc: 0.9416 - val_loss: 1.0956 - val_acc: 0.6892\nEpoch 9/10\n18946/18946 [==============================] - 158s - loss: 0.1798 - acc: 0.9451 - val_loss: 1.0586 - val_acc: 0.7740\nEpoch 10/10\n18946/18946 [==============================] - 159s - loss: 0.1669 - acc: 0.9471 - val_loss: 1.4633 - val_acc: 0.6656\n" ] ], [ [ "This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however...", "_____no_output_____" ], [ "### Imagenet conv features", "_____no_output_____" ], [ "Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. 
(However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)", "_____no_output_____" ] ], [ [ "vgg = Vgg16()\nmodel=vgg.model\nlast_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]\nconv_layers = model.layers[:last_conv_idx+1]", "_____no_output_____" ], [ "conv_model = Sequential(conv_layers)", "_____no_output_____" ], [ "(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)", "Found 18946 images belonging to 10 classes.\nFound 3478 images belonging to 10 classes.\nFound 79726 images belonging to 1 classes.\n" ], [ "conv_feat = conv_model.predict_generator(batches, batches.nb_sample)\nconv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)\nconv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)", "_____no_output_____" ], [ "save_array(path+'results/conv_val_feat.dat', conv_val_feat)\nsave_array(path+'results/conv_test_feat.dat', conv_test_feat)\nsave_array(path+'results/conv_feat.dat', conv_feat)", "_____no_output_____" ], [ "conv_feat = load_array(path+'results/conv_feat.dat')\nconv_val_feat = load_array(path+'results/conv_val_feat.dat')\nconv_val_feat.shape", "_____no_output_____" ] ], [ [ "### Batchnorm dense layers on pretrained conv layers", "_____no_output_____" ], [ "Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers.", "_____no_output_____" ] ], [ [ "def get_bn_layers(p):\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dropout(p/2),\n Dense(128, activation='relu'),\n BatchNormalization(),\n Dropout(p/2),\n Dense(128, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(10, activation='softmax')\n ]", "_____no_output_____" ], [ "p=0.8", "_____no_output_____" ], [ "bn_model = Sequential(get_bn_layers(p))\nbn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ], [ "bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1, \n validation_data=(conv_val_feat, val_labels))", "Train on 18946 samples, validate on 3478 samples\nEpoch 1/1\n18946/18946 [==============================] - 3s - loss: 1.5894 - acc: 0.5625 - val_loss: 0.7031 - val_acc: 0.7522\n" ], [ "bn_model.optimizer.lr=0.01", "_____no_output_____" ], [ "bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=2, \n validation_data=(conv_val_feat, val_labels))", "Train on 18946 samples, validate on 3478 samples\nEpoch 1/2\n18946/18946 [==============================] - 3s - loss: 0.2870 - acc: 0.9109 - val_loss: 0.7728 - val_acc: 0.7683\nEpoch 2/2\n18946/18946 [==============================] - 3s - loss: 0.1422 - acc: 0.9594 - val_loss: 0.7576 - val_acc: 0.7936\n" ], [ "bn_model.save_weights(path+'models/conv8.h5')", "_____no_output_____" ] ], [ [ "Looking good! 
Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.", "_____no_output_____" ], [ "### Pre-computed data augmentation + dropout", "_____no_output_____" ], [ "We'll use our usual data augmentation parameters:", "_____no_output_____" ] ], [ [ "gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, \n shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)\nda_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)", "Found 18946 images belonging to 10 classes.\n" ] ], [ [ "We use those to create a dataset of convolutional features 5x bigger than the training set.", "_____no_output_____" ] ], [ [ "da_conv_feat = conv_model.predict_generator(da_batches, da_batches.nb_sample*5)", "_____no_output_____" ], [ "save_array(path+'results/da_conv_feat2.dat', da_conv_feat)", "_____no_output_____" ], [ "da_conv_feat = load_array(path+'results/da_conv_feat2.dat')", "_____no_output_____" ] ], [ [ "Let's include the real training data as well in its non-augmented form.", "_____no_output_____" ] ], [ [ "da_conv_feat = np.concatenate([da_conv_feat, conv_feat])", "_____no_output_____" ] ], [ [ "Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.", "_____no_output_____" ] ], [ [ "da_trn_labels = np.concatenate([trn_labels]*6)", "_____no_output_____" ] ], [ [ "Based on some experiments the previous model works well, with bigger dense layers.", "_____no_output_____" ] ], [ [ "def get_bn_da_layers(p):\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dropout(p),\n Dense(256, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(256, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(10, activation='softmax')\n ]", "_____no_output_____" ], [ "p=0.8", "_____no_output_____" ], [ "bn_model = Sequential(get_bn_da_layers(p))\nbn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ] ], [ [ "Now we can train the model as usual, with pre-computed augmented data.", "_____no_output_____" ] ], [ [ "bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1, \n validation_data=(conv_val_feat, val_labels))", "Train on 113676 samples, validate on 3478 samples\nEpoch 1/1\n113676/113676 [==============================] - 16s - loss: 1.5848 - acc: 0.5068 - val_loss: 0.6340 - val_acc: 0.8131\n" ], [ "bn_model.optimizer.lr=0.01", "_____no_output_____" ], [ "bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4, \n validation_data=(conv_val_feat, val_labels))", "Train on 113676 samples, validate on 3478 samples\nEpoch 1/4\n113676/113676 [==============================] - 16s - loss: 0.6652 - acc: 0.7785 - val_loss: 0.6343 - val_acc: 0.8082\nEpoch 2/4\n113676/113676 [==============================] - 16s - loss: 0.5247 - acc: 0.8318 - val_loss: 0.6951 - val_acc: 0.8085\nEpoch 3/4\n113676/113676 [==============================] - 16s - loss: 0.4553 - acc: 0.8544 - val_loss: 0.6067 - val_acc: 0.8189\nEpoch 4/4\n113676/113676 [==============================] - 16s - loss: 0.4127 - acc: 0.8686 - val_loss: 0.7701 - val_acc: 0.7915\n" ], [ "bn_model.optimizer.lr=0.0001", "_____no_output_____" ], [ "bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4, \n validation_data=(conv_val_feat, val_labels))", "Train on 113676 samples, validate on 3478 samples\nEpoch 
1/4\n113676/113676 [==============================] - 16s - loss: 0.3837 - acc: 0.8775 - val_loss: 0.6904 - val_acc: 0.8197\nEpoch 2/4\n113676/113676 [==============================] - 16s - loss: 0.3576 - acc: 0.8872 - val_loss: 0.6593 - val_acc: 0.8209\nEpoch 3/4\n113676/113676 [==============================] - 16s - loss: 0.3384 - acc: 0.8939 - val_loss: 0.7057 - val_acc: 0.8085\nEpoch 4/4\n113676/113676 [==============================] - 16s - loss: 0.3254 - acc: 0.8977 - val_loss: 0.6867 - val_acc: 0.8128\n" ] ], [ [ "Looks good - let's save those weights.", "_____no_output_____" ] ], [ [ "bn_model.save_weights(path+'models/da_conv8_1.h5')", "_____no_output_____" ] ], [ [ "### Pseudo labeling", "_____no_output_____" ], [ "We're going to try using a combination of [pseudo labeling](http://deeplearning.net/wp-content/uploads/2013/03/pseudo_label_final.pdf) and [knowledge distillation](https://arxiv.org/abs/1503.02531) to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set.", "_____no_output_____" ], [ "To do this, we simply calculate the predictions of our model...", "_____no_output_____" ] ], [ [ "val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)", "_____no_output_____" ] ], [ [ "...concatenate them with our training labels...", "_____no_output_____" ] ], [ [ "comb_pseudo = np.concatenate([da_trn_labels, val_pseudo])", "_____no_output_____" ], [ "comb_feat = np.concatenate([da_conv_feat, conv_val_feat])", "_____no_output_____" ] ], [ [ "...and fine-tune our model using that data.", "_____no_output_____" ] ], [ [ "bn_model.load_weights(path+'models/da_conv8_1.h5')", "_____no_output_____" ], [ "bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=1, \n validation_data=(conv_val_feat, val_labels))", "Train on 117154 samples, validate on 3478 samples\nEpoch 1/1\n117154/117154 [==============================] - 17s - loss: 0.3412 - acc: 0.8948 - val_loss: 0.7653 - val_acc: 0.8191\n" ], [ "bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4, \n validation_data=(conv_val_feat, val_labels))", "Train on 117154 samples, validate on 3478 samples\nEpoch 1/4\n117154/117154 [==============================] - 17s - loss: 0.3237 - acc: 0.9008 - val_loss: 0.7536 - val_acc: 0.8229\nEpoch 2/4\n117154/117154 [==============================] - 17s - loss: 0.3076 - acc: 0.9050 - val_loss: 0.7572 - val_acc: 0.8235\nEpoch 3/4\n117154/117154 [==============================] - 17s - loss: 0.2984 - acc: 0.9085 - val_loss: 0.7852 - val_acc: 0.8269\nEpoch 4/4\n117154/117154 [==============================] - 17s - loss: 0.2902 - acc: 0.9117 - val_loss: 0.7630 - val_acc: 0.8263\n" ], [ "bn_model.optimizer.lr=0.00001", "_____no_output_____" ], [ "bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4, \n validation_data=(conv_val_feat, val_labels))", "Train on 117154 samples, validate on 3478 samples\nEpoch 1/4\n117154/117154 [==============================] - 17s - loss: 0.2837 - acc: 0.9134 - val_loss: 0.7901 - val_acc: 0.8200\nEpoch 2/4\n117154/117154 [==============================] - 17s - loss: 0.2760 - acc: 0.9155 - val_loss: 0.7648 - val_acc: 0.8275\nEpoch 3/4\n117154/117154 [==============================] - 17s - loss: 0.2723 - acc: 0.9183 - val_loss: 0.7382 - val_acc: 0.8358\nEpoch 4/4\n117154/117154 [==============================] - 
17s - loss: 0.2657 - acc: 0.9191 - val_loss: 0.7227 - val_acc: 0.8329\n" ] ], [ [ "That's a distinct improvement - even although the validation set isn't very big. This looks encouraging for when we try this on the test set.", "_____no_output_____" ] ], [ [ "bn_model.save_weights(path+'models/bn-ps8.h5')", "_____no_output_____" ] ], [ [ "### Submit", "_____no_output_____" ], [ "We'll find a good clipping amount using the validation set, prior to submitting.", "_____no_output_____" ] ], [ [ "def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)", "_____no_output_____" ], [ "keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()", "_____no_output_____" ], [ "conv_test_feat = load_array(path+'results/conv_test_feat.dat')", "_____no_output_____" ], [ "preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)", "_____no_output_____" ], [ "subm = do_clip(preds,0.93)", "_____no_output_____" ], [ "subm_name = path+'results/subm.gz'", "_____no_output_____" ], [ "classes = sorted(batches.class_indices, key=batches.class_indices.get)", "_____no_output_____" ], [ "submission = pd.DataFrame(subm, columns=classes)\nsubmission.insert(0, 'img', [a[4:] for a in test_filenames])\nsubmission.head()", "_____no_output_____" ], [ "submission.to_csv(subm_name, index=False, compression='gzip')", "_____no_output_____" ], [ "FileLink(subm_name)", "_____no_output_____" ] ], [ [ "This gets 0.534 on the leaderboard.", "_____no_output_____" ], [ "## The \"things that didn't really work\" section", "_____no_output_____" ], [ "You can safely ignore everything from here on, because they didn't really help.", "_____no_output_____" ], [ "### Finetune some conv layers too", "_____no_output_____" ] ], [ [ "for l in get_bn_layers(p): conv_model.add(l)", "_____no_output_____" ], [ "for l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]):\n l2.set_weights(l1.get_weights())", "_____no_output_____" ], [ "for l in conv_model.layers: l.trainable =False", "_____no_output_____" ], [ "for l in conv_model.layers[last_conv_idx+1:]: l.trainable =True", "_____no_output_____" ], [ "comb = np.concatenate([trn, val])", "_____no_output_____" ], [ "gen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04, \n shear_range=0.03, channel_shift_range=10, width_shift_range=0.08)", "_____no_output_____" ], [ "batches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size)", "_____no_output_____" ], [ "val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)", "Found 3478 images belonging to 10 classes.\n" ], [ "conv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ], [ "conv_model.fit_generator(batches, batches.N, nb_epoch=1, validation_data=val_batches, \n nb_val_samples=val_batches.N)", "Epoch 1/1\n22400/22424 [============================>.] 
- ETA: 0s - loss: 0.4348 - acc: 0.9200" ], [ "conv_model.optimizer.lr = 0.0001", "_____no_output_____" ], [ "conv_model.fit_generator(batches, batches.N, nb_epoch=3, validation_data=val_batches, \n nb_val_samples=val_batches.N)", "_____no_output_____" ], [ "for l in conv_model.layers[16:]: l.trainable =True", "_____no_output_____" ], [ "conv_model.optimizer.lr = 0.00001", "_____no_output_____" ], [ "conv_model.fit_generator(batches, batches.N, nb_epoch=8, validation_data=val_batches, \n nb_val_samples=val_batches.N)", "_____no_output_____" ], [ "conv_model.save_weights(path+'models/conv8_ps.h5')", "_____no_output_____" ], [ "conv_model.load_weights(path+'models/conv8_da.h5')", "_____no_output_____" ], [ "val_pseudo = conv_model.predict(val, batch_size=batch_size*2)", "_____no_output_____" ], [ "save_array(path+'models/pseudo8_da.dat', val_pseudo)", "_____no_output_____" ] ], [ [ "### Ensembling", "_____no_output_____" ] ], [ [ "drivers_ds = pd.read_csv(path+'driver_imgs_list.csv')\ndrivers_ds.head()", "_____no_output_____" ], [ "img2driver = drivers_ds.set_index('img')['subject'].to_dict()", "_____no_output_____" ], [ "driver2imgs = {k: g[\"img\"].tolist() \n for k,g in drivers_ds[['subject', 'img']].groupby(\"subject\")}", "_____no_output_____" ], [ "def get_idx(driver_list):\n return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list]", "_____no_output_____" ], [ "drivers = driver2imgs.keys()", "_____no_output_____" ], [ "rnd_drivers = np.random.permutation(drivers)", "_____no_output_____" ], [ "ds1 = rnd_drivers[:len(rnd_drivers)//2]\nds2 = rnd_drivers[len(rnd_drivers)//2:]", "_____no_output_____" ], [ "models=[fit_conv([d]) for d in drivers]\nmodels=[m for m in models if m is not None]", "_____no_output_____" ], [ "all_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models])\navg_preds = all_preds.mean(axis=0)\navg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1)", "_____no_output_____" ], [ "keras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()", "_____no_output_____" ], [ "keras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d6435a403420d4385c80469a2d060502a3fa7c
5,764
ipynb
Jupyter Notebook
mod_05_list_tuple_dict/Untitled0.ipynb
merazlab/python
43073779db177b50518f2708508f0375894eb254
[ "MIT" ]
null
null
null
mod_05_list_tuple_dict/Untitled0.ipynb
merazlab/python
43073779db177b50518f2708508f0375894eb254
[ "MIT" ]
1
2020-06-10T00:58:51.000Z
2020-06-10T01:13:21.000Z
mod_05_list_tuple_dict/Untitled0.ipynb
merazlab/python
43073779db177b50518f2708508f0375894eb254
[ "MIT" ]
null
null
null
5,764
5,764
0.654233
[ [ [ "#Built-in function", "_____no_output_____" ] ], [ [ "print(abs(4.5))\nprint(abs(-4.5))", "4.5\n4.5\n" ] ], [ [ "all - Return True if all elements of the iterable are true (or if the iterable is empty)", "_____no_output_____" ] ], [ [ "a = [2, 3, 4, 5]\nb = [0]\nc = []\nd = [2, 3, 4, 0]\n\nprint(all(a))\nprint(all(b))\nprint(all(c))\nprint(all(d))\n", "True\nFalse\nTrue\nFalse\n" ] ], [ [ "any - Return True if any element of the iterable is true. If the iterable is empty, return False. Equivalent to:", "_____no_output_____" ] ], [ [ "a = [2, 3, 4, 5]\nb = [0]\nc = []\nd = [2, 3, 4, 0]\n\nprint(any(a))\nprint(any(b))\nprint(any(c))\nprint(any(d))", "True\nFalse\nFalse\nTrue\n" ] ], [ [ "ascii", "_____no_output_____" ] ], [ [ "print(ascii(1111))", "1111\n" ], [ "dir()", "_____no_output_____" ], [ "dir(struct)", "_____no_output_____" ], [ "meta = {}\nmeta[item] = []", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7d6524dd9952ceb975c814d9179264417ab7c82
7,107
ipynb
Jupyter Notebook
Notebooks/Classifier.ipynb
colombod/MachineTeaching
ef2d7bdc17a06764dbe315612bc5c939f2e780f6
[ "MIT" ]
1
2021-01-13T17:09:11.000Z
2021-01-13T17:09:11.000Z
Notebooks/Classifier.ipynb
colombod/MachineTeaching
ef2d7bdc17a06764dbe315612bc5c939f2e780f6
[ "MIT" ]
null
null
null
Notebooks/Classifier.ipynb
colombod/MachineTeaching
ef2d7bdc17a06764dbe315612bc5c939f2e780f6
[ "MIT" ]
null
null
null
26.322222
189
0.559167
[ [ [ "# Using notebooks we will explore the process of trainign and consuming a model\n\nFirst let's load some packages to maniupulate images\n", "_____no_output_____" ] ], [ [ "#r \"nuget:SixLabors.ImageSharp,1.0.2\"", "_____no_output_____" ] ], [ [ "## Get images\n\nWe can download images from the web, let's create some helper function", "_____no_output_____" ] ], [ [ "using SixLabors.ImageSharp;\nusing SixLabors.ImageSharp.PixelFormats;\nusing System.Net.Http;\n\nImage GetImage(string url)\n{\n var client = new HttpClient();\n var image = client.GetByteArrayAsync(url).Result;\n return Image.Load(image);\n}", "_____no_output_____" ], [ "var image = GetImage(\"https://user-images.githubusercontent.com/2546640/56708992-deee8780-66ec-11e9-9991-eb85abb1d10a.png\");\nimage", "_____no_output_____" ] ], [ [ "## it would be better to see the image, let's use the foramtter api", "_____no_output_____" ] ], [ [ "using System.IO;\nusing SixLabors.ImageSharp.Formats.Png;\nusing Microsoft.DotNet.Interactive.Formatting;\n\nFormatter.Register<Image>((image, writer) =>\n{\n var id = Guid.NewGuid().ToString(\"N\");\n using var stream = new MemoryStream();\n image.Save(stream, new PngEncoder());\n stream.Flush();\n var data = stream.ToArray();\n var imageSource = $\"data:image/png;base64, {Convert.ToBase64String(data)}\";\n PocketView imgTag = PocketViewTags.img[id: id, src: imageSource, height: image.Height, width: image.Width]();\n writer.Write(imgTag);\n}, HtmlFormatter.MimeType);\n", "_____no_output_____" ], [ "image", "_____no_output_____" ] ], [ [ "Good but something smaller would be better", "_____no_output_____" ] ], [ [ "using SixLabors.ImageSharp.Processing;\n\nImage Reduce(Image source, int maxSize = 300){\n var max = Math.Max(source.Width, source.Height);\n var ratio = ((double)(maxSize)) / max;\n return source.Clone(c => c.Resize((int)(source.Width * ratio), (int)(source.Height * ratio)));\n}\n", "_____no_output_____" ], [ "Reduce(image)", "_____no_output_____" ] ], [ [ "Better, now I am interested in bayblade, let's display some", "_____no_output_____" ] ], [ [ "var urls = new string[]{\n \"https://cdn.shopify.com/s/files/1/0016/0674/6186/products/B154_1_1024x1024.jpg?v=1573909023\",\n \"https://i.ytimg.com/vi/yUH2QeluaIU/maxresdefault.jpg\",\n \"https://www.biggerbids.com/members/images/29371/public/8065336_-DSC5628-32467-26524-.jpg\",\n \"https://i.ytimg.com/vi/BT4SwVmnqqQ/maxresdefault.jpg\",\n \"https://cdn.shopify.com/s/files/1/0016/0674/6186/products/B160covercopy2_1200x1200.jpg?v=1585425105\",\n \"https://animeukiyo.com/wp-content/uploads/2020/05/king-helios-zone-1B-1140x570.jpg\",\n \"https://http2.mlstatic.com/beyblade-burn-phoenix-ice-blue-90wf-takara-tomy-frete-pac-D_NQ_NP_19415-MLB20171031427_092014-F.jpg\"\n};\n\nvar beyBlades = urls.Select(url => new { Image = Reduce(GetImage(url))});\n", "_____no_output_____" ], [ "beyBlades", "_____no_output_____" ] ], [ [ "## Enter lobe\nWe will now use lobe and it's .NET Bindings to developa model to classify those images. 
Let's start lobe and have a look first, then we will proceed with loading the pacakges we need.", "_____no_output_____" ] ], [ [ "#r \"nuget:lobe\"\n#r \"nuget:lobe.ImageSharp\"\n\nusing lobe;\nusing lobe.ImageSharp;", "_____no_output_____" ] ], [ [ "Lobe can be accessed via web api let's use that for fast loops", "_____no_output_____" ] ], [ [ "#r \"nuget:lobe.Http\"\n\nusing lobe.Http;\n", "_____no_output_____" ], [ "var beyblades_start = new Uri(\"http://localhost:38100/predict/3af915df-14b7-4834-afbd-6615deca4e26\");\nvar beyblades = new Uri(\"http://localhost:38100/predict/f56e1050-391e-4cd6-9bb9-ff74dc4d84f5\");\nvar beyblades_2 = new Uri(\"http://localhost:38100/predict/f56e1050-391e-4cd6-9bb9-ff74dc4d84f5\");\nvar beyblades_3 = new Uri(\"http://localhost:38100/predict/a3271b3a-f63b-4c00-9304-beda43375284\");\nvar beyblade_remote = new Uri(\"http://lobe-diego.ngrok.io/predict/2a6a3005-a8cc-4bc1-a71a-a0fe85f258bb\");\n\nvar httpClassifier = new LobeClient(beyblades_3);\n\nhttpClassifier.Classify(beyBlades.First().Image.CloneAs<Rgb24>())\n\n", "_____no_output_____" ], [ "var imageSources = urls.Select(url => Reduce(GetImage(url),800).CloneAs<Rgb24>()).ToList();", "_____no_output_____" ], [ "var classifications = imageSources.Select((img) => {\n var cls = httpClassifier.Classify(img);\n return new {\n Image = Reduce(img),\n Label = cls.Prediction.Label,\n Confidence = cls.Prediction.Confidence\n };\n});\n\nclassifications", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7d65d7a09ecfa88a623c4dc4fab282e718e0728
6,180
ipynb
Jupyter Notebook
Titanic Solution using Excel and Python/21_Titanic_Solution_TPOT_All_feat_30Jan2019.ipynb
KunaalNaik/YT_Kaggle_Titanic_Solution
70690ec7fabfe7f979fba922ce5a39debd696e3c
[ "MIT" ]
3
2020-12-30T04:03:21.000Z
2021-11-28T12:14:24.000Z
Titanic Solution using Excel and Python/21_Titanic_Solution_TPOT_All_feat_30Jan2019.ipynb
KunaalNaik/YT_Kaggle_Titanic_Solution
70690ec7fabfe7f979fba922ce5a39debd696e3c
[ "MIT" ]
null
null
null
Titanic Solution using Excel and Python/21_Titanic_Solution_TPOT_All_feat_30Jan2019.ipynb
KunaalNaik/YT_Kaggle_Titanic_Solution
70690ec7fabfe7f979fba922ce5a39debd696e3c
[ "MIT" ]
5
2020-05-23T06:03:36.000Z
2021-06-23T03:56:55.000Z
24.140625
171
0.548706
[ [ [ "- Install TPOT within Anancoda - https://anaconda.org/conda-forge/tpot\n- More Details - https://epistasislab.github.io/tpot/using/\n- GitHub - https://github.com/EpistasisLab/tpot/", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom tpot import TPOTClassifier\nfrom sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "train = pd.read_csv(\"excel_full_train.csv\")\ntest = pd.read_csv(\"excel_test.csv\")", "_____no_output_____" ], [ "X = train.drop(['PassengerId','Survived'], axis = 1)\ny = train['Survived']", "_____no_output_____" ], [ "#### Use Test Train Split to divide into train and test\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=21)", "_____no_output_____" ] ], [ [ "### Run AutoML with TPOT", "_____no_output_____" ], [ "**Verbose** \n- How much information TPOT communicates while it is running.\n- 0 = none, 1 = minimal, 2 = high, 3 = all.\n- A setting of 2 or higher will add a progress bar during the optimization procedure.", "_____no_output_____" ] ], [ [ "#Set max time for 10 Minutes\ntpot = TPOTClassifier(verbosity=2, max_time_mins=1)", "_____no_output_____" ], [ "tpot.fit(X_train, y_train)", "_____no_output_____" ], [ "print(f'Train : {tpot.score(X_test, y_test):.3f}')\nprint(f'Test : {tpot.score(X_train, y_train):.3f}')", "Train : 0.840\nTest : 0.867\n" ] ], [ [ "### Export Best Pipeline", "_____no_output_____" ] ], [ [ "tpot.export('Auto_ML_TPOT_titanic_pipeline2.py')", "_____no_output_____" ] ], [ [ "### Prediction of Test", "_____no_output_____" ] ], [ [ "sub_test = test.drop(['PassengerId'], axis = 1)", "_____no_output_____" ], [ "sub_test_pred = tpot.predict(sub_test).astype(int)", "_____no_output_____" ], [ "AllSub = pd.DataFrame({ 'PassengerId': test['PassengerId'],\n 'Survived' : sub_test_pred\n \n})\n\nAllSub.to_csv(\"Auto_ML_TPOT_Titanic_Solution.csv\", index = False)", "_____no_output_____" ], [ "#Kaggle LB Score - 0.78468", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7d66457a050140250305d7fc4f9bdd95d8b7a61
15,993
ipynb
Jupyter Notebook
samples.ipynb
nancywen25/goodreads
f7598245bbcebd524c11ff65360310006e135c9f
[ "Apache-2.0" ]
null
null
null
samples.ipynb
nancywen25/goodreads
f7598245bbcebd524c11ff65360310006e135c9f
[ "Apache-2.0" ]
null
null
null
samples.ipynb
nancywen25/goodreads
f7598245bbcebd524c11ff65360310006e135c9f
[ "Apache-2.0" ]
null
null
null
40.694656
2,671
0.584568
[ [ [ "# Display Sample Records", "_____no_output_____" ] ], [ [ "import gzip\nimport json\nimport re\nimport os\nimport sys\nimport numpy as np\nimport pandas as pd", "_____no_output_____" ] ], [ [ "**Specify your directory here:**", "_____no_output_____" ] ], [ [ "DIR = './data'", "_____no_output_____" ] ], [ [ "**This function shows how to load datasets**", "_____no_output_____" ] ], [ [ "def load_data(file_name, head = 500):\n '''\n Given a *.json.gz file, returns a list of dictionaries,\n optionally can select the first n records\n '''\n count = 0\n data = []\n with gzip.open(file_name) as fin:\n for l in fin:\n d = json.loads(l)\n count += 1\n data.append(d)\n \n # break if reaches the 500th line\n if (head is not None) and (count >= head):\n break\n return data", "_____no_output_____" ] ], [ [ "**Load and display sample records of books/authors/works/series**", "_____no_output_____" ] ], [ [ "poetry = load_data(os.path.join(DIR, 'goodreads_books_poetry.json.gz'))\n\n# books = load_data(os.path.join(DIR, 'goodreads_books.json.gz'))\n# authors = load_data(os.path.join(DIR, 'goodreads_book_authors.json.gz'))\n# works = load_data(os.path.join(DIR, 'goodreads_book_works.json.gz'))\n# series = load_data(os.path.join(DIR, 'goodreads_book_series.json.gz'))", "_____no_output_____" ], [ "len(poetry)", "_____no_output_____" ], [ "poetry[0]", "_____no_output_____" ], [ "# print(' == sample record (books) ==')\n# display(np.random.choice(books))\n# print(' == sample record (authors) ==')\n# display(np.random.choice(authors))\n# print(' == sample record (works) ==')\n# display(np.random.choice(works))\n# print(' == sample record (series) ==')\n# display(np.random.choice(series))", "_____no_output_____" ] ], [ [ "**Load and display sample records of user-book interactions (shelves)**", "_____no_output_____" ] ], [ [ "interactions = load_data(os.path.join(DIR, 'goodreads_interactions_poetry.json.gz'))\nnp.random.choice(interactions)", "_____no_output_____" ] ], [ [ "**Load and display sample records of book reviews**", "_____no_output_____" ] ], [ [ "reviews = load_data(os.path.join(DIR, 'goodreads_reviews_poetry.json.gz'))\nnp.random.choice(reviews)", "_____no_output_____" ] ], [ [ "**Load and display sample records of book reviews (with spoiler tags)**", "_____no_output_____" ] ], [ [ "spoilers = load_data(os.path.join(DIR, 'goodreads_reviews_spoiler.json.gz'))\nnp.random.choice([s for s in spoilers if s['has_spoiler']])", "_____no_output_____" ], [ "# spoilers = load_data(os.path.join(DIR, 'goodreads_reviews_spoiler_raw.json.gz'))\n# np.random.choice([s for s in spoilers if 'view spoiler' in s['review_text']])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7d66864edbf92ec363c8f59b7b598f2c44278cd
59,293
ipynb
Jupyter Notebook
notebooks/CIE_plots.ipynb
PyPlr/cvd_pupillometry
d2e0980a6f902668e46c14f79988a2743b0c2fd4
[ "MIT" ]
7
2021-03-20T11:40:11.000Z
2022-02-12T15:49:12.000Z
notebooks/CIE_plots.ipynb
PyPlr/cvd_pupillometry
d2e0980a6f902668e46c14f79988a2743b0c2fd4
[ "MIT" ]
1
2021-06-04T06:00:42.000Z
2021-06-04T06:00:42.000Z
notebooks/CIE_plots.ipynb
PyPlr/cvd_pupillometry
d2e0980a6f902668e46c14f79988a2743b0c2fd4
[ "MIT" ]
2
2021-06-04T03:09:53.000Z
2021-08-30T17:14:09.000Z
524.716814
55,552
0.946739
[ [ [ "import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_context('poster')\n\nfrom pyplr.CIE import *\n", "_____no_output_____" ], [ "p = sns.color_palette(\"tab10\")\nsns.palplot(p)", "_____no_output_____" ], [ "fig, axs = plt.subplots(1,3, figsize=(12,4))\n\nget_CIE_1924_photopic_vl(asdf=True).plot(ax=axs[0], legend=False)\naxs[0].set_title('CIE 1924 V($\\lambda$)')\n\nget_CIE_CMF(asdf=True).plot(\n ax=axs[1], \n color={'X':p[0],'Y':p[2],'Z':p[3]}, \n legend=False)\naxs[1].set_title('CIE CMFs')\n\nget_CIES026(asdf=True).plot(\n ax=axs[2], \n color={'S':p[0],'M':p[2],'L':p[3],'Rods':p[7],'Mel':p[9]}, \n legend=False)\naxs[2].set_title('CIE S 026')\n\nfor ax in axs:\n #ax.set_xticks([])\n #ax.set_yticks([])\n pass\nplt.tight_layout()\nfig.savefig('../img/CIE.tiff', dpi=300)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
e7d67a8ec36251dcf725391324dc0ccc8262aa78
24,098
ipynb
Jupyter Notebook
dust/.ipynb_checkpoints/extinction-checkpoint.ipynb
CambridgeUniversityPress/IntroductionInterstellarMedium
fbfe64c7d50d15da93ebf2fbc7d86d83cbf8941a
[ "CC0-1.0" ]
3
2021-04-26T15:37:13.000Z
2021-05-13T04:42:15.000Z
dust/extinction.ipynb
interstellarmedium/interstellarmedium.github.io
0440a5bd80052ab87575e70fc39acd4bf8e225b3
[ "CC0-1.0" ]
null
null
null
dust/extinction.ipynb
interstellarmedium/interstellarmedium.github.io
0440a5bd80052ab87575e70fc39acd4bf8e225b3
[ "CC0-1.0" ]
null
null
null
147.840491
19,662
0.873102
[ [ [ "## Introduction to the Interstellar Medium\n### Jonathan Williams", "_____no_output_____" ], [ "### Figure 4.2: Extinction curve", "_____no_output_____" ], [ "#### uses extcurve_s16.py and cubicspline.py from https://faun.rc.fas.harvard.edu/eschlafly/apored/extcurve.html", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "import extcurve_s16", "_____no_output_____" ], [ "fig = plt.figure(figsize=(6,4))\nax1 = fig.add_subplot(1,1,1)\n#ax1.set_xlabel('$\\lambda$ (nm)', fontsize=16)\nax1.set_xlabel('$\\lambda\\ (\\mu m)$', fontsize=16)\nax1.set_ylabel('$A(\\lambda)/A_K$', fontsize=16)\n#ax1.set_xlim(350,2500)\nax1.set_xlim(0.350,2.500)\n#ax1.set_ylim(0,1.3)\nax1.set_ylim(0,15)\n\nlam = np.linspace(500,2500, 100)\nlam_ext = np.linspace(350,500, 10)\noir = np.nonzero((lam > 500) & (lam < 3000))\n\nec = extcurve_s16.extcurve(0.0)\n#f = ec(5420)/ec(5510)\nf = ec(5420)/ec(21900)\nx = np.log10(lam)\ny = f*ec(10*lam)\nw = 500/lam[oir]\nw = lam[oir] * 0 + 1\na,b = np.polyfit(x[oir],np.log10(y[oir]),1,w=w)\nprint(\"R_V = 3.3 power law index = {0:4.2f}\".format(a))\n#ax1.plot(10**x,10**(a*x+b),'r-')\n#ax1.plot(lam, y, 'k-', lw=2)\n#ax1.plot(lam_ext, f*ec(10*lam_ext), 'k:', lw=2)\nax1.plot(lam/1000, y, 'k-', lw=2)\nax1.plot(lam_ext/1000, f*ec(10*lam_ext), 'k:', lw=2)\n\nec = extcurve_s16.extcurve(0.04)\n#f = ec(5420)/ec(5510)\nf = ec(5420)/ec(21900)\ny = f*ec(10*lam)\na,b = np.polyfit(x[oir],np.log10(y[oir]),1,w=w)\nprint(\"R_V = 3.6 power law index = {0:4.2f}\".format(a))\n#ax1.plot(10**x,10**(a*x+b),'r-')\n#ax1.plot(lam, f*ec(10*lam), 'k-', lw=0.5)\n#ax1.plot(lam_ext, f*ec(10*lam_ext), 'k:', lw=0.5)\nax1.plot(lam/1000, f*ec(10*lam), 'k-', lw=0.5)\nax1.plot(lam_ext/1000, f*ec(10*lam_ext), 'k:', lw=0.5)\n\nec = extcurve_s16.extcurve(-0.04)\n#f = ec(5420)/ec(5510)\nf = ec(5420)/ec(21900)\ny = f*ec(10*lam)\na,b = np.polyfit(x[oir],np.log10(y[oir]),1,w=w)\nprint(\"R_V = 3.0 power law index = {0:4.2f}\".format(a))\nax1.plot(lam/1000, f*ec(10*lam), 'k-', lw=0.5)\nax1.plot(lam_ext/1000, f*ec(10*lam_ext), 'k:', lw=0.5)\n\nylab = 3.7\nplt.text(.445, ylab, 'B', fontsize=16, ha='center')\nplt.text(.551, ylab, 'V', fontsize=16, ha='center')\nplt.text(.656, ylab, 'R', fontsize=16, ha='center')\nplt.text(.806, ylab, 'I', fontsize=16, ha='center')\nplt.text(1.220,ylab, 'J', fontsize=16, ha='center')\nplt.text(1.630,ylab, 'H', fontsize=16, ha='center')\nplt.text(2.190,ylab, 'K', fontsize=16, ha='center')\n\nplt.savefig('extinction.pdf')", "R_V = 3.3 power law index = -1.76\nR_V = 3.6 power law index = -1.72\nR_V = 3.0 power law index = -1.79\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ] ]
e7d67c5d00cab2d0f242214098f1fc3df36399f8
39,314
ipynb
Jupyter Notebook
ASSIGNMENT_7.ipynb
archana1822/DMDW
d07f967435a51b0fb05e2a203a7fa2b599802277
[ "Apache-2.0" ]
null
null
null
ASSIGNMENT_7.ipynb
archana1822/DMDW
d07f967435a51b0fb05e2a203a7fa2b599802277
[ "Apache-2.0" ]
null
null
null
ASSIGNMENT_7.ipynb
archana1822/DMDW
d07f967435a51b0fb05e2a203a7fa2b599802277
[ "Apache-2.0" ]
2
2020-11-03T06:10:34.000Z
2020-11-03T07:38:20.000Z
33.092593
225
0.264817
[ [ [ "<a href=\"https://colab.research.google.com/github/archana1822/DMDW/blob/main/ASSIGNMENT_7.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ " dataset = [\n ['i1','i2','i5'],\n ['i2', 'i4'],\n ['i2', 'i3'],\n ['i1', 'i2', 'i4'],\n ['i1', 'i3'],\n ['i2', 'i3'],\n ['i1','i3'],\n ['i1', 'i2', 'i3','i5'],\n ['i1', 'i2','i3']]\n \n", "_____no_output_____" ], [ "import pandas as pd\nfrom mlxtend.preprocessing import TransactionEncoder\n\nte = TransactionEncoder()\nte_ary = te.fit(dataset).transform(dataset)\ndf = pd.DataFrame(te_ary, columns=te.columns_)\ndf", "_____no_output_____" ], [ "from mlxtend.frequent_patterns import apriori\n\napriori(df, min_support=0.22)", "_____no_output_____" ], [ "apriori(df, min_support=0.22, use_colnames=True)", "_____no_output_____" ], [ "frequent_itemsets = apriori(df, min_support=0.2, use_colnames=True)\nfrequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x))\nfrequent_itemsets", "_____no_output_____" ], [ "frequent_itemsets[ (frequent_itemsets['length'] == 2) &\n (frequent_itemsets['support'] >= 0.2) ]", "_____no_output_____" ], [ "frequent_itemsets[ (frequent_itemsets['length'] == 3) &\n (frequent_itemsets['support'] >= 0.2) ]", "_____no_output_____" ], [ "dataset = [\n ['i1','i2','i5'],\n ['i2', 'i4'],\n ['i1', 'i2', 'i4'],\n ['i1', 'i3'],\n ['i2', 'i3'],\n ['i1','i3'],\n ['i1', 'i2','i3']]", "_____no_output_____" ], [ "print(dataset)", "[['i1', 'i2', 'i5'], ['i2', 'i4'], ['i1', 'i2', 'i4'], ['i1', 'i3'], ['i2', 'i3'], ['i1', 'i3'], ['i1', 'i2', 'i3']]\n" ], [ "frequent_itemsets = apriori(df, min_support=0.22, use_colnames=True)\nfrequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x))\nfrequent_itemsets\n", "_____no_output_____" ], [ "dataset = [\n ['i1','i2','i4'],\n ['i1', 'i4'],\n ['i2', 'i3', 'i4'],\n ['i2', 'i3'],\n ['i2', 'i4'],\n ['i1','i5'],\n ['i1', 'i4','i5']]", "_____no_output_____" ], [ "frequent_itemsets = apriori(df, min_support=0.20, use_colnames=True)\nfrequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x))\nfrequent_itemsets", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d67ede5a94f543c4146b7cfbfd628c461b9a31
20,765
ipynb
Jupyter Notebook
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
ad0240a867bb1622c79abac850e1dc30cb4b3be1
[ "MIT", "BSD-3-Clause" ]
1
2021-05-12T17:21:16.000Z
2021-05-12T17:21:16.000Z
doc/tutorial/data_structure.ipynb
lilisako/seaborn
9c3dba6eb0193552123eaa14fac1a90974fa24f9
[ "MIT", "BSD-3-Clause" ]
null
null
null
doc/tutorial/data_structure.ipynb
lilisako/seaborn
9c3dba6eb0193552123eaa14fac1a90974fa24f9
[ "MIT", "BSD-3-Clause" ]
null
null
null
40.242248
723
0.637852
[ [ [ ".. _data_tutorial:\n\n.. currentmodule:: seaborn", "_____no_output_____" ], [ "Data structures accepted by seaborn\n===================================\n\n.. raw:: html\n\n <div class=col-md-9>\n\nAs a data visualization library, seaborn requires that you provide it with data. This chapter explains the various ways to accomplish that task. Seaborn supports several different dataset formats, and most functions accept data represented with objects from the `pandas <https://pandas.pydata.org/>`_ or `numpy <https://numpy.org/>`_ libraries as well as built-in Python types like lists and dictionaries. Understanding the usage patterns associated with these different options will help you quickly create useful visualizations for nearly any dataset.\n\n.. note::\n As of current writing (v0.11.0), the full breadth of options covered here are supported by only a subset of the modules in seaborn (namely, the :ref:`relational <relational_api>` and :ref:`distribution <distribution_api>` modules). The other modules offer much of the same flexibility, but have some exceptions (e.g., :func:`catplot` and :func:`lmplot` are limited to long-form data with named variables). The data-ingest code will be standardized over the next few release cycles, but until that point, be mindful of the specific documentation for each function if it is not doing what you expect with your dataset.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport seaborn as sns\nsns.set_theme()", "_____no_output_____" ] ], [ [ "Long-form vs. wide-form data\n----------------------------\n\nMost plotting functions in seaborn are oriented towards *vectors* of data. When plotting ``x`` against ``y``, each variable should be a vector. Seaborn accepts data *sets* that have more than one vector organized in some tabular fashion. There is a fundamental distinction between \"long-form\" and \"wide-form\" data tables, and seaborn will treat each differently.\n\nLong-form data\n~~~~~~~~~~~~~~\n\nA long-form data table has the following characteristics:\n\n- Each variable is a column\n- Each observation is a row", "_____no_output_____" ], [ "As a simple example, consider the \"flights\" dataset, which records the number of airline passengers who flew in each month from 1949 to 1960. This dataset has three variables (*year*, *month*, and number of *passengers*):", "_____no_output_____" ] ], [ [ "flights = sns.load_dataset(\"flights\")\nflights.head()", "_____no_output_____" ] ], [ [ "With long-form data, columns in the table are given roles in the plot by explicitly assigning them to one of the variables. For example, making a monthly plot of the number of passengers per year looks like this:", "_____no_output_____" ] ], [ [ "sns.relplot(data=flights, x=\"year\", y=\"passengers\", hue=\"month\", kind=\"line\")", "_____no_output_____" ] ], [ [ "The advantage of long-form data is that it lends itself well to this explicit specification of the plot. It can accommodate datasets of arbitrary complexity, so long as the variables and observations can be clearly defined. But this format takes some getting used to, because it is often not the model of the data that one has in their head.\n\nWide-form data\n~~~~~~~~~~~~~~\n\nFor simple datasets, it is often more intuitive to think about data the way it might be viewed in a spreadsheet, where the columns and rows contain *levels* of different variables. 
For example, we can convert the flights dataset into a wide-form organization by \"pivoting\" it so that each column has each month's time series over years:", "_____no_output_____" ] ], [ [ "flights_wide = flights.pivot(index=\"year\", columns=\"month\", values=\"passengers\")\nflights_wide.head()", "_____no_output_____" ] ], [ [ "Here we have the same three variables, but they are organized differently. The variables in this dataset are linked to the *dimensions* of the table, rather than to named fields. Each observation is defined by both the value at a cell in the table and the coordinates of that cell with respect to the row and column indices.", "_____no_output_____" ], [ "With long-form data, we can access variables in the dataset by their name. That is not the case with wide-form data. Nevertheless, because there is a clear association between the dimensions of the table and the variable in the dataset, seaborn is able to assign those variables roles in the plot.\n\n.. note::\n Seaborn treats the argument to ``data`` as wide form when neither ``x`` nor ``y`` are assigned.", "_____no_output_____" ] ], [ [ "sns.relplot(data=flights_wide, kind=\"line\")", "_____no_output_____" ] ], [ [ "This plot looks very similar to the one before. Seaborn has assigned the index of the dataframe to ``x``, the values of the dataframe to ``y``, and it has drawn a separate line for each month. There is a notable difference between the two plots, however. When the dataset went through the \"pivot\" operation that converted it from long-form to wide-form, the information about what the values mean was lost. As a result, there is no y axis label. (The lines also have dashes here, because :func:`relplot` has mapped the column variable to both the ``hue`` and ``style`` semantic so that the plot is more accessible. We didn't do that in the long-form case, but we could have by setting ``style=\"month\"``).\n\nThus far, we did much less typing while using wide-form data and made nearly the same plot. This seems easier! But a big advantage of long-form data is that, once you have the data in the correct format, you no longer need to think about its *structure*. You can design your plots by thinking only about the variables contained within it. For example, to draw lines that represent the monthly time series for each year, simply reassign the variables:", "_____no_output_____" ] ], [ [ "sns.relplot(data=flights, x=\"month\", y=\"passengers\", hue=\"year\", kind=\"line\")", "_____no_output_____" ] ], [ [ "To achieve the same remapping with the wide-form dataset, we would need to transpose the table:", "_____no_output_____" ] ], [ [ "sns.relplot(data=flights_wide.transpose(), kind=\"line\")", "_____no_output_____" ] ], [ [ "(This example also illustrates another wrinkle, which is that seaborn currently considers the column variable in a wide-form dataset to be categorical regardless of its datatype, whereas, because the long-form variable is numeric, it is assigned a quantitative color palette and legend. This may change in the future).\n\nThe absence of explicit variable assignments also means that each plot type needs to define a fixed mapping between the dimensions of the wide-form data and the roles in the plot. Because this natural mapping may vary across plot types, the results are less predictable when using wide-form data. 
For example, the :ref:`categorical <categorical_api>` plots assign the *column* dimension of the table to ``x`` and then aggregate across the rows (ignoring the index):", "_____no_output_____" ] ], [ [ "sns.catplot(data=flights_wide, kind=\"box\")", "_____no_output_____" ] ], [ [ "When using pandas to represent wide-form data, you are limited to just a few variables (no more than three). This is because seaborn does not make use of multi-index information, which is how pandas represents additional variables in a tabular format. The `xarray <http://xarray.pydata.org/en/stable/>`_ project offers labeled N-dimensional array objects, which can be considered a generalization of wide-form data to higher dimensions. At present, seaborn does not directly support objects from ``xarray``, but they can be transformed into a long-form :class:`pandas.DataFrame` using the ``to_pandas`` method and then plotted in seaborn like any other long-form data set.\n\nIn summary, we can think of long-form and wide-form datasets as looking something like this:", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nf = plt.figure(figsize=(7, 5))\n\ngs = plt.GridSpec(\n ncols=6, nrows=2, figure=f,\n left=0, right=.35, bottom=0, top=.9,\n height_ratios=(1, 20),\n wspace=.1, hspace=.01\n)\n\ncolors = [c + (.5,) for c in sns.color_palette()]\n\nf.add_subplot(gs[0, :], facecolor=\".8\")\n[\n f.add_subplot(gs[1:, i], facecolor=colors[i])\n for i in range(gs.ncols)\n]\n\ngs = plt.GridSpec(\n ncols=2, nrows=2, figure=f,\n left=.4, right=1, bottom=.2, top=.8,\n height_ratios=(1, 8), width_ratios=(1, 11),\n wspace=.015, hspace=.02\n)\n\nf.add_subplot(gs[0, 1:], facecolor=colors[2])\nf.add_subplot(gs[1:, 0], facecolor=colors[1])\nf.add_subplot(gs[1, 1], facecolor=colors[0])\n\nfor ax in f.axes:\n ax.set(xticks=[], yticks=[])\n\nf.text(.35 / 2, .91, \"Long-form\", ha=\"center\", va=\"bottom\", size=15)\nf.text(.7, .81, \"Wide-form\", ha=\"center\", va=\"bottom\", size=15)", "_____no_output_____" ] ], [ [ "Messy data\n~~~~~~~~~~\n\nMany datasets cannot be clearly interpreted using either long-form or wide-form rules. If datasets that are clearly long-form or wide-form are `\"tidy\" <https://vita.had.co.nz/papers/tidy-data.pdf>`_, we might say that these more ambiguous datasets are \"messy\". In a messy dataset, the variables are neither uniquely defined by the keys nor by the dimensions of the table. This often occurs with *repeated-measures* data, where it is natural to organize a table such that each row corresponds to the *unit* of data collection. Consider this simple dataset from a psychology experiment in which twenty subjects performed a memory task where they studied anagrams while their attention was either divided or focused:", "_____no_output_____" ] ], [ [ "anagrams = sns.load_dataset(\"anagrams\")\nanagrams", "_____no_output_____" ] ], [ [ "The attention variable is *between-subjects*, but there is also a *within-subjects* variable: the number of possible solutions to the anagrams, which varied from 1 to 3. The dependent measure is a score of memory performance. These two variables (number and score) are jointly encoded across several columns. As a result, the whole dataset is neither clearly long-form nor clearly wide-form.\n\nHow might we tell seaborn to plot the average score as a function of attention and number of solutions? We'd first need to coerce the data into one of our two structures. 
Let's transform it to a tidy long-form table, such that each variable is a column and each row is an observation. We can use the method :meth:`pandas.DataFrame.melt` to accomplish this task:", "_____no_output_____" ] ], [ [ "anagrams_long = anagrams.melt(id_vars=[\"subidr\", \"attnr\"], var_name=\"solutions\", value_name=\"score\")\nanagrams_long.head()", "_____no_output_____" ] ], [ [ "Now we can make the plot that we want:", "_____no_output_____" ] ], [ [ "sns.catplot(data=anagrams_long, x=\"solutions\", y=\"score\", hue=\"attnr\", kind=\"point\")", "_____no_output_____" ] ], [ [ "Further reading and take-home points\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor a longer discussion about tabular data structures, you could read the `\"Tidy Data\" <https://vita.had.co.nz/papers/tidy-data.pdf>`_ paper by Hadley Wickham. Note that seaborn uses a slightly different set of concepts than are defined in the paper. While the paper associates tidiness with long-form structure, we have drawn a distinction between \"tidy wide-form\" data, where there is a clear mapping between variables in the dataset and the dimensions of the table, and \"messy data\", where no such mapping exists.\n\nThe long-form structure has clear advantages. It allows you to create figures by explicitly assigning variables in the dataset to roles in the plot, and you can do so with more than three variables. When possible, try to represent your data with a long-form structure when embarking on serious analysis. Most of the examples in the seaborn documentation will use long-form data. But in cases where it is more natural to keep the dataset wide, remember that seaborn can remain useful.", "_____no_output_____" ], [ "Options for visualizing long-form data\n--------------------------------------\n\nWhile long-form data has a precise definition, seaborn is fairly flexible in terms of how it is actually organized across the data structures in memory. The examples in the rest of the documentation will typically use :class:`pandas.DataFrame` objects and reference variables in them by assigning names of their columns to the variables in the plot. But it is also possible to store vectors in a Python dictionary or a class that implements that interface:", "_____no_output_____" ] ], [ [ "flights_dict = flights.to_dict()\nsns.relplot(data=flights_dict, x=\"year\", y=\"passengers\", hue=\"month\", kind=\"line\")", "_____no_output_____" ] ], [ [ "Many pandas operations, such as the split-apply-combine operations of a group-by, will produce a dataframe where information has moved from the columns of the input dataframe to the index of the output. So long as the name is retained, you can still reference the data as normal:", "_____no_output_____" ] ], [ [ "flights_avg = flights.groupby(\"year\").mean()\nsns.relplot(data=flights_avg, x=\"year\", y=\"passengers\", kind=\"line\")", "_____no_output_____" ] ], [ [ "Additionally, it's possible to pass vectors of data directly as arguments to ``x``, ``y``, and other plotting variables. 
If these vectors are pandas objects, the ``name`` attribute will be used to label the plot:", "_____no_output_____" ] ], [ [ "year = flights_avg.index\npassengers = flights_avg[\"passengers\"]\nsns.relplot(x=year, y=passengers, kind=\"line\")", "_____no_output_____" ] ], [ [ "Numpy arrays and other objects that implement the Python sequence interface work too, but if they don't have names, the plot will not be as informative without further tweaking:", "_____no_output_____" ] ], [ [ "sns.relplot(x=year.to_numpy(), y=passengers.to_list(), kind=\"line\")", "_____no_output_____" ] ], [ [ "Options for visualizing wide-form data\n--------------------------------------\n\nThe options for passing wide-form data are even more flexible. As with long-form data, pandas objects are preferable because the name (and, in some cases, index) information can be used. But in essence, any format that can be viewed as a single vector or a collection of vectors can be passed to ``data``, and a valid plot can usually be constructed.\n\nThe example we saw above used a rectangular :class:`pandas.DataFrame`, which can be thought of as a collection of its columns. A dict or list of pandas objects will also work, but we'll lose the axis labels:", "_____no_output_____" ] ], [ [ "flights_wide_list = [col for _, col in flights_wide.items()]\nsns.relplot(data=flights_wide_list, kind=\"line\")", "_____no_output_____" ] ], [ [ "The vectors in a collection do not need to have the same length. If they have an ``index``, it will be used to align them:", "_____no_output_____" ] ], [ [ "two_series = [flights_wide.loc[:1955, \"Jan\"], flights_wide.loc[1952:, \"Aug\"]]\nsns.relplot(data=two_series, kind=\"line\")", "_____no_output_____" ] ], [ [ "Whereas an ordinal index will be used for numpy arrays or simple Python sequences:", "_____no_output_____" ] ], [ [ "two_arrays = [s.to_numpy() for s in two_series]\nsns.relplot(data=two_arrays, kind=\"line\")", "_____no_output_____" ] ], [ [ "But a dictionary of such vectors will at least use the keys:", "_____no_output_____" ] ], [ [ "two_arrays_dict = {s.name: s.to_numpy() for s in two_series}\nsns.relplot(data=two_arrays_dict, kind=\"line\")", "_____no_output_____" ] ], [ [ "Rectangular numpy arrays are treated just like a dataframe without index information, so they are viewed as a collection of column vectors. Note that this is different from how numpy indexing operations work, where a single indexer will access a row. But it is consistent with how pandas would turn the array into a dataframe or how matplotlib would plot it:", "_____no_output_____" ] ], [ [ "flights_array = flights_wide.to_numpy()\nsns.relplot(data=flights_array, kind=\"line\")", "_____no_output_____" ], [ "# TODO once the categorical module is refactored, its single vectors will get special treatment\n# (they'll look like collection of singletons, rather than a single collection). That should be noted.", "_____no_output_____" ] ] ]
[ "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code", "raw", "code" ]
[ [ "raw", "raw" ], [ "code" ], [ "raw", "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw", "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw", "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code" ], [ "raw" ], [ "code", "code" ] ]
e7d6824a8e97217b5d21187d330af0542a23752a
12,024
ipynb
Jupyter Notebook
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
8fa593862c5ae794e1021dd459d0e6c65221094a
[ "Apache-2.0" ]
314
2018-03-16T17:00:53.000Z
2022-03-29T07:02:15.000Z
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
8fa593862c5ae794e1021dd459d0e6c65221094a
[ "Apache-2.0" ]
2
2019-08-13T07:30:02.000Z
2020-02-27T06:40:51.000Z
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
8fa593862c5ae794e1021dd459d0e6c65221094a
[ "Apache-2.0" ]
159
2018-07-10T03:21:13.000Z
2022-03-28T02:10:40.000Z
27.389522
100
0.519212
[ [ [ "# 风格迁移的实现\n\n本文件是集智学园开发的“火炬上的深度学习”课程的配套源代码。我们讲解了Prisma软件实现风格迁移的实现原理\n\n在这节课中,我们将学会玩图像的风格迁移。\n\n\n\n我们需要准备两张图像,一张作为化作风格,一张作为图像内容\n\n同时,在本文件中,我们还展示了如何实用GPU来进行计算 \n\n本文件是集智学园http://campus.swarma.org 出品的“火炬上的深度学习”第IV课的配套源代码", "_____no_output_____" ] ], [ [ "#导入必要的包\nfrom __future__ import print_function\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\n\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\nimport torchvision.transforms as transforms\nimport torchvision.models as models\n\nimport copy\n\n# 是否用GPU计算,如果检测到有安装好的GPU,则利用它来计算\nuse_cuda = torch.cuda.is_available()\ndtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor", "_____no_output_____" ] ], [ [ "## 一、准备输入文件\n\n我们需要准备两张同样大小的文件,一张作为风格,一张作为内容", "_____no_output_____" ] ], [ [ "#风格图像的路径,自行设定\nstyle = 'images/escher.jpg'\n\n#内容图像的路径,自行设定\ncontent = 'images/portrait1.jpg'\n\n#风格损失所占比重\nstyle_weight=1000\n\n#内容损失所占比重\ncontent_weight=1\n\n#希望得到的图片大小(越大越清晰,计算越慢)\nimsize = 128\n\nloader = transforms.Compose([\n transforms.Resize(imsize), # 将加载的图像转变为指定的大小\n transforms.ToTensor()]) # 将图像转化为tensor\n\n#图片加载函数\ndef image_loader(image_name):\n image = Image.open(image_name)\n image = loader(image).clone().detach().requires_grad_(True)\n # 为了适应卷积网络的需要,虚拟一个batch的维度\n image = image.unsqueeze(0)\n return image\n\n#载入图片并检查尺寸\nstyle_img = image_loader(style).type(dtype)\ncontent_img = image_loader(content).type(dtype)\n\nassert style_img.size() == content_img.size(), \\\n \"我们需要输入相同尺寸的风格和内容图像\"\n\n# 绘制图像的函数\ndef imshow(tensor, title=None):\n image = tensor.clone().cpu() # 克隆Tensor防止改变\n image = image.view(3, imsize, imsize) # 删除添加的batch层\n image = unloader(image)\n plt.imshow(image)\n if title is not None:\n plt.title(title)\n plt.pause(0.001) # 停一会以便更新视图\n\n#绘制图片并查看\nunloader = transforms.ToPILImage() # 将其转化为PIL图像(Python Imaging Library) \nplt.ion()\n\nplt.figure()\nimshow(style_img.data, title='Style Image')\n\nplt.figure()\nimshow(content_img.data, title='Content Image')", "_____no_output_____" ] ], [ [ "## 二、风格迁移网络的实现\n\n值得注意的是,风格迁移的实现并没有训练一个神经网络,而是将已训练好的卷积神经网络价格直接迁移过来\n网络的学习过程并不体现为对神经网络权重的训练,而是训练一张输入的图像,让它尽可能地靠近内容图像的内容和风格图像的风格\n\n为了实现风格迁移,我们需要在迁移网络的基础上再构建一个计算图,这样可以加速计算。构建计算图分为两部:\n\n1、加载一个训练好的CNN;\n\n2、在原网络的基础上添加计算风格损失和内容损失的新计算层", "_____no_output_____" ], [ "### 1. 加载已训练好的大型网络VGG", "_____no_output_____" ] ], [ [ "cnn = models.vgg19(pretrained=True).features\n\n# 如果可能就用GPU计算:\nif use_cuda:\n cnn = cnn.cuda()", "_____no_output_____" ] ], [ [ "### 2. 
重新定义新的计算模块", "_____no_output_____" ] ], [ [ "#内容损失模块\nclass ContentLoss(nn.Module):\n\n def __init__(self, target, weight):\n super(ContentLoss, self).__init__()\n # 由于网络的权重都是从target上迁移过来,所以在计算梯度的时候,需要把它和原始计算图分离\n self.target = target.detach() * weight\n self.weight = weight\n self.criterion = nn.MSELoss()\n\n def forward(self, input):\n # 输入input为一个特征图\n # 它的功能就是计算误差,误差就是当前计算的内容与target之间的均方误差\n self.loss = self.criterion(input * self.weight, self.target)\n self.output = input\n return self.output\n\n def backward(self, retain_graph=True):\n # 开始进行反向传播算法\n self.loss.backward(retain_graph=retain_graph)\n return self.loss\n\nclass StyleLoss(nn.Module):\n\n # 计算风格损失的神经模块\n def __init__(self, target, weight):\n super(StyleLoss, self).__init__()\n self.target = target.detach() * weight\n self.weight = weight\n #self.gram = GramMatrix()\n self.criterion = nn.MSELoss()\n\n def forward(self, input):\n # 输入input就是一个特征图\n self.output = input.clone()\n # 计算本图像的gram矩阵,并将它与target对比\n input = input.cuda() if use_cuda else input\n self_G = Gram(input)\n self_G.mul_(self.weight)\n # 计算损失函数,即输入特征图的gram矩阵与目标特征图的gram矩阵之间的差异\n self.loss = self.criterion(self_G, self.target)\n return self.output\n\n def backward(self, retain_graph=True):\n # 反向传播算法\n self.loss.backward(retain_graph=retain_graph)\n return self.loss\n\n#定义Gram矩阵\ndef Gram(input):\n # 输入一个特征图,计算gram矩阵\n a, b, c, d = input.size() # a=batch size(=1)\n # b=特征图的数量\n # (c,d)=特征图的图像尺寸 (N=c*d)\n\n features = input.view(a * b, c * d) # 将特征图图像扁平化为一个向量\n\n G = torch.mm(features, features.t()) # 计算任意两个向量之间的乘积\n\n # 我们通过除以特征图中的像素数量来归一化特征图\n return G.div(a * b * c * d)", "_____no_output_____" ], [ "\n# 希望计算的内容或者风格层 :\ncontent_layers = ['conv_4'] #只考虑第四个卷积层的内容\n\n\nstyle_layers = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']\n# 考虑第1、2、3、4、5层的风格损失\n\n\n# 定义列表存储每一个周期的计算损失\ncontent_losses = []\nstyle_losses = []\n\nmodel = nn.Sequential() # 一个新的序贯网络模型\n\n# 如果有GPU就把这些计算挪到GPU上:\nif use_cuda:\n model = model.cuda()\n\n\n \n \n# 接下来要做的操作是:循环vgg的每一层,同时构造一个全新的神经网络model\n# 这个新网络与vgg基本一样,只是多了一些新的层来计算风格损失和内容损失。\n# 将每层卷积核的数据都加载到新的网络模型model上来\ni = 1\nfor layer in list(cnn):\n if isinstance(layer, nn.Conv2d):\n name = \"conv_\" + str(i)\n #将已加载的模块放到model这个新的神经模块中\n model.add_module(name, layer)\n\n if name in content_layers:\n # 如果当前层模型在定义好的要计算内容的层:\n target = model(content_img).clone() #将内容图像当前层的feature信息拷贝到target中\n content_loss = ContentLoss(target, content_weight) #定义content_loss的目标函数\n content_loss = content_loss if use_cuda else content_loss\n model.add_module(\"content_loss_\" + str(i), content_loss) #在新网络上加content_loss层\n content_losses.append(content_loss)\n\n if name in style_layers:\n # 如果当前层在指定的风格层中,进行风格层损失的计算\n target_feature = model(style_img).clone()\n target_feature = target_feature.cuda() if use_cuda else target_feature\n target_feature_gram = Gram(target_feature)\n style_loss = StyleLoss(target_feature_gram, style_weight)\n style_loss = style_loss.cuda() if use_cuda else style_loss\n model.add_module(\"style_loss_\" + str(i), style_loss)\n style_losses.append(style_loss)\n\n if isinstance(layer, nn.ReLU):\n #如果不是卷积层,则做同样处理\n name = \"relu_\" + str(i)\n model.add_module(name, layer)\n\n i += 1\n\n if isinstance(layer, nn.MaxPool2d):\n name = \"pool_\" + str(i)\n model.add_module(name, layer) # ***\n\n", "_____no_output_____" ] ], [ [ "## 二、风格迁移的训练", "_____no_output_____" ], [ "### 1. 
首先,我们需要现准备一张原始的图像,可以是一张噪音图或者就是内容图", "_____no_output_____" ] ], [ [ "\n# 如果想从调整一张噪声图像开始,请用下面一行的代码\ninput_img = torch.randn(content_img.data.size())\n\nif use_cuda:\n input_img = input_img.cuda()\n content_img = content_img.cuda()\n style_img = style_img.cuda()\n# 将选中的待调整图打印出来:\nplt.figure()\nimshow(input_img.data, title='Input Image')\n", "_____no_output_____" ] ], [ [ "### 2. 优化输入的图像(训练过程)", "_____no_output_____" ] ], [ [ "# 首先,需要先讲输入图像变成神经网络的参数,这样我们就可以用反向传播算法来调节这个输入图像了\ninput_param = nn.Parameter(input_img.data)\n\n#定义个优化器,采用LBFGS优化算法来优化(试验效果很好,它的特点是可以计算大规模数据的梯度下降)\noptimizer = optim.LBFGS([input_param])\n\n# 迭代步数\nnum_steps=300\n\n\n\"\"\"运行风格迁移的主算法过程.\"\"\"\nprint('正在构造风格迁移模型..')\n\nprint('开始优化..')\nfor i in range(num_steps):\n #每一个训练周期\n \n # 限制输入图像的色彩取值范围在0-1间\n input_param.data.clamp_(0, 1)\n \n # 清空梯度\n optimizer.zero_grad()\n # 将图像输入构造的神经网络中\n model(input_param)\n style_score = 0\n content_score = 0\n \n # 每个损失函数层都开始反向传播算法\n for sl in style_losses:\n style_score += sl.backward()\n for cl in content_losses:\n content_score += cl.backward()\n\n # 每隔50个周期打印一次训练数据\n if i % 50 == 0:\n print(\"运行 {}轮:\".format(i))\n print('风格损失 : {:4f} 内容损失: {:4f}'.format(\n style_score.data.item(), content_score.data.item()))\n print()\n def closure():\n return style_score + content_score\n #一步优化\n optimizer.step(closure)\n\n# 做一些修正,防止数据超界...\noutput = input_param.data.clamp_(0, 1)\n\n# 打印结果图\nplt.figure()\nimshow(output, title='Output Image')\n\nplt.ioff()\nplt.show()", "_____no_output_____" ] ], [ [ "本文件是集智学园http://campus.swarma.org 出品的“火炬上的深度学习”第IV课的配套源代码", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d6913d763768b4c116a85021e1cb7198143b5a
11,660
ipynb
Jupyter Notebook
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
249cb446a8d0d711c5ca7128ffd68d91fc2e381b
[ "Apache-2.0" ]
2,796
2017-07-12T06:23:19.000Z
2022-02-19T16:38:09.000Z
chapter02_supervised-learning/regularization-gluon.ipynb
m2rik/mxnet-the-straight-dope
b524c70401e9fb62cb2af411cee3abe2e344bace
[ "Apache-2.0" ]
337
2017-07-12T17:07:41.000Z
2020-10-15T20:19:17.000Z
chapter02_supervised-learning/regularization-gluon.ipynb
m2rik/mxnet-the-straight-dope
b524c70401e9fb62cb2af411cee3abe2e344bace
[ "Apache-2.0" ]
867
2017-07-13T03:59:31.000Z
2022-03-18T15:01:55.000Z
32.479109
686
0.553431
[ [ [ "# Overfitting and regularization (with ``gluon``)\n\nNow that we've built a [regularized logistic regression model from scratch](regularization-scratch.html), let's make this more efficient with ``gluon``. We recommend that you read that section for a description as to why regularization is a good idea. As always, we begin by loading libraries and some data.\n\n[**REFINED DRAFT - RELEASE STAGE: CATFOOD**]", "_____no_output_____" ] ], [ [ "from __future__ import print_function\nimport mxnet as mx\nfrom mxnet import autograd\nfrom mxnet import gluon\nimport mxnet.ndarray as nd\nimport numpy as np\nctx = mx.cpu()\n\n# for plotting purposes\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## The MNIST Dataset", "_____no_output_____" ] ], [ [ "mnist = mx.test_utils.get_mnist()\nnum_examples = 1000\nbatch_size = 64\ntrain_data = mx.gluon.data.DataLoader(\n mx.gluon.data.ArrayDataset(mnist[\"train_data\"][:num_examples],\n mnist[\"train_label\"][:num_examples].astype(np.float32)), \n batch_size, shuffle=True)\ntest_data = mx.gluon.data.DataLoader(\n mx.gluon.data.ArrayDataset(mnist[\"test_data\"][:num_examples],\n mnist[\"test_label\"][:num_examples].astype(np.float32)), \n batch_size, shuffle=False)", "_____no_output_____" ] ], [ [ "## Multiclass Logistic Regression", "_____no_output_____" ] ], [ [ "net = gluon.nn.Sequential()\nwith net.name_scope():\n net.add(gluon.nn.Dense(10))", "_____no_output_____" ] ], [ [ "## Parameter initialization\n", "_____no_output_____" ] ], [ [ "net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)", "_____no_output_____" ] ], [ [ "## Softmax Cross Entropy Loss", "_____no_output_____" ] ], [ [ "loss = gluon.loss.SoftmaxCrossEntropyLoss()", "_____no_output_____" ] ], [ [ "## Optimizer\n\nBy default ``gluon`` tries to keep the coefficients from diverging by using a *weight decay* penalty. So, to get the real overfitting experience we need to switch it off. We do this by passing `'wd': 0.0'` when we instantiate the trainer. 
", "_____no_output_____" ] ], [ [ "trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01, 'wd': 0.0})", "_____no_output_____" ] ], [ [ "## Evaluation Metric", "_____no_output_____" ] ], [ [ "def evaluate_accuracy(data_iterator, net, loss_fun):\n acc = mx.metric.Accuracy()\n loss_avg = 0.\n for i, (data, label) in enumerate(data_iterator):\n data = data.as_in_context(ctx).reshape((-1,784))\n label = label.as_in_context(ctx)\n output = net(data)\n loss = loss_fun(output, label) \n predictions = nd.argmax(output, axis=1)\n acc.update(preds=predictions, labels=label)\n loss_avg = loss_avg*i/(i+1) + nd.mean(loss).asscalar()/(i+1)\n return acc.get()[1], loss_avg\n\ndef plot_learningcurves(loss_tr,loss_ts, acc_tr,acc_ts):\n xs = list(range(len(loss_tr)))\n \n f = plt.figure(figsize=(12,6))\n fg1 = f.add_subplot(121)\n fg2 = f.add_subplot(122)\n \n fg1.set_xlabel('epoch',fontsize=14)\n fg1.set_title('Comparing loss functions')\n fg1.semilogy(xs, loss_tr)\n fg1.semilogy(xs, loss_ts)\n fg1.grid(True,which=\"both\")\n\n fg1.legend(['training loss', 'testing loss'],fontsize=14)\n \n fg2.set_title('Comparing accuracy')\n fg1.set_xlabel('epoch',fontsize=14)\n fg2.plot(xs, acc_tr)\n fg2.plot(xs, acc_ts)\n fg2.grid(True,which=\"both\")\n fg2.legend(['training accuracy', 'testing accuracy'],fontsize=14)\n plt.show()", "_____no_output_____" ] ], [ [ "## Execute training loop", "_____no_output_____" ] ], [ [ "epochs = 700\nmoving_loss = 0.\nniter=0\n\nloss_seq_train = []\nloss_seq_test = []\nacc_seq_train = []\nacc_seq_test = []\n\nfor e in range(epochs):\n for i, (data, label) in enumerate(train_data):\n data = data.as_in_context(ctx).reshape((-1,784))\n label = label.as_in_context(ctx)\n with autograd.record():\n output = net(data)\n cross_entropy = loss(output, label)\n cross_entropy.backward()\n trainer.step(data.shape[0])\n \n ##########################\n # Keep a moving average of the losses\n ##########################\n niter +=1\n moving_loss = .99 * moving_loss + .01 * nd.mean(cross_entropy).asscalar()\n est_loss = moving_loss/(1-0.99**niter)\n \n test_accuracy, test_loss = evaluate_accuracy(test_data, net, loss)\n train_accuracy, train_loss = evaluate_accuracy(train_data, net, loss)\n \n # save them for later\n loss_seq_train.append(train_loss)\n loss_seq_test.append(test_loss)\n acc_seq_train.append(train_accuracy)\n acc_seq_test.append(test_accuracy)\n \n \n if e % 20 == 0:\n print(\"Completed epoch %s. Train Loss: %s, Test Loss %s, Train_acc %s, Test_acc %s\" % \n (e+1, train_loss, test_loss, train_accuracy, test_accuracy)) \n\n## Plotting the learning curves\nplot_learningcurves(loss_seq_train,loss_seq_test,acc_seq_train,acc_seq_test)", "_____no_output_____" ] ], [ [ "## Regularization\n\nNow let's see what this mysterious *weight decay* is all about. We begin with a bit of math. When we add an L2 penalty to the weights we are effectively adding $\\frac{\\lambda}{2} \\|w\\|^2$ to the loss. Hence, every time we compute the gradient it gets an additional $\\lambda w$ term that is added to $g_t$, since this is the very derivative of the L2 penalty. As a result we end up taking a descent step not in the direction $-\\eta g_t$ but rather in the direction $-\\eta (g_t + \\lambda w)$. This effectively shrinks $w$ at each step by $\\eta \\lambda w$, thus the name weight decay. 
To make this work in practice we just need to set the weight decay to something nonzero.", "_____no_output_____" ] ], [ [ "net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx, force_reinit=True)\ntrainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01, 'wd': 0.001})\n\nmoving_loss = 0.\nniter=0\nloss_seq_train = []\nloss_seq_test = []\nacc_seq_train = []\nacc_seq_test = []\n\nfor e in range(epochs):\n for i, (data, label) in enumerate(train_data):\n data = data.as_in_context(ctx).reshape((-1,784))\n label = label.as_in_context(ctx)\n with autograd.record():\n output = net(data)\n cross_entropy = loss(output, label)\n cross_entropy.backward()\n trainer.step(data.shape[0])\n \n ##########################\n # Keep a moving average of the losses\n ##########################\n niter +=1\n moving_loss = .99 * moving_loss + .01 * nd.mean(cross_entropy).asscalar()\n est_loss = moving_loss/(1-0.99**niter)\n \n test_accuracy, test_loss = evaluate_accuracy(test_data, net,loss)\n train_accuracy, train_loss = evaluate_accuracy(train_data, net, loss)\n \n # save them for later\n loss_seq_train.append(train_loss)\n loss_seq_test.append(test_loss)\n acc_seq_train.append(train_accuracy)\n acc_seq_test.append(test_accuracy)\n \n if e % 20 == 0:\n print(\"Completed epoch %s. Train Loss: %s, Test Loss %s, Train_acc %s, Test_acc %s\" % \n (e+1, train_loss, test_loss, train_accuracy, test_accuracy)) \n \n## Plotting the learning curves\nplot_learningcurves(loss_seq_train,loss_seq_test,acc_seq_train,acc_seq_test)", "_____no_output_____" ] ], [ [ "As we can see, the test accuracy improves a bit. Note that the amount by which it improves actually depends on the amount of weight decay. We recommend that you try and experiment with different extents of weight decay. For instance, a larger weight decay (e.g. $0.01$) will lead to inferior performance, one that's larger still ($0.1$) will lead to terrible results. This is one of the reasons why tuning parameters is quite so important in getting good experimental results in practice.", "_____no_output_____" ], [ "## Next\n[Learning environments](../chapter02_supervised-learning/environment.ipynb)", "_____no_output_____" ], [ "For whinges or inquiries, [open an issue on GitHub.](https://github.com/zackchase/mxnet-the-straight-dope)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e7d6a153e2cda43c3ee52440342c8b16838b3833
74,588
ipynb
Jupyter Notebook
ava-kaggle.hist.ipynb
lumo7184/Avocado-
d1009e670a9da26dfa9df2746a06b77328594730
[ "MIT" ]
null
null
null
ava-kaggle.hist.ipynb
lumo7184/Avocado-
d1009e670a9da26dfa9df2746a06b77328594730
[ "MIT" ]
null
null
null
ava-kaggle.hist.ipynb
lumo7184/Avocado-
d1009e670a9da26dfa9df2746a06b77328594730
[ "MIT" ]
null
null
null
105.201693
44,828
0.78998
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as pp\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import linear_model\nfrom sklearn.metrics import mean_squared_error, r2_score\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "df = pd.read_csv('avocado.csv')\ndf.head()", "_____no_output_____" ], [ "df['Date'] = pd.to_datetime(df['Date'])", "_____no_output_____" ], [ "df.head(5)", "_____no_output_____" ], [ "df = df.drop(['4046','4225','4770','Large Bags','Small Bags','XLarge Bags','Total Volume'],axis=1)", "_____no_output_____" ], [ "US = df.loc[(df['region']) == 'TotalUS'] \nUS.head(5)", "_____no_output_____" ], [ "df['AveragePrice'].plot(kind='hist', rot=70, logx=True, logy=True)\n# Season # 1= Spring #2= Summer #3= Fall #4= Winter", "_____no_output_____" ], [ "# Create bee swarm plot with Seaborn's default settings\n_ = sns.swarmplot(x='Season#', y='AveragePrice', data=US)\n\n# Label the axes\n_ = plt.xlabel('Season')\n_ = plt.ylabel('Average Price')\n\n# Show the plot\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d6a74b189641ebe9d20a5bd0c304f5cee3a5e6
113,206
ipynb
Jupyter Notebook
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
b8107262ac9982432ee99077a2324b99b4b0fd87
[ "MIT" ]
3
2019-09-17T10:46:00.000Z
2019-11-05T04:37:40.000Z
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
b8107262ac9982432ee99077a2324b99b4b0fd87
[ "MIT" ]
2
2019-06-20T12:51:39.000Z
2019-06-20T12:53:26.000Z
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
b8107262ac9982432ee99077a2324b99b4b0fd87
[ "MIT" ]
9
2019-07-04T21:10:45.000Z
2022-02-19T00:26:33.000Z
43.963495
41,672
0.631053
[ [ [ "# Introduction\n\n**Prerequisites**\n\n- Python Fundamentals\n\n\n**Outcomes**\n\n- Understand the core pandas objects \n- Series \n- DataFrame \n- Index into particular elements of a Series and DataFrame \n- Understand what `.dtype`/`.dtypes` do \n- Make basic visualizations \n\n\n**Data**\n\n- US regional unemployment data from Bureau of Labor Statistics ", "_____no_output_____" ], [ "## Pandas\n\nThis notebook begins the material on `pandas`\n\nTo start we will import the pandas package and give it the nickname\n`\"pd\"`, which is the conventional way to import pandas", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n# Don't worry about this line for now!\n%matplotlib inline", "_____no_output_____" ] ], [ [ "Sometimes it will be helpful to know which version of pandas we are\nusing\n\nWe can check this by running the code below", "_____no_output_____" ], [ "## Series\n\nThe first main pandas type we will introduce is called Series\n\nA Series is a single column of data, with row labels for each\nobservation\n\nPandas refers to the row labels as the *index* of the Series\n\n<img src=\"https://storage.googleapis.com/ds4e/_static/intro_files/PandasSeries.png\" alt=\"PandasSeries.png\" style=\"\">\n\n \nBelow we create a Series which contains the US unemployment rate every\nother year starting in 1995", "_____no_output_____" ] ], [ [ "values = [5.6, 5.3, 4.3, 4.2, 5.8, 5.3, 4.6, 7.8, 9.1, 8., 5.7]\nyears = list(range(1995, 2017, 2))\n\nunemp = pd.Series(data=values, index=years, name=\"Unemployment\")", "_____no_output_____" ], [ "unemp", "_____no_output_____" ] ], [ [ "We can look at the index and values in our Series", "_____no_output_____" ] ], [ [ "unemp.index", "_____no_output_____" ], [ "unemp.values", "_____no_output_____" ], [ "unemp.", "_____no_output_____" ] ], [ [ "### What can we do with a Series object?", "_____no_output_____" ], [ "#### `.head` and `.tail`\n\nOften our data will have many rows and we won’t want to display it all\nat once\n\nThe methods `.head` and `.tail` show rows at the beginning and end\nof our Series, respectively", "_____no_output_____" ] ], [ [ "unemp.head()", "_____no_output_____" ], [ "unemp.tail()", "_____no_output_____" ] ], [ [ "#### Basic Plotting\n\nWe can also plot data using the `.plot` method\n\nThis is why we needed the `%matplotlib inline` — it tells the notebook\nto display figures inside the notebook itself\n\n*Note*: Pandas can do much more in terms of visualization\n\nWe will talk about more advanced visualization features later", "_____no_output_____" ] ], [ [ "unemp.plot(kind=\"bar\")", "_____no_output_____" ] ], [ [ "#### Unique values\n\nIn this dataset it doesn’t make much sense, but we may want to find the\nunique values in a Series\n\nThis can be done with the `.unique` method", "_____no_output_____" ] ], [ [ "unemp.unique()", "_____no_output_____" ] ], [ [ "#### Indexing\n\nSometimes we will want to select particular elements from a Series\n\nWe can do this using `.loc[index_things]`; where `index_things` is\nan item from the index, or a list of items in the index\n\nWe will see this more in depth in a coming lecture, but for now we\ndemonstrate how to select one or multiple elements of the Series", "_____no_output_____" ] ], [ [ "unemp", "_____no_output_____" ], [ "unemp.loc[[2009, 1995]]", "_____no_output_____" ], [ "unemp.iloc[-1]", "_____no_output_____" ], [ "unemp.loc[[1995, 2005, 2015]]", "_____no_output_____" ] ], [ [ "<blockquote>\n\n**Check for understanding**\n\nFor each of the following exercises, we 
recommend reading the documentation\nfor help\n\n- Display only the first 2 elements of the Series using the `.head` method \n- Using the `plot` method, make a bar plot \n- Use `.loc` to select the lowest/highest unemployment rate shown in the Series \n- Run the code `unemp.dtype` below. What does it give you? Talk with your neighbor about where it might come from \n\n\n\n</blockquote>", "_____no_output_____" ] ], [ [ "unemp.loc[[unemp.idxmin(), unemp.idxmax()]]", "_____no_output_____" ] ], [ [ "## DataFrame\n\nA DataFrame is how pandas stores one or more columns of data\n\nWe can think a DataFrames a multiple Series stacked side by side as\ncolumns\n\nThis is similar to a sheet in an Excel workbook or a table in a SQL\ndatabase\n\nIn addition to row labels (an index), DataFrames also have column labels\n\nWe refer to these column labels as the columns or column names\n\n<img src=\"https://storage.googleapis.com/ds4e/_static/intro_files/PandasDataFrame.png\" alt=\"PandasDataFrame.png\" style=\"\">\n\n \nBelow we create a DataFrame that contains the unemployment rate every\nother year by region of the US starting in 1995.", "_____no_output_____" ] ], [ [ "data = {\"NorthEast\": [5.9, 5.6, 4.4, 3.8, 5.8, 4.9, 4.3, 7.1, 8.3, 7.9, 5.7],\n \"MidWest\": [4.5, 4.3, 3.6, 4. , 5.7, 5.7, 4.9, 8.1, 8.7, 7.4, 5.1],\n \"South\": [5.3, 5.2, 4.2, 4. , 5.7, 5.2, 4.3, 7.6, 9.1, 7.4, 5.5],\n \"West\": [6.6, 6., 5.2, 4.6, 6.5, 5.5, 4.5, 8.6, 10.7, 8.5, 6.1],\n \"National\": [5.6, 5.3, 4.3, 4.2, 5.8, 5.3, 4.6, 7.8, 9.1, 8., 5.7]}\n\nunemp_region = pd.DataFrame(data, index=years)\nunemp_region", "_____no_output_____" ] ], [ [ "We can retrieve the index and the DataFrame values in the same way we\ndid with a Series", "_____no_output_____" ] ], [ [ "unemp_region.index", "_____no_output_____" ], [ "unemp_region.values", "_____no_output_____" ] ], [ [ "### What can we do with a DataFrame?\n\nPretty much everything we can do with a Series", "_____no_output_____" ], [ "#### `.head` and `.tail`\n\nAs with Series, we can use `.head` and `.tail` to show only the\nfirst or last `n` rows", "_____no_output_____" ] ], [ [ "unemp_region.head()", "_____no_output_____" ], [ "unemp_region.tail(3)", "_____no_output_____" ] ], [ [ "#### Plotting\n\nWe can generate plots with the `.plot` method\n\nNotice we now have a separate line for each column of data", "_____no_output_____" ] ], [ [ "unemp_region.plot()", "_____no_output_____" ] ], [ [ "#### Indexing\n\nWe can also do indexing using `.loc`\n\nHowever, there is a little more to it than before because we can choose\nsubsets of both row and columns", "_____no_output_____" ] ], [ [ "unemp_region.head()", "_____no_output_____" ], [ "unemp_region.loc[1995, \"NorthEast\"]", "_____no_output_____" ], [ "unemp_region.loc[[1995, 2005], \"South\"]", "_____no_output_____" ], [ "unemp_region.loc[1995, [\"NorthEast\", \"National\"]]", "_____no_output_____" ], [ "unemp_region.loc[:, \"NorthEast\"]", "_____no_output_____" ], [ "# `[string]` with no `.loc` extracts a whole column\nunemp_region[\"MidWest\"]", "_____no_output_____" ] ], [ [ "### Computations with columns\n\nPandas can do various computations and mathematical operations on\ncolumns\n\nLet’s take a look at a few of them", "_____no_output_____" ] ], [ [ "# Divide by 100 to move from percent units to a rate\nunemp_region[\"West\"] / 100", "_____no_output_____" ], [ "# Find maximum\nunemp_region[\"West\"].max()", "_____no_output_____" ], [ "unemp_region[\"West\"].iloc[1:5]", "_____no_output_____" ], [ 
"unemp_region[\"MidWest\"].head(6)", "_____no_output_____" ], [ "# Find the difference between two columns\n# Notice that pandas applies `-` to _all rows_ at one time\n# We'll see more of this throughout these materials\nunemp_region[\"West\"].iloc[1:5] - unemp_region[\"MidWest\"].head(6)", "_____no_output_____" ], [ "# Find correlation between two columns\nunemp_region.West.corr(unemp_region[\"MidWest\"])", "_____no_output_____" ], [ "# find correlation between all column pairs\nunemp_region.corr()", "_____no_output_____" ] ], [ [ "<blockquote>\n\n**Check for understanding**\n\nFor each of the following, we recommend reading the documentation for help\n\n- Use introspection (or google-fu) to find a way to obtain a list with\n all of the column names in `unemp_region` \n- Using the `plot` method, make a bar plot. What does it look like\n now? \n- Use `.loc` to select the the unemployment data for the\n `NorthEast` and `West` for the years 1995, 2005, 2011, and 2015. \n- Run the code `unemp_region.dtypes` below. What does it give you?\n How does this compare with `unemp.dtype`? \n\n\n\n</blockquote>", "_____no_output_____" ], [ "## Data types\n\nWe asked you to run the commands `unemp.dtype` and\n`unemp_region.dtypes` and think about what these methods output\n\nYou might have guessed that they return the type of the values inside\neach column\n\nOccasionally, you might need to investigate what types you have in your\nDataFrame when an operation is not doing what you expect it to", "_____no_output_____" ] ], [ [ "unemp.dtype", "_____no_output_____" ], [ "unemp_region.dtypes", "_____no_output_____" ] ], [ [ "DataFrames will only distinguish between a few types\n\n- Booleans (`bool`) \n- Floating point numbers (`float64`) \n- Integers (`int64`) \n- Dates (`datetime`) — we will learn this soon \n- Categorical data (`categorical`) \n- Everything else, including strings (`object`) \n\n\nIn the future, we will often refer to the type of data stored in a\ncolumn as its `dtype`\n\nLet’s look at an example for when having an incorrect `dtype` can\ncause problems\n\nSuppose that when we imported the data the `South` column was\ninterpreted as a string", "_____no_output_____" ] ], [ [ "str_unemp = unemp_region.copy()\nstr_unemp[\"South\"] = str_unemp[\"South\"].astype(str)\nstr_unemp.dtypes", "_____no_output_____" ] ], [ [ "Everything *looks* ok…", "_____no_output_____" ] ], [ [ "str_unemp.head()", "_____no_output_____" ] ], [ [ "But if we try to do something like compute the sum of all the columns,\nwe get unexpected results…", "_____no_output_____" ] ], [ [ "str_unemp.sum()", "_____no_output_____" ] ], [ [ "This happened because `.sum` effectively calls `+` on all rows in\neach column\n\nRecall that when we apply `+` to two strings, the result is the\nstrings mashed together\n\nSo in this case we saw that the entries in all the rows of the South\ncolumn were stitched together into one long string", "_____no_output_____" ], [ "## Changing DataFrames\n\nWe can change the data inside of a DataFrame in various ways:\n\n- Adding new columns \n- Changing index labels or column names \n- Altering existing data (e.g. 
doing some arithmetic or making a column\n of strings lowercase) \n\n\nSome of these “mutations” will be topics of future notebooks, so we will\nonly briefly discuss a few of the things we can do below", "_____no_output_____" ], [ "### Creating new columns\n\nWe can create new data by “assigning values to a column” similar to how\nwe assign values to a variable\n\nIn pandas, we create a new column of a DataFrame by writing", "_____no_output_____" ], [ "```python\ndf[\"New Column Name\"] = new_values\n```\n", "_____no_output_____" ], [ "Below we create an unweighted mean of the unemployment rate across the\nfour regions of the US — notice this differs from the national\nunemployment rate", "_____no_output_____" ] ], [ [ "unemp_region[\"UnweightedMean\"] = (unemp_region[\"NorthEast\"] +\n unemp_region[\"MidWest\"] +\n unemp_region[\"South\"] +\n unemp_region[\"West\"])/4", "_____no_output_____" ], [ "unemp_region.head()", "_____no_output_____" ] ], [ [ "### Changing values\n\nChanging the values inside of a DataFrame should be done sparingly\n\nHowever, it can be done by assigning a value to a location in the\nDataFrame\n\n`df.loc[index, column] = value`", "_____no_output_____" ] ], [ [ "unemp_region.loc[1995, \"UnweightedMean\"] = 0.0", "_____no_output_____" ], [ "unemp_region.head()", "_____no_output_____" ] ], [ [ "### Renaming columns\n\nWe can also rename the columns of a DataFrame\n\nThis is helpful because the names that sometimes come with datasets are\nunbearable…\n\nFor example, the original name for the North East unemployment rate\ngiven by the Bureau of Labor Statistics was `LASRD910000000000003`…\n\nThey have their reasons for using these names, but it can make our job\ndifficult since we need to type it sometimes repeatedly\n\nWe can rename columns by passing a dictionary to the `rename` method\n\nThis dictionary contains the old names as the keys and new names as the\nvalues.\n\nSee the example below", "_____no_output_____" ] ], [ [ "names = {\"NorthEast\": \"NE\",\n \"MidWest\": \"MW\",\n \"South\": \"S\",\n \"West\": \"W\"}\nunemp_region.rename(columns=names)", "_____no_output_____" ], [ "unemp_region.head()", "_____no_output_____" ] ], [ [ "We renamed our columns… Why does the DataFrame still show the old\ncolumn names?\n\nMany of the operations that pandas does creates a copy of your data by\ndefault\n\nIt does this in order to protect your data and make sure you don’t\noverwrite information you’d like to keep\n\nWe can make these operations permanent by either\n\n1. Assigning the output back to the variable name\n `df = df.rename(columns=rename_dict)` or \n1. Looking into whether the method has an `inplace` option. For\n example, `df.rename(columns=rename_dict, inplace=True)` \n\n\nThere are times when setting `inplace=True` will make your code faster\n(e.g. if you have a very large DataFrame and you don’t want to copy all\nthe data), but that doesn’t always happen\n\nWe recommend using the first option until you get comfortable with\npandas because operations that don’t alter your data are (usually)\neasier to reason about", "_____no_output_____" ] ], [ [ "names = {\"NorthEast\": \"NE\",\n \"MidWest\": \"MW\",\n \"South\": \"S\",\n \"West\": \"W\"}\n\nunemp_shortname = unemp_region.rename(columns=names)\nunemp_shortname.head()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e7d6ae9830f3fb0cf30f19fd1b060297ce239c90
126,301
ipynb
Jupyter Notebook
Classify_text_with_bert.ipynb
Abudhagir/DeepLearningTutorials
d6e8b15b5f60c3d148451bf4a87d44214c26fb18
[ "BSD-3-Clause" ]
null
null
null
Classify_text_with_bert.ipynb
Abudhagir/DeepLearningTutorials
d6e8b15b5f60c3d148451bf4a87d44214c26fb18
[ "BSD-3-Clause" ]
null
null
null
Classify_text_with_bert.ipynb
Abudhagir/DeepLearningTutorials
d6e8b15b5f60c3d148451bf4a87d44214c26fb18
[ "BSD-3-Clause" ]
null
null
null
94.963158
36,158
0.708791
[ [ [ "<a href=\"https://colab.research.google.com/github/Abudhagir/DeepLearningTutorials/blob/master/Classify_text_with_bert.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "##### Copyright 2020 The TensorFlow Hub Authors.\n", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/text/tutorials/classify_text_with_bert\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/classify_text_with_bert.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/collections/bert/1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>", "_____no_output_____" ], [ "# Classify text with BERT\n\nThis tutorial contains complete code to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews.\nIn addition to training a model, you will learn how to preprocess text into an appropriate format.\n\nIn this notebook, you will:\n\n- Load the IMDB dataset\n- Load a BERT model from TensorFlow Hub\n- Build your own model by combining BERT with a classifier\n- Train your own model, fine-tuning BERT as part of that\n- Save your model and use it to classify sentences\n\nIf you're new to working with the IMDB dataset, please see [Basic text classification](https://www.tensorflow.org/tutorials/keras/text_classification) for more details.", "_____no_output_____" ], [ "## About BERT\n\n[BERT](https://arxiv.org/abs/1810.04805) and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers. 
\n\nBERT models are usually pre-trained on a large corpus of text, then fine-tuned for specific tasks.\n", "_____no_output_____" ], [ "## Setup\n", "_____no_output_____" ], [ "#TensorFlow Text\n\"TensorFlow Text provides a collection of text related classes and ops ready to use with TensorFlow 2.0. The library can perform the preprocessing regularly required by text-based models, and includes other features useful for sequence modeling not provided by core TensorFlow\" (https://www.tensorflow.org/text/guide/tf_text_intro)\n", "_____no_output_____" ] ], [ [ "# A dependency of the preprocessing for BERT inputs\n!pip install -q -U tensorflow-text", "|████████████████████████████████| 4.4 MB 5.3 MB/s \n" ] ], [ [ "We will use the AdamW optimizer from [tensorflow/models](https://github.com/tensorflow/models). \"TensorFlow Model Garden is a repository with a number of different implementations of state-of-the-art (SOTA) models and modeling solutions for TensorFlow users.\"\n", "_____no_output_____" ] ], [ [ "!pip install -q tf-models-official", "|████████████████████████████████| 1.8 MB 5.3 MB/s \n|████████████████████████████████| 211 kB 57.8 MB/s \n|████████████████████████████████| 90 kB 8.5 MB/s \n|████████████████████████████████| 99 kB 10.8 MB/s \n|████████████████████████████████| 1.2 MB 29.8 MB/s \n|████████████████████████████████| 636 kB 48.8 MB/s \n|████████████████████████████████| 1.1 MB 34.2 MB/s \n|████████████████████████████████| 43 kB 2.3 MB/s \n|████████████████████████████████| 352 kB 45.4 MB/s \n|████████████████████████████████| 37.1 MB 43 kB/s \n Building wheel for py-cpuinfo (setup.py) ... done\n Building wheel for seqeval (setup.py) ... done\n" ], [ "%tensorflow_version 2.x\nimport tensorflow as tf\ndevice_name = tf.test.gpu_device_name()\nif device_name != '/device:GPU:0':\n raise SystemError('GPU device not found')\nprint('Found GPU at: {}'.format(device_name))", "Found GPU at: /device:GPU:0\n" ], [ "import os\nimport shutil\n\nimport tensorflow_hub as hub # TFHub is a repository of trained machine learning models (https://www.tensorflow.org/hub)\nimport tensorflow_text as text\nfrom official.nlp import optimization # to create AdamW optimizer\n\nimport matplotlib.pyplot as plt\n\ntf.get_logger().setLevel('ERROR')", "_____no_output_____" ] ], [ [ "## Sentiment analysis\n\nThis notebook trains a sentiment analysis model to classify movie reviews as *positive* or *negative*, based on the text of the review.\n\nYou'll use the [Large Movie Review Dataset](https://ai.stanford.edu/~amaas/data/sentiment/) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/).", "_____no_output_____" ], [ "#Keras\n\"Keras is the high-level API of TensorFlow 2: an approachable, highly-productive interface for solving machine learning problems, with a focus on modern deep learning. It provides essential abstractions and building blocks for developing and shipping machine learning solutions with high iteration velocity\" (https://keras.io/about/). A useful hands-on colab detailing keras abstractions can be found [here](https://jaredwinick.github.io/what_is_tf_keras/). 
Additional details can be found [here](https://machinelearningmastery.com/tensorflow-tutorial-deep-learning-with-tf-keras/).", "_____no_output_____" ], [ "### Download the IMDB dataset\n\nLet's download and extract the dataset, then explore the directory structure.\n", "_____no_output_____" ] ], [ [ "url = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'\n\ndataset = tf.keras.utils.get_file('aclImdb_v1.tar.gz', url,\n untar=True, cache_dir='.',\n cache_subdir='')\n\ndataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')\nprint(dataset_dir)\n\n!ls ./aclImdb/train\n\ntrain_dir = os.path.join(dataset_dir, 'train')\n\n# remove unused folders to make it easier to load the data\nremove_dir = os.path.join(train_dir, 'unsup')\nshutil.rmtree(remove_dir)", "Downloading data from https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\n84131840/84125825 [==============================] - 5s 0us/step\n84140032/84125825 [==============================] - 5s 0us/step\n./aclImdb\nlabeledBow.feat pos\tunsupBow.feat urls_pos.txt\nneg\t\t unsup\turls_neg.txt urls_unsup.txt\n" ] ], [ [ "Next, you will use the `text_dataset_from_directory` utility to create a labeled `tf.data.Dataset`. The `tf.data.Dataset` API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern:\n- Create a source dataset from your input data.\n- Apply dataset transformations to preprocess the data.\n- Iterate over the dataset and process the elements.\nMore details can be found [here](https://www.tensorflow.org/api_docs/python/tf/data/Dataset).\n\nThe IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the `validation_split` argument below.\n\nNote: When using the `validation_split` and `subset` arguments, make sure to either specify a random seed, or to pass `shuffle=False`, so that the validation and training splits have no overlap.", "_____no_output_____" ], [ "#Prefetching\nPrefetching overlaps the preprocessing and model execution of a training step. While the model is executing training step s, the input pipeline is reading the data for step s+1. Doing so reduces the step time to the maximum (as opposed to the sum) of the training and the time it takes to extract the data.\n\nThe tf.data API provides the tf.data.Dataset.prefetch transformation. It can be used to decouple the time when data is produced from the time when data is consumed. In particular, the transformation uses a background thread and an internal buffer to prefetch elements from the input dataset ahead of the time they are requested. The number of elements to prefetch should be equal to (or possibly greater than) the number of batches consumed by a single training step. You could either manually tune this value, or set it to tf.data.AUTOTUNE, which will prompt the tf.data runtime to tune the value dynamically at runtime. 
[Source](https://www.tensorflow.org/guide/data_performance#prefetching)\n", "_____no_output_____" ] ], [ [ "AUTOTUNE = tf.data.AUTOTUNE\nbatch_size = 32\nseed = 42\n\nraw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(\n 'aclImdb/train',\n batch_size=batch_size,\n validation_split=0.2,\n subset='training',\n seed=seed)\n\nclass_names = raw_train_ds.class_names\nprint(class_names)\ntrain_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)\n\nval_ds = tf.keras.preprocessing.text_dataset_from_directory(\n 'aclImdb/train',\n batch_size=batch_size,\n validation_split=0.2,\n subset='validation',\n seed=seed)\n\nval_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)\n\ntest_ds = tf.keras.preprocessing.text_dataset_from_directory(\n 'aclImdb/test',\n batch_size=batch_size)\n\ntest_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)", "Found 25000 files belonging to 2 classes.\nUsing 20000 files for training.\n['neg', 'pos']\nFound 25000 files belonging to 2 classes.\nUsing 5000 files for validation.\nFound 25000 files belonging to 2 classes.\n" ] ], [ [ "Let's take a look at a few reviews.", "_____no_output_____" ] ], [ [ "for text_batch, label_batch in train_ds.take(1):\n for i in range(3):\n print(f'Review: {text_batch.numpy()[i]}')\n label = label_batch.numpy()[i]\n print(f'Label : {label} ({class_names[label]})')", "Review: b'\"Pandemonium\" is a horror movie spoof that comes off more stupid than funny. Believe me when I tell you, I love comedies. Especially comedy spoofs. \"Airplane\", \"The Naked Gun\" trilogy, \"Blazing Saddles\", \"High Anxiety\", and \"Spaceballs\" are some of my favorite comedies that spoof a particular genre. \"Pandemonium\" is not up there with those films. Most of the scenes in this movie had me sitting there in stunned silence because the movie wasn\\'t all that funny. There are a few laughs in the film, but when you watch a comedy, you expect to laugh a lot more than a few times and that\\'s all this film has going for it. Geez, \"Scream\" had more laughs than this film and that was more of a horror film. How bizarre is that?<br /><br />*1/2 (out of four)'\nLabel : 0 (neg)\nReview: b\"David Mamet is a very interesting and a very un-equal director. His first movie 'House of Games' was the one I liked best, and it set a series of films with characters whose perspective of life changes as they get into complicated situations, and so does the perspective of the viewer.<br /><br />So is 'Homicide' which from the title tries to set the mind of the viewer to the usual crime drama. The principal characters are two cops, one Jewish and one Irish who deal with a racially charged area. The murder of an old Jewish shop owner who proves to be an ancient veteran of the Israeli Independence war triggers the Jewish identity in the mind and heart of the Jewish detective.<br /><br />This is were the flaws of the film are the more obvious. The process of awakening is theatrical and hard to believe, the group of Jewish militants is operatic, and the way the detective eventually walks to the final violent confrontation is pathetic. The end of the film itself is Mamet-like smart, but disappoints from a human emotional perspective.<br /><br />Joe Mantegna and William Macy give strong performances, but the flaws of the story are too evident to be easily compensated.\"\nLabel : 0 (neg)\nReview: b'Great documentary about the lives of NY firefighters during the worst terrorist attack of all time.. That reason alone is why this should be a must see collectors item.. 
What shocked me was not only the attacks, but the\"High Fat Diet\" and physical appearance of some of these firefighters. I think a lot of Doctors would agree with me that,in the physical shape they were in, some of these firefighters would NOT of made it to the 79th floor carrying over 60 lbs of gear. Having said that i now have a greater respect for firefighters and i realize becoming a firefighter is a life altering job. The French have a history of making great documentary\\'s and that is what this is, a Great Documentary.....'\nLabel : 1 (pos)\n" ] ], [ [ "## Loading models from TensorFlow Hub\n\nHere you can choose which BERT model you will load from TensorFlow Hub and fine-tune. There are multiple BERT models available.\n\n - [BERT-Base](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3), [Uncased](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3) and [seven more models](https://tfhub.dev/google/collections/bert/1) with trained weights released by the original BERT authors.\n - [Small BERTs](https://tfhub.dev/google/collections/bert/1) have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.\n - [ALBERT](https://tfhub.dev/google/collections/albert/1): four different sizes of \"A Lite BERT\" that reduces model size (but not computation time) by sharing parameters between layers.\n\nThe model documentation on TensorFlow Hub has more details and references to the\nresearch literature. Follow the links above, or click on the [`tfhub.dev`](http://tfhub.dev) URL\nprinted after the next cell execution.\n\nThe suggestion is to start with a Small BERT (with fewer parameters) since they are faster to fine-tune. If you like a small model but with higher accuracy, ALBERT might be your next option. If you want even better accuracy, choose\none of the classic BERT sizes or their recent refinements like Electra, Talking Heads, or a BERT Expert.\n\nAside from the models available below, there are [multiple versions](https://tfhub.dev/google/collections/transformer_encoders_text/1) of the models that are larger and can yield even better accuracy, but they are too big to be fine-tuned on a single GPU. 
You will be able to do that on the [Solve GLUE tasks using BERT on a TPU colab](https://www.tensorflow.org/text/tutorials/bert_glue).\n\nYou'll see in the code below that switching the tfhub.dev URL is enough to try any of these models, because all the differences between them are encapsulated in the SavedModels from TF Hub.", "_____no_output_____" ] ], [ [ "#@title Choose a BERT model to fine-tune\n\nbert_model_name = \"small_bert/bert_en_uncased_L-4_H-512_A-8\" #@param [\"bert_en_uncased_L-12_H-768_A-12\", \"bert_en_cased_L-12_H-768_A-12\", \"bert_multi_cased_L-12_H-768_A-12\", \"small_bert/bert_en_uncased_L-2_H-128_A-2\", \"small_bert/bert_en_uncased_L-2_H-256_A-4\", \"small_bert/bert_en_uncased_L-2_H-512_A-8\", \"small_bert/bert_en_uncased_L-2_H-768_A-12\", \"small_bert/bert_en_uncased_L-4_H-128_A-2\", \"small_bert/bert_en_uncased_L-4_H-256_A-4\", \"small_bert/bert_en_uncased_L-4_H-512_A-8\", \"small_bert/bert_en_uncased_L-4_H-768_A-12\", \"small_bert/bert_en_uncased_L-6_H-128_A-2\", \"small_bert/bert_en_uncased_L-6_H-256_A-4\", \"small_bert/bert_en_uncased_L-6_H-512_A-8\", \"small_bert/bert_en_uncased_L-6_H-768_A-12\", \"small_bert/bert_en_uncased_L-8_H-128_A-2\", \"small_bert/bert_en_uncased_L-8_H-256_A-4\", \"small_bert/bert_en_uncased_L-8_H-512_A-8\", \"small_bert/bert_en_uncased_L-8_H-768_A-12\", \"small_bert/bert_en_uncased_L-10_H-128_A-2\", \"small_bert/bert_en_uncased_L-10_H-256_A-4\", \"small_bert/bert_en_uncased_L-10_H-512_A-8\", \"small_bert/bert_en_uncased_L-10_H-768_A-12\", \"small_bert/bert_en_uncased_L-12_H-128_A-2\", \"small_bert/bert_en_uncased_L-12_H-256_A-4\", \"small_bert/bert_en_uncased_L-12_H-512_A-8\", \"small_bert/bert_en_uncased_L-12_H-768_A-12\", \"albert_en_base\", \"electra_small\", \"electra_base\", \"experts_pubmed\", \"experts_wiki_books\", \"talking-heads_base\"]\n\nmap_name_to_handle = {\n 'bert_en_uncased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',\n 'bert_en_cased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',\n 'bert_multi_cased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',\n 'small_bert/bert_en_uncased_L-2_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-2_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-2_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-2_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-4_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-4_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-4_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-4_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-6_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-6_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-6_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',\n 
'small_bert/bert_en_uncased_L-6_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-8_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-8_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-8_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-8_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-10_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-10_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-10_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-10_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-12_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-12_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-12_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',\n 'albert_en_base':\n 'https://tfhub.dev/tensorflow/albert_en_base/2',\n 'electra_small':\n 'https://tfhub.dev/google/electra_small/2',\n 'electra_base':\n 'https://tfhub.dev/google/electra_base/2',\n 'experts_pubmed':\n 'https://tfhub.dev/google/experts/bert/pubmed/2',\n 'experts_wiki_books':\n 'https://tfhub.dev/google/experts/bert/wiki_books/2',\n 'talking-heads_base':\n 'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',\n}\n\nmap_model_to_preprocess = {\n 'bert_en_uncased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'bert_en_cased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',\n 'small_bert/bert_en_uncased_L-2_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-2_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-2_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-2_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-4_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-4_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-4_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-4_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-6_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-6_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-6_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-6_H-768_A-12':\n 
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-8_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-8_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-8_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-8_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-10_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-10_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-10_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-10_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-12_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-12_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-12_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'bert_multi_cased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',\n 'albert_en_base':\n 'https://tfhub.dev/tensorflow/albert_en_preprocess/3',\n 'electra_small':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'electra_base':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'experts_pubmed':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'experts_wiki_books':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'talking-heads_base':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n}\n\ntfhub_handle_encoder = map_name_to_handle[bert_model_name]\ntfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]\n\nprint(f'BERT model selected : {tfhub_handle_encoder}')\nprint(f'Preprocess model auto-selected: {tfhub_handle_preprocess}')", "BERT model selected : https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1\nPreprocess model auto-selected: https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3\n" ] ], [ [ "## The preprocessing model\n\nText inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text.\n\nThe preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above. For BERT models from the drop-down above, the preprocessing model is selected automatically.\n\nNote: You will load the preprocessing model into a [hub.KerasLayer](https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer) to compose your fine-tuned model. More information on Keras layers can be found [here](https://keras.io/api/layers/). \"Layers are the basic building blocks of neural networks in Keras. 
A layer consists of a tensor-in tensor-out computation function (the layer's call method) and some state, held in TensorFlow variables (the layer's weights).\"\n\n[hub.KerasLayer](https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer) is the preferred API to load a TF2-style [SavedModel](https://www.tensorflow.org/guide/saved_model) from TF Hub into a Keras model.", "_____no_output_____" ] ], [ [ "bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess)", "_____no_output_____" ] ], [ [ "Let's try the preprocessing model on some text and see the output:", "_____no_output_____" ] ], [ [ "text_test = ['this is such an amazing movie!']\ntext_preprocessed = bert_preprocess_model(text_test)\n\nprint(f'Keys : {list(text_preprocessed.keys())}')\nprint(f'Shape : {text_preprocessed[\"input_word_ids\"].shape}')\nprint(f'Word Ids : {text_preprocessed[\"input_word_ids\"][0, :12]}')\nprint(f'Input Mask : {text_preprocessed[\"input_mask\"][0, :12]}')\nprint(f'Type Ids : {text_preprocessed[\"input_type_ids\"][0, :12]}')", "Keys : ['input_type_ids', 'input_mask', 'input_word_ids']\nShape : (1, 128)\nWord Ids : [ 101 2023 2003 2107 2019 6429 3185 999 102 0 0 0]\nInput Mask : [1 1 1 1 1 1 1 1 1 0 0 0]\nType Ids : [0 0 0 0 0 0 0 0 0 0 0 0]\n" ] ], [ [ "As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (`input_words_id`, `input_mask` and `input_type_ids`).\n\nSome other important points:\n- The input is truncated to 128 tokens. The number of tokens can be customized, and you can see more details on the [Solve GLUE tasks using BERT on a TPU colab](https://www.tensorflow.org/text/tutorials/bert_glue).\n- The `input_type_ids` only have one value (0) because this is a single sentence input. For a multiple sentence input, it would have one number for each input.\n\nSince this text preprocessor is a TensorFlow model, It can be included in your model directly.", "_____no_output_____" ], [ "## Using the BERT model\n\nBefore putting BERT into your own model, let's take a look at its outputs. You will load it from TF Hub and see the returned values.", "_____no_output_____" ] ], [ [ "bert_model = hub.KerasLayer(tfhub_handle_encoder)", "_____no_output_____" ], [ "bert_results = bert_model(text_preprocessed)\n\nprint(f'Loaded BERT: {tfhub_handle_encoder}')\nprint(f'Pooled Outputs Shape:{bert_results[\"pooled_output\"].shape}')\nprint(f'Pooled Outputs Values:{bert_results[\"pooled_output\"][0, :12]}')\nprint(f'Sequence Outputs Shape:{bert_results[\"sequence_output\"].shape}')\nprint(f'Sequence Outputs Values:{bert_results[\"sequence_output\"][0, :12]}')", "Loaded BERT: https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1\nPooled Outputs Shape:(1, 512)\nPooled Outputs Values:[ 0.7626282 0.9928099 -0.18611862 0.3667383 0.15233758 0.655044\n 0.9681154 -0.94862705 0.0021616 -0.9877732 0.06842764 -0.97630596]\nSequence Outputs Shape:(1, 128, 512)\nSequence Outputs Values:[[-0.28946292 0.34321183 0.33231512 ... 0.21300802 0.7102092\n -0.05771042]\n [-0.28741995 0.31980985 -0.23018652 ... 0.5845511 -0.21329862\n 0.72692007]\n [-0.6615692 0.68876815 -0.8743301 ... 0.1087728 -0.26173076\n 0.47855455]\n ...\n [-0.22561137 -0.2892573 -0.07064426 ... 0.47566032 0.8327724\n 0.40025347]\n [-0.2982421 -0.27473164 -0.05450544 ... 0.4884972 1.0955367\n 0.18163365]\n [-0.4437818 0.00930662 0.07223704 ... 
0.17290089 1.1833239\n 0.07897975]]\n" ] ], [ [ "The BERT models return a map with 3 important keys: `pooled_output`, `sequence_output`, `encoder_outputs`:\n\n- `pooled_output` represents each input sequence as a whole. The shape is `[batch_size, H]`. You can think of this as an embedding for the entire movie review.\n- `sequence_output` represents each input token in the context. The shape is `[batch_size, seq_length, H]`. You can think of this as a contextual embedding for every token in the movie review.\n- `encoder_outputs` are the intermediate activations of the `L` Transformer blocks. `outputs[\"encoder_outputs\"][i]` is a Tensor of shape `[batch_size, seq_length, 1024]` with the outputs of the i-th Transformer block, for `0 <= i < L`. The last value of the list is equal to `sequence_output`.\n\nFor the fine-tuning you are going to use the `pooled_output` array.", "_____no_output_____" ], [ "## Define your model\n\nYou will create a very simple fine-tuned model, with the preprocessing model, the selected BERT model, one Dense and a Dropout layer.\n\nNote: for more information about the base model's input and output you can follow the model's URL for documentation. Here specifically, you don't need to worry about it because the preprocessing model will take care of that for you.\n", "_____no_output_____" ] ], [ [ "def build_classifier_model():\n text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')\n preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='preprocessing')\n encoder_inputs = preprocessing_layer(text_input)\n encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT_encoder')\n outputs = encoder(encoder_inputs)\n net = outputs['pooled_output']\n net = tf.keras.layers.Dropout(0.1)(net)\n net = tf.keras.layers.Dense(1, activation=None, name='classifier')(net)\n return tf.keras.Model(text_input, net)", "_____no_output_____" ] ], [ [ "Let's check that the model runs with the output of the preprocessing model.", "_____no_output_____" ] ], [ [ "classifier_model = build_classifier_model()\nbert_raw_result = classifier_model(tf.constant(text_test))\nprint(tf.sigmoid(bert_raw_result))", "tf.Tensor([[0.5890199]], shape=(1, 1), dtype=float32)\n" ] ], [ [ "The output is meaningless, of course, because the model has not been trained yet.\n\nLet's take a look at the model's structure.", "_____no_output_____" ] ], [ [ "tf.keras.utils.plot_model(classifier_model)", "_____no_output_____" ] ], [ [ "## Model training\n\nYou now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier.", "_____no_output_____" ], [ "### Loss function\n\nSince this is a binary classification problem and the model outputs a probability (a single-unit layer), you'll use `losses.BinaryCrossentropy` loss function. More information on the BinaryCrossentropy loss can be found [here](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy).\n", "_____no_output_____" ] ], [ [ "loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)\nmetrics = tf.metrics.BinaryAccuracy()", "_____no_output_____" ] ], [ [ "### Optimizer\n\nFor fine-tuning, let's use the same optimizer that BERT was originally trained with: the \"Adaptive Moments\" (Adam). 
This optimizer minimizes the prediction loss and does regularization by weight decay (not using moments), which is also known as [AdamW](https://arxiv.org/abs/1711.05101).\n\nFor the learning rate (`init_lr`), you will use the same schedule as BERT pre-training: linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (`num_warmup_steps`). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5).", "_____no_output_____" ] ], [ [ "epochs = 3\nsteps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()\nnum_train_steps = steps_per_epoch * epochs\nnum_warmup_steps = int(0.1*num_train_steps)\n\ninit_lr = 3e-5\noptimizer = optimization.create_optimizer(init_lr=init_lr,\n num_train_steps=num_train_steps,\n num_warmup_steps=num_warmup_steps,\n optimizer_type='adamw')", "_____no_output_____" ] ], [ [ "### Loading the BERT model and training\n\nUsing the `classifier_model` you created earlier, you can compile the model with the loss, metric and optimizer.", "_____no_output_____" ] ], [ [ "classifier_model.compile(optimizer=optimizer,\n loss=loss,\n metrics=metrics)", "_____no_output_____" ] ], [ [ "Note: training time will vary depending on the complexity of the BERT model you have selected.", "_____no_output_____" ] ], [ [ "print(f'Training model with {tfhub_handle_encoder}')\nhistory = classifier_model.fit(x=train_ds,\n validation_data=val_ds,\n epochs=epochs)", "Training model with https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1\nEpoch 1/3\n625/625 [==============================] - 290s 453ms/step - loss: 0.4603 - binary_accuracy: 0.7601 - val_loss: 0.3787 - val_binary_accuracy: 0.8374\nEpoch 2/3\n625/625 [==============================] - 279s 447ms/step - loss: 0.3213 - binary_accuracy: 0.8560 - val_loss: 0.3574 - val_binary_accuracy: 0.8452\nEpoch 3/3\n625/625 [==============================] - 280s 448ms/step - loss: 0.2537 - binary_accuracy: 0.8921 - val_loss: 0.3812 - val_binary_accuracy: 0.8478\n" ] ], [ [ "### Evaluate the model\n\nLet's see how the model performs. Two values will be returned. Loss (a number which represents the error, lower values are better), and accuracy.", "_____no_output_____" ] ], [ [ "loss, accuracy = classifier_model.evaluate(test_ds)\n\nprint(f'Loss: {loss}')\nprint(f'Accuracy: {accuracy}')", "782/782 [==============================] - 152s 195ms/step - loss: 0.3684 - binary_accuracy: 0.8516\nLoss: 0.3683711290359497\nAccuracy: 0.8515999913215637\n" ] ], [ [ "### Plot the accuracy and loss over time\n\nBased on the `History` object returned by `model.fit()`. You can plot the training and validation loss for comparison, as well as the training and validation accuracy. 
More information on the History object can be found [here](https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/).", "_____no_output_____" ] ], [ [ "history_dict = history.history\nprint(history_dict.keys())\n\nacc = history_dict['binary_accuracy']\nval_acc = history_dict['val_binary_accuracy']\nloss = history_dict['loss']\nval_loss = history_dict['val_loss']\n\nepochs = range(1, len(acc) + 1)\nfig = plt.figure(figsize=(10, 6))\nfig.tight_layout()\n\nplt.subplot(2, 1, 1)\n# \"bo\" is for \"blue dot\"\nplt.plot(epochs, loss, 'r', label='Training loss')\n# b is for \"solid blue line\"\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\n# plt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\n\nplt.subplot(2, 1, 2)\nplt.plot(epochs, acc, 'r', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend(loc='lower right')", "dict_keys(['loss', 'binary_accuracy', 'val_loss', 'val_binary_accuracy'])\n" ] ], [ [ "In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy.", "_____no_output_____" ], [ "## Export for inference\n\nNow you just save your fine-tuned model for later use.", "_____no_output_____" ] ], [ [ "dataset_name = 'imdb'\nsaved_model_path = './{}_bert'.format(dataset_name.replace('/', '_'))\n\nclassifier_model.save(saved_model_path, include_optimizer=False)", "WARNING:absl:Found untraced functions such as restored_function_body, restored_function_body, restored_function_body, restored_function_body, restored_function_body while saving (showing 5 of 310). These functions will not be directly callable after loading.\n" ] ], [ [ "Let's reload the model, so you can try it side by side with the model that is still in memory.", "_____no_output_____" ] ], [ [ "reloaded_model = tf.saved_model.load(saved_model_path)", "_____no_output_____" ] ], [ [ "Here you can test your model on any sentence you want, just add to the examples variable below.", "_____no_output_____" ] ], [ [ "def print_my_examples(inputs, results):\n result_for_printing = \\\n [f'input: {inputs[i]:<30} : score: {results[i][0]:.6f}'\n for i in range(len(inputs))]\n print(*result_for_printing, sep='\\n')\n print()\n\n\nexamples = [\n 'this is such an amazing movie!', # this is the same sentence tried earlier\n 'The movie was great!',\n 'The movie was meh.',\n 'The movie was okish.',\n 'The movie was terrible...'\n]\n\nreloaded_results = tf.sigmoid(reloaded_model(tf.constant(examples)))\noriginal_results = tf.sigmoid(classifier_model(tf.constant(examples)))\n\nprint('Results from the saved model:')\nprint_my_examples(examples, reloaded_results)\nprint('Results from the model in memory:')\nprint_my_examples(examples, original_results)", "Results from the saved model:\ninput: this is such an amazing movie! : score: 0.998838\ninput: The movie was great! : score: 0.993884\ninput: The movie was meh. : score: 0.915520\ninput: The movie was okish. : score: 0.100049\ninput: The movie was terrible... : score: 0.003937\n\nResults from the model in memory:\ninput: this is such an amazing movie! : score: 0.998838\ninput: The movie was great! : score: 0.993884\ninput: The movie was meh. : score: 0.915520\ninput: The movie was okish. : score: 0.100049\ninput: The movie was terrible... : score: 0.003937\n\n" ], [ "", "_____no_output_____" ] ] ]
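The markdown cells above describe an `encoder_outputs` key that the notebook code never actually inspects. As a minimal sketch (assuming the `bert_model` and `text_preprocessed` variables from the cells above are still in scope, and that the chosen encoder exposes `encoder_outputs` as its documentation states), the per-block activations could be examined like this:

```python
import tensorflow as tf

# Hedged sketch: inspect the intermediate Transformer block activations.
# Assumes `bert_model` and `text_preprocessed` were defined in the cells above.
outputs = bert_model(text_preprocessed)
for i, block_output in enumerate(outputs["encoder_outputs"]):
    print(f"Transformer block {i}: {block_output.shape}")

# Per the encoder documentation, the last entry should match `sequence_output`.
print(bool(tf.reduce_all(outputs["encoder_outputs"][-1] == outputs["sequence_output"])))
```

For the `small_bert/bert_en_uncased_L-4_H-512_A-8` encoder selected above, this should print four block shapes of `(1, 128, 512)`, matching the `sequence_output` shape shown earlier.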
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7d6c5a2c8d7bba0c3f02a0796e6a257c2bc1216
246,187
ipynb
Jupyter Notebook
Pytorch/00_DevelopModel.ipynb
simonzhaoms/AKSDeploymentTutorial
1bd41c4986df046ee989a1d10fbaea31ff3a1032
[ "MIT" ]
null
null
null
Pytorch/00_DevelopModel.ipynb
simonzhaoms/AKSDeploymentTutorial
1bd41c4986df046ee989a1d10fbaea31ff3a1032
[ "MIT" ]
null
null
null
Pytorch/00_DevelopModel.ipynb
simonzhaoms/AKSDeploymentTutorial
1bd41c4986df046ee989a1d10fbaea31ff3a1032
[ "MIT" ]
1
2019-05-14T02:51:00.000Z
2019-05-14T02:51:00.000Z
273.845384
200,492
0.888183
[ [ [ "# Develop Model", "_____no_output_____" ], [ "In this noteook, we will go through the steps to load the ResNet152 model, pre-process the images to the required format and call the model to find the top predictions.", "_____no_output_____" ] ], [ [ "import PIL\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torchvision\nimport wget\nfrom PIL import Image\nfrom torchvision import models, transforms", "_____no_output_____" ], [ "print(torch.__version__)\nprint(torchvision.__version__)", "0.4.1.post2\n0.2.1\n" ] ], [ [ "We download the synset for the model. This translates the output of the model to a specific label.", "_____no_output_____" ] ], [ [ "!wget \"http://data.dmlc.ml/mxnet/models/imagenet/synset.txt\"", "--2018-10-09 07:00:23-- http://data.dmlc.ml/mxnet/models/imagenet/synset.txt\nResolving data.dmlc.ml... 54.208.175.7\nConnecting to data.dmlc.ml|54.208.175.7|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 31675 (31K) [text/plain]\nSaving to: ‘synset.txt.3’\n\nsynset.txt.3 100%[===================>] 30.93K --.-KB/s in 0.002s \n\n2018-10-09 07:00:24 (14.7 MB/s) - ‘synset.txt.3’ saved [31675/31675]\n\n" ] ], [ [ "We first load the model which we imported torchvision. This can take about 10s.", "_____no_output_____" ] ], [ [ "%%time\nmodel = models.resnet152(pretrained=True)", "CPU times: user 1.29 s, sys: 450 ms, total: 1.74 s\nWall time: 1.74 s\n" ] ], [ [ "You can print the summary of the model in the below cell. We cleared the output here for brevity. When you run the cell you should see a list of the layers and the size of the model in terms of number of parameters at the bottom of the output.", "_____no_output_____" ] ], [ [ "model=model.cuda()", "_____no_output_____" ], [ "print(model)\nprint('Number of parameters {}'.format(sum([param.view(-1).size()[0] for param in model.parameters()])))", "ResNet(\n (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\n (layer1): Sequential(\n (0): Bottleneck(\n (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n (downsample): Sequential(\n (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n )\n (1): Bottleneck(\n (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (2): Bottleneck(\n (conv1): Conv2d(256, 64, 
kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n )\n (layer2): Sequential(\n (0): Bottleneck(\n (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n (downsample): Sequential(\n (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n )\n (1): Bottleneck(\n (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (2): Bottleneck(\n (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (3): Bottleneck(\n (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (4): Bottleneck(\n (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (5): Bottleneck(\n (conv1): 
Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (6): Bottleneck(\n (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (7): Bottleneck(\n (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n )\n (layer3): Sequential(\n (0): Bottleneck(\n (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n (downsample): Sequential(\n (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)\n (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n )\n (1): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (2): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (3): 
Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (4): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (5): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (6): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (7): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (8): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (9): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): 
Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (10): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (11): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (12): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (13): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (14): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (15): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): 
Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (16): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (17): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (18): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (19): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (20): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (21): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n 
)\n (22): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (23): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (24): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (25): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (26): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (27): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (28): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (29): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (30): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (31): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (32): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (33): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (34): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (35): Bottleneck(\n (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n )\n (layer4): Sequential(\n (0): Bottleneck(\n (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n (downsample): Sequential(\n (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)\n (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n )\n )\n (1): Bottleneck(\n (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (2): Bottleneck(\n (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n )\n (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0)\n (fc): Linear(in_features=2048, out_features=1000, bias=True)\n)\nNumber of parameters 60192808\n" ] ], [ [ "Let's test our model with an image of a Lynx.", "_____no_output_____" ] ], [ [ "wget.download('https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg')", "_____no_output_____" ], [ "img_path = '220px-Lynx_lynx_poing.jpg'\nprint(Image.open(img_path).size)\nImage.open(img_path)", "(220, 330)\n" ] ], [ [ "Below, we load the image. 
Then we compose a transformation that resizes the image to (224, 224), converts it to a PyTorch tensor, and normalizes the pixel values.", "_____no_output_____" ] ], [ [ "img = Image.open(img_path).convert('RGB')", "_____no_output_____" ], [ "preprocess_input = transforms.Compose([\n    torchvision.transforms.Resize((224, 224), interpolation=PIL.Image.BICUBIC),\n    transforms.ToTensor(),\n    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n])", "_____no_output_____" ], [ "img = Image.open(img_path)\nimg = preprocess_input(img)", "_____no_output_____" ] ], [ [ "Let's make a label lookup function to make it easy to look up the classes from the synset file.", "_____no_output_____" ] ], [ [ "def create_label_lookup():\n    with open('synset.txt', 'r') as f:\n        label_list = [l.rstrip() for l in f]\n    def _label_lookup(*label_locks):\n        return [label_list[l] for l in label_locks]\n    return _label_lookup", "_____no_output_____" ], [ "label_lookup = create_label_lookup()", "_____no_output_____" ] ], [ [ "We will apply softmax to the output of the model to get probabilities for each label.", "_____no_output_____" ] ], [ [ "softmax = nn.Softmax(dim=1).cuda()", "_____no_output_____" ] ], [ [ "Now, let's call the model on our image to predict the top 3 labels. This will take a few seconds.", "_____no_output_____" ] ], [ [ "model = model.eval()", "_____no_output_____" ], [ "%%time\nwith torch.no_grad():\n    img = img.unsqueeze(0)\n    image_gpu = img.type(torch.float).cuda()\n    outputs = model(image_gpu)\n    probabilities = softmax(outputs)", "CPU times: user 416 ms, sys: 41.9 ms, total: 458 ms\nWall time: 76.6 ms\n" ], [ "label_lookup = create_label_lookup()", "_____no_output_____" ], [ "probabilities_numpy = probabilities.cpu().numpy().squeeze()", "_____no_output_____" ], [ "top_results = np.flip(np.sort(probabilities_numpy), 0)[:3]", "_____no_output_____" ], [ "labels = label_lookup(*np.flip(probabilities_numpy.argsort(),0)[:3])", "_____no_output_____" ], [ "dict(zip(labels, top_results))", "_____no_output_____" ] ], [ [ "The top guess is Lynx with a probability of about 99%. We can now move on to [developing the model API for our model](01_DevelopModelDriver.ipynb).", "_____no_output_____" ] ] ]
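For the model driver referenced above, it can be convenient to have the whole scoring path behind a single call. The sketch below is only an illustration added here, not code from 01_DevelopModelDriver.ipynb; the helper name `predict_top_k` is our own, and it simply reuses the `model`, `preprocess_input`, and `create_label_lookup` objects defined in the cells above.

```python
# Illustrative sketch only: bundle preprocessing, inference, and label lookup in one helper.
# Assumes `model` (the ResNet152 on GPU), `preprocess_input`, and `create_label_lookup`
# exactly as defined in the notebook cells above.
import numpy as np
import torch
from torch import nn
from PIL import Image

def predict_top_k(image_path, model, k=3):
    label_lookup = create_label_lookup()          # lookup into synset.txt, defined above
    softmax = nn.Softmax(dim=1).cuda()
    img = Image.open(image_path).convert('RGB')
    batch = preprocess_input(img).unsqueeze(0).type(torch.float).cuda()
    with torch.no_grad():
        probabilities = softmax(model(batch)).cpu().numpy().squeeze()
    top_indexes = np.flip(probabilities.argsort(), 0)[:k]   # indexes of the k largest probabilities
    return dict(zip(label_lookup(*top_indexes), probabilities[top_indexes]))

# Usage: predict_top_k('220px-Lynx_lynx_poing.jpg', model)
```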
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7d6d2e111b8da9544f5371afb3822576d269d1d
21,395
ipynb
Jupyter Notebook
src/DQN/pytorch_tutorial/06-WHAT IS TORCH.NN REALLY.ipynb
BepfCp/RL-imple
8f146cbaab4646cd384bfd38356c8e8c1f8a27f6
[ "MIT" ]
2
2020-03-25T14:20:26.000Z
2020-03-29T02:16:11.000Z
src/DQN/pytorch_tutorial/06-WHAT IS TORCH.NN REALLY.ipynb
BepfCp/RL-imple
8f146cbaab4646cd384bfd38356c8e8c1f8a27f6
[ "MIT" ]
null
null
null
src/DQN/pytorch_tutorial/06-WHAT IS TORCH.NN REALLY.ipynb
BepfCp/RL-imple
8f146cbaab4646cd384bfd38356c8e8c1f8a27f6
[ "MIT" ]
null
null
null
25.996355
108
0.502547
[ [ [ "### WHAT IS TORCH.NN REALLY", "_____no_output_____" ] ], [ [ "\"\"\"MINIST data setup\n\"\"\"", "_____no_output_____" ], [ "from pathlib import Path\n\nDATA_PATH = Path(\"./data\")\nPATH = DATA_PATH/\"mnist.pkl\"\n# PATH = DATA_PATH / \"mnist\"\n\n# PATH.mkdir(parents=True, exist_ok=True)\n\n# URL = \"http://deeplearning.net/data/mnist/\"\n# FILENAME = \"mnist.pkl.gz\"\n\n# if not (PATH / FILENAME).exists():\n# content = requests.get(URL + FILENAME).content\n# (PATH / FILENAME).open(\"wb\").write(content)\n\nimport pickle\nwith open(PATH.as_posix(), \"rb\") as f:\n ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding=\"latin-1\")", "_____no_output_____" ], [ "from matplotlib import pyplot\nimport numpy as np\n\npyplot.imshow(x_train[0].reshape((28, 28)), cmap=\"gray\")\nprint(x_train.shape)", "_____no_output_____" ], [ "import torch\n\nx_train, y_train, x_valid, y_valid = map(\n torch.tensor, (x_train, y_train, x_valid, y_valid)\n) # https://www.geeksforgeeks.org/python-map-function/\nn, c = x_train.shape\n# x_train, x_train.shape, y_train.min(), y_train.max()\nprint(x_train, y_train)\nprint(x_train.shape)\nprint(y_train.min(), y_train.max())", "_____no_output_____" ], [ "\"\"\"Neural net from scratch (no torch.nn)\n\"\"\"", "_____no_output_____" ], [ "import math\n\nweights = torch.randn(784, 10) / math.sqrt(784)\nweights.requires_grad_()\nbias = torch.zeros(10, requires_grad=True)", "_____no_output_____" ], [ "def log_softmax(x):\n # https://discuss.pytorch.org/t/what-is-the-difference-between-log-softmax-and-softmax/11801\n # https://stackoverflow.com/questions/44790670/torch-sum-a-tensor-along-an-axis\n # https://stackoverflow.com/questions/57237352/what-does-unsqueeze-do-in-pytorch\n # https://pytorch.org/docs/stable/notes/broadcasting.html\n return x - x.exp().sum(-1).log().unsqueeze(-1)\n\ndef model(xb):\n # https://stackoverflow.com/questions/5919530/what-is-the-pythonic-way-to-calculate-dot-product\n return log_softmax(xb @ weights + bias)", "_____no_output_____" ], [ "bs = 64 # batch size\n\nxb = x_train[0:bs] # a mini-batch from x\npreds = model(xb) # predictions\n# preds[0], preds.shape\nprint(preds[0], preds.shape)", "_____no_output_____" ], [ "# negative log-likelihood\n# https://stats.stackexchange.com/questions/198038/cross-entropy-or-log-likelihood-in-output-layer\ndef nll(input, target):\n # https://blog.csdn.net/u010496337/article/details/50574154\n return -input[range(target.shape[0]), target].mean()\n\nloss_func = nll", "_____no_output_____" ], [ "yb = y_train[0:bs]\nprint(loss_func(preds, yb))", "_____no_output_____" ], [ "def accuracy(out, yb):\n preds = torch.argmax(out, dim=1)\n return (preds==yb).float().mean() # Can only calculate the mean of floating types", "_____no_output_____" ], [ "print(accuracy(preds, yb))", "_____no_output_____" ], [ "# from IPython.core.debugger import set_trace\nlr = 0.5 # learning rate\nepochs = 2 # how many epochs to train for\n\nfor epoch in range(epochs):\n for i in range((n - 1) // bs + 1):\n # set_trace()\n start_i = i * bs\n end_i = start_i + bs\n xb = x_train[start_i:end_i]\n yb = y_train[start_i:end_i]\n pred = model(xb)\n loss = loss_func(pred, yb)\n\n loss.backward()\n with torch.no_grad():\n weights -= weights.grad * lr\n bias -= bias.grad * lr\n weights.grad.zero_()\n bias.grad.zero_()", "_____no_output_____" ], [ "print(loss_func(model(xb), yb), accuracy(model(xb), yb))", "_____no_output_____" ], [ "\"\"\"Using torch.nn.functional\n- making our code one or more of: shorter, more understandable, 
and/or more flexible.\n\"\"\"", "_____no_output_____" ], [ "# This module contains all the functions in the torch.nn library\n# (whereas other parts of the library contain classes)\nimport torch.nn.functional as F\n\nloss_func = F.cross_entropy\n\ndef model(xb):\n return xb @ weights + bias", "_____no_output_____" ], [ "print(loss_func(model(xb), yb), accuracy(model(xb), yb))", "_____no_output_____" ], [ "\"\"\"Refactor using nn.Module\n\"\"\"", "_____no_output_____" ], [ "from torch import nn\n\nclass Mnist_Logistic(nn.Module):\n def __init__(self):\n super().__init__()\n self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))\n self.bias = nn.Parameter(torch.zeros(10))\n\n def forward(self, xb):\n return xb @ self.weights + self.bias", "_____no_output_____" ], [ "model = Mnist_Logistic()", "_____no_output_____" ], [ "print(loss_func(model(xb), yb))", "_____no_output_____" ], [ "def fit():\n for epoch in range(epochs):\n for i in range((n - 1) // bs + 1):\n start_i = i * bs\n end_i = start_i + bs\n xb = x_train[start_i:end_i]\n yb = y_train[start_i:end_i]\n pred = model(xb)\n loss = loss_func(pred, yb)\n\n loss.backward()\n with torch.no_grad():\n for p in model.parameters():\n p -= p.grad * lr\n model.zero_grad()\n\nfit()", "_____no_output_____" ], [ "print(loss_func(model(xb), yb))", "_____no_output_____" ], [ "\"\"\"Refactor using nn.Linear\n\"\"\"", "_____no_output_____" ], [ "class Mnist_Logistic(nn.Module):\n def __init__(self):\n super().__init__()\n self.lin = nn.Linear(784, 10)\n\n def forward(self, xb):\n return self.lin(xb)", "_____no_output_____" ], [ "model = Mnist_Logistic()\nfit()\n\nprint(loss_func(model(xb), yb))", "_____no_output_____" ], [ "\"\"\"Refactor using optim\n\"\"\"", "_____no_output_____" ], [ "from torch import optim\ndef get_model():\n model = Mnist_Logistic()\n return model, optim.SGD(model.parameters(), lr=lr)\n\nmodel, opt = get_model()\nprint(loss_func(model(xb), yb))\n\nfor epoch in range(epochs):\n for i in range((n - 1) // bs + 1):\n start_i = i * bs\n end_i = start_i + bs\n xb = x_train[start_i:end_i]\n yb = y_train[start_i:end_i]\n pred = model(xb)\n loss = loss_func(pred, yb)\n\n loss.backward()\n opt.step()\n opt.zero_grad()\n\nprint(loss_func(model(xb), yb))", "_____no_output_____" ], [ "\"\"\"Refactor using Dataset\n- PyTorch’s TensorDataset is a Dataset wrapping tensors. \n By defining a length and way of indexing, this also gives us a way to iterate, \n index, and slice along the first dimension of a tensor. \n This will make it easier to access both the independent \n and dependent variables in the same line as we train.\n\"\"\"", "_____no_output_____" ], [ "from torch.utils.data import TensorDataset\ntrain_ds = TensorDataset(x_train, y_train)\nmodel, opt = get_model()\n\nfor epoch in range(epochs):\n for i in range((n - 1) // bs + 1):\n xb, yb = train_ds[i * bs: i * bs + bs]\n pred = model(xb)\n loss = loss_func(pred, yb)\n\n loss.backward()\n opt.step()\n opt.zero_grad()\n\nprint(loss_func(model(xb), yb))", "_____no_output_____" ], [ "\"\"\"Refactor using DataLoader\n-Pytorch’s DataLoader is responsible for managing batches. \n You can create a DataLoader from any Dataset.\n DataLoader makes it easier to iterate over batches. 
\n Rather than having to use train_ds[i*bs : i*bs+bs], \n the DataLoader gives us each minibatch automatically.\n\"\"\"", "_____no_output_____" ], [ "from torch.utils.data import DataLoader\n\ntrain_ds = TensorDataset(x_train, y_train)\ntrain_dl = DataLoader(train_ds, batch_size=bs)\n\nmodel, opt = get_model()\n\nfor epoch in range(epochs):\n for xb, yb in train_dl:\n pred = model(xb)\n loss = loss_func(pred, yb)\n\n loss.backward()\n opt.step()\n opt.zero_grad()\n\nprint(loss_func(model(xb), yb))", "_____no_output_____" ], [ "\"\"\"Add Validation\n- Shuffling the training data is important to prevent correlation between batches and overfitting\n\"\"\"", "_____no_output_____" ], [ "train_ds = TensorDataset(x_train, y_train)\ntrain_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)\n\nvalid_ds = TensorDataset(x_valid, y_valid)\nvalid_dl = DataLoader(valid_ds, batch_size=bs * 2)\n\nmodel, opt = get_model()\n\nfor epoch in range(epochs):\n # Note that we always call model.train() before training, and model.eval() before inference, \n # because these are used by layers such as nn.BatchNorm2d and nn.Dropout \n # to ensure appropriate behaviour for these different phases\n model.train()\n for xb, yb in train_dl:\n pred = model(xb)\n loss = loss_func(pred, yb)\n\n loss.backward()\n opt.step()\n opt.zero_grad()\n\n model.eval()\n with torch.no_grad():\n valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)\n\n print(epoch, valid_loss / len(valid_dl))", "_____no_output_____" ], [ "\"\"\"Create fit() and get_data()\n\"\"\"", "_____no_output_____" ], [ "def loss_batch(model, loss_func, xb, yb, opt=None):\n loss = loss_func(model(xb), yb)\n\n if opt is not None:\n loss.backward()\n opt.step()\n opt.zero_grad()\n\n return loss.item(), len(xb)", "_____no_output_____" ], [ "def get_data(train_ds, valid_ds, bs):\n return (\n DataLoader(train_ds, batch_size=bs, shuffle=True),\n DataLoader(valid_ds, batch_size=bs * 2),\n )", "_____no_output_____" ], [ "import numpy as np\n\ndef fit(epochs, model, loss_func, opt, train_dl, valid_dl):\n for epoch in range(epochs):\n model.train()\n for xb, yb in train_dl:\n loss_batch(model, loss_func, xb, yb, opt)\n\n model.eval()\n with torch.no_grad():\n losses, nums = zip(\n *[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]\n )\n val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)\n\n print(epoch, val_loss)", "_____no_output_____" ], [ "train_dl, valid_dl = get_data(train_ds, valid_ds, bs)\nmodel, opt = get_model()\nfit(epochs, model, loss_func, opt, train_dl, valid_dl)", "_____no_output_____" ], [ "\"\"\"Switch to CNN\n\"\"\"", "_____no_output_____" ], [ "class Mnist_CNN(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)\n self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)\n self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)\n\n def forward(self, xb):\n xb = xb.view(-1, 1, 28, 28)\n xb = F.relu(self.conv1(xb))\n xb = F.relu(self.conv2(xb))\n xb = F.relu(self.conv3(xb))\n xb = F.avg_pool2d(xb, 4)\n return xb.view(-1, xb.size(1))\n\nlr = 0.1\nmodel = Mnist_CNN()\nopt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)\n\nfit(epochs, model, loss_func, opt, train_dl, valid_dl)", "_____no_output_____" ], [ "\"\"\"nn Sequential\n\"\"\"", "_____no_output_____" ], [ "class Lambda(nn.Module):\n def __init__(self, func):\n super().__init__()\n self.func = func\n\n def forward(self, x):\n return self.func(x)\n\n\ndef preprocess(x):\n return 
x.view(-1, 1, 28, 28)", "_____no_output_____" ], [ "model = nn.Sequential(\n Lambda(preprocess),\n nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.AvgPool2d(4),\n Lambda(lambda x: x.view(x.size(0), -1)),\n)\n\nopt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)\n\nfit(epochs, model, loss_func, opt, train_dl, valid_dl)", "_____no_output_____" ], [ "\"\"\"Wrapping DataLoader\n\"\"\"", "_____no_output_____" ], [ "def preprocess(x, y):\n return x.view(-1, 1, 28, 28), y\n\n\nclass WrappedDataLoader:\n def __init__(self, dl, func):\n self.dl = dl\n self.func = func\n\n def __len__(self):\n return len(self.dl)\n\n def __iter__(self):\n batches = iter(self.dl)\n for b in batches:\n yield (self.func(*b))\n\ntrain_dl, valid_dl = get_data(train_ds, valid_ds, bs)\ntrain_dl = WrappedDataLoader(train_dl, preprocess)\nvalid_dl = WrappedDataLoader(valid_dl, preprocess)", "_____no_output_____" ], [ "model = nn.Sequential(\n nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.AdaptiveAvgPool2d(1),\n Lambda(lambda x: x.view(x.size(0), -1)),\n)\n\nopt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)\nfit(epochs, model, loss_func, opt, train_dl, valid_dl)", "_____no_output_____" ], [ "\"\"\"Using your GPU\n\"\"\"", "_____no_output_____" ], [ "dev = torch.device(\n \"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")", "_____no_output_____" ], [ "def preprocess(x, y):\n return x.view(-1, 1, 28, 28).to(dev), y.to(dev)\n\n\ntrain_dl, valid_dl = get_data(train_ds, valid_ds, bs)\ntrain_dl = WrappedDataLoader(train_dl, preprocess)\nvalid_dl = WrappedDataLoader(valid_dl, preprocess)", "_____no_output_____" ], [ "model.to(dev)\nopt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)\nfit(epochs, model, loss_func, opt, train_dl, valid_dl)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d6dd7c93013a75f1f23cd5a22dfc09cc71fad1
668,882
ipynb
Jupyter Notebook
video-game-exploration.ipynb
Chandler-Stewart/video-game-exploration
d843e2253dbfa3daa95586a013cbff31efec6b39
[ "MIT" ]
null
null
null
video-game-exploration.ipynb
Chandler-Stewart/video-game-exploration
d843e2253dbfa3daa95586a013cbff31efec6b39
[ "MIT" ]
null
null
null
video-game-exploration.ipynb
Chandler-Stewart/video-game-exploration
d843e2253dbfa3daa95586a013cbff31efec6b39
[ "MIT" ]
null
null
null
267.659864
204,780
0.907555
[ [ [ "# Integrated Project #1: Video Game", "_____no_output_____" ], [ "The goal of this project is to:", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom scipy import stats as st\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\nimport re, math", "_____no_output_____" ] ], [ [ "## Project Description", "_____no_output_____" ], [ "You work for the online store Ice, which sells video games all over the world. User and expert reviews, genres, platforms (e.g. Xbox or PlayStation), and historical data on game sales are available from open sources. You need to identify patterns that determine whether a game succeeds or not. This will allow you to spot potential big winners and plan advertising campaigns.\n\nIn front of you is data going back to 2016. Let’s imagine that it’s December 2016 and you’re planning a campaign for 2017.\n\n(The important thing is to get experience working with data. It doesn't really matter whether you're forecasting 2017 sales based on data from 2016 or 2027 sales based on data from 2026.)\n\nThe dataset contains the abbreviation ESRB. The Entertainment Software Rating Board evaluates a game's content and assigns an age rating such as Teen or Mature.", "_____no_output_____" ], [ "## Table of Contents", "_____no_output_____" ], [ "- [The Goal](#goal)\n- [Step 0](#imports): Imports\n- [Step 1](#step1): Open the data file and study the general information\n - [Step 1 conclusion](#step1con)\n- [Step 2](#step2): Prepare the data\n - [Names](#step2name)\n - [Year of Release](#step2year)\n - [Sales](#step2sales)\n - [Score](#step2scores)\n - [Ratings](#step2ratings)\n - [Step 2 conclusion](#step2con)\n- [Step 3](#step3): Analyze the data\n - [Step 3 conclusion](#step3con)\n- [Step 4](#step4): Analyze the data\n - [Step 4 conclusion](#step4con)\n- [Step 5](#step5): Test the hypotheses\n - [Hypothesis 1](#step5h1): The average revenue from users of Ultimate and Surf calling plans differs\n - [Hypothesis 2](#step5h2): The average revenue from users in NY-NJ area is different from that of the users from other regions\n - [Step 5 conclusion](#step5con)\n- [Step 6](#step6): Write an overall conclusion", "_____no_output_____" ], [ "### Step 1. Open the data file and study the general information\n<a id='step1'></a>", "_____no_output_____" ] ], [ [ "raw_games_data = pd.read_csv('/datasets/games.csv')\ngames_data = raw_games_data\ngames_data.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 16715 entries, 0 to 16714\nData columns (total 11 columns):\nName 16713 non-null object\nPlatform 16715 non-null object\nYear_of_Release 16446 non-null float64\nGenre 16713 non-null object\nNA_sales 16715 non-null float64\nEU_sales 16715 non-null float64\nJP_sales 16715 non-null float64\nOther_sales 16715 non-null float64\nCritic_Score 8137 non-null float64\nUser_Score 10014 non-null object\nRating 9949 non-null object\ndtypes: float64(6), object(5)\nmemory usage: 1.4+ MB\n" ] ], [ [ "#### Step 1 conclusion\n<a id='step1con'></a>", "_____no_output_____" ], [ "We do have some nulls, and in some columns, such as the Ratings, there are a lot. To work with the information, we will need to replace the column names with lowercase text, and acknowledge the following issues:\n\nName:\n- There are two nulls, and in the information, they are also lacking genres, critic/user scores, and an ESRB rating. Because there are only 2 of 16715 entries, these should be removed.\n\nYear_of_Release:\n- We need to fill in the nulls. 
Some of the sports games have the year in the name (for example, 'Madden NFL 2004') so we will try to utilize those. Then we will try to fill in games with multiple platforms, but the year is only missing from one of the platforms. Otherwise, we will fill based on the mode.\n- We need to change the types of this column to integers.\n\nGenre:\n- The only nulls are from the same two nulls mentioned in the Name column. so these will be taken care of as well.\n\nSales:\n- The sales has a significant amount of zeros. These may be the result of consoles not sold in some countries, or the game itself not being sold in some countries. We would want to look at this further to see if something is going on here. Specifically for the Other sales, this seems to be the lowest category of purchasers of video games, and the zero seems to be more of an acceptable amount here.\n\nScores:\n- Scores have a large amount of missing data. This will need to be filled in, most likely with averages based on the copies sold. Popular games that sell well will likely be higher rated.\n- Critic score will need to be changed to an integer as it is a 0 to 100 score, and the user score will need to be changed to a float. \n- TBDs in the user score column oddly seem related games that are based off of movies/TV and brands. This may be an issue related to \n\nRating:\n - ESRB rating also has a significant number of missing values. We will likely need to figure out the most common with the mode. Some are more intuitive than others, such as shooters would tend to be more M for Mature.", "_____no_output_____" ], [ "### Step 2. Prepare the data\n<a id='step2'></a>", "_____no_output_____" ], [ "First for the entire dataset, we will need to replace the column names with lowercase text.", "_____no_output_____" ] ], [ [ "games_data.columns = [x.lower() for x in games_data.columns]", "_____no_output_____" ] ], [ [ "#### Name\n<a id='step2name'></a>", "_____no_output_____" ], [ "The two nulls of the set may just be failures in the data gather process, as the name of the game is the principle identifier. Because there are only 2 of 16715 entries, these should be removed.", "_____no_output_____" ] ], [ [ "games_data.drop(games_data[games_data['name'].isnull()].index, inplace=True)", "_____no_output_____" ] ], [ [ "#### Year of Release\n<a id='step2year'></a>", "_____no_output_____" ], [ "The years may be missing because this data seems focused on sales. The data may not prioritize the year then. \n\nFirst, we can attempt to draw information directly from the title. 
Some of the sports games, such as 'Madden NFL 2004', have the year in the name.", "_____no_output_____" ] ], [ [ "check_years = games_data.query('year_of_release.isnull()')\nfor i, row in check_years.iterrows():\n    try:\n        # take the first four-digit number in the title; skip the row if there is none\n        year = int(re.findall(\"[0-9][0-9][0-9][0-9]\", row['name'])[0])\n    except:\n        continue\n    games_data.loc[i, 'year_of_release'] = year", "_____no_output_____" ], [ "check_years = games_data.query('year_of_release.isnull()')\nfor i, row in check_years.iterrows():\n    try:\n        year = int(row['name'][-2:])\n    except:\n        continue\n    if year > 80:\n        year += 1900\n    elif year < 20:\n        year += 2000\n    else:\n        continue\n    games_data.loc[i, 'year_of_release'] = year", "_____no_output_____" ] ], [ [ "After that, we can check whether a game's year is missing for one platform but present for the same game on a different platform.", "_____no_output_____" ] ], [ [ "check_years = games_data.query('year_of_release.isnull()')\ncheck_against = games_data.query('year_of_release.notnull()')", "_____no_output_____" ], [ "for i, row in check_years.iterrows():\n    name = row['name']\n    multiplatform = check_against.query('name == @name')\n    if len(multiplatform):\n        year = list(multiplatform['year_of_release'])[0]\n        games_data.loc[i, 'year_of_release'] = year", "_____no_output_____" ] ], [ [ "Anything left over we can fill by using the mode based on the platform. As platforms are done in generations, they typically are popular for only a few consecutive years until the next console is released. Therefore, it should be fine to use the mode.", "_____no_output_____" ] ], [ [ "check_years = games_data.query('year_of_release.isnull()')\ncheck_against = games_data.query('year_of_release.notnull()')\n\nkeys = check_against.platform.unique()\nvalues = list(check_against.groupby('platform')['year_of_release'].agg(pd.Series.mode))\nreference = {keys[i]: values[i] for i in range(len(keys))}\n\nfor i,val in check_years.platform.iteritems():\n    replace = reference[val]\n    if not isinstance(replace, float):\n        replace = replace[0]\n    games_data.loc[i,'year_of_release'] = replace", "_____no_output_____" ] ], [ [ "Lastly, because they are years, we need to change them to integers.", "_____no_output_____" ] ], [ [ "games_data['year_of_release'] = pd.to_numeric(games_data['year_of_release'], downcast='integer')", "_____no_output_____" ] ], [ [ "#### Sales\n<a id='step2sales'></a>", "_____no_output_____" ], [ "Let's take a look at the sales by platform.", "_____no_output_____" ] ], [ [ "check = games_data[['platform', 'na_sales', 'eu_sales', 'jp_sales']]\nvalues = check.groupby('platform').mean()\nprint(values)", "          na_sales  eu_sales  jp_sales\nplatform                              \n2600      0.681203  0.041128  0.000000\n3DO       0.000000  0.000000  0.033333\n3DS       0.160558  0.118231  0.193596\nDC        0.104423  0.032500  0.164615\nDS        0.177778  0.087815  0.081623\nGB        1.166531  0.487959  0.868571\nGBA       0.228151  0.091545  0.057579\nGC        0.240036  0.069622  0.038813\nGEN       0.713704  0.204444  0.098889\nGG        0.000000  0.000000  0.040000\nN64       0.435799  0.128715  0.107273\nNES       1.285102  0.215816  1.006633\nNG        0.000000  0.000000  0.120000\nPC        0.097053  0.146242  0.000175\nPCFX      0.000000  0.000000  0.030000\nPS        0.281136  0.178454  0.116809\nPS2       0.270171  0.157006  0.064415\nPS3       0.295635  0.248152  0.060248\nPS4       0.277398  0.359923  0.040714\nPSP       0.090298  0.055153  0.063507\nPSV       0.029256  0.030512  0.050953\nSAT       0.004162  0.003121  0.186474\nSCD       0.166667  0.060000  0.075000\nSNES      0.256192  0.079665  0.487657\nTG16      0.000000  0.000000  0.080000\nWS        0.000000  0.000000  0.236667\nWii       0.376439  0.198644  0.052523\nWiiU      0.259184  0.170952  0.088503\nX360      0.477393 
0.214548 0.009849\nXB 0.226566 0.073968 0.001675\nXOne 0.377004 0.208866 0.001377\n" ] ], [ [ "Initially the zeros look like problems with our data, but after some research, it appears that these represent a lack of console based sales. For example, the Atari 2600 shows zero sales for Japan, but the Atari 2600 was not sold in Japan. Instead, a console labelled the Atari 2800 was. Similarly, the Game Gear (Presumably the GG item) was a Japanese based handheld console, which is why there are zero sales in NA and EU.\n\nWe would like to use the total sales later on, so we should add a global sales column.", "_____no_output_____" ] ], [ [ "games_data.insert(loc=8, column='total_sales', value=0.0)\nfor i, row in games_data.iterrows():\n games_data.loc[i,'total_sales'] = row['na_sales'] + row['eu_sales'] + row['jp_sales'] + row['other_sales']\ngames_data.sort_values(['total_sales'], ascending=False)", "_____no_output_____" ] ], [ [ "#### Scores\n<a id='step2scores'></a>", "_____no_output_____" ], [ "Similar to the years, the scores may not be prioritized in the origination of the data. Because the scores have a lot of missing data, filling directly by an average may significantly weight the data and give biased results. We want to localize the information so we will get rolling averages by genre and total sales. Theoretically, the community of gamers likely are based on genre, so gamers interested in racing games would likely pick up more racing games and have a better understanding of what makes a racing game good or bad. Similarly, better scoring games should get better traction in sales, so that will be the other factor.\n\nFirst we will start with the critic scores.", "_____no_output_____" ] ], [ [ "check = games_data.sort_values(['genre', 'total_sales'], ascending=(True, False))\ncheck_critic_null = check.query('critic_score.isnull()')\nfor i, row in check_critic_null.iterrows():\n up, down, new_val = 1, 1, np.nan\n genre = row['genre']\n try:\n while pd.isna(check.loc[i-up, 'critic_score']):\n if check.loc[i-up, 'genre'] != genre:\n up = -1\n break\n up += 1\n except:\n up=-1\n \n try:\n while pd.isna(check.loc[i+down, 'critic_score']):\n if check.loc[i+down, 'genre'] != genre:\n down = -1\n break\n down += 1\n except:\n down=-1\n \n if up != -1 and down != -1:\n new_val = int((check.loc[i-up, 'critic_score'] + check.loc[i+down, 'critic_score'])/2)\n elif up != -1:\n new_val = check.loc[i-up, 'critic_score']\n elif down != -1:\n new_val = check.loc[i+down, 'critic_score']\n elif pd.notna(check.loc[i, 'user_score']) and check.loc[i, 'user_score'] != 'tbd':\n new_val = int(float(check.loc[i, 'user_score'])*10)\n \n games_data.loc[i, 'critic_score'] = new_val\ngames_data.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 16713 entries, 0 to 16714\nData columns (total 12 columns):\nname 16713 non-null object\nplatform 16713 non-null object\nyear_of_release 16713 non-null int16\ngenre 16713 non-null object\nna_sales 16713 non-null float64\neu_sales 16713 non-null float64\njp_sales 16713 non-null float64\nother_sales 16713 non-null float64\ntotal_sales 16713 non-null float64\ncritic_score 14354 non-null float64\nuser_score 10014 non-null object\nrating 9949 non-null object\ndtypes: float64(6), int16(1), object(5)\nmemory usage: 2.2+ MB\n" ] ], [ [ "Left over NaN values should be because there are no genre specific scores. This is certainly possible with the amount of missing values. Now we should try to base it on the total sales and not have it genre specific. 
Lastly, if there are still values left, we should use the user value to determine the critic value.", "_____no_output_____" ] ], [ [ "check = games_data.sort_values(['total_sales'], ascending=False)\ncheck_critic_null = check.query('critic_score.isnull()')\nfor i, row in check_critic_null.iterrows():\n up, down, new_val = 1, 1, np.nan\n try:\n while pd.isna(check.loc[i-up, 'critic_score']):\n up += 1\n except:\n up=-1\n \n try:\n while pd.isna(check.loc[i+down, 'critic_score']):\n down += 1\n except:\n down=-1\n \n if up != -1 and down != -1:\n new_val = int((check.loc[i-up, 'critic_score'] + check.loc[i+down, 'critic_score'])/2)\n elif up != -1:\n new_val = check.loc[i-up, 'critic_score']\n elif down != -1:\n new_val = check.loc[i+down, 'critic_score']\n elif pd.notna(check.loc[i, 'user_score']) and check.loc[i, 'user_score'] != 'tbd':\n new_val = int(float(check.loc[i, 'user_score'])*10)\n if new_val != np.nan:\n games_data.loc[i, 'critic_score'] = new_val\ngames_data.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 16713 entries, 0 to 16714\nData columns (total 12 columns):\nname 16713 non-null object\nplatform 16713 non-null object\nyear_of_release 16713 non-null int16\ngenre 16713 non-null object\nna_sales 16713 non-null float64\neu_sales 16713 non-null float64\njp_sales 16713 non-null float64\nother_sales 16713 non-null float64\ntotal_sales 16713 non-null float64\ncritic_score 16713 non-null float64\nuser_score 10014 non-null object\nrating 9949 non-null object\ndtypes: float64(6), int16(1), object(5)\nmemory usage: 2.2+ MB\n" ] ], [ [ "Now we can repeat the same process with user scores. In user scores, there are TBD values. These values are most likely due to the sample size requirements of the score. Looking at the data, a majority of the TBD values appear to be on low selling games, and therefore are 'waiting' for a certain number of user scores to determine it is an acceptable sized survey. 
We can treat these the same as if they were NaN values.", "_____no_output_____" ] ], [ [ "check_user_null = check.query('user_score.isnull()')\n\nfor i, row in check_user_null.iterrows():\n up, down, new_val = 1, 1, -1\n genre = row['genre']\n \n try:\n while pd.isna(check.loc[i-up, 'user_score']) or check.loc[i-up, 'user_score'] == 'tbd':\n if check.loc[i-up, 'genre'] != genre:\n up = -1\n break\n up += 1\n except:\n up=-1\n \n try:\n while pd.isna(check.loc[i+down, 'user_score']) or check.loc[i+down, 'user_score'] == 'tbd':\n if check.loc[i+down, 'genre'] != genre:\n down = -1\n break\n down += 1\n except:\n down=-1\n\n if up != -1 and down != -1:\n new_val = (float(check.loc[i-up, 'user_score']) + float(check.loc[i+down, 'user_score']))/2\n elif up != -1:\n new_val = check.loc[i-up, 'user_score']\n elif down != -1:\n new_val = check.loc[i+down, 'user_score']\n if new_val != -1:\n games_data.loc[i, 'user_score'] = round(float(new_val),1)\ngames_data.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 16713 entries, 0 to 16714\nData columns (total 12 columns):\nname 16713 non-null object\nplatform 16713 non-null object\nyear_of_release 16713 non-null int16\ngenre 16713 non-null object\nna_sales 16713 non-null float64\neu_sales 16713 non-null float64\njp_sales 16713 non-null float64\nother_sales 16713 non-null float64\ntotal_sales 16713 non-null float64\ncritic_score 16713 non-null float64\nuser_score 14426 non-null object\nrating 9949 non-null object\ndtypes: float64(6), int16(1), object(5)\nmemory usage: 2.2+ MB\n" ], [ "for i, row in games_data.iterrows():\n if row['user_score'] == 'tbd' or row['user_score'] is np.nan:\n games_data.loc[i, 'user_score'] = round(row['critic_score']/10, 1)", "_____no_output_____" ] ], [ [ "The critic score are integers on a scale from 1 to 100, and the user scores are floats from 0.0 to 10.0, so we need to cast them as such.", "_____no_output_____" ] ], [ [ "games_data['critic_score'] = pd.to_numeric(games_data['critic_score'], downcast='integer')\ngames_data['user_score'] = pd.to_numeric(games_data['user_score'], downcast='float')\ngames_data.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 16713 entries, 0 to 16714\nData columns (total 12 columns):\nname 16713 non-null object\nplatform 16713 non-null object\nyear_of_release 16713 non-null int16\ngenre 16713 non-null object\nna_sales 16713 non-null float64\neu_sales 16713 non-null float64\njp_sales 16713 non-null float64\nother_sales 16713 non-null float64\ntotal_sales 16713 non-null float64\ncritic_score 16713 non-null int8\nuser_score 16713 non-null float32\nrating 9949 non-null object\ndtypes: float32(1), float64(5), int16(1), int8(1), object(4)\nmemory usage: 2.0+ MB\n" ] ], [ [ "#### Ratings\n<a id='step2ratings'></a>", "_____no_output_____" ] ], [ [ "check_rating = games_data.query('rating.isnull()')\ncheck_against = games_data.query('rating.notnull()')", "_____no_output_____" ], [ "import sys\nimport warnings\nif not sys.warnoptions:\n warnings.simplefilter(\"ignore\")", "_____no_output_____" ], [ "check_against['keys'] = check_against.platform+\".\"+check_against.genre\nkeys = list(check_against['keys'].unique())", "_____no_output_____" ], [ "values = list(check_against.groupby(['platform', 'genre'])['rating'].agg(pd.Series.mode))\nprint(check_against.groupby(['platform', 'genre'])['rating'].agg(pd.Series.mode))\nreference = {keys[i]: values[i] for i in range(len(values))}\n\nfor i,row in check_rating.iterrows():\n check = row.platform + \".\" + row.genre\n try:\n replace = 
reference[check]\n games_data.loc[i,'rating'] = replace\n except:\n continue", "platform genre \n3DS Action E10+\n Adventure E10+\n Fighting T\n Misc E\n Platform E\n ... \nXOne Role-Playing M\n Shooter M\n Simulation [E, T]\n Sports E\n Strategy [E, T]\nName: rating, Length: 198, dtype: object\n" ], [ "check_rating = games_data.query('rating.isnull()')\ncheck_against = games_data.query('rating.notnull()')", "_____no_output_____" ], [ "check_rating_null = games_data.query('rating.isnull()')\nfor i, row in check_critic_null.iterrows():\n up, down, new_val = 1, 1, np.nan\n try:\n while pd.isna(games_data.loc[i-up, 'rating']):\n up += 1\n except:\n up=-1\n \n try:\n while pd.isna(games_data.loc[i+down, 'rating']):\n down += 1\n except:\n down=-1\n \n if up != -1 and down != -1:\n if up < down:\n new_val = games_data.loc[i-up, 'rating']\n else:\n new_val = games_data.loc[i+down, 'rating']\n elif up != -1:\n new_val = games_data.loc[i-up, 'rating']\n elif down != -1:\n new_val = games_data.loc[i+down, 'rating']\n else:\n genre = row['genre']\n new_val = games_data.groupby(['genre'])['rating'].agg(pd.Series.mode).loc[genre]\n games_data.loc[i, 'rating'] = new_val\ngames_data.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 16713 entries, 0 to 16714\nData columns (total 12 columns):\nname 16713 non-null object\nplatform 16713 non-null object\nyear_of_release 16713 non-null int16\ngenre 16713 non-null object\nna_sales 16713 non-null float64\neu_sales 16713 non-null float64\njp_sales 16713 non-null float64\nother_sales 16713 non-null float64\ntotal_sales 16713 non-null float64\ncritic_score 16713 non-null int8\nuser_score 16713 non-null float32\nrating 15855 non-null object\ndtypes: float32(1), float64(5), int16(1), int8(1), object(4)\nmemory usage: 2.0+ MB\n" ], [ "keys = check_against.genre.unique()\nvalues = list(check_against.groupby(['genre'])['rating'].agg(pd.Series.mode))\nreference = {keys[i]: values[i] for i in range(len(values))}\n\nfor i,row in check_rating.iterrows():\n check = row.genre\n try:\n replace = reference[check]\n games_data.loc[i,'rating'] = replace\n except:\n continue", "_____no_output_____" ], [ "games_data.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 16713 entries, 0 to 16714\nData columns (total 12 columns):\nname 16713 non-null object\nplatform 16713 non-null object\nyear_of_release 16713 non-null int16\ngenre 16713 non-null object\nna_sales 16713 non-null float64\neu_sales 16713 non-null float64\njp_sales 16713 non-null float64\nother_sales 16713 non-null float64\ntotal_sales 16713 non-null float64\ncritic_score 16713 non-null int8\nuser_score 16713 non-null float32\nrating 16713 non-null object\ndtypes: float32(1), float64(5), int16(1), int8(1), object(4)\nmemory usage: 2.0+ MB\n" ] ], [ [ "Lastly, it turns out that K-A was a rating that is the same as E, as K-A is kids through adults, and was later changed to mean E. 
We should change that in this data as well.", "_____no_output_____" ] ], [ [ "games_data.loc[games_data['rating'] == \"K-A\", \"rating\"] = \"E\"", "_____no_output_____" ], [ "games_data.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 16713 entries, 0 to 16714\nData columns (total 12 columns):\nname 16713 non-null object\nplatform 16713 non-null object\nyear_of_release 16713 non-null int16\ngenre 16713 non-null object\nna_sales 16713 non-null float64\neu_sales 16713 non-null float64\njp_sales 16713 non-null float64\nother_sales 16713 non-null float64\ntotal_sales 16713 non-null float64\ncritic_score 16713 non-null int8\nuser_score 16713 non-null float32\nrating 16713 non-null object\ndtypes: float32(1), float64(5), int16(1), int8(1), object(4)\nmemory usage: 2.0+ MB\n" ] ], [ [ "All of the ratings are now filled in.", "_____no_output_____" ], [ "#### Step 2 Conclusion\n<a id='step2con'></a>", "_____no_output_____" ], [ " All of the data has been cleaned and filled in. There are no longer any missing values, and there are no more obtuse values such as TBD. All of the characteristics are their correct types, and are adequately downsized to optimized types.", "_____no_output_____" ], [ "### Step 3. Analyze the data\n<a id='step3'></a>", "_____no_output_____" ], [ "- Look at how many games were released in different years. Is the data for every period significant?\n- Look at how sales varied from platform to platform. Choose the platforms with the greatest total sales and build a distribution based on data for each year. Find platforms that used to be popular but now have zero sales. How long does it generally take for new platforms to appear and old ones to fade?\n- Determine what period you should take data for. To do so, look at your answers to the previous questions. The data should allow you to build a prognosis for 2017.\n- Work only with the data that you've decided is relevant. Disregard the data for previous years.\n- Which platforms are leading in sales? Which ones are growing or shrinking? Select several potentially profitable platforms.\n- Build a box plot for the global sales of all games, broken down by platform. Are the differences in sales significant? What about average sales on various platforms? Describe your findings.\n- Take a look at how user and professional reviews affect sales for one popular platform (you choose). Build a scatter plot and calculate the correlation between reviews and sales. Draw conclusions.\n- Keeping your conclusions in mind, compare the sales of the same games on other platforms.\n- Take a look at the general distribution of games by genre. What can we say about the most profitable genres? Can you generalize about genres with high and low sales?", "_____no_output_____" ], [ "First lets look at the total sales by release year.", "_____no_output_____" ] ], [ [ "total_years = games_data.year_of_release.max()-games_data.year_of_release.min()\ngames_data.year_of_release.hist(bins=total_years)\n\nplt.ylabel('Total Sales')\nplt.xlabel('Year')\nplt.title('Distribution of Sales by Year')\nplt.show()", "_____no_output_____" ] ], [ [ "There appears to be an early tail, most likely when gaming had not yet fully joined the ranks of pop culture that we know it has today. This delay is likely due to consumer access and early technology. \n\nIt can be compared to the cell phone we know today. 
It used to be a large brick that had a large price tag of nearly //$4,000 and was extremely limited battery life of about 30 minutes, as mentioned by [this NBC article](https://www.nbcnews.com/id/wbna7432915).\n\nThis was not seen as something really necessary for anyone but wealthy business leaders. Soon, technology became cheaper and now most citizens of developed countries have a cell phone.\n\nThat being said, lets remove this time period needed for gaming to take off. ", "_____no_output_____" ] ], [ [ "q1 = games_data.year_of_release.quantile(q=.25)\nq3 = games_data.year_of_release.quantile(q=.75)\nIQR = q3-q1\ngames_data = games_data.query('year_of_release > @q1 - @IQR*1.5')", "_____no_output_____" ], [ "total_years = games_data.year_of_release.max()-games_data.year_of_release.min()\ngames_data.year_of_release.hist(bins=total_years)\n\nplt.ylabel('Total Sales')\nplt.xlabel('Year')\nplt.title('Distribution of Sales by Year')\nplt.show()", "_____no_output_____" ] ], [ [ "Now lets try to filter out the less popular platforms. Also, as we are trying to predict near future results, we need to make sure that the consoles are still selling games in the most recent year. Otherwise, they will not be selling games in 2017 either.", "_____no_output_____" ] ], [ [ "grouped_platform_sales = games_data.groupby(['platform', 'year_of_release'])['total_sales'].agg(['sum', 'count'])", "_____no_output_____" ], [ "plats = []\nfor platform, df in grouped_platform_sales.groupby(level=0):\n #print(df.index)\n keep = df.index.isin(['2016'], level='year_of_release')\n #print(df)\n if 1 in keep:\n plats.append(platform)\nprint(plats)", "['3DS', 'PC', 'PS3', 'PS4', 'PSV', 'Wii', 'WiiU', 'X360', 'XOne']\n" ] ], [ [ "To understand each platform's performance, we need to calculate the total number of sales per year, per platform.", "_____no_output_____" ] ], [ [ "usable_platforms = grouped_platform_sales[grouped_platform_sales.index.get_level_values('platform').isin(plats)]\nprint(usable_platforms)\nclean_games_data = games_data.query('platform.isin(@plats)')", " sum count\nplatform year_of_release \n3DS 1993 0.40 1\n 1999 0.47 5\n 2000 0.02 1\n 2010 0.30 1\n 2011 63.20 116\n... ... ...\nX360 2016 1.52 13\nXOne 2013 18.96 19\n 2014 54.07 61\n 2015 60.14 80\n 2016 26.15 87\n\n[101 rows x 2 columns]\n" ] ], [ [ "There are a few games that are highly skewing the results, such as Wii Sports, that may be diamonds in the rough, and can not be used to predict future sales.", "_____no_output_____" ] ], [ [ "q1 = clean_games_data.total_sales.quantile(q=.25)\nq3 = clean_games_data.total_sales.quantile(q=.75)\nIQR = q3-q1\nfiltered_clean_games_data = clean_games_data.query('total_sales < @q3 + @IQR*1.5')\ntotal_years = clean_games_data.year_of_release.max()-clean_games_data.year_of_release.min()\nplat_count = clean_games_data.pivot_table(values= 'total_sales', index='year_of_release', columns='platform', aggfunc='sum', fill_value=0)\nfiltered_plat_count = filtered_clean_games_data.pivot_table(values= 'total_sales', index='year_of_release', columns='platform', aggfunc='sum', fill_value=0)", "_____no_output_____" ], [ "# This is to make sure that colors are different with a large number of different colored bars in our graphs\ndef floatRgb(mag, cmin, cmax):\n \"\"\" Return a tuple of floats between 0 and 1 for R, G, and B. 
\"\"\"\n # Normalize to 0-1\n try: x = float(mag-cmin)/(cmax-cmin)\n except ZeroDivisionError: x = 0.5 # cmax == cmin\n blue = min((max((4*(0.75-x), 0.)), 1.))\n red = min((max((4*(x-0.25), 0.)), 1.))\n green = min((max((4*math.fabs(x-0.5)-1., 0.)), 1.))\n return red, green, blue\n\ndef rgb(mag, cmin, cmax):\n \"\"\" Return a tuple of integers, as used in AWT/Java plots. \"\"\"\n red, green, blue = floatRgb(mag, cmin, cmax)\n return int(red*255), int(green*255), int(blue*255)\n\ndef strRgb(mag, cmin, cmax):\n \"\"\" Return a hex string, as used in Tk plots. \"\"\"\n return \"#%02x%02x%02x\" % rgb(mag, cmin, cmax)", "_____no_output_____" ], [ "# Plotting\nplots = [clean_games_data, filtered_clean_games_data]\nplot_totals = [plat_count, filtered_plat_count]\n\nfor plot in range(len(plots)):\n plt.figure(figsize=(16,8))\n print(plots[plot].platform.unique())\n color_vals = []\n #rotates through the platforms\n for i in range(len(plots[plot].platform.unique())):\n num = i*1/len(plots[plot].platform.unique())\n color = strRgb(num,0,1)\n color_vals.append(color)\n\n # Creating dictionaries with colors\n colors = {i: color_vals[i] for i in range(len(color_vals))}\n vals = list(plots[plot].platform.unique())\n platforms = {i: vals[i] for i in range(len(vals))}\n\n # Plotting in a loop\n for i in range(len(plot_totals[plot].index)):\n year = plot_totals[plot].index[i]\n year_data = plot_totals[plot].loc[year]\n baseline = 0\n color_index = 0\n for j in year_data:\n plt.bar(x = i, height = j, bottom = baseline, color=colors[color_index])\n baseline += j\n color_index += 1\n\n plt.xticks(np.arange(len(plot_totals[plot].index)), plot_totals[plot].index, rotation = 270);\n\n # Creating legend\n patches = list()\n for i in reversed(range(len(plots[plot].platform.unique()))):\n patch = mpatches.Patch(color = colors[i], label = plot_totals[plot].columns[i])\n patches.append(patch)\n plt.legend(handles=patches, fontsize=12, framealpha=1)\n\n # Some additioanl plot prep\n plt.rcParams['axes.axisbelow'] = True\n plt.grid(color='gray', linestyle='dashed')\n plt.ylabel('Amount of Sales')\n plt.xlabel('Year')\n plt.title('Number of Games by Year and Platform');", "['Wii' 'X360' 'PS3' 'PS4' '3DS' 'PC' 'XOne' 'WiiU' 'PSV']\n['PC' 'Wii' 'PS3' 'XOne' 'X360' 'WiiU' '3DS' 'PS4' 'PSV']\n" ] ], [ [ "<div class=\"alert alert-success\" role=\"alert\">\nReviewer's comment v. 1:\n \nAN excellent graphs, but Ridgeplots can be useful here: https://matplotlib.org/matplotblog/posts/create-ridgeplots-in-matplotlib/ \n</div>", "_____no_output_____" ], [ "We can see a large boost of games sold around 2009 through 2011, primarily for the success of the Wii, PS3, and Xbox 360 consoles. After that burst, the sales drop, and then next gen consoles become popular, but not at the same level and are already falling well below previous years by 2016.\n\nTo make a prediction for the next year, we need to attempt a parabolic trend, as it will be based on the growth or decay of the popularity of the platforms, and it should also represent how quickly the platforms are coming in and out of popularity. \n\nWe can see in the above plots on a single platform basis that there are parabolic trends where there is not enough time yet for game developers to create games for a brand new platform, they get that time to make it, and over time the platform becomes outdated, and developers and consumers both prepare for the new consoles. 
In particular, consumers may want to save money on video games if they believe a platform is nearing the end of its stride, and would want to be financially ready for the next platform. \n\nThis parabolic trend tends to line up for platforms, as major competing consoles launch at the same time. For example, the playstation series from Sony typically launches around the same time as Microsoft's Xbox line to drive sales with competition.", "_____no_output_____" ] ], [ [ "predicting_2017 = filtered_clean_games_data.query('year_of_release.isin([2014, 2015, 2016])')\ntemp = pd.pivot_table(predicting_2017, values='total_sales', index='platform', columns='year_of_release', aggfunc='sum')", "_____no_output_____" ], [ "def calc_parabola_vertex(x1, y1, x2, y2, x3, y3):\n\n denom = (x1-x2) * (x1-x3) * (x2-x3);\n A = (x3 * (y2-y1) + x2 * (y1-y3) + x1 * (y3-y2)) / denom;\n B = (x3*x3 * (y1-y2) + x2*x2 * (y3-y1) + x1*x1 * (y2-y3)) / denom;\n C = (x2 * x3 * (x2-x3) * y1+x3 * x1 * (x3-x1) * y2+x1 * x2 * (x1-x2) * y3) / denom;\n\n return A,B,C", "_____no_output_____" ], [ "for i, row in temp.iterrows():\n x1, y1 = [2014, row[2014]]\n x2, y2 = [2015, row[2015]]\n x3, y3 = [2016, row[2016]]\n \n a, b, c = calc_parabola_vertex(x1, y1, x2, y2, x3, y3)\n new_val=(a*(2017**2))+(b*2017)+c\n \n if new_val > 0:\n temp.loc[i, 2017] = new_val\n else:\n temp.loc[i, 2017] = 0\ntemp", "_____no_output_____" ] ], [ [ "Now that we have the estimated amounts of sales per platform, we need to integrate it into our sales graph.", "_____no_output_____" ] ], [ [ "filtered_plat_count = filtered_plat_count.append(temp[2017])\nprint(temp[2017].sum())", "32.4200055571273\n" ], [ "# Plotting\nplots = [filtered_clean_games_data]\nplot_totals = [filtered_plat_count]\n\nfor plot in range(len(plots)):\n plt.figure(figsize=(16,8))\n print(plots[plot].platform.unique())\n color_vals = []\n for i in range(len(plots[plot].platform.unique())):\n num = i*1/len(plots[plot].platform.unique())\n color = strRgb(num,0,1)\n color_vals.append(color)\n\n # Creating dictionaries with colors and cancelaltion causes\n colors = {i: color_vals[i] for i in range(len(color_vals))}\n vals = list(plots[plot].platform.unique())\n platforms = {i: vals[i] for i in range(len(vals))}\n\n # Plotting in a loop\n for i in range(len(plot_totals[plot].index)):\n year = plot_totals[plot].index[i]\n year_data = plot_totals[plot].loc[year]\n baseline = 0\n color_index = 0\n for j in year_data:\n plt.bar(x = i, height = j, bottom = baseline, color=colors[color_index])\n baseline += j\n color_index += 1\n #plt.text(x = i, y = plat_count[i] + 0.05, s = round(plat_count[i], 1), \\\n #ha = 'center', fontsize=13)\n# for j in year_data:\n# plt.bar(x = 2017, height = j, bottom = baseline, color=colors[color_index])\n# baseline += j\n# color_index += 1\n \n\n plt.xticks(np.arange(len(plot_totals[plot].index)), plot_totals[plot].index, rotation = 270);\n\n # Creating legend\n patches = list()\n for i in reversed(range(len(plots[plot].platform.unique()))):\n patch = mpatches.Patch(color = colors[i], label = plot_totals[plot].columns[i])\n patches.append(patch)\n plt.legend(handles=patches, fontsize=12, framealpha=1)\n\n # Some additioanl plot prep\n plt.rcParams['axes.axisbelow'] = True\n plt.grid(color='gray', linestyle='dashed')\n plt.ylabel('Amount of Sales')\n plt.xlabel('Year')\n plt.title('Number of Games by Year and Platform');", "['PC' 'Wii' 'PS3' 'XOne' 'X360' 'WiiU' '3DS' 'PS4' 'PSV']\n" ] ], [ [ "We can see that from about 2013 to 2016, it has risen, and began dropping at 
a faster rate. Our prediction of 2017 at this level of modelling visibly follows that trend. One thing to note is that this indicates, from what we know of the gaming industry, that it would likely be time for a new generation of consoles to come out, restarting the wave of consumer sales.", "_____no_output_____" ], [ "Next, let's look at the distribution of these games by platform.", "_____no_output_____" ] ], [ [ "filtered_clean_games_data.boxplot(column='total_sales', by='platform', figsize=(16,8))\nplt.ylabel('Total Sales')\nplt.xlabel('Platform')\nplt.title('Distribution of Sales by Platform')\nplt.show()\n", "_____no_output_____" ] ], [ [ "It appears that there are a significant number of outliers across the board. This shows that a large portion of each platform's market performance is largely based on triple A titles, but there are still a significant number of games that are indie games, less advertised games, or games that just generally did not get the same amount of traction among consumers. \n\nIt looks like overall, the PS3 and Xbox 360 generally had better selling games, as the distribution is spread out to higher sales. The PSV was not known for its popularity, which explains its low distribution. As for the PC, it is known for having a lot of indie games, as it is more accessible for game makers to distribute games. This accessibility also explains the large number of outliers as well.", "_____no_output_____" ], [ "Now let's take a look at how critic and user reviews affect sales of a single platform. For this example, we will look at the Wii.", "_____no_output_____" ] ], [ [ "wii_data = filtered_clean_games_data[clean_games_data['platform'] == 'Wii']\nfig, axes = plt.subplots(ncols=3, figsize=(16,8))\n\naxes[0].scatter(wii_data.critic_score, wii_data.total_sales, color='orange', alpha=.5)\naxes[1].scatter(wii_data.user_score, wii_data.total_sales, color='blue', alpha=.5)\naxes[2].scatter(wii_data.user_score*10, wii_data.total_sales, color='blue', alpha=.3)\naxes[2].scatter(wii_data.critic_score, wii_data.total_sales, color='orange', alpha=.3)\n\npop_a = mpatches.Patch(color='blue', label='user')\npop_b = mpatches.Patch(color='orange', label='critic')\n\naxes[2].legend(handles=[pop_a,pop_b], loc='upper left')\n\naxes[0].set(title='Critic Score vs. Total Sales', xlabel='Critic Score', ylabel='Total Sales')\naxes[1].set(title='User Score vs. Total Sales', xlabel='User Score')\naxes[2].set(title='Overlapped Critic and User Score vs. Total Sales', xlabel='Critic Score and Equivalent Scale of User Score')\n\nplt.show()", "_____no_output_____" ] ], [ [ "Although critic and user reviews look similar, we can see that barring a few outliers, users tend to be more willing to rate games higher than critics. It also seems that the shape of the critics' scoring is more rectangular than the users' scores. This implies that sales volume has less of an impact on critics' scores than on users' scores. 
Users may be more inclined to be influenced by word of mouth and riding the wave of a game's popularity.", "_____no_output_____" ], [ "A lot of games are multiplatform, so let's see if there is much of a difference between platforms.", "_____no_output_____" ] ], [ [ "wii_data = wii_data[['name', 'na_sales', 'eu_sales', 'jp_sales', 'other_sales', 'total_sales']]\nx360_data = filtered_clean_games_data[clean_games_data['platform'] == 'X360']\nx360_data = x360_data[['name', 'na_sales', 'eu_sales', 'jp_sales', 'other_sales', 'total_sales']]\nwii_x360_cross = pd.merge(wii_data, x360_data, on=\"name\", suffixes=(\"Wii\", \"X360\"))\n\nfig = plt.figure()\nax1 = fig.add_subplot(111)\n\nax1.hist(wii_x360_cross['total_salesWii'], bins=30, alpha= 0.5, label='Wii Sales')\nax1.hist(wii_x360_cross['total_salesX360'], bins=30, alpha= 0.5, label='XBox 360 Sales')\nplt.legend(loc='upper right');\nplt.ylabel('Frequency')\nplt.xlabel('Total Sales')\nplt.title('Distribution of Sales by Platform')\nplt.show()", "_____no_output_____" ] ], [ [ "It does appear that there may be a slight bias towards the Xbox 360. This makes some sense, as the consoles are very different. The Xbox is primarily a button-input console, while the Wii, although it also has button inputs, was largely popular for its motion controls. Because the Xbox does not have motion controls, motion-control games would not be multiplatform, so in this comparison the Wii does not get the advantage of the very feature that drove its popularity.", "_____no_output_____" ], [ "Now let's take a look at the same level of detail for the game genres.", "_____no_output_____" ] ], [ [ "top_plats = filtered_clean_games_data.groupby('genre')['total_sales'].sum()\nfiltered_clean_games_data.boxplot(column='total_sales', by='genre', figsize=(16,8))\nplt.ylabel('Total Sales')\nplt.xlabel('Genre')\nplt.title('Distribution of Sales by Genre')\nplt.show()\n", "_____no_output_____" ] ], [ [ "The largest genres are Action, Fighting, Platform, Shooter, and Sports. This makes sense, as they make up a large part of the triple A titles, including well-established franchises such as Zelda, Mortal Kombat, Mario, Call of Duty, and FIFA. The lowest are Adventure, Puzzle, and Strategy, which are typically indie titles and represent lower volume and pricing.", "_____no_output_____" ] ], [ [ "top_plats.plot('bar', figsize=(16,8))\nplt.ylabel('Total Sales')\nplt.xlabel('Genre')\nplt.title('Total Sales by Genre')\nplt.show()", "_____no_output_____" ] ], [ [ "Similarly to the distribution, the largest amounts of sales are in Action and Sports, while the lowest are in Puzzle and Strategy. The differences between these totals and the distributions come down largely to the volume of games in the marketplace.", "_____no_output_____" ], [ "#### Step 3 Conclusion\n<a id='step3con'></a>", "_____no_output_____" ], [ "After viewing the preliminary data, we saw that the earlier years are not very representative, so all lower outlying years were filtered out. We then filtered for the popular and relevant consoles, based on whether they were still selling games in the most recent year. We saw a large boost of games sold around 2009 through 2011, primarily due to the success of the Wii, PS3, and Xbox 360 consoles - in those years, they were relatively new. After that burst, the success fell, and then next-gen consoles came out, but the wave was not as successful and sales were already falling well below previous levels by 2016. 
Because of this trend, and without the knowledge of new consoles, the trend naturally falls, and we expect sales around 32.4 million USD.\n\nAs expected with the waves of success, consoles such as the PS3 and Xbox 360 had higher distributions of sales, while handhelds and PC games sold typically lower. \n\nWe also found that users and critics scored games very similar, but users may be slightly more biased by the traction in sales and popularity by word of mouth. Consoles were also relatively similar, but small discrepancies can be found between multiplatform games, and this may be due to the strengths and weaknesses of the consoles in relation to the game types. \n\nWe also looked at the distribution of sales based on genre and noticed that more total sales by genre correlated with higher distribution of better selling games. They also are typically the genres of triple A games, so these distributions make sense.", "_____no_output_____" ], [ "### Step 4. Create a user profile for each region\n<a id='step4'></a>", "_____no_output_____" ] ], [ [ "categories = ['platform', 'genre', 'rating']\nfig, axes = plt.subplots(nrows=3, ncols=3, figsize=(16,16))\n\ncolor_vals = ['orange', 'cyan', 'lime']\ncolors = {i: color_vals[i] for i in range(len(color_vals))}\nvals = ['na_sales', 'eu_sales', 'jp_sales']\nlocations = {i: vals[i] for i in range(len(vals))}\n\nfor i in range(len(categories)):\n top_order = filtered_clean_games_data.groupby(categories[i])['na_sales', 'eu_sales', 'jp_sales'].sum()\n na_top_order = top_order.sort_values(by='na_sales', ascending=False)\n eu_top_order = top_order.sort_values(by='eu_sales', ascending=False)\n jp_top_order = top_order.sort_values(by='jp_sales', ascending=False)\n top = [na_top_order, eu_top_order, jp_top_order]\n \n for j in range(len(top)):\n # Plotting in a loop\n for k in range(5):\n platforms = top[j].index[k]\n platforms_data = top[j].loc[platforms]\n baseline = 0\n color_index = 0\n for m in platforms_data:\n axes[i,j].bar(x = k, height = m, bottom = baseline, color=colors[color_index])\n baseline += m\n color_index += 1\n\n title_label = 'Top 5 ' + top_order.index.name.title() + 's for ' + top_order.columns[j][:2].upper()\n axes[i,j].set_title(label=title_label)\n axes[i,j].xaxis.set(ticks=np.arange(5), ticklabels=top[j].index)\n axes[i,0].set_ylabel(ylabel='Sales')\n \n#Creating legend\npatches = list()\nfor i in range(3):\n #patch = mpatches.Patch(color = colors[i], label = cancellation_cause[cancellation_code_per_carrier_pct.columns[i]])\n patch = mpatches.Patch(color = colors[i], label = top_order.columns[i])\n patches.append(patch)\nfig.legend(handles=patches, fontsize=12, framealpha=1, loc='upper left')\nfig.show()", "_____no_output_____" ] ], [ [ "#### Step 4 Conclusion\n<a id='step4con'></a>", "_____no_output_____" ], [ "For the top 5 platforms by location, the Xbox 360, Wii, and PS3 were very popular in North America, but nothing outstanding byond those three. The EU is similar, but PC was preferenced over the Wii, keeping course with tactile, button based platforms. In Japan, PS3 was the largest platform, but handhelds were highly prefered over what was popular for both the EU and NA groups. This deiscrepancy may be largely due to [Japan's significantly higher use of public transport](https://en.wikipedia.org/wiki/List_of_countries_by_rail_usage). This means they may be more inclined to use that time on a train to use a handheld console for convenience. 
\n\nFor the top 5 genres by location, Action and Sports were very the most popular in North America, and the EU. In Japan, Role-Playing was a close second, which was at the 5th spot in NA and was not even present in the EU's top 5. This may be due to the popularity with sports in the respective countries. For example, some of the two most successful sports games are the Madden NFL american football series and FIFA Soccer (european football) series. Both of these may correlate with the popularity of the sports in the United States and Europe respectively.\n\nFor the top 5 ratings by location, E and T were every groups first and second, respectively. In Japan and the EU, M took precedent over E10+, but the opposite was true in NA. The differences between M and E10+ however, are quite small in all regions, and may be considered negligible.", "_____no_output_____" ], [ "### Step 5. Test the following hypotheses:\n<a id='step5'></a>", "_____no_output_____" ] ], [ [ "# the level of significance\nalpha = .05", "_____no_output_____" ] ], [ [ "#### Average user ratings of the Xbox One and PC platforms are the same.\n<a id='step5h1'></a>", "_____no_output_____" ], [ "A dual sample t-test will be used to determine if the _surf_ plan and _ultimate_ plan generate different monthly revenues per person. We will create the following hypotheses:", "_____no_output_____" ], [ "The null hypothesis, $H_0$: The average score from users of the Xbox One games and PC games are equal. \nThe alternative hypothesis, $H_A$: The average score from users of the Xbox One games and PC games are not equal. ", "_____no_output_____" ] ], [ [ "set1 = filtered_clean_games_data[filtered_clean_games_data.platform == 'XOne']['user_score']\nset2 = filtered_clean_games_data[filtered_clean_games_data.platform == 'PC']['user_score']\n\nresults = st.ttest_ind(\n filtered_clean_games_data[filtered_clean_games_data.platform == 'XOne']['user_score'],\n filtered_clean_games_data[filtered_clean_games_data.platform == 'PC']['user_score'],\n equal_var=False)\n\nprint('p-value: ', results.pvalue)\n\nif results.pvalue > alpha:\n print('We cannot reject the null hypothesis')\nelse:\n print('We can reject the null hypothesis')", "p-value: 0.001260648856892789\nWe can reject the null hypothesis\n" ], [ "fig, axes = plt.subplots(ncols=2, figsize=(16,4))\n\nxbox = filtered_clean_games_data[filtered_clean_games_data.platform == 'XOne']\npc = filtered_clean_games_data[filtered_clean_games_data.platform == 'PC']\n\naxes[0].hist(xbox.user_score, bins=len(xbox.user_score.unique()), color='orange')\naxes[1].hist(pc.user_score, bins=len(pc.user_score.unique()), color='blue')\naxes[0].set(title='Distribution of Xbox User Scores', xlabel='User Score', ylabel='Frequency')\naxes[1].set(title='Distribution of PC User Scores', xlabel='User Score', ylabel='Frequency')\n\nplt.show()\n\npop_a = mpatches.Patch(color='blue', label='PC')\npop_b = mpatches.Patch(color='orange', label='Xbox')\n\nfig, axes = plt.subplots(ncols=1, figsize=(16,8))\naxes.hist([xbox.user_score, pc.user_score], bins=len(xbox.user_score.unique()), color=['orange', 'blue'])\naxes.set(title='Distribution of Xbox and PC User Scores', xlabel='User Score', ylabel='Frequency')\naxes.legend(handles=[pop_a,pop_b], loc='upper left')\nplt.show()", "_____no_output_____" ] ], [ [ "To confirm, it does appear that the distribution of the PC user score is more left skewed than the Xbox user score. 
the PC scores peak around 6.7, and tail more evenly in both directions, while there seems to be a larger dostribution of high ranked PC games.", "_____no_output_____" ], [ "The variances of the two subsamples are not equal, and therefore the parameter, `equal_var` must be set to False to compare sets with different variances and/or sets of different sizes.", "_____no_output_____" ], [ "The null hypothesis of a dual sample t-test is that the two groups are similar, and the alternative hypothesis is that they are dissimilar. \n\nIn this case, the null hypothesis is that the average score from users of the Xbox One games are similar to the average scores of the PC games. In the results of the t-test, the p-value was below our level of significance and we could reject the null variable and say that the average scores differ between the two groups. From the correlation of user score to sales, as well as the distribution of Xbox One games' higher total sales vs the PC games' lower total sales, this makes sense that they would not be equal.", "_____no_output_____" ], [ "#### Average user ratings for the Action and Sports genres are different.\n<a id='step5h2'></a>", "_____no_output_____" ], [ "The null hypothesis, $H_0$: The average score from users of the Action games and Sports games are equal. \nThe alternative hypothesis, $H_A$: The average score from users of the Action games and Sports games are not equal. ", "_____no_output_____" ] ], [ [ "results = st.ttest_ind(\n filtered_clean_games_data[filtered_clean_games_data.genre == 'Action']['user_score'],\n filtered_clean_games_data[filtered_clean_games_data.genre == 'Sports']['user_score'],\n equal_var=False)\n\nprint('p-value: ', results.pvalue)\n\nif results.pvalue > alpha:\n print('We cannot reject the null hypothesis')\nelse:\n print('We can reject the null hypothesis')", "p-value: 5.279399270185236e-10\nWe can reject the null hypothesis\n" ] ], [ [ "The variances of these two subsamples are also not equal, so the `equal_var` must be set to False.\n\nFor this example, the null hypothesis is that the average score from users of the Action games are similar to the average scores of the Sports games. In the results of the t-test, the p-value was again below our level of significance and we could reject the null variable and say that the average scores differ between the two groups. The distribution of Action games' higher total sales vs the Sports games' lower total sales, once again we can make sense that they would not be equal.", "_____no_output_____" ], [ "#### Step 5 Conclusion\n<a id='step5con'></a>", "_____no_output_____" ], [ "The original hypotheses that we had were that the average user scores of the Xbox One and PC platforms are the same, and that the average user scores for the Action and Sports genres are the same.\n\nWe can conclude, that for the first hypothesis, the two platforms did result in different user scores. This makes sense from our prelimiary visualisation of the data that users typically score games higher for games that have sold better amongst consumers. Because the Xbox tended to sell higher amounts per game as compared to the PC, it would make sense that the scoring would be higher.\n\nAs for the second hypothesis, we were testing the alternative hypothesis, and concluded similarly that the genres Action and Sports do score differently among direct consumers. This is most likely due to the same idea that Action sells more overall than Sports. 
Per game it may be a lower distribution, but Sports games tend to be repetitive year over year, so ratings may not vary much, while Action titles may be less consistent and vary much more.", "_____no_output_____" ], [ "### Step 6. Write a general conclusion\n<a id='step6'></a>", "_____no_output_____" ], [ "In this project, the data has been reviewed, filled, and converted into the correct types. The data was filtered, split by year, annual performance was tracked, and the next year was predicted. From there, location-based sales distributions were made by platform, genre, and ESRB rating. Lastly, some hypothesis testing was conducted on the user scores. \n\nWe initially found that console game sales are cyclical, driven by console generations coming and going. The years before 1993 were uneventful and not telling of current market experiences, as they are minimal in comparison. For 2017, we predicted approximately 32.4 million USD in total sales. This is the case without any new consoles entering the market; the decay in sales appears to follow from the lack of new market participants. \n\nFor the most part, the EU and North America had largely similar preferences for platforms and genres, while Japan preferred handheld-based platforms and role-playing games. All groups had little distinction from each other when it came to the games' ESRB ratings.\n\nFor the first hypothesis, we tested the null hypothesis that the two sets of user scores were the same, at a 5% significance level. The test result contradicted the null hypothesis, so it was rejected. \n\nFor the second hypothesis, we stated the alternative hypothesis that the two sets of user scores were different, at the same level of significance, and again the test rejected the null hypothesis that they were the same.\n\nWe believe that any advertising should continue to be focused on large, triple A titles. They typically meet the criteria for driving sales, while there is a large number of games that contribute only minimally to sales. Also, sales have gone down, likely in anticipation of the next generation of platforms, which would drive a new wave of sales as the current generation decays.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
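Editorial aside on the forecasting step in the record above: the notebook's `calc_parabola_vertex` helper solves by hand for the unique quadratic through the last three yearly totals of each platform and then evaluates it at 2017. The same fit can be reproduced with `numpy.polyfit`, which is an easy way to sanity-check the coefficients. The sketch below is not part of the archived notebook; the three sample values are simply the XOne sums printed earlier in it, so treat them as illustrative rather than the filtered figures the notebook actually fits.

```python
import numpy as np

# Last three yearly totals for one platform (illustrative XOne sums from the notebook's printout).
years = np.array([2014, 2015, 2016])
sales = np.array([54.07, 60.14, 26.15])

# Centre the x-axis so the quadratic fit is well conditioned, then fit exactly
# (three points, degree two) and extrapolate one step ahead to 2017.
x = years - years[0]
a, b, c = np.polyfit(x, sales, deg=2)
pred_2017 = max(a * 3**2 + b * 3 + c, 0.0)   # clip at zero, as the notebook's loop does
print(round(pred_2017, 2))
```

Because three points determine a unique parabola, this agrees with `calc_parabola_vertex` up to floating-point error; only the parameterisation of the x-axis differs.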
e7d6e26a4285bcea2e7c4cd98d382f9a8fe73af0
4,660
ipynb
Jupyter Notebook
notebooks/ensemble_ex_03.ipynb
Imarcos/scikit-learn-mooc
69a7a7e891c5a4a9bce8983d7c92326674fda071
[ "CC-BY-4.0" ]
null
null
null
notebooks/ensemble_ex_03.ipynb
Imarcos/scikit-learn-mooc
69a7a7e891c5a4a9bce8983d7c92326674fda071
[ "CC-BY-4.0" ]
null
null
null
notebooks/ensemble_ex_03.ipynb
Imarcos/scikit-learn-mooc
69a7a7e891c5a4a9bce8983d7c92326674fda071
[ "CC-BY-4.0" ]
null
null
null
29.308176
101
0.616953
[ [ [ "# 📝 Exercise M6.03\n\nThe aim of this exercise is to:\n\n* verifying if a random forest or a gradient-boosting decision tree overfit\n if the number of estimators is not properly chosen;\n* use the early-stopping strategy to avoid adding unnecessary trees, to\n get the best generalization performances.\n\nWe will use the California housing dataset to conduct our experiments.", "_____no_output_____" ] ], [ [ "from sklearn.datasets import fetch_california_housing\nfrom sklearn.model_selection import train_test_split\n\ndata, target = fetch_california_housing(return_X_y=True, as_frame=True)\ntarget *= 100 # rescale the target in k$\ndata_train, data_test, target_train, target_test = train_test_split(\n data, target, random_state=0, test_size=0.5)", "_____no_output_____" ] ], [ [ "<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">If you want a deeper overview regarding this dataset, you can refer to the\nAppendix - Datasets description section at the end of this MOOC.</p>\n</div>", "_____no_output_____" ], [ "Create a gradient boosting decision tree with `max_depth=5` and\n`learning_rate=0.5`.", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "\nAlso create a random forest with fully grown trees by setting `max_depth=None`.", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "\nFor both the gradient-boosting and random forest models, create a validation\ncurve using the training set to assess the impact of the number of trees on\nthe performance of each model. Evaluate the list of parameters `param_range =\n[1, 2, 5, 10, 20, 50, 100]` and use the mean absolute error.", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Both gradient boosting and random forest models will always improve when\nincreasing the number of trees in the ensemble. However, it will reach a\nplateau where adding new trees will just make fitting and scoring slower.\n\nTo avoid adding new unnecessary tree, unlike random-forest gradient-boosting\noffers an early-stopping option. Internally, the algorithm will use an\nout-of-sample set to compute the generalization performance of the model at\neach addition of a tree. Thus, if the generalization performance is not\nimproving for several iterations, it will stop adding trees.\n\nNow, create a gradient-boosting model with `n_estimators=1_000`. This number\nof trees will be too large. Change the parameter `n_iter_no_change` such\nthat the gradient boosting fitting will stop after adding 5 trees that do not\nimprove the overall generalization performance.", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Estimate the generalization performance of this model again using\nthe `sklearn.metrics.mean_absolute_error` metric but this time using\nthe test set that we held out at the beginning of the notebook.\nCompare the resulting value with the values observed in the validation\ncurve.", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
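The exercise record above intentionally leaves every code cell as a "# Write your code here." placeholder. As a reading aid, here is a minimal sketch of the workflow its markdown describes (a validation curve over `n_estimators`, then early stopping via `n_iter_no_change`), written against the standard scikit-learn API. It is one plausible shape of an answer, with defaults for everything the exercise does not pin down, not the course's reference solution.

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split, validation_curve
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error

data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100  # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0, test_size=0.5)

gbdt = GradientBoostingRegressor(max_depth=5, learning_rate=0.5)
forest = RandomForestRegressor(max_depth=None)  # fully grown trees

# Validation curve for the gradient boosting model (repeat with `forest` for the random forest).
param_range = [1, 2, 5, 10, 20, 50, 100]
train_scores, test_scores = validation_curve(
    gbdt, data_train, target_train,
    param_name="n_estimators", param_range=param_range,
    scoring="neg_mean_absolute_error", n_jobs=2)
print(-test_scores.mean(axis=1))  # mean validation MAE for each ensemble size

# Early stopping: stop once 5 consecutive added trees fail to improve the
# internal validation score.
gbdt_early = GradientBoostingRegressor(n_estimators=1_000, n_iter_no_change=5)
gbdt_early.fit(data_train, target_train)
print("trees kept:", gbdt_early.n_estimators_)
print("test MAE:", mean_absolute_error(target_test, gbdt_early.predict(data_test)))
```

`n_estimators_` reports how many trees were actually fitted before the stopping rule fired, which is typically far fewer than the 1,000 requested.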
e7d6e4cc768293ad206ab4b60437cf58b3e67a48
410,066
ipynb
Jupyter Notebook
convolutional-neural-networks/conv-visualization/conv_visualization.ipynb
marielen/deep-learning-v2-pytorch
36ff7f22a08c065013b459032bad9cfaf7b375fd
[ "MIT" ]
null
null
null
convolutional-neural-networks/conv-visualization/conv_visualization.ipynb
marielen/deep-learning-v2-pytorch
36ff7f22a08c065013b459032bad9cfaf7b375fd
[ "MIT" ]
null
null
null
convolutional-neural-networks/conv-visualization/conv_visualization.ipynb
marielen/deep-learning-v2-pytorch
36ff7f22a08c065013b459032bad9cfaf7b375fd
[ "MIT" ]
null
null
null
1,084.830688
115,736
0.953147
[ [ [ "# Convolutional Layer\n\nIn this notebook, we visualize four filtered outputs (a.k.a. activation maps) of a convolutional layer. \n\nIn this example, *we* are defining four filters that are applied to an input image by initializing the **weights** of a convolutional layer, but a trained CNN will learn the values of these weights.\n\n<img src='notebook_ims/conv_layer.gif' height=60% width=60% />", "_____no_output_____" ], [ "### Import the image", "_____no_output_____" ] ], [ [ "import cv2\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# TODO: Feel free to try out your own images here by changing img_path\n# to a file path to another image on your computer!\nimg_path = 'data/udacity_sdc.png'\n\n# load color image \nbgr_img = cv2.imread(img_path)\n# convert to grayscale\ngray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)\n\n# normalize, rescale entries to lie in [0,1]\ngray_img = gray_img.astype(\"float32\")/255\n\n# plot image\nplt.imshow(gray_img, cmap='gray')\nplt.show()", "_____no_output_____" ] ], [ [ "### Define and visualize the filters", "_____no_output_____" ] ], [ [ "import numpy as np\n\n## TODO: Feel free to modify the numbers here, to try out another filter!\nfilter_vals = np.array([[1, 1, 1, -1], [1, 1, -1, 1], [1, -1, 1, 1], [-1, 1, 1, 1]])\n\nprint('Filter shape: ', filter_vals.shape)\n", "Filter shape: (4, 4)\n" ], [ "# Defining four different filters, \n# all of which are linear combinations of the `filter_vals` defined above\n\n# define four filters\nfilter_1 = filter_vals\nfilter_2 = -filter_1\nfilter_3 = filter_1.T\nfilter_4 = -filter_3\nfilters = np.array([filter_1, filter_2, filter_3, filter_4])\n\n# For an example, print out the values of filter 1\nprint('Filter 1: \\n', filter_1)", "Filter 1: \n [[ 1 1 1 -1]\n [ 1 1 -1 1]\n [ 1 -1 1 1]\n [-1 1 1 1]]\n" ], [ "# visualize all four filters\nfig = plt.figure(figsize=(10, 5))\nfor i in range(4):\n ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])\n ax.imshow(filters[i], cmap='gray')\n ax.set_title('Filter %s' % str(i+1))\n width, height = filters[i].shape\n for x in range(width):\n for y in range(height):\n ax.annotate(str(filters[i][x][y]), xy=(y,x),\n horizontalalignment='center',\n verticalalignment='center',\n color='white' if filters[i][x][y]<0 else 'black')", "_____no_output_____" ] ], [ [ "## Define a convolutional layer \n\nThe various layers that make up any neural network are documented, [here](http://pytorch.org/docs/stable/nn.html). For a convolutional neural network, we'll start by defining a:\n* Convolutional layer\n\nInitialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network!\n\n\n#### `__init__` and `forward`\nTo define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the forward behavior of a network that applyies those initialized layers to an input (`x`) in the function `forward`. In PyTorch we convert all inputs into the Tensor datatype, which is similar to a list data type in Python. 
\n\nBelow, I define the structure of a class called `Net` that has a convolutional layer that can contain four 3x3 grayscale filters.", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n \n# define a neural network with a single convolutional layer with four filters\nclass Net(nn.Module):\n \n def __init__(self, weight):\n super(Net, self).__init__()\n # initializes the weights of the convolutional layer to be the weights of the 4 defined filters\n k_height, k_width = weight.shape[2:]\n # assumes there are 4 grayscale filters\n self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)\n self.conv.weight = torch.nn.Parameter(weight)\n\n def forward(self, x):\n # calculates the output of a convolutional layer\n # pre- and post-activation\n conv_x = self.conv(x)\n activated_x = F.relu(conv_x)\n \n # returns both layers\n return conv_x, activated_x\n \n# instantiate the model and set the weights\nweight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)\nmodel = Net(weight)\n\n# print out the layer in the network\nprint(model)", "Net(\n (conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)\n)\n" ] ], [ [ "### Visualize the output of each filter\n\nFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.", "_____no_output_____" ] ], [ [ "# helper function for visualizing the output of a given layer\n# default number of filters is 4\ndef viz_layer(layer, n_filters= 4):\n fig = plt.figure(figsize=(20, 20))\n \n for i in range(n_filters):\n ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[])\n # grab layer outputs\n ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')\n ax.set_title('Output %s' % str(i+1))", "_____no_output_____" ] ], [ [ "Let's look at the output of a convolutional layer, before and after a ReLu activation function is applied.", "_____no_output_____" ] ], [ [ "# plot original image\nplt.imshow(gray_img, cmap='gray')\n\n# visualize all filters\nfig = plt.figure(figsize=(12, 6))\nfig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)\nfor i in range(4):\n ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])\n ax.imshow(filters[i], cmap='gray')\n ax.set_title('Filter %s' % str(i+1))\n\n \n# convert the image into an input Tensor\ngray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)\n\n# get the convolutional layer (pre and post activation)\nconv_layer, activated_layer = model(gray_img_tensor)\n\n# visualize the output of a conv layer\nviz_layer(conv_layer)", "_____no_output_____" ] ], [ [ "#### ReLu activation\n\nIn this model, we've used an activation function that scales the output of the convolutional layer. We've chose a ReLu function to do this, and this function simply turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`. \n\n<img src='notebook_ims/relu_ex.png' height=50% width=50% />", "_____no_output_____" ] ], [ [ "# after a ReLu is applied\n# visualize the output of an activated conv layer\nviz_layer(activated_layer)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
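A small addition to the convolution walk-through above: the activation maps it visualises are slightly smaller than the input image because, with stride 1 and no padding, each spatial dimension shrinks by `kernel_size - 1`. The check below assumes the same layer setup as the notebook's `Net` (1 input channel, 4 filters, 4x4 kernels, no bias); the 213x320 input is a dummy size, not the actual test image.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 4, kernel_size=4, bias=False)  # same setup as the notebook's layer
x = torch.randn(1, 1, 213, 320)                    # dummy grayscale image, NCHW layout
print(conv(x).shape)                               # torch.Size([1, 4, 210, 317]): each side shrinks by 3
```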
e7d6f2303cba30db8b9baeccfb2b2187ad6d50b4
15,111
ipynb
Jupyter Notebook
numpy/numpy_lab_manual.ipynb
wahmed555/PhD_learning
aac84b34eeb44bb35817554bf10176ee3b548239
[ "MIT" ]
null
null
null
numpy/numpy_lab_manual.ipynb
wahmed555/PhD_learning
aac84b34eeb44bb35817554bf10176ee3b548239
[ "MIT" ]
null
null
null
numpy/numpy_lab_manual.ipynb
wahmed555/PhD_learning
aac84b34eeb44bb35817554bf10176ee3b548239
[ "MIT" ]
null
null
null
26.28
86
0.416518
[ [ [ "#A\nimport numpy as np", "_____no_output_____" ], [ "#B\narr1=np.zeros((2,3,4))\narr1", "_____no_output_____" ], [ "#C\narr2=np.ones((2,3,4))\narr2", "_____no_output_____" ], [ "#D\narr3=np.arange(0,1000)\narr3", "_____no_output_____" ], [ "#E\nli1=[2,3.2,5.5,-6.4,-2.2,2.4]\na=np.array(li1)\nprint(a)\ntype(a)\n", "[ 2. 3.2 5.5 -6.4 -2.2 2.4]\n" ], [ "#F\nprint(a[1])", "3.2\n" ], [ "#G\nprint(a[1:4])", "[ 3.2 5.5 -6.4]\n" ], [ "#H\nli2=[[2,3.2,5.5,-6.4,-2.2,2.4],\n [1,22,4,0.1,5.3,-9],\n [3,1,2.1,21,1.1,-2]]\na=np.array(li2)\na.ndim", "_____no_output_____" ], [ "#I\nprint(a[:,2])\nprint(a[1:4,0:4])\nprint(a[1:,2])", "[5.5 4. 2.1]\n[[ 1. 22. 4. 0.1]\n [ 3. 1. 2.1 21. ]]\n[4. 2.1]\n" ], [ "#J\narr4=np.arange(4)\narr5=np.arange(10,14)\narr6=np.array((arr4,arr5))\nprint(arr6.shape)\nprint(arr6.size)\nprint(arr6.max())\nprint(arr6.min())\n\n\n\n", "(2, 4)\n8\n13\n0\n" ], [ "#K-1\narr6.reshape(2,2,2)", "_____no_output_____" ], [ "#K-2\narr6.T", "_____no_output_____" ], [ "#K-3\narr6.flatten()", "_____no_output_____" ], [ "#K-4\narr6.astype('float')", "_____no_output_____" ], [ "#L\narr7=np.arange(1,21)\narr7", "_____no_output_____" ], [ "#M\nli3=[]\n\nfor i in arr7:\n if i % 3==0 and i > 0 and i < 30:\n li3.append(i)\narr8=np.array([li3])\narr8", "_____no_output_____" ], [ "#N\narr9=np.linspace(0,1,8)\narr9", "_____no_output_____" ], [ "#O\narr10=np.arange(1,9)\nA=arr10.reshape(2,4)\nprint(A)", "[[1 2 3 4]\n [5 6 7 8]]\n" ], [ "#P\narr11=np.array([1,2])\nB=arr11\nprint(B)", "[1 2]\n" ], [ "#Q\nB=B[:,np.newaxis]\nA+B", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d7073143e2bc23eab8aa7d75e1b8163890e34f
139,577
ipynb
Jupyter Notebook
16_CVM_Os_Melhores_e_os_Piores_Fundos_de_Investimento_do_mes_Python_para_Investimentos.ipynb
alcebytes/python_para_investimentos
7ad3590bc1a493dfd7f79a8e5954e7a07f2fb72d
[ "MIT" ]
null
null
null
16_CVM_Os_Melhores_e_os_Piores_Fundos_de_Investimento_do_mes_Python_para_Investimentos.ipynb
alcebytes/python_para_investimentos
7ad3590bc1a493dfd7f79a8e5954e7a07f2fb72d
[ "MIT" ]
null
null
null
16_CVM_Os_Melhores_e_os_Piores_Fundos_de_Investimento_do_mes_Python_para_Investimentos.ipynb
alcebytes/python_para_investimentos
7ad3590bc1a493dfd7f79a8e5954e7a07f2fb72d
[ "MIT" ]
null
null
null
40.907679
325
0.305674
[ [ [ "<a href=\"https://colab.research.google.com/github/ricospeloacaso/python_para_investimentos/blob/master/16_CVM_Os_Melhores_e_os_Piores_Fundos_de_Investimento_do_mes_Python_para_Investimentos.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "###Ricos pelo Acaso", "_____no_output_____" ], [ "* Link para o vídeo: https://youtu.be/NHCUUZOvk7k\n---\n* Base de Dados: http://dados.cvm.gov.br/\n\n", "_____no_output_____" ], [ "###Coletando os dados da CVM", "_____no_output_____" ] ], [ [ "import pandas as pd\npd.set_option(\"display.max_colwidth\", 150)\n#pd.options.display.float_format = '{:.2f}'.format", "_____no_output_____" ] ], [ [ ">Funções que buscam dados no site da CVM e retornam um DataFrame Pandas:\n", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ], [ "def busca_informes_cvm(ano, mes):\n url = 'http://dados.cvm.gov.br/dados/FI/DOC/INF_DIARIO/DADOS/inf_diario_fi_{:02d}{:02d}.csv'.format(ano,mes)\n return pd.read_csv(url, sep=';')", "_____no_output_____" ], [ "def busca_cadastro_cvm(ano, mes, dia):\n url = 'http://dados.cvm.gov.br/dados/FI/CAD/DADOS/inf_cadastral_fi_{}{:02d}{:02d}.csv'.format(ano, mes, dia)\n return pd.read_csv(url, sep=';', encoding='ISO-8859-1')", "_____no_output_____" ] ], [ [ ">Buscando dados no site da CVM", "_____no_output_____" ] ], [ [ "informes_diarios = busca_informes_cvm(2020,4)", "_____no_output_____" ], [ "informes_diarios", "_____no_output_____" ], [ "cadastro_cvm = busca_cadastro_cvm(2020,5,1)", "_____no_output_____" ], [ "cadastro_cvm", "_____no_output_____" ] ], [ [ "###Manipulando os dados da CVM", "_____no_output_____" ], [ ">Definindo filtros para os Fundos de Investimento\n", "_____no_output_____" ] ], [ [ "minimo_cotistas = 100", "_____no_output_____" ] ], [ [ ">Manipulando os dados e aplicando filtros", "_____no_output_____" ] ], [ [ "fundos = informes_diarios[informes_diarios['NR_COTST'] >= minimo_cotistas].pivot(index='DT_COMPTC', columns='CNPJ_FUNDO', values=['VL_TOTAL',\t'VL_QUOTA',\t'VL_PATRIM_LIQ',\t'CAPTC_DIA',\t'RESG_DIA'])", "_____no_output_____" ], [ "fundos", "_____no_output_____" ] ], [ [ ">Normalizando os dados de cotas para efeitos comparativos", "_____no_output_____" ] ], [ [ "cotas_normalizadas = fundos['VL_QUOTA'] / fundos['VL_QUOTA'].iloc[0]", "_____no_output_____" ], [ "cotas_normalizadas", "_____no_output_____" ] ], [ [ "###Fundos de Investimento com os melhores desempenhos em Abril de 2020", "_____no_output_____" ] ], [ [ "melhores = pd.DataFrame()\nmelhores['retorno(%)'] = (cotas_normalizadas.iloc[-1].sort_values(ascending=False)[:5] - 1) * 100\nmelhores", "_____no_output_____" ] ], [ [ ">Buscando dados dos Fundos de Investimento pelo CNPJ", "_____no_output_____" ] ], [ [ "for cnpj in melhores.index:\n fundo = cadastro_cvm[cadastro_cvm['CNPJ_FUNDO'] == cnpj]\n melhores.at[cnpj, 'Fundo de Investimento'] = fundo['DENOM_SOCIAL'].values[0]\n melhores.at[cnpj, 'Classe'] = fundo['CLASSE'].values[0]\n melhores.at[cnpj, 'PL'] = fundo['VL_PATRIM_LIQ'].values[0]", "_____no_output_____" ], [ "melhores", "_____no_output_____" ] ], [ [ "###Fundos de Investimento com os piores desempenhos em Abril de 2020", "_____no_output_____" ] ], [ [ "piores = pd.DataFrame()\npiores['retorno(%)'] = (cotas_normalizadas.iloc[-1].sort_values(ascending=True)[:5] - 1) * 100\npiores", "_____no_output_____" ] ], [ [ ">Buscando dados dos Fundos de Investimento pelo CNPJ", "_____no_output_____" ] ], [ [ "for cnpj in piores.index:\n fundo = 
cadastro_cvm[cadastro_cvm['CNPJ_FUNDO'] == cnpj]\n piores.at[cnpj, 'Fundo de Investimento'] = fundo['DENOM_SOCIAL'].values[0]\n piores.at[cnpj, 'Classe'] = fundo['CLASSE'].values[0]\n piores.at[cnpj, 'PL'] = fundo['VL_PATRIM_LIQ'].values[0]", "_____no_output_____" ], [ "piores", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7d708b5f7424636276e3d1110e0d4f8c1bf9bc5
3,440
ipynb
Jupyter Notebook
classification-comparison/preparation/prepare-dataset.ipynb
huseinzol05/Tensorflow-NLP-Models
0741216aa8235e1228b3de7903cc36d73f8f2b45
[ "MIT" ]
1,705
2018-11-03T17:34:22.000Z
2022-03-29T04:30:01.000Z
classification-comparison/preparation/prepare-dataset.ipynb
eridgd/NLP-Models-Tensorflow
d46e746cd038f25e8ee2df434facbe12e31576a1
[ "MIT" ]
26
2019-03-16T17:23:00.000Z
2021-10-08T08:06:09.000Z
classification-comparison/preparation/prepare-dataset.ipynb
eridgd/NLP-Models-Tensorflow
d46e746cd038f25e8ee2df434facbe12e31576a1
[ "MIT" ]
705
2018-11-03T17:34:25.000Z
2022-03-24T02:29:14.000Z
29.401709
232
0.509884
[ [ [ "import numpy as np\nimport os\nimport re\nimport pickle", "_____no_output_____" ], [ "def clearstring(string):\n string = re.sub('[^\\'\\\"A-Za-z0-9 ]+', '', string)\n string = string.split(' ')\n string = filter(None, string)\n string = [y.strip() for y in string]\n string = [y for y in string if len(y) > 3 and y.find('nbsp') < 0]\n return ' '.join(string)\n\ndef read_data(location):\n list_folder = os.listdir(location)\n label = list_folder\n label.sort()\n outer_string, outer_label = [], []\n for i in range(len(list_folder)):\n list_file = os.listdir('data/' + list_folder[i])\n strings = []\n for x in range(len(list_file)):\n with open('data/' + list_folder[i] + '/' + list_file[x], 'r') as fopen:\n strings += fopen.read().split('\\n')\n strings = list(filter(None, strings))\n for k in range(len(strings)):\n strings[k] = clearstring(strings[k])\n labels = [i] * len(strings)\n outer_string += strings\n outer_label += labels\n \n dataset = np.array([outer_string, outer_label])\n dataset = dataset.T\n np.random.shuffle(dataset)\n \n return dataset", "_____no_output_____" ], [ "dataset = read_data('data/')\ndataset[:5,:]", "_____no_output_____" ], [ "with open('dataset-emotion.p', 'wb') as fopen:\n pickle.dump(dataset, fopen)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e7d719a3387af94c2132a782dbda9a2efb70bf44
766,843
ipynb
Jupyter Notebook
1_1_Image_Representation/5_1. HSV Color Space, Balloons.ipynb
Abdulrahman-Adel/CVND-Exercises
ec8618e1651b5302c37788b2383620d143fdd8e3
[ "MIT" ]
null
null
null
1_1_Image_Representation/5_1. HSV Color Space, Balloons.ipynb
Abdulrahman-Adel/CVND-Exercises
ec8618e1651b5302c37788b2383620d143fdd8e3
[ "MIT" ]
null
null
null
1_1_Image_Representation/5_1. HSV Color Space, Balloons.ipynb
Abdulrahman-Adel/CVND-Exercises
ec8618e1651b5302c37788b2383620d143fdd8e3
[ "MIT" ]
null
null
null
2,419.063091
257,704
0.960273
[ [ [ "# HSV Color Space, Balloons", "_____no_output_____" ], [ "### Import resources and display image", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport cv2\n", "_____no_output_____" ], [ "%matplotlib inline\n\n# Read in the image\nimage = cv2.imread('images/water_balloons.jpg')\n\n# Make a copy of the image\nimage_copy = np.copy(image)\n\n# Change color to RGB (from BGR)\nimage = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)\n\nplt.imshow(image)", "_____no_output_____" ] ], [ [ "### Plot color channels", "_____no_output_____" ] ], [ [ "# RGB channels\nr = image[:,:,0]\ng = image[:,:,1]\nb = image[:,:,2]\n\nf, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))\n\nax1.set_title('Red')\nax1.imshow(r, cmap='gray')\n\nax2.set_title('Green')\nax2.imshow(g, cmap='gray')\n\nax3.set_title('Blue')\nax3.imshow(b, cmap='gray')\n", "_____no_output_____" ], [ "# Convert from RGB to HSV\nhsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)\n\n# HSV channels\nh = hsv[:,:,0]\ns = hsv[:,:,1]\nv = hsv[:,:,2]\n\nf, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))\n\nax1.set_title('Hue')\nax1.imshow(h, cmap='gray')\n\nax2.set_title('Saturation')\nax2.imshow(s, cmap='gray')\n\nax3.set_title('Value')\nax3.imshow(v, cmap='gray')\n", "_____no_output_____" ] ], [ [ "### Define pink and hue selection thresholds", "_____no_output_____" ] ], [ [ "# Define our color selection criteria in HSV values\nlower_hue = np.array([160,0,0]) \nupper_hue = np.array([180,255,255])\n", "_____no_output_____" ], [ "# Define our color selection criteria in RGB values\nlower_pink = np.array([180,0,100]) \nupper_pink = np.array([255,255,230])", "_____no_output_____" ] ], [ [ "### Mask the image ", "_____no_output_____" ] ], [ [ "# Define the masked area in RGB space\nmask_rgb = cv2.inRange(image, lower_pink, upper_pink)\n\n# mask the image\nmasked_image = np.copy(image)\nmasked_image[mask_rgb==0] = [0,0,0]\n\n# Vizualize the mask\nplt.imshow(masked_image)", "_____no_output_____" ], [ "# Now try HSV!\n\n# Define the masked area in HSV space\nmask_hsv = cv2.inRange(hsv, lower_hue, upper_hue)\n\n# mask the image\nmasked_image = np.copy(image)\nmasked_image[mask_hsv==0] = [0,0,0]\n\n# Vizualize the mask\nplt.imshow(masked_image)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
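A note on the hue window used in the record above: OpenCV stores hue as degrees divided by two, so it runs from 0 to 179, and pinks/reds sit near the top of that range, which is why [160, 180] isolates the pink balloons. One quick way to pick such thresholds, sketched here as an editorial aside, is to convert a single representative pixel colour to HSV and read off its hue; the RGB value below is only an example.

```python
import numpy as np
import cv2

pink_rgb = np.uint8([[[255, 0, 100]]])              # any representative pink, as a 1x1 RGB "image"
pink_hsv = cv2.cvtColor(pink_rgb, cv2.COLOR_RGB2HSV)
print(pink_hsv)                                     # [[[168 255 255]]] -> hue ~168, inside [160, 180]
```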
e7d7212078b0df08ab401d63f1d5f37e923f57ad
140,790
ipynb
Jupyter Notebook
Phase 3/Code/FinalProject.ipynb
Python-Charmer/Final-Project-Team-Python-Charmer
7aba19d201fdb981953a5f13f1b828d952d843de
[ "MIT" ]
1
2018-11-06T22:51:28.000Z
2018-11-06T22:51:28.000Z
Phase 3/Code/FinalProject.ipynb
Python-Charmer/Final-Project-Team-Python-Charmer
7aba19d201fdb981953a5f13f1b828d952d843de
[ "MIT" ]
null
null
null
Phase 3/Code/FinalProject.ipynb
Python-Charmer/Final-Project-Team-Python-Charmer
7aba19d201fdb981953a5f13f1b828d952d843de
[ "MIT" ]
null
null
null
209.197623
29,378
0.853384
[ [ [ "<a href=\"https://colab.research.google.com/github/Python-Charmer/Final-Project-Team-Python-Charmer/blob/master/Phase%203/Code/FinalProject.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Python Final Project - Team Python Charmers\n", "_____no_output_____" ] ], [ [ "# Loading Packages\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.cluster import KMeans\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import make_pipeline", "_____no_output_____" ], [ "# Loading Data From Source.\ndef load_data():\n url = r'https://raw.githubusercontent.com/Python-Charmer/Final-Project-Team-Python-Charmer/master/Phase1/Data/BreastCancerWisconsin.csv'\n df = pd.read_csv(url)\n names = ['Scn','A2','A3','A4','A5','A6','A7', 'A8','A9','A10','Class']\n df.columns = names\n return df", "_____no_output_____" ], [ "# Understanding Missing Values\ndef clean_missing(df):\n df['A7'] = df['A7'].replace('?',np.NaN)\n df['A7'] = pd.to_numeric(df['A7'])\n print(\"Below are how many missing values for each column\\n\")\n print(df.isnull().sum())\n print(\"\\nCleaning missing values with column means\\n\")\n df = df.fillna(round(df.mean(skipna = True),2))\n print(df.isnull().sum())\n return df\n ", "_____no_output_____" ], [ "# Calculating Summary Metrics\ndef sum_metrics(df):\n print(\"\\n Below are the summary metrics of the data \\n\" + str(df.describe()))\n print (\"\\n\\nThere are \" + str(df.shape[0]) + \" rows and \" + str(df.shape[1]) + \" Columns in this data frame\")\n print(\"\\nThere are \" + str(len(df['Scn'].unique())) + \" unique scn values in the dataset.\\n\")\n print(\"Below are the duplicate rows in the dataset.\\n\")\n print(str(df.loc[df.duplicated(), :]) + \"\\n\")", "_____no_output_____" ], [ "# Plotting graphs\ndef plot_graphs(df):\n print(\"\\nBelow are the histograms of A2:A10 \\n\")\n df.iloc[:,1:10].hist(bins = 8, color=\"blue\", grid=\"False\",alpha = .5, figsize=(12,6))\n plt.tight_layout(rect=(0,0,1.2,1.2))\n plt.show()\n df['Class'].value_counts().plot.bar().set_title(\"Class Variable: 2 = Benign 4 = Malignant\")\n df.plot.scatter(x='A3', y='A4').set_title(\"Scatter of A3 & A4 90% corr\")\n", "_____no_output_____" ], [ "# We are getting centers for K = 4 clusters\ndef get_mids(X):\n clss = KMeans(n_clusters = 4) \n clss.fit(X)\n cent = clss.cluster_centers_\n print(\"\\n Below are the centers of K = 4 clusters \\n\")\n print(pd.DataFrame(cent ,columns = X.columns))", "_____no_output_____" ], [ "# We are plotting intertia plot to find optimal K\ndef find_optimal_K(X):\n print(\"\\n Below is the intertia chart \\n\")\n inertia = []\n k = []\n for i in range(1,15):\n clss = KMeans(n_clusters = i) \n clss.fit(X)\n iner = clss.inertia_\n k.append(i)\n inertia.append(iner)\n res = pd.concat([pd.DataFrame(k), pd.DataFrame(inertia)],axis = 1)\n res.columns = ['K','Inertia']\n ax = res.plot(\"K\",marker='o', linestyle='dashed', title = \"Optimal K = 2\" )\n ax.set_xlabel(\"Number of Clusters\")\n ax.set_ylabel(\"Inertia\")\n \n \n", "_____no_output_____" ], [ "# Plotting SD plot to understand the data variance\ndef sd_plot(X):\n dt = pd.DataFrame(X.std()).sort_values(by = 0, ascending = False)\n dt.reset_index()\n fig, ay = plt.subplots()\n x_val = dt.index\n y_val = dt[0].values\n ay.bar(x = x_val, height = y_val)\n ay.set_xlabel(\"Features\")\n ay.set_ylabel(\"Standard Deviation\")\n ay.set_title(\"Standard Deviation 
Plot\")\n\n\n# Plotting Box plot to understand the data variance\ndef var_plot(df):\n # Box plot showing variation of the columns A2:A10\n data = []\n for i in range(1, 10):\n data.append(df.iloc[:, i])\n\n # Multiple box plots on one Axes\n fig, ax = plt.subplots()\n plt.title(\"Boxplot showing Variation of Features\")\n plt.xlabel(\"Columns A2 thru A10\")\n plt.ylabel(\"Values\")\n ax.boxplot(data, 0,showbox=True,showmeans=True)\n top = 12\n bottom = -2\n ax.set_ylim(bottom, top)\n ax.set_xticklabels(df.iloc[:,1:-1].columns, rotation=45, fontsize=8)\n plt.show()", "_____no_output_____" ], [ "#Getting centers of optimal K = 2\ndef get_centers(X):\n print(\"\\n Below are the centers of K = 2 clusters \\n \\n\")\n mdl = make_pipeline(StandardScaler(), KMeans(n_clusters = 2, n_init=20))\n mdl.fit(X)\n centers = pd.DataFrame(mdl.named_steps['kmeans'].cluster_centers_)\n centers.columns = X.columns\n print(centers)\n\n", "_____no_output_____" ], [ "# Cross tabulating the cluster labels with \"Class\"\ndef lables(i,df):\n print(\"\\nBelow are the predicted labels with k = \" + str(i) + \"\\n\")\n if i == 4:\n mdl = KMeans(n_clusters = i)\n else:\n mdl = make_pipeline(StandardScaler(), KMeans(n_clusters = i, n_init=20))\n labels = mdl.fit_predict(df.iloc[:,1:-1])\n ctf = pd.DataFrame({'labels': labels, 'Class': df[\"Class\"]})\n print(pd.crosstab(ctf['labels'], ctf['Class']))", "_____no_output_____" ], [ "# Main Function Phase 1\ndf = load_data()\ndf = clean_missing(df)\nsum_metrics(df)\nplot_graphs(df)\nprint(\"The columns that need standardization are: A7,A3,& A9 because they have the highest amount of variance compared to other factors.\")\n", "Below are how many missing values for each column\n\nScn 0\nA2 0\nA3 0\nA4 0\nA5 0\nA6 0\nA7 16\nA8 0\nA9 0\nA10 0\nClass 0\ndtype: int64\n\nCleaning missing values with column means\n\nScn 0\nA2 0\nA3 0\nA4 0\nA5 0\nA6 0\nA7 0\nA8 0\nA9 0\nA10 0\nClass 0\ndtype: int64\n\n Below are the summary metrics of the data \n Scn A2 A3 A4 A5 \\\ncount 6.990000e+02 699.000000 699.000000 699.000000 699.000000 \nmean 1.071704e+06 4.417740 3.134478 3.207439 2.806867 \nstd 6.170957e+05 2.815741 3.051459 2.971913 2.855379 \nmin 6.163400e+04 1.000000 1.000000 1.000000 1.000000 \n25% 8.706885e+05 2.000000 1.000000 1.000000 1.000000 \n50% 1.171710e+06 4.000000 1.000000 1.000000 1.000000 \n75% 1.238298e+06 6.000000 5.000000 5.000000 4.000000 \nmax 1.345435e+07 10.000000 10.000000 10.000000 10.000000 \n\n A6 A7 A8 A9 A10 Class \ncount 699.000000 699.000000 699.000000 699.000000 699.000000 699.000000 \nmean 3.216023 3.544549 3.437768 2.866953 1.589413 2.689557 \nstd 2.214300 3.601852 2.438364 3.053634 1.715078 0.951273 \nmin 1.000000 1.000000 1.000000 1.000000 1.000000 2.000000 \n25% 2.000000 1.000000 2.000000 1.000000 1.000000 2.000000 \n50% 2.000000 1.000000 3.000000 1.000000 1.000000 2.000000 \n75% 4.000000 5.000000 5.000000 4.000000 1.000000 4.000000 \nmax 10.000000 10.000000 10.000000 10.000000 10.000000 4.000000 \n\n\nThere are 699 rows and 11 Columns in this data frame\n\nThere are 645 unique scn values in the dataset.\n\nBelow are the duplicate rows in the dataset.\n\n Scn A2 A3 A4 A5 A6 A7 A8 A9 A10 Class\n208 1218860 1 1 1 1 1 1.0 3 1 1 2\n253 1100524 6 10 10 2 8 10.0 7 3 3 4\n254 1116116 9 10 10 1 10 8.0 3 3 1 4\n258 1198641 3 1 1 1 2 1.0 3 1 1 2\n272 320675 3 3 5 2 3 10.0 7 1 1 4\n338 704097 1 1 1 1 1 1.0 2 1 1 2\n561 1321942 5 1 1 1 2 1.0 3 1 1 2\n684 466906 1 1 1 1 2 1.0 1 1 1 2\n\n\nBelow are the histograms of A2:A10 \n\n" ], [ "#Main Functions Phase 2\nX = 
df.drop(['Scn','Class'], axis = 1)\ny = df['Class']\nget_mids(X)\nlables(4,df)\nfind_optimal_K(X)\nsd_plot(X)\nvar_plot(df)\nprint('\\n Based on the Box and SD plot above we can see features A7,A9 has the most variations.\\n')\nget_centers(X)\nlables(2,df)\n", "\n Below are the centers of K = 4 clusters \n\n A2 A3 A4 A5 A6 A7 A8 \\\n0 7.204082 4.846939 5.010204 4.816327 4.071429 9.158571 5.224490 \n1 2.984716 1.266376 1.386463 1.312227 2.054585 1.352576 2.080786 \n2 6.721519 8.367089 8.405063 7.810127 6.734177 9.227848 7.367089 \n3 7.562500 7.421875 7.062500 4.250000 5.875000 3.619063 5.562500 \n\n A9 A10 \n0 3.795918 1.642857 \n1 1.213974 1.102620 \n2 7.822785 3.822785 \n3 7.156250 2.234375 \n\nBelow are the predicted labels with k = 4\n\nClass 2 4\nlabels \n0 7 64\n1 444 10\n2 7 87\n3 0 80\n\n Below is the intertia chart \n\n" ], [ "#Main Phase 3\nmdl = make_pipeline(StandardScaler(), KMeans(n_clusters = 2, n_init=20, max_iter = 500))\nlabels = mdl.fit_predict(X)\ndf['Predicted'] = labels\n\nfor x in range(df.shape[0]):\n if df.iloc[x,11] == 0:\n df.iloc[x,11] = 2\n else:\n df.iloc[x,11] = 4\n \nprint(\"\\nBelow are the first 15 rows of the dataframe \\n\")\nprint(df.head(15))\n\nprint(\"\\nBelow are the observtions where the predicted did not match the class \\n\")\n\nprint(df[df['Class'] != df['Predicted']])\n\n\ndef error_rate(predicted,actual):\n tab = pd.crosstab(actual,predicted)\n error2 = tab.iloc[0,1]\n total2 = tab.iloc[0,0] + tab.iloc[1,0]\n \n error4 = tab.iloc[1,0]\n total4 = tab.iloc[0,1] + tab.iloc[1,1]\n \n B = str(round(error2/total2,4)*100) + \"%\"\n M = str(round(error4/total4,4)*100) + \"%\"\n tot_error = str(round((error2 + error4)/(total2 + total4),4)*100) + \"%\"\n \n print(\"\\nThe error rate for beningn cells is \" + str(B) + \"\\n\")\n print(\"The error rate for malignent cells is \" +str(M) + \"\\n\")\n print(\"The total error rate is \" +str(tot_error) + \"\\n\")\n \nerror_rate(df['Predicted'], df['Class'])", "\nBelow are the first 15 rows of the dataframe \n\n Scn A2 A3 A4 A5 A6 A7 A8 A9 A10 Class Predicted\n0 1000025 5 1 1 1 2 1.0 3 1 1 2 2\n1 1002945 5 4 4 5 7 10.0 3 2 1 2 4\n2 1015425 3 1 1 1 2 2.0 3 1 1 2 2\n3 1016277 6 8 8 1 3 4.0 3 7 1 2 4\n4 1017023 4 1 1 3 2 1.0 3 1 1 2 2\n5 1017122 8 10 10 8 7 10.0 9 7 1 4 4\n6 1018099 1 1 1 1 2 10.0 3 1 1 2 2\n7 1018561 2 1 2 1 2 1.0 3 1 1 2 2\n8 1033078 2 1 1 1 2 1.0 1 1 5 2 2\n9 1033078 4 2 1 1 2 1.0 2 1 1 2 2\n10 1035283 1 1 1 1 1 1.0 3 1 1 2 2\n11 1036172 2 1 1 1 2 1.0 2 1 1 2 2\n12 1041801 5 3 3 3 2 3.0 4 4 1 4 2\n13 1043999 1 1 1 1 2 3.0 3 1 1 2 2\n14 1044572 8 7 5 10 7 9.0 5 5 4 4 4\n\nBelow are the observtions where the predicted did not match the class \n\n Scn A2 A3 A4 A5 A6 A7 A8 A9 A10 Class Predicted\n1 1002945 5 4 4 5 7 10.00 3 2 1 2 4\n3 1016277 6 8 8 1 3 4.00 3 7 1 2 4\n12 1041801 5 3 3 3 2 3.00 4 4 1 4 2\n25 1065726 5 2 3 4 2 7.00 3 6 1 4 2\n40 1096800 6 6 6 9 6 3.54 7 8 1 2 4\n51 1108449 5 3 3 4 2 4.00 3 4 1 4 2\n57 1113038 8 2 4 1 5 1.00 5 4 4 4 2\n58 1113483 5 2 3 1 6 10.00 5 1 1 4 2\n59 1113906 9 5 5 2 2 2.00 5 1 1 4 2\n63 1116132 6 3 4 1 5 2.00 3 9 1 4 2\n101 1167439 2 3 4 4 2 5.00 2 5 1 4 2\n103 1168359 8 2 3 1 6 3.00 7 1 1 4 2\n146 1185609 3 4 5 2 6 8.00 4 1 1 4 2\n179 1202812 5 3 3 3 6 10.00 3 1 1 4 2\n196 1213375 8 4 4 5 4 7.00 7 8 2 2 4\n222 1226012 4 1 1 3 1 5.00 2 1 1 4 2\n247 145447 8 4 4 1 2 9.00 3 3 1 4 2\n252 1017023 6 3 3 5 3 10.00 3 5 3 2 4\n259 242970 5 7 7 1 5 8.00 3 4 1 2 4\n273 428903 7 2 4 1 3 4.00 3 3 1 4 2\n296 616240 5 3 4 3 4 5.00 4 7 1 2 4\n315 704168 4 6 5 6 7 3.54 4 9 1 
2 4\n319 721482 4 4 4 4 6 5.00 7 3 1 2 4\n326 752904 10 1 1 1 2 10.00 5 4 1 4 2\n348 832226 3 4 4 10 5 1.00 3 3 1 4 2\n352 846832 3 4 5 3 7 3.00 4 6 1 2 4\n356 859164 5 3 3 1 3 3.00 3 3 3 4 2\n434 1293439 6 9 7 5 5 8.00 4 2 1 2 4\n455 1246562 10 2 2 1 2 6.00 1 1 2 4 2\n489 1084139 6 3 2 1 3 4.00 4 1 1 4 2\n657 1333877 5 4 5 1 8 1.00 3 6 1 2 4\n\nThe error rate for beningn cells is 2.58%\n\nThe error rate for malignent cells is 8.12%\n\nThe total error rate is 4.43%\n\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d7277664befbfb2b23100390e1013e90c4775e
6,743
ipynb
Jupyter Notebook
Chapter 2 - Data Processing.ipynb
Ludovico1979/Corso_ML
8a30d0d8066b518c9dfd2333eb7c627ee9283d60
[ "MIT" ]
null
null
null
Chapter 2 - Data Processing.ipynb
Ludovico1979/Corso_ML
8a30d0d8066b518c9dfd2333eb7c627ee9283d60
[ "MIT" ]
null
null
null
Chapter 2 - Data Processing.ipynb
Ludovico1979/Corso_ML
8a30d0d8066b518c9dfd2333eb7c627ee9283d60
[ "MIT" ]
null
null
null
22.254125
216
0.499481
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7d72f749acc63309ec9e3cb5721a8f6ee6ca92d
28,514
ipynb
Jupyter Notebook
MoreNotebooks/Skyfit.ipynb
mahdiqezlou/FlyingCircus
42de8f21fef66058c617cc7e9e4144d74cb861c9
[ "MIT" ]
5
2021-02-25T20:42:53.000Z
2021-06-16T15:17:10.000Z
MoreNotebooks/Skyfit.ipynb
mahdiqezlou/FlyingCircus
42de8f21fef66058c617cc7e9e4144d74cb861c9
[ "MIT" ]
1
2021-06-16T03:13:18.000Z
2021-06-16T03:13:18.000Z
MoreNotebooks/Skyfit.ipynb
mahdiqezlou/FlyingCircus
42de8f21fef66058c617cc7e9e4144d74cb861c9
[ "MIT" ]
3
2020-06-16T05:32:35.000Z
2021-12-20T19:01:20.000Z
55.474708
986
0.649821
[ [ [ "# Putting it All Together\n\nThis notebook is a case study in working with python and several modules. It's a real problem I had to solve with real data. There are *many* ways to attack a problem such as this; this is simply one way. The point is to illustrate how you can get existing modules to do the heavy-lifting for you and that visualization is a powerful diagnostic tool. Try not to get caught up in the details of the model; it's quite complex and the point is not to understand all the equations, but the *procedure* of exploring data and fitting it to a model (read the citation if you're really interested all the gory details).\n\nThis notebook requires the following modules:\n* `numpy`: dealing with arrays of numbers and mathematics\n* `scipy`: collection of scientific algorithms\n* `matplotlib`: de-facto plotting module\n* `pandas`: module for organizing arrays of number into tables\n* `bokeh`: another module for plotting, with emphasis on interactive visualization\n\nThe problem I needed to solve: predict the background sky brightness caused by the moon at a given location in the sky on a given date. This is to help plan observations at the telescope. As with all problems of this type, we need to do several things:\n\n* Download/import/munge training data\n* Model the training data\n* Extract model parameters\n* Graph the result(s) to see how well we do, maybe modify the model\n* Use final model and parameters to make future predictions\n\n### 1) The Data\n\nIn this case, the data to model is roughly 10 years of photometry from the Carnegie Supernova Project (CSP). Each and every measurement of the flux from a standard star has an associated estimate of the sky background (which must be subtracted from the raw counts of the star). These data were taken over many different times of the month and a many different sky altitudes, so are ideal for this problem.\n\nLet's start by getting the data. For convenience, this has been included in the `data` folder and so we can load it up immediately into a `pandas` dataframe.", "_____no_output_____" ] ], [ [ "import pandas as pd\ndata = pd.read_csv('data/skyfit.dat')", "_____no_output_____" ] ], [ [ "We can take a quick look at what's in this `DataFrame` by printing out the first few rows.", "_____no_output_____" ] ], [ [ "print(data[0:10])", "_____no_output_____" ] ], [ [ "The column `jd` is the [Julian Day](https://en.wikipedia.org/wiki/Julian_day), a common numerical representation of the date, `RA` and `Decl` are the sky coordiates of the field, and `magsky` is the sky brighness. Let's have a look at the distribution of sky brightnesses to make sure they \"make sense\". The units should be magnitudes per square-arc-second and be on order of 22 or so, but should be smaller for bright time (full moon). Since we're just doing a quick-look, we can use `pandas`' built-in histogram plotter.", "_____no_output_____" ] ], [ [ "%matplotlib inline\ndata.hist('magsky', bins=50)", "_____no_output_____" ] ], [ [ "As you can see, there is peak near 22 mag/square-arc-sec, as expected, but a broader peak at brighter backgrounds. We expect this is due to moonlight. Something to think about: why would this be bi-modal?\n\nWe expect that the fuller the moon, the brighter it will be and the closer the observation is to the moon on the sky, the higher the background. So whatever model we use is going to require knowledge of the moon's position and phase. 
There are mathematical formulae for calculating these, but we'll use the handy `astropy.coordinates` module to do all the work for us. First, let's compute the lunar phase for each date in our table. To do this, we need the position of the moon and the sun at these times.", "_____no_output_____" ] ], [ [ "from astropy.coordinates import get_moon, get_sun\nfrom astropy.time import Time\ntimes = Time(data['jd'], format='jd') # makes an array of astropy.Time objects\nmoon = get_moon(times) # makes an array of moon positions\nsun = get_sun(times) # makes an array of sun positions", "_____no_output_____" ] ], [ [ "Currently, `astropy.coordinates` does not have a lunar phase function, so we'll just use the angular separation between the sun and moon as a proxy. If the angular separation is 0 degrees, that's new moon, whereas an angular separation of 180 degrees is full moon. Other phases lie in between. `moon` and `sun` are arrays of `SkyCoord` objects that have many useful tools for computing sky posisitions. Here we'll use the `separation()` function, which computes the angular separation on the sky between two objects:", "_____no_output_____" ] ], [ [ "seps = moon.separation(sun) # angular separation from moon to sun\ndata['phase'] = pd.Series(seps, index=data.index) # Add this new parameter to the data frame", "_____no_output_____" ] ], [ [ "Now that we have the phase information, let's see if our earlier hypothesis about the moon being a source of background light is valid. We'll plot one versus the other, again using the `pandas` built-in plotting functionality.", "_____no_output_____" ] ], [ [ "data.plot.scatter('phase','magsky')", "_____no_output_____" ] ], [ [ "Great! There's a definite trend there, but also some interesting patterns. Remember these are magnitudes per square arc-second, so brighter sky is down, not up. We can also split up the data based on the phase and plot the resulting histograms together. You can run this next snippet of code with different `phasecut` values to see how they separate out. We use `matplotlib`'s `gca` function to \"get the current axis\", allowing us to over-plot two histograms.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nphasecut = 90.\nres = data[data.phase>phasecut].hist('magsky', bins=50, label='> {:.2f} degrees'.format(phasecut), alpha=0.7)\nax = plt.gca()\nres = data[data.phase<phasecut].hist('magsky', ax=ax, bins=50, label='< {:.2f} degrees'.format(phasecut), alpha=0.7)\nplt.legend(loc='upper left')", "_____no_output_____" ] ], [ [ "Success! It definitely looksl like scattered moonlight is responsible for the bulk of the added sky brightness. But there's also a portion of data where the moon was bright but the sky was still dark. There's more to it than just phase. Now we turn to the task of fitting a model to this.\n\n### 2) The Model\n\nTurns out that the definitive reference for this was authored by a colleague of mine: Kevin Krisciunas at Texas A&M. His paper can be found at the ADS abstract service: http://adsabs.harvard.edu/abs/1991PASP..103.1033K\n\nYou can read the details (lots of empirical formulas, light-scattering theory, and unit conversions), but the short of it is that we get a predictive model of the sky-brightness at the position of an astronomical object as a function of the following variables:\n\n1. The lunar phase angle: $\\alpha$\n2. The angular separation between the object and the moon: $\\rho$\n3. The Zenith angle of the object: $Z$\n4. The Zenith angle of the moon: $Z_m$\n5. 
The extinction coefficient: $k_X$ (a measure of how much the atmosphere absorbs light)\n6. The dark-sky (no moon) sky background at zenith (in mag/square-arc-sec): $m_{dark}$\n\nThe following diagram shows some of these variables: ![diagram showing variables](media/Embed.jpeg)\n\nActually, $\\alpha$, $\\rho$, $Z$, and $Z_m$ are all functions of the date of observations and sky coordinates of the object, which we have already. That leaves $k_x$ and $m_{dark}$ as the only unknowns to be determined. Given these variables, the flux from the moon is given by an empirically-determined function that takes into account the fact that the moon is not a perfect sphere:\n\n$$I^* = 10^{-0.4(3.84 + 0.026|\\alpha | + 4\\times 10^{-9}\\alpha^4)}$$\n\nThis flux is then scattered by angle $\\rho$ into our line of sight, contributing to the sky background. The fraction of light scattered into angle $\\rho$ is given empirically by:\n\n$$f(\\rho) = 10^{5.36}\\left[1.06 + \\cos^2\\rho\\right] + 10^{6.15 - \\rho/40} $$\n\nThis just tells us how quickly the sky brightness falls off as we look further away from the moon. We can visualize this by making a 2D array of angles from the center of an image ($\\rho$) and comptuing $f(\\rho)$. The first part of the next cell uses numpy array functions to create a 2D \"image\" with the moon at center and each pixel representing a value of $\\rho$ degrees from the center.", "_____no_output_____" ] ], [ [ "import numpy as np\njj,ii = np.indices((1024,1024))/1024 # 2D index arrays scaled 0->1 \nrho = np.sqrt((ii-0.5)**2 + (jj-0.5)**2)*45.0 # 2D array of angles from center in degrees\n\nf = 10**5.36*(1.06 + (np.cos(rho*np.pi/180)**2)) + np.power(10, 6.15-rho/40) \nplt.imshow(f, origin='lower', extent=(-22.5,22.5,-22.5,22.5))\nplt.contour(f, origin='lower', extent=(-22.5,22.5,-22.5,22.5), colors='white', alpha=0.1)\nplt.xlabel('X angular distance')\nplt.ylabel('Y angular distance')", "_____no_output_____" ] ], [ [ "So there's less and less scattered light farther from the moon (at the center). But this scattered light is also attenuated (absorbed) by the atmosphere. This attenuation is parametrized by the *airmass* $X$, the relative amount of atmosphere the light has to penetrate (with $X=1$ for the zenith). Krisciunas & Schaefer (1991) present this formula for the airmass: $X(Z) = \\left(1 - 0.96 \\sin^2 Z\\right)^{-1/2}$. We'll come back to this later. Suffice it to say for the moment that this is an approximation very close to the \"infinite slab\" model of the atmosphere. Putting it all together, the surface brigthness (in the interesting units of [nanoLamberts](https://en.wikipedia.org/wiki/Lambert_(unit))) from the moon will be:\n\n$$ B_{moon} = f(\\rho)I^*10^{-0.4 k_X X(Z_m)}\\left[1 - 10^{-0.4k_X X(Z)}\\right] $$\n\nLet's visualize that first factor, which attenuates the light from the moon. I'll just set $I^*=1$ and $k_X=5$ to make the effect obvious. We'll define the airmass function for later use as well. Let's assume the moon is at a zenith angle of 22.5$^\\circ$ so the bottom of the graph corresponds to $Z=45^\\circ$ and the top is the zenith $Z=0^\\circ$. <a id=\"airmass\"></a>", "_____no_output_____" ] ], [ [ "def X(Z):\n '''Airmass as afunction zenith angle Z in radians'''\n return 1./np.sqrt(1 - 0.96*np.power(np.sin(Z),2))\n\nZ = (45 - jj*45)*np.pi/180. 
# rescale jj (0->1) to Z (45->0) and convert to radians\nplt.imshow(f*np.power(10, -0.4*5*X(Z)), origin='lower', extent=(-22.5,22.5,45,0))\nplt.contour(f*np.power(10, -0.4*5*X(Z)), origin='lower', extent=(-22.5,22.5,45,0), colors='white', alpha=0.1)\nplt.xlabel('X angular distance')\nplt.ylabel('Zenith angle Z')", "_____no_output_____" ] ], [ [ "So as we get closer to the horizon, there's less moonlight, as it's been attenuated by the larger amount of atmosphere. Lastly, to convert these nanoLamberts into magnitudes per square arc-second, we need the dark (no moon) sky brightness at the zenith, $m_{dark}$, and convert that to nanoLamberts using this formula:\n\n$$ B_{dark} = 34.08\\exp (20.7233 - 0.92104 m_{dark})10^{-0.4 k_X (X(Z)-1)}X(Z) $$\n\nwhere we have also corrected for attenuation by the atmosphere and air-glow (which increases with airmass). The final model for observed sky brightness $m_{sky}$ is:\n\n$$ m_{sky} = m_{dark} - 2.5 \\log_{10}\\left(\\frac{B_{moon} + B_{dark}}{B_{dark}}\\right) $$\n\nWhew! That's a lot of math. But that's all it is, and we can make a python function that will do it all for us.", "_____no_output_____" ] ], [ [ "def modelsky(alpha, rho, kx, Z, Zm, mdark):\n Istar = np.power(10, -0.4*(3.84+0.026*np.absolute(alpha)+4e-9*np.power(alpha,4)))\n frho = np.power(10, 5.36)*(1.06 + np.power(np.cos(rho),2))+np.power(10, 6.15-rho*180./np.pi/40)\n Bmoon = frho*Istar*np.power(10,-0.4*kx*X(Zm))*(1-np.power(10,-0.4*kx*X(Z)))\n Bdark = 34.08*np.exp(20.723 - 0.92104*mdark)*np.power(10,-0.4*kx*(X(Z)-1))*X(Z)\n return mdark - 2.5*np.log10((Bmoon+Bdark)/Bdark)", "_____no_output_____" ] ], [ [ "Note that all angles should be entered in radians to work with `numpy` trig functions. \n\n### 3) Data Munging\n\nNow, we just need the final ingredients: $\\alpha$, $\\rho$, $Z$, and $Z_m$, all of which are computed using `astropy.coordinates`. The lunar phase angle $\\alpha$ is defined as the angular separation between the Earth and Sun as observed *on the moon*. Alas, `astropy` can't compute this directly (guess they never thought lunar astronauts would use the software). But since the Earth-moon distance is much less than the Earth-sun distance (i.e., $\\gamma \\sim 0$), this is close enough to 180 degrees minus the angular separation between the moon and sun as observed on Earth (call it $\\beta$, which we already computed). See diaram below. ![Diagram showing Earth, moon, and sun](media/EarthMoonSun.jpg)", "_____no_output_____" ] ], [ [ "alpha = (180. - data['phase']) # Note: these need to be in degrees\ndata['alpha'] = pd.Series(alpha, index=data.index)", "_____no_output_____" ] ], [ [ "Next, in order to compute zenith angles and azimuths, we need to tell the `astropy` functions where on Earth we are located, since these quantities depend on our local horizon. Luckily, Las Campanas Observatory (LCO) is in `astropy`'s database of locations. We'll also need to create locations on the sky for all our background observations.", "_____no_output_____" ] ], [ [ "from astropy.coordinates import EarthLocation, SkyCoord, AltAz\nfrom astropy import units as u\nlco = EarthLocation.of_site('lco')\nfields = SkyCoord(data['RA']*u.degree, data['Decl']*u.degree) # astropy often requires units\nf_altaz = fields.transform_to(AltAz(obstime=times, location=lco)) # Transform from RA/DEc to Alt/Az\nm_altaz = moon.transform_to(AltAz(obstime=times, location=lco))\n\nrho = moon.separation(fields)*np.pi/180.0 # angular distance between moon and all fields\nZ = (90. 
- f_altaz.alt.value)*np.pi/180.0 # remember: we need things in radians\nZm = (90. - m_altaz.alt.value)*np.pi/180.0 \nskyaz = f_altaz.az.value\ndata['rho'] = pd.Series(rho, index=data.index)\ndata['Z'] = pd.Series(Z, index=data.index) # radians\ndata['Zm'] = pd.Series(Zm, index=data.index)\ndata['skyaz'] = pd.Series(skyaz, index=data.index)", "_____no_output_____" ] ], [ [ "I've added the variables to the Pandas `dataFrame` as it will help with plotting later. We can try plotting some of these variables against others to see how things look. Let's try a scatter plot of moon/sky separation vs. sky brightness and color the points according to lunar phase. I tried this with the Pandas `scatter()` and it didn't look that great, so we'll do it with the matplotlib functions directly. Also with `matplotlib` we can invert the y axis so that brighter is 'up'.", "_____no_output_____" ] ], [ [ "fig,axes = plt.subplots(1,2, figsize=(15,6))\nsc = axes[0].scatter(data['rho'], data['magsky'], marker='.', c=data['alpha'], cmap='viridis_r')\naxes[0].set_xlabel(r'$\\rho$', fontsize=16)\naxes[0].set_ylabel('Sky brightness (mag/sq-arc-sec)', fontsize=12)\naxes[0].text(1.25, 0.5, \"lunar phase\", va='center', ha='right', rotation=90,\n transform=axes[0].transAxes, fontsize=12)\naxes[0].invert_yaxis()\nfig.colorbar(sc, ax=axes[0])\nsc = axes[1].scatter(data['alpha'], data['magsky'], marker='.', c=data['rho'], cmap='viridis_r')\naxes[1].set_xlabel('Lunar phase', fontsize=12)\naxes[1].set_ylabel('Sky brightness (mag/sq-arc-sec)', fontsize=12)\naxes[1].text(1.25, 0.5, r\"$\\rho$\", va='center', ha='right', rotation=90,\n transform=axes[1].transAxes, fontsize=12)\naxes[1].invert_yaxis()\nymin,ymax = axes[0].get_ylim()\nfig.colorbar(sc, ax=axes[1])\n", "_____no_output_____" ] ], [ [ "There certainly seems to be a trend that the closer to full ($\\alpha = 0$, yellow), the brighter the background and the closer the moon is to the field (lower $\\rho$), the higher the background. Looks good. \n\n### 4) Fitting (Training) the Model\n\nLet's try and fit this data with our model and solve for $m_{dark}$, and $k_x$, the only unknowns in the problem. For this we need to create a dummy function that we can use with `scipy`'s `leastsq` function. It needs to take a list of parameters (`p`) as its first argument, followed by any other arguments and return the weighted difference between the model and data. We don't have any weights (uncertainties), so it will just return the differences.", "_____no_output_____" ] ], [ [ "from scipy.optimize import leastsq\ndef func(p, alpha, rho, Z, Zm, magsky):\n mdark,kx = p\n return magsky - modelsky(alpha, rho, kx, Z, Zm, mdark)", "_____no_output_____" ] ], [ [ "We now run the least-squares function, which will find the parameters `p` which minimize the squared sum of the residuals (i.e. $\\chi^2$). `leastsq` takes as arguments the function we wrote above, `func`, an initial guess of the parameters, and a tuple of extra arguments needed by our function. It returns the best-fit parameters and a status code. 
We can print these out, but also use them in our `modelsky` function to get the prediction that we can compare to the observed data.", "_____no_output_____" ] ], [ [ "pars,stat = leastsq(func, [22, 0.2], args=(data['alpha'],data['rho'],data['Z'],data['Zm'],data['magsky']))\nprint(pars)\n# save the best-fit model and residuals\ndata['modelsky']=pd.Series(modelsky(data['alpha'],data['rho'],pars[1],data['Z'],data['Zm'],pars[0]), index=data.index)\ndata['residuals']=pd.Series(data['magsky']-data['modelsky'], index=data.index)", "_____no_output_____" ] ], [ [ "Now that we have a model, we have a way to *predict* the sky brightness. So let's make the same two plots as we did above, but this time plotting the *model* brigthnesses rather than the observed brightnesses. Just to see if we get the same kinds of patterns/behaviours. This next cell is a copy of the earlier one, just changing `magsky` into `modelsky`. ", "_____no_output_____" ] ], [ [ "fig,axes = plt.subplots(1,2, figsize=(15,6))\nsc = axes[0].scatter(data['rho'], data['modelsky'], marker='.', c=data['alpha'], cmap='viridis_r')\naxes[0].set_xlabel(r'$\\rho$', fontsize=16)\naxes[0].set_ylabel('Sky brightness (mag/sq-arc-sec)', fontsize=12)\naxes[0].text(1.25, 0.5, \"lunar phase\", va='center', ha='right', rotation=90,\n transform=axes[0].transAxes, fontsize=12)\naxes[0].invert_yaxis()\nfig.colorbar(sc, ax=axes[0])\nsc = axes[1].scatter(data['alpha'], data['modelsky'], marker='.', c=data['rho'], cmap='viridis_r')\naxes[1].set_xlabel('Lunar phase', fontsize=12)\naxes[1].set_ylabel('Sky brightness (mag/sq-arc-sec)', fontsize=12)\naxes[1].text(1.25, 0.5, r\"$\\rho$\", va='center', ha='right', rotation=90,\n transform=axes[1].transAxes, fontsize=12)\naxes[1].invert_yaxis()\naxes[0].set_ylim(ymin,ymax)\naxes[1].set_ylim(ymin,ymax)\nfig.colorbar(sc, ax=axes[1])", "_____no_output_____" ] ], [ [ "You will see that there are some patterns that are correctly predicted, but others that are not. In particular, there's a whole cloud of points with $\\alpha < 0.8$ and sky brightness > 22 that are observed but *not* predicted. In other words, we observed some objects where the moon was relatively bright, yet the sky was relatively dark.\n\nThis is where I hit a bit of a wall in my investigation. It was not at all obvious where these points were coming from because the data set was so large and we have so many variables at work. However, by luck this ended up being around the time that Shanon was playing around with [Bokeh](https://docs.bokeh.org/en/latest/index.html) and it turned out to be exactly what I needed to explore where things were not working correctly. Let's do that now.\n\n### 5) Plotting Residuals\nA good way to see where a model is failing is to plot the residuals (observed - model). Where the residuals are close to zero, the model is doing a good job, but where the residuals are large (positive or nagative), the model is failing to capture something. A good diagnostic is to plot these residuals versus each of your variables and see where things go wrong. The great thing about Bokeh is it gives a very powerful way to do this: linking graphs so that selecting points in one graph will select the corresponding points in all other graphs that share the same dataset. This is why we've been adding our variables to the pandas `dataFrame`, `data`: that's whay Bokeh uses for plotting. In this code block we setup a Bokeh graph and plot 6 different \"slices\" through our multi-dimenisonal data. 
In the resulting plots, try selecting different regions of the upper-left panel (the residuals) to see if they correspond to interesting sets of parameters in the other panels.", "_____no_output_____" ] ], [ [ "from bokeh.plotting import figure\nfrom bokeh.layouts import gridplot\nfrom bokeh.io import show,output_notebook\nfrom bokeh.models import ColumnDataSource\n\noutput_notebook()\nsource = ColumnDataSource(data)\nTOOLS = ['box_select','lasso_select','reset','box_zoom','help']\nvars = [('alpha','residuals'),('alpha','rho'),('alpha','Zm'),\n ('jd','alpha'),('Z','Zm'),('RA','Decl')]\nplots = []\nfor var in vars:\n s = figure(tools=TOOLS, plot_width=300, plot_height=300)\n s.circle(*var, source=source, selection_color='red')\n s.xaxis.axis_label = var[0]\n s.yaxis.axis_label = var[1]\n plots.append(s)\n#plots[0].line([17.8,22.3],[17.8,22.3], line_color='orangered')\n\np = gridplot([plots[0:3],plots[3:]])\nshow(p)", "_____no_output_____" ] ], [ [ "With a little data exploring, it's pretty obvious that the majority of the outlying points comes from observations when the moon is relatively full but very low (or even below) the horizon. The reason is that the airmass formula that we implemented above has a problem with $Zm > \\pi/2$. To see this, we can simply plot `X(Z)` as a function of 'Z':", "_____no_output_____" ] ], [ [ "from matplotlib.pyplot import plot, xlabel, ylabel,ylim\nZ = np.linspace(0, 3*np.pi/4, 100) # make a range of Zenith angles\nplot(Z*180/np.pi, X(Z), '-')\nxlabel('Zenith angle (degrees)')\nylabel('Airmass')", "_____no_output_____" ] ], [ [ "So the airmass (amount of air the light travels through) increases as you get to the horizon ($Z=90^\\circ$), but then decreases. That's not right! This is the reason the model if failing for some points. Can you think of a way to easil fix this problem? Try it out. Just [go back](#airmass) to the cell above where `X(Z)` is defined and change it. Then select `Cell -> Run All Below` from the menu so see how the results change. There's also an entire [Wikipedia page](https://en.wikipedia.org/wiki/Air_mass_(astronomy)) with many airmass approximations and formulae, which you could try coding and seeing if they work better.\n\n### 6) Final Remarks\n\nAt this point you might be feeling overwhelmed. How did I know which modules to use? How did I know how to use them? The answer: Google, ADS, and 20+ years (eek!) of experience coding in Python. I also neglected to show all the dead-ends and mistakes I made on the way to getting the final solution, all the emails I sent to Kevin asking about the details of his paper, and trips to Shannon's office to get help with using Bokeh.\n\nBefore you start tackling a particular problem it's well worth your time to research whether there is already a solution \"out there\" that you can use or modify for your use. It has never been so easy to do this, thanks to search engines ([Google](https://www.google.com), et al.), data/software catalogs ([PyPI](https://pypi.org), et al.), discussion groups ([Stackoverflow](https://stackoverflow.com/), et al.) and even social media ([python users in astronomy facebook group](https://www.facebook.com/groups/astropython/), etc). And your friendly neighborhood python experts are there to make helpful suggestions.\n\nDon't re-invent the wheel, but improve it by all means.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d7410bfc464698a4101e18707d979ffb28c34a
122,667
ipynb
Jupyter Notebook
visual_data_analysis.ipynb
dbogatic/sql-homework
04ee04adb7ee49bceb87c246df7c27fd90d3aae9
[ "PostgreSQL", "CC-BY-2.0" ]
null
null
null
visual_data_analysis.ipynb
dbogatic/sql-homework
04ee04adb7ee49bceb87c246df7c27fd90d3aae9
[ "PostgreSQL", "CC-BY-2.0" ]
null
null
null
visual_data_analysis.ipynb
dbogatic/sql-homework
04ee04adb7ee49bceb87c246df7c27fd90d3aae9
[ "PostgreSQL", "CC-BY-2.0" ]
null
null
null
231.44717
50,921
0.647436
[ [ [ " # Visual Data Analysis of Fraudulent Transactions", "_____no_output_____" ] ], [ [ "# initial imports\nimport pandas as pd\nimport datetime\nimport calendar\nimport plotly.express as px\nimport matplotlib.pyplot as plt\nimport hvplot.pandas\nfrom sqlalchemy import create_engine\nimport psycopg2\n%matplotlib inline", "_____no_output_____" ], [ "# create a connection to the database\nengine = create_engine(\"postgresql://postgres:postgres@localhost:5432/fraud_detection\")", "_____no_output_____" ] ], [ [ " ## Data Analysis Questions 1\n\n Use `hvPlot` to create a line plot showing a time series from the transactions along all the year for **card holders 2 and 18**. In order to contrast the patterns of both card holders, create a line plot containing both lines. What difference do you observe between the consumption patterns? Does the difference could be a fraudulent transaction? Explain your rationale.", "_____no_output_____" ] ], [ [ "# loading data for card holder 2 and 18 from the database\nquery = \"\"\"\n\nSELECT transaction.date, credit_card.id_card_holder, card_holder.name, credit_card.card, transaction.amount, merchant.merchant_name, merchant_category.merchant_category_name\n\nFROM card_holder\nLEFT JOIN credit_card\nON credit_card.id_card_holder = card_holder.id\n\nLEFT JOIN transaction\nON transaction.card = credit_card.card\n\nLEFT JOIN merchant \nON merchant.id_merchant = transaction.id_merchant\n\nLEFT JOIN merchant_category\nON merchant_category.id_merchant_category = merchant.id_merchant_category\n\n\n\"\"\"\nfraud_detection_df = pd.read_sql_query(query, engine)\n\nfraud_detection_df.set_index(\"date\", inplace=True)\n\nfraud_detection_hourly_window = fraud_detection_df.between_time('07:00','09:00')\n\nfraud_detection_hourly_window.reset_index(inplace=True)\n\nfraud_detection_hourly_window.set_index(\"id_card_holder\", inplace=True)\n\ncard_holders_df = fraud_detection_hourly_window.loc[[2,18]]\n\ncard_holders_df.head()\n", "_____no_output_____" ], [ "# plot for cardholder 2\nfirst_card_holder = fraud_detection_hourly_window.loc[2]\nfirst_card_holder\nfirst_card_holder_transactions = first_card_holder[[\"date\",\"amount\"]]\nfirst_card_holder_transactions\nfirst_card_holder_plot = first_card_holder_transactions.hvplot.line(x='date', y='amount', title=\"Cardholder id_2 transactions\")\nfirst_card_holder_plot", "_____no_output_____" ], [ "# Calculate stats for cardholder 2\n\nfirst_card_holder_mean = first_card_holder_transactions[\"amount\"].mean()\nfirst_card_holder_median = first_card_holder_transactions[\"amount\"].median()\nfirst_card_holder_max = first_card_holder_transactions[\"amount\"].max()\nfirst_card_holder_min = first_card_holder_transactions[\"amount\"].min()\n\nprint(f\" Mean value = ${first_card_holder_mean :.2f}\")\nprint(f\" Median Value = ${first_card_holder_median :.2f}\")\nprint(f\" Max Value = ${first_card_holder_max :.2f}\")\nprint(f\" Min Value = ${first_card_holder_min :.2f}\")\n", " Mean value = $12.94\n Median Value = $11.35\n Max Value = $18.52\n Min Value = $10.29\n" ], [ "# plot for cardholder 18\nsecond_card_holder = fraud_detection_hourly_window.loc[18]\nsecond_card_holder\nsecond_card_holder_transactions = second_card_holder[[\"date\",\"amount\"]]\nsecond_card_holder_transactions\nsecond_card_holder_plot = second_card_holder_transactions.hvplot.line(x='date', y='amount',title=\"Cardholder id_18 transactions\")\nsecond_card_holder_plot", "_____no_output_____" ], [ "# Calculate stats for cardholder 18\n\nsecond_card_holder_mean = 
second_card_holder_transactions[\"amount\"].mean()\nsecond_card_holder_median = second_card_holder_transactions[\"amount\"].median()\nsecond_card_holder_max = second_card_holder_transactions[\"amount\"].max()\nsecond_card_holder_min = second_card_holder_transactions[\"amount\"].min()\nprint(f\" Mean value = ${second_card_holder_mean :.2f}\")\nprint(f\" Median Value = ${second_card_holder_median :.2f}\")\nprint(f\" Max Value = ${second_card_holder_max :.2f}\")\nprint(f\" Min Value = ${second_card_holder_min :.2f}\")", " Mean value = $9.44\n Median Value = $11.04\n Max Value = $18.54\n Min Value = $1.36\n" ], [ "# combined plot for card holders 2 and 18\n\ncard_holder_transaction_comparison_plot = (first_card_holder_plot * second_card_holder_plot).opts(title = \"Transaction Comparison Id_2 and id_18\", show_legend=True)\ncard_holder_transaction_comparison_plot\n\n# legend was added but not showing?", "_____no_output_____" ] ], [ [ " ### Conclusions for Question 1\n\n", "_____no_output_____" ] ], [ [ "# Analysis of transactions between 7:00-9:00 for id_2 and id_18 shows that median and mean values are similar, thus it appears no suspicious transactions are occuring. However, it is wise to confirm the validity of small bar transactions (under 2 dol) for Id_18.", "_____no_output_____" ] ], [ [ " ## Data Analysis Question 2\n\n Use `Plotly Express` to create a series of six box plots, one for each month, in order to identify how many outliers could be per month for **card holder id 25**. By observing the consumption patters, do you see any anomalies? Write your own conclusions about your insights.", "_____no_output_____" ] ], [ [ "# loading data of daily transactions from jan to jun 2018 for card holder 25\n\nfraud_detection_df\nfraud_detection_df.reset_index(inplace=True)\nfraud_detection_df.set_index(\"id_card_holder\", inplace=True)\nthird_card_holder_df = fraud_detection_df.loc[25]\nthird_card_holder_df.reset_index(inplace=True)\nthird_card_holder_df.set_index(\"date\", inplace=True)\nthird_card_holder_df.sort_index(ascending=True, inplace=True)\nthird_card_holder_suspicious_trans = third_card_holder_df.iloc[0:68]\nthird_card_holder_suspicious_trans.sort_index(inplace=True, ascending=True)\nthird_card_holder_suspicious_trans.head(10)\n", "_____no_output_____" ], [ "# change the numeric month to month names using strftime formatter to create date as string\n\nthird_card_holder_suspicious_trans.reset_index(inplace=True)\nthird_card_holder_suspicious_trans.set_index(\"date\", inplace=True)\nthird_card_holder_suspicious_trans.index = third_card_holder_suspicious_trans.index.strftime('%B')\nthird_card_holder_suspicious_trans.reset_index(inplace=True)\nthird_card_holder_suspicious_trans.set_index(\"index\", inplace=True)\nthird_card_holder_suspicious_trans.head()", "_____no_output_____" ], [ "# creating the six box plots using plotly express\n\nthird_card_holder_suspicious_trans_plot = third_card_holder_suspicious_trans.boxplot(column=\"amount\", by=\"index\", figsize=(10,5))\nplt.title(\"Suspicious Transactions by Month Cardholder 25\")\nplt.ylabel(\"amount ($)\")\nplt.xlabel(\"month\")\n\n# need to find a way to sort months", "_____no_output_____" ] ], [ [ " ### Conclusions for Question 2\n\n", "_____no_output_____" ] ], [ [ "# Analysis of Id_25 cardholder's transactions show that there were high amount transactions between Jan-June (especially June with 3 high amounts) that took place in pub, bar, restaurant, food truck, suggesting misuse of the corporate credit card.", "_____no_output_____" ], 
[ "# identify small transactions less than two dollars\n\nfraud_detection_df = pd.read_sql_query(query, engine)\nfraud_detection_df.set_index(\"date\", inplace=True)\nsuspicious_small_transactions_df = fraud_detection_df[fraud_detection_df[\"amount\"] < 2]\nsuspicious_small_transactions_df.head()\nsuspicious_small_transactions_df.groupby([\"merchant_category_name\"]).count()", "_____no_output_____" ], [ "# Analysis of small amount transactions show that the riskiest places where card hacks can occur are restaurants, pubs, food trucks, bars and coffe shops.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7d746c7cad9d9e4778fe478606a0f20b887b190
465,909
ipynb
Jupyter Notebook
Labs/Resources/Misc/Misc1/churn.ipynb
JayParanjape/ml-community
4cd67edfc059ff3f20a6a38b2c9d2a7af09ac44b
[ "MIT" ]
null
null
null
Labs/Resources/Misc/Misc1/churn.ipynb
JayParanjape/ml-community
4cd67edfc059ff3f20a6a38b2c9d2a7af09ac44b
[ "MIT" ]
1
2018-06-04T05:30:19.000Z
2018-06-04T05:30:19.000Z
Labs/Resources/Misc/Misc1/churn.ipynb
JayParanjape/ml-community
4cd67edfc059ff3f20a6a38b2c9d2a7af09ac44b
[ "MIT" ]
10
2018-06-18T12:21:55.000Z
2021-12-15T20:28:46.000Z
271.984238
100,720
0.902069
[ [ [ "#Comparing and evaluating models\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport scipy as sp\nimport matplotlib as mpl\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport pandas as pd\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")\nfrom PIL import Image", "_____no_output_____" ], [ "from sklearn.grid_search import GridSearchCV\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import confusion_matrix\ndef cv_optimize(clf, parameters, X, y, n_jobs=1, n_folds=5, score_func=None):\n if score_func:\n gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds, n_jobs=n_jobs, scoring=score_func)\n else:\n gs = GridSearchCV(clf, param_grid=parameters, n_jobs=n_jobs, cv=n_folds)\n gs.fit(X, y)\n print \"BEST\", gs.best_params_, gs.best_score_, gs.grid_scores_\n best = gs.best_estimator_\n return best\ndef do_classify(clf, parameters, indf, featurenames, targetname, target1val, mask=None, reuse_split=None, score_func=None, n_folds=5, n_jobs=1):\n subdf=indf[featurenames]\n X=subdf.values\n y=(indf[targetname].values==target1val)*1\n if mask !=None:\n print \"using mask\"\n Xtrain, Xtest, ytrain, ytest = X[mask], X[~mask], y[mask], y[~mask]\n if reuse_split !=None:\n print \"using reuse split\"\n Xtrain, Xtest, ytrain, ytest = reuse_split['Xtrain'], reuse_split['Xtest'], reuse_split['ytrain'], reuse_split['ytest']\n if parameters:\n clf = cv_optimize(clf, parameters, Xtrain, ytrain, n_jobs=n_jobs, n_folds=n_folds, score_func=score_func)\n clf=clf.fit(Xtrain, ytrain)\n training_accuracy = clf.score(Xtrain, ytrain)\n test_accuracy = clf.score(Xtest, ytest)\n print \"############# based on standard predict ################\"\n print \"Accuracy on training data: %0.2f\" % (training_accuracy)\n print \"Accuracy on test data: %0.2f\" % (test_accuracy)\n print confusion_matrix(ytest, clf.predict(Xtest))\n print \"########################################################\"\n return clf, Xtrain, ytrain, Xtest, ytest", "_____no_output_____" ], [ "from matplotlib.colors import ListedColormap\ncmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])\ncmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])\ncm = plt.cm.RdBu\ncm_bright = ListedColormap(['#FF0000', '#0000FF'])\n\ndef points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light, cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False):\n h = .02\n X=np.concatenate((Xtr, Xte))\n x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),\n np.linspace(y_min, y_max, 100))\n\n #plt.figure(figsize=(10,6))\n if mesh:\n if zfunc:\n p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]\n p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n Z=zfunc(p0, p1)\n else:\n Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n Z = Z.reshape(xx.shape)\n plt.pcolormesh(xx, yy, Z, cmap=cmap_light, alpha=alpha, axes=ax)\n ax.scatter(Xtr[:, 0], Xtr[:, 1], c=ytr-1, cmap=cmap_bold, s=psize, alpha=alpha,edgecolor=\"k\")\n # and testing points\n yact=clf.predict(Xte)\n ax.scatter(Xte[:, 0], Xte[:, 1], c=yte-1, cmap=cmap_bold, alpha=alpha, marker=\"s\", s=psize+10)\n ax.set_xlim(xx.min(), xx.max())\n ax.set_ylim(yy.min(), yy.max())\n return ax,xx,yy", "_____no_output_____" ], [ "def 
points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light, cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):\n ax,xx,yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False, colorscale=colorscale, cdiscrete=cdiscrete, psize=psize, alpha=alpha) \n Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n Z = Z.reshape(xx.shape)\n plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)\n cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)\n plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14, axes=ax)\n return ax ", "_____no_output_____" ] ], [ [ "##The churn example\n\nThis is a dataset from a telecom company, of their customers. Based on various features of these customers and their calling plans, we want to predict if a customer is likely to leave the company. This is expensive for the company, as a lost customer means lost monthly revenue!", "_____no_output_____" ] ], [ [ "#data set from yhathq: http://blog.yhathq.com/posts/predicting-customer-churn-with-sklearn.html\ndfchurn=pd.read_csv(\"https://dl.dropboxusercontent.com/u/75194/churn.csv\")\ndfchurn.head()", "_____no_output_____" ] ], [ [ "Lets write some code to feature select and clean our data first, of-course.", "_____no_output_____" ] ], [ [ "dfchurn[\"Int'l Plan\"] = dfchurn[\"Int'l Plan\"]=='yes'\ndfchurn[\"VMail Plan\"] = dfchurn[\"VMail Plan\"]=='yes'", "_____no_output_____" ], [ "colswewant_cont=[ u'Account Length', u'VMail Message', u'Day Mins', u'Day Calls', u'Day Charge', u'Eve Mins', u'Eve Calls', u'Eve Charge', u'Night Mins', u'Night Calls', u'Night Charge', u'Intl Mins', u'Intl Calls', u'Intl Charge', u'CustServ Calls']\ncolswewant_cat=[u\"Int'l Plan\", u'VMail Plan']", "_____no_output_____" ] ], [ [ "##Asymmetry", "_____no_output_____" ], [ "First notice that our data set is very highly asymmetric, with positives, or people who churned, only making up 14-15% of the samples.", "_____no_output_____" ] ], [ [ "ychurn = np.where(dfchurn['Churn?'] == 'True.',1,0)\n100*ychurn.mean()", "_____no_output_____" ] ], [ [ "This means that a classifier which predicts that EVERY customer is a negative (does not churn) has an accuracy rate of 85-86%. \n\nBut is accuracy the correct metric?", "_____no_output_____" ], [ "##Remember the Confusion matrix? We reproduce it here for convenience", "_____no_output_____" ], [ "- the samples that are +ive and the classifier predicts as +ive are called True Positives (TP)\n- the samples that are -ive and the classifier predicts (wrongly) as +ive are called False Positives (FP)\n- the samples that are -ive and the classifier predicts as -ive are called True Negatives (TN)\n- the samples that are +ive and the classifier predicts as -ive are called False Negatives (FN)\n\nA classifier produces a confusion matrix which looks like this:\n\n![hwimages](./images/confusionmatrix.png)\n\n\nIMPORTANT NOTE: In sklearn, to obtain the confusion matrix in the form above, always have the observed `y` first, i.e.: use as `confusion_matrix(y_true, y_pred)`\n\nConsider two classifiers, A and B, as in the image below. Suppose they were trained on a balanced set. Let A make its mistakes only through false positives: non-churners(n) predicted to churn(Y), while B makes its mistake only through false negatives, churners(p), predicted not to churn(N). Now consider what this looks like on an unbalanced set, where the ps (churners) are much less than the ns (non-churners). 
It would seem that B makes far fewer misclassifications based on accuracy than A, and would thus be a better classifier.", "_____no_output_____" ], [ "![m:abmodeldiag](./images/abmodeldiag.png)\n\nHowever, is B reaslly the best classifier for us? False negatives are people who churn, but we predicted them not to churn.These are very costly for us. So for us. classifier A might be better, even though, on the unbalanced set, it is way less accurate!", "_____no_output_____" ], [ "##Classifiers should be about the Business End: keeping costs down", "_____no_output_____" ], [ "####Establishing Baseline Classifiers via profit or loss.", "_____no_output_____" ], [ "Whenever you are comparing classifiers you should always establish a baseline, one way or the other. In our churn dataset there are two obvious baselines: assume every customer wont churn, and assume all customers will churn.\n\nThe former baseline, will on our dataset, straight away give you a 85.5% accuracy. If you are planning on using accuracy, any classifier you write ought to beat this. The other baseline, from an accuracy perspective is less interesting: it would only have a 14.5% correct rate.\n\nBut as we have seen, on such asymmetric data sets, accuracy is just not a good metric. So what should we use?\n\n**A metric ought to hew to the business function that the classifier is intended for**.\n\nIn our case, we want to minimize the cost/maximize the profit for the telecom.\n\nBut to do this we need to understand the business situation. To do this, we write a **utility**, or, equivalently, **cost** matrix associated with the 4 scenarios that the confusion matrix talks about. \n\n![cost matrix](images/costmatrix.png)\n\nRemember that +ives or 1s are churners, and -ives or 0s are the ones that dont churn. \n\nLets assume we make an offer with an administrative cost of \\$3 and an offer cost of \\$100, an incentive for the customer to stay with us. If a customer leaves us, we lose the customer lifetime value, which is some kind of measure of the lost profit from that customer. Lets assume this is the average number of months a customer stays with the telecom times the net revenue from the customer per month. We'll assume 3 years and \\$30/month margin per user lost, for roughly a $1000 loss.", "_____no_output_____" ] ], [ [ "admin_cost=3\noffer_cost=100\nclv=1000#customer lifetime value", "_____no_output_____" ] ], [ [ "- TN=people we predicted not to churn who wont churn. We associate no cost with this as they continue being our customers\n- FP=people we predict to churn. Who wont. Lets associate a `admin_cost+offer_cost` cost per customer with this as we will spend some money on getting them not to churn, but we will lose this money.\n- FN=people we predict wont churn. And we send them nothing. But they will. This is the big loss, the `clv`\n- TP= people who we predict will churn. And they will. These are the people we can do something with. So we make them an offer. Say a fraction f accept it. Our cost is\n\n`f * offer_cost + (1-f)*(clv+admin_cost)`\n\nThis model can definitely be made more complex.\n\nLets assume a conversion fraction of 0.5", "_____no_output_____" ] ], [ [ "conv=0.5\ntnc = 0.\nfpc = admin_cost+offer_cost\nfnc = clv\ntpc = conv*offer_cost + (1. - conv)*(clv+admin_cost)", "_____no_output_____" ], [ "cost=np.array([[tnc,fpc],[fnc, tpc]])\nprint cost", "[[ 0. 103. ]\n [ 1000. 
551.5]]\n" ] ], [ [ "We can compute the average cost(profit) per person using the following formula, which calculates the \"expected value\" of the per-customer loss/cost(profit):\n\n\\begin{eqnarray}\nCost &=& c(1P,1A) \\times p(1P,1A) + c(1P,0A) \\times p(1P,0A) + c(0P,1A) \\times p(0P,1A) + c(0P,0A) \\times p(0P,0A) \\\\\n&=& \\frac{TP \\times c(1P,1A) + FP \\times c(1P,0A) + FN \\times c(0P,1A) + TN \\times c(0P,0A)}{N}\n\\end{eqnarray}\n\nwhere N is the total size of the test set, 1P is predictions for class 1, or positives, 0A is actual values of the negative class in the test set. The first formula above just weighs the cost of a combination of observed and predicted with the out-of-sample probability of the combination occurring. The probabilities are \"estimated\" by the corresponding confusion matrix on the test set. (We'll provide a proof of this later in the course for the mathematically inclined, or just come bug Rahul at office hour if you cant wait!)\n\nThe cost can thus be found by multiplying the cost matrix by the confusion matrix elementwise, and dividing by the sum of the elements in the confusion matrix, or the test set size.\n\nWe implement this process of finding the average cost per person in the `average_cost` function below:", "_____no_output_____" ] ], [ [ "def average_cost(y, ypred, cost):\n c=confusion_matrix(y,ypred)\n score=np.sum(c*cost)/np.sum(c)\n return score", "_____no_output_____" ] ], [ [ "####No customer churns and we send nothing\n\nWe havent made any calculations yet! Lets fix that omission and create our training and test sets.", "_____no_output_____" ] ], [ [ "churntrain, churntest = train_test_split(xrange(dfchurn.shape[0]), train_size=0.6)\nchurnmask=np.ones(dfchurn.shape[0], dtype='int')\nchurnmask[churntrain]=1\nchurnmask[churntest]=0\nchurnmask = (churnmask==1)\nchurnmask", "_____no_output_____" ], [ "testchurners=dfchurn['Churn?'][~churnmask].values=='True.'", "_____no_output_____" ], [ "testsize = dfchurn[~churnmask].shape[0]\nypred_dste = np.zeros(testsize, dtype=\"int\")\nprint confusion_matrix(testchurners, ypred_dste)", "[[1145 0]\n [ 189 0]]\n" ], [ "dsteval=average_cost(testchurners, ypred_dste, cost)\ndsteval", "_____no_output_____" ] ], [ [ "Not doing anything costs us 140 per customer.", "_____no_output_____" ], [ "####All customers churn, we send everyone", "_____no_output_____" ] ], [ [ "ypred_ste = np.ones(testsize, dtype=\"int\")\nprint confusion_matrix(testchurners, ypred_ste)", "[[ 0 1145]\n [ 0 189]]\n" ], [ "steval=average_cost(testchurners, ypred_ste, cost)\nsteval", "_____no_output_____" ] ], [ [ "Make offers to everyone costs us even more, not surprisingly. The first one is the one to beat!", "_____no_output_____" ], [ "## Naive Bayes Classifier\n\nSo lets try a classifier. Here we try one known as Gaussian Naive Bayes. 
We'll just use the default parameters, since the actual details are not of importance to us.", "_____no_output_____" ] ], [ [ "from sklearn.naive_bayes import GaussianNB\nclfgnb = GaussianNB()\nclfgnb, Xtrain, ytrain, Xtest, ytest=do_classify(clfgnb, None, dfchurn, colswewant_cont+colswewant_cat, 'Churn?', \"True.\", mask=churnmask)", "using mask\n############# based on standard predict ################\nAccuracy on training data: 0.86\nAccuracy on test data: 0.87\n[[1059 86]\n [ 88 101]]\n########################################################\n" ], [ "confusion_matrix(ytest, clfgnb.predict(Xtest))", "_____no_output_____" ], [ "average_cost(ytest, clfgnb.predict(Xtest), cost)", "_____no_output_____" ] ], [ [ "Ok! We did better! But is this the true value of our cost? To answer this question, we need to ask a question: what exactly is `clf.predict` doing?\n\nThere is a caveat for SVM's though: we cannot repredict 1's and 0's directly for `clfsvm`, as the SVM is whats called a \"discriminative\" classifier: it directly gives us a decision function, with no probabilistic explanation and no probabilities. (I lie, an SVM can be retrofitted with probabilities: see http://scikit-learn.org/stable/modules/svm.html#scores-probabilities, but these are expensive amd not always well callibrated (callibration of probabilities will be covered later in our class)).\n\nWhat do we do? The SVM does give us a measure of how far we are from the \"margin\" though, and this is an ordered set of distances, just as the probabilities in a statistical classifier are. This ordering on the distance is just like an ordering on the probabilities: a sample far on the positive side from the line is an almost very definite 1, just like a sample with a 0.99 probability of being a 1 is an almost very definite 1.\n\nFor both these reasons we turn to ROC curves.", "_____no_output_____" ], [ "##Changing the Prediction threshold, and the ROC Curve", "_____no_output_____" ], [ "Our dataset is a very lopsided data set with 86% of samples being negative. We now know that in such a case, accuracy is not a very good measure of a classifier.\n\nWe have also noticed that, as is often the case in situations in which one class dominates the other, the costs of one kind of misclassification: false negatives are differently expensive than false positives. We saw above that FN are more costly in our case than FP. \n\n\nIn the case of such asymmetric costs, the `sklearn` API function `predict` is useless, as it assumes a threshold probability of having a +ive sample to be 0.5; that is, if a sample has a greater than 0.5 chance of being a 1, assume it is so. Clearly, when FN are more expensive than FP, you want to lower this threshold: you are ok with falsely classifying -ive examples as +ive. We play with this below by chosing a threshold `t` in the function `repredict` which chooses a different threshold than 0.5 to make a classification.\n\nYou can think about this very starkly from the perspective of the cancer doctor. Do you really want to be setting a threshold of 0.5 probability to predict if a patient has cancer or not? The false negative problem: ie the chance you predict someone dosent have cancer who has cancer is much higher for such a threshold. You could kill someone by telling them not to get a biopsy. 
Why not play it safe and assume a much lower threshold: for eg, if the probability of 1(cancer) is greater than 0.05, we'll call it a 1.\n\nOne caveat: we cannot repredict for the linear SVM model `clfsvm`, as the SVM is whats called a \"discriminative\" classifier: it directly gives us a decision function, with no probabilistic explanation and no probabilities. (I lie, an SVM can be retrofitted with probabilities: see http://scikit-learn.org/stable/modules/svm.html#scores-probabilities, but these are expensive amd not always well callibrated).\n", "_____no_output_____" ] ], [ [ "def repredict(est,t, xtest):\n probs=est.predict_proba(xtest)\n p0 = probs[:,0]\n p1 = probs[:,1]\n ypred = (p1 >= t)*1\n return ypred", "_____no_output_____" ], [ "average_cost(ytest, repredict(clfgnb, 0.3, Xtest), cost)", "_____no_output_____" ], [ "plt.hist(clfgnb.predict_proba(Xtest)[:,1])", "_____no_output_____" ] ], [ [ "Aha! At a 0.3 threshold we save more money!\n\nWe see that in this situation, where we have asymmetric costs, we do need to change the threshold at which we make our positive and negative predictions. We need to change the threshold so that we much dislike false negatives (same in the cancer case). Thus we must accept many more false positives by setting such a low threshold.\n\nFor otherwise, we let too many people slip through our hands who would have stayed with our telecom company given an incentive. But how do we pick this threshold?", "_____no_output_____" ], [ "###The ROC Curve", "_____no_output_____" ], [ "ROC curves are actually a set of classifiers, in which we move the threshold for classifying a sample as positive from 0 to 1. (In the standard scenario, where we use classifier accuracy, this threshold is implicitly set at 0.5).\n\nWe talked more about how to create a ROC curve in the accompanying lab to this one, so here we shall just repeat the ROC curve making code from there.", "_____no_output_____" ] ], [ [ "from sklearn.metrics import roc_curve, auc", "_____no_output_____" ], [ "def make_roc(name, clf, ytest, xtest, ax=None, labe=5, proba=True, skip=0):\n initial=False\n if not ax:\n ax=plt.gca()\n initial=True\n if proba:\n fpr, tpr, thresholds=roc_curve(ytest, clf.predict_proba(xtest)[:,1])\n else:\n fpr, tpr, thresholds=roc_curve(ytest, clf.decision_function(xtest))\n roc_auc = auc(fpr, tpr)\n if skip:\n l=fpr.shape[0]\n ax.plot(fpr[0:l:skip], tpr[0:l:skip], '.-', alpha=0.3, label='ROC curve for %s (area = %0.2f)' % (name, roc_auc))\n else:\n ax.plot(fpr, tpr, '.-', alpha=0.3, label='ROC curve for %s (area = %0.2f)' % (name, roc_auc))\n label_kwargs = {}\n label_kwargs['bbox'] = dict(\n boxstyle='round,pad=0.3', alpha=0.2,\n )\n for k in xrange(0, fpr.shape[0],labe):\n #from https://gist.github.com/podshumok/c1d1c9394335d86255b8\n threshold = str(np.round(thresholds[k], 2))\n ax.annotate(threshold, (fpr[k], tpr[k]), **label_kwargs)\n if initial:\n ax.plot([0, 1], [0, 1], 'k--')\n ax.set_xlim([0.0, 1.0])\n ax.set_ylim([0.0, 1.05])\n ax.set_xlabel('False Positive Rate')\n ax.set_ylabel('True Positive Rate')\n ax.set_title('ROC')\n ax.legend(loc=\"lower right\")\n return ax", "_____no_output_____" ], [ "make_roc(\"gnb\",clfgnb, ytest, Xtest, None, labe=50)", "_____no_output_____" ] ], [ [ "OK. Now that we have a ROC curve that shows us different thresholds, we need to figure how to pick the appropriate threshold from the ROC curve. 
But first, let us try another classifier.", "_____no_output_____" ], [ "##Classifier Comparison", "_____no_output_____" ], [ "###Decision Trees", "_____no_output_____" ], [ "Descision trees are very simple things we are all familiar with. If a problem is multi-dimensional, the tree goes dimension by dimension and makes cuts in the space to create a classifier.\n\nFrom scikit-docs:\n \n<img src=\"http://scikit-learn.org/stable/_images/iris.svg\"/>", "_____no_output_____" ] ], [ [ "from sklearn.tree import DecisionTreeClassifier", "_____no_output_____" ], [ "reuse_split=dict(Xtrain=Xtrain, Xtest=Xtest, ytrain=ytrain, ytest=ytest)", "_____no_output_____" ] ], [ [ "We train a simple decision tree classifier.", "_____no_output_____" ] ], [ [ "clfdt=DecisionTreeClassifier()\nclfdt, Xtrain, ytrain, Xtest, ytest = do_classify(clfdt, {\"max_depth\": range(1,10,1)}, dfchurn, colswewant_cont+colswewant_cat, 'Churn?', \"True.\", reuse_split=reuse_split)", "using reuse split\nBEST {'max_depth': 6} 0.935967983992 [mean: 0.86143, std: 0.01117, params: {'max_depth': 1}, mean: 0.87894, std: 0.00853, params: {'max_depth': 2}, mean: 0.89295, std: 0.01170, params: {'max_depth': 3}, mean: 0.91396, std: 0.00927, params: {'max_depth': 4}, mean: 0.93547, std: 0.01543, params: {'max_depth': 5}, mean: 0.93597, std: 0.01054, params: {'max_depth': 6}, mean: 0.92946, std: 0.01219, params: {'max_depth': 7}, mean: 0.92546, std: 0.00887, params: {'max_depth': 8}, mean: 0.92496, std: 0.01052, params: {'max_depth': 9}]\n############# based on standard predict ################\nAccuracy on training data: 0.97\nAccuracy on test data: 0.93\n[[1116 29]\n [ 59 130]]\n########################################################\n" ], [ "confusion_matrix(ytest,clfdt.predict(Xtest))", "_____no_output_____" ] ], [ [ "###Compare!", "_____no_output_____" ] ], [ [ "ax=make_roc(\"gnb\",clfgnb, ytest, Xtest, None, labe=60)\nmake_roc(\"dt\",clfdt, ytest, Xtest, ax, labe=1)", "_____no_output_____" ] ], [ [ "How do we read which classifier is better from a ROC curve. The usual advice is to go to the North-West corner of a ROC curve, as that is closest to TPE=1, FPR=0. But thats not our setup here..we have this asymmetric data set. The other advice is to look at the classifier with the highest AUC. But as we can see in the image below, captured from a run of this lab, the AUC is the same, but the classifiers seem to have very different performances in different parts of the graph\n\n![rocs](./images/churnrocs.png)\n\nAnd then there is the question of figuring what threshold to choose as well. To answer both of these, we are going to have to turn back to cost", "_____no_output_____" ], [ "##Reprediction again: Now with Cost or Risk", "_____no_output_____" ], [ "You can use the utility or risk matrix to provide a threshold to pick for our classifier. \n\nThe key idea is that we want to minimize cost on our test set, so for each sample, simply pick the class which does that. \n\nDecision Theory is the branch of statistics that speaks to this: its the theory which tells us how to make a positive or negative prediction for a given sample.\n\nDo you remember the log loss in Logistic Regression and the Hinge Loss in the SVM? The former, for example, gave us a bunch of probabilities which we needed to turn into decisions about what the samples are. In the latter, its the values the decision function gives us.\n\nThere then is a second cost or risk or loss involved in machine learning. 
This is the decision loss.\n\nWhat do we mean by a \"decision\" exactly? We'll use the letter g here to indicate a decision, in both the regression and classification problems. In the classification problem, one example of a decision is the process used to choose the class of a sample, given the probability of being in that class. As another example, consider the cancer story from the previous chapter. The decision may be: ought we biopsy, or ought we not biopsy. By minimizing the estimation risk, we obtain a probability that the patient has cancer. We must mix these probabilities with \"business knowledge\" or \"domain knowledge\" to make a decision.\n\n(As an aside, this is true in regression as well. there are really two losses there. The first one, the one equivalent to the log loss is the one where we say that at each point the prediction for y is a gaussian....the samples of this gaussian come from the bootstrap we make on the original data set...each replication leads to a new line and a distribution for the prediction at a point x. But usually in a regression we just quote the mean of this distribution at each point, the regression line E[y|x]. Why the mean? The mean comes from choosing a least squares decision loss...if we chose a L1 loss, we'd be looking at a median.)\n\n**The cost matrix we have been using above is exactly what goes into this decision loss!!**\n\n###Decision Theory Math\n\nTo understand this, lets follow through with a bit of math:\n(you can safely skip this section if you are not interested)\n\nWe simply weigh each combinations loss by the probability that that combination can happen:\n\n$$ R_{g}(x) = \\sum_y l(y,g(x)) p(y|x)$$\n\nThat is, we calculate the **average risk** over all choices y, of making choice g for a given sample.\n\nThen, if we want to calculate the overall risk, given all the samples in our set, we calculate:\n\n$$R(g) = \\sum_x p(x) R_{g}(x)$$\n\nIt is sufficient to minimize the risk at each point or sample to minimize the overall risk since $p(x)$ is always positive.\n\nConsider the two class classification case. Say we make a \"decision g about which class\" at a sample x. Then:\n\n$$R_g(x) = l(1, g)p(1|x) + l(0, g)p(0|x).$$\n\nThen for the \"decision\" $g=1$ we have:\n\n$$R_1(x) = l(1,1)p(1|x) + l(0,1)p(0|x),$$\n\nand for the \"decision\" $g=0$ we have:\n\n$$R_0(x) = l(1,0)p(1|x) + l(0,0)p(0|x).$$\n\nNow, we'd choose $1$ for the sample at $x$ if:\n\n$$R_1(x) \\lt R_0(x).$$\n\n$$ P(1|x)(l(1,1) - l(1,0)) \\lt p(0|x)(l(0,0) - l(0,1))$$\n\nThis gives us a ratio `r` between the probabilities to make a prediction. 
We assume this is true for all samples.\n\nSo, to choose '1':\n\n$$p(1|x) \\gt r P(0|x) \\implies r=\\frac{l(0,1) - l(0,0)}{l(1,0) - l(1,1)} =\\frac{c_{FP} - c_{TN}}{c_{FN} - c_{TP}}$$\n\nThis may also be written as:\n\n$$P(1|x) \\gt t = \\frac{r}{1+r}$$.\n\nIf you assume that True positives and True negatives have no cost, and the cost of a false positive is equal to that of a false positive, then $r=1$ and the threshold is the usual intutive $t=0.5$.", "_____no_output_____" ] ], [ [ "cost", "_____no_output_____" ], [ "def rat(cost):\n return (cost[0,1] - cost[0,0])/(cost[1,0]-cost[1,1])", "_____no_output_____" ], [ "def c_repredict(est, c, xtest):\n r = rat(c)\n print r\n t=r/(1.+r)\n print \"t=\", t\n probs=est.predict_proba(xtest)\n p0 = probs[:,0]\n p1 = probs[:,1]\n ypred = (p1 >= t)*1\n return ypred", "_____no_output_____" ], [ "average_cost(ytest, c_repredict(clfdt, cost, Xtest), cost)", "0.229654403567\nt= 0.18676337262\n" ] ], [ [ "For reasons that will become clearer in a later lab, this value turns out to be only approximate, and we are better using a ROC curve or a Cost curve (below) to find minimum cost. However, it will get us in the right ballpark of the threshold we need. Note that the threshold itself depends only on costs and is independent of the classifier.", "_____no_output_____" ] ], [ [ "plt.plot(ts, [average_cost(ytest, repredict(clfdt, t, Xtest), cost) for t in ts] )", "_____no_output_____" ] ], [ [ "Note that none of this can be done for classifiers that dont provide probabilities. So, once again, we turn to ROC curves to help us out.", "_____no_output_____" ], [ "##Model selection from Cost and ROC", "_____no_output_____" ], [ "Notice that the ROC curve has a very interesting property: if you look at the confusion matrix , TPR is only calculated from the observed \"1\" row while FPR is calculated from the observed '0' row. This means that the ROC curve is idenpendent of the class balance/imbalance on the test set, and thus works for all ratios of positive to negative samples. The balance picks a point on the curve, as you can read below.\n\nLets rewrite the cost equation from before.\n\n\\begin{eqnarray}\nCost &=& c(1P,1A) \\times p(1P,1A) + c(1P,0A) \\times p(1P,0A) + c(0P,1A) \\times p(0P,1A) + c(0P,0A) \\times p(0P,0A) \\\\\n&=& p(1A) \\times \\left ( c(1P,1A) \\times p(1P | 1A) + c(0P,1A) \\times p(0P | 1A) \\right ) \\\\\n&+& p(0A) \\times \\left ( c(1P,0A) \\times p(1P,0A) + c(0P,0A) \\times p(0P | 0A) \\right ) \\\\\n&=& p(1A) \\times \\left ( c(1P,1A) \\times TPR + c(0P,1A) \\times (1 - TPR)\\right ) \\\\\n&+& p(0A) \\times \\left ( c(1P,0A) \\times FPR + c(0P,0A) \\times (1 - FPR) \\right )\n\\end{eqnarray}\n\n\nThis can then be used to write TPR in terms of FPR, which as you can see from below is a line if you fix the cost. So lines on the graph correspond to a fixed cost. Of course they must intersect the ROC curve to be acceptable as coming from our classifier.\n\n$$TPR = \\frac{1}{p(1A)(c_{FN} - c_{TP})} \\left ( p(1A) c_{FP} + p(0A) c_{TN} - Cost \\right ) + r \\frac{p(0A)}{p(1A)} \\times FPR$$", "_____no_output_____" ], [ "There are three observations to be made from here.\n\n1. The slope is the reprediction ratio $r$ multiplied by the negative positive imbalance. In the purely asymmetric case the ratio r is the ratio of the false-positive cost to the false-negative cost. Thus for the balanced case, low slopes penalize false negatives and correspond to low thresholds\n2. 
When imbalance is included, a much more middling slope is achieved, since low $r$ usually comes with high negative-positive imbalance. So we still usually land up finding a model somewhere in the northwest quadrant.\n3. The line you want is a tangent line. Why? The tangent line has the highest intercept. Since the cost is subtracted, the highest intercept corresponds to the lowest cost!.\n", "_____no_output_____" ], [ "A diagram illustrates this for balanced classes:\n![asyroc](images/asyroc.png)", "_____no_output_____" ], [ "So one can use the tangent line method to find the classifier we ought to use and multiple questions about ROC curves now get answered.\n\n(1) For a balanced data set, with equal misclassification costs, and no cost for true positives and true negatives, the slope is 1. Thus 45 degree lines are what we want, and hence closest to the north west corner, as thats where a 45 degree line would be tangent.\n(2) Classifiers which have some part of their ROC curve closer to the northwest corner than others have tangent lines with higher intercepts and thus lower cost\n(3) For any other case, find the line!", "_____no_output_____" ] ], [ [ "print rat(cost)\nslope = rat(cost)*(np.mean(ytest==0)/np.mean(ytest==1))\nslope", "0.229654403567\n" ], [ "z1=np.arange(0.,1., 0.02)\ndef plot_line(ax, intercept):\n plt.figure(figsize=(12,12))\n ax=plt.gca()\n ax.set_xlim([0.0,1.0])\n ax.set_ylim([0.0,1.0])\n make_roc(\"gnb\",clfgnb, ytest, Xtest, ax, labe=60)\n make_roc(\"dt\",clfdt, ytest, Xtest, ax, labe=1)\n ax.plot(z1 , slope*z1 + intercept, 'k-')", "_____no_output_____" ], [ "from IPython.html.widgets import interact, fixed\ninteract(plot_line, ax=fixed(ax), intercept=(0.0,1.0, 0.02))", "_____no_output_____" ] ], [ [ "As you can see our slope is actually on the rising part of the curve, even with the imbalance. (Since the cost ratio isnt too small..an analyst should play around with the assumptions that went into the cost matrix!)", "_____no_output_____" ], [ "##Cost curves", "_____no_output_____" ], [ "The proof is always in the pudding. So far we have used a method to calculate a rough threshold from the cost/utility matrix, and seen the ROC curve which implements one classifier per threshold to pick an appropriate model. But why not just plot the cost/profit (per person) per threshold on a ROC like curve to see which classifier maximizes profit/minimizes cost? \n\nJust like in a ROC curve, we go down the sorted (by score or probability) list of samples. We one-by-one add an additional sample to our positive samples, noting down the attendant classifier's TPR and FPR and threshold. In addition to what we do for the ROC curve, we now also note down the percentage of our list of samples predicted as positive. Remember we start from the mostest positive, where the percentage labelled as positive would be minuscule, like 0.1 or so and the threshold like a 0.99 in probability or so. As we decrease the threshold, the percentage predicted to be positive clearly increases until everything is predicted positive at a threshold of 0. What we now do is, at each such additional sample/threshold (given to us by the `roc_curve` function from `sklearn`), we calculate the expected profit per person and plot it against the percentage predicted positive by that threshold to produce a profit curve. 
Thus, small percentages correspond to samples most likely to be positive: a percentage of 8% means the top 8% of our samples ranked by likelihood of being positive.\n\nAs in the ROC curve case, we use `sklearn`'s `roc_curve` function to return us a set of thresholds with TPRs and FPRs.", "_____no_output_____" ] ], [ [ "def percentage(tpr, fpr, priorp, priorn):\n perc = tpr*priorp + fpr*priorn\n return perc\ndef av_cost2(tpr, fpr, cost, priorp, priorn):\n profit = priorp*(cost[1][1]*tpr+cost[1][0]*(1.-tpr))+priorn*(cost[0][0]*(1.-fpr) +cost[0][1]*fpr)\n return profit\ndef plot_cost(name, clf, ytest, xtest, cost, ax=None, threshold=False, labe=200, proba=True):\n initial=False\n if not ax:\n ax=plt.gca()\n initial=True\n if proba:\n fpr, tpr, thresholds=roc_curve(ytest, clf.predict_proba(xtest)[:,1])\n else:\n fpr, tpr, thresholds=roc_curve(ytest, clf.decision_function(xtest))\n priorp=np.mean(ytest)\n priorn=1. - priorp\n ben=[]\n percs=[]\n for i,t in enumerate(thresholds):\n perc=percentage(tpr[i], fpr[i], priorp, priorn)\n ev = av_cost2(tpr[i], fpr[i], cost, priorp, priorn)\n ben.append(ev)\n percs.append(perc*100)\n ax.plot(percs, ben, '-', alpha=0.3, markersize=5, label='cost curve for %s' % name)\n if threshold:\n label_kwargs = {}\n label_kwargs['bbox'] = dict(\n boxstyle='round,pad=0.3', alpha=0.2,\n )\n for k in xrange(0, fpr.shape[0],labe):\n #from https://gist.github.com/podshumok/c1d1c9394335d86255b8\n threshold = str(np.round(thresholds[k], 2))\n ax.annotate(threshold, (percs[k], ben[k]), **label_kwargs)\n ax.legend(loc=\"lower right\")\n return ax", "_____no_output_____" ], [ "ax = plot_cost(\"gnb\",clfgnb, ytest, Xtest, cost, threshold=True, labe=50);\nplot_cost(\"dt\",clfdt, ytest, Xtest, cost, ax, threshold=True, labe=2);", "_____no_output_____" ] ], [ [ "Note the customers on the left of this graph are most likely to churn (be positive).\n\nThis if you had a finite budget, you should be targeting them!\n\nFinding the best classifier has a real consequence: you save money!!!\n\n![costcurves](./images/costcurves.png)", "_____no_output_____" ] ], [ [ "cost", "_____no_output_____" ] ], [ [ "The above graph is a snapshot of a run. One thing worth noticing is that classifiers perform differently in different regions. If you targeted only the top 20% of your users..and these are the ones most likely to churn so you should target them first, you would want to use the decision-tree classifier. And you might only get to target these top 20 given your budget. Remember that there is a cost associated with targeting predicted positives. That cost can be read of the graph above. Say we had a million customers. Now, at 10%, or 100,000 we are talking about a minimum budget of 10.3 million dollars. \n\nIf 10-15 million is your budget, then you use the decision tree classifier on your left. If 40-60 million is your budget, roughly, you would use the gnb classifier instead.", "_____no_output_____" ] ] ]
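The churn notebook above derives the reprediction threshold t = r/(1+r) from the cost matrix and then checks it against a threshold sweep. Below is a minimal, self-contained sketch of that comparison; the cost matrix, the synthetic labels and scores, and every variable name are illustrative assumptions rather than the notebook's actual data, and (as the notebook itself notes) the closed-form value is only approximate when the classifier's probabilities are not well calibrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative cost matrix: rows = observed class (0, 1), columns = predicted class (0, 1).
# These numbers are assumptions for the sketch, not the notebook's actual costs.
cost = np.array([[0.0, 100.0],     # observed 0: [true negative, false positive]
                 [500.0, 100.0]])  # observed 1: [false negative, true positive]

def average_cost(y_true, y_pred, cost):
    """Average per-sample cost implied by the confusion counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    total = 0.0
    for obs in (0, 1):
        for pred in (0, 1):
            total += np.sum((y_true == obs) & (y_pred == pred)) * cost[obs, pred]
    return total / len(y_true)

def closed_form_threshold(cost):
    """t = r / (1 + r) with r = (c_FP - c_TN) / (c_FN - c_TP)."""
    r = (cost[0, 1] - cost[0, 0]) / (cost[1, 0] - cost[1, 1])
    return r / (1.0 + r)

# Synthetic labels and scores: positives tend to get higher scores.
y_true = rng.integers(0, 2, size=2000)
p1 = np.clip(0.3 * y_true + 0.2 + 0.25 * rng.standard_normal(2000), 0.0, 1.0)

thresholds = np.linspace(0.01, 0.99, 99)
costs = [average_cost(y_true, (p1 >= t).astype(int), cost) for t in thresholds]

print(f"closed-form threshold: {closed_form_threshold(cost):.3f}")
print(f"sweep minimum at t = {thresholds[int(np.argmin(costs))]:.3f}")
```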
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d74c4e02728eafaa1c27061fc2092079e462d7
368,777
ipynb
Jupyter Notebook
Chapter_16_questions.ipynb
cormach/bayesian_stats_by_b_lambert
dac5c952299fd6bbfb64c7c3064b7f3a5c935698
[ "Apache-2.0" ]
1
2022-01-03T19:41:52.000Z
2022-01-03T19:41:52.000Z
Chapter_16_questions.ipynb
cormach/bayesian_stats_by_b_lambert
dac5c952299fd6bbfb64c7c3064b7f3a5c935698
[ "Apache-2.0" ]
null
null
null
Chapter_16_questions.ipynb
cormach/bayesian_stats_by_b_lambert
dac5c952299fd6bbfb64c7c3064b7f3a5c935698
[ "Apache-2.0" ]
null
null
null
134.836197
116,174
0.782169
[ [ [ "<a href=\"https://colab.research.google.com/github/cormach/bayesian_stats_by_b_lambert/blob/master/Chapter_16_questions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import os\nimport json\nimport shutil\nimport urllib.request\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# Please use the latest version of CmdStanPy\n!pip install --upgrade cmdstanpy", "Collecting cmdstanpy\n Downloading https://files.pythonhosted.org/packages/e3/e2/204c9c6beaf9e05ad28bd589c154afff35dffa6166d76841d3c0dec6c1e3/cmdstanpy-0.9.5-py3-none-any.whl\nRequirement already satisfied, skipping upgrade: pandas in /usr/local/lib/python3.6/dist-packages (from cmdstanpy) (1.0.5)\nRequirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from cmdstanpy) (1.18.5)\nRequirement already satisfied, skipping upgrade: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas->cmdstanpy) (2.8.1)\nRequirement already satisfied, skipping upgrade: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->cmdstanpy) (2018.9)\nRequirement already satisfied, skipping upgrade: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.6.1->pandas->cmdstanpy) (1.12.0)\n\u001b[31mERROR: fbprophet 0.6 has requirement cmdstanpy==0.4, but you'll have cmdstanpy 0.9.5 which is incompatible.\u001b[0m\nInstalling collected packages: cmdstanpy\n Found existing installation: cmdstanpy 0.4.0\n Uninstalling cmdstanpy-0.4.0:\n Successfully uninstalled cmdstanpy-0.4.0\nSuccessfully installed cmdstanpy-0.9.5\n" ], [ "# Install pre-built CmdStan binary\n# (faster than compiling from source via install_cmdstan() function)\ntgz_file = 'colab-cmdstan-2.23.0.tar.gz'\ntgz_url = 'https://github.com/stan-dev/cmdstan/releases/download/v2.23.0/colab-cmdstan-2.23.0.tar.gz'\nif not os.path.exists(tgz_file):\n urllib.request.urlretrieve(tgz_url, tgz_file)\n shutil.unpack_archive(tgz_file)", "_____no_output_____" ], [ "# Specify CmdStan location via environment variable\nos.environ['CMDSTAN'] = './cmdstan-2.23.0'\n# Check CmdStan path\nfrom cmdstanpy import CmdStanModel, cmdstan_path\ncmdstan_path()", "_____no_output_____" ], [ "!pip install arviz", "Collecting arviz\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d2/ed/2f9d0217fac295b3dd158195060e5350c1c9a2abcba04030a426a15fd908/arviz-0.9.0-py3-none-any.whl (1.5MB)\n\u001b[K |████████████████████████████████| 1.5MB 2.8MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.12 in /usr/local/lib/python3.6/dist-packages (from arviz) (1.18.5)\nCollecting netcdf4\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/35/4f/d49fe0c65dea4d2ebfdc602d3e3d2a45a172255c151f4497c43f6d94a5f6/netCDF4-1.5.3-cp36-cp36m-manylinux1_x86_64.whl (4.1MB)\n\u001b[K |████████████████████████████████| 4.1MB 9.3MB/s \n\u001b[?25hRequirement already satisfied: pandas>=0.23 in /usr/local/lib/python3.6/dist-packages (from arviz) (1.0.5)\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from arviz) (20.4)\nRequirement already satisfied: scipy>=0.19 in /usr/local/lib/python3.6/dist-packages (from arviz) (1.4.1)\nRequirement already satisfied: xarray>=0.11 in /usr/local/lib/python3.6/dist-packages (from arviz) (0.15.1)\nRequirement already satisfied: matplotlib>=3.0 in /usr/local/lib/python3.6/dist-packages (from arviz) 
(3.2.2)\nCollecting cftime\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/eb/0f/846488085d0f5517d79dfd7a12cd231ff87b94265a5bbfef62da56a6b029/cftime-1.2.0-cp36-cp36m-manylinux1_x86_64.whl (282kB)\n\u001b[K |████████████████████████████████| 286kB 39.8MB/s \n\u001b[?25hRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.23->arviz) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.23->arviz) (2018.9)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->arviz) (2.4.7)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from packaging->arviz) (1.12.0)\nRequirement already satisfied: setuptools>=41.2 in /usr/local/lib/python3.6/dist-packages (from xarray>=0.11->arviz) (47.3.1)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0->arviz) (0.10.0)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0->arviz) (1.2.0)\nInstalling collected packages: cftime, netcdf4, arviz\nSuccessfully installed arviz-0.9.0 cftime-1.2.0 netcdf4-1.5.3\n" ], [ "import arviz as az", "_____no_output_____" ] ], [ [ "Question 16_1 - discoveries", "_____no_output_____" ] ], [ [ "stan_text = '''data {\n int N;\n int<lower=0> X[N];\n}\nparameters {\n real<lower=0> mu;\n real<lower=0> kappa;\n}\nmodel {\n X ~ neg_binomial_2(mu, kappa);\n mu ~ lognormal(2,1);\n kappa ~lognormal(2,1); \n}\ngenerated quantities {\n int<lower=0> XSim[N];\n for (i in 1:N)\n {XSim[i] <- neg_binomial_2_rng(mu, kappa);}\n}'''\nwith open('stan_file.stan', 'w') as f:\n f.write(stan_text)", "_____no_output_____" ], [ "!cat stan_file.stan", "data {\n int N;\n int<lower=0> X[N];\n}\nparameters {\n real<lower=0> mu;\n real<lower=0> kappa;\n}\nmodel {\n X ~ neg_binomial_2(mu, kappa);\n mu ~ lognormal(2,1);\n kappa ~lognormal(2,1); \n}\ngenerated quantities {\n int<lower=0> XSim[N];\n for (i in 1:N)\n {XSim[i] <- neg_binomial_2_rng(mu, kappa);}\n}" ], [ "stan_model = CmdStanModel(stan_file='stan_file.stan')", "INFO:cmdstanpy:compiling stan program, exe file: /content/stan_file\nINFO:cmdstanpy:compiler options: stanc_options=None, cpp_options=None\nINFO:cmdstanpy:compiled model file: /content/stan_file\n" ], [ "url='https://raw.githubusercontent.com/alexandrahotti/Solutions-to-A-Students-Guide-to-Bayesian-Statistics-by-Ben-Lambert/master/All_data/evaluation_discoveries.csv'\ndf = pd.read_csv(url, error_bad_lines=False)", "_____no_output_____" ], [ "data = {'X':df.discoveries.to_numpy(),'N':df.shape[0] }", "_____no_output_____" ], [ "stan_posterior=stan_model.sample(data=data)", "INFO:cmdstanpy:start chain 1\nINFO:cmdstanpy:finish chain 1\nINFO:cmdstanpy:start chain 2\nINFO:cmdstanpy:finish chain 2\nINFO:cmdstanpy:start chain 3\nINFO:cmdstanpy:finish chain 3\nINFO:cmdstanpy:start chain 4\nINFO:cmdstanpy:finish chain 4\n" ], [ "stan_posterior.diagnose()", "INFO:cmdstanpy:Processing csv files: /tmp/tmp_5e0vghs/stan_file-202007121621-1-gf1e_wqi.csv, /tmp/tmp_5e0vghs/stan_file-202007121621-2-wqzke3p5.csv, /tmp/tmp_5e0vghs/stan_file-202007121621-3-xpqgqjey.csv, /tmp/tmp_5e0vghs/stan_file-202007121621-4-f_ivuw4p.csv\n\nChecking sampler transitions treedepth.\nTreedepth satisfactory for all transitions.\n\nChecking sampler transitions for divergences.\nNo divergent transitions found.\n\nChecking E-BFMI - sampler transitions HMC 
potential energy.\nE-BFMI satisfactory for all transitions.\n\nEffective sample size satisfactory.\n\nSplit R-hat values satisfactory all parameters.\n\nProcessing complete, no problems detected.\n" ], [ "stan_posterior.summary().round(decimals=3).iloc[1:4,:]", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "stan_sample = stan_posterior.get_drawset()", "_____no_output_____" ], [ "az_infdata_obj = az.from_cmdstanpy(\n posterior=stan_posterior,\n posterior_predictive=\"XSim\",\n observed_data=data)\n\naz_infdata_obj", "_____no_output_____" ], [ "az.plot_autocorr(az_infdata_obj)", "_____no_output_____" ], [ "az.plot_pair(az_infdata_obj)", "_____no_output_____" ], [ "az.plot_density(az_infdata_obj)", "_____no_output_____" ], [ "az.plot_trace(az_infdata_obj)", "_____no_output_____" ], [ "stan_sample.drop(columns=['lp__', 'accept_stat__','stepsize__', 'treedepth__', 'n_leapfrog__',\n 'divergent__', 'energy__', 'mu','kappa'], inplace=True)", "_____no_output_____" ], [ "posterior_checks_max =np.amax(stan_sample, axis=1)", "_____no_output_____" ], [ "(posterior_checks_max >=12).sum()/float(len(posterior_checks_max))", "_____no_output_____" ], [ "(df.discoveries-stan_sample['XSim.1']).dropna().plot()", "INFO:numexpr.utils:NumExpr defaulting to 2 threads.\n" ], [ "plt.acorr((df.discoveries-stan_sample['XSim.1']).dropna())", "_____no_output_____" ] ] ]
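As a companion to the generated-quantities check above (the fraction of simulated series whose maximum reaches the observed value of 12), here is a NumPy-only sketch of the same posterior predictive statistic. The mu and kappa draws below are hypothetical placeholders for the columns of the fitted posterior, and N = 100 is an assumed series length; the mapping neg_binomial_2(mu, kappa) -> numpy negative_binomial(n = kappa, p = kappa / (kappa + mu)) reproduces the same mean-dispersion parameterisation. Note also that the `<-` assignment in the notebook's generated quantities block is deprecated in recent Stan releases in favour of `=`.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder posterior draws (assumptions): substitute the mu and kappa columns
# extracted from the fitted CmdStanPy posterior in practice.
mu_draws = rng.lognormal(mean=np.log(3.1), sigma=0.05, size=4000)
kappa_draws = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=4000)

N = 100          # assumed length of the discoveries series (use df.shape[0] in practice)
threshold = 12   # the observed maximum used as the test statistic in the notebook

exceed = 0
for mu, kappa in zip(mu_draws, kappa_draws):
    # neg_binomial_2(mu, kappa) corresponds to n = kappa, p = kappa / (kappa + mu)
    x_rep = rng.negative_binomial(kappa, kappa / (kappa + mu), size=N)
    exceed += x_rep.max() >= threshold

print(f"Pr(max(X_rep) >= {threshold}) ~ {exceed / len(mu_draws):.3f}")
```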
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d75e603e604de67c093562655611e73b65859d
117,731
ipynb
Jupyter Notebook
4_Modelling/Time_Series_Models/Predict_03_Hours/GRU_Multivariate_Horizon_Style_03H.ipynb
Jaoaud/Capstone_Energy
56819cef42e1db435208b4f9955c2c6d5ca9f2b3
[ "MIT" ]
null
null
null
4_Modelling/Time_Series_Models/Predict_03_Hours/GRU_Multivariate_Horizon_Style_03H.ipynb
Jaoaud/Capstone_Energy
56819cef42e1db435208b4f9955c2c6d5ca9f2b3
[ "MIT" ]
null
null
null
4_Modelling/Time_Series_Models/Predict_03_Hours/GRU_Multivariate_Horizon_Style_03H.ipynb
Jaoaud/Capstone_Energy
56819cef42e1db435208b4f9955c2c6d5ca9f2b3
[ "MIT" ]
2
2021-08-19T16:10:22.000Z
2021-09-15T07:26:09.000Z
96.185458
52,980
0.760454
[ [ [ "# Import all dependencies\n\nimport pandas as pd\nimport numpy as np\nimport tensorflow as tf\nfrom sklearn import preprocessing\nimport matplotlib.pyplot as plt\ntf.random.set_seed(123)\nnp.random.seed(123)\nfrom tensorflow import keras\n\nfrom tensorflow.python.client import device_lib \nprint(device_lib.list_local_devices())\n\n\ngpus = tf.config.experimental.list_physical_devices('GPU')\nif gpus:\n try:\n # Restrict TensorFlow to only use the fourth GPU\n tf.config.experimental.set_visible_devices(gpus[0], 'GPU')\n\n # Currently, memory growth needs to be the same across GPUs\n for gpu in gpus:\n tf.config.experimental.set_memory_growth(gpu, True)\n logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPUs\")\n except RuntimeError as e:\n # Memory growth must be set before GPUs have been initialized\n print(e)", "[name: \"/device:CPU:0\"\ndevice_type: \"CPU\"\nmemory_limit: 268435456\nlocality {\n}\nincarnation: 3352902413677293880\n, name: \"/device:GPU:0\"\ndevice_type: \"GPU\"\nmemory_limit: 2842746880\nlocality {\n bus_id: 1\n links {\n }\n}\nincarnation: 7898730495029856507\nphysical_device_desc: \"device: 0, name: NVIDIA GeForce GTX 1660 Ti with Max-Q Design, pci bus id: 0000:01:00.0, compute capability: 7.5\"\n]\n1 Physical GPUs, 1 Logical GPUs\n" ], [ "# read csv in the dataframe\n\ndf = pd.read_csv(\"../ane_energy/df_merged.csv\",parse_dates=['dt_start_utc'],index_col='dt_start_utc')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "# lets have a look about our data, how its distributed\n\ndf.describe()", "_____no_output_____" ], [ "# defined function which preprocesses the data suitable for forecasting\n\ndef custom_ts_multi_data_prep(dataset, target, start, end, window, horizon):\n X = []\n y = []\n start = start + window\n if end is None:\n end = len(dataset) - horizon\n\n for i in range(start, end):\n indices = range(i-window, i)\n X.append(dataset[indices])\n\n indicey = range(i+1, i+1+horizon)\n y.append(target[indicey])\n return np.array(X), np.array(y)", "_____no_output_____" ], [ "# cut the last 12 Timestamps data for forecasting\n\nvalidate = df.tail(12)\ndf.drop(df.tail(12).index,inplace=True)", "_____no_output_____" ], [ "# drop the columns we dont need\n\ndf_1 = df.drop(\"rebap_eur_mwh\", axis= 1)", "_____no_output_____" ], [ "# MinMaxScaler to scale down the values. \n# The neural network converges sooner when it exposes the same scaled features and gives better accuracy\n\nx_scaler = preprocessing.MinMaxScaler()\ny_scaler = preprocessing.MinMaxScaler()\ndataX = x_scaler.fit_transform(df_1)\ndataY = y_scaler.fit_transform(df[['rebap_eur_mwh']])", "_____no_output_____" ], [ "# As we are doing multiple-step forecasting, \n# let’s allow the model to see past 48 hours of data and forecast the 12 hrs after data; \n# for that, we set the horizon to 12.\n\nhist_window = 48\nhorizon = 12\nTRAIN_SPLIT = 10000\nx_train_multi, y_train_multi = custom_ts_multi_data_prep(\n dataX, dataY, 0, TRAIN_SPLIT, hist_window, horizon)\nx_val_multi, y_val_multi= custom_ts_multi_data_prep(\n dataX, dataY, TRAIN_SPLIT, None, hist_window, horizon)", "_____no_output_____" ], [ "print ('Single window of past history')\nprint(x_train_multi[0])\nprint ('\\n Target horizon')\nprint (y_train_multi[0])\n", "Single window of past history\n[[0.06277442 0.05514809 0.01028954 ... 1. 1. 0. ]\n [0.06243139 0.05527498 0.01028954 ... 1. 1. 0. ]\n [0.06215697 0.05552877 0.01028954 ... 1. 1. 1. 
]\n ...\n [0.07725027 0.11285943 0.0201005 ... 1. 1. 0. ]\n [0.07834797 0.11415374 0.0201005 ... 1. 1. 0. ]\n [0.07965148 0.1155242 0.02273271 ... 1. 1. 1. ]]\n\n Target horizon\n[[0.346906 ]\n [0.34695425]\n [0.34470899]\n [0.34405936]\n [0.35797551]\n [0.3533747 ]\n [0.36057574]\n [0.3567762 ]\n [0.3552357 ]\n [0.34467969]\n [0.34614437]\n [0.34365442]]\n" ], [ "# Prepare the training data and validation data using the TensorFlow data function, \n# which faster and efficient way to feed data for training.\n\nBATCH_SIZE = 16\nBUFFER_SIZE = 4\n\ntrain_data_multi = tf.data.Dataset.from_tensor_slices((x_train_multi, y_train_multi))\ntrain_data_multi = train_data_multi.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()\n\nval_data_multi = tf.data.Dataset.from_tensor_slices((x_val_multi, y_val_multi))\nval_data_multi = val_data_multi.batch(BATCH_SIZE).repeat()", "_____no_output_____" ], [ "# Build and compile the model\n\nGRU_model = tf.keras.models.Sequential([\n tf.keras.layers.GRU(100, input_shape=x_train_multi.shape[-2:],return_sequences=True),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.GRU(units=50,return_sequences=False),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(units=horizon),\n])\nGRU_model.compile(optimizer='adam', loss='mse')", "_____no_output_____" ], [ "# save model\n\nmodel_path = r'\\Chapter_7\\GRU_Multivariate.h5\"", "_____no_output_____" ], [ "# train the model\n\nEVALUATION_INTERVAL = 150\nEPOCHS = 150\nhistory = GRU_model.fit(train_data_multi, epochs=EPOCHS,steps_per_epoch=EVALUATION_INTERVAL,validation_data=val_data_multi, validation_steps=50,verbose =1,\n callbacks =[tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1, mode='min'),tf.keras.callbacks.ModelCheckpoint(model_path,monitor='val_loss', save_best_only=True, mode='min', verbose=0)])", "Epoch 1/150\n150/150 [==============================] - 5s 14ms/step - loss: 0.0248 - val_loss: 0.0186\nEpoch 2/150\n150/150 [==============================] - 2s 16ms/step - loss: 0.0093 - val_loss: 0.0053\nEpoch 3/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0065 - val_loss: 0.0086\nEpoch 4/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0062 - val_loss: 0.0100\nEpoch 5/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0038 - val_loss: 0.0013\nEpoch 6/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0024 - val_loss: 6.8226e-04\nEpoch 7/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0016 - val_loss: 8.7281e-04\nEpoch 8/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0021 - val_loss: 8.3235e-04\nEpoch 9/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0016 - val_loss: 5.1706e-04\nEpoch 10/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0012 - val_loss: 4.9013e-04\nEpoch 11/150\n150/150 [==============================] - 2s 13ms/step - loss: 8.2511e-04 - val_loss: 2.5482e-04\nEpoch 12/150\n150/150 [==============================] - 2s 11ms/step - loss: 9.7859e-04 - val_loss: 4.1009e-04\nEpoch 13/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0015 - val_loss: 3.6558e-04\nEpoch 14/150\n150/150 [==============================] - 2s 11ms/step - loss: 6.7184e-04 - val_loss: 2.5571e-04\nEpoch 15/150\n150/150 [==============================] - 2s 11ms/step - loss: 5.3513e-04 - val_loss: 2.0933e-04\nEpoch 16/150\n150/150 [==============================] - 2s 11ms/step - loss: 
4.1639e-04 - val_loss: 2.0937e-04\nEpoch 17/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0013 - val_loss: 2.9999e-04\nEpoch 18/150\n150/150 [==============================] - 2s 11ms/step - loss: 4.6430e-04 - val_loss: 2.2297e-04\nEpoch 19/150\n150/150 [==============================] - 2s 11ms/step - loss: 4.1980e-04 - val_loss: 2.4682e-04\nEpoch 20/150\n150/150 [==============================] - 2s 11ms/step - loss: 2.5193e-04 - val_loss: 1.9517e-04\nEpoch 21/150\n150/150 [==============================] - 2s 11ms/step - loss: 0.0011 - val_loss: 1.8907e-04\nEpoch 22/150\n150/150 [==============================] - 2s 11ms/step - loss: 4.3798e-04 - val_loss: 1.8487e-04\nEpoch 23/150\n150/150 [==============================] - 2s 11ms/step - loss: 3.3531e-04 - val_loss: 1.9312e-04\nEpoch 24/150\n150/150 [==============================] - 2s 11ms/step - loss: 1.7797e-04 - val_loss: 1.8095e-04\nEpoch 25/150\n150/150 [==============================] - 2s 11ms/step - loss: 9.4058e-04 - val_loss: 2.5006e-04\nEpoch 26/150\n150/150 [==============================] - 2s 11ms/step - loss: 3.8869e-04 - val_loss: 1.6935e-04\nEpoch 27/150\n150/150 [==============================] - 2s 11ms/step - loss: 2.9030e-04 - val_loss: 1.9110e-04\nEpoch 28/150\n150/150 [==============================] - 2s 11ms/step - loss: 1.2934e-04 - val_loss: 1.7248e-04\nEpoch 29/150\n150/150 [==============================] - 2s 11ms/step - loss: 8.7321e-04 - val_loss: 1.7163e-04\nEpoch 30/150\n150/150 [==============================] - 2s 11ms/step - loss: 3.7644e-04 - val_loss: 1.7861e-04\nEpoch 31/150\n150/150 [==============================] - 2s 11ms/step - loss: 2.7255e-04 - val_loss: 1.8099e-04\nEpoch 32/150\n150/150 [==============================] - 2s 11ms/step - loss: 1.1130e-04 - val_loss: 1.9780e-04\nEpoch 33/150\n150/150 [==============================] - 2s 11ms/step - loss: 7.8181e-04 - val_loss: 2.1408e-04\nEpoch 34/150\n150/150 [==============================] - 2s 11ms/step - loss: 4.4683e-04 - val_loss: 1.9704e-04\nEpoch 35/150\n150/150 [==============================] - 2s 11ms/step - loss: 2.5854e-04 - val_loss: 1.7526e-04\nEpoch 36/150\n150/150 [==============================] - 2s 11ms/step - loss: 9.9427e-05 - val_loss: 2.4284e-04\nEpoch 00036: early stopping\n" ], [ "# Load the saved model\n\nTrained_model = tf.keras.models.load_model(model_path)", "_____no_output_____" ], [ "# Show the model architecture\nTrained_model.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ngru (GRU) (None, 48, 100) 52500 \n_________________________________________________________________\ndropout (Dropout) (None, 48, 100) 0 \n_________________________________________________________________\ngru_1 (GRU) (None, 50) 22800 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 50) 0 \n_________________________________________________________________\ndense (Dense) (None, 12) 612 \n=================================================================\nTotal params: 75,912\nTrainable params: 75,912\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "# plot our train and validation loss\n\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train loss', 
'validation loss'], loc='upper left')\nplt.rcParams[\"figure.figsize\"] = [16,9]\nplt.show()", "_____no_output_____" ], [ "# Prepare the testing data for the last 48 hrs and check the prediction against it\n# by visualizing the actual and predicted values. \n\ndata_val = x_scaler.fit_transform(df_1.tail(48))", "_____no_output_____" ], [ "val_rescaled = data_val.reshape(1, data_val.shape[0], data_val.shape[1])", "_____no_output_____" ], [ "Predicted_results = Trained_model.predict(val_rescaled)", "_____no_output_____" ], [ "Predicted_results", "_____no_output_____" ], [ "Predicted_results_Inv_trans = y_scaler.inverse_transform(Predicted_results.reshape(-1,1))", "_____no_output_____" ], [ "Predicted_results_Inv_trans", "_____no_output_____" ], [ "# Finally, evaluate the result with standard performance metrics.\n\nfrom sklearn import metrics\ndef timeseries_evaluation_metrics_func(y_true, y_pred):\n \n def mean_absolute_percentage_error(y_true, y_pred): \n y_true, y_pred = np.array(y_true), np.array(y_pred)\n return np.mean(np.abs((y_true - y_pred) / y_true)) * 100\n print('Evaluation metric results:-')\n print(f'MSE is : {metrics.mean_squared_error(y_true, y_pred)}')\n print(f'MAE is : {metrics.mean_absolute_error(y_true, y_pred)}')\n print(f'RMSE is : {np.sqrt(metrics.mean_squared_error(y_true, y_pred))}')\n print(f'MAPE is : {mean_absolute_percentage_error(y_true, y_pred)}')\n print(f'R2 is : {metrics.r2_score(y_true, y_pred)}',end='\\n\\n')", "_____no_output_____" ], [ "# Results of the metrics\n\ntimeseries_evaluation_metrics_func(validate['rebap_eur_mwh'],Predicted_results_Inv_trans)", "Evaluation metric results:-\nMSE is : 6295.365572272652\nMAE is : 73.66418577194213\nRMSE is : 79.3433398104255\nMAPE is : 231.43888414568875\nR2 is : -0.8900885986758531\n\n" ], [ "# Plot the actual vs predicted data\n\nplt.plot( list(validate['rebap_eur_mwh']))\nplt.plot( list(Predicted_results_Inv_trans))\nplt.title(\"Actual vs Predicted\")\nplt.ylabel(\"rebap_eur_mwh\")\nplt.legend(('Actual','predicted'))\nplt.show()", "_____no_output_____" ] ] ]
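A tiny synthetic illustration of the window/horizon slicing that custom_ts_multi_data_prep performs in the GRU notebook above, with no TensorFlow dependency; the array sizes and the make_windows name are invented for the example. Two editorial notes on the original cells: the model_path assignment appears to mix an opening r' quote with a closing " quote and would not parse as shown, and re-fitting the scaler on the 48-hour test slice (x_scaler.fit_transform(df_1.tail(48))) rather than reusing the training-time scaling may account for part of the weak test metrics.

```python
import numpy as np

def make_windows(features, target, window, horizon):
    """Slice a multivariate series into (window, horizon) training pairs,
    mirroring the indexing of custom_ts_multi_data_prep above."""
    X, y = [], []
    for i in range(window, len(features) - horizon):
        X.append(features[i - window:i])         # the past `window` rows of features
        y.append(target[i + 1:i + 1 + horizon])  # the next `horizon` target values
    return np.array(X), np.array(y)

# Synthetic stand-in for the merged dataframe: 200 steps, 6 features.
n_steps, n_features = 200, 6
features = np.random.rand(n_steps, n_features)
target = np.random.rand(n_steps)

X, y = make_windows(features, target, window=48, horizon=12)
print(X.shape)  # (140, 48, 6) -> samples, window length, features
print(y.shape)  # (140, 12)    -> samples, horizon

model_path = "GRU_Multivariate.h5"  # a consistently quoted path avoids the parse problem
```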
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d77cd09a7a1828de722184a98564a933f61ec4
2,162
ipynb
Jupyter Notebook
Merge Intervals/Merge_Intervals.ipynb
LucasColas/Coding-Problems
a0d1f375fbde66c0d7d01f976a6c010c914939c1
[ "Apache-2.0" ]
null
null
null
Merge Intervals/Merge_Intervals.ipynb
LucasColas/Coding-Problems
a0d1f375fbde66c0d7d01f976a6c010c914939c1
[ "Apache-2.0" ]
null
null
null
Merge Intervals/Merge_Intervals.ipynb
LucasColas/Coding-Problems
a0d1f375fbde66c0d7d01f976a6c010c914939c1
[ "Apache-2.0" ]
null
null
null
27.025
99
0.477336
[ [ [ "# Merge intervals \n\n\nGiven an array of intervals where intervals[i] = [starti, endi], \nmerge all overlapping intervals, and return an array of the non-overlapping \nintervals that cover all the intervals in the input.\n\nFrom Leetcode : https://leetcode.com/problems/merge-intervals/\n", "_____no_output_____" ] ], [ [ "from operator import itemgetter\n\ndef merge_Intervals(intervals):\n merged_intervals = []\n sorted_intervals = sorted(intervals, key=itemgetter(0))\n \n for i in range(len(sorted_intervals)):\n if len(merged_intervals) == 0 or merged_intervals[-1][1] < sorted_intervals[i][0]:\n merged_intervals.append(sorted_intervals[i])\n\n else:\n merged_intervals[-1][1] = max(merged_intervals[-1][1], sorted_intervals[i][1])\n\n return merged_intervals\n\n \n\nmerge_Intervals([[1,4],[4,5]])\n", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
e7d78318052929bbb2d4703a086bf03cf78dc22c
144,269
ipynb
Jupyter Notebook
5. Appendix_data normalization.ipynb
YXChen512/Churn_Analysis_music_player_app
2068046242883539ed24de080af1603384485df7
[ "MIT" ]
1
2021-07-07T12:31:58.000Z
2021-07-07T12:31:58.000Z
5. Appendix_data normalization.ipynb
YXChen512/Churn_Analysis_music_player_app
2068046242883539ed24de080af1603384485df7
[ "MIT" ]
null
null
null
5. Appendix_data normalization.ipynb
YXChen512/Churn_Analysis_music_player_app
2068046242883539ed24de080af1603384485df7
[ "MIT" ]
null
null
null
127.333628
12,568
0.825673
[ [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n% matplotlib inline\nplt.style.use('ggplot')", "_____no_output_____" ], [ "feature_pickle = 'C:\\\\Users\\\\Sean\\\\Documents\\\\BitTiger\\\\Capston_music_player_python\\\\modified_features_and_label.pkl'\ndf = pd.read_pickle(feature_pickle)\ndf.head()", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "df_to_normalize = df.loc[:,['total_play_time','popular_songs_ratio', 'avg_play_time', \\\n 'least_popular_count','most_popular_count', 'count_play_23']]\ndf_to_normalize.describe()", "_____no_output_____" ], [ "import math\nmath.log(100,10)", "_____no_output_____" ], [ "df_to_normalize.least_popular_count.apply(lambda x : math.log(x+0.1,10)).plot.hist(bins = 100)", "_____no_output_____" ], [ "df_least_popular = df.loc[:,['least_popular_count','label']]\ndf_least_popular.head()", "_____no_output_____" ], [ "df_least_popular = df.loc[:,['least_popular_count','label']]\n\ndf_least_popular['bins'] = list(pd.cut(df_least_popular.least_popular_count,bins = [0,1,10.1,1500],\n labels = ['0','1-10','>10'],\n right=False, retbins=False, precision=1, include_lowest=True))\n\ndf_least_popular.head()", "_____no_output_____" ], [ "#df_least_popular.loc[df_least_popular.bins.isnull(),'bins'] = '10+'\n#df_least_popular.head()", "_____no_output_____" ], [ "plt.rcParams[\"figure.figsize\"] = (8,4)\nax = df_least_popular.groupby(['bins','label']).size().unstack().fillna(0).plot.bar(alpha = 0.8, color = ['b','r'])\nplt.rcParams[\"figure.figsize\"] = (8,4)\nplt.show()", "_____no_output_____" ], [ "df_to_normalize.avg_play_time.apply(lambda x : math.log(x+0.1,2)).plot.hist(bins = 100)", "_____no_output_____" ], [ "df_to_normalize.total_play_time.apply(lambda x : math.log(x+0.1,10)).plot.hist(bins = 100)", "_____no_output_____" ], [ "df_to_normalize.most_popular_count.apply(lambda x : math.log(x+1,10)).plot.hist(bins = 100)", "_____no_output_____" ], [ "df_to_normalize.count_play_23.apply(lambda x : math.log(x,10)).plot.hist(bins = 100)", "_____no_output_____" ], [ "plt.figure();\n#plt.xlim(0,5000)\ndf.avg_complete_ratio.apply(lambda x : math.log(1.01-x,10)+2).plot.hist(bins = 100)", "_____no_output_____" ], [ "plt.figure();\n#plt.xlim(0,5000)\ndf.popular_songs_ratio.apply(lambda x : math.log(x+0.01,10)+2).plot.hist(bins = 100)", "_____no_output_____" ], [ "df.loc[df.popular_songs_ratio==0,'label'].mean(),df.loc[df.popular_songs_ratio>0,'label'].mean()", "_____no_output_____" ], [ "plt.figure();\n#plt.xlim(0,5000)\ndf.days_since_last_play.plot.hist(bins = 25,width = 0.5)", "_____no_output_____" ], [ "df.groupby('days_since_last_play').mean()['label']", "_____no_output_____" ], [ "plt.plot(df.groupby('days_since_last_play').mean().index, df.groupby('days_since_last_play').mean()['label'].values,'ro')\nplt.axis([0.5, 22.5, -0.1, 1.1])\nplt.show()", "_____no_output_____" ], [ "df.loc[df.ratio_3_over_14==0,'label'].mean(), df.loc[df.ratio_3_over_14>0,'label'].mean()", "_____no_output_____" ], [ "np.linspace(-0.5,10.5,12)", "_____no_output_____" ], [ "filename_pickle = 'C:\\\\Users\\\\Sean\\\\Documents\\\\BitTiger\\\\Capston_music_player_python\\\\features_after_RandForest.pkl'\ndf_new = pd.read_pickle(feature_pickle)\ndf_new.head()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d78de8e4b78bdb32a46f9e2efbf6da204d83db
22,525
ipynb
Jupyter Notebook
08 - Lists.ipynb
dschenck/Python-crash-course
260a76512f450b381d2f19623f2ab510d96b540c
[ "MIT" ]
4
2019-05-04T00:46:42.000Z
2020-08-07T10:05:41.000Z
08 - Lists.ipynb
dschenck/Python-crash-course
260a76512f450b381d2f19623f2ab510d96b540c
[ "MIT" ]
null
null
null
08 - Lists.ipynb
dschenck/Python-crash-course
260a76512f450b381d2f19623f2ab510d96b540c
[ "MIT" ]
null
null
null
21.555024
784
0.501265
[ [ [ "# Lists\nA list is an ordered (not necessarily sorted) sequence of values.", "_____no_output_____" ] ], [ [ "#You create a new list using square brackets\nprimes = []\nprint(primes)", "[]\n" ], [ "type(primes)", "_____no_output_____" ], [ "#Create a list with some values\nprimes = [2, 3, 5, 7, 11, 13, 17, 19]\nprint(primes)", "[2, 3, 5, 7, 11, 13, 17, 19]\n" ] ], [ [ "## 1. Operations", "_____no_output_____" ] ], [ [ "#Concatenate two lists: combine two lists to create a new list\nevens = [2, 4, 6, 8]\nodds = [1, 3, 5, 7, 9]\n\nnumbers = evens + odds\nprint(numbers)", "[2, 4, 6, 8, 1, 3, 5, 7, 9]\n" ], [ "#Check whether a value is in a list\nprint(5 in [1, 2, 3, 4, 5])\nprint(1 in primes)", "True\nFalse\n" ], [ "#Sequence repetition\nripples = [1,2,3] * 3\n\nprint(ripples)", "[1, 2, 3, 1, 2, 3, 1, 2, 3]\n" ] ], [ [ "## 2. Built-in function", "_____no_output_____" ] ], [ [ "print(\"Maximum:\", max(primes))\nprint(\"Minimum:\", min(primes))\nprint(len(primes), \"items\")\nprint(\"Sum of items\", sum(primes))", "Maximum: 19\nMinimum: 2\n8 items\nSum of items 77\n" ], [ "#But of course, the values of the list must support summation\n#This will not work\nprint(sum([\"David\", \"Celine\", \"Camille\"]))", "_____no_output_____" ], [ "#If your list is list of boolean values, you can use the any and all\n#All returns True is all the values are True\n#Any returns True if one is at least True\n\nx = [True, True, True]\ny = [True, False, True]\n\nprint(any(x))\nprint(any(y))\nprint(all(x))\nprint(all(y))", "True\nTrue\nTrue\nFalse\n" ] ], [ [ "## 3. Indexing", "_____no_output_____" ] ], [ [ "#You can access to a particular value of a list via its index\n#Note that in Python, the first element has index 0\n#Hence the last element has index = len(x) - 1\n\nprint(primes[0])\nprint(primes[1])\nprint(primes[3])", "2\n3\n7\n" ], [ "#Negative indices allow you to go from the end of the list\nprint(primes[-1])\nprint(primes[-2])\nprint(primes[-5])", "19\n17\n7\n" ], [ "#By definition therefore, the below is True\nprint(primes[0] == primes[-len(primes)])", "True\n" ] ], [ [ "## 4. Slicing", "_____no_output_____" ] ], [ [ "#Recall the previous list of prime numbers\nprimes = [2, 3, 5, 7, 11, 13, 17, 19]", "_____no_output_____" ], [ "#Slicing allows you to take a slice - or a chop - of the list \n#and return a new list containing the elements of your slice\n#The second value of your slice is excluded\n\n#From the first (index 0) to the fourth (index 3) value\nx = primes[0:4]\nprint(x)", "[2, 3, 5, 7]\n" ], [ "#from the 4th (index 3) to the last element (index len(primes) - 1)\ny = primes[3:len(primes)]\nprint(y)", "[7, 11, 13, 17, 19]\n" ], [ "#Slicing and indexing is not the same! \n#Indexing allows you to access a value at a given position\n#Slicing takes a piece of your list\n\nprint(primes[0]) #Prints the first element\nprint(primes[0:1]) #Prints a new list containing 1 element: the first", "2\n[2]\n" ], [ "#You can use negative indices in your slices too! 
\nprint(primes[-5:-1]) #prints the first to the last (excluded)\nprint(primes[-len(primes):3]) #prints the first to the fourth (excluded)", "[7, 11, 13, 17]\n[2, 3, 5]\n" ], [ "#Omit an index in your slice, and you get sensible default\nprint(primes[:3]) #from the first to the fourth (excluded)\nprint(primes[2:]) #from the second to the last (included)\nprint(primes[:]) #form the first to the last (included)", "[2, 3, 5]\n[5, 7, 11, 13, 17, 19]\n[2, 3, 5, 7, 11, 13, 17, 19]\n" ], [ "#As above, it also works with negative numbers\nprint(primes[-4:]) #four last ones\nprint(primes[:-5]) #from the first to the fifth-to-last (excluded)", "[11, 13, 17, 19]\n[2, 3, 5]\n" ], [ "#You can use a step!\nprint(primes[0:7:2]) #every second from first to eigth (excluded)\nprint(primes[1:8:2]) #every second from second to ninth (excluded)", "[2, 5, 11, 17]\n[3, 7, 13, 19]\n" ], [ "#Trick! Reverse the order of your list\nprint(primes[::-1]) #from the last to the first (included) using a negative step (-1)", "[19, 17, 13, 11, 7, 5, 3, 2]\n" ] ], [ [ "## 4. Methods", "_____no_output_____" ] ], [ [ "#append to the end of the list\nprimes.append(23)\nprint(primes)", "[2, 3, 5, 7, 11, 13, 17, 19, 23, 23]\n" ], [ "primes.append(25)\nprint(primes)", "[2, 3, 5, 7, 11, 13, 17, 19, 23, 23, 25]\n" ], [ "#remove the first instance of a given value\nprimes.remove(25)\nprint(primes)", "[2, 3, 5, 7, 11, 13, 17, 19, 23, 23]\n" ], [ "#if the value doesnt exist... you get an error!\nprimes.remove(99)\nprint(primes)", "_____no_output_____" ], [ "#delete a value at a position, and save it\ndeleted = primes.pop(1) #the second element\nprint(deleted)\nprint(primes)", "3\n[2, 5, 7, 11, 13, 17, 19, 23, 23]\n" ], [ "#Insert a value at a given position\nprimes.insert(1, 4)\nprint(primes)", "[2, 4, 5, 7, 11, 13, 17, 19, 23, 23]\n" ], [ "#Whoops - that should've been 3\n#Not a problem, simply reassign the value at the index\nprimes[1] = 3\nprint(primes)", "[2, 3, 5, 7, 11, 13, 17, 19, 23, 23]\n" ], [ "#Reverse the list\nprimes.reverse()\nprint(primes)", "[23, 23, 19, 17, 13, 11, 7, 5, 3, 2]\n" ], [ "#Sort the list of primes\nprimes.sort()\nprint(primes)", "[2, 3, 5, 7, 11, 13, 17, 19, 23, 23]\n" ] ], [ [ "## 5. Iteration", "_____no_output_____" ] ], [ [ "i = 0 \nwhile i < len(primes):\n print(\"{} is a prime number\".format(primes[i]))\n i += 1", "2 is a prime number\n3 is a prime number\n5 is a prime number\n7 is a prime number\n11 is a prime number\n13 is a prime number\n17 is a prime number\n19 is a prime number\n23 is a prime number\n23 is a prime number\n" ], [ "#More pythonic way: use this syntax! \nfor value in primes: \n print(\"{} is still a prime number\".format(value))", "2 is still a prime number\n3 is still a prime number\n5 is still a prime number\n7 is still a prime number\n11 is still a prime number\n13 is still a prime number\n17 is still a prime number\n19 is still a prime number\n23 is still a prime number\n23 is still a prime number\n" ] ], [ [ "## Practice! ", "_____no_output_____" ], [ "### Problem 1: \nWrite a function that accepts a list of grades (from 0 to 100) and computes the average grade.", "_____no_output_____" ], [ "### Problem 2: \nWrite a function that accepts a list of words and joins them with a comma. 
For example:\n```\nwords = [\"apple\", \"pear\", \"lemons\", \"oranges\"]\n> \"apple,pear,lemons,oranges\"\n```", "_____no_output_____" ], [ "### Problem 3: \nWrite a function which returns a list of prime numbers from 2 to n", "_____no_output_____" ], [ "### Problem 4: \nWrite a function that takes two lists and returns a new list containing all the elements contained in one but not both lists. ", "_____no_output_____" ], [ "### Problem 5: \nWrite a function that takes a list of numbers and returns a new list containing all the numbers except those following a number divisible by 7. For example: \n```\nnumbers = [1,4,7,8,20,14,28,32,49]\n> [1,4,7,20,14,49] #8, 28 and 32 are removed, since they follow a number divisible by 7\n```", "_____no_output_____" ], [ "### Problem 6: \nWrite a function that computes the list of common whole divisors between two numbers `p` and `q` ", "_____no_output_____" ], [ "### Problem 7: \nWrite a function that removes all duplicate values from a list", "_____no_output_____" ], [ "### Problem 8: \nYou are faced with a difficult choice: \n- Option A: pay 700 dollars upfront for a new phone and freely choose a 19 dollar per month contract\n- Option B: get the phone for free but commit to a 49 dollars per month phone contract. \n\nAfter how many months should you prefer the first option? ", "_____no_output_____" ] ] ]
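One possible solution sketch for Problem 8 above, not the tutorial's official answer: accumulate both plans' running totals and report the first month at which the upfront-plus-cheap-contract option becomes the cheaper one. The function name and default arguments are chosen for the example.

```python
def breakeven_month(upfront_a=700, monthly_a=19, monthly_b=49):
    """First month at which option A's cumulative cost drops below option B's."""
    month = 0
    while True:
        month += 1
        if upfront_a + monthly_a * month < monthly_b * month:
            return month

print(breakeven_month())  # 24 -> from month 24 onward option A is the cheaper plan
```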
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7d7ace9995fb26a7badf2e179ce1ae4932ac0c5
39,363
ipynb
Jupyter Notebook
scr/.ipynb_checkpoints/demo_parasite_axes2-checkpoint.ipynb
RivasCalduch/IndiceReferenciaMercadoHipotecario_Visualizacion
819a8fa52ca8d3ed5b981d1631b9b1a395d38d08
[ "MIT" ]
null
null
null
scr/.ipynb_checkpoints/demo_parasite_axes2-checkpoint.ipynb
RivasCalduch/IndiceReferenciaMercadoHipotecario_Visualizacion
819a8fa52ca8d3ed5b981d1631b9b1a395d38d08
[ "MIT" ]
null
null
null
scr/.ipynb_checkpoints/demo_parasite_axes2-checkpoint.ipynb
RivasCalduch/IndiceReferenciaMercadoHipotecario_Visualizacion
819a8fa52ca8d3ed5b981d1631b9b1a395d38d08
[ "MIT" ]
null
null
null
258.967105
34,020
0.918121
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\n# Parasite axis demo\n\n\nThis example demonstrates the use of parasite axis to plot multiple datasets\nonto one single plot.\n\nNotice how in this example, *par1* and *par2* are both obtained by calling\n``twinx()``, which ties their x-limits with the host's x-axis. From there, each\nof those two axis behave separately from each other: different datasets can be\nplotted, and the y-limits are adjusted separately.\n\nNote that this approach uses the `mpl_toolkits.axes_grid1.parasite_axes`'\n`~mpl_toolkits.axes_grid1.parasite_axes.host_subplot` and\n`mpl_toolkits.axisartist.axislines.Axes`. An alternative approach using the\n`~mpl_toolkits.axes_grid1.parasite_axes`'s\n`~.mpl_toolkits.axes_grid1.parasite_axes.HostAxes` and\n`~.mpl_toolkits.axes_grid1.parasite_axes.ParasiteAxes` is the\n:doc:`/gallery/axisartist/demo_parasite_axes` example.\nAn alternative approach using the usual Matplotlib subplots is shown in\nthe :doc:`/gallery/ticks_and_spines/multiple_yaxis_with_spines` example.\n", "_____no_output_____" ] ], [ [ "from mpl_toolkits.axes_grid1 import host_subplot\nfrom mpl_toolkits import axisartist\nimport matplotlib.pyplot as plt\n\nhost = host_subplot(111, axes_class=axisartist.Axes)\nplt.subplots_adjust(right=0.75)\n\npar1 = host.twinx()\npar2 = host.twinx()\n\npar2.axis[\"right\"] = par2.new_fixed_axis(loc=\"right\", offset=(60, 0))\n\npar1.axis[\"right\"].toggle(all=True)\npar2.axis[\"right\"].toggle(all=True)\n\np1, = host.plot([0, 1, 2], [0, 1, 2], label=\"Density\")\np2, = par1.plot([0, 1, 2], [0, 3, 2], label=\"Temperature\")\np3, = par2.plot([0, 1, 2], [50, 30, 15], label=\"Velocity\")\n\nhost.set_xlim(0, 2)\nhost.set_ylim(0, 2)\npar1.set_ylim(0, 4)\npar2.set_ylim(1, 65)\n\nhost.set_xlabel(\"Distance\")\nhost.set_ylabel(\"Density\")\npar1.set_ylabel(\"Temperature\")\npar2.set_ylabel(\"Velocity\")\n\nhost.legend()\n\nhost.axis[\"left\"].label.set_color(p1.get_color())\npar1.axis[\"right\"].label.set_color(p2.get_color())\npar2.axis[\"right\"].label.set_color(p3.get_color())\n\nplt.show()", "_____no_output_____" ], [ "fig", "_____no_output_____" ], [ "html_str = mpld3.fig_to_html(fig)\nHtml_file= open(\"demo1.html\",\"w\")\nHtml_file.write(html_str)\nHtml_file.close()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7d7ad729071a0699afc3f2e8283f1a681771d8c
737,725
ipynb
Jupyter Notebook
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
6b2db614a7a357c16c1c108dfd4266dc0b2e9ea5
[ "CC-BY-4.0" ]
null
null
null
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
6b2db614a7a357c16c1c108dfd4266dc0b2e9ea5
[ "CC-BY-4.0" ]
null
null
null
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb
Jaycob-jh/course-content
6b2db614a7a357c16c1c108dfd4266dc0b2e9ea5
[ "CC-BY-4.0" ]
1
2021-08-06T08:05:01.000Z
2021-08-06T08:05:01.000Z
236.223183
131,684
0.908033
[ [ [ "<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Neuromatch Academy: Week 1, Day 2, Tutorial 2\n# Modeling Practice: Model implementation and evaluation\n__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm\n\n__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom", "_____no_output_____" ], [ "---\n# Tutorial objectives\n\nWe are investigating a simple phenomena, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: \n\n**Framing the question**\n\n1. finding a phenomenon and a question to ask about it\n2. understanding the state of the art\n3. determining the basic ingredients\n4. formulating specific, mathematically defined hypotheses\n\n**Implementing the model**\n\n5. selecting the toolkit\n6. planning the model\n7. implementing the model\n\n**Model testing**\n\n8. completing the model\n9. testing and evaluating the model\n\n**Publishing**\n\n10. publishing models\n\nWe did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook).", "_____no_output_____" ], [ "# Setup\n\n", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nfrom scipy.stats import gamma\nfrom IPython.display import YouTubeVideo", "_____no_output_____" ], [ "# @title Figure settings\nimport ipywidgets as widgets\n\n%config InlineBackend.figure_format = 'retina'\n\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")", "_____no_output_____" ], [ "# @title Helper functions\n\n\ndef my_moving_window(x, window=3, FUN=np.mean):\n \"\"\"\n Calculates a moving estimate for a signal\n\n Args:\n x (numpy.ndarray): a vector array of size N\n window (int): size of the window, must be a positive integer\n FUN (function): the function to apply to the samples in the window\n\n Returns:\n (numpy.ndarray): a vector array of size N, containing the moving\n average of x, calculated with a window of size window\n\n There are smarter and faster solutions (e.g. using convolution) but this\n function shows what the output really means. This function skips NaNs, and\n should not be susceptible to edge effects: it will simply use\n all the available samples, which means that close to the edges of the\n signal or close to NaNs, the output will just be based on fewer samples. 
By\n default, this function will apply a mean to the samples in the window, but\n this can be changed to be a max/min/median or other function that returns a\n single numeric value based on a sequence of values.\n \"\"\"\n\n # if data is a matrix, apply filter to each row:\n if len(x.shape) == 2:\n output = np.zeros(x.shape)\n for rown in range(x.shape[0]):\n output[rown, :] = my_moving_window(x[rown, :],\n window=window, FUN=FUN)\n return output\n\n # make output array of the same size as x:\n output = np.zeros(x.size)\n\n # loop through the signal in x\n for samp_i in range(x.size):\n\n values = []\n\n # loop through the window:\n for wind_i in range(int(1 - window), 1):\n\n if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):\n # out of range\n continue\n\n # sample is in range and not nan, use it:\n if not(np.isnan(x[samp_i + wind_i])):\n values += [x[samp_i + wind_i]]\n\n # calculate the mean in the window for this point in the output:\n output[samp_i] = FUN(values)\n\n return output\n\n\ndef my_plot_percepts(datasets=None, plotconditions=False):\n\n if isinstance(datasets, dict):\n # try to plot the datasets\n # they should be named...\n # 'expectations', 'judgments', 'predictions'\n\n plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really\n\n plt.ylabel('perceived self motion [m/s]')\n plt.xlabel('perceived world motion [m/s]')\n plt.title('perceived velocities')\n\n # loop through the entries in datasets\n # plot them in the appropriate way\n for k in datasets.keys():\n if k == 'expectations':\n\n expect = datasets[k]\n plt.scatter(expect['world'], expect['self'], marker='*',\n color='xkcd:green', label='my expectations')\n\n elif k == 'judgments':\n\n judgments = datasets[k]\n\n for condition in np.unique(judgments[:, 0]):\n c_idx = np.where(judgments[:, 0] == condition)[0]\n cond_self_motion = judgments[c_idx[0], 1]\n cond_world_motion = judgments[c_idx[0], 2]\n if cond_world_motion == -1 and cond_self_motion == 0:\n c_label = 'world-motion condition judgments'\n elif cond_world_motion == 0 and cond_self_motion == 1:\n c_label = 'self-motion condition judgments'\n else:\n c_label = f\"condition [{condition:d}] judgments\"\n\n plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],\n label=c_label, alpha=0.2)\n\n elif k == 'predictions':\n\n predictions = datasets[k]\n\n for condition in np.unique(predictions[:, 0]):\n c_idx = np.where(predictions[:, 0] == condition)[0]\n cond_self_motion = predictions[c_idx[0], 1]\n cond_world_motion = predictions[c_idx[0], 2]\n if cond_world_motion == -1 and cond_self_motion == 0:\n c_label = 'predicted world-motion condition'\n elif cond_world_motion == 0 and cond_self_motion == 1:\n c_label = 'predicted self-motion condition'\n else:\n c_label = f\"condition [{condition:d}] prediction\"\n\n plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],\n marker='x', label=c_label)\n\n else:\n print(\"datasets keys should be 'hypothesis', \\\n 'judgments' and 'predictions'\")\n\n if plotconditions:\n # this code is simplified but only works for the dataset we have:\n plt.scatter([1], [0], marker='<', facecolor='none',\n edgecolor='xkcd:black', linewidths=2,\n label='world-motion stimulus', s=80)\n plt.scatter([0], [1], marker='>', facecolor='none',\n edgecolor='xkcd:black', linewidths=2,\n label='self-motion stimulus', s=80)\n\n plt.legend(facecolor='xkcd:white')\n plt.show()\n\n else:\n if datasets is not None:\n print('datasets argument should be a dict')\n raise TypeError\n\n\ndef my_plot_stimuli(t, a, v):\n 
plt.figure(figsize=(10, 6))\n plt.plot(t, a, label='acceleration [$m/s^2$]')\n plt.plot(t, v, label='velocity [$m/s$]')\n plt.xlabel('time [s]')\n plt.ylabel('[motion]')\n plt.legend(facecolor='xkcd:white')\n plt.show()\n\n\ndef my_plot_motion_signals():\n dt = 1 / 10\n a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)\n t = np.arange(0, 10, dt)\n v = np.cumsum(a * dt)\n\n fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',\n sharey='row', figsize=(14, 6))\n fig.suptitle('Sensory ground truth')\n\n ax1.set_title('world-motion condition')\n ax1.plot(t, -v, label='visual [$m/s$]')\n ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')\n ax1.set_xlabel('time [s]')\n ax1.set_ylabel('motion')\n ax1.legend(facecolor='xkcd:white')\n\n ax2.set_title('self-motion condition')\n ax2.plot(t, -v, label='visual [$m/s$]')\n ax2.plot(t, a, label='vestibular [$m/s^2$]')\n ax2.set_xlabel('time [s]')\n ax2.set_ylabel('motion')\n ax2.legend(facecolor='xkcd:white')\n\n plt.show()\n\n\ndef my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,\n addaverages=False, integrateVestibular=False,\n addGroundTruth=False):\n\n if addGroundTruth:\n dt = 1 / 10\n a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)\n t = np.arange(0, 10, dt)\n v = a\n\n wm_idx = np.where(judgments[:, 0] == 0)\n sm_idx = np.where(judgments[:, 0] == 1)\n\n opticflow = opticflow.transpose()\n wm_opticflow = np.squeeze(opticflow[:, wm_idx])\n sm_opticflow = np.squeeze(opticflow[:, sm_idx])\n\n if integrateVestibular:\n vestibular = np.cumsum(vestibular * .1, axis=1)\n if addGroundTruth:\n v = np.cumsum(a * dt)\n\n vestibular = vestibular.transpose()\n wm_vestibular = np.squeeze(vestibular[:, wm_idx])\n sm_vestibular = np.squeeze(vestibular[:, sm_idx])\n\n X = np.arange(0, 10, .1)\n\n fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',\n sharey='row', figsize=(15, 10))\n fig.suptitle('Sensory signals')\n\n my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)\n my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')\n if addGroundTruth:\n my_axes[0][0].plot(t, -v, color='xkcd:red')\n if addaverages:\n my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),\n color='xkcd:red', alpha=1)\n my_axes[0][0].set_title('optic-flow in world-motion condition')\n my_axes[0][0].set_ylabel('velocity signal [$m/s$]')\n\n my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)\n my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')\n if addGroundTruth:\n my_axes[0][1].plot(t, -v, color='xkcd:blue')\n if addaverages:\n my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),\n color='xkcd:blue', alpha=1)\n my_axes[0][1].set_title('optic-flow in self-motion condition')\n\n my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)\n my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')\n if addaverages:\n my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),\n color='xkcd:red', alpha=1)\n my_axes[1][0].set_title('vestibular signal in world-motion condition')\n if addGroundTruth:\n my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')\n my_axes[1][0].set_xlabel('time [s]')\n if integrateVestibular:\n my_axes[1][0].set_ylabel('velocity signal [$m/s$]')\n else:\n my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')\n\n my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)\n my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')\n if addGroundTruth:\n my_axes[1][1].plot(t, v, color='xkcd:blue')\n if addaverages:\n my_axes[1][1].plot(X, 
np.average(sm_vestibular, axis=1),\n color='xkcd:blue', alpha=1)\n my_axes[1][1].set_title('vestibular signal in self-motion condition')\n my_axes[1][1].set_xlabel('time [s]')\n\n if returnaxes:\n return my_axes\n else:\n plt.show()\n\n\ndef my_threshold_solution(selfmotion_vel_est, threshold):\n is_move = (selfmotion_vel_est > threshold)\n return is_move\n\n\ndef my_moving_threshold(selfmotion_vel_est, thresholds):\n\n pselfmove_nomove = np.empty(thresholds.shape)\n pselfmove_move = np.empty(thresholds.shape)\n prop_correct = np.empty(thresholds.shape)\n pselfmove_nomove[:] = np.NaN\n pselfmove_move[:] = np.NaN\n prop_correct[:] = np.NaN\n\n for thr_i, threshold in enumerate(thresholds):\n\n # run my_threshold that the students will write:\n try:\n is_move = my_threshold(selfmotion_vel_est, threshold)\n except Exception:\n is_move = my_threshold_solution(selfmotion_vel_est, threshold)\n\n # store results:\n pselfmove_nomove[thr_i] = np.mean(is_move[0:100])\n pselfmove_move[thr_i] = np.mean(is_move[100:200])\n\n # calculate the proportion classified correctly:\n # (1-pselfmove_nomove) + ()\n # Correct rejections:\n p_CR = (1 - pselfmove_nomove[thr_i])\n # correct detections:\n p_D = pselfmove_move[thr_i]\n\n # this is corrected for proportion of trials in each condition:\n prop_correct[thr_i] = (p_CR + p_D) / 2\n\n return [pselfmove_nomove, pselfmove_move, prop_correct]\n\n\ndef my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):\n\n plt.figure(figsize=(12, 8))\n plt.title('threshold effects')\n plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',\n color='xkcd:black')\n plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',\n color='xkcd:black')\n plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',\n color='xkcd:black')\n plt.plot(thresholds, world_prop, label='world motion condition')\n plt.plot(thresholds, self_prop, label='self motion condition')\n plt.plot(thresholds, prop_correct, color='xkcd:purple',\n label='correct classification')\n plt.xlabel('threshold')\n plt.ylabel('proportion correct or classified as self motion')\n plt.legend(facecolor='xkcd:white')\n plt.show()\n\n\ndef my_plot_predictions_data(judgments, predictions):\n\n # conditions = np.concatenate((np.abs(judgments[:, 1]),\n # np.abs(judgments[:, 2])))\n # veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))\n # velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))\n\n # self:\n # conditions_self = np.abs(judgments[:, 1])\n veljudgmnt_self = judgments[:, 3]\n velpredict_self = predictions[:, 3]\n\n # world:\n # conditions_world = np.abs(judgments[:, 2])\n veljudgmnt_world = judgments[:, 4]\n velpredict_world = predictions[:, 4]\n\n fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',\n figsize=(12, 5))\n\n ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)\n ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')\n ax1.set_title('self-motion judgments')\n ax1.set_xlabel('observed')\n ax1.set_ylabel('predicted')\n\n ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)\n ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')\n ax2.set_title('world-motion judgments')\n ax2.set_xlabel('observed')\n ax2.set_ylabel('predicted')\n\n plt.show()", "_____no_output_____" ], [ "# @title Data retrieval\nimport os\nfname=\"W1D2_data.npz\"\nif not os.path.exists(fname):\n !wget https://osf.io/c5xyf/download -O $fname\n\nfilez = np.load(file=fname, allow_pickle=True)\njudgments = filez['judgments']\nopticflow = filez['opticflow']\nvestibular = 
filez['vestibular']", "_____no_output_____" ] ], [ [ "---\n# Section 6: Model planning", "_____no_output_____" ] ], [ [ "# @title Video 6: Planning\nfrom IPython.display import IFrame\nclass BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"//player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\nvideo = BiliVideo(id='BV1nC4y1h7yL', width=854, height=480, fs=1)\nprint(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\nvideo", "Video available at https://youtube.com/watch?v=dRTOFFigxa0\n" ] ], [ [ "\n**Goal:** Identify the key components of the model and how they work together.\n\nOur goal all along has been to model our perceptual estimates of sensory data.\nNow that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? \n\nOur model will have:\n* **inputs**: the values the system has available - this can be broken down in _data:_ the sensory signals, _parameters:_ the threshold and the window sizes for filtering\n* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made.\n* **model functions**: A set of functions that perform the hypothesized computations.\n\nWe will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.\n\n**Recap of what we've accomplished so far:**\n\nTo model perceptual estimates from our sensory data, we need to \n1. _integrate:_ to ensure sensory information are in appropriate units\n2. _filter:_ to reduce noise and set timescale\n3. _threshold:_ to model detection\n\nThis will be done with these operations:\n1. _integrate:_ `np.cumsum()`\n2. _filter:_ `my_moving_window()`\n3. _threshold:_ `if` with a comparison (`>` or `<`) and `else`\n\n**_Planning our model:_**\n\nWe will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work:\n\n![model functions purpose](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig05.png)\n\nBelow is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. \n\nThe model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). 
\n\nThe goal of this function is to define the top level of a simulation model which:\n* receives all input\n* loops through the cases\n* calls functions that computes predicted values for each case\n* outputs the predictions", "_____no_output_____" ], [ "**Main model function**", "_____no_output_____" ] ], [ [ "def my_train_illusion_model(sensorydata, params):\n \"\"\"\n Generate output predictions of perceived self-motion and perceived\n world-motion velocity based on input visual and vestibular signals.\n\n Args:\n\n sensorydata: (dict) dictionary with two named entries:\n opticflow: (numpy.ndarray of float) NxM array with N trials on rows\n and M visual signal samples in columns\n\n vestibular: (numpy.ndarray of float) NxM array with N trials on rows\n and M vestibular signal samples in columns\n\n params: (dict) dictionary with named entries:\n threshold: (float) vestibular threshold for credit assignment\n\n filterwindow: (list of int) determines the strength of filtering for\n the visual and vestibular signals, respectively\n\n integrate (bool): whether to integrate the vestibular signals, will\n be set to True if absent\n\n FUN (function): function used in the filter, will be set to\n np.mean if absent\n\n samplingrate (float): the number of samples per second in the\n sensory data, will be set to 10 if absent\n\n Returns:\n\n dict with two entries:\n\n selfmotion: (numpy.ndarray) vector array of length N, with predictions\n of perceived self motion\n\n worldmotion: (numpy.ndarray) vector array of length N, with predictions\n of perceived world motion\n \"\"\"\n\n # sanitize input a little\n if not('FUN' in params.keys()):\n params['FUN'] = np.mean\n if not('integrate' in params.keys()):\n params['integrate'] = True\n if not('samplingrate' in params.keys()):\n params['samplingrate'] = 10\n\n # number of trials:\n ntrials = sensorydata['opticflow'].shape[0]\n\n # set up variables to collect output\n selfmotion = np.empty(ntrials)\n worldmotion = np.empty(ntrials)\n\n # loop through trials?\n for trialN in range(ntrials):\n\n # these are our sensory variables (inputs)\n vis = sensorydata['opticflow'][trialN, :]\n ves = sensorydata['vestibular'][trialN, :]\n\n # generate output predicted perception:\n selfmotion[trialN],\\\n worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,\n params=params)\n\n return {'selfmotion': selfmotion, 'worldmotion': worldmotion}\n\n\n# here is a mock version of my_perceived motion.\n# so you can test my_train_illusion_model()\ndef my_perceived_motion(*args, **kwargs):\n return [np.nan, np.nan]\n\n\n# let's look at the preditions we generated for two sample trials (0,100)\n# we should get a 1x2 vector of self-motion prediction and another\n# for world-motion\n\nsensorydata={'opticflow': opticflow[[0, 100], :0],\n 'vestibular': vestibular[[0, 100], :0]}\nparams={'threshold': 0.33, 'filterwindows': [100, 50]}\nmy_train_illusion_model(sensorydata=sensorydata, params=params)", "_____no_output_____" ] ], [ [ "We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. 
Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.\n\n**Perceived motion function**", "_____no_output_____" ] ], [ [ "# Full perceived motion function\n\n\ndef my_perceived_motion(vis, ves, params):\n \"\"\"\n Takes sensory data and parameters and returns predicted percepts\n\n Args:\n vis (numpy.ndarray) : 1xM array of optic flow velocity data\n ves (numpy.ndarray) : 1xM array of vestibular acceleration data\n params : (dict) dictionary with named entries:\n see my_train_illusion_model() for details\n\n Returns:\n [list of floats] : prediction for perceived self-motion based on\n vestibular data, and prediction for perceived\n world-motion based on perceived self-motion and\n visual data\n \"\"\"\n\n # estimate self motion based on only the vestibular data\n # pass on the parameters\n selfmotion = my_selfmotion(ves=ves, params=params)\n\n # estimate the world motion, based on the selfmotion and visual data\n # pass on the parameters as well\n worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)\n\n return [selfmotion, worldmotion]", "_____no_output_____" ] ], [ [ "## TD 6.1: Formulate purpose of the self motion function\n\nNow we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function:\n\n* what (sensory) data is necessary? \n* what parameters does the function need, if any?\n* which operations will be performed on the input?\n* what is the output?\n\nThe number of arguments is correct.", "_____no_output_____" ], [ "**Template calculate self motion**\n\nName the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. 
Do not write any code.", "_____no_output_____" ] ], [ [ "def my_selfmotion(arg1, arg2):\n \"\"\"\n Short description of the function\n\n Args:\n argument 1: explain the format and content of the first argument\n argument 2: explain the format and content of the second argument\n\n Returns:\n what output does the function generate?\n\n Any further description?\n \"\"\"\n\n # what operations do we perform on the input?\n # use the elements from micro-tutorials 3, 4, and 5\n # 1.\n # 2.\n # 3.\n # 4.\n\n # what output should this function produce?\n return output", "_____no_output_____" ] ], [ [ "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90e4d753.py)\n\n", "_____no_output_____" ], [ "**Template calculate world motion**\n\nWe have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.", "_____no_output_____" ] ], [ [ "# World motion function\n\n\ndef my_worldmotion(vis, selfmotion, params):\n \"\"\"\n Estimates world motion based on the visual signal, the estimate of\n\n Args:\n vis (numpy.ndarray): 1xM array with the optic flow signal\n selfmotion (float): estimate of self motion\n params (dict): dictionary with named entries:\n see my_train_illusion_model() for details\n\n Returns:\n (float): an estimate of world motion in m/s\n \"\"\"\n\n # 1. running window function\n # 2. take final value\n # 3. subtract selfmotion from value\n\n # return final value\n return output", "_____no_output_____" ] ], [ [ "---\n# Section 7: Model implementation", "_____no_output_____" ] ], [ [ "# @title Video 7: Implementation\nfrom IPython.display import IFrame\nclass BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"//player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\nvideo = BiliVideo(id='BV18Z4y1u7yB', width=854, height=480, fs=1)\nprint(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\nvideo", "Video available at https://youtube.com/watch?v=DMSIt7t-LO8\n" ] ], [ [ "\n**Goal:** We write the components of the model in actual code.\n\nFor the operations we picked, there function ready to use:\n* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)\n* filtering: `my_moving_window(data, window)` (window: int, default 3)\n* take last `selfmotion` value as our estimate\n* threshold: if (value > thr): <operation 1> else: <operation 2>\n\n", "_____no_output_____" ], [ "## TD 7.1: Write code to estimate self motion\n\nUse the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world-motion below, which we've filled for you!\n\n**Template finish self motion function**", "_____no_output_____" ] ], [ [ "# Self motion function\n\n\ndef my_selfmotion(ves, params):\n \"\"\"\n Estimates self motion for one vestibular signal\n\n Args:\n ves (numpy.ndarray): 1xM array with a vestibular signal\n params (dict) : dictionary with named entries:\n see my_train_illusion_model() for details\n\n Returns:\n (float) : an estimate of self motion in m/s\n \"\"\"\n\n # uncomment the code below and fill in with your code\n\n # 1. 
integrate vestibular signal\n # ves = np.cumsum(ves * (1 / params['samplingrate']))\n\n # 2. running window function to accumulate evidence:\n # selfmotion = ... YOUR CODE HERE\n\n # 3. take final value of self-motion vector as our estimate\n # selfmotion = ... YOUR CODE HERE\n\n # 4. compare to threshold. Hint the threshodl is stored in\n # params['threshold']\n # if selfmotion is higher than threshold: return value\n # if it's lower than threshold: return 0\n\n # if YOURCODEHERE\n # selfmotion = YOURCODHERE\n\n # Comment this line when your function is ready\n raise NotImplementedError(\"Student excercise: estimate my_selfmotion\")\n\n return output", "_____no_output_____" ] ], [ [ "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_53312239.py)\n\n", "_____no_output_____" ], [ "### Interactive Demo: Unit testing\n\nTesting if the functions you wrote do what they are supposed to do is important, and known as 'unit testing'. Here we will simplify this for the `my_selfmotion()` function, by allowing varying the threshold and window size with a slider, and seeing what the distribution of self-motion estimates looks like.", "_____no_output_____" ] ], [ [ "#@title\n\n#@markdown Make sure you execute this cell to enable the widget!\n\ndef refresh(threshold=0, windowsize=100):\n\n params = {'samplingrate': 10, 'FUN': np.mean}\n params['filterwindows'] = [windowsize, 50]\n params['threshold'] = threshold\n\n selfmotion_estimates = np.empty(200)\n\n # get the estimates for each trial:\n for trial_number in range(200):\n ves = vestibular[trial_number, :]\n selfmotion_estimates[trial_number] = my_selfmotion(ves, params)\n\n plt.figure()\n plt.hist(selfmotion_estimates, bins=20)\n plt.xlabel('self-motion estimate')\n plt.ylabel('frequency')\n plt.show()\n\n\n_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))", "_____no_output_____" ] ], [ [ "**Estimate world motion**\n\nWe have completed the `my_worldmotion()` function for you below.\n\n", "_____no_output_____" ] ], [ [ "# World motion function\ndef my_worldmotion(vis, selfmotion, params):\n \"\"\"\n Short description of the function\n\n Args:\n vis (numpy.ndarray): 1xM array with the optic flow signal\n selfmotion (float): estimate of self motion\n params (dict): dictionary with named entries:\n see my_train_illusion_model() for details\n\n Returns:\n (float): an estimate of world motion in m/s\n \"\"\"\n\n # running average to smooth/accumulate sensory evidence\n visualmotion = my_moving_window(vis, window=params['filterwindows'][1],\n FUN=np.mean)\n\n # take final value\n visualmotion = visualmotion[-1]\n\n # subtract selfmotion from value\n worldmotion = visualmotion + selfmotion\n\n # return final value\n return worldmotion", "_____no_output_____" ] ], [ [ "---\n# Section 8: Model completion", "_____no_output_____" ] ], [ [ "# @title Video 8: Completion\nfrom IPython.display import IFrame\nclass BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"//player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\nvideo = BiliVideo(id='BV1YK411H7oW', width=854, height=480, fs=1)\nprint(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\nvideo", "Video available at https://youtube.com/watch?v=EM-G8YYdrDg\n" ] ], [ [ "\n**Goal:** Make sure the model can speak to the 
hypothesis. Eliminate all the parameters that do not speak to the hypothesis.\n\nNow that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.\n\nTo test this, we will run the model, store the output and plot the models' perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function).\n\n## TD 8.1: See if the model produces illusions", "_____no_output_____" ] ], [ [ "# @markdown Run to plot model predictions of motion estimates\n# prepare to run the model again:\ndata = {'opticflow': opticflow, 'vestibular': vestibular}\nparams = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}\nmodelpredictions = my_train_illusion_model(sensorydata=data, params=params)\n\n# process the data to allow plotting...\npredictions = np.zeros(judgments.shape)\npredictions[:, 0:3] = judgments[:, 0:3]\npredictions[:, 3] = modelpredictions['selfmotion']\npredictions[:, 4] = modelpredictions['worldmotion'] * -1\nmy_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)", "_____no_output_____" ] ], [ [ "**Questions:**\n\n* How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1?\n* Did you expect to see this?\n* Where do the model's predicted judgments for each of the two conditions fall?\n* How does this compare to the behavioral data?\n\nHowever, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This mean the model can help us understand the phenomenon.", "_____no_output_____" ], [ "---\n# Section 9: Model evaluation", "_____no_output_____" ] ], [ [ "# @title Video 9: Evaluation\nfrom IPython.display import IFrame\nclass BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"//player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\nvideo = BiliVideo(id='BV1uK411H7EK', width=854, height=480, fs=1)\nprint(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\nvideo", "Video available at https://youtube.com/watch?v=bWLFyobm4Rk\n" ] ], [ [ "\n**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorial 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.\n\n**Quantify model quality with $R^2$**\n\nLet's look at how well our model matches the actual judgment data.", "_____no_output_____" ] ], [ [ "# @markdown Run to plot predictions over data\nmy_plot_predictions_data(judgments, predictions)", "_____no_output_____" ] ], [ [ "When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). 
Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.\n\nWe will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: \"R-squared\"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\\rho$). Just run the chunk below:", "_____no_output_____" ] ], [ [ "# @markdown Run to calculate R^2\nconditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))\nveljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))\nvelpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))\n\nslope, intercept, r_value,\\\n p_value, std_err = stats.linregress(conditions, veljudgmnt)\nprint(f\"conditions -> judgments R^2: {r_value ** 2:0.3f}\")\n\nslope, intercept, r_value,\\\n p_value, std_err = stats.linregress(veljudgmnt, velpredict)\nprint(f\"predictions -> judgments R^2: {r_value ** 2:0.3f}\")\n", "conditions -> judgments R^2: 0.032\npredictions -> judgments R^2: 0.256\n" ] ], [ [ "These $R^2$s express how well the experimental conditions explain the participants judgments and how well the models predicted judgments explain the participants judgments.\n\nYou will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!\n\nPerhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained by the model's predictions better than by the actual conditions. 
In other words: in a certain percentage of cases the model tends to have the same illusions as the participants.", "_____no_output_____" ], [ "## TD 9.1 Varying the threshold parameter to improve the model\n\nIn the code below, see if you can find a better value for the threshold parameter, to reduce errors in the models' predictions.\n\n**Testing thresholds**", "_____no_output_____" ], [ "\n### Interactive Demo: optimizing the model", "_____no_output_____" ] ], [ [ "#@title\n\n#@markdown Make sure you execute this cell to enable the widget!\n\ndata = {'opticflow': opticflow, 'vestibular': vestibular}\n\n\ndef refresh(threshold=0, windowsize=100):\n\n # set parameters according to sliders:\n params = {'samplingrate': 10, 'FUN': np.mean}\n params['filterwindows'] = [windowsize, 50]\n params['threshold'] = threshold\n\n modelpredictions = my_train_illusion_model(sensorydata=data, params=params)\n\n predictions = np.zeros(judgments.shape)\n predictions[:, 0:3] = judgments[:, 0:3]\n predictions[:, 3] = modelpredictions['selfmotion']\n predictions[:, 4] = modelpredictions['worldmotion'] * -1\n\n # plot the predictions:\n my_plot_predictions_data(judgments, predictions)\n\n # calculate R2\n veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))\n velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))\n slope, intercept, r_value,\\\n p_value, std_err = stats.linregress(veljudgmnt, velpredict)\n\n print(f\"predictions -> judgments R^2: {r_value ** 2:0.3f}\")\n\n\n_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))", "_____no_output_____" ] ], [ [ "Varying the parameters this way, allows you to increase the models' performance in predicting the actual data as measured by $R^2$. This is called model fitting, and will be done better in the coming weeks.", "_____no_output_____" ], [ "## TD 9.2: Credit assigmnent of self motion\n\nWhen we look at the figure in **TD 8.1**, we can see a cluster does seem very close to (1,0), just like in the actual data. The cluster of points at (1,0) are from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).\n\nLet's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here.", "_____no_output_____" ], [ "### Exercise 1: function for credit assigment of self motion", "_____no_output_____" ] ], [ [ "def my_selfmotion(ves, params):\n \"\"\"\n Estimates self motion for one vestibular signal\n\n Args:\n ves (numpy.ndarray): 1xM array with a vestibular signal\n params (dict): dictionary with named entries:\n see my_train_illusion_model() for details\n\n Returns:\n (float): an estimate of self motion in m/s\n \"\"\"\n\n # integrate signal:\n ves = np.cumsum(ves * (1 / params['samplingrate']))\n\n # use running window to accumulate evidence:\n selfmotion = my_moving_window(ves, window=params['filterwindows'][0],\n FUN=params['FUN'])\n\n # take the final value as our estimate:\n selfmotion = selfmotion[-1]\n\n # compare to threshold, set to 0 if lower and else...\n if selfmotion < params['threshold']:\n selfmotion = 0\n ###########################################################################\n # Exercise: Complete credit assignment. 
Remove the next line to test your function\n else:\n selfmotion = ... #YOUR CODE HERE\n\n raise NotImplementedError(\"Modify with credit assignment\")\n ###########################################################################\n\n return selfmotion\n\n# Use the updated function to run the model and plot the data\n# Uncomment below to test your function \ndata = {'opticflow': opticflow, 'vestibular': vestibular}\nparams = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}\n#modelpredictions = my_train_illusion_model(sensorydata=data, params=params)\n\npredictions = np.zeros(judgments.shape)\npredictions[:, 0:3] = judgments[:, 0:3]\npredictions[:, 3] = modelpredictions['selfmotion']\npredictions[:, 4] = modelpredictions['worldmotion'] * -1\n#my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)", "_____no_output_____" ] ], [ [ "[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_51dce10c.py)\n\n*Example output:*\n\n<img alt='Solution hint' align='left' width=560 height=560 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/W1D2_Tutorial2_Solution_51dce10c_0.png>\n\n", "_____no_output_____" ], [ "That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously.\n\n### Interactive Demo: evaluating the model", "_____no_output_____" ] ], [ [ "#@title\n\n#@markdown Make sure you execute this cell to enable the widget!\n\ndata = {'opticflow': opticflow, 'vestibular': vestibular}\n\n\ndef refresh(threshold=0, windowsize=100):\n\n # set parameters according to sliders:\n params = {'samplingrate': 10, 'FUN': np.mean}\n params['filterwindows'] = [windowsize, 50]\n params['threshold'] = threshold\n\n modelpredictions = my_train_illusion_model(sensorydata=data, params=params)\n\n predictions = np.zeros(judgments.shape)\n predictions[:, 0:3] = judgments[:, 0:3]\n predictions[:, 3] = modelpredictions['selfmotion']\n predictions[:, 4] = modelpredictions['worldmotion'] * -1\n\n # plot the predictions:\n my_plot_predictions_data(judgments, predictions)\n\n # calculate R2\n veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))\n velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))\n slope, intercept, r_value,\\\n p_value, std_err = stats.linregress(veljudgmnt, velpredict)\n\n print(f\"predictions -> judgments R2: {r_value ** 2:0.3f}\")\n\n\n_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))", "_____no_output_____" ] ], [ [ "While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread.", "_____no_output_____" ], [ "**Interpret the model's meaning**\n\nHere's what you should have learned from model the train illusion: \n\n1. A noisy, vestibular, acceleration signal can give rise to illusory motion.\n2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.\n3. 
Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.\n\nWe decided that for now we have learned enough, so it's time to write it up.\n", "_____no_output_____" ], [ "---\n# Section 10: Model publication!", "_____no_output_____" ] ], [ [ "# @title Video 10: Publication\nfrom IPython.display import IFrame\nclass BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = \"//player.bilibili.com/player.html?bvid={0}&page={1}\".format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\nvideo = BiliVideo(id='BV1M5411e7AG', width=854, height=480, fs=1)\nprint(\"Video available at https://www.bilibili.com/video/{0}\".format(video.id))\nvideo", "Video available at https://youtube.com/watch?v=zm8x7oegN6Q\n" ] ], [ [ "\n**Goal:** In order for our model to impact the field, it needs to be accepted by our peers, and order for that to happen it matters how the model is published.", "_____no_output_____" ], [ "## TD 10.1: Write a summary of the project\n\nHere we will write up our model, by answering the following questions:\n* **What is the phenomena**? Here summarize the part of the phenomena which your model addresses.\n* **What is the key scientific question?**: Clearly articulate the question which your model tries to answer.\n* **What was our hypothesis?**: Explain the key relationships which we relied on to simulate the phenomena.\n* **How did your model work?** Give an overview of the model, it's main components, and how the model works. ''Here we ... ''\n* **What did we find? Did the model work?** Explain the key outcomes of your model evaluation. \n* **What can we conclude?** Conclude as much as you can _with reference to the hypothesis_, within the limits of the model. \n* **What did you learn? What is left to be learned?** Briefly argue the plausibility of the approach and what you think is _essential_ that may have been left out.\n\n### Guidance for the future\nThere are good guidelines for structuring and writing an effective paper (e.g., [Mensh & Kording, 2017](https://doi.org/10.1371/journal.pcbi.1005619)), all of which apply to papers about models. There are some extra considerations when publishing a model. In general, you should explain each of the steps in the paper:\n\n**Introduction:** Steps 1 & 2 (maybe 3)\n\n**Methods:** Steps 3-7, 9\n\n**Results:** Steps 8 & 9, going back to 1, 2 & 4\n\nIn addition, you should provide a visualization of the model, and upload the code implementing the model and the data it was trained and tested on to a repository (e.g. GitHub and OSF).\n\nThe audience for all of this should be experimentalists, as they are the ones who can test predictions made by your your model and collect new data. This way your models can impact future experiments, and that future data can then be modeled (see modeling process schematic below). 
Remember your audience - it is _always_ hard to clearly convey the main points of your work to others, especially if your audience doesn't necessarily create computational models themselves.\n\n![how-to-model process from Blohm et al 2019](https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D2_ModelingPractice/static/NMA-W1D2-fig06.png)\n\n### Suggestion\n\nFor every modeling project, a very good exercise in this is to _**first**_ write a short, 100-word abstract of the project plan and expected impact, like the summary you wrote. This forces focussing on the main points: describing the relevance, question, model, answer and what it all means very succinctly. This allows you to decide to do this project or not **before you commit time writing code for no good purpose**. Notice that this is really what we've walked you through carefully in this tutorial! :)\n", "_____no_output_____" ], [ "---\n# Summary\nConfatulations! You have finished Day2 of NMA! In this tutorial, we worked through the rest steps of the process of modeling.\n\n- We identified the key components of the model, and examined how they work together (step 6)\n- We implemented the model (step 7), and completed it (step 8)\n- We tested and evaluated our model (step 9), and finally\n- We learn how to publish our model in order to increase its visibility amongts our peers\n\n## Post-script\n\nNote that the model we built here was extremely simple and used artificial data on purpose. It allowed us to go through all the steps of building a model, and hopefully you noticed that it is not always a linear process, you will go back to different steps if you hit a roadblock somewhere.\n\nHowever, if you're interested in how to actually approach modeling a similar phenomenon in a probabilistic way, we encourage you to read the paper by [Dokka et. al., 2019](https://doi.org/10.1073/pnas.1820373116), where the authors model how judgments of heading direction are influenced by objects that are also moving.", "_____no_output_____" ], [ "---\n# Reading\n\nBlohm G, Kording KP, Schrater PR (2020). _A How-to-Model Guide for Neuroscience_ eNeuro, 7(1). https://doi.org/10.1523/ENEURO.0352-19.2019 \n\nDokka K, Park H, Jansen M, DeAngelis GC, Angelaki DE (2019). _Causal inference accounts for heading perception in the presence of object motion._ PNAS, 116(18):9060-9065. https://doi.org/10.1073/pnas.1820373116\n\nDrugowitsch J, DeAngelis GC, Klier EM, Angelaki DE, Pouget A (2014). _Optimal Multisensory Decision-Making in a Reaction-Time Task._ eLife, 3:e03005. https://doi.org/10.7554/eLife.03005\n\nHartmann, M, Haller K, Moser I, Hossner E-J, Mast FW (2014). _Direction detection thresholds of passive self-motion in artistic gymnasts._ Exp Brain Res, 232:1249–1258. https://doi.org/10.1007/s00221-014-3841-0\n\nMensh B, Kording K (2017). _Ten simple rules for structuring papers._ PLOS Comput Biol 13(9): e1005619. https://doi.org/10.1371/journal.pcbi.1005619\n\nSeno T, Fukuda H (2012). _Stimulus Meanings Alter Illusory Self-Motion (Vection) - Experimental Examination of the Train Illusion._ Seeing Perceiving, 25(6):631-45. https://doi.org/10.1163/18784763-00002394\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
e7d7afcfc5bf41c98c916947273993970cfa6be6
13,341
ipynb
Jupyter Notebook
Exercise 6, answers.ipynb
maeehart/TIES483
cce5c779aeb0ade5f959a2ed5cca982be5cf2316
[ "CC-BY-3.0" ]
4
2019-04-26T12:46:14.000Z
2021-11-23T03:38:59.000Z
Exercise 6, answers.ipynb
maeehart/TIES483
cce5c779aeb0ade5f959a2ed5cca982be5cf2316
[ "CC-BY-3.0" ]
null
null
null
Exercise 6, answers.ipynb
maeehart/TIES483
cce5c779aeb0ade5f959a2ed5cca982be5cf2316
[ "CC-BY-3.0" ]
6
2016-01-08T16:28:11.000Z
2021-04-10T05:18:10.000Z
32.859606
323
0.511506
[ [ [ "# Exercise 6, answers", "_____no_output_____" ], [ "## Problem 1", "_____no_output_____" ] ], [ [ "from pyomo.environ import *\n\nmodel = ConcreteModel()\n#Three variables\nmodel.x = Var([1,2,3])\n#Objective function including powers and logarithm\nmodel.OBJ = Objective(expr = log(model.x[1]**2+1)+model.x[2]**4\n +model.x[1]*model.x[3]) #Objective function\nmodel.constr = Constraint(expr = model.x[1]**3-model.x[2]**2>=1)\nmodel.box1 = Constraint(expr = model.x[1]>=0)\nmodel.box2 = Constraint(expr = model.x[3]>=0)\n\nfrom pyomo.opt import SolverFactory #Import interfaces to solvers\n\nopt = SolverFactory(\"ipopt\") #Use ipopt\n\nres = opt.solve(model, tee=True) #Solve the problem and print the output\n\nprint \"Optimal solutions is \"\nmodel.x.display()\nprint \"Objective value at the optimal solution is \"\nmodel.OBJ.display()", "\n\n******************************************************************************\nThis program contains Ipopt, a library for large-scale nonlinear optimization.\n Ipopt is released as open source code under the Eclipse Public License (EPL).\n For more information visit http://projects.coin-or.org/Ipopt\n******************************************************************************\n\nThis is Ipopt version 3.12, running with linear solver mumps.\nNOTE: Other linear solvers might be more efficient (see Ipopt documentation).\n\nNumber of nonzeros in equality constraint Jacobian...: 0\nNumber of nonzeros in inequality constraint Jacobian.: 4\nNumber of nonzeros in Lagrangian Hessian.............: 3\n\nTotal number of variables............................: 3\n variables with only lower bounds: 0\n variables with lower and upper bounds: 0\n variables with only upper bounds: 0\nTotal number of equality constraints.................: 0\nTotal number of inequality constraints...............: 3\n inequality constraints with only lower bounds: 3\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 0\n\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 0.0000000e+00 1.00e+00 5.00e-01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 2.2129049e-06 1.00e+00 1.09e+02 -1.0 1.01e+00 - 1.00e+00 9.80e-03h 1\n 2 2.4328193e-06 1.00e+00 1.55e+05 -1.0 1.00e+00 - 1.39e-01 9.90e-05h 1\n 3 1.9533370e-02 9.97e-01 9.82e+04 -1.0 1.60e+05 - 6.81e-08 9.04e-07h 1\n 4 8.5283692e-01 0.00e+00 2.91e+06 -1.0 1.57e+01 - 1.18e-02 6.25e-02f 5\n 5 7.4508982e-01 0.00e+00 1.19e+07 -1.0 1.19e-01 8.0 3.17e-04 1.00e+00f 1\n 6 7.3522284e-01 0.00e+00 6.57e+05 -1.0 9.61e-03 7.5 7.73e-01 1.00e+00h 1\n 7 7.3514688e-01 0.00e+00 9.59e+02 -1.0 7.35e-05 7.0 1.00e+00 1.00e+00h 1\n 8 7.3514746e-01 0.00e+00 3.58e+00 -1.0 9.67e-07 6.6 1.00e+00 1.00e+00h 1\n 9 7.6476859e-01 0.00e+00 3.06e-02 -1.0 2.25e-02 - 1.00e+00 1.00e+00f 1\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 10 6.9848511e-01 0.00e+00 1.98e-03 -2.5 5.37e-02 - 1.00e+00 1.00e+00f 1\n 11 6.9344736e-01 0.00e+00 1.59e-05 -3.8 4.78e-03 - 1.00e+00 1.00e+00h 1\n 12 6.9315086e-01 0.00e+00 5.53e-08 -5.7 4.18e-04 - 1.00e+00 1.00e+00h 1\n 13 6.9314717e-01 0.00e+00 8.49e-12 -8.6 5.40e-06 - 1.00e+00 1.00e+00h 1\n\nNumber of Iterations....: 13\n\n (scaled) (unscaled)\nObjective...............: 6.9314717223847255e-01 6.9314717223847255e-01\nDual infeasibility......: 8.4893203577962595e-12 8.4893203577962595e-12\nConstraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00\nComplementarity.........: 2.5092981987187852e-09 2.5092981987187852e-09\nOverall NLP 
error.......: 2.5092981987187852e-09 2.5092981987187852e-09\n\n\nNumber of objective function evaluations = 20\nNumber of objective gradient evaluations = 14\nNumber of equality constraint evaluations = 0\nNumber of inequality constraint evaluations = 20\nNumber of equality constraint Jacobian evaluations = 0\nNumber of inequality constraint Jacobian evaluations = 14\nNumber of Lagrangian Hessian evaluations = 13\nTotal CPU secs in IPOPT (w/o function evaluations) = 0.004\nTotal CPU secs in NLP function evaluations = 0.000\n\nEXIT: Optimal Solution Found.\n \nIpopt 3.12: Optimal Solution Found\nOptimal solutions is \nx : Size=3, Index=x_index, Domain=Reals\n Key : Lower : Value : Upper : Fixed : Stale\n 1 : None : 0.999999999169 : None : False : False\n 2 : None : 0.0 : None : False : False\n 3 : None : -7.49070198136e-09 : None : False : False\nObjective value at the optimal solution is \nOBJ : Size=1, Index=None, Active=True\n Key : Active : Value\n None : True : 0.693147172238\n" ] ], [ [ "## Problem 2", "_____no_output_____" ], [ "The set Pareto optimal solutions is $\\{(t,1-t):t\\in[0,1]\\}$.\n\nLet us denote set of Pareto optimal solutions by $PS$ and show that $PS=\\{(t,1-t):t\\in[0,1]\\}$.\n\n$PS\\supset\\{(t,1-t):t\\in[0,1]\\}$:\n\nLet's assume that there exists $t\\in[0,1]$, which is not Pareto optimal. Then there exists $x=(x_1,x_2)\\in\\mathbb R^2$ and $t\\in[0,1]$ such that\n$$\n\\left\\{\n\\begin{align}\n\\|x-(1,0)\\|^2<\\|(t,1-t)-(1,0) \\|^2,\\text{ and}\\\\\n\\|x-(0,1)\\|^2\\leq\\|(t,1-t)-(0,1) \\|^2\n\\end{align}\n\\right.\n$$\nor\n$$\n\\left\\{\n\\begin{align}\n\\|x-(1,0)\\|^2\\leq\\|(t,1-t)-(1,0) \\|^2,\\text{ and}\\\\\n\\|x-(0,1)\\|^2<\\|(t,1-t)-(0,1)\\|^2.\n\\end{align}\n\\right.\n$$\n\nBut in both cases\n\n$$\n\\sqrt{2} = \\|(1,0)-(0,1)\\|\\\\\n\\leq \\|(1,0)-x\\|+\\|x-(0,1)\\|\\\\\n< \\|(t,1-t)-(1,0) \\|+\\|(t,1-t)-(0,1) \\|\\\\\n= \\|(1,0)-(0,1)\\| =\\sqrt{2}.\n$$\nbecause the point $(t,1-t)$ is on the straight line from $(1,0)$ to $(0,1)$.\n\nThus, neither one of the requirements of non-Pareto optimality can hold. Thus, the point is Pareto optimal.\n\n$PS\\subset\\{(t,1-t):t\\in[0,1]\\}$:\n\nLet's assume a Pareto optimal solution $x$. This follows from the triangle inequality.", "_____no_output_____" ], [ "## Problem 3", "_____no_output_____" ], [ "Ideal:\n\nTo solve\n$$\n\\min \\|x-(1,0)\\|^2\\\\\n\\text{s.t. }x\\in \\mathbb R^2.\n$$\nThe solution of this problem is naturally $x = (1,0)$ and the minimum is $0$. Minimizing the second objective give $x=(0,1)$ and the minimum is again $0$. Thus, the ideal is $(0,0)$.\n\nNow, the problem has just two objectives and thus, we get the components of the nadir by optimizing\n$$\n\\min f_1(x)\\\\\n\\text{s.t. }f_2(x)\\leq z^{ideal}_2\n$$\nand\n$$\n\\min f_2(x)\\\\\n\\text{s.t. }f_1(x)\\leq z^{ideal}_1.\n$$\n\nThe solution of this problem is Pareto optimal because of the epsilon constraint method and also because the other one of the objectives is at the minimum and the other one cannot be grown with growing the other. 
Thus, the components of the nadir are at least the optimal values of the above optimization problems.\n\nOn the other hand, the components of the nadir have to be at most the optimal values of the above optimization problems, because if this was not the case, then the solution would not be Pareto optimal.\n\nBy solving these optimization problems, we get nadir (2,2).", "_____no_output_____" ], [ "## Problem 4", "_____no_output_____" ] ], [ [ "def prob(x):\n return [(x[0]-1)**2+x[1]**2,x[0]**2+(x[1]-1)**2]", "_____no_output_____" ] ], [ [ "Let's do this using Pyomo:", "_____no_output_____" ] ], [ [ "from pyomo.environ import *\nfrom pyomo.opt import SolverFactory #Import interfaces to solvers\n\ndef weighting_method_pyomo(f,w):\n points = []\n for wi in w:\n model = ConcreteModel()\n model.x = Var([0,1])\n #weighted sum\n model.obj = Objective(expr = wi[0]*f(model.x)[0]+wi[1]*f(model.x)[1])\n opt = SolverFactory(\"ipopt\") #Use ipopt\n #Combination of expression and function\n res=opt.solve(model) #Solve the problem\n points.append([model.x[0].value,model.x[1].value]) #We should check for optimality...\n return points", "_____no_output_____" ], [ "w = np.random.random((500,2)) #500 random weights\nrepr = weighting_method_pyomo(prob,w)", "_____no_output_____" ] ], [ [ "**Plot the solutions in the objective space**", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nf_repr_ws = [prob(repri) for repri in repr]\nfig = plt.figure()\nplt.scatter([z[0] for z in f_repr_ws],[z[1] for z in f_repr_ws])\nplt.show()", "_____no_output_____" ] ], [ [ "**Plot the solutions in the decision space**", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nfig = plt.figure()\nplt.scatter([x[0] for x in repr],[x[1] for x in repr])\nplt.show()", "_____no_output_____" ] ], [ [ "**What do we notice?**", "_____no_output_____" ], [ "In this problem, the weighting method works. This is because the objective functions are convex.\n\nWorking here means that the method produces an even representation of the whole Pareto optimal set.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7d7c900add40d668c05ef2bd9b0799521193916
42,719
ipynb
Jupyter Notebook
temp_analysis_bonus_1_starter.ipynb
georgiafbi/sqlalchemy-challenge
db65bfff5101f062fa7e6714b018510d21dc5e57
[ "ADSL" ]
null
null
null
temp_analysis_bonus_1_starter.ipynb
georgiafbi/sqlalchemy-challenge
db65bfff5101f062fa7e6714b018510d21dc5e57
[ "ADSL" ]
null
null
null
temp_analysis_bonus_1_starter.ipynb
georgiafbi/sqlalchemy-challenge
db65bfff5101f062fa7e6714b018510d21dc5e57
[ "ADSL" ]
null
null
null
93.069717
22,400
0.819495
[ [ [ "# Bonus: Temperature Analysis I", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom datetime import datetime as dt", "_____no_output_____" ], [ "# \"tobs\" is \"temperature observations\"\ndf = pd.read_csv('Resources/hawaii_measurements.csv')\ndf.head()", "_____no_output_____" ], [ "# Convert the date column format from string to datetime\ndf[\"date\"] = pd.to_datetime(df['date'])\ndf.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 19550 entries, 0 to 19549\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 station 19550 non-null object \n 1 date 19550 non-null datetime64[ns]\n 2 prcp 18103 non-null float64 \n 3 tobs 19550 non-null int64 \ndtypes: datetime64[ns](1), float64(1), int64(1), object(1)\nmemory usage: 611.1+ KB\n" ], [ "# Set the date column as the DataFrame index\n# Drop the date column\ndf = df.set_index('date')\ndf.head()", "_____no_output_____" ] ], [ [ "### Compare June and December data across all years ", "_____no_output_____" ] ], [ [ "import warnings\nwarnings.filterwarnings('ignore')\n%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport scipy.stats as stats\nfrom scipy.stats import ttest_rel", "_____no_output_____" ], [ "# Filter data for desired months\njune_df = df[df.index.month==6]\ndec_df=df[df.index.month==12]", "_____no_output_____" ], [ "# Identify the average temperature for June\navg_temp_june = round(june_df.tobs.mean(),1)\nprint(f\"The average temperature in June from {june_df.index.year[0]} to {june_df.index.year[-1]} is {avg_temp_june} °F.\")", "The average temperature in June from 2010 to 2017 is 74.9 °F.\n" ], [ "# Identify the average temperature for December\navg_temp_dec = round(dec_df.tobs.mean(),1)\nprint(f\"The average temperature in December from {dec_df.index.year[0]} to {dec_df.index.year[-1]} is {avg_temp_dec} °F.\")", "The average temperature in December from 2010 to 2016 is 71.0 °F.\n" ], [ "# Create collections of temperature dataq\njune_temps_df= pd.DataFrame(june_df.tobs).rename(columns={\"tobs\":\"tobs_june\"})\ndec_temps_df= pd.DataFrame(dec_df.tobs).rename(columns={\"tobs\":\"tobs_dec\"})", "_____no_output_____" ], [ "# Run paired t-test\n# Generate some fake data to test with\ndef ttest_plots(dataset1, dataset2):\n # Scatter Plot of Data\n ds1_col=dataset1.columns[0]\n ds2_col=dataset2.columns[0]\n x1_range= dataset1.index\n x2_range = dataset2.index\n plt.subplot(2, 1, 1)\n plt.scatter(x1_range,dataset1[ds1_col], label=ds1_col,alpha=0.7)\n plt.scatter(x2_range,dataset2[ds2_col], label=ds2_col,alpha=0.7)\n plt.xlabel(\"Year\")\n plt.ylabel(\"Temperature (°F)\")\n plt.title(f\"Scatter Plot of {ds1_col} vs {ds2_col} from {dataset1.index.year[0]} to {dataset1.index.year[-1]}\")\n plt.legend()\n plt.tight_layout\n plt.savefig(\"Scatter_Plot_June_and_December_Temps_Hawaii.png\")\n plt.show()\n\n # Histogram Plot of Data\n plt.subplot(2, 1, 2)\n plt.hist(dataset1[ds1_col], 10, density=True, alpha=0.7, label=ds1_col)\n plt.hist(dataset2[ds2_col], 10, density=True, alpha=0.7, label=ds2_col)\n plt.axvline(dataset1[ds1_col].mean(), color='k', linestyle='dashed', linewidth=1)\n plt.axvline(dataset2[ds2_col].mean(), color='k', linestyle='dashed', linewidth=1)\n plt.legend() \n plt.xlabel(\"Temperature (°F)\")\n plt.tight_layout\n plt.savefig(\"Histogram_Plot_June_and_December_Temps_Hawaii.png\")\n plt.show()\n \n return dataset1[ds1_col], dataset2[ds2_col]\ntemps_june, temps_dec = ttest_plots(june_temps_df, dec_temps_df)\n\n# Note: 
Setting equal_var=False performs Welch's t-test which does \n# not assume equal population variance\nprint(stats.ttest_ind(temps_june,temps_dec, equal_var=False))\n", "_____no_output_____" ] ], [ [ "### Analysis", "_____no_output_____" ], [ "We are using an unpaired (Welch's) t-test because the June and December temperature observations are independent samples of unequal size. Since the p-value is far below 5%, we can conclude that the temperatures in June and December in Hawaii are significantly different.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
e7d7ea580d4d268a1b0be8820ebd874c2d812d1e
1,346
ipynb
Jupyter Notebook
notebooks/develop.ipynb
kmuehlbauer/wradlib-notebooks
11f46c3f2130582fa5c46908bac3820a7f48d294
[ "MIT" ]
null
null
null
notebooks/develop.ipynb
kmuehlbauer/wradlib-notebooks
11f46c3f2130582fa5c46908bac3820a7f48d294
[ "MIT" ]
null
null
null
notebooks/develop.ipynb
kmuehlbauer/wradlib-notebooks
11f46c3f2130582fa5c46908bac3820a7f48d294
[ "MIT" ]
1
2020-06-07T21:25:47.000Z
2020-06-07T21:25:47.000Z
20.707692
115
0.56315
[ [ [ "This notebook is part of the $\\omega radlib$ documentation: http://wradlib.org/wradlib-docs.\n\nCopyright (c) 2018, $\\omega radlib$ developers.\nDistributed under the MIT License. See LICENSE.txt for more info.", "_____no_output_____" ], [ "# For Developers", "_____no_output_____" ], [ "This section provides a collection of code snippets we use in $\\omega radlib$ to achieve certain features.\n", "_____no_output_____" ], [ "## Examples List\n- [Apichange Function Decorators](develop/wradlib_api_change.ipynb)", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ] ]
e7d7f49ab34c8a43754a3256b0bd22d2f6737212
4,298
ipynb
Jupyter Notebook
demo/hello-world.ipynb
lbz007/rectanglequery
59d6eb007bf65480fa3e9245542d0b6071f81831
[ "BSD-3-Clause" ]
1
2021-01-01T23:39:02.000Z
2021-01-01T23:39:02.000Z
demo/hello-world.ipynb
zoumingzhe/OpenEDA
e87867044b495e40d4276756a6cb13bb38fe49a9
[ "BSD-3-Clause" ]
null
null
null
demo/hello-world.ipynb
zoumingzhe/OpenEDA
e87867044b495e40d4276756a6cb13bb38fe49a9
[ "BSD-3-Clause" ]
null
null
null
22.26943
119
0.539321
[ [ [ "# Open-EDI Python Demo ----- Hello World", "_____no_output_____" ], [ "import openedi\n\nIf failed, check whether the python version for jupyter-notebook and that for building the project are consistent", "_____no_output_____" ] ], [ [ "import sys\nmodule_dir = [\"../lib/\", \"./lib/\", \"../build/edi/python/\"] # find from install_dir or build_dir\nsys.path.extend(module_dir)\nimport openedi as edi\n\nedi.ediPrint(edi.MessageType.kInfo, \"Hello World.\\n\")", "_____no_output_____" ] ], [ [ "创建一个database", "_____no_output_____" ] ], [ [ "db = edi.db.Database()", "_____no_output_____" ] ], [ [ "创建一个model, 并添加相应model term", "_____no_output_____" ] ], [ [ "m0 = db.addModel(\"model0\")\nm0.setModelType(edi.ModelType.kCell)\nmt0 = m0.addTerm(\"term0\")\nmt0.setSignalDirect(edi.SignalDirection.kInput)\nmt1= m0.addTerm(\"term1\")\nmt1.setSignalDirect(edi.SignalDirection.kOutput)", "_____no_output_____" ] ], [ [ "创建一个design", "_____no_output_____" ] ], [ [ "design = db.getDesign()", "_____no_output_____" ] ], [ [ "在design里创建两个instances", "_____no_output_____" ] ], [ [ "inst0 = design.addInst()\ninst0.getAttr().setName(\"inst0\")\np0 = edi.geo.Point2DInt(0, 1)\ninst0.getAttr().setLoc(p0)\ninst0.addModel(m0)\n\nattr1 = edi.db.InstAttr()\nattr1.setName(\"inst1\")\np1 = edi.geo.Point2DInt(2, 3)\nattr1.setLoc(p1)\ninst1 = design.addInst(attr1)\ninst1.addModel(m0)", "_____no_output_____" ] ], [ [ "创建相应instance terms, 并连接至一个net", "_____no_output_____" ] ], [ [ "net0 = design.addNet()\nnet0.getAttr().setName(\"net0\")\n\ninst_term0 = design.addInstTerm()\ninst_term0.getAttr().setModelTerm(mt0)\ninst_term0.setInst(inst0)\ninst_term0.setNet(net0)\ninst0.addInstTerm(inst_term0)\nnet0.addInstTerm(inst_term0)\n\ninst_term1 = design.addInstTerm()\ninst_term1.getAttr().setModelTerm(mt1)\ninst_term1.setInst(inst1)\ninst_term1.setNet(net0)\ninst1.addInstTerm(inst_term1)\nnet0.addInstTerm(inst_term1)", "_____no_output_____" ] ], [ [ "将database写入文件", "_____no_output_____" ] ], [ [ "filename = \"demo_db.txt\"\nedi.db.write(db, filename, 0) # 0 means ascii mode, 1 means binary mode", "_____no_output_____" ] ], [ [ "从文件读入database", "_____no_output_____" ] ], [ [ "db2 = edi.db.Database()\nedi.db.read(db2, filename, 0) # 0 means ascii mode, 1 means binary mode\nprint(\"We have %d models in db2.\" % (db2.numModels())) # =1\nprint(\"We have %d insts in db2.design_.\" % (db2.getDesign().numInsts())) # =2\nprint(\"We have %d nets in db2.design_.\" % (db2.getDesign().numNets())) # =1\nprint(\"We have %d inst_terms in db2.design_.\" % (db2.getDesign().numInstTerms())) # =2", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d7f5f2386278c2779372131592c03813a784b0
47,806
ipynb
Jupyter Notebook
train_test_nnets.ipynb
cemysf/BCI
279301a3e9a35a76a604fee42dacbebc375aa2b2
[ "MIT" ]
null
null
null
train_test_nnets.ipynb
cemysf/BCI
279301a3e9a35a76a604fee42dacbebc375aa2b2
[ "MIT" ]
null
null
null
train_test_nnets.ipynb
cemysf/BCI
279301a3e9a35a76a604fee42dacbebc375aa2b2
[ "MIT" ]
null
null
null
64.169128
20,000
0.693867
[ [ [ "import numpy as np\nfrom sklearn.model_selection import train_test_split\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom keras.layers import Input, Dense, Convolution2D, MaxPooling2D, add, Flatten\nfrom keras.models import Model\nfrom keras.regularizers import l2\nfrom keras.utils import to_categorical\nfrom keras.optimizers import Adam\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.utils.multiclass import unique_labels\nfrom sklearn.metrics import accuracy_score, f1_score", "_____no_output_____" ], [ "from mish import Mish", "_____no_output_____" ] ], [ [ "# Load datasets, split, normalize etc..", "_____no_output_____" ] ], [ [ "datasets = np.load(\"datasets.npy\")\nlabels = np.load(\"labels.npy\")\n\ndatasets_val = np.load(\"datasets_val.npy\")\nlabels_val = np.load(\"labels_val.npy\")", "_____no_output_____" ], [ "datasets.shape", "_____no_output_____" ], [ "X_train,X_test,y_train, y_test = train_test_split(datasets, labels, test_size=0.05,random_state=4242)", "_____no_output_____" ], [ "# min-max normalization (can try other also..)\n\ndef norm_dataset_minMax(dataset):\n for i in range(len(dataset)):\n d = dataset[i]\n d = (d-d.min()) / (d.max()-d.min())\n dataset[i] = d\n return dataset\n\ndef norm_dataset_meanStd(dataset):\n for i in range(len(dataset)):\n d = dataset[i]\n d = (d-d.mean()) / d.std()\n dataset[i] = d\n return dataset", "_____no_output_____" ], [ "def print_statistics(dataset):\n print(\"min:{:.3f} max:{:.3f} mean:{:.3f} std:{:.3f}\".format(dataset.min(), dataset.max(), dataset.mean(), dataset.std()))", "_____no_output_____" ], [ "X_train = norm_dataset_meanStd(X_train)\nX_test = norm_dataset_meanStd(X_test)\nX_val = norm_dataset_meanStd(datasets_val)", "_____no_output_____" ], [ "print_statistics(X_train)\nprint_statistics(X_test)\nprint_statistics(X_val)", "min:-1.547 max:74.888 mean:-0.000 std:1.000\nmin:-1.376 max:62.494 mean:0.000 std:1.000\nmin:-1.411 max:49.713 mean:-0.000 std:1.000\n" ] ], [ [ "- Make channel the last axis", "_____no_output_____" ] ], [ [ "X_train = np.swapaxes(X_train, -2, -1)\nX_test = np.swapaxes(X_test, -2, -1)\nX_val = np.swapaxes(X_val, -2, -1)", "_____no_output_____" ], [ "X_test.shape", "_____no_output_____" ] ], [ [ "- Labels to one hot", "_____no_output_____" ] ], [ [ "def to_numericalLabel(x):\n if x == \"left\":\n return 0\n elif x == \"none\":\n return 1\n elif x == \"right\":\n return 2", "_____no_output_____" ], [ "y_train = [to_numericalLabel(l) for l in y_train]\ny_test = [to_numericalLabel(l) for l in y_test]\n\ny_train = to_categorical(y_train)\ny_test = to_categorical(y_test)", "_____no_output_____" ], [ "y_train[:10]", "_____no_output_____" ], [ "y_test[:10]", "_____no_output_____" ], [ "y_val = [to_numericalLabel(l) for l in labels_val]\ny_val = to_categorical(y_val)", "_____no_output_____" ] ], [ [ "# Try #1: simple conv net", "_____no_output_____" ], [ "- 2D conv and max poolings, with skip connections (add)\n- Dense at the end for classification", "_____no_output_____" ] ], [ [ "input_img = Input(shape=(250,60,16)) ## 16 channels", "_____no_output_____" ], [ "learning_rate = 5e-4 ## 1e-3 is default for adam\nreg_param = 1e-2", "_____no_output_____" ], [ "def net_model(input_img):\n conv1 = Convolution2D(32, (3,3), activation=\"Mish\", padding=\"same\", kernel_regularizer=l2(reg_param))(input_img)\n # add1 \n pool1 = MaxPooling2D((2,2), padding=\"same\")(conv1)\n \n conv2 = Convolution2D(32, (3,3), activation=\"Mish\", padding=\"same\", kernel_regularizer=l2(reg_param))(pool1)\n add2 = 
add([pool1, conv2])\n pool2 = MaxPooling2D((2,2), padding=\"same\")(add2)\n \n conv3 = Convolution2D(32, (3,3), activation=\"Mish\", padding=\"same\", kernel_regularizer=l2(reg_param))(pool2)\n add3 = add([pool2, conv3])\n pool3 = MaxPooling2D((2,2), padding=\"same\")(add3)\n \n conv4 = Convolution2D(32, (3,3), activation=\"Mish\", padding=\"same\", kernel_regularizer=l2(reg_param))(pool3)\n add4 = add([pool3, conv4])\n pool4 = MaxPooling2D((2,2), padding=\"same\")(add4)\n \n conv5 = Convolution2D(32, (3,3), activation=\"Mish\", padding=\"same\", kernel_regularizer=l2(reg_param))(pool4)\n add5 = add([pool4, conv5])\n pool5 = MaxPooling2D((2,2), padding=\"same\")(add5)\n \n flatten = Flatten()(add5)\n #dense1 = Dense(256, activation=\"Mish\")(flatten)\n #dense2 = Dense(32, activation=\"Mish\")(dense1)\n preds = Dense(3, activation=\"softmax\")(flatten)\n \n return preds", "_____no_output_____" ], [ "nnet = Model(inputs=input_img, outputs=net_model(input_img))", "_____no_output_____" ], [ "nnet.summary()", "Model: \"model_2\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_2 (InputLayer) (None, 250, 60, 16) 0 \n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 250, 60, 32) 4640 input_2[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_6 (MaxPooling2D) (None, 125, 30, 32) 0 conv2d_6[0][0] \n__________________________________________________________________________________________________\nconv2d_7 (Conv2D) (None, 125, 30, 32) 9248 max_pooling2d_6[0][0] \n__________________________________________________________________________________________________\nadd_5 (Add) (None, 125, 30, 32) 0 max_pooling2d_6[0][0] \n conv2d_7[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_7 (MaxPooling2D) (None, 63, 15, 32) 0 add_5[0][0] \n__________________________________________________________________________________________________\nconv2d_8 (Conv2D) (None, 63, 15, 32) 9248 max_pooling2d_7[0][0] \n__________________________________________________________________________________________________\nadd_6 (Add) (None, 63, 15, 32) 0 max_pooling2d_7[0][0] \n conv2d_8[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_8 (MaxPooling2D) (None, 32, 8, 32) 0 add_6[0][0] \n__________________________________________________________________________________________________\nconv2d_9 (Conv2D) (None, 32, 8, 32) 9248 max_pooling2d_8[0][0] \n__________________________________________________________________________________________________\nadd_7 (Add) (None, 32, 8, 32) 0 max_pooling2d_8[0][0] \n conv2d_9[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_9 (MaxPooling2D) (None, 16, 4, 32) 0 add_7[0][0] \n__________________________________________________________________________________________________\nconv2d_10 (Conv2D) (None, 16, 4, 32) 9248 max_pooling2d_9[0][0] \n__________________________________________________________________________________________________\nadd_8 (Add) (None, 16, 4, 32) 0 max_pooling2d_9[0][0] \n conv2d_10[0][0] 
\n__________________________________________________________________________________________________\nflatten_2 (Flatten) (None, 2048) 0 add_8[0][0] \n__________________________________________________________________________________________________\ndense_2 (Dense) (None, 3) 6147 flatten_2[0][0] \n==================================================================================================\nTotal params: 47,779\nTrainable params: 47,779\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ], [ "nnet.compile(optimizer=Adam(lr=learning_rate), loss=\"categorical_crossentropy\", metrics=[\"accuracy\"])", "_____no_output_____" ], [ "nnet.fit(x=X_train, y=y_train, batch_size=32, epochs=50, validation_data=(X_test,y_test))", "Train on 999 samples, validate on 53 samples\nEpoch 1/50\n999/999 [==============================] - 17s 17ms/step - loss: 3.1340 - accuracy: 0.3534 - val_loss: 2.5861 - val_accuracy: 0.4340\nEpoch 2/50\n999/999 [==============================] - 17s 17ms/step - loss: 2.4655 - accuracy: 0.4875 - val_loss: 2.5292 - val_accuracy: 0.4906\nEpoch 3/50\n999/999 [==============================] - 19s 19ms/step - loss: 2.3267 - accuracy: 0.5666 - val_loss: 2.3891 - val_accuracy: 0.3962\nEpoch 4/50\n999/999 [==============================] - 21s 21ms/step - loss: 2.1721 - accuracy: 0.6186 - val_loss: 2.3609 - val_accuracy: 0.4151\nEpoch 5/50\n999/999 [==============================] - 21s 21ms/step - loss: 2.0605 - accuracy: 0.6817 - val_loss: 2.3059 - val_accuracy: 0.5472\nEpoch 6/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.9565 - accuracy: 0.7247 - val_loss: 2.2523 - val_accuracy: 0.5094\nEpoch 7/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.8642 - accuracy: 0.7477 - val_loss: 2.3795 - val_accuracy: 0.4906\nEpoch 8/50\n999/999 [==============================] - 22s 22ms/step - loss: 1.7659 - accuracy: 0.7828 - val_loss: 2.0648 - val_accuracy: 0.5094\nEpoch 9/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.6996 - accuracy: 0.7918 - val_loss: 2.1412 - val_accuracy: 0.4151\nEpoch 10/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.7006 - accuracy: 0.7698 - val_loss: 2.0887 - val_accuracy: 0.5283\nEpoch 11/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.5257 - accuracy: 0.8819 - val_loss: 2.0092 - val_accuracy: 0.5849\nEpoch 12/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.4590 - accuracy: 0.8839 - val_loss: 2.0219 - val_accuracy: 0.5283\nEpoch 13/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.3878 - accuracy: 0.9099 - val_loss: 1.9607 - val_accuracy: 0.5472\nEpoch 14/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.3103 - accuracy: 0.9349 - val_loss: 1.9491 - val_accuracy: 0.5849\nEpoch 15/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.3654 - accuracy: 0.8659 - val_loss: 1.8709 - val_accuracy: 0.5472\nEpoch 16/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.2550 - accuracy: 0.9279 - val_loss: 2.0366 - val_accuracy: 0.5660\nEpoch 17/50\n999/999 [==============================] - 20s 20ms/step - loss: 1.1963 - accuracy: 0.9429 - val_loss: 2.1585 - val_accuracy: 0.5849\nEpoch 18/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.1255 - accuracy: 0.9750 - val_loss: 2.0433 - val_accuracy: 0.5472\nEpoch 19/50\n999/999 [==============================] - 21s 
21ms/step - loss: 1.0958 - accuracy: 0.9710 - val_loss: 1.9887 - val_accuracy: 0.6226\nEpoch 20/50\n999/999 [==============================] - 21s 21ms/step - loss: 1.0298 - accuracy: 0.9890 - val_loss: 1.8445 - val_accuracy: 0.6226\nEpoch 21/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.9840 - accuracy: 0.9910 - val_loss: 2.1092 - val_accuracy: 0.5094\nEpoch 22/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.9516 - accuracy: 0.9940 - val_loss: 1.9623 - val_accuracy: 0.6038\nEpoch 23/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.9439 - accuracy: 0.9790 - val_loss: 1.8351 - val_accuracy: 0.6038\nEpoch 24/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.9123 - accuracy: 0.9910 - val_loss: 1.7453 - val_accuracy: 0.6981\nEpoch 25/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.8768 - accuracy: 0.9930 - val_loss: 1.8839 - val_accuracy: 0.6226\nEpoch 26/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.8578 - accuracy: 0.9930 - val_loss: 2.0378 - val_accuracy: 0.5849\nEpoch 27/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.8601 - accuracy: 0.9810 - val_loss: 2.1030 - val_accuracy: 0.4906\nEpoch 28/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.9804 - accuracy: 0.9069 - val_loss: 1.9581 - val_accuracy: 0.6226\nEpoch 29/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.8434 - accuracy: 0.9840 - val_loss: 1.6174 - val_accuracy: 0.6415\nEpoch 30/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.7778 - accuracy: 0.9980 - val_loss: 1.6972 - val_accuracy: 0.6604\nEpoch 31/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.7467 - accuracy: 0.9990 - val_loss: 1.6132 - val_accuracy: 0.6038\nEpoch 32/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.7287 - accuracy: 1.0000 - val_loss: 1.8634 - val_accuracy: 0.6604\nEpoch 33/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.7087 - accuracy: 1.0000 - val_loss: 1.7936 - val_accuracy: 0.6226\nEpoch 34/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.6838 - accuracy: 1.0000 - val_loss: 1.6540 - val_accuracy: 0.6415\nEpoch 35/50\n999/999 [==============================] - 22s 22ms/step - loss: 0.6663 - accuracy: 0.9990 - val_loss: 1.8528 - val_accuracy: 0.6226\nEpoch 36/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.6467 - accuracy: 0.9990 - val_loss: 1.6618 - val_accuracy: 0.6604\nEpoch 37/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.6385 - accuracy: 0.9950 - val_loss: 1.5993 - val_accuracy: 0.6415\nEpoch 38/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.7089 - accuracy: 0.9650 - val_loss: 1.5298 - val_accuracy: 0.6415\nEpoch 39/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.6575 - accuracy: 0.9820 - val_loss: 1.8452 - val_accuracy: 0.5094\nEpoch 40/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.6187 - accuracy: 0.9920 - val_loss: 1.5886 - val_accuracy: 0.6415\nEpoch 41/50\n999/999 [==============================] - 22s 22ms/step - loss: 0.5981 - accuracy: 0.9930 - val_loss: 1.6700 - val_accuracy: 0.5849\nEpoch 42/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.5698 - accuracy: 1.0000 - val_loss: 1.6604 - val_accuracy: 0.6038\nEpoch 43/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.5557 - 
accuracy: 0.9990 - val_loss: 1.7552 - val_accuracy: 0.5849\nEpoch 44/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.5420 - accuracy: 0.9990 - val_loss: 1.7287 - val_accuracy: 0.6226\nEpoch 45/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.5424 - accuracy: 0.9950 - val_loss: 2.1250 - val_accuracy: 0.5849\nEpoch 46/50\n999/999 [==============================] - 22s 22ms/step - loss: 0.5871 - accuracy: 0.9700 - val_loss: 1.7103 - val_accuracy: 0.5849\nEpoch 47/50\n999/999 [==============================] - 22s 22ms/step - loss: 0.5319 - accuracy: 0.9970 - val_loss: 1.5476 - val_accuracy: 0.5849\nEpoch 48/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.5075 - accuracy: 0.9980 - val_loss: 1.4982 - val_accuracy: 0.6038\nEpoch 49/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.4914 - accuracy: 0.9990 - val_loss: 1.7249 - val_accuracy: 0.6415\nEpoch 50/50\n999/999 [==============================] - 21s 21ms/step - loss: 0.4798 - accuracy: 1.0000 - val_loss: 1.8127 - val_accuracy: 0.5849\n" ] ], [ [ "### Check results", "_____no_output_____" ] ], [ [ "### https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html\n\ndef plot_confusion_matrix(y_true, y_pred, classes,\n normalize=False,\n title=None,\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if not title:\n if normalize:\n title = 'Normalized confusion matrix'\n else:\n title = 'Confusion matrix, without normalization'\n\n # Compute confusion matrix\n cm = confusion_matrix(y_true, y_pred)\n # Only use the labels that appear in the data\n #classes = classes[unique_labels(y_true, y_pred)]\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n\n fig, ax = plt.subplots()\n im = ax.imshow(cm, interpolation='nearest', cmap=cmap)\n ax.figure.colorbar(im, ax=ax)\n # We want to show all ticks...\n ax.set(xticks=np.arange(cm.shape[1]),\n yticks=np.arange(cm.shape[0]),\n # ... and label them with the respective list entries\n xticklabels=classes, yticklabels=classes,\n title=title,\n ylabel='True label',\n xlabel='Predicted label')\n\n # Rotate the tick labels and set their alignment.\n plt.setp(ax.get_xticklabels(), rotation=45, ha=\"right\",\n rotation_mode=\"anchor\")\n\n # Loop over data dimensions and create text annotations.\n fmt = '.4f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i in range(cm.shape[0]):\n for j in range(cm.shape[1]):\n ax.text(j, i, format(cm[i, j], fmt),\n ha=\"center\", va=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n fig.tight_layout()\n return ax\n", "_____no_output_____" ], [ "val_preds = nnet.predict(X_val)", "_____no_output_____" ], [ "plot_confusion_matrix(y_val.argmax(axis=1), val_preds.argmax(axis=1), [\"left\",\"none\",\"right\"], normalize=True)", "Normalized confusion matrix\n[[0.42553191 0.21276596 0.36170213]\n [0.10638298 0.70212766 0.19148936]\n [0.29787234 0.36170213 0.34042553]]\n" ], [ "acc = accuracy_score(y_val.argmax(axis=1), val_preds.argmax(axis=1))\nf1 = f1_score(y_val.argmax(axis=1), val_preds.argmax(axis=1) , average=\"weighted\")", "_____no_output_____" ], [ "print(\"accuracy:{:.4f} f1:{:.4f}\".format(acc, f1))", "accuracy:0.4894 f1:0.4805\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7d7fae831edf736e99dbaa6766bf4288da66e69
562,272
ipynb
Jupyter Notebook
2.cnn_based_image_retrievel.ipynb
DowsonLewis/MachineLearning-work
2aef22e89abc428844fad916cf796461bb1d5d35
[ "Apache-2.0" ]
1
2019-03-27T02:37:56.000Z
2019-03-27T02:37:56.000Z
2.cnn_based_image_retrievel.ipynb
DowsonLewis/MachineLearning-work
2aef22e89abc428844fad916cf796461bb1d5d35
[ "Apache-2.0" ]
null
null
null
2.cnn_based_image_retrievel.ipynb
DowsonLewis/MachineLearning-work
2aef22e89abc428844fad916cf796461bb1d5d35
[ "Apache-2.0" ]
null
null
null
594.996825
31,374
0.936011
[ [ [ "![](../img/dl_banner.jpg)", "_____no_output_____" ], [ "# 基于深度学习的图像检索\n#### \\[稀牛学院 x 网易云课程\\]《深度学习工程师(实战)》课程资料 by [@寒小阳](https://blog.csdn.net/han_xiaoyang)\n\n**提示:如果大家觉得计算资源有限,欢迎大家在翻-墙后免费试用[google的colab](https://colab.research.google.com),有免费的K80 GPU供大家使用,大家只需要把课程的notebook上传即可运行**", "_____no_output_____" ] ], [ [ "!rm -rf tiny* features\n!wget http://cs231n.stanford.edu/tiny-imagenet-200.zip", "--2019-01-12 15:03:41-- http://cs231n.stanford.edu/tiny-imagenet-200.zip\nResolving cs231n.stanford.edu (cs231n.stanford.edu)... 171.64.68.10\nConnecting to cs231n.stanford.edu (cs231n.stanford.edu)|171.64.68.10|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 248100043 (237M) [application/zip]\nSaving to: ‘tiny-imagenet-200.zip’\n\ntiny-imagenet-200.z 100%[===================>] 236.61M 87.7MB/s in 2.7s \n\n2019-01-12 15:03:43 (87.7 MB/s) - ‘tiny-imagenet-200.zip’ saved [248100043/248100043]\n\n" ], [ "import zipfile\nzfile = zipfile.ZipFile('tiny-imagenet-200.zip','r')\nzfile.extractall()\nzfile.close()", "_____no_output_____" ], [ "!ls", "ImageName.txt sample_data tiny-imagenet-200 tiny-imagenet-200.zip\n" ], [ "!ls tiny-imagenet-200", "test train val wnids.txt words.txt\n" ], [ "!ls tiny-imagenet-200/train/n01443537/images | wc -l", "500\n" ], [ "# -*- coding: utf-8 -*-\nimport os\nimport random\n\n# 打开文件以便写入图片名称\nout = open(\"ImageName.txt\", 'w')\n\n# 递归遍历文件夹,并以一定的几率把图像名写入文件\ndef gci(filepath):\n #遍历filepath下所有文件,包括子目录\n files = os.listdir(filepath)\n for fi in files:\n fi_d = os.path.join(filepath,fi) \n if os.path.isdir(fi_d):\n gci(fi_d) \n else:\n if random.random()<=0.02 and fi_d.endswith(\".JPEG\"):\n out.write(os.path.join(fi_d)+\"\\n\")", "_____no_output_____" ], [ "filepath = \"tiny-imagenet-200\"\ngci(filepath)\nout.close()", "_____no_output_____" ], [ "!ls", "ImageName.txt sample_data tiny-imagenet-200 tiny-imagenet-200.zip\n" ], [ "!head -5 ImageName.txt ", "tiny-imagenet-200/train/n02843684/images/n02843684_219.JPEG\ntiny-imagenet-200/train/n02843684/images/n02843684_66.JPEG\ntiny-imagenet-200/train/n02843684/images/n02843684_152.JPEG\ntiny-imagenet-200/train/n02843684/images/n02843684_479.JPEG\ntiny-imagenet-200/train/n02843684/images/n02843684_95.JPEG\n" ] ], [ [ "# 图像特征抽取\n#### \\[稀牛学院 x 网易云课程\\]《深度学习工程师(实战)》课程资料 by [@寒小阳](https://blog.csdn.net/han_xiaoyang)", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom numpy import linalg as LA\nimport h5py\nfrom keras.applications.inception_v3 import InceptionV3\nfrom keras.preprocessing import image\nimport keras.applications.inception_v3 as inception_v3\nimport keras.applications.vgg16 as vgg16\nfrom keras.applications.vgg16 import VGG16\n\nclass InceptionNet:\n def __init__(self):\n # weights: 'imagenet'\n # pooling: 'max' or 'avg'\n # input_shape: (width, height, 3), width and height should >= 48\n self.input_shape = (224, 224, 3)\n self.weight = 'imagenet'\n self.pooling = 'max'\n # 构建不带分类器的预训练模型\n self.model = InceptionV3(weights='imagenet', include_top=False)\n self.model.predict(np.zeros((1, 224, 224 , 3)))\n\n '''\n Use inception_v3 model to extract features\n Output normalized feature vector\n '''\n def extract_feat(self, img_path):\n img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))\n img = image.img_to_array(img)\n img = np.expand_dims(img, axis=0)\n img = inception_v3.preprocess_input(img)\n feat = self.model.predict(img)\n return fea\n #norm_feat = feat[0]/LA.norm(feat[0])\n #return norm_feat\n \n\n\nclass VGGNet:\n def __init__(self):\n # 
weights: 'imagenet'\n # pooling: 'max' or 'avg'\n # input_shape: (width, height, 3), width and height should >= 48\n self.input_shape = (224, 224, 3)\n self.weight = 'imagenet'\n self.pooling = 'max'\n self.model = VGG16(weights = self.weight, input_shape = (self.input_shape[0], self.input_shape[1], self.input_shape[2]), pooling = self.pooling, include_top = False)\n self.model.predict(np.zeros((1, 224, 224 , 3)))\n\n '''\n Use vgg16 model to extract features\n Output normalized feature vector\n '''\n def extract_feat(self, img_path):\n img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))\n img = image.img_to_array(img)\n img = np.expand_dims(img, axis=0)\n img = vgg16.preprocess_input(img)\n feat = self.model.predict(img)\n return feat", "Using TensorFlow backend.\n" ] ], [ [ "# 遍历图片抽取图像特征并存储\n#### \\[稀牛学院 x 网易云课程\\]《深度学习工程师(实战)》课程资料 by [@寒小阳](https://blog.csdn.net/han_xiaoyang)", "_____no_output_____" ] ], [ [ "print(\"--------------------------------------------------\")\nprint(\" 特征抽取开始 \")\nprint(\"--------------------------------------------------\")\n\n# 特征与文件名存储列表\nfeats = []\nnames = []\n\n# 读取图片列表\nimg_list = open(\"ImageName.txt\", 'r').readlines()\nimg_list = [image.strip() for image in img_list]\n\n# 初始化模型\n# model = InceptionNet()\nmodel = VGGNet()\n\n# 遍历与特征抽取\nfor i, img_path in enumerate(img_list):\n norm_feat = model.extract_feat(img_path)\n img_name = os.path.split(img_path)[1]\n feats.append(norm_feat)\n names.append(img_name)\n if i%50 == 0:\n print(\"抽取图片的特征,进度%d/%d\" %((i+1), len(img_list)))\n\n# 特征转换成numpy array格式 \nfeats = np.array(feats)\n\nprint(\"--------------------------------------------------\")\nprint(\" 把抽取的特征写入文件中 \")\nprint(\"--------------------------------------------------\")\n\n# 把特征写入文件\noutput = \"features\"\nh5f = h5py.File(output, 'w')\nh5f.create_dataset('dataset_1', data = feats)\nh5f.create_dataset('dataset_2', data = np.string_(names))\nh5f.close()", "--------------------------------------------------\n 特征抽取开始 \n--------------------------------------------------\n抽取图片的特征,进度1/2366\n抽取图片的特征,进度51/2366\n抽取图片的特征,进度101/2366\n抽取图片的特征,进度151/2366\n抽取图片的特征,进度201/2366\n抽取图片的特征,进度251/2366\n抽取图片的特征,进度301/2366\n抽取图片的特征,进度351/2366\n抽取图片的特征,进度401/2366\n抽取图片的特征,进度451/2366\n抽取图片的特征,进度501/2366\n抽取图片的特征,进度551/2366\n抽取图片的特征,进度601/2366\n抽取图片的特征,进度651/2366\n抽取图片的特征,进度701/2366\n抽取图片的特征,进度751/2366\n抽取图片的特征,进度801/2366\n抽取图片的特征,进度851/2366\n抽取图片的特征,进度901/2366\n抽取图片的特征,进度951/2366\n抽取图片的特征,进度1001/2366\n抽取图片的特征,进度1051/2366\n抽取图片的特征,进度1101/2366\n抽取图片的特征,进度1151/2366\n抽取图片的特征,进度1201/2366\n抽取图片的特征,进度1251/2366\n抽取图片的特征,进度1301/2366\n抽取图片的特征,进度1351/2366\n抽取图片的特征,进度1401/2366\n抽取图片的特征,进度1451/2366\n抽取图片的特征,进度1501/2366\n抽取图片的特征,进度1551/2366\n抽取图片的特征,进度1601/2366\n抽取图片的特征,进度1651/2366\n抽取图片的特征,进度1701/2366\n抽取图片的特征,进度1751/2366\n抽取图片的特征,进度1801/2366\n抽取图片的特征,进度1851/2366\n抽取图片的特征,进度1901/2366\n抽取图片的特征,进度1951/2366\n抽取图片的特征,进度2001/2366\n抽取图片的特征,进度2051/2366\n抽取图片的特征,进度2101/2366\n抽取图片的特征,进度2151/2366\n抽取图片的特征,进度2201/2366\n抽取图片的特征,进度2251/2366\n抽取图片的特征,进度2301/2366\n抽取图片的特征,进度2351/2366\n--------------------------------------------------\n 把抽取的特征写入文件中 \n--------------------------------------------------\n" ], [ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nfrom scipy import spatial\n\ndef image_retrieval(input_img, max_res, feats):\n # 读取待检索图片与展示\n queryImg = mpimg.imread(input_img)\n plt.title(\"Query Image\")\n plt.imshow(queryImg)\n plt.grid(None)\n plt.show()\n\n # 初始化Inception模型\n model = VGGNet()\n\n # 抽取特征,距离比对与排序\n 
queryVec = model.extract_feat(input_img)\n queryVec = queryVec.reshape(1,-1)\n feats = feats.reshape(feats.shape[0],-1)\n scores = spatial.distance.cdist(queryVec, feats).ravel()\n rank_ID = np.argsort(scores)\n rank_score = scores[rank_ID]\n\n\n # 选取top max_res张最相似的图片展示\n imlist = [img_list[index] for i,index in enumerate(rank_ID[0:max_res])]\n print(\"最接近的%d张图片为: \" %max_res, imlist)\n\n for i,im in enumerate(imlist):\n image = mpimg.imread(im)\n plt.title(\"search output %d\" %(i+1))\n plt.imshow(image)\n plt.grid(None)\n plt.show()", "_____no_output_____" ], [ "input_img = \"tiny-imagenet-200/train/n02843684/images/n02843684_66.JPEG\"\nmax_res = 8\nimage_retrieval(input_img, max_res, feats)", "_____no_output_____" ] ], [ [ "# 使用近似最近邻算法加速\n#### \\[稀牛学院 x 网易云课程\\]《深度学习工程师(实战)》课程资料 by [@寒小阳](https://blog.csdn.net/han_xiaoyang)", "_____no_output_____" ] ], [ [ "feats.shape", "_____no_output_____" ], [ "!pip install nearpy\nfrom nearpy import Engine\nfrom nearpy.hashes import RandomBinaryProjections\n\nDIMENSIONS = 512\nPROJECTIONBITS = 16\nENGINE = Engine(DIMENSIONS, lshashes=[RandomBinaryProjections('rbp', PROJECTIONBITS,rand_seed=2611),\n RandomBinaryProjections('rbp', PROJECTIONBITS,rand_seed=261),\n RandomBinaryProjections('rbp', PROJECTIONBITS,rand_seed=26)])\n\n\nfor i,f in enumerate(feats.reshape(feats.shape[0],-1)):\n #print(i, f.shape)\n ENGINE.store_vector(f, i)\n\n\ndef image_retrieval_fast(input_img, max_res, ann):\n # 读取待检索图片与展示\n queryImg = mpimg.imread(input_img)\n plt.title(\"Query Image\")\n plt.imshow(queryImg)\n plt.grid(None)\n plt.show()\n\n # 初始化Inception模型\n model = VGGNet()\n\n # 抽取特征,使用近似最近邻算法快速检索召回\n queryVec = model.extract_feat(input_img)\n imlist = [img_list[int(k)] for v,k,d in ENGINE.neighbours(queryVec.ravel())[:max_res]]\n\n\n # 选取top max_res张最相似的图片展示\n print(\"最接近的%d张图片为: \" %max_res, imlist)\n\n for i,im in enumerate(imlist):\n image = mpimg.imread(im)\n plt.title(\"search output %d\" %(i+1))\n plt.imshow(image)\n plt.grid(None)\n plt.show()", "Requirement already satisfied: nearpy in /usr/local/lib/python3.6/dist-packages (1.0.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from nearpy) (1.1.0)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from nearpy) (0.16.0)\nRequirement already satisfied: bitarray in /usr/local/lib/python3.6/dist-packages (from nearpy) (0.8.3)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from nearpy) (1.14.6)\n" ], [ "input_img = \"tiny-imagenet-200/train/n02843684/images/n02843684_66.JPEG\"\nmax_res = 8\nimage_retrieval_fast(input_img, max_res, feats)", "_____no_output_____" ] ], [ [ "![](../img/xiniu_neteasy.png)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
e7d80492aa2c37e082ab8932de1d9e2e120369b0
45,327
ipynb
Jupyter Notebook
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
6ea8a8491d5e8bb42ac39c4785847d1ac9a812a4
[ "MIT" ]
9
2021-07-20T12:35:21.000Z
2022-02-02T12:10:58.000Z
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
6ea8a8491d5e8bb42ac39c4785847d1ac9a812a4
[ "MIT" ]
3
2022-02-15T00:55:42.000Z
2022-02-27T23:05:26.000Z
2-Regression/1-Tools/solution/notebook.ipynb
buseorak/ML-For-Beginners
6ea8a8491d5e8bb42ac39c4785847d1ac9a812a4
[ "MIT" ]
4
2021-07-29T13:44:47.000Z
2021-12-11T13:02:54.000Z
233.64433
27,789
0.702297
[ [ [ "## Linear Regression for North American Pumpkins - Lesson 1", "_____no_output_____" ], [ "Import needed libraries", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn import datasets, linear_model, model_selection\n", "_____no_output_____" ] ], [ [ "Load the diabetes dataset, divided into `X` data and `y` features", "_____no_output_____" ] ], [ [ "X, y = datasets.load_diabetes(return_X_y=True)\nprint(X.shape)\nprint(X[0])", "(442, 10)\n[ 0.03807591 0.05068012 0.06169621 0.02187235 -0.0442235 -0.03482076\n -0.04340085 -0.00259226 0.01990842 -0.01764613]\n" ] ], [ [ "Select just one feature to target for this exercise", "_____no_output_____" ] ], [ [ "X = X[:, np.newaxis, 2]\n", "_____no_output_____" ] ], [ [ "Split the training and test data for both `X` and `y`", "_____no_output_____" ] ], [ [ "X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.33)\n", "_____no_output_____" ] ], [ [ "Select the model and fit it with the training data", "_____no_output_____" ] ], [ [ "model = linear_model.LinearRegression()\nmodel.fit(X_train, y_train)", "_____no_output_____" ] ], [ [ "Use test data to predict a line", "_____no_output_____" ] ], [ [ "y_pred = model.predict(X_test)\n", "_____no_output_____" ] ], [ [ "Display the results in a plot", "_____no_output_____" ] ], [ [ "plt.scatter(X_test, y_test, color='black')\nplt.plot(X_test, y_pred, color='blue', linewidth=3)\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d80ab25ab99cb731e2feeedd7517f3d940b252
872,063
ipynb
Jupyter Notebook
small_run/Flow_Cytometry_Mondrian_Processes-Random-Effects-Final_n_chain_5_n_sample_1000.ipynb
disiji/fc_mondrian
97420ff311242afe103c45130ada509e1e60a0ac
[ "MIT" ]
1
2020-12-28T16:41:33.000Z
2020-12-28T16:41:33.000Z
small_run/Flow_Cytometry_Mondrian_Processes-Random-Effects-Final_n_chain_5_n_sample_1000.ipynb
disiji/fc_mondrian
97420ff311242afe103c45130ada509e1e60a0ac
[ "MIT" ]
1
2019-10-07T19:17:58.000Z
2019-10-08T06:55:16.000Z
small_run/Flow_Cytometry_Mondrian_Processes-Random-Effects-Final_n_chain_5_n_sample_1000.ipynb
disiji/fc_mondrian
97420ff311242afe103c45130ada509e1e60a0ac
[ "MIT" ]
null
null
null
427.691515
115,952
0.91965
[ [ [ "from joblib import Parallel, delayed\nimport multiprocessing", "_____no_output_____" ], [ "import os\nimport sys\nimport glob\nimport pickle\nimport itertools\nimport random\nimport copy\n\nfrom IPython.display import Image\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport matplotlib.mlab as mlab\nfrom matplotlib.colors import ListedColormap\nfrom scipy.stats import multivariate_normal\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import beta\nfrom scipy.stats import norm\n\nfrom flowMP import *\n\nrandom.seed(1234)\n%matplotlib inline", "_____no_output_____" ], [ "def logP_Mondrian_Gaussian_perturbation(indiv_mp,template_mp,stepsize):\n \"\"\"\n To-do: truncated Gaussian pdf\n \"\"\"\n if template_mp[1] == None and template_mp[2] == None:\n return 0\n \n # find the dimension and location of first cut in the old_sample\n for _ in range(template_mp[0].shape[0]):\n if template_mp[0][_,1] > template_mp[1][0][_,1]:\n break\n \n dim = _\n pos_template = template_mp[1][0][dim,1]\n pos_indiv = indiv_mp[1][0][dim, 1]\n \n res = norm(pos_template,(template_mp[0][dim,1] - template_mp[0][dim,0])*stepsize).logpdf(pos_indiv)\n \n res += logP_Mondrian_Gaussian_perturbation(indiv_mp[1],template_mp[1],stepsize)\n res += logP_Mondrian_Gaussian_perturbation(indiv_mp[2],template_mp[2],stepsize)\n return res\n\n\n### function for computing joint probability\ndef joint_logP_Random_Effect(template_mp, indiv_mp_list, data_list, table, stepsize):\n \"\"\"\n INPUT:\n template_mp: one mondrian process\n indiv_mp_list: a list of mondrian processes\n data_list: a list of cell*marker np array\n table: +1 -1 0 information\n \"\"\"\n logP = comp_log_p_prior(template_mp, table, [1 for _ in range(table.shape[1])])\n n_sample = len(data_list)\n for _ in range(n_sample):\n logP += logP_Mondrian_Gaussian_perturbation(indiv_mp_list[_], template_mp, stepsize)\n logP += comp_log_p_sample(indiv_mp_list[_], data_list[_])\n return logP\n\n\n## a mini MCMC run to initialize Mondrian process with data\ndef init_mp(theta_space, table, data, n_iter,mcmc_gaussin_std):\n # randomly draw a template mondrian process\n sample = draw_informed_Mondrian(theta_space, table)\n log_p_sample = comp_log_p_sample(sample, data) + \\\n comp_log_p_prior(sample, table, [1 for _ in range(table.shape[1])])\n\n for idx in xrange(n_iter):\n new_sample = Mondrian_Gaussian_perturbation(theta_space,sample, mcmc_gaussin_std)\n # perform accept-reject step\n new_log_p_sample = comp_log_p_sample(new_sample, data) + \\\n comp_log_p_prior(new_sample, table, [1 for _ in range(table.shape[1])])\n\n if new_log_p_sample >= log_p_sample or \\\n np.log(np.random.uniform(low=0, high=1.)) <= new_log_p_sample - log_p_sample:\n sample = new_sample\n log_p_sample = new_log_p_sample\n return sample", "_____no_output_____" ], [ "def mcmc_condition_on_template(user_id,template_mp,n_mcmc_sample=500,mcmc_gaussin_std=0.1):\n \"\"\"\n sample: data of a sample, np matrix\n tempalte_mp: a mondrain tree\n chain: index of chain\n \"\"\"\n \n np.random.seed(123)\n indiv_mp = template_mp\n joint_logP = []\n accepts_indiv_mp_list = []\n \n for idx in xrange(n_mcmc_sample):\n if idx % (n_mcmc_sample / 4) == 0:\n mcmc_gaussin_std = mcmc_gaussin_std / 5\n \n new_sample = Mondrian_Gaussian_perturbation(theta_space,indiv_mp, mcmc_gaussin_std)\n\n log_p = joint_logP_Random_Effect(template_mp, \\\n [indiv_mp],[data[user_id]], table, random_effect_gaussian_std)\n new_log_p = joint_logP_Random_Effect(template_mp, \\\n [new_sample],[data[user_id]], table, 
random_effect_gaussian_std)\n\n\n if new_log_p > log_p or \\\n np.log(np.random.uniform(low=0, high=1.)) < new_log_p - log_p:\n indiv_mp = new_sample\n accepts_indiv_mp_list.append(new_sample)\n joint_logP.append(new_log_p)\n \n print \"Drawing Sample %d ...\" % (idx + 1)\n print \"Accepted proposals of indiv mp, template mp: %d\" % len(accepts_indiv_mp_list)\n \n return joint_logP, accepts_indiv_mp_list\n\ndef log_MP_X_given_template(id):\n res_H = Parallel(n_jobs=num_cores)(delayed(mcmc_condition_on_template)\\\n (id,accepts_template_mp_H[i][-1]) for i in range(n_mcmc_chain))\n res_SJ = Parallel(n_jobs=num_cores)(delayed(mcmc_condition_on_template)\\\n (id,accepts_template_mp_SJ[i][-1]) for i in range(n_mcmc_chain))\n \n \"\"\"\n res_H: n_mcmc_chain * 2 * n_accepted_in_chain, \n res_SJ: n_mcmc_chain * 2 * n_accepted_in_chain, log likelihood\n \"\"\"\n return res_H,res_SJ", "_____no_output_____" ], [ "def compute_cell_population(data_subset, burnt_samples, table, cell_type_name2idx):\n \"Return a list of length n_cell_types\"\n burnt_predictions = [None for i in burnt_samples]\n for i in range(len(burnt_samples)):\n burnt_predictions[i] = classify_cells(data_subset, burnt_samples[i], \\\n table, cell_type_name2idx)\n votes = np.zeros([data_subset.shape[0], table.shape[0]])\n for Y_predict in burnt_predictions:\n for _ in range(len(Y_predict)):\n votes[_,Y_predict[_]] += 1\n Y_predict_majority = np.argmax(votes, axis=1)\n Y_predict_majority = [cell_type_idx2name[_] for _ in Y_predict_majority]\n return [Y_predict_majority.count(_)*1.0 / len(Y_predict_majority) \\\n for _ in table.index]", "_____no_output_____" ], [ "def mcmc_template(chain):\n \n print len(data)\n \n np.random.seed(chain) \n mcmc_gaussin_std = 0.1\n \n accepts_template_mp_chain = []\n accepts_indiv_mp_lists_chain = [[] for i in range(n_samples)]\n joint_logP_chain = []\n \n ### INITIALIZE template_mp AND indivi_mp_list\n print \"Initializing template mondrian process with pooled data\"\n template_mp = init_mp(theta_space, table, pooled_data, 100, mcmc_gaussin_std)\n indiv_mp_list = [np.copy(template_mp) for _ in range(n_samples)] \n accepts_template_mp_chain.append(template_mp)\n\n for idx in xrange(n_mcmc_sample):\n if idx == n_mcmc_sample / 3:\n mcmc_gaussin_std = mcmc_gaussin_std / 5\n \n # update indiv mondrian processes of each sample\n for _ in range(n_samples):\n new_sample = Mondrian_Gaussian_perturbation(\n theta_space,indiv_mp_list[_], mcmc_gaussin_std)\n \n log_p = joint_logP_Random_Effect(template_mp, \\\n [indiv_mp_list[_]],[data[_]], table, random_effect_gaussian_std)\n new_log_p = joint_logP_Random_Effect(template_mp, \\\n [new_sample],[data[_]], table, random_effect_gaussian_std)\n \n \n if new_log_p > log_p or \\\n np.log(np.random.uniform(low=0, high=1.)) < new_log_p - log_p:\n indiv_mp_list[_] = new_sample\n accepts_indiv_mp_lists_chain[_].append(new_sample)\n \n \n # update template mondrian process\n new_sample = Mondrian_Gaussian_perturbation(\n theta_space, template_mp, mcmc_gaussin_std)\n \n log_p = joint_logP_Random_Effect(template_mp, indiv_mp_list, \n [np.empty((0,table.shape[1])) for _ in range(n_samples)],\\\n table, random_effect_gaussian_std)\n\n new_log_p = joint_logP_Random_Effect(new_sample, indiv_mp_list, \n [np.empty((0,table.shape[1])) for _ in range(n_samples)],\\\n table, random_effect_gaussian_std)\n \n if new_log_p > log_p or \\\n np.log(np.random.uniform(low=0, high=1.)) < new_log_p - log_p:\n template_mp = new_sample\n accepts_template_mp_chain.append(template_mp)\n \n 
joint_logP_chain.append(joint_logP_Random_Effect(template_mp, indiv_mp_list, \\\n data, table, random_effect_gaussian_std))\n\n if (idx + 1) % (n_mcmc_sample/4) == 0:\n print \"Chain %d: Drawing Sample %d ...\" % (chain, idx + 1)\n print \"Accepted proposals of indiv mp, template mp: %d, %d, %d, %d, %d, %d\" \\\n % (len(accepts_indiv_mp_lists_chain[0]), \\\n len(accepts_indiv_mp_lists_chain[1]), \\\n len(accepts_indiv_mp_lists_chain[2]), \\\n len(accepts_indiv_mp_lists_chain[3]), \\\n len(accepts_indiv_mp_lists_chain[4]), \\\n len(accepts_template_mp_chain))\n \n return accepts_template_mp_chain,accepts_indiv_mp_lists_chain,joint_logP_chain", "_____no_output_____" ] ], [ [ "## Flow Cytometry Data\n\nLoad AML data from 21 samples, 5 of them are healthy (H\\*), 16 of them are AML samples (SJ\\*).", "_____no_output_____" ] ], [ [ "%%time\n\n# load data into a dictionary of pandas data frames\n\nPATH_DATA = '/extra/disij0/data/flow_cytometry/cytobank/levine_aml/CSV/'\n#PATH = '/Users/disiji/Dropbox/current/flow_cytometry/acdc/data/'\n\nuser_ids = ['H1','H2','H3','H4','H5','SJ01','SJ02','SJ03','SJ04','SJ05','SJ06','SJ07','SJ08','SJ09','SJ10',\\\n 'SJ11','SJ12','SJ13','SJ14','SJ15','SJ16']\n\ndata_dict = dict()\nfor id in user_ids:\n print id\n data_path = PATH_DATA + id\n allFiles = glob.glob(data_path + \"/*fcsdim_42.csv\")\n frame = pd.DataFrame()\n list_ = []\n for file_ in allFiles:\n df = pd.read_csv(file_,index_col=None, header=0)\n list_.append(df)\n data_dict[id] = pd.concat(list_)", "H1\nH2\nH3\nH4\nH5\nSJ01\nSJ02\nSJ03\nSJ04\nSJ05\nSJ06\nSJ07\nSJ08\nSJ09\nSJ10\nSJ11\nSJ12\nSJ13\nSJ14\nSJ15\nSJ16\nCPU times: user 2min 32s, sys: 8.64 s, total: 2min 40s\nWall time: 6min 30s\n" ], [ "markers = ['HLA-DR','CD19','CD34','CD45','CD47','CD44','CD117','CD123','CD38','CD11b',\\\n 'CD7','CD15','CD3','CD64','CD33','CD41']\n \nprint markers\n \nPATH_TABLE = '/home/disij/projects/acdc/data/AML_benchmark/'\ntable = pd.read_csv(PATH_TABLE + 'AML_table.csv', sep=',', header=0, index_col=0)\ntable = table.fillna(0)\ntable = table[markers]\nprint table.shape\nprint table\n\ncell_type_name2idx = {x:i for i,x in enumerate(table.index)}\ncell_type_idx2name = {i:x for i,x in enumerate(table.index)}", "['HLA-DR', 'CD19', 'CD34', 'CD45', 'CD47', 'CD44', 'CD117', 'CD123', 'CD38', 'CD11b', 'CD7', 'CD15', 'CD3', 'CD64', 'CD33', 'CD41']\n(14, 16)\n HLA-DR CD19 CD34 CD45 CD47 CD44 CD117 CD123 \\\nBasophils -1.0 -1 -1 0.0 0.0 0.0 0.0 1 \nCD4 T cells -1.0 -1 -1 0.0 0.0 0.0 0.0 -1 \nCD8 T cells -1.0 -1 -1 0.0 0.0 0.0 0.0 -1 \nCD16- NK cells -1.0 -1 -1 0.0 0.0 0.0 0.0 -1 \nCD16+ NK cells -1.0 -1 -1 0.0 0.0 0.0 0.0 -1 \nCD34+CD38+CD123- HSPCs 0.0 -1 1 -1.0 0.0 0.0 0.0 -1 \nCD34+CD38+CD123+ HSPCs 0.0 -1 1 -1.0 0.0 0.0 0.0 1 \nCD34+CD38lo HSCs 0.0 -1 1 -1.0 0.0 0.0 0.0 -1 \nMature B cells 0.0 1 -1 0.0 0.0 0.0 0.0 -1 \nPlasma B cells -1.0 1 -1 0.0 0.0 0.0 0.0 -1 \nPre B cells 1.0 1 -1 0.0 0.0 0.0 0.0 -1 \nPro B cells 0.0 1 1 -1.0 0.0 0.0 0.0 -1 \nMonocytes 1.0 -1 -1 0.0 0.0 0.0 0.0 -1 \npDCs 1.0 -1 -1 0.0 0.0 0.0 0.0 1 \n\n CD38 CD11b CD7 CD15 CD3 CD64 CD33 CD41 \nBasophils 0.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 \nCD4 T cells 0.0 0.0 0.0 0.0 1 -1.0 0.0 0.0 \nCD8 T cells 0.0 0.0 1.0 0.0 1 -1.0 0.0 0.0 \nCD16- NK cells 0.0 0.0 1.0 0.0 -1 -1.0 0.0 0.0 \nCD16+ NK cells 0.0 0.0 1.0 0.0 -1 -1.0 0.0 0.0 \nCD34+CD38+CD123- HSPCs 1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 \nCD34+CD38+CD123+ HSPCs 1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 \nCD34+CD38lo HSCs -1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 \nMature B cells 0.0 0.0 -1.0 0.0 -1 0.0 0.0 0.0 \nPlasma B cells 
1.0 0.0 -1.0 0.0 -1 0.0 0.0 0.0 \nPre B cells 1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 \nPro B cells 1.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 \nMonocytes 0.0 0.0 -1.0 0.0 -1 0.0 0.0 0.0 \npDCs 0.0 0.0 -1.0 0.0 -1 -1.0 0.0 0.0 \n" ] ], [ [ "Now run MCMC to collect posterior samples...", "_____no_output_____" ], [ "# Random effect model", "_____no_output_____" ], [ "### Training models for healthy samples", "_____no_output_____" ] ], [ [ "f = lambda x: np.arcsinh((x -1.)/5.)\ndata = [data_dict[_].head(20000).applymap(f)[markers].values for _ in ['H1','H2','H3','H4','H5']]\n\n# compute data range \ndata_ranges = np.array([[[data[_][:,d].min(),data[_][:,d].max()] \\\n for d in range(len(markers))]\n for _ in range(len(data))])\n\ntheta_space = np.array([[data_ranges[:,d,0].min(), data_ranges[:,d,1].max()] \\\n for d in range(len(markers))])\n\nn_samples = len(data)", "_____no_output_____" ], [ "%%time\n\nn_mcmc_chain = 5\nn_mcmc_sample = 1000\nmcmc_gaussin_std = 0.1\nrandom_effect_gaussian_std = 0.5\n\npooled_data = np.concatenate(data)\n\nnum_cores = multiprocessing.cpu_count()\nresults = Parallel(n_jobs=num_cores)(delayed(mcmc_template)(i) for i in range(n_mcmc_chain))", "5\n5\n5\n5\n5\nInitializing template mondrian process with pooled data\nInitializing template mondrian process with pooled data\nInitializing template mondrian process with pooled data\nInitializing template mondrian process with pooled data\nInitializing template mondrian process with pooled data\nChain 0: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 4, 4, 2, 6, 6, 62\nChain 1: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 6, 5, 6, 8, 9, 51\nChain 2: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 7, 2, 3, 2, 6, 48\nChain 4: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 12, 12, 9, 8, 11, 45\nChain 3: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 5, 7, 7, 2, 5, 55\nChain 0: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 14, 12, 10, 12, 16, 194\nChain 1: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 16, 13, 16, 14, 22, 202\nChain 4: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 25, 25, 16, 20, 15, 195\nChain 2: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 16, 14, 8, 10, 23, 185\nChain 3: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 16, 20, 14, 10, 12, 203\nChain 0: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 23, 12, 14, 12, 18, 393\nChain 1: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 24, 13, 24, 15, 27, 400\nChain 2: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 23, 21, 12, 14, 31, 385\nChain 4: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 29, 26, 26, 21, 18, 391\nChain 3: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 17, 23, 22, 12, 13, 390\nChain 0: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 24, 13, 18, 12, 19, 582\nChain 1: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 25, 13, 26, 22, 30, 596\nChain 4: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 29, 26, 37, 21, 24, 590\nChain 2: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 27, 21, 13, 14, 36, 577\nChain 3: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 18, 27, 25, 14, 16, 577\nCPU times: user 2.58 s, sys: 1.19 s, total: 
3.77 s\nWall time: 19min 6s\n" ], [ "accepts_template_mp_H = []\naccepts_indiv_mp_lists_H = []\njoint_logP_H = []\n\nfor _ in results:\n accepts_template_mp_H.append(_[0])\n accepts_indiv_mp_lists_H.append(_[1])\n joint_logP_H.append(_[2])", "_____no_output_____" ], [ "fig, axarr = plt.subplots(n_mcmc_chain / 3 + 1, 3, figsize=(15,6 * 1))\nfor i in range(n_mcmc_chain):\n axarr[i/3,i%3].plot(joint_logP_H[i])\nfig.suptitle(\"log joint likelihood\")\nplt.show()", "_____no_output_____" ], [ "population_size_H = [None for _ in range(n_samples)]\n\nfor id in range(n_samples):\n data_subset = data[id]\n burnt_samples = [i for _ in range(n_mcmc_chain) for i in \\\n accepts_indiv_mp_lists_H[_][id][-2:]]\n population_size_H[id] = compute_cell_population(data_subset, burnt_samples, \\\n table, cell_type_name2idx)\n\nfor id in range(n_samples):\n plt.plot(population_size_H[id],color = 'g')\nplt.title('Healthy')\nplt.show()", "_____no_output_____" ] ], [ [ "### Training models for unhealthy samples", "_____no_output_____" ] ], [ [ "data = [data_dict[_].head(20000).applymap(f)[markers].values for _ in ['SJ01','SJ02',\\\n 'SJ03','SJ04','SJ05','SJ06','SJ07','SJ08','SJ09','SJ10',\\\n 'SJ11','SJ12','SJ13','SJ14','SJ15','SJ16']]\n \n# compute data range \ndata_ranges = np.array([[[data[_][:,d].min(),data[_][:,d].max()] \\\n for d in range(len(markers))]\n for _ in range(len(data))])\n\ntheta_space = np.array([[data_ranges[:,d,0].min(), data_ranges[:,d,1].max()] \\\n for d in range(len(markers))])\n\nn_samples = len(data)", "_____no_output_____" ], [ "%%time\npooled_data = np.concatenate(data)\nresults = Parallel(n_jobs=num_cores)(delayed(mcmc_template)(i) for i in range(n_mcmc_chain))", "16\n16\n16\n16\n16\nInitializing template mondrian process with pooled data\nInitializing template mondrian process with pooled data\nInitializing template mondrian process with pooled data\nInitializing template mondrian process with pooled data\nInitializing template mondrian process with pooled data\nChain 4: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 4, 5, 3, 6, 8, 24\nChain 1: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 7, 10, 5, 10, 4, 22\nChain 0: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 4, 10, 4, 5, 10, 25\nChain 2: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 8, 15, 12, 8, 11, 26\nChain 3: Drawing Sample 250 ...\nAccepted proposals of indiv mp, template mp: 4, 8, 9, 11, 12, 19\nChain 4: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 26, 13, 8, 30, 10, 141\nChain 0: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 15, 35, 16, 26, 25, 133\nChain 1: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 12, 33, 11, 25, 15, 140\nChain 2: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 19, 33, 28, 23, 17, 137\nChain 3: Drawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 10, 22, 36, 23, 25, 151\nChain 4: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 35, 14, 9, 43, 20, 319\nChain 0: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 19, 44, 22, 43, 30, 309\nChain 1: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 18, 42, 11, 31, 25, 316\nChain 2: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 21, 33, 30, 31, 30, 294\nChain 3: Drawing Sample 750 ...\nAccepted proposals of indiv mp, template mp: 14, 31, 38, 29, 31, 324\nChain 4: Drawing Sample 1000 
...\nAccepted proposals of indiv mp, template mp: 38, 18, 9, 44, 24, 491\nChain 0: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 22, 47, 25, 47, 32, 488\nChain 1: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 23, 44, 12, 32, 35, 481\nChain 2: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 26, 34, 32, 39, 32, 474\nChain 3: Drawing Sample 1000 ...\nAccepted proposals of indiv mp, template mp: 38, 33, 39, 31, 31, 490\nCPU times: user 6.69 s, sys: 3.03 s, total: 9.72 s\nWall time: 1h 4min 17s\n" ], [ "accepts_template_mp_SJ = []\naccepts_indiv_mp_lists_SJ = []\njoint_logP_SJ = []\n\nfor _ in results:\n accepts_template_mp_SJ.append(_[0])\n accepts_indiv_mp_lists_SJ.append(_[1])\n joint_logP_SJ.append(_[2])", "_____no_output_____" ], [ "fig, axarr = plt.subplots(n_mcmc_chain / 2, 3, figsize=(15,6 ))\nfor i in range(n_mcmc_chain):\n axarr[i/3,i%3].plot(joint_logP_SJ[i])\nfig.suptitle(\"log joint likelihood\")\nplt.show()", "_____no_output_____" ], [ "population_size_SJ = [None for _ in range(n_samples)]\n\nfor id in range(n_samples):\n data_subset = data[id]\n burnt_samples = [i for _ in range(n_mcmc_chain) for i in \\\n accepts_indiv_mp_lists_SJ[_][id][-1:]]\n population_size_SJ[id] = compute_cell_population(data_subset , burnt_samples, \\\n table, cell_type_name2idx)\n\nfor id in range(n_samples):\n plt.plot(population_size_SJ[id],color = 'r')\nplt.title('AML')\nplt.show()", "_____no_output_____" ] ], [ [ "### compare size of subpopulations in healthy and AML individuals (within sample analysis)", "_____no_output_____" ] ], [ [ "fig, axarr = plt.subplots(2, 1,sharey=True)\nfor id in range(0,5):\n axarr[0].plot(population_size_H[id],color = 'g')\naxarr[0].set_title('healty')\nfor id in range(0,16):\n axarr[1].plot(population_size_SJ[id],color = 'r')\naxarr[1].set_title('AML')\nplt.show()", "_____no_output_____" ], [ "X = np.array(population_size_H + population_size_SJ)\nY = np.array([0]*5 + [1]*16)\npredict_prob,models = LOO(X,Y)", "[0.99532102689879254, 0.99500949745181755, 0.99321895845965269, 0.9999611759918432, 0.99989328253566567, 4.2215057051153693e-06, 2.5674480819137813e-06, 3.1676490874765761e-08, 3.652663687070401e-07, 0.3361582889839847, 0.036373234122759057, 2.4021019596198734e-05, 5.2887321258632269e-05, 0.00010748043038588673, 6.004330907971589e-05, 6.2859005415916158e-05, 4.4100858254125797e-07, 0.018541781477783403, 7.2288869790160248e-07, 1.5686456855679154e-06, 1.9075969346360466e-05]\n" ], [ "cell_types = [cell_type_idx2name[i] for i in range(14)]\n\nfig, axarr = plt.subplots(2, 1,sharey=True, sharex = True)\nfor id in range(5):\n axarr[0].plot(population_size_H[id],color = 'g')\naxarr[0].set_title('Proportion of each cell type for Healty individuals')\nfor id in range(16):\n axarr[1].plot(population_size_SJ[id],color = 'r')\naxarr[1].set_title('Proportion of each cell type for AML individuals')\n\nplt.xticks(range(14),cell_types,rotation = 90)\nplt.show()\n\nfor i in range(21):\n plt.plot(models[i].coef_[0])\nplt.title('LOOCV Logistic Regression Coefficients')\nplt.xticks(range(14),cell_types,rotation = 90)\nplt.show()", "_____no_output_____" ] ], [ [ "# Diagnosis", "_____no_output_____" ] ], [ [ "# reload data!\n\ndata = [data_dict[_].head(20000).applymap(f)[markers].values for _ in ['H1','H2','H3','H4',\\\n 'H5','SJ01','SJ02','SJ03','SJ04','SJ05','SJ06','SJ07','SJ08','SJ09','SJ10',\\\n 'SJ11','SJ12','SJ13','SJ14','SJ15','SJ16']]\n \n# compute data range \ndata_ranges = 
np.array([[[data[_][:,d].min(),data[_][:,d].max()] \\\n for d in range(len(markers))]\n for _ in range(len(data))])\n\ntheta_space = np.array([[data_ranges[:,d,0].min(), data_ranges[:,d,1].max()] \\\n for d in range(len(markers))])\n\nn_samples = len(data)", "_____no_output_____" ] ], [ [ "### Logistic regression with cell population of under 2 templates as features", "_____no_output_____" ] ], [ [ "# step 1: learn cell populations of all samples, under 2 template MPs, 5 chains\n# V: cell proportion for 21 samples under healthy template\nV_H = [[None for chain in range(n_mcmc_chain)] for _ in range(21)]\nV_SJ = [[None for chain in range(n_mcmc_chain)] for _ in range(21)]\n\n\nfor id in range(21):\n print id\n res_H = Parallel(n_jobs=num_cores)(delayed(mcmc_condition_on_template)\\\n (id,accepts_template_mp_H[i][-1]) for i in range(n_mcmc_chain))\n indiv_MP_condition_template_H = [_[1][-1] for _ in res_H]\n for chain in range(n_mcmc_chain):\n V_H[id][chain] = compute_cell_population(data[id], indiv_MP_condition_template_H[chain:chain+1], \\\n table, cell_type_name2idx)\n \n res_SJ = Parallel(n_jobs=num_cores)(delayed(mcmc_condition_on_template)\\\n (id,accepts_template_mp_SJ[i][-1]) for i in range(n_mcmc_chain))\n indiv_MP_condition_template_SJ = [_[1][-1] for _ in res_SJ]\n for chain in range(n_mcmc_chain):\n V_SJ[id][chain] = compute_cell_population(data[id], indiv_MP_condition_template_SJ[chain:chain+1], \\\n table, cell_type_name2idx)", "0\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 98\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 42\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 65\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 40\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 59\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 65\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 69\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 63\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 66\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 82\n1\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 47\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 56\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 63\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 92\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 53\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 90\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 59\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 49\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 51\n2\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 89\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 45\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 81\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 69\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 61\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 66\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 57\nDrawing Sample 500 ...\nAccepted 
proposals of indiv mp, template mp: 56\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 67\n3\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 107\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 66\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 82\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 45\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 58\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 62\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 69\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 37\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 82\n4\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 74\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 70\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 79\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 50\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 74\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 62\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 61\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 108\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 57\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 71\n5\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 44\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 90\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 150\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 56\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 85\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 45\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 59\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 54\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 84\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 50\n6\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 93\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 58\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 47\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 76\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 72\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 75\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 67\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 48\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 41\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 55\n7\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 48\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 46\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 47\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 73\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 67\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 49\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template 
mp: 58\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 50\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 49\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 99\n8\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 76\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 56\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 72\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 69\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 79\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 88\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 108\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 67\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 53\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 75\n9\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 69\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 81\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 38\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 63\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 62\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 57\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 52\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 55\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 65\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 101\n10\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 88\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 103\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 65\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 81\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 73\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 92\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 59\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 79\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 92\n11\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 93\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 73\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 72\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 76\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 84\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 73\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 45\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 72\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 78\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 65\n12\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 156\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 61\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 48\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 85\nDrawing Sample 500 
...\nAccepted proposals of indiv mp, template mp: 77\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 60\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 61\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 69\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 63\n13\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 88\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 78\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 40\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 45\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 75\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 56\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 57\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 53\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 65\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 60\n14\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 99\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 97\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 64\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 80\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 123\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 90\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 70\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 66\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 78\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 110\n15\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 54\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 102\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 51\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 49\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 116\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 70\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 40\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 63\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 105\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 92\n16\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 79\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 117\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 72\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 88\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 127\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 91\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 140\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 64\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 56\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\n17\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 87\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 87\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 75\nDrawing Sample 500 ...\nAccepted 
proposals of indiv mp, template mp: 67\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 46\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 75\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 50\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 90\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 53\n18\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 73\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 103\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 47\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 69\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 90\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 55\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 49\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 75\n19\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 103\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 70\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 60\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 65\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 84\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 81\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 62\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 106\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 81\n20\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 66\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 50\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 126\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 82\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 78\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 68\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 84\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 69\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 60\nDrawing Sample 500 ...\nAccepted proposals of indiv mp, template mp: 83\n" ], [ "X = [[V_H[id][chain] + V_SJ[id][chain] for id in range(21)] for chain in range(n_mcmc_chain)]\nY = [0]*5 + [1]*16", "_____no_output_____" ], [ "def LOO(X,Y):\n from sklearn.model_selection import LeaveOneOut\n from sklearn import linear_model\n loo = LeaveOneOut()\n models = []\n X = np.array(X)\n Y = np.array(Y)\n\n predict_prob = []\n for train, test in loo.split(X,Y):\n train_X = X[train]\n train_Y = Y[train]\n test_X = X[test]\n test_Y = Y[test]\n logreg = linear_model.LogisticRegression(C=1e5)\n logreg.fit(train_X, train_Y)\n test_Y_predict = logreg.predict(test_X)\n models.append(logreg)\n predict_prob.append(logreg.predict_proba(test_X)[0][0])\n\n print predict_prob\n plt.scatter(range(21),predict_prob,s = 100)\n plt.xlim(0, 21)\n plt.ylim(0, 1)\n groups = ['H%s' % i for i in range(1,6)] + ['SJ%s' % i for i in range(1,17)]\n plt.legend()\n\n 
plt.xticks(range(21),groups)\n plt.ylabel('P(healthy)')\n plt.title('P(healthy) Predicted by LOOCV Logistic Regression')\n \n return predict_prob,models", "_____no_output_____" ], [ "predict_prob,models = [],[]\nfor chain in range(n_mcmc_chain):\n res = LOO(X[chain],Y)\n predict_prob.append(res[0])\n models.append(res[1])", "[0.99841206336111987, 0.99966800288254687, 0.87316854492456542, 0.99999926620800161, 0.99984613432778913, 1.3459889647293721e-08, 0.0026811637112176268, 0.00010195742044638578, 1.5442242625729463e-05, 1.8254518332594394e-05, 0.003338405513243603, 0.00011531545835186119, 0.00034991109377846552, 0.033424769452122471, 0.0017130441929669171, 0.046224174116587413, 4.2252976673040621e-09, 0.0035542734667173281, 4.0519915056602684e-07, 3.3218450434802094e-08, 1.41687768184795e-08]\n[0.99679385991584324, 0.99983747510386578, 0.96874183860576113, 0.99966630817397717, 0.99999843814722889, 4.4080938899071498e-09, 8.4638356256938607e-07, 3.5134730125285785e-08, 7.2666163464241151e-07, 0.014622982240019011, 0.0033318636184076489, 6.7338532638849813e-07, 0.00066990380102160962, 0.00065606113297067559, 1.729235734559964e-05, 7.4609522781043935e-06, 3.076769780518962e-05, 0.083990951899576283, 4.8496175056866875e-06, 4.6487622640256632e-09, 1.674832040832186e-06]\n[0.97461768633847823, 0.99955912667312252, 0.99589566487681869, 0.99998901902420423, 0.99994996814969583, 1.2550153397627994e-06, 4.3184142195729081e-05, 2.4103542013431678e-06, 4.7705896568661643e-06, 0.090557765099546939, 0.00049534119915739527, 1.8921063007715233e-05, 7.7215739042735265e-06, 0.00042160357552945005, 2.5992641006222783e-07, 4.4390364528634763e-07, 1.1415410838822027e-09, 0.047078205690052166, 1.2900448687069854e-07, 1.8290425096711971e-07, 2.3621352237546134e-06]\n[0.9984581698292071, 0.99967566805213026, 0.95581470201280105, 0.99981729250189166, 0.99998005823530389, 3.8171524496810605e-08, 1.0332645393407169e-05, 7.8714287676806549e-07, 1.0093235203179063e-06, 0.0035568928188703941, 0.0045957703296436447, 5.0590847675224815e-05, 0.00021169720824332217, 0.0013218348791412815, 0.00070822216859012244, 0.0014310848368445095, 1.8268520585174031e-08, 0.84909823894672387, 2.2819103173699062e-06, 4.6697962184927277e-06, 7.9412068315631856e-06]\n[0.95817677332306073, 0.99975102698038409, 0.96197731114098117, 0.99941207433360069, 0.9999706433925627, 1.6075629938328007e-08, 5.4855779849649622e-06, 1.1835507840451953e-07, 6.3212776513221769e-07, 0.025756083957138909, 0.003883512638022113, 3.1504749534727594e-06, 1.089165066914255e-05, 0.070833743257998627, 1.3106135488438753e-05, 1.2063959820007852e-07, 4.8466431523674913e-06, 0.0066751730227279094, 2.6432808174159383e-05, 8.1768226702139124e-05, 0.00011538114822084999]\n" ] ], [ [ "# Baseline 1: one tree for each group (without random effects) ", "_____no_output_____" ] ], [ [ "# fit 1 tree to pooled healthy samples\nglobal_MP_H = []\nglobal_MP_SJ = []\nn_iter = 1000\n\ndata_H = np.concatenate(data[0:5])\nfor chain in range(n_mcmc_chain):\n global_MP_H.append(init_mp(theta_space, table, data_H, n_iter,mcmc_gaussin_std))\n \ndata_SJ = np.concatenate(data[5:])\nfor chain in range(n_mcmc_chain):\n global_MP_SJ.append(init_mp(theta_space, table, data_SJ, n_iter,mcmc_gaussin_std))", "_____no_output_____" ] ], [ [ "### Compare classification error(both gives perfect classification): ", "_____no_output_____" ] ], [ [ "V_H_Global = [None for _ in range(21)]\nV_SJ_Global = [None for _ in range(21)]\n\n\nfor id in range(21):\n V_H_Global[id] = compute_cell_population(data[id], 
global_MP_H, table, cell_type_name2idx)\n V_SJ_Global[id] = compute_cell_population(data[id], global_MP_SJ, table, cell_type_name2idx)\n\nX_Global = [V_H_Global[id] + V_SJ_Global[id] for id in range(21)]\nY_Global = [0]*5 + [1]*16", "_____no_output_____" ], [ "for id in range(21):\n plt.plot(X_Global[id])", "_____no_output_____" ], [ "predict_prob,models = LOO(X_Global,Y_Global)", "_____no_output_____" ] ], [ [ "### Compare log likelihood $P(data_i|MP_i)$", "_____no_output_____" ] ], [ [ "# individual MP with random effects\n\nlog_lik_H = [[] for _ in range(5)] # 5 * n_chain\nlog_lik_SJ = [[] for _ in range(16)] # 5 * n_chain\n\nfor id in range(5):\n data_subset = data[id]\n burnt_samples = [i for _ in range(n_mcmc_chain) for i in \\\n accepts_indiv_mp_lists_H[_][id][-1:]]\n for sample in burnt_samples:\n log_lik_H[id].append(comp_log_p_sample(sample, data_subset))\n\nfor id in range(16):\n data_subset = data[5+id]\n burnt_samples = [i for _ in range(n_mcmc_chain) for i in \\\n accepts_indiv_mp_lists_SJ[_][id][-1:]]\n for sample in burnt_samples:\n log_lik_SJ[id].append(comp_log_p_sample(sample, data_subset))\n\nlog_lik = log_lik_H + log_lik_SJ ", "_____no_output_____" ], [ "# individual MP without random effects\n\nlog_lik_H_global = [[] for _ in range(5)] # 5 * n_chain * 2\nlog_lik_SJ_global = [[] for _ in range(16)] # 5 * n_chain * 2\n\nfor id in range(5):\n data_subset = data[id]\n for sample in global_MP_H:\n log_lik_H_global[id].append(comp_log_p_sample(sample, data_subset))\n\nfor id in range(16):\n data_subset = data[5+id]\n for sample in global_MP_SJ:\n log_lik_SJ_global[id].append(comp_log_p_sample(sample, data_subset))\n\nlog_lik_global = log_lik_H_global + log_lik_SJ_global", "_____no_output_____" ], [ "def draw_plot(data, edge_color, fill_color):\n bp = ax.boxplot(data, patch_artist=True)\n\n for element in ['boxes', 'whiskers', 'fliers', 'means', 'medians', 'caps']:\n plt.setp(bp[element], color=edge_color)\n\n for patch in bp['boxes']:\n patch.set(facecolor=fill_color) \n\nfig, ax = plt.subplots(figsize=(8,3))\ndraw_plot(log_lik.T, 'red', 'tan')\ndraw_plot(log_lik_global.T, 'blue', 'cyan')\n\nax.set_ylabel('Log likelihood',fontsize=12)\n#plt.setp(ax.get_yticklabels(),visible=False)\n\ngroups = ['H%s' % i for i in range(1,6)] + ['S%s' % i for i in range(1,17)]\nplt.plot([], c='#D7191C', label='MP+RE')\nplt.plot([], c='#2C7BB6', label='Global MP')\nplt.legend(fontsize=12)\n\nplt.plot([5.5, 5.5],[-400000, -150000], c = 'k', linestyle = ':')\n\nplt.xticks(range(1,22),groups)\nplt.xticks(fontsize=12)\n#plt.xlabel('Subjects')\nax.yaxis.get_major_formatter().set_powerlimits((0,1))\nplt.yticks(fontsize=12)\nplt.tight_layout()\nplt.savefig('log_lik_comparison.png')\nplt.show()", "_____no_output_____" ] ], [ [ "# Baseline 2: K means (use centers of pooled healthy data and pooled AML data as feature extractors)", "_____no_output_____" ] ], [ [ "V_Kmeans_H = [[None for chain in range(n_mcmc_chain)] for _ in range(21)]\nV_Kmeans_SJ = [[None for chain in range(n_mcmc_chain)] for _ in range(21)]\n\nfrom sklearn.cluster import KMeans\nfrom scipy.spatial import distance\n\nfor chain in range(n_mcmc_chain):\n cluster_centers_H = KMeans(n_clusters=14, random_state=chain).\\\n fit(np.concatenate(data[0:5])).cluster_centers_\n for id in range(21):\n closest_pt_index = distance.cdist(data[id], cluster_centers_H).argmin(axis=1)\n V_Kmeans_H[id][chain] = [sum(closest_pt_index == k)*1.0 / \\\n len(closest_pt_index) for k in range(14)] \n cluster_centers_SJ = KMeans(n_clusters=14, 
random_state=chain).\\\n fit(np.concatenate(data[6:21])).cluster_centers_\n for id in range(21):\n closest_pt_index = distance.cdist(data[id], cluster_centers_SJ).argmin(axis=1)\n V_Kmeans_SJ[id][chain] = [sum(closest_pt_index == k)*1.0 / \\\n len(closest_pt_index) for k in range(14)] \n\nX_Kmeans = [[V_Kmeans_H[id][chain] + V_Kmeans_SJ[id][chain] for id in range(21)] \\\n for chain in range(n_mcmc_chain)]\n\npredict_prob_Kmeans,models_Kmeans = [],[]\nfor chain in range(n_mcmc_chain):\n res = LOO(X_Kmeans[chain],Y)\n predict_prob_Kmeans.append(res[0])\n models_Kmeans.append(res[1])", "[0.99957763283433509, 0.99885485252270456, 0.98118568140555851, 0.99999054960979017, 0.99997063254704432, 4.3026640457188847e-06, 0.0055757304074162128, 5.0585259043778308e-06, 6.0652879763090084e-07, 0.00019091658833148006, 0.0043653049569929436, 5.7032459874872821e-05, 0.0076303207937618023, 0.98683205070367164, 0.0012310249984547328, 0.023964942080579754, 8.8642442064301719e-07, 1.9553462057841919e-05, 2.1187550134804667e-05, 3.3786011667147342e-05, 2.932361926510918e-05]\n[0.99955064613804545, 0.99858666580172484, 0.97770920491571345, 0.99999422407984573, 0.99998291966791519, 3.3537270848205125e-06, 0.0062126532857572636, 6.255274379585174e-06, 6.8856880253154173e-07, 0.00020911720184280114, 0.01326873878003354, 7.0980025453915019e-05, 0.0063022662813789765, 0.99426559271936044, 0.0026655012984032611, 0.041022514620350559, 1.1312533524376889e-06, 1.3805118529996996e-05, 2.3640786669387737e-05, 1.5249168426212378e-05, 1.2343200929132436e-06]\n[0.99954686556518046, 0.99856749553186974, 0.97781947247818735, 0.99999421387391996, 0.99998310344830577, 3.3348779885367108e-06, 0.0063521317442997161, 6.1645956321854101e-06, 6.705105622950569e-07, 0.00020938781522039651, 0.013164062000923127, 7.0540332681501283e-05, 0.0062488399647419035, 0.99450239318810929, 0.0026056758127035451, 0.041515587617018346, 1.1069050120937618e-06, 1.413241696779366e-05, 2.3479921824165473e-05, 1.4738924588808544e-05, 1.2055106546338124e-06]\n[0.99955120995524693, 0.99857565004546611, 0.97767557438617314, 0.99999420507276238, 0.99998291746546841, 3.3875243959924362e-06, 0.0062282574583948369, 6.2034483754302983e-06, 6.8340256043075698e-07, 0.00020970386426599763, 0.013245436222817375, 7.0740948105108004e-05, 0.0062636762651021582, 0.99426586786976034, 0.0026306981228785276, 0.040737510261603771, 1.1188147578389263e-06, 1.3720431628061469e-05, 2.3571536214239686e-05, 1.5533009315893409e-05, 1.2651479311953651e-06]\n[0.99966190645702191, 0.99803465872840169, 0.96835910933692082, 0.99999627186656392, 0.99999585195396845, 3.3376022746667289e-06, 0.0060878769895420515, 3.2514022662022413e-06, 8.3584651944246247e-07, 0.00032003825892290561, 0.021264225346234844, 0.00010921313015588296, 0.0038333854281551449, 0.99684910951373362, 0.00098670519730992279, 0.28970948633941762, 4.9793757728178178e-07, 3.4436224841671859e-06, 5.3012818343245449e-05, 3.7256820643594146e-05, 1.0591402876958256e-06]\n" ], [ "# draw box plot\n\nfig, ax = plt.subplots(figsize=(8,3))\nres_1 = np.array(predict_prob)\nres_1[:,6:] = 1 - res_1[:,6:]\nres_2 = np.array(predict_prob_Kmeans)\nres_2[:,6:] = 1 - res_2[:,6:]\ndraw_plot(res_1, 'red', 'tan')\ndraw_plot(res_2, 'blue', 'cyan')\n\nax.set_ylabel('p(Y_hat = Y)',fontsize=12)\n#plt.setp(ax.get_yticklabels(),visible=False)\n\ngroups = ['H%s' % i for i in range(1,6)] + ['S%s' % i for i in range(1,17)]\nplt.plot([], c='#D7191C', label='MP+RE')\nplt.plot([], c='#2C7BB6', 
label='kmeans')\nplt.legend(fontsize=12)\n\nplt.plot([5.5, 5.5],[0,1], c = 'k', linestyle = ':')\n\nplt.xticks(range(1,22),groups)\nplt.xticks(fontsize=12)\n#plt.xlabel('Subjects')\nax.yaxis.get_major_formatter().set_powerlimits((0,1))\nplt.yticks(fontsize=12)\nplt.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "# Random Effect Analysis", "_____no_output_____" ] ], [ [ "def find_first_cut(theta_space):\n \n # find the dimension and location of first cut when there is a cut\n root_rec = theta_space[0]\n left_rec = theta_space[1][0]\n \n for _ in range(root_rec.shape[0]):\n if root_rec[_,1] != left_rec[_,1]:\n break \n dim, pos = _, left_rec[_,1]\n return dim , pos\n \ndef compute_diff_mp(template_mp,mp):\n \"\"\"\n Input: 2 mondrian trees\n Output:\n returns mp - tempatlate_mp\n D: tree structured (dimenison of cuts, shared across 2 mp trees), each node is an integer\n C: tree structured (position of cuts), each node is a real value\n \"\"\"\n if mp[1] == None and mp[2] == None:\n return None, None\n d_0_template, c_0_template = find_first_cut(template_mp)\n d_0_mp, c_0_mp = find_first_cut(mp)\n d_0 = d_0_template\n len_d_0 = template_mp[0][d_0][1] - template_mp[0][d_0][0]\n c_0 = abs(c_0_mp - c_0_template) / len_d_0\n \n D_left, C_left = compute_diff_mp(template_mp[1],mp[1])\n D_right, C_right = compute_diff_mp(template_mp[2],mp[2])\n D = [d_0, D_left, D_right]\n C = [c_0, C_left, C_right]\n return D, C", "_____no_output_____" ] ], [ [ "## Compare magnitude of random effects in 2 groups ", "_____no_output_____" ] ], [ [ "random_effect_H = [[None for chain in range(n_mcmc_chain)] for id in range(5)]\nrandom_effect_SJ = [[None for chain in range(n_mcmc_chain)] for id in range(16)]\n\nfor id in range(5):\n for chain in range(n_mcmc_chain):\n random_effect_H[id][chain] = compute_diff_mp(accepts_template_mp_H[chain][-1],\\\n accepts_indiv_mp_lists_H[chain][id][-1])\nfor id in range(16):\n for chain in range(n_mcmc_chain):\n random_effect_SJ[id][chain] = compute_diff_mp(accepts_template_mp_SJ[chain][-1],\\\n accepts_indiv_mp_lists_SJ[chain][id][-1])", "_____no_output_____" ], [ "def flatten_tree(tree):\n if tree == None:\n return []\n if len(tree) == 1:\n return tree\n else:\n return [tree[0]] + flatten_tree(tree[1]) + flatten_tree(tree[2])\n\"\"\"\nrandom_effect_H_flattened[patient_id][chain] = a list of unordered offsets\nrandom_effect_SJ_flattened[patient_id][chain] = a list of unordered offsets\n\"\"\"\nrandom_effect_H_flattened = [[flatten_tree(random_effect_H[id][chain][1]) \\\n for chain in range(n_mcmc_chain)] for id in range(5)]\nrandom_effect_SJ_flattened = [[flatten_tree(random_effect_SJ[id][chain][1]) \\\n for chain in range(n_mcmc_chain)] for id in range(16)]", "_____no_output_____" ], [ "import itertools\nimport seaborn as sns; sns.set(color_codes=True)\nfrom sklearn.neighbors import KernelDensity\n\nrandom_effect_H_set = [j for i in random_effect_H_flattened for _ in i for j in _]\nrandom_effect_SJ_set = [j for i in random_effect_SJ_flattened for _ in i for j in _]\n# bins = 20\n# plt.hist(random_effect_H_set,bins = bins)\n# plt.show()\n# plt.hist(random_effect_SJ_set, bins = bins)\n# plt.show()\n# kde_H = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(random_effect_H_set)\n\nplt.plot()\noffset_H = sns.distplot(random_effect_H_set,label=\"Healthy\")\noffset_SJ = sns.distplot(random_effect_SJ_set, label=\"AML\")\nplt.legend()\nplt.show()", "_____no_output_____" ] ], [ [ "## Visualize random effects(find chains and dimensions what random effects are obvious)", 
"_____no_output_____" ] ], [ [ "chain = 1\n\nrandom_effect_H_set = [random_effect_H_flattened[id][chain][0] for id in range(5)]\nrandom_effect_SJ_set = [random_effect_SJ_flattened[id][chain][0] for id in range(16)]\n# bins = 20\n# plt.hist(random_effect_H_set,bins = bins)\n# plt.show()\n# plt.hist(random_effect_SJ_set, bins = bins)\n# plt.show()\n# kde_H = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(random_effect_H_set)\n\nplt.plot()\noffset_H = sns.distplot(random_effect_H_set,label=\"Healthy\")\noffset_SJ = sns.distplot(random_effect_SJ_set, label=\"AML\")\nplt.legend()\nplt.show()\n", "_____no_output_____" ], [ "jkdsa", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7d8132d00d39e9d26790b394edb4094e5f16a6c
38,953
ipynb
Jupyter Notebook
chapters/chapter_4/4_2_mlp_surnames/4_2_Classifying_Surnames_with_an_MLP.ipynb
prampampam/PyTorchNLPBook
2c8a0700a4c0741fd352aa9883ec28efe0586907
[ "Apache-2.0" ]
null
null
null
chapters/chapter_4/4_2_mlp_surnames/4_2_Classifying_Surnames_with_an_MLP.ipynb
prampampam/PyTorchNLPBook
2c8a0700a4c0741fd352aa9883ec28efe0586907
[ "Apache-2.0" ]
null
null
null
chapters/chapter_4/4_2_mlp_surnames/4_2_Classifying_Surnames_with_an_MLP.ipynb
prampampam/PyTorchNLPBook
2c8a0700a4c0741fd352aa9883ec28efe0586907
[ "Apache-2.0" ]
1
2021-03-17T18:00:29.000Z
2021-03-17T18:00:29.000Z
36.067593
121
0.496393
[ [ [ "# Classifying Surnames with a Multilayer Perceptron", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "from argparse import Namespace\nfrom collections import Counter\nimport json\nimport os\nimport string\n\nimport numpy as np\nimport pandas as pd\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.utils.data import Dataset, DataLoader\nfrom tqdm import tqdm_notebook", "_____no_output_____" ] ], [ [ "## Data Vectorization classes", "_____no_output_____" ], [ "### The Vocabulary", "_____no_output_____" ] ], [ [ "class Vocabulary(object):\n \"\"\"Class to process text and extract vocabulary for mapping\"\"\"\n\n def __init__(self, token_to_idx=None, add_unk=True, unk_token=\"<UNK>\"):\n \"\"\"\n Args:\n token_to_idx (dict): a pre-existing map of tokens to indices\n add_unk (bool): a flag that indicates whether to add the UNK token\n unk_token (str): the UNK token to add into the Vocabulary\n \"\"\"\n\n if token_to_idx is None:\n token_to_idx = {}\n self._token_to_idx = token_to_idx\n\n self._idx_to_token = {idx: token \n for token, idx in self._token_to_idx.items()}\n \n self._add_unk = add_unk\n self._unk_token = unk_token\n \n self.unk_index = -1\n if add_unk:\n self.unk_index = self.add_token(unk_token) \n \n \n def to_serializable(self):\n \"\"\" returns a dictionary that can be serialized \"\"\"\n return {'token_to_idx': self._token_to_idx, \n 'add_unk': self._add_unk, \n 'unk_token': self._unk_token}\n\n @classmethod\n def from_serializable(cls, contents):\n \"\"\" instantiates the Vocabulary from a serialized dictionary \"\"\"\n return cls(**contents)\n\n def add_token(self, token):\n \"\"\"Update mapping dicts based on the token.\n\n Args:\n token (str): the item to add into the Vocabulary\n Returns:\n index (int): the integer corresponding to the token\n \"\"\"\n try:\n index = self._token_to_idx[token]\n except KeyError:\n index = len(self._token_to_idx)\n self._token_to_idx[token] = index\n self._idx_to_token[index] = token\n return index\n \n def add_many(self, tokens):\n \"\"\"Add a list of tokens into the Vocabulary\n \n Args:\n tokens (list): a list of string tokens\n Returns:\n indices (list): a list of indices corresponding to the tokens\n \"\"\"\n return [self.add_token(token) for token in tokens]\n\n def lookup_token(self, token):\n \"\"\"Retrieve the index associated with the token \n or the UNK index if token isn't present.\n \n Args:\n token (str): the token to look up \n Returns:\n index (int): the index corresponding to the token\n Notes:\n `unk_index` needs to be >=0 (having been added into the Vocabulary) \n for the UNK functionality \n \"\"\"\n if self.unk_index >= 0:\n return self._token_to_idx.get(token, self.unk_index)\n else:\n return self._token_to_idx[token]\n\n def lookup_index(self, index):\n \"\"\"Return the token associated with the index\n \n Args: \n index (int): the index to look up\n Returns:\n token (str): the token corresponding to the index\n Raises:\n KeyError: if the index is not in the Vocabulary\n \"\"\"\n if index not in self._idx_to_token:\n raise KeyError(\"the index (%d) is not in the Vocabulary\" % index)\n return self._idx_to_token[index]\n\n def __str__(self):\n return \"<Vocabulary(size=%d)>\" % len(self)\n\n def __len__(self):\n return len(self._token_to_idx)", "_____no_output_____" ] ], [ [ "### The Vectorizer", "_____no_output_____" ] ], [ [ "class SurnameVectorizer(object):\n \"\"\" The Vectorizer which coordinates the Vocabularies and puts 
them to use\"\"\"\n def __init__(self, surname_vocab, nationality_vocab):\n \"\"\"\n Args:\n surname_vocab (Vocabulary): maps characters to integers\n nationality_vocab (Vocabulary): maps nationalities to integers\n \"\"\"\n self.surname_vocab = surname_vocab\n self.nationality_vocab = nationality_vocab\n\n def vectorize(self, surname):\n \"\"\"\n Args:\n surname (str): the surname\n\n Returns:\n one_hot (np.ndarray): a collapsed one-hot encoding \n \"\"\"\n vocab = self.surname_vocab\n one_hot = np.zeros(len(vocab), dtype=np.float32)\n for token in surname:\n one_hot[vocab.lookup_token(token)] = 1\n\n return one_hot\n\n @classmethod\n def from_dataframe(cls, surname_df):\n \"\"\"Instantiate the vectorizer from the dataset dataframe\n \n Args:\n surname_df (pandas.DataFrame): the surnames dataset\n Returns:\n an instance of the SurnameVectorizer\n \"\"\"\n surname_vocab = Vocabulary(unk_token=\"@\")\n nationality_vocab = Vocabulary(add_unk=False)\n\n for index, row in surname_df.iterrows():\n for letter in row.surname:\n surname_vocab.add_token(letter)\n nationality_vocab.add_token(row.nationality)\n\n return cls(surname_vocab, nationality_vocab)\n\n @classmethod\n def from_serializable(cls, contents):\n surname_vocab = Vocabulary.from_serializable(contents['surname_vocab'])\n nationality_vocab = Vocabulary.from_serializable(contents['nationality_vocab'])\n return cls(surname_vocab=surname_vocab, nationality_vocab=nationality_vocab)\n\n def to_serializable(self):\n return {'surname_vocab': self.surname_vocab.to_serializable(),\n 'nationality_vocab': self.nationality_vocab.to_serializable()}", "_____no_output_____" ] ], [ [ "### The Dataset", "_____no_output_____" ] ], [ [ "class SurnameDataset(Dataset):\n def __init__(self, surname_df, vectorizer):\n \"\"\"\n Args:\n surname_df (pandas.DataFrame): the dataset\n vectorizer (SurnameVectorizer): vectorizer instatiated from dataset\n \"\"\"\n self.surname_df = surname_df\n self._vectorizer = vectorizer\n\n self.train_df = self.surname_df[self.surname_df.split=='train']\n self.train_size = len(self.train_df)\n\n self.val_df = self.surname_df[self.surname_df.split=='val']\n self.validation_size = len(self.val_df)\n\n self.test_df = self.surname_df[self.surname_df.split=='test']\n self.test_size = len(self.test_df)\n\n self._lookup_dict = {'train': (self.train_df, self.train_size),\n 'val': (self.val_df, self.validation_size),\n 'test': (self.test_df, self.test_size)}\n\n self.set_split('train')\n \n # Class weights\n class_counts = surname_df.nationality.value_counts().to_dict()\n def sort_key(item):\n return self._vectorizer.nationality_vocab.lookup_token(item[0])\n sorted_counts = sorted(class_counts.items(), key=sort_key)\n frequencies = [count for _, count in sorted_counts]\n self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)\n\n @classmethod\n def load_dataset_and_make_vectorizer(cls, surname_csv):\n \"\"\"Load dataset and make a new vectorizer from scratch\n \n Args:\n surname_csv (str): location of the dataset\n Returns:\n an instance of SurnameDataset\n \"\"\"\n surname_df = pd.read_csv(surname_csv)\n train_surname_df = surname_df[surname_df.split=='train']\n return cls(surname_df, SurnameVectorizer.from_dataframe(train_surname_df))\n\n @classmethod\n def load_dataset_and_load_vectorizer(cls, surname_csv, vectorizer_filepath):\n \"\"\"Load dataset and the corresponding vectorizer. 
\n Used in the case in the vectorizer has been cached for re-use\n \n Args:\n surname_csv (str): location of the dataset\n vectorizer_filepath (str): location of the saved vectorizer\n Returns:\n an instance of SurnameDataset\n \"\"\"\n surname_df = pd.read_csv(surname_csv)\n vectorizer = cls.load_vectorizer_only(vectorizer_filepath)\n return cls(surname_df, vectorizer)\n\n @staticmethod\n def load_vectorizer_only(vectorizer_filepath):\n \"\"\"a static method for loading the vectorizer from file\n \n Args:\n vectorizer_filepath (str): the location of the serialized vectorizer\n Returns:\n an instance of SurnameVectorizer\n \"\"\"\n with open(vectorizer_filepath) as fp:\n return SurnameVectorizer.from_serializable(json.load(fp))\n\n def save_vectorizer(self, vectorizer_filepath):\n \"\"\"saves the vectorizer to disk using json\n \n Args:\n vectorizer_filepath (str): the location to save the vectorizer\n \"\"\"\n with open(vectorizer_filepath, \"w\") as fp:\n json.dump(self._vectorizer.to_serializable(), fp)\n\n def get_vectorizer(self):\n \"\"\" returns the vectorizer \"\"\"\n return self._vectorizer\n\n def set_split(self, split=\"train\"):\n \"\"\" selects the splits in the dataset using a column in the dataframe \"\"\"\n self._target_split = split\n self._target_df, self._target_size = self._lookup_dict[split]\n\n def __len__(self):\n return self._target_size\n\n def __getitem__(self, index):\n \"\"\"the primary entry point method for PyTorch datasets\n \n Args:\n index (int): the index to the data point \n Returns:\n a dictionary holding the data point's:\n features (x_surname)\n label (y_nationality)\n \"\"\"\n row = self._target_df.iloc[index]\n\n surname_vector = \\\n self._vectorizer.vectorize(row.surname)\n\n nationality_index = \\\n self._vectorizer.nationality_vocab.lookup_token(row.nationality)\n\n return {'x_surname': surname_vector,\n 'y_nationality': nationality_index}\n\n def get_num_batches(self, batch_size):\n \"\"\"Given a batch size, return the number of batches in the dataset\n \n Args:\n batch_size (int)\n Returns:\n number of batches in the dataset\n \"\"\"\n return len(self) // batch_size\n\n \ndef generate_batches(dataset, batch_size, shuffle=True,\n drop_last=True, device=\"cpu\"): \n \"\"\"\n A generator function which wraps the PyTorch DataLoader. It will \n ensure each tensor is on the write device location.\n \"\"\"\n dataloader = DataLoader(dataset=dataset, batch_size=batch_size,\n shuffle=shuffle, drop_last=drop_last)\n\n for data_dict in dataloader:\n out_data_dict = {}\n for name, tensor in data_dict.items():\n out_data_dict[name] = data_dict[name].to(device)\n yield out_data_dict", "_____no_output_____" ] ], [ [ "## The Model: SurnameClassifier", "_____no_output_____" ] ], [ [ "class SurnameClassifier(nn.Module):\n \"\"\" A 2-layer Multilayer Perceptron for classifying surnames \"\"\"\n def __init__(self, input_dim, hidden_dim, output_dim):\n \"\"\"\n Args:\n input_dim (int): the size of the input vectors\n hidden_dim (int): the output size of the first Linear layer\n output_dim (int): the output size of the second Linear layer\n \"\"\"\n super(SurnameClassifier, self).__init__()\n self.fc1 = nn.Linear(input_dim, hidden_dim)\n self.fc2 = nn.Linear(hidden_dim, output_dim)\n\n def forward(self, x_in, apply_softmax=False):\n \"\"\"The forward pass of the classifier\n \n Args:\n x_in (torch.Tensor): an input data tensor. 
\n x_in.shape should be (batch, input_dim)\n apply_softmax (bool): a flag for the softmax activation\n should be false if used with the Cross Entropy losses\n Returns:\n the resulting tensor. tensor.shape should be (batch, output_dim)\n \"\"\"\n intermediate_vector = F.relu(self.fc1(x_in))\n prediction_vector = self.fc2(intermediate_vector)\n\n if apply_softmax:\n prediction_vector = F.softmax(prediction_vector, dim=1)\n\n return prediction_vector", "_____no_output_____" ] ], [ [ "## Training Routine", "_____no_output_____" ], [ "### Helper functions", "_____no_output_____" ] ], [ [ "def make_train_state(args):\n return {'stop_early': False,\n 'early_stopping_step': 0,\n 'early_stopping_best_val': 1e8,\n 'learning_rate': args.learning_rate,\n 'epoch_index': 0,\n 'train_loss': [],\n 'train_acc': [],\n 'val_loss': [],\n 'val_acc': [],\n 'test_loss': -1,\n 'test_acc': -1,\n 'model_filename': args.model_state_file}\n\ndef update_train_state(args, model, train_state):\n \"\"\"Handle the training state updates.\n\n Components:\n - Early Stopping: Prevent overfitting.\n - Model Checkpoint: Model is saved if the model is better\n\n :param args: main arguments\n :param model: model to train\n :param train_state: a dictionary representing the training state values\n :returns:\n a new train_state\n \"\"\"\n\n # Save one model at least\n if train_state['epoch_index'] == 0:\n torch.save(model.state_dict(), train_state['model_filename'])\n train_state['stop_early'] = False\n\n # Save model if performance improved\n elif train_state['epoch_index'] >= 1:\n loss_tm1, loss_t = train_state['val_loss'][-2:]\n\n # If loss worsened\n if loss_t >= train_state['early_stopping_best_val']:\n # Update step\n train_state['early_stopping_step'] += 1\n # Loss decreased\n else:\n # Save the best model\n if loss_t < train_state['early_stopping_best_val']:\n torch.save(model.state_dict(), train_state['model_filename'])\n\n # Reset early stopping step\n train_state['early_stopping_step'] = 0\n\n # Stop early ?\n train_state['stop_early'] = \\\n train_state['early_stopping_step'] >= args.early_stopping_criteria\n\n return train_state\n\ndef compute_accuracy(y_pred, y_target):\n _, y_pred_indices = y_pred.max(dim=1)\n n_correct = torch.eq(y_pred_indices, y_target).sum().item()\n return n_correct / len(y_pred_indices) * 100", "_____no_output_____" ] ], [ [ "#### general utilities", "_____no_output_____" ] ], [ [ "def set_seed_everywhere(seed, cuda):\n np.random.seed(seed)\n torch.manual_seed(seed)\n if cuda:\n torch.cuda.manual_seed_all(seed)\n\ndef handle_dirs(dirpath):\n if not os.path.exists(dirpath):\n os.makedirs(dirpath)", "_____no_output_____" ] ], [ [ "### Settings and some prep work", "_____no_output_____" ] ], [ [ "args = Namespace(\n # Data and path information\n surname_csv=\"data/surnames/surnames_with_splits.csv\",\n vectorizer_file=\"vectorizer.json\",\n model_state_file=\"model.pth\",\n save_dir=\"model_storage/ch4/surname_mlp\",\n # Model hyper parameters\n hidden_dim=300,\n # Training hyper parameters\n seed=1337,\n num_epochs=100,\n early_stopping_criteria=5,\n learning_rate=0.001,\n batch_size=64,\n # Runtime options\n cuda=False,\n reload_from_files=False,\n expand_filepaths_to_save_dir=True,\n)\n\nif args.expand_filepaths_to_save_dir:\n args.vectorizer_file = os.path.join(args.save_dir,\n args.vectorizer_file)\n\n args.model_state_file = os.path.join(args.save_dir,\n args.model_state_file)\n \n print(\"Expanded filepaths: \")\n print(\"\\t{}\".format(args.vectorizer_file))\n 
print(\"\\t{}\".format(args.model_state_file))\n \n# Check CUDA\nif not torch.cuda.is_available():\n args.cuda = False\n\nargs.device = torch.device(\"cuda\" if args.cuda else \"cpu\")\n \nprint(\"Using CUDA: {}\".format(args.cuda))\n\n\n# Set seed for reproducibility\nset_seed_everywhere(args.seed, args.cuda)\n\n# handle dirs\nhandle_dirs(args.save_dir)", "Expanded filepaths: \n\tmodel_storage/ch4/surname_mlp/vectorizer.json\n\tmodel_storage/ch4/surname_mlp/model.pth\nUsing CUDA: False\n" ] ], [ [ "### Initializations", "_____no_output_____" ] ], [ [ "if args.reload_from_files:\n # training from a checkpoint\n print(\"Reloading!\")\n dataset = SurnameDataset.load_dataset_and_load_vectorizer(args.surname_csv,\n args.vectorizer_file)\nelse:\n # create dataset and vectorizer\n print(\"Creating fresh!\")\n dataset = SurnameDataset.load_dataset_and_make_vectorizer(args.surname_csv)\n dataset.save_vectorizer(args.vectorizer_file)\n \nvectorizer = dataset.get_vectorizer()\nclassifier = SurnameClassifier(input_dim=len(vectorizer.surname_vocab), \n hidden_dim=args.hidden_dim, \n output_dim=len(vectorizer.nationality_vocab))\n", "Creating fresh!\n" ] ], [ [ "### Training loop", "_____no_output_____" ] ], [ [ "classifier = classifier.to(args.device)\ndataset.class_weights = dataset.class_weights.to(args.device)\n\n \nloss_func = nn.CrossEntropyLoss(dataset.class_weights)\noptimizer = optim.Adam(classifier.parameters(), lr=args.learning_rate)\nscheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer,\n mode='min', factor=0.5,\n patience=1)\n\ntrain_state = make_train_state(args)\n\nepoch_bar = tqdm_notebook(desc='training routine', \n total=args.num_epochs,\n position=0)\n\ndataset.set_split('train')\ntrain_bar = tqdm_notebook(desc='split=train',\n total=dataset.get_num_batches(args.batch_size), \n position=1, \n leave=True)\ndataset.set_split('val')\nval_bar = tqdm_notebook(desc='split=val',\n total=dataset.get_num_batches(args.batch_size), \n position=1, \n leave=True)\n\ntry:\n for epoch_index in range(args.num_epochs):\n train_state['epoch_index'] = epoch_index\n\n # Iterate over training dataset\n\n # setup: batch generator, set loss and acc to 0, set train mode on\n\n dataset.set_split('train')\n batch_generator = generate_batches(dataset, \n batch_size=args.batch_size, \n device=args.device)\n running_loss = 0.0\n running_acc = 0.0\n classifier.train()\n\n for batch_index, batch_dict in enumerate(batch_generator):\n # the training routine is these 5 steps:\n\n # --------------------------------------\n # step 1. zero the gradients\n optimizer.zero_grad()\n\n # step 2. compute the output\n y_pred = classifier(batch_dict['x_surname'])\n\n # step 3. compute the loss\n loss = loss_func(y_pred, batch_dict['y_nationality'])\n loss_t = loss.item()\n running_loss += (loss_t - running_loss) / (batch_index + 1)\n\n # step 4. use loss to produce gradients\n loss.backward()\n\n # step 5. 
use optimizer to take gradient step\n optimizer.step()\n # -----------------------------------------\n # compute the accuracy\n acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])\n running_acc += (acc_t - running_acc) / (batch_index + 1)\n\n # update bar\n train_bar.set_postfix(loss=running_loss, acc=running_acc, \n epoch=epoch_index)\n train_bar.update()\n\n train_state['train_loss'].append(running_loss)\n train_state['train_acc'].append(running_acc)\n\n # Iterate over val dataset\n\n # setup: batch generator, set loss and acc to 0; set eval mode on\n dataset.set_split('val')\n batch_generator = generate_batches(dataset, \n batch_size=args.batch_size, \n device=args.device)\n running_loss = 0.\n running_acc = 0.\n classifier.eval()\n\n for batch_index, batch_dict in enumerate(batch_generator):\n\n # compute the output\n y_pred = classifier(batch_dict['x_surname'])\n\n # step 3. compute the loss\n loss = loss_func(y_pred, batch_dict['y_nationality'])\n loss_t = loss.to(\"cpu\").item()\n running_loss += (loss_t - running_loss) / (batch_index + 1)\n\n # compute the accuracy\n acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])\n running_acc += (acc_t - running_acc) / (batch_index + 1)\n val_bar.set_postfix(loss=running_loss, acc=running_acc, \n epoch=epoch_index)\n val_bar.update()\n\n train_state['val_loss'].append(running_loss)\n train_state['val_acc'].append(running_acc)\n\n train_state = update_train_state(args=args, model=classifier,\n train_state=train_state)\n\n scheduler.step(train_state['val_loss'][-1])\n\n if train_state['stop_early']:\n break\n\n train_bar.n = 0\n val_bar.n = 0\n epoch_bar.update()\nexcept KeyboardInterrupt:\n print(\"Exiting loop\")\n", "_____no_output_____" ], [ "# compute the loss & accuracy on the test set using the best available model\n\nclassifier.load_state_dict(torch.load(train_state['model_filename']))\n\nclassifier = classifier.to(args.device)\ndataset.class_weights = dataset.class_weights.to(args.device)\nloss_func = nn.CrossEntropyLoss(dataset.class_weights)\n\ndataset.set_split('test')\nbatch_generator = generate_batches(dataset, \n batch_size=args.batch_size, \n device=args.device)\nrunning_loss = 0.\nrunning_acc = 0.\nclassifier.eval()\n\nfor batch_index, batch_dict in enumerate(batch_generator):\n # compute the output\n y_pred = classifier(batch_dict['x_surname'])\n \n # compute the loss\n loss = loss_func(y_pred, batch_dict['y_nationality'])\n loss_t = loss.item()\n running_loss += (loss_t - running_loss) / (batch_index + 1)\n\n # compute the accuracy\n acc_t = compute_accuracy(y_pred, batch_dict['y_nationality'])\n running_acc += (acc_t - running_acc) / (batch_index + 1)\n\ntrain_state['test_loss'] = running_loss\ntrain_state['test_acc'] = running_acc\n", "_____no_output_____" ], [ "print(\"Test loss: {};\".format(train_state['test_loss']))\nprint(\"Test Accuracy: {}\".format(train_state['test_acc']))", "Test loss: 1.7435305690765381;\nTest Accuracy: 47.875\n" ] ], [ [ "### Inference", "_____no_output_____" ] ], [ [ "def predict_nationality(surname, classifier, vectorizer):\n \"\"\"Predict the nationality from a new surname\n \n Args:\n surname (str): the surname to classifier\n classifier (SurnameClassifer): an instance of the classifier\n vectorizer (SurnameVectorizer): the corresponding vectorizer\n Returns:\n a dictionary with the most likely nationality and its probability\n \"\"\"\n vectorized_surname = vectorizer.vectorize(surname)\n vectorized_surname = torch.tensor(vectorized_surname).view(1, -1)\n result = 
classifier(vectorized_surname, apply_softmax=True)\n\n probability_values, indices = result.max(dim=1)\n index = indices.item()\n\n predicted_nationality = vectorizer.nationality_vocab.lookup_index(index)\n probability_value = probability_values.item()\n\n return {'nationality': predicted_nationality, 'probability': probability_value}\n", "_____no_output_____" ], [ "new_surname = input(\"Enter a surname to classify: \")\nclassifier = classifier.to(\"cpu\")\nprediction = predict_nationality(new_surname, classifier, vectorizer)\nprint(\"{} -> {} (p={:0.2f})\".format(new_surname,\n prediction['nationality'],\n prediction['probability']))", "Enter a surname to classify: McMahan\nMcMahan -> Irish (p=0.55)\n" ] ], [ [ "### Top-K Inference", "_____no_output_____" ] ], [ [ "vectorizer.nationality_vocab.lookup_index(8)", "_____no_output_____" ], [ "def predict_topk_nationality(name, classifier, vectorizer, k=5):\n vectorized_name = vectorizer.vectorize(name)\n vectorized_name = torch.tensor(vectorized_name).view(1, -1)\n prediction_vector = classifier(vectorized_name, apply_softmax=True)\n probability_values, indices = torch.topk(prediction_vector, k=k)\n \n # returned size is 1,k\n probability_values = probability_values.detach().numpy()[0]\n indices = indices.detach().numpy()[0]\n \n results = []\n for prob_value, index in zip(probability_values, indices):\n nationality = vectorizer.nationality_vocab.lookup_index(index)\n results.append({'nationality': nationality, \n 'probability': prob_value})\n \n return results\n\n\nnew_surname = input(\"Enter a surname to classify: \")\nclassifier = classifier.to(\"cpu\")\n\nk = int(input(\"How many of the top predictions to see? \"))\nif k > len(vectorizer.nationality_vocab):\n print(\"Sorry! That's more than the # of nationalities we have.. defaulting you to max size :)\")\n k = len(vectorizer.nationality_vocab)\n \npredictions = predict_topk_nationality(new_surname, classifier, vectorizer, k=k)\n\nprint(\"Top {} predictions:\".format(k))\nprint(\"===================\")\nfor prediction in predictions:\n print(\"{} -> {} (p={:0.2f})\".format(new_surname,\n prediction['nationality'],\n prediction['probability']))", "Enter a surname to classify: McMahan\nHow many of the top predictions to see? 5\nTop 5 predictions:\n===================\nMcMahan -> Irish (p=0.55)\nMcMahan -> Scottish (p=0.21)\nMcMahan -> Czech (p=0.05)\nMcMahan -> German (p=0.04)\nMcMahan -> English (p=0.03)\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
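The surname-classifier record above relies on two small mechanics: a running average of per-batch loss/accuracy, and torch.topk for top-k nationality predictions. Below is a minimal sketch of just those two mechanics on toy values; the tensor is a stand-in for illustration, not the notebook's classifier output.

```python
import torch

# Running-average bookkeeping: after batch i (0-indexed), `running_loss`
# equals the mean of the first i+1 per-batch losses.
running_loss = 0.0
for batch_index, loss_t in enumerate([0.9, 0.7, 0.5]):
    running_loss += (loss_t - running_loss) / (batch_index + 1)
print(round(running_loss, 4))  # 0.7 -> the mean of the three batch losses

# Top-k inference: torch.topk returns the k largest values and their indices.
prediction_vector = torch.softmax(torch.tensor([[2.0, 0.5, 1.0, -1.0]]), dim=1)
probability_values, indices = torch.topk(prediction_vector, k=2)
print(indices.tolist())  # [[0, 2]] -> the two most probable classes
```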
e7d81e44d1797e6873a613ffe08b9ba7f72aeb03
65781
ipynb
Jupyter Notebook
proto_two.ipynb
sarveshbhatnagar/PCL_DETECTION
bb62fdf60d13a3f29930ac6aeb56abcb377a827a
[ "MIT" ]
1
2022-02-27T19:59:09.000Z
2022-02-27T19:59:09.000Z
proto_two.ipynb
sarveshbhatnagar/PCL_DETECTION
bb62fdf60d13a3f29930ac6aeb56abcb377a827a
[ "MIT" ]
null
null
null
proto_two.ipynb
sarveshbhatnagar/PCL_DETECTION
bb62fdf60d13a3f29930ac6aeb56abcb377a827a
[ "MIT" ]
null
null
null
41.036182
504
0.501452
[ [ [ "# File will include the working.\n# Made by Sarvesh Bhatnagar\n# dontpatronizeme\nfrom dont_patronize_me import DontPatronizeMe\n\n# Feature\nimport feature.basicFeatures as bf\nimport feature.makeWordVector as mwv\n\n# Preprocessing\nimport preprocessing.basicPreProcessing as bp\n\n# Model\nimport models.deepModel as dm\n\n# Misc for model training.\nfrom tensorflow import keras\nimport numpy as np\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom imblearn.over_sampling import RandomOverSampler\nfrom sklearn.model_selection import train_test_split\nimport contractions\n\n\n# Scores\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import precision_score\nfrom sklearn.metrics import recall_score\n\nimport nltk\nfrom collections import Counter\n\nimport feature.makeWordVector as mwv\n\n\ndef ready_data(X, y):\n X = np.array(X)\n X = X.reshape(-1, 1)\n x_rus = X\n y_rus = y\n x_rus = [item[0] for item in x_rus]\n x_rus = np.array(x_rus).astype(np.float32)\n return x_rus, y_rus\n\n\ndef contract_words(text):\n \"\"\"\n Removes Contractations from text. i.e. are'nt -> are not\n \"\"\"\n return contractions.fix(text)\n\n\ndef preprocess_text(text):\n \"\"\"\n Should return a list of words\n \"\"\"\n text = str(text)\n text = contract_words(text)\n text = text.lower()\n text = text.replace('\"', \"\").replace(\n \",\", \"\").replace(\"'\", \"\")\n return text.split()\n\ndef get_word_size_embeddings(text_list, size):\n \"\"\"\n Returns word size embeddings\n \"\"\"\n embeddings = np.zeros((size,))\n ind = len(text_list) if len(text_list) < size else size \n for i in range(0, ind):\n embeddings[i] = len(text_list[i]) + 30\n return embeddings\n\ndef get_tags(text):\n tags_p = nltk.pos_tag(text)\n return [i[1] for i in tags_p]\n\ndef get_most_common_tags(text):\n tags_p = nltk.pos_tag(text)\n tags_p = [i[1] for i in tags_p]\n tags_p = list(Counter(tags_p).most_common(3))\n tags_p = [i[0] for i in tags_p]\n return tags_p\n\ndef get_most_common_words(text):\n stopwords = {\"i\", \"the\", \"and\", \"or\", \"a\", \"an\", \"is\", \"are\", \"was\", \"were\", \"be\", \"been\", \"am\", \"me\", \"my\"}\n text = [i for i in text if i not in stopwords]\n words = list(Counter(text).most_common(3))\n words = [i[0] for i in words]\n return words\n\ndef combine(x,y):\n z = []\n for i in range(len(x)):\n z.append(np.concatenate((x[i],y[i])))\n return z\n\n", "_____no_output_____" ], [ "dpm = DontPatronizeMe('dataset', 'dontpatronizeme_pcl.tsv')\ndpm.load_task1()\ndata = dpm.train_task1_df", "_____no_output_____" ], [ "process = bp.BasicPreProcessing()\ndata['text_split'] = data['text'].apply(preprocess_text)\nwv = mwv.Word2VecModelTrainer().load_trained(\"word2vec.wordvectors\")\nbasic_features = bf.BasicFeatures()\ndata['embeddings'] = data['text_split'].apply(\n basic_features.add_vectors, wv=wv)", "_____no_output_____" ], [ "data[\"text_feature\"] = data['text_split'].apply(\n basic_features.get_text_feature)\ndata[\"embeddings_feature\"] = data['text_feature'].apply(\n basic_features.add_vectors_multiple, wv=wv)\n\ndata[\"text_feature_v2\"] = data['text_split'].apply(basic_features.get_text_feature, n=[1,5])\n\ndata[\"embeddings_feature_v2\"] = data['text_feature_v2'].apply(basic_features.add_vectors_multiple, wv=wv)\n\ndata[\"text_feature_v3\"] = data['text_split'].apply(basic_features.get_text_feature, n=[3,7])\n\ndata[\"embeddings_feature_v3\"] = data['text_feature_v3'].apply(basic_features.add_vectors_multiple, wv=wv)\n\ndata[\"word_size_embeddings\"] = 
data['text_split'].apply(get_word_size_embeddings, size=100)\n\ndata[\"tags_p\"] = data[\"text_split\"].apply(get_tags)\ndata[\"most_common_words\"] = data[\"text_split\"].apply(get_most_common_words)\ndata[\"most_common_tags\"] = data[\"text_split\"].apply(get_most_common_tags)\nwvec = mwv.Word2VecModelTrainer(sentences=data[\"tags_p\"], path=\"pos_tags.wordvectors\")\n# wvec.train(size=10)\nwvp = wvec.load_trained(\"pos_tags.wordvectors\")\n\ndata[\"pos_embeddings\"] = data[\"tags_p\"].apply(basic_features.add_vectors, wv=wvp)\ndata[\"most_common_tags_embeddings\"] = data[\"most_common_tags\"].apply(basic_features.add_vectors, wv=wvp)\ndata[\"most_common_words_embeddings\"] = data[\"most_common_words\"].apply(basic_features.add_vectors, wv=wv)\n\n\n", "_____no_output_____" ], [ "def combine(x,y):\n z = []\n for i in range(len(x)):\n if(type(y[i]) == int):\n print(\"INT\", i)\n z.append(np.concatenate((x[i],y[i])))\n return z", "_____no_output_____" ], [ "data[\"text_split\"][8639]", "_____no_output_____" ], [ "# ll = combine(data[\"embeddings_feature\"],data[\"pos_embeddings\"])\n# data[\"embeddings_feature\"]\n# data[\"pos_embeddings\"]", "_____no_output_____" ], [ "dpm = DontPatronizeMe('dataset', 'dontpatronizeme_pcl.tsv')\ndpm.load_task1()\ndata = dpm.train_task1_df\n# Deep Learning Pipeline.\ndef task_one_data(data, labels=True):\n # Load the data.\n \n process = bp.BasicPreProcessing()\n data['text_split'] = data['text'].apply(preprocess_text)\n\n # Train WordVectors. Only run once.\n # mwv.Word2VecModelTrainer(\n # sentences=data['text_split'], path=\"dataword.wordvectors\").train()\n\n # Load the trained word vectors.\n wv = mwv.Word2VecModelTrainer().load_trained(\"word2vec.wordvectors\")\n\n # Make Embedding Columns for each text split.\n basic_features = bf.BasicFeatures()\n data['embeddings'] = data['text_split'].apply(\n basic_features.add_vectors, wv=wv)\n\n # NOTE NEW FEATURE\n data[\"text_feature\"] = data['text_split'].apply(\n basic_features.get_text_feature)\n \n \n data[\"embeddings_feature\"] = data['text_feature'].apply(\n basic_features.add_vectors_multiple, wv=wv)\n\n data[\"text_feature_v2\"] = data['text_split'].apply(basic_features.get_text_feature, n=[1,5])\n\n data[\"embeddings_feature_v2\"] = data['text_feature_v2'].apply(basic_features.add_vectors_multiple, wv=wv)\n\n data[\"text_feature_v3\"] = data['text_split'].apply(basic_features.get_text_feature, n=[3,7])\n\n data[\"embeddings_feature_v3\"] = data['text_feature_v3'].apply(basic_features.add_vectors_multiple, wv=wv)\n\n data[\"word_size_embeddings\"] = data['text_split'].apply(get_word_size_embeddings, size=100)\n \n data[\"tags_p\"] = data[\"text_split\"].apply(get_tags)\n data[\"most_common_words\"] = data[\"text_split\"].apply(get_most_common_words)\n data[\"most_common_tags\"] = data[\"text_split\"].apply(get_most_common_tags)\n wvec = mwv.Word2VecModelTrainer(sentences=data[\"tags_p\"], path=\"pos_tags.wordvectors\")\n # wvec.train(size=10)\n wvp = wvec.load_trained(\"pos_tags.wordvectors\")\n\n data[\"pos_embeddings\"] = data[\"tags_p\"].apply(basic_features.add_vectors, wv=wvp)\n data[\"most_common_tags_embeddings\"] = data[\"most_common_tags\"].apply(basic_features.add_vectors, wv=wvp)\n data[\"most_common_words_embeddings\"] = data[\"most_common_words\"].apply(basic_features.add_vectors, wv=wv)\n\n\n ll = combine(data[\"embeddings_feature\"],data[\"pos_embeddings\"])\n # ll = combine(ll,data[\"most_common_tags_embeddings\"])\n # ll = combine(ll,data[\"most_common_words_embeddings\"])\n # ll = 
combine(ll,data[\"embeddings_feature_v2\"])\n ll = combine(ll,data[\"word_size_embeddings\"])\n # ll = combine(ll,data[\"embeddings_feature_v3\"])\n # ll = combine(ll,data[\"embeddings\"])\n\n\n data[\"combined\"] = ll\n\n rus = RandomUnderSampler(random_state=42)\n X_train = data[\"combined\"]\n if(labels):\n y_train = data[\"label\"]\n return X_train, y_train\n return X_train", "_____no_output_____" ], [ "X_train, y_train = task_one_data(data)", "_____no_output_____" ], [ "y_train[1].shape", "_____no_output_____" ], [ "# rus = RandomUnderSampler(random_state=42)\n# X_train, X_test, y_train, y_test = train_test_split(\n# data['combined'], data['label'], stratify=data['label'], test_size=0.2, random_state=1)\n", "_____no_output_____" ], [ "# data[\"combined\"][3].shape", "_____no_output_____" ], [ "nn_model = dm.NNModels(input_shape=X_train[0].shape,)\n# TODO data[\"combined\"][0].shape\n\n\nrus = RandomOverSampler(random_state=42,sampling_strategy=1)\nX_train = np.array(X_train)\nX_train = X_train.reshape(-1, 1)\nx_rus, y_rus = rus.fit_resample(X_train, y_train)\nx_rus = [item[0] for item in x_rus]\nx_rus = np.array(x_rus).astype(np.float32)", "_____no_output_____" ], [ "# model.compile(\n# optimizer=keras.optimizers.RMSprop(), # Optimizer\n# # Loss function to minimize\n# loss=keras.losses.SparseCategoricalCrossentropy(),\n# # List of metrics to monitor\n# metrics=[keras.metrics.SparseCategoricalAccuracy()],\n# )\nmodel = nn_model.create_baseline()\nmodel.compile(\n optimizer=keras.optimizers.RMSprop(), # Optimizer\n # Loss function to minimize\n loss=keras.losses.SparseCategoricalCrossentropy(),\n # List of metrics to monitor\n metrics=[keras.metrics.SparseCategoricalAccuracy()],\n)", "_____no_output_____" ], [ "# X_test_n, y_test_n = ready_data(X_test, y_test)", "_____no_output_____" ], [ "# history = model.fit(x_rus, y_rus, batch_size=64, epochs=250, validation_data=(X_test_n, y_test_n))\nhistory = model.fit(x_rus, y_rus, batch_size=64, epochs=250)", "Epoch 1/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.6640 - sparse_categorical_accuracy: 0.6014\nEpoch 2/250\n297/297 [==============================] - 0s 2ms/step - loss: 0.6288 - sparse_categorical_accuracy: 0.6493\nEpoch 3/250\n297/297 [==============================] - 0s 2ms/step - loss: 0.6188 - sparse_categorical_accuracy: 0.6612\nEpoch 4/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.6126 - sparse_categorical_accuracy: 0.6678\nEpoch 5/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.6100 - sparse_categorical_accuracy: 0.6679\nEpoch 6/250\n297/297 [==============================] - 1s 3ms/step - loss: 0.6039 - sparse_categorical_accuracy: 0.6769\nEpoch 7/250\n297/297 [==============================] - 1s 3ms/step - loss: 0.6028 - sparse_categorical_accuracy: 0.6726\nEpoch 8/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.5976 - sparse_categorical_accuracy: 0.6824\nEpoch 9/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.5951 - sparse_categorical_accuracy: 0.6857\nEpoch 10/250\n297/297 [==============================] - 1s 3ms/step - loss: 0.5934 - sparse_categorical_accuracy: 0.6872\nEpoch 11/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.5941 - sparse_categorical_accuracy: 0.6870\nEpoch 12/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.5915 - sparse_categorical_accuracy: 0.6876\nEpoch 13/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.5905 - 
sparse_categorical_accuracy: 0.6916\n[training log condensed: epochs 14-238 repeat the same per-epoch format; loss decreases from ~0.59 to ~0.37 and sparse_categorical_accuracy rises from ~0.69 to ~0.86]\nEpoch 239/250\n297/297 [==============================] - 1s 2ms/step - 
loss: 0.3861 - sparse_categorical_accuracy: 0.8558\nEpoch 240/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3921 - sparse_categorical_accuracy: 0.8516\nEpoch 241/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3847 - sparse_categorical_accuracy: 0.8544\nEpoch 242/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3933 - sparse_categorical_accuracy: 0.8477\nEpoch 243/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3815 - sparse_categorical_accuracy: 0.8587\nEpoch 244/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3752 - sparse_categorical_accuracy: 0.8601\nEpoch 245/250\n297/297 [==============================] - 1s 3ms/step - loss: 0.3792 - sparse_categorical_accuracy: 0.8600\nEpoch 246/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3797 - sparse_categorical_accuracy: 0.8578\nEpoch 247/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3874 - sparse_categorical_accuracy: 0.8555\nEpoch 248/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3812 - sparse_categorical_accuracy: 0.8571\nEpoch 249/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3873 - sparse_categorical_accuracy: 0.8571\nEpoch 250/250\n297/297 [==============================] - 1s 2ms/step - loss: 0.3692 - sparse_categorical_accuracy: 0.8655\n" ], [ "# dpm.load_test()", "_____no_output_____" ], [ "# dpm.load_test()\n# test_data = dpm.test_set\n# X_test = task_one_data(data, labels=False)", "_____no_output_____" ], [ "# Prepare testing data.\nX_test_n, y_test_n = ready_data(X_train, y_train)\n\npredictions = model.predict(X_test_n)\npredictions = [item.argmax() for item in predictions]\ny_test_n = list(y_test_n)\nprint(\"Accuracy\", accuracy_score(y_test_n, predictions))\nprint(\"Precision\", precision_score(y_test_n, predictions, average=None))\nprint(\"Recall\", recall_score(\n y_test_n, predictions, labels=[0, 1], average=None))", "Accuracy 0.8518483140701117\nPrecision [0.99059057 0.3833612 ]\nRecall [0.8443436 0.92346425]\n" ], [ "# helper function to save predictions to an output file\ndef labels2file(p, outf_path):\n\twith open(outf_path,'w') as outf:\n\t\tfor pi in p:\n\t\t\toutf.write(','.join([str(k) for k in pi])+'\\n')", "_____no_output_____" ], [ "import os", "_____no_output_____" ], [ "predictions = np.array(predictions).reshape(-1,1)", "_____no_output_____" ], [ "labels2file(predictions, os.path.join('res/', 'task1.txt'))", "_____no_output_____" ], [ "# 250 epochs\n# ll = combine(data[\"embeddings_feature\"],data[\"pos_embeddings\"])\n# ll = combine(ll,data[\"word_size_embeddings\"])\n# pos vec size = 10\n# Accuracy 0.8414517669531996\n# Precision [0.93634841 0.28052805]\n# Recall [0.88496042 0.42713568]", "_____no_output_____" ], [ "# 250 epochs\n# ll = combine(data[\"embeddings_feature\"],data[\"pos_embeddings\"])\n# ll = combine(ll,data[\"most_common_words_embeddings\"])\n# ll = combine(ll,data[\"word_size_embeddings\"])\n# pos vec size = 10\n# Accuracy 0.8481375358166189\n# Precision [0.93443526 0.28673835]\n# Recall [0.89498681 0.40201005]", "_____no_output_____" ], [ "\n# 500 epochs\n# ll = combine(data[\"embeddings_feature\"],data[\"pos_embeddings\"])\n# ll = combine(ll,data[\"most_common_tags_embeddings\"])\n# ll = combine(ll,data[\"most_common_words_embeddings\"])\n# ll = combine(ll,data[\"embeddings_feature_v2\"])\n# Accuracy 0.8046800382043935\n# Precision [0.94813028 0.25917431]\n# Recall [0.82955145 0.5678392 ]", 
"_____no_output_____" ], [ "# get_text_feature : n=[3,7]\n# ll = combine(data[\"embeddings_feature\"],data[\"pos_embeddings\"])\n# ll = combine(ll,data[\"most_common_tags_embeddings\"])\n# ll = combine(ll,data[\"most_common_words_embeddings\"])\n# epochs = 250\n# tag size = 50\n# tags == words == 3\n# Accuracy 0.8357211079274116\n# Precision [0.9408755 0.28358209]\n# Recall [0.87335092 0.47738693]", "_____no_output_____" ], [ "# embeddings feature gives high recall for both\n# low precision for NPCL.\n# ll = combine(data[\"embeddings_feature\"],data[\"pos_embeddings\"])\n# ll = combine(ll,data[\"most_common_tags_embeddings\"])\n# ll = combine(ll,data[\"most_common_words_embeddings\"])\n# tags == words == 3 most common.\n# tagsize = 100\n# Accuracy 0.720152817574021\n# Precision [0.96254417 0.21502209]\n# Recall [0.71873351 0.73366834]", "_____no_output_____" ] ], [ [ "# Task 2", "_____no_output_____" ], [ "dpm.load_task2()", "_____no_output_____" ] ], [ [ "dpm.load_task2()", "Map of label to numerical label:\n{'Unbalanced_power_relations': 0, 'Shallow_solution': 1, 'Presupposition': 2, 'Authority_voice': 3, 'Metaphors': 4, 'Compassion': 5, 'The_poorer_the_merrier': 6}\n" ], [ "data = dpm.train_task2_df", "_____no_output_____" ], [ "data[\"keyword\"].value_counts().keys()", "_____no_output_____" ], [ "data[\"text\"][0]", "_____no_output_____" ], [ "def get_text_for(data,label = 0):\n \"\"\"\n Returns text that corresponds to the label as a single string.\n \"\"\"\n text = []\n for i in range(len(data[\"text\"])):\n if data[\"label\"][i][label] == 1:\n text.append(str(label))\n z = data[\"text\"][i]\n z = z.split(\" \")\n for i in z:\n text.append(i)\n text.append(str(label))\n return \" \".join(text)", "_____no_output_____" ], [ "text_cat = []\n\nfor i in range(7):\n text_cat.append(get_text_for(data,i).split(\" \"))", "_____no_output_____" ], [ "# text_cat = \" \".join(text_cat)\n", "_____no_output_____" ], [ "wvec = mwv.Word2VecModelTrainer(sentences=text_cat, path=\"label.wordvectors\")\nwvec.train(size=50)\nwvp = wvec.load_trained(\"label.wordvectors\")", "_____no_output_____" ], [ "z = [wvp.distance(\"we\",\"0\"), wvp.distance(\"we\",\"1\"), wvp.distance(\"we\",\"2\"), wvp.distance(\"we\",\"3\"), wvp.distance(\"we\",\"4\"), wvp.distance(\"we\",\"5\"), wvp.distance(\"we\",\"6\")]", "_____no_output_____" ], [ "z = np.array(z)", "_____no_output_____" ], [ "z.mean()", "_____no_output_____" ], [ "np.where(z<z.mean())", "_____no_output_____" ], [ "z.index(min(z))", "_____no_output_____" ], [ "def predict(text, pl=84):\n \"\"\"\n Returns a list of predictions for the text.\n \"\"\"\n text = text.split(\" \")\n pred = []\n count_dict = {0:0,1:0,2:0,3:0,4:0,5:0,6:0}\n labels = [\"0\",\"1\",\"2\",\"3\",\"4\",\"5\",\"6\"]\n for i in text:\n z = [wvp.distance(i,val) for val in labels]\n z = np.array(z)\n res = np.where(z<z.mean())[0]\n for i in res:\n count_dict[i] += 1\n # return count_dict\n \n res= np.array(list(count_dict.values()))\n nres = np.zeros(7)\n # return res\n nres[np.where(res>np.percentile(res,pl))] = 1\n return nres", "_____no_output_____" ], [ "z = predict(data[\"text\"][18],pl=82)", "_____no_output_____" ], [ "z", "_____no_output_____" ], [ "zx = data[\"label\"][0] == z", "_____no_output_____" ], [ "len(np.where(zx == False)[0])", "_____no_output_____" ], [ "def calculate_hit(y_true, y_pred):\n hits = 0\n misses = 0\n for i in range(len(y_true)):\n res = y_true[i] == y_pred[i]\n if all(res):\n hits += 1\n else:\n misses += 1\n\n return hits, misses\n\n ", "_____no_output_____" 
], [ "def calculate_hit_partial(y_true, y_pred):\n hits = 0\n misses = 0\n for i in range(len(y_true)):\n res = y_true[i] == y_pred[i]\n \n misses += len(np.where(res == False)[0])\n\n return misses", "_____no_output_____" ], [ "losses =[]\nfor pl in range(75, 100):\n y_pred = [predict(data[\"text\"][i],pl=pl) for i in range(len(data[\"text\"]))]\n losses.append((pl,calculate_hit_partial(data[\"label\"],y_pred)))", "_____no_output_____" ], [ "y_pred = [predict(data[\"text\"][i],pl=84) for i in range(len(data[\"text\"]))]", "_____no_output_____" ], [ "y_pred = [i.astype(int) for i in y_pred]", "_____no_output_____" ], [ "labels2file(y_pred, os.path.join('res/', 'task2.txt'))", "_____no_output_____" ], [ "labels2file(dpm.train_task1_df.label.apply(lambda x:[x]).tolist(), os.path.join('ref/', 'task1.txt'))", "_____no_output_____" ], [ "labels2file(dpm.train_task2_df.label.tolist(), os.path.join('ref/', 'task2.txt'))", "_____no_output_____" ], [ "! python evaluation.py . .", "_____no_output_____" ], [ "! cat scores.txt", "task1_precision:0.3833612040133779\ntask1_recall:0.9234642497482377\ntask1_f1:0.5418020679468242\ntask2_unb:0.459119496855346\ntask2_sha:0.4518518518518519\ntask2_pre:0.6141078838174274\ntask2_aut:0.5051546391752577\ntask2_met:0.39097744360902253\ntask2_com:0.04583333333333334\ntask2_the:0.29032258064516125\ntask2_avg:0.3939096041839143\n" ], [ "! zip submission.zip task1.txt task2.txt", "\tzip warning: name not matched: task2.txt\nupdating: task1.txt (deflated 92%)\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
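The proto_two.ipynb record above concatenates several per-document embedding blocks into one feature matrix and rebalances the labels with imblearn's RandomOverSampler before fitting the Keras model. A minimal sketch of that combine-then-oversample step on synthetic arrays; the dimensions and label split below are illustrative, not the notebook's.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler

rng = np.random.default_rng(0)
n_docs = 20
text_emb = rng.normal(size=(n_docs, 50))                          # e.g. averaged word vectors
pos_emb = rng.normal(size=(n_docs, 10))                           # e.g. averaged POS-tag vectors
word_len = rng.integers(1, 15, size=(n_docs, 100)).astype(float)  # word-length features

# "combine": concatenate the per-document feature blocks column-wise
X = np.concatenate([text_emb, pos_emb, word_len], axis=1)         # shape (20, 160)
y = np.array([0] * 16 + [1] * 4)                                  # imbalanced labels

# oversample the minority class to a 1:1 ratio, as the record does
ros = RandomOverSampler(random_state=42, sampling_strategy=1.0)
X_res, y_res = ros.fit_resample(X, y)
print(X_res.shape, np.bincount(y_res))                            # (32, 160) [16 16]
```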
e7d821b02dadf0a0af0f5cde9698cbbeb8059e7c
2571
ipynb
Jupyter Notebook
Greedy/0202/452. Minimum Number of Arrows to Burst Balloons.ipynb
YuHe0108/Leetcode
90d904dde125dd35ee256a7f383961786f1ada5d
[ "Apache-2.0" ]
1
2020-08-05T11:47:47.000Z
2020-08-05T11:47:47.000Z
Greedy/0202/452. Minimum Number of Arrows to Burst Balloons.ipynb
YuHe0108/LeetCode
b9e5de69b4e4d794aff89497624f558343e362ad
[ "Apache-2.0" ]
null
null
null
Greedy/0202/452. Minimum Number of Arrows to Burst Balloons.ipynb
YuHe0108/LeetCode
b9e5de69b4e4d794aff89497624f558343e362ad
[ "Apache-2.0" ]
null
null
null
21.974359
73
0.437184
[ [ [ "from typing import List\n\nclass Solution:\n def findMinArrowShots(self, points: List[List[int]]) -> int:\n if not points or not points[0]:\n return 0\n points.sort(key=lambda x:x[1])\n arrow = points[0][-1]\n cnt = 1\n for i in range(1, len(points)):\n l, r = points[i]\n if l <= arrow:\n continue\n arrow = r\n cnt += 1\n return cnt", "_____no_output_____" ], [ "from typing import List\n\nclass Solution:\n def findMinArrowShots(self, points: List[List[int]]) -> int:\n if not points or not points[0]:\n return 0\n points.sort(key=lambda x:x[1])\n arrow = points[0][-1]\n cnt = 1\n for i in range(1, len(points)):\n l, r = points[i]\n if l <= arrow <= r:\n continue\n arrow = r\n cnt += 1\n \n return cnt", "_____no_output_____" ], [ "solution = Solution()\nsolution.findMinArrowShots(points = [[1,2],[3,4],[5,6],[7,8]])", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
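The LeetCode 452 record above sorts the balloons by right endpoint and only shoots a new arrow when the next balloon starts past the previous arrow's position. A standalone check of that greedy rule; the test cases are the problem's usual samples.

```python
from typing import List

def find_min_arrow_shots(points: List[List[int]]) -> int:
    if not points:
        return 0
    points.sort(key=lambda p: p[1])      # sort by right endpoint
    arrow, count = points[0][1], 1       # first arrow at the earliest right end
    for left, right in points[1:]:
        if left <= arrow:                # balloon already burst by the last arrow
            continue
        arrow, count = right, count + 1  # shoot a new arrow at this balloon's right end
    return count

assert find_min_arrow_shots([[10, 16], [2, 8], [1, 6], [7, 12]]) == 2
assert find_min_arrow_shots([[1, 2], [3, 4], [5, 6], [7, 8]]) == 4
assert find_min_arrow_shots([[1, 2], [2, 3], [3, 4], [4, 5]]) == 2
```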
e7d82380a9ce1857a587aeba8b906cc5eb1f84eb
10911
ipynb
Jupyter Notebook
Bootstrapping_augmentation.ipynb
sytseng/CS109b_Final_Project_Spring_2019
776b4caf51390e90e74410b3f435391864e2bd00
[ "MIT" ]
null
null
null
Bootstrapping_augmentation.ipynb
sytseng/CS109b_Final_Project_Spring_2019
776b4caf51390e90e74410b3f435391864e2bd00
[ "MIT" ]
null
null
null
Bootstrapping_augmentation.ipynb
sytseng/CS109b_Final_Project_Spring_2019
776b4caf51390e90e74410b3f435391864e2bd00
[ "MIT" ]
null
null
null
33.572308
177
0.52195
[ [ [ "%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport pickle\nimport matplotlib.patches\nimport seaborn as sns\n\n# scikit-learn bootstrap\nfrom sklearn.utils import resample", "_____no_output_____" ], [ "#PLEASE RUN THIS CELL \nimport requests\nfrom IPython.core.display import HTML\nstyles = requests.get(\"https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css\").text\nHTML(styles)", "_____no_output_____" ] ], [ [ "### Pre-processing mouse data\nLoad the clusters obtained from previous section, so that we can bootsrap on them.", "_____no_output_____" ] ], [ [ "### Load DataFrame of log-transfromed averaged time-series for each cluster\n# (1) Healthy Group\ndf_log_healthy = pd.read_pickle(\"data/df_log_healthy.pkl\")\n# (2) IBD Group\ndf_log_ibd = pd.read_pickle(\"data/df_log_ibd.pkl\")\n\n### Load cluster memberships for every OTU\n# (1) Healthy Group\ntree_healthy = pd.read_pickle( \"data/OTU_dm_kmclusters.p\")\nNMF_healthy = pd.read_pickle( \"data/OTU_NMF_healthy.p\")\ntime_healthy = pd.read_pickle( \"data/OTU_time_healthy.p\")\n# (2) IBD Group\ntree_ibd = pd.read_pickle( \"data/OTU_dm_kmclusters_IBD.p\" )\nNMF_ibd = pd.read_pickle( \"data/OTU_NMF_ibd.p\" )\ntime_ibd = pd.read_pickle( \"data/OTU_time_ibd.p\")", "_____no_output_____" ] ], [ [ "## Bootstrapping\n### Step A. Subset df by cluster Membership\nRecall that we have three methods to generate clusters: \n- Tree based: 3 clusters\n- NMF correlation: 9 clusters\n- Time correlation: 5 clusters\n\nAnd we have loaded the cluster membership for every OTU above. In this section, we will subset the OTU into those different clusters.", "_____no_output_____" ] ], [ [ "### Function to subset the dataframe by cluster membership\ndef subset_df_by_membership(df, tree, NMF, time):\n # get the total number of otu and time points\n (otu_length,time_length) = df.shape\n\n # add the membership as the last column\n df['tree']=tree\n df['NMF']=NMF\n df['time']=time\n \n # loop through 3 different memberships\n methods = ['tree', 'NMF', 'time']\n method_list = list()\n ###########1##############\n # method_list[0]: 'tree' #\n # method_list[1]: 'NMF' #\n # method_list[2]: 'time' #\n ##########################\n for method in methods:\n # loop through all clusters\n culsters = list(df[method].unique())\n df_list = list()\n #########################2###########################\n # for example: #\n # df_list[0]: OTU with membership as first clusters #\n # ... #\n #####################################################\n for cluster in culsters:\n df_selected = df[df[method] == cluster].iloc[:,:time_length]\n df_list.append(df_selected) #1#\n method_list.append(df_list) #2#\n \n return method_list", "_____no_output_____" ], [ "### Split the DataFrame into clusters based on their Membership, pack them up into the list\nmethod_list_healthy = subset_df_by_membership(df_log_healthy, tree_healthy, NMF_healthy, time_healthy)\nmethod_list_ibd = subset_df_by_membership(df_log_ibd, tree_ibd, NMF_ibd, time_ibd)", "_____no_output_____" ] ], [ [ "### Step B. 
Bootstrap to generate more mice data\nNow that we have the clusters, we do bootstrap:\n- For each single sample step, within every cluster, we randomly choose 30% of the OTUs, took the average of them to generate one time series representing that cluster.\n- We repeated the sampling for 30 times, to generate the 30 mice.", "_____no_output_____" ] ], [ [ "### Function to Bootstrap:\ndef bootrapping(method_list, mice_count):\n methods = list()\n for method in range(3):\n mice = list()\n for time in range(mice_count):\n clusters = list()\n for cluster in range(len(method_list[method])):\n one_sample = method_list[method][cluster].sample(frac=0.3, replace=True)\n log_mean = one_sample[:].mean(axis=0)\n # inverse natural log transform\n real_mean = np.exp(log_mean)\n clusters.append(real_mean)\n mice.append(np.array(clusters))\n methods.append(mice)\n \n tree = methods[0]\n NMF = methods[1]\n time = methods[2]\n return tree, NMF, time", "_____no_output_____" ], [ "### Generate 30 mice for both group\nmice_count=30\n# (1) Healthy Mice\ntree_healthy_30_mice, NMF_healthy_30_mice, time_healthy_30_mice = bootrapping(method_list_healthy, mice_count)\n# (2) IBD Mice\ntree_ibd_30_mice, NMF_ibd_30_mice, time_ibd_30_mice = bootrapping(method_list_ibd, mice_count)\n\n### save the mice as a pickle file\n# (1) Healthy Mice\npickle.dump(tree_healthy_30_mice, open( \"data/30_mice_tree_healthy.p\", \"wb\" ) )\npickle.dump(NMF_healthy_30_mice, open( \"data/30_mice_NMF_healthy.p\", \"wb\" ) )\npickle.dump(time_healthy_30_mice, open( \"data/30_mice_time_healthy.p\", \"wb\" ) )\n# (2) IBD Mice\npickle.dump(tree_ibd_30_mice, open( \"data/30_mice_tree_ibd.p\", \"wb\" ) )\npickle.dump(NMF_ibd_30_mice, open( \"data/30_mice_NMF_ibd.p\", \"wb\" ) )\npickle.dump(time_ibd_30_mice, open( \"data/30_mice_time_ibd.p\", \"wb\" ) )", "_____no_output_____" ] ], [ [ "### Data Structure Example\nThese are the simulated absolute values (not the log-transformed, they have already been transformed back).", "_____no_output_____" ] ], [ [ "#######################################################################\n# For example: tree_healthy_30_mice #\n# tree_healthy_30_mice: the first mice data #\n# tree_healthy_30_mice[0]: the first cluster in the first mice data #\n# tree_healthy_30_mice[0][0]: the 75 time points of the first cluster #\n#######################################################################\nprint('the nunmber of simulated mice data is: ', len(tree_healthy_30_mice))\nprint('within each mouse, the number of the tree_based clusters is: ', len(tree_healthy_30_mice[0]))\nprint('for each cluster, the number of the time points is: ', len(tree_healthy_30_mice[0][0]))", "the nunmber of simulated mice data is: 30\nwithin each mouse, the number of the tree_based clusters is: 3\nfor each cluster, the number of the time points is: 75\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e7d84caa738df8556479eb2a771ff0383d056ece
187,179
ipynb
Jupyter Notebook
notebooks/autoencoders/MNIST10/one_anomaly_detector.ipynb
tayden/NoveltyDetection
797de85b7543e4d0f118295b31a36ec17126d459
[ "MIT" ]
null
null
null
notebooks/autoencoders/MNIST10/one_anomaly_detector.ipynb
tayden/NoveltyDetection
797de85b7543e4d0f118295b31a36ec17126d459
[ "MIT" ]
null
null
null
notebooks/autoencoders/MNIST10/one_anomaly_detector.ipynb
tayden/NoveltyDetection
797de85b7543e4d0f118295b31a36ec17126d459
[ "MIT" ]
null
null
null
419.683857
34,676
0.933171
[ [ [ "from keras.datasets import mnist\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef show_10_images(data): \n n = 10\n plt.figure(figsize=(20, 2))\n for i in range(n):\n ax = plt.subplot(1, n, i+1)\n plt.imshow(data[i].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n plt.show()\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\nx_train = x_train.astype('float32') / 255.\nx_test = x_test.astype('float32') / 255.\nx_train = np.reshape(x_train, (len(x_train), 28, 28, 1))\nx_test = np.reshape(x_test, (len(x_test), 28, 28, 1))", "/home/tadenoud/.virtualenvs/ml/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n" ], [ "from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Dense, Reshape\nfrom keras.regularizers import l1\nfrom keras.models import Model\n\nn_hidden = 256\ninput_img = Input(shape=(28, 28, 1)) # adapt this if using `channels_first` image data format\n\nx = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)\nx = MaxPooling2D((2, 2), padding='same')(x)\nx = Conv2D(2, (3, 3), activation='relu', padding='same')(x)\nx = MaxPooling2D((2, 2), padding='same')(x)\n\n# at this point the representation is (7, 7, 32)\n\nx = Flatten()(x)\nencoded = Dense(n_hidden, activity_regularizer=l1(10e-8))(x)\n\n# representation is now size n_hidden\n\nx = Dense(7*7*32)(encoded)\nx = Reshape((7, 7, 32))(x)\n\nx = Conv2D(2, (3, 3), activation='relu', padding='same')(x)\nx = UpSampling2D((2, 2))(x)\nx = Conv2D(32, (3, 3), activation='relu', padding='same')(x)\nx = UpSampling2D((2, 2))(x)\ndecoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)\n\nautoencoder = Model(input_img, decoded)\nautoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')", "_____no_output_____" ], [ "import os\n\nweight_file = './weights/mnist_autoencoder_all_digits_binary_crossentropy.hd5'\n\nif(os.path.exists(weight_file)):\n autoencoder.load_weights(weight_file)\nelse:\n autoencoder.fit(x_train, x_train,\n epochs=100,\n batch_size=128,\n shuffle=True,\n validation_data=(x_test, x_test),\n callbacks=[])\n\n autoencoder.save_weights(weight_file)", "_____no_output_____" ], [ "from keras.layers import Lambda\nfrom keras.losses import binary_crossentropy\nimport keras.backend as K \n\ninput_ = Input(shape=(28, 28, 1))\npredicted = autoencoder(input_)\nloss = Lambda(lambda x: binary_crossentropy(K.batch_flatten(input_), K.batch_flatten(x)))(predicted)\n\nanomaly_detector = Model(input_, loss)", "_____no_output_____" ], [ "# Ground truth\nshow_10_images(x_test)\n\n# Reconstructed images\nshow_10_images(autoencoder.predict(x_test))", "_____no_output_____" ] ], [ [ "## Fashion MNIST", "_____no_output_____" ] ], [ [ "from keras.datasets import fashion_mnist\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import mean_squared_error\n\n_, (fashion_x_test, _) = fashion_mnist.load_data()\n\nfashion_x_test = fashion_x_test.astype('float32') / 255.\nfashion_x_test = np.reshape(fashion_x_test, (len(x_test), 28, 28, 1))\n\nshow_10_images(fashion_x_test)\nshow_10_images(autoencoder.predict(fashion_x_test))", "_____no_output_____" ], [ "labels = len(x_test) * [0] + len(fashion_x_test) * [1]\ntest_samples = 
np.concatenate((x_test, fashion_x_test))\nlosses = anomaly_detector.predict(test_samples)\nprint(\"AUROC:\", roc_auc_score(labels, losses))", "AUROC: 0.99937089\n" ] ], [ [ "## EMNIST Letters", "_____no_output_____" ] ], [ [ "from torchvision.datasets import EMNIST\n\nemnist_letters = EMNIST('./', \"letters\", train=False, download=True)\nemnist_letters = emnist_letters.test_data.numpy()\nemnist_letters = emnist_letters.astype('float32') / 255.\nemnist_letters = np.swapaxes(emnist_letters, 1, 2)\n\nemnist_letters = np.reshape(emnist_letters, (len(emnist_letters), 28, 28, 1))\n\nshow_10_images(emnist_letters)\nshow_10_images(autoencoder.predict(emnist_letters))", "_____no_output_____" ], [ "labels = len(x_test) * [0] + len(emnist_letters) * [1]\ntest_samples = np.concatenate((x_test, emnist_letters))\nlosses = anomaly_detector.predict(test_samples)\nprint(\"AUROC:\", roc_auc_score(labels, losses))", "AUROC: 0.9604927475961538\n" ] ], [ [ "## Gaussian Noise", "_____no_output_____" ] ], [ [ "mnist_mean = np.mean(x_train)\nmnist_std = np.std(x_train)\ngaussian_data = np.random.normal(mnist_mean, mnist_std, size=(10000, 28, 28, 1))\n\nshow_10_images(gaussian_data)\nshow_10_images(autoencoder.predict(gaussian_data))", "_____no_output_____" ], [ "labels = len(x_test) * [0] + len(gaussian_data) * [1]\ntest_samples = np.concatenate((x_test, gaussian_data))\nlosses = anomaly_detector.predict(test_samples)\nprint(\"AUROC:\", roc_auc_score(labels, losses))", "AUROC: 1.0\n" ] ], [ [ "## Uniform Noise", "_____no_output_____" ] ], [ [ "import math\nb = math.sqrt(3.) * mnist_std\na = -b + mnist_mean\nb += mnist_mean\n\nuniform_data = np.random.uniform(low=a, high=b, size=(10000, 28, 28, 1))\n\nshow_10_images(uniform_data)\nshow_10_images(autoencoder.predict(uniform_data))", "_____no_output_____" ], [ "labels = len(x_test) * [0] + len(uniform_data) * [1]\ntest_samples = np.concatenate((x_test, uniform_data))\nlosses = anomaly_detector.predict(test_samples)\nprint(\"AUROC:\", roc_auc_score(labels, losses))", "AUROC: 1.0\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7d8576c9e23d43f25dbe3cbf7a250e12b5fb401
27,636
ipynb
Jupyter Notebook
Susan/Check River Files.ipynb
SalishSeaCast/analysis
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
[ "Apache-2.0" ]
null
null
null
Susan/Check River Files.ipynb
SalishSeaCast/analysis
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
[ "Apache-2.0" ]
null
null
null
Susan/Check River Files.ipynb
SalishSeaCast/analysis
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
[ "Apache-2.0" ]
null
null
null
130.976303
21,234
0.869192
[ [ [ "# Check River Flows", "_____no_output_____" ] ], [ [ "from __future__ import division\n\nimport matplotlib.pyplot as plt\nimport netCDF4 as nc\nimport numpy as np\n\nfrom salishsea_tools import nc_tools\n\n%matplotlib inline", "_____no_output_____" ], [ "def find_points(flow):\n for i in range(390,435):\n for j in range(280,398):\n if flow1[0,i,j] > 0:\n print i,j, lat[i,j], lon[i,j], flow[0,i,j]", "_____no_output_____" ], [ "grid = nc.Dataset('/ocean/sallen/allen/research/MEOPAR/NEMO-forcing/grid/bathy_meter_SalishSea2.nc')\nlat = grid.variables['nav_lat'][:,:]\nlon = grid.variables['nav_lon'][:,:]\ndepth = grid.variables['Bathymetry'][:]", "_____no_output_____" ], [ "river1 = nc.Dataset('/ocean/sallen/allen/research/MEOPAR/Rivers/RFraserCElse_y2015m05d15.nc')\nriver2 = nc.Dataset('/ocean/sallen/allen/research/MEOPAR/Rivers/RFraserCElse_y2015m07d01.nc')\nriver3 = nc.Dataset('/ocean/sallen/allen/research/MEOPAR/Rivers/RFraserCElse_y2015m07d02.nc')\nriver4 = nc.Dataset('/ocean/sallen/allen/research/MEOPAR/Rivers/RFraserCElse_y2015m07d03.nc')\nriver5 = nc.Dataset('/ocean/sallen/allen/research/MEOPAR/Rivers/RFraserCElse_y2015m07d04.nc')\nriver6 = nc.Dataset('/ocean/sallen/allen/research/MEOPAR/Rivers/RFraserCElse_y2015m07d05.nc')", "_____no_output_____" ], [ "print 'May 15'\nfind_points(river1.variables['rorunoff'][:,:,:])\nprint 'Jul 1'\nfind_points(river2.variables['rorunoff'][:,:,:])\nprint 'Jul 2'\nfind_points(river3.variables['rorunoff'][:,:,:])\nprint 'Jul 3'\nfind_points(river4.variables['rorunoff'][:,:,:])\nprint 'Jul 4'\nfind_points(river5.variables['rorunoff'][:,:,:])\nprint 'Jul 5'\nfind_points(river6.variables['rorunoff'][:,:,:])", "May 15\n411 324 49.0993995667 -123.083885193 0.72367\n412 324 49.1033248901 -123.087242126 0.72367\n414 334 49.1300430298 -123.041992188 7.24504\n415 334 49.1339645386 -123.045349121 7.24504\n416 334 49.1378898621 -123.048713684 7.24504\n434 318 49.1783180237 -123.192321777 0.726198\nJul 1\n411 324 49.0993995667 -123.083885193 0.674936\n412 324 49.1033248901 -123.087242126 0.674936\n414 334 49.1300430298 -123.041992188 8.04579\n415 334 49.1339645386 -123.045349121 8.04579\n416 334 49.1378898621 -123.048713684 8.04579\n434 318 49.1783180237 -123.192321777 0.677294\nJul 2\n411 324 49.0993995667 -123.083885193 0.674921\n412 324 49.1033248901 -123.087242126 0.674921\n414 334 49.1300430298 -123.041992188 7.98521\n415 334 49.1339645386 -123.045349121 7.98521\n416 334 49.1378898621 -123.048713684 7.98521\n434 318 49.1783180237 -123.192321777 0.677279\nJul 3\n411 324 49.0993995667 -123.083885193 0.676803\n412 324 49.1033248901 -123.087242126 0.676803\n414 334 49.1300430298 -123.041992188 7.92463\n415 334 49.1339645386 -123.045349121 7.92463\n416 334 49.1378898621 -123.048713684 7.92463\n434 318 49.1783180237 -123.192321777 0.679167\nJul 4\n411 324 49.0993995667 -123.083885193 0.681227\n412 324 49.1033248901 -123.087242126 0.681227\n414 334 49.1300430298 -123.041992188 7.86405\n415 334 49.1339645386 -123.045349121 7.86405\n416 334 49.1378898621 -123.048713684 7.86405\n434 318 49.1783180237 -123.192321777 0.683607\nJul 5\n411 324 49.0993995667 -123.083885193 0.674281\n412 324 49.1033248901 -123.087242126 0.674281\n414 334 49.1300430298 -123.041992188 7.80347\n415 334 49.1339645386 -123.045349121 7.80347\n416 334 49.1378898621 -123.048713684 7.80347\n434 318 49.1783180237 -123.192321777 0.676636\n" ], [ "ik = 425; jk = 302; d = 6\nfig, ax = plt.subplots(1,1,figsize=(15,7.5))\nimin = 390; imax = 435; jmin = 280; jmax = 398\ncmap = 
plt.get_cmap('winter_r')\ncmap.set_bad('burlywood')\nmesh = ax.pcolormesh(depth[imin:imax,jmin:jmax], vmax = 10., cmap=cmap)\nax.set_xlim((0,110))\nax.set_xlabel('Grid Points')\nax.set_ylabel('Grid Points')\nax.text(40, 28, \"Short Fraser River\", fontsize=14)\ncbar=fig.colorbar(mesh)\ncbar.set_label('Depth (m)')\nax.plot(np.array((324,324,334,334,334,318))-jmin+0.5,np.array((411,412,414,415,416,434))-imin+0.5,'ko');", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e7d857849ecb7a3df35531b88fdf3e004b896f89
29,020
ipynb
Jupyter Notebook
1-Lessons/Lesson12/.ipynb_checkpoints/data-display-histogram-checkpoint.ipynb
dustykat/engr-1330-psuedo-course
3e7e31a32a1896fcb1fd82b573daa5248e465a36
[ "CC0-1.0" ]
null
null
null
1-Lessons/Lesson12/.ipynb_checkpoints/data-display-histogram-checkpoint.ipynb
dustykat/engr-1330-psuedo-course
3e7e31a32a1896fcb1fd82b573daa5248e465a36
[ "CC0-1.0" ]
null
null
null
1-Lessons/Lesson12/.ipynb_checkpoints/data-display-histogram-checkpoint.ipynb
dustykat/engr-1330-psuedo-course
3e7e31a32a1896fcb1fd82b573daa5248e465a36
[ "CC0-1.0" ]
null
null
null
102.183099
6,420
0.839421
[ [ [ "import pandas as pd\n\ndf = pd.read_csv('top_movies.csv')\ndf.head()", "_____no_output_____" ], [ "df[[\"Gross\"]].hist()", "_____no_output_____" ], [ "df[[\"Gross\"]].hist(bins=4)", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\ndata = df[['Gross']].values\nplt.hist(data)", "_____no_output_____" ], [ "plt.hist(data, bins=4)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
e7d85955e73646f27850ec8635f337c4c2150627
293,150
ipynb
Jupyter Notebook
dataproject.ipynb
mugilc/mugilc.github.io
7a404b5d388f563cb747ea3c44b11a8269c61a01
[ "MIT" ]
null
null
null
dataproject.ipynb
mugilc/mugilc.github.io
7a404b5d388f563cb747ea3c44b11a8269c61a01
[ "MIT" ]
null
null
null
dataproject.ipynb
mugilc/mugilc.github.io
7a404b5d388f563cb747ea3c44b11a8269c61a01
[ "MIT" ]
null
null
null
69.269849
66,822
0.594883
[ [ [ "<a href=\"https://colab.research.google.com/github/mooglol/mooglol.github.io/blob/master/dataproject.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import pandas as pd\nws = pd.read_csv('winshares.txt')", "_____no_output_____" ], [ "ws.head()", "_____no_output_____" ], [ "ws.shape", "_____no_output_____" ], [ "# clean up player name column\n\nws['Player'] = ws['Player'].str.split('/').str[0]", "_____no_output_____" ], [ "ws.head()", "_____no_output_____" ], [ "sals = pd.read_csv('salaries.txt')", "_____no_output_____" ], [ "sals.head()", "_____no_output_____" ], [ "sals.shape", "_____no_output_____" ], [ "sals['Player'] = sals['Player'].str.split('/').str[0]", "_____no_output_____" ], [ "sals.head()", "_____no_output_____" ], [ "# merge columns 2019-2020 salaries with ws dataframe\n\nfinal = pd.merge(ws, sals[['Player', '2019-20']], how='inner', on='Player')\nfinal.head()", "_____no_output_____" ], [ "# lost players who did not renew contracts this year + new players entering the league in 2019-20, but dataframe is ready to go - salaries are together with stats\n# use WS as primary statistic to assess value\n# drop some unnecessary columns\n\nfinal = final.drop(columns=['Unnamed: 19', 'Unnamed: 24'])", "_____no_output_____" ], [ "final.head()", "_____no_output_____" ], [ "# rename WS▼, get rid of extra character\n\nfinal.rename(columns={'WS▼':'WS'}, inplace=True)", "_____no_output_____" ], [ "final.head()", "_____no_output_____" ], [ "final.dtypes", "_____no_output_____" ], [ "# change 2019-20 dtype to float so we can use it to divide $/WS\n\nfinal['2019-20'] = final['2019-20'].replace('[\\$,]', '', regex=True).astype(float)\nfinal.head()", "_____no_output_____" ], [ "final['value'] = final['WS']/final['2019-20']", "_____no_output_____" ], [ "final['value'].head()", "_____no_output_____" ], [ "# sort by most valuable players in terms of WS/salary for 2019-20\n\nfinal.sort_values(by=['value'], inplace=True, ascending=False)", "_____no_output_____" ], [ "final.sort_values(by=['2019-20'], inplace=True, ascending=False)\nfinal.head(10)", "_____no_output_____" ], [ "final.head()", "_____no_output_____" ], [ "# drop inf values, replace with nan\n\nimport numpy as np\n\nfinal = final.replace([np.inf, -np.inf], np.nan)", "_____no_output_____" ], [ "# drop nan values\n\nfinal = final.dropna()", "_____no_output_____" ], [ "final.head(10)", "_____no_output_____" ], [ "# lets choose a few columns to narrow down and work with\n\nfinal1 = final[['Player', 'Pos', 'Age', 'WS', '2019-20', 'value']]", "_____no_output_____" ], [ "final1.head(10)", "_____no_output_____" ], [ "# value column is hard to read because the numbers are so small - double digit winshares divided by millions in salary so let's scale that up\n\nfinal1['value'] = final1['value'].apply(lambda x: x*10000000)", "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \n" ], [ "# gave me the warning above, but seems to have worked\nfinal1['value'].head()", "_____no_output_____" ], [ "# top 20 most valuable players, in terms of WS last year / salary this season\nfinal1.head(20)", "_____no_output_____" ], [ "# top 20 LEAST valuable players from 
last season\nfinal1.tail(20)", "_____no_output_____" ], [ "# let's make a visual of the top 20 most valuable players\n\ntop20 = final1.head(20)", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nplt.scatter(top20['WS'], top20['2019-20']\n            , s=top20['value'])\nplt.title('Top 20 Most Valuable Players by WS/$')\nplt.xlabel('WS')\nplt.ylabel('2019-20 Salary')\nplt.show()", "_____no_output_____" ], [ "df = pd.DataFrame({\n'x': top20['2019-20'],\n'y': top20['WS'],\n's': top20['value'],\n'group': top20['Player']\n})", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "# let's try adding labels to the points\n\nimport pandas as pd\nimport matplotlib.pylab as plt\nimport seaborn as sns\n\nplt.figure(figsize = (17,12))\n\nax = sns.scatterplot(df.x, df.y, alpha = 0.5,s = 1000)\n\nfor line in range(0,df.shape[0]):\n     ax.text(df.x.iloc[line], df.y.iloc[line], df.group.iloc[line], horizontalalignment='center', size='medium', color='black')\n\nplt.title('Top 20 Most Valuable NBA Players by WS/$')\nplt.xlabel('2019-20 Salary')\nplt.ylabel('WS')\nplt.show()", "_____no_output_____" ], [ "# do the same, but for 20 least valuable players\n\nbot20 = final1.tail(20)", "_____no_output_____" ], [ "df = pd.DataFrame({\n'x': bot20['2019-20'],\n'y': bot20['WS'],\n's': bot20['value'],\n'group': bot20['Player']\n})", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "import pandas as pd\nimport matplotlib.pylab as plt\nimport seaborn as sns\n\nplt.figure(figsize = (17,12))\n\nax = sns.scatterplot(df.x, df.y, alpha = 0.5,s = 1000)\n\nfor line in range(0,df.shape[0]):\n     ax.text(df.x.iloc[line], df.y.iloc[line], df.group.iloc[line], horizontalalignment='center', size='medium', color='black')\n\nplt.title('Top 20 Least Valuable NBA Players by WS/$')\nplt.xlabel('2019-20 Salary')\nplt.ylabel('WS')\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d85dd09d961e6b0b7fa0a6393c046655d03fa6
375,543
ipynb
Jupyter Notebook
gene-prediction-gaussian.ipynb
neungkl/MLE-and-naive-bayes-classification
485fe42e908b8ae2420b219deaa5a44b74b035bd
[ "MIT" ]
1
2021-02-14T03:16:35.000Z
2021-02-14T03:16:35.000Z
gene-prediction-gaussian.ipynb
neungkl/MLE-and-naive-bayes-classification
485fe42e908b8ae2420b219deaa5a44b74b035bd
[ "MIT" ]
null
null
null
gene-prediction-gaussian.ipynb
neungkl/MLE-and-naive-bayes-classification
485fe42e908b8ae2420b219deaa5a44b74b035bd
[ "MIT" ]
1
2020-06-03T03:34:44.000Z
2020-06-03T03:34:44.000Z
292.251362
29,428
0.902424
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.style.use('seaborn')\n\nIS_HK = 1\nIS_NOT_HK = 0\n\n%matplotlib inline", "_____no_output_____" ], [ "data = pd.read_csv(\"data.csv\")\ndata.head()", "_____no_output_____" ], [ "data.describe()", "_____no_output_____" ], [ "data.loc[data[\"5_MAR_presence\"] == \"no\", \"5_MAR_presence\"] = 0.0\ndata.loc[data[\"5_MAR_presence\"] == \"yes\", \"5_MAR_presence\"] = 1.0\ndata.loc[data[\"3_MAR_presence\"] == \"no\", \"3_MAR_presence\"] = 0.0\ndata.loc[data[\"3_MAR_presence\"] == \"yes\", \"3_MAR_presence\"] = 1.0\ndata.loc[data[\"5_polyA_18_presence\"] == \"no\", \"5_polyA_18_presence\"] = 0.0\ndata.loc[data[\"5_polyA_18_presence\"] == \"yes\", \"5_polyA_18_presence\"] = 1.0\ndata.loc[data[\"5_CCGNN_2_5_presence\"] == \"no\", \"5_CCGNN_2_5_presence\"] = 0.0\ndata.loc[data[\"5_CCGNN_2_5_presence\"] == \"yes\", \"5_CCGNN_2_5_presence\"] = 1.0\ndata.loc[data[\"is_hk\"] == \"no\", \"is_hk\"] = 0.0\ndata.loc[data[\"is_hk\"] == \"yes\", \"is_hk\"] = 1.0\ndel data[\"EMBL_transcript_id\"]\n\ndata[\"5_MAR_presence\"] = data[\"5_MAR_presence\"].astype(float)\ndata[\"3_MAR_presence\"] = data[\"3_MAR_presence\"].astype(float)\ndata[\"5_polyA_18_presence\"] = data[\"5_polyA_18_presence\"].astype(float)\ndata[\"5_CCGNN_2_5_presence\"] = data[\"5_CCGNN_2_5_presence\"].astype(float)\n\ncategory_features = [\"5_MAR_presence\", \"3_MAR_presence\", \"5_polyA_18_presence\", \"5_CCGNN_2_5_presence\"]", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ] ], [ [ "- How many items are NaN in the is hk column?\n- How many items are known housekeeping genes?\n- How many items are known tissue specific genes?", "_____no_output_____" ] ], [ [ "print(\"NaN %s\" % len(data[data[\"is_hk\"].isnull()]))\nprint(\"Housekeeping %s\" % len(data[data[\"is_hk\"] == 1]))\nprint(\"Specific %s\" % len(data[data[\"is_hk\"] == 0]))", "NaN 46459\nHousekeeping 103\nSpecific 667\n" ], [ "def split_train_test(data):\n split = (int) (len(data) * 0.9)\n return data[0:split], data[split:]\n\ndef split_data(data):\n # Shuffle data\n data = data.sample(frac = 1)\n \n # train_set, test_set\n hk_yes = data[data[\"is_hk\"] == IS_HK]\n hk_no = data[data[\"is_hk\"] == IS_NOT_HK]\n \n train_yes, test_yes = split_train_test(hk_yes)\n train_no , test_no = split_train_test(hk_no)\n \n train_set = train_yes\n train_set = train_set.append(train_no)\n train_set = train_set.sample(frac = 1)\n \n test_set = test_yes\n test_set = test_set.append(test_no)\n test_set = test_set.sample(frac = 1)\n \n # unsup_train_set\n unsup_train_set = data[data[\"is_hk\"].isnull()]\n \n # sup_train_set\n sup_train_set = data[data[\"is_hk\"].notnull()]\n \n return train_set, test_set, unsup_train_set, sup_train_set\n\ntrain_set, test_set, unsup_train_set, sup_train_set = split_data(data)", "_____no_output_____" ], [ "def bin_plot(hist, bin_edge):\n # make sure to import matplotlib.pyplot as plt\n # plot the histogram\n plt.figure(figsize=(6,4))\n plt.fill_between(bin_edge.repeat(2)[1:-1],hist.repeat(2),facecolor=\"steelblue\")\n plt.show()\n \n # plot the first 100 bins only\n plt.figure(figsize=(6,4))\n plt.fill_between(bin_edge.repeat(2)[1:100],hist.repeat(2)[1:100],facecolor=\"steelblue\")\n plt.show()\n # plot the first 500 bins only\n plt.figure(figsize=(6,4))\n plt.fill_between(bin_edge.repeat(2)[1:500],hist.repeat(2)[1:500],facecolor=\"steelblue\")\n plt.show()", "_____no_output_____" ], [ "# remove NaN values\ntrain_set_clength_no_nan = 
data[\"cDNA_length\"][~np.isnan(data[\"cDNA_length\"])]\n# bin the data into 1000 equally spaced bins\n# hist is the count for each bin\n# bin_edge is the edge values of the bins\nhist, bin_edge = np.histogram(train_set_clength_no_nan,1000)\nbin_plot(hist, bin_edge)", "_____no_output_____" ] ], [ [ "How many bins have zero counts?", "_____no_output_____" ] ], [ [ "print(\"Total %s\" % len(hist))\nprint(\"Zeros %s\" % sum(hist == 0))", "Total 1000\nZeros 823\n" ] ], [ [ "**cDNA Density Plot**", "_____no_output_____" ] ], [ [ "train_set_clength_no_nan_sorted = data[\"cDNA_length\"][data[\"cDNA_length\"].notnull()].sort_values()\n\nbin_edge = np.unique(train_set_clength_no_nan_sorted[0::70])\nhist = np.bincount(np.digitize(train_set_clength_no_nan_sorted, bin_edge))\nhist = hist[1:-1]\nbin_plot(hist, bin_edge)", "_____no_output_____" ] ], [ [ "**CDS Density Plot**", "_____no_output_____" ] ], [ [ "train_set_clength_no_nan_sorted = data[\"cds_length\"][data[\"cds_length\"].notnull()].sort_values()\n\nbin_edge = np.unique(train_set_clength_no_nan_sorted[0::100])\nhist = np.bincount(np.digitize(train_set_clength_no_nan_sorted, bin_edge))\nhist = hist[1:-1]\nbin_plot(hist, bin_edge)", "_____no_output_____" ], [ "for feature in list(train_set):\n if feature == \"is_hk\":\n continue\n \n f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(12,4))\n \n bin_size = 2 if feature in category_features else 500\n \n X = train_set[train_set[\"is_hk\"] == IS_HK][feature][~np.isnan(train_set[feature])]\n hist, bin_edge = np.histogram(X, bin_size)\n ax1.fill_between(bin_edge.repeat(2)[1:-1],hist.repeat(2),facecolor=\"orange\")\n ax1.set_title(feature + \" (is_hk)\")\n \n X = train_set[train_set[\"is_hk\"] == IS_NOT_HK][feature][~np.isnan(train_set[feature])]\n hist, bin_edge = np.histogram(X, bin_size)\n ax2.fill_between(bin_edge.repeat(2)[1:-1],hist.repeat(2),facecolor=\"steelblue\")\n ax2.set_title(feature + \" (is_not_hk)\")\n \n plt.show()", "_____no_output_____" ] ], [ [ "## MLE calculation", "_____no_output_____" ] ], [ [ "def calc_mean_var(data):\n data = data[data.notnull()]\n u = data.mean()\n v = data.var()\n return u, v\n\ndef calc_prob_eq_zero(data):\n data = data[data.notnull()]\n return len(data[data == 0]) * 1.0 / len(data) ", "_____no_output_____" ], [ "likelihood = {}\n\nfor feature in list(train_set):\n if feature == \"is_hk\":\n continue\n param_type = \"category\" if feature in category_features else \"gaussian\"\n \n ll = [0, 0]\n for i in [IS_HK, IS_NOT_HK]:\n variable = {}\n train_data = sup_train_set[sup_train_set[\"is_hk\"] == i]\n if param_type == \"category\":\n variable[\"prob_zero\"] = calc_prob_eq_zero(\n train_data[feature]\n )\n else:\n u, var = calc_mean_var(train_data[feature])\n variable[\"u\"] = u\n variable[\"var\"] = var\n ll[i] = variable\n \n likelihood[feature] = ll\n\nprior = [0, 0]\nprior[IS_NOT_HK] = len(sup_train_set[sup_train_set[\"is_hk\"] == IS_NOT_HK]) / len(sup_train_set)\nprior[IS_HK] = 1 - prior[0]\n\nprint(\"Prior: is_not_hk = %s, is_hk = %s\" % (prior[IS_NOT_HK], prior[IS_HK]))", "Prior: is_not_hk = 0.8662337662337662, is_hk = 0.13376623376623376\n" ], [ "def normal_pdf(x, u, var):\n if var < 1e-12:\n return np.zeros(len(x)) + 1e-12\n return np.exp(-(x - u)**2 / (2 * var)) / np.sqrt(2 * np.pi * var) ", "_____no_output_____" ], [ "for feature in list(train_set):\n if feature == \"is_hk\":\n continue\n \n if feature in category_features:\n continue\n \n plt.figure(figsize=(5,4))\n \n X_max = train_set[feature].max()\n \n if X_max < 10:\n X_norm = 
np.arange(0, X_max, 0.01)\n else:\n X_norm = np.arange(0, X_max, 1)\n \n Y_norm = normal_pdf(\n X_norm,\n likelihood[feature][IS_HK][\"u\"],\n likelihood[feature][IS_HK][\"var\"] + 1e-12\n )\n hist, bin_edge = np.histogram(X, bin_size)\n \n plt.plot(X_norm, Y_norm, color=\"orange\")\n \n Y_norm = normal_pdf(\n X_norm,\n likelihood[feature][IS_NOT_HK][\"u\"],\n likelihood[feature][IS_NOT_HK][\"var\"] + 1e-12\n )\n \n plt.plot(X_norm, Y_norm, color=\"steelblue\")\n plt.legend(['is_hk', 'is_not_hk'])\n plt.title(feature)\n plt.show()", "_____no_output_____" ], [ "def prob_category(x, ll, is_hk):\n if x == 0:\n return ll[is_hk][\"prob_zero\"]\n else:\n return 1 - ll[is_hk][\"prob_zero\"]\n\ndef predict(test_data):\n L = np.zeros(len(test_data))\n for feature in list(test_data):\n \n if feature == \"is_hk\":\n continue\n \n data = test_data[feature]\n ll = likelihood[feature]\n \n if feature in category_features:\n p_house = np.fromiter((prob_category(x, ll, IS_HK) for x in data), data.dtype)\n p_not_house = np.fromiter((prob_category(x, ll, IS_NOT_HK) for x in data), data.dtype)\n L += np.log(p_house) - np.log(p_not_house)\n else:\n not_null_idx = data.notnull()\n L[not_null_idx] += np.log(normal_pdf(data[not_null_idx], ll[IS_HK][\"u\"], ll[IS_HK][\"var\"]))\n L[not_null_idx] -= np.log(normal_pdf(data[not_null_idx], ll[IS_NOT_HK][\"u\"], ll[IS_NOT_HK][\"var\"]))\n \n L += np.log(prior[IS_HK]) - np.log(prior[IS_NOT_HK])\n \n return L", "_____no_output_____" ], [ "def activate_predict(y, threshold = 0.0):\n return (y > threshold).astype(int)", "_____no_output_____" ], [ "def accuracy(y_test, y_pred):\n return np.sum(y_test == y_pred) / len(y_test)\n\ndef precision(y_test, y_pred):\n n_y_pred = np.sum(y_pred == 1)\n return np.sum(np.logical_and(y_test == y_pred, y_pred == 1)) / (np.sum(y_pred == 1) + 1e-12)\n \n# true positive rate\ndef recall(y_test, y_pred):\n return np.sum(np.logical_and(y_test == y_pred, y_test == 1)) / (np.sum(y_test == 1) + 1e-12)\n\ndef false_positive_rate(y_test, y_pred):\n return np.sum(np.logical_and(y_test != y_pred, y_test == 0)) / np.sum(y_test == 0)\n\ndef measure_metrics(y_test, y_pred):\n print(\"Accuracy: %f\" % accuracy(y_test, y_pred))\n pcs = precision(y_test, y_pred)\n rc = recall(y_test, y_pred)\n print(\"Precision: %f\" % pcs)\n print(\"Recall: %f\" % rc)\n f1 = 2 * pcs * rc / (pcs + rc + 1e-12)\n print(\"F1: %f\" % f1)", "_____no_output_____" ], [ "y_test = test_set[\"is_hk\"]\ny_pred = activate_predict(predict(test_set))\nmeasure_metrics(y_test, y_pred)", "Accuracy: 0.923077\nPrecision: 0.647059\nRecall: 1.000000\nF1: 0.785714\n" ] ], [ [ "## Baseline", "_____no_output_____" ], [ "1\\. Random Choice Baseline", "_____no_output_____" ] ], [ [ "def create_random_pred():\n return np.random.random_sample((len(y_test),)) - 0.5\ny_pred = activate_predict(create_random_pred())\nmeasure_metrics(y_test, y_pred)", "Accuracy: 0.435897\nPrecision: 0.133333\nRecall: 0.545455\nF1: 0.214286\n" ] ], [ [ "2\\. 
Majority", "_____no_output_____" ] ], [ [ "def create_majority_pred():\n return np.ones(len(y_test)) * test_set[\"is_hk\"].mode().values.astype(int)\ny_pred = create_majority_pred()\nmeasure_metrics(y_test, y_pred)", "Accuracy: 0.858974\nPrecision: 0.000000\nRecall: 0.000000\nF1: 0.000000\n" ] ], [ [ "## ROC", "_____no_output_____" ] ], [ [ "t = np.arange(-5,5,0.01)\n\ntp = []\ntp_random = []\ntp_majority = []\n\nfp = []\nfp_random = []\nfp_majority = []\n\ny_test = test_set[\"is_hk\"]\n\ny_pred = predict(test_set)\ny_random = create_random_pred()\ny_act_majority = create_majority_pred()\n\nfor t_i in t:\n \n y_act_pred = activate_predict(y_pred, threshold = t_i)\n y_act_random = activate_predict(y_random, threshold = t_i)\n \n tp.append(recall(y_test, y_act_pred))\n fp.append(false_positive_rate(y_test, y_act_pred))\n \n tp_random.append(recall(y_test, y_act_random))\n fp_random.append(false_positive_rate(y_test, y_act_random))\n \n tp_majority.append(recall(y_test, y_act_majority))\n fp_majority.append(false_positive_rate(y_test, y_act_majority))\n \n\nplt.figure(figsize=(7,5))\nplt.plot(fp_random, tp_random)\nplt.plot(fp_majority, tp_majority)\nplt.plot(fp, tp)\nplt.legend(['Random', 'Majority', 'Naive Bayes'])\nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d8a655ed1917c2e320f47810713c523246a525
9,746
ipynb
Jupyter Notebook
courses/machine_learning/deepdive/02_tensorflow/b_tfstart_graph.ipynb
KayvanShah1/training-data-analyst
3f778a57b8e6d2446af40ca6063b2fd9c1b4bc88
[ "Apache-2.0" ]
6,140
2016-05-23T16:09:35.000Z
2022-03-30T19:00:46.000Z
courses/machine_learning/deepdive/02_tensorflow/b_tfstart_graph.ipynb
KayvanShah1/training-data-analyst
3f778a57b8e6d2446af40ca6063b2fd9c1b4bc88
[ "Apache-2.0" ]
1,384
2016-07-08T22:26:41.000Z
2022-03-24T16:39:43.000Z
courses/machine_learning/deepdive/02_tensorflow/b_tfstart_graph.ipynb
KayvanShah1/training-data-analyst
3f778a57b8e6d2446af40ca6063b2fd9c1b4bc88
[ "Apache-2.0" ]
5,110
2016-05-27T13:45:18.000Z
2022-03-31T18:40:42.000Z
35.057554
553
0.600144
[ [ [ "# Getting started with TensorFlow (Graph Mode)\n\n**Learning Objectives**\n - Understand the difference between Tensorflow's two modes: Eager Execution and Graph Execution\n - Get used to deferred execution paradigm: first define a graph then run it in a `tf.Session()`\n - Understand how to parameterize a graph using `tf.placeholder()` and `feed_dict`\n - Understand the difference between constant Tensors and variable Tensors, and how to define each\n - Practice using mid-level `tf.train` module for gradient descent\n\n## Introduction\n\n**Eager Execution**\n\nEager mode evaluates operations immediatley and return concrete values immediately. To enable eager mode simply place `tf.enable_eager_execution()` at the top of your code. We recommend using eager execution when prototyping as it is intuitive, easier to debug, and requires less boilerplate code.\n\n**Graph Execution**\n\nGraph mode is TensorFlow's default execution mode (although it will change to eager in TF 2.0). In graph mode operations only produce a symbolic graph which doesn't get executed until run within the context of a tf.Session(). This style of coding is less inutitive and has more boilerplate, however it can lead to performance optimizations and is particularly suited for distributing training across multiple devices. We recommend using delayed execution for performance sensitive production code. ", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nprint(tf.__version__)", "_____no_output_____" ] ], [ [ "## Graph Execution", "_____no_output_____" ], [ "### Adding Two Tensors ", "_____no_output_____" ], [ "#### Build the Graph\n\nUnlike eager mode, no concrete value will be returned yet. Just a name, shape and type are printed. Behind the scenes a directed graph is being created.", "_____no_output_____" ] ], [ [ "a = tf.constant(value = [5, 3, 8], dtype = tf.int32)\nb = tf.constant(value = [3, -1, 2], dtype = tf.int32)\nc = tf.add(x = a, y = b)\nprint(c)", "_____no_output_____" ] ], [ [ "#### Run the Graph\n\nA graph can be executed in the context of a `tf.Session()`. Think of a session as the bridge between the front-end Python API and the back-end C++ execution engine. \n\nWithin a session, passing a tensor operation to `run()` will cause Tensorflow to execute all upstream operations in the graph required to calculate that value.", "_____no_output_____" ] ], [ [ "with tf.Session() as sess:\n result = sess.run(fetches = c)\n print(result)", "_____no_output_____" ] ], [ [ "#### Parameterizing the Grpah \n\nWhat if values of `a` and `b` keep changing? How can you parameterize them so they can be fed in at runtime? \n\n*Step 1: Define Placeholders*\n\nDefine `a` and `b` using `tf.placeholder()`. You'll need to specify the data type of the placeholder, and optionally a tensor shape.\n\n*Step 2: Provide feed_dict*\n\nNow when invoking `run()` within the `tf.Session()`, in addition to providing a tensor operation to evaluate, you also provide a dictionary whose keys are the names of the placeholders. 
", "_____no_output_____" ] ], [ [ "a = tf.placeholder(dtype = tf.int32, shape = [None]) \nb = tf.placeholder(dtype = tf.int32, shape = [None])\nc = tf.add(x = a, y = b)\n\nwith tf.Session() as sess:\n result = sess.run(fetches = c, feed_dict = {\n a: [3, 4, 5],\n b: [-1, 2, 3]\n })\n print(result)", "_____no_output_____" ] ], [ [ "### Linear Regression", "_____no_output_____" ], [ "#### Toy Dataset\nWe'll model the following:\n\n\\begin{equation}\ny= 2x + 10\n\\end{equation}", "_____no_output_____" ] ], [ [ "X = tf.constant(value = [1,2,3,4,5,6,7,8,9,10], dtype = tf.float32)\nY = 2 * X + 10\nprint(\"X:{}\".format(X))\nprint(\"Y:{}\".format(Y))", "_____no_output_____" ] ], [ [ "#### 2.2 Loss Function\nUsing mean squared error, our loss function is:\n\\begin{equation}\nMSE = \\frac{1}{m}\\sum_{i=1}^{m}(\\hat{Y}_i-Y_i)^2\n\\end{equation}\n\n$\\hat{Y}$ represents the vector containing our model's predictions:\n\\begin{equation}\n\\hat{Y} = w_0X + w_1\n\\end{equation}\n\nNote below we introduce TF variables for the first time. Unlike constants, variables are mutable. \n\nBrowse the official TensorFlow [guide on variables](https://www.tensorflow.org/guide/variables) for more information on when/how to use them.", "_____no_output_____" ] ], [ [ "with tf.variable_scope(name_or_scope = \"training\", reuse = tf.AUTO_REUSE):\n w0 = tf.get_variable(name = \"w0\", initializer = tf.constant(value = 0.0, dtype = tf.float32))\n w1 = tf.get_variable(name = \"w1\", initializer = tf.constant(value = 0.0, dtype = tf.float32))\n \nY_hat = w0 * X + w1\nloss_mse = tf.reduce_mean(input_tensor = (Y_hat - Y)**2)", "_____no_output_____" ] ], [ [ "#### Optimizer\n\nAn optimizer in TensorFlow both calculates gradients and updates weights. In addition to basic gradient descent, TF provides implementations of several more advanced optimizers such as ADAM and FTRL. They can all be found in the [tf.train](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/train) module. \n\nNote below we're not expclictly telling the optimizer which tensors are our weight tensors. So how does it know what to update? Optimizers will update all variables in the `tf.GraphKeys.TRAINABLE_VARIABLES` [collection](https://www.tensorflow.org/guide/variables#variable_collections). All variables are added to this collection by default. Since our only variables are `w0` and `w1`, this is the behavior we want. If we had a variable that we *didn't* want to be added to the collection we would set `trainable=false` when creating it.", "_____no_output_____" ] ], [ [ "LEARNING_RATE = tf.placeholder(dtype = tf.float32, shape = None)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate = LEARNING_RATE).minimize(loss = loss_mse)", "_____no_output_____" ] ], [ [ "#### Training Loop\n\nNote our results are identical to what we found in Eager mode.", "_____no_output_____" ] ], [ [ "STEPS = 1000\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer()) # initialize variables\n \n for step in range(STEPS):\n #1. Calculate gradients and update seights \n sess.run(fetches = optimizer, feed_dict = {LEARNING_RATE: 0.02})\n \n #2. Periodically print MSE\n if step % 100 == 0:\n print(\"STEP: {} MSE: {}\".format(step, sess.run(fetches = loss_mse)))\n \n # Print final MSE and weights\n print(\"STEP: {} MSE: {}\".format(STEPS, sess.run(loss_mse)))\n print(\"w0:{}\".format(round(float(sess.run(w0)), 4)))\n print(\"w1:{}\".format(round(float(sess.run(w1)), 4)))", "_____no_output_____" ] ], [ [ "Copyright 2019 Google Inc. 
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d8b97f045b31e3074503fc71c51e12d67a880e
651,503
ipynb
Jupyter Notebook
Covid19.ipynb
BryanSouza91/COVID-19
cfda353ef9e8352cb0eb376897c275f958c78ceb
[ "MIT" ]
null
null
null
Covid19.ipynb
BryanSouza91/COVID-19
cfda353ef9e8352cb0eb376897c275f958c78ceb
[ "MIT" ]
null
null
null
Covid19.ipynb
BryanSouza91/COVID-19
cfda353ef9e8352cb0eb376897c275f958c78ceb
[ "MIT" ]
null
null
null
159.564781
95,600
0.738782
[ [ [ "<a href=\"https://colab.research.google.com/github/BryanSouza91/COVID-19/blob/master/Covid19.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# This is only the tested and reported cases John Hopkins CCSE has data for this is by no means a definitive view of the global epidemic.\n\n##### The repo is updated daily around 5:00pm PDT", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt", "/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n" ], [ "confirmed_url = \"https://github.com/CSSEGISandData/COVID-19/raw/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv\"\n\nrecovered_url = \"https://github.com/CSSEGISandData/COVID-19/raw/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv\"\n\ndeaths_url = \"https://github.com/CSSEGISandData/COVID-19/raw/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv\"", "_____no_output_____" ], [ "conf_df = pd.read_csv(confirmed_url) # ,index_col=['Province/State', 'Country/Region', 'Lat', 'Long']) \n\nrecv_df = pd.read_csv(recovered_url) # ,index_col=['Province/State', 'Country/Region', 'Lat', 'Long'])\n\ndeath_df = pd.read_csv(deaths_url) # ,index_col=['Province/State', 'Country/Region', 'Lat', 'Long'])", "_____no_output_____" ], [ "latest = conf_df.columns[-1]\nlatest", "_____no_output_____" ], [ "# create a differenced series function\n\ndef difference(dataset, interval=1):\n return pd.Series([dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))])\n", "_____no_output_____" ] ], [ [ "# Plots total confirmed cases by country\n\n##### Changing the logx=False to True shows the logarithmic scales of x-axis\n##### Changing the logy=False to True shows the logarithmic scales of y-axis\n##### Changing the loglog=False to True shows the logarithmic scales of both axes", "_____no_output_____" ] ], [ [ "conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == 'China'].sum().plot(figsize=(25,6),logx=False,logy=False,loglog=True);", "_____no_output_____" ], [ "conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == 'US'].sum().plot(figsize=(25,6),logx=False,logy=False,loglog=False);", "_____no_output_____" ], [ "conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == 'Japan'].sum().plot(figsize=(25,6),logx=False,logy=False,loglog=True);", "_____no_output_____" ], [ "conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == 'Italy'].sum().plot(figsize=(25,6),logx=False,logy=False,loglog=True);", "_____no_output_____" ], [ "conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == 'Iran'].sum().plot(figsize=(25,6),logx=False,logy=False,loglog=True);", "_____no_output_____" ], [ "conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == 'Russia'].sum().plot(figsize=(25,6),logx=False,logy=False,loglog=True);", "_____no_output_____" ], [ "conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == 'Greece'].sum().plot(figsize=(25,6),logx=False,logy=False,loglog=True);", "_____no_output_____" ], [ "conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == 'India'].sum().plot(figsize=(25,6),logx=False,logy=False,loglog=True);", "_____no_output_____" ], [ 
"plt.figure(figsize=(26,13))\nplt.title(\"SARS-Cov-2 COVID-19 Confirmed Cases\")\nsns.set_palette('colorblind')\nsns.scatterplot(x='Long',y='Lat',size=latest,hue='Country/Region',data=conf_df,sizes=(10,10000),legend=False,edgecolor='k');", "_____no_output_____" ], [ "plt.figure(figsize=(26,13))\nplt.title(\"SARS-Cov-2 COVID-19 Recovered Cases\")\nsns.set_palette('colorblind')\nsns.scatterplot(x='Long',y='Lat',size=latest,hue='Country/Region',data=recv_df,sizes=(10,10000),legend=False,edgecolor='k');", "_____no_output_____" ], [ "plt.figure(figsize=(26,13))\nplt.title(\"SARS-Cov-2 COVID-19 Deaths\")\nsns.set_palette('colorblind')\nsns.scatterplot(x='Long',y='Lat',size=latest,hue='Country/Region',data=death_df,sizes=(10,10000),legend=False,edgecolor='k');", "_____no_output_____" ] ], [ [ "# World report", "_____no_output_____" ] ], [ [ "\n# Create reusable series objects \nconf_sum = conf_df.loc[:,'1/22/20':].sum()\nrecv_sum = recv_df.loc[:,'1/22/20':].sum()\ndeath_sum = death_df.loc[:,'1/22/20':].sum()\n\nconf_sum_dif = difference(conf_sum, 1).values\nrecv_sum_dif = difference(recv_sum, 1).values\ndeath_sum_dif = difference(death_sum, 1).values", "_____no_output_____" ], [ "# Print world report\nprint(\"World numbers current as of {}\".format(conf_df.columns[-1]))\nprint(\"New cases: {}\".format(conf_sum_dif[-1]))\nprint(\"Total confirmed cases: {}\".format(conf_sum[-1]))\nprint(\"New case rate: {0:>.3%}\".format(conf_sum_dif[-1] / conf_sum[-2]))\nprint(\"New case 7-day Moving Average: {0:>.0f}\".format(difference(conf_sum, 1).rolling(7).mean().values[-1]))\nprint(\"New case 30-day Moving Average: {0:>.0f}\".format(difference(conf_sum, 1).rolling(30).mean().values[-1]))\nprint(\"New Recovered cases: {}\".format(recv_sum_dif[-1]))\nprint(\"Total recovered cases: {}\".format(recv_sum[-1]))\nprint(\"Recovery rate: {0:>.3%}\".format(recv_sum[-1]/conf_sum[-1]))\nprint(\"New Deaths: {}\".format(death_sum_dif[-1]))\nprint(\"Total deaths: {}\".format(death_sum[-1]))\nprint(\"Death rate: {0:>.3%}\".format(death_sum[-1]/conf_sum[-1]))\nprint()\nprint(\"Growth rate above 1.0 is sign of exponential growth, but also skewed by increased testing.\")\nprint(\"World Growth rate: {0:>.4}\".format((conf_sum_dif[-1])/(conf_sum_dif[-2])))", "World numbers current as of 4/5/20\nNew cases: 74710\nTotal confirmed cases: 1272115\nNew case rate: 6.239%\nNew case 7-day Moving Average: 78857\nNew case 30-day Moving Average: 39010\nNew Recovered cases: 13860\nTotal recovered cases: 260012\nRecovery rate: 20.439%\nNew Deaths: 4768\nTotal deaths: 69374\nDeath rate: 5.453%\n\nGrowth rate above 1.0 is sign of exponential growth, but also skewed by increased testing.\nWorld Growth rate: 0.7361\n" ] ], [ [ "# Report for each country reporting cases\n", "_____no_output_____" ] ], [ [ "# define report function\ndef report(country):\n # Create reusable series objects \n country_conf_sum = conf_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == country].sum()\n country_recv_sum = recv_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == country].sum()\n country_death_sum = death_df.loc[:,'1/22/20':].loc[conf_df['Country/Region'] == country].sum()\n\n country_conf_sum_dif = difference(country_conf_sum, 1).values\n country_recv_sum_dif = difference(country_recv_sum, 1).values\n country_death_sum_dif = difference(country_death_sum, 1).values\n\n print()\n print('_'*60)\n print(\"Numbers for {} current as of {}\".format(country, conf_df.columns[-1]))\n print()\n print(\"New cases: {}\".format(country_conf_sum_dif[-1]))\n 
print(\"Total confirmed cases: {}\".format(country_conf_sum[-1]))\n print(\"New case rate: {0:>.3%}\".format(country_conf_sum_dif[-1]/country_conf_sum[-1]))\n print(\"New case 7-day Moving Average: {0:>.0f}\".format(difference(country_conf_sum, 1).rolling(7).mean().values[-1]))\n print(\"New case 30-day Moving Average: {0:>.0f}\".format(difference(country_conf_sum, 1).rolling(30).mean().values[-1]))\n print(\"New Recovered cases: {}\".format(country_recv_sum_dif[-1]))\n print(\"Total recovered cases: {}\".format(country_recv_sum[-1]))\n print(\"Recovery rate: {0:>.3%}\".format(country_recv_sum_dif[-1]/country_recv_sum[-1]))\n print(\"New Deaths: {}\".format(country_death_sum_dif[-1]))\n print(\"Total deaths: {}\".format(country_death_sum[-1]))\n print(\"Death rate: {0:>.3%}\".format(country_death_sum_dif[-1]/country_conf_sum[-1]))\n print()\n print(\"Growth rate: {0:>.4}\".format(country_conf_sum_dif[-1]/country_conf_sum_dif[-2]))\n print(\"_\"*60)", "_____no_output_____" ], [ "report('US')", "\n____________________________________________________________\nNumbers for US current as of 4/5/20\n\nNew cases: 28222\nTotal confirmed cases: 337072\nNew case rate: 8.373%\nNew case 7-day Moving Average: 28027\nNew case 30-day Moving Average: 11227\nNew Recovered cases: 2796\nTotal recovered cases: 17448\nRecovery rate: 16.025%\nNew Deaths: 1212\nTotal deaths: 9619\nDeath rate: 0.360%\n\nGrowth rate: 0.8484\n____________________________________________________________\n" ], [ "report('Italy')", "\n____________________________________________________________\nNumbers for Italy current as of 4/5/20\n\nNew cases: 4316\nTotal confirmed cases: 128948\nNew case rate: 3.347%\nNew case 7-day Moving Average: 4466\nNew case 30-day Moving Average: 4144\nNew Recovered cases: 138\nTotal recovered cases: 6463\nRecovery rate: 2.135%\nNew Deaths: 525\nTotal deaths: 15887\nDeath rate: 0.407%\n\nGrowth rate: 0.8982\n____________________________________________________________\n" ], [ "for each in conf_df['Country/Region'].sort_values().unique():\n report(each)", "\n____________________________________________________________\nNumbers for Afghanistan current as of 4/5/20\n\nNew cases: 50\nTotal confirmed cases: 349\nNew case rate: 14.327%\nNew case 7-day Moving Average: 33\nNew case 30-day Moving Average: 12\nNew Recovered cases: 5\nTotal recovered cases: 15\nRecovery rate: 33.333%\nNew Deaths: 0\nTotal deaths: 7\nDeath rate: 0.000%\n\nGrowth rate: 2.778\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Albania current as of 4/5/20\n\nNew cases: 28\nTotal confirmed cases: 361\nNew case rate: 7.756%\nNew case 7-day Moving Average: 21\nNew case 30-day Moving Average: 12\nNew Recovered cases: 5\nTotal recovered cases: 104\nRecovery rate: 4.808%\nNew Deaths: 0\nTotal deaths: 20\nDeath rate: 0.000%\n\nGrowth rate: 0.9655\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Algeria current as of 4/5/20\n\nNew cases: 69\nTotal confirmed cases: 1320\nNew case rate: 5.227%\nNew case 7-day Moving Average: 116\nNew case 30-day Moving Average: 43\nNew Recovered cases: 0\nTotal recovered cases: 90\nRecovery rate: 0.000%\nNew Deaths: 22\nTotal deaths: 152\nDeath rate: 1.667%\n\nGrowth rate: 0.8625\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Andorra current as of 
4/5/20\n\nNew cases: 35\nTotal confirmed cases: 501\nNew case rate: 6.986%\nNew case 7-day Moving Average: 24\nNew case 30-day Moving Average: 17\nNew Recovered cases: 5\nTotal recovered cases: 26\nRecovery rate: 19.231%\nNew Deaths: 1\nTotal deaths: 18\nDeath rate: 0.200%\n\nGrowth rate: 1.296\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Angola current as of 4/5/20\n\nNew cases: 4\nTotal confirmed cases: 14\nNew case rate: 28.571%\nNew case 7-day Moving Average: 1\nNew case 30-day Moving Average: 0\nNew Recovered cases: 0\nTotal recovered cases: 2\nRecovery rate: 0.000%\nNew Deaths: 0\nTotal deaths: 2\nDeath rate: 0.000%\n\nGrowth rate: 2.0\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Antigua and Barbuda current as of 4/5/20\n\nNew cases: 0\nTotal confirmed cases: 15\nNew case rate: 0.000%\nNew case 7-day Moving Average: 1\nNew case 30-day Moving Average: 0\nNew Recovered cases: 0\nTotal recovered cases: 0\nRecovery rate: nan%\nNew Deaths: 0\nTotal deaths: 0\nDeath rate: 0.000%\n\nGrowth rate: nan\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Argentina current as of 4/5/20\n\nNew cases: 0\nTotal confirmed cases: 1451\nNew case rate: 0.000%\nNew case 7-day Moving Average: 101\nNew case 30-day Moving Average: 48\nNew Recovered cases: 1\nTotal recovered cases: 280\nRecovery rate: 0.357%\nNew Deaths: 1\nTotal deaths: 44\nDeath rate: 0.069%\n\nGrowth rate: 0.0\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Armenia current as of 4/5/20\n\nNew cases: 52\nTotal confirmed cases: 822\nNew case rate: 6.326%\nNew case 7-day Moving Average: 57\nNew case 30-day Moving Average: 27\nNew Recovered cases: 14\nTotal recovered cases: 57\nRecovery rate: 24.561%\nNew Deaths: 0\nTotal deaths: 7\nDeath rate: 0.000%\n\nGrowth rate: 1.529\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Australia current as of 4/5/20\n\nNew cases: 137\nTotal confirmed cases: 5687\nNew case rate: 2.409%\nNew case 7-day Moving Average: 243\nNew case 30-day Moving Average: 188\nNew Recovered cases: 56\nTotal recovered cases: 757\nRecovery rate: 7.398%\nNew Deaths: 5\nTotal deaths: 35\nDeath rate: 0.088%\n\nGrowth rate: 0.6227\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Austria current as of 4/5/20\n\nNew cases: 270\nTotal confirmed cases: 12051\nNew case rate: 2.240%\nNew case 7-day Moving Average: 466\nNew case 30-day Moving Average: 400\nNew Recovered cases: 491\nTotal recovered cases: 2998\nRecovery rate: 16.378%\nNew Deaths: 18\nTotal deaths: 204\nDeath rate: 0.149%\n\nGrowth rate: 1.051\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Azerbaijan current as of 4/5/20\n\nNew cases: 63\nTotal confirmed cases: 584\nNew case rate: 10.788%\nNew case 7-day Moving Average: 54\nNew case 30-day Moving Average: 19\nNew Recovered cases: 0\nTotal recovered cases: 32\nRecovery rate: 0.000%\nNew Deaths: 2\nTotal deaths: 7\nDeath rate: 0.342%\n\nGrowth rate: 
0.8077\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Bahamas current as of 4/5/20\n\nNew cases: 0\nTotal confirmed cases: 28\nNew case rate: 0.000%\nNew case 7-day Moving Average: 2\nNew case 30-day Moving Average: 1\nNew Recovered cases: 0\nTotal recovered cases: 0\nRecovery rate: nan%\nNew Deaths: 0\nTotal deaths: 4\nDeath rate: 0.000%\n\nGrowth rate: 0.0\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Bahrain current as of 4/5/20\n\nNew cases: 12\nTotal confirmed cases: 700\nNew case rate: 1.714%\nNew case 7-day Moving Average: 29\nNew case 30-day Moving Average: 21\nNew Recovered cases: 8\nTotal recovered cases: 431\nRecovery rate: 1.856%\nNew Deaths: 0\nTotal deaths: 4\nDeath rate: 0.000%\n\nGrowth rate: 0.75\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Bangladesh current as of 4/5/20\n\nNew cases: 18\nTotal confirmed cases: 88\nNew case rate: 20.455%\nNew case 7-day Moving Average: 6\nNew case 30-day Moving Average: 3\nNew Recovered cases: 3\nTotal recovered cases: 33\nRecovery rate: 9.091%\nNew Deaths: 1\nTotal deaths: 9\nDeath rate: 1.136%\n\nGrowth rate: 2.0\n____________________________________________________________\n\n____________________________________________________________\nNumbers for Barbados current as of 4/5/20\n\nNew cases: 4\nTotal confirmed cases: 56\nNew case rate: 7.143%\nNew case 7-day Moving Average: 3\nNew case 30-day Moving Average: 2\nNew Recovered cases: 6\nTotal recovered cases: 6\nRecovery rate: 100.000%\nNew Deaths: 1\nTotal deaths: 1\nDeath rate: 1.786%\n\nGrowth rate: 4.0\n____________________________________________________________\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7d8d18aaf10300143cd1a3cab552ae4d330024b
700,832
ipynb
Jupyter Notebook
covid_fixmatch_xray.ipynb
AadSah/fixmatch
e72e8a1b8a9e73fd985c5009cd8cb70f33e85af0
[ "Apache-2.0" ]
null
null
null
covid_fixmatch_xray.ipynb
AadSah/fixmatch
e72e8a1b8a9e73fd985c5009cd8cb70f33e85af0
[ "Apache-2.0" ]
null
null
null
covid_fixmatch_xray.ipynb
AadSah/fixmatch
e72e8a1b8a9e73fd985c5009cd8cb70f33e85af0
[ "Apache-2.0" ]
null
null
null
131.958577
5,063
0.722157
[ [ [ "<a href=\"https://colab.research.google.com/github/AadSah/fixmatch/blob/master/covid_fixmatch_xray.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "!git clone https://github.com/AadSah/fixmatch.git", "Cloning into 'fixmatch'...\nremote: Enumerating objects: 185, done.\nremote: Counting objects: 100% (185/185), done.\nremote: Compressing objects: 100% (124/124), done.\nremote: Total 185 (delta 95), reused 133 (delta 58), pack-reused 0\nReceiving objects: 100% (185/185), 26.91 MiB | 21.07 MiB/s, done.\nResolving deltas: 100% (95/95), done.\n" ], [ "!scp -r ./fixmatch/* ./", "_____no_output_____" ], [ "!pip install -r ./requirements.txt", "Requirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from -r ./requirements.txt (line 1)) (0.9.0)\nRequirement already satisfied: easydict in /usr/local/lib/python3.6/dist-packages (from -r ./requirements.txt (line 2)) (1.9)\nRequirement already satisfied: cython in /usr/local/lib/python3.6/dist-packages (from -r ./requirements.txt (line 3)) (0.29.16)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from -r ./requirements.txt (line 4)) (1.18.2)\nCollecting tensorflow-gpu==1.14.0\n Downloading https://files.pythonhosted.org/packages/76/04/43153bfdfcf6c9a4c38ecdb971ca9a75b9a791bb69a764d652c359aca504/tensorflow_gpu-1.14.0-cp36-cp36m-manylinux1_x86_64.whl (377.0MB)\nRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from -r ./requirements.txt (line 6)) (4.38.0)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from absl-py->-r ./requirements.txt (line 1)) (1.12.0)\nRequirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (0.2.0)\nRequirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (0.34.2)\nCollecting 
tensorflow-estimator<1.15.0rc0,>=1.14.0rc0\n Downloading https://files.pythonhosted.org/packages/3c/d5/21860a5b11caf0678fbc8319341b0ae21a07156911132e0e71bffed0510d/tensorflow_estimator-1.14.0-py2.py3-none-any.whl (488kB)\nRequirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (0.8.1)\nRequirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (0.3.3)\nCollecting tensorboard<1.15.0,>=1.14.0\n Downloading https://files.pythonhosted.org/packages/91/2d/2ed263449a078cd9c8a9ba50ebd50123adf1f8cfbea1492f9084169b89d9/tensorboard-1.14.0-py3-none-any.whl (3.1MB)\nRequirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (3.10.0)\nRequirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (1.27.2)\nRequirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (1.1.0)\nRequirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (1.0.8)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (1.1.0)\nRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (1.12.1)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.15.0,>=1.14.0->tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (1.0.0)\nRequirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.15.0,>=1.14.0->tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (46.0.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.15.0,>=1.14.0->tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (3.2.1)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.6->tensorflow-gpu==1.14.0->-r ./requirements.txt (line 5)) (2.10.0)\nERROR: tensorflow 2.2.0rc2 has requirement tensorboard<2.3.0,>=2.2.0, but you'll have tensorboard 1.14.0 which is incompatible.\nERROR: tensorflow 2.2.0rc2 has requirement tensorflow-estimator<2.3.0,>=2.2.0rc0, but you'll have tensorflow-estimator 1.14.0 which is incompatible.\nInstalling collected packages: tensorflow-estimator, tensorboard, tensorflow-gpu\n Found existing installation: tensorflow-estimator 2.2.0rc0\n Uninstalling tensorflow-estimator-2.2.0rc0:\n Successfully uninstalled tensorflow-estimator-2.2.0rc0\n Found existing installation: tensorboard 2.2.0\n Uninstalling tensorboard-2.2.0:\n Successfully uninstalled tensorboard-2.2.0\nSuccessfully installed tensorboard-1.14.0 tensorflow-estimator-1.14.0 tensorflow-gpu-1.14.0\n" ], [ "!mkdir datasets", "_____no_output_____" ], [ "!pip uninstall tensorflow\n!pip uninstall tensorflow-gpu\n!pip install 
tensorflow-gpu==1.14.0", "Uninstalling tensorflow-2.2.0rc2:\n Would remove:\n /usr/local/bin/estimator_ckpt_converter\n /usr/local/bin/saved_model_cli\n /usr/local/bin/tensorboard\n /usr/local/bin/tf_upgrade_v2\n /usr/local/bin/tflite_convert\n /usr/local/bin/toco\n /usr/local/bin/toco_from_protos\n /usr/local/lib/python3.6/dist-packages/tensorflow-2.2.0rc2.dist-info/*\n /usr/local/lib/python3.6/dist-packages/tensorflow/*\n Would not remove (might be manually added):\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/app/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/audio/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/autograph/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/autograph/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/bitwise/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/app/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/audio/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/autograph/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/autograph/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/bitwise/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/compat/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/config/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/config/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/config/optimizer/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/config/threading/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/data/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/data/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/debugging/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/distribute/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/distribute/cluster_resolver/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/distribute/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/distributions/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/dtypes/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/errors/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/feature_column/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/gfile/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/graph_util/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/image/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/initializers/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/io/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/io/gfile/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/layers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/layers/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/linalg/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/lite/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/lite/constants/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/lite/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/lite/experimental/nn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/logging/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/lookup/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/lookup/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/losses/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/manip/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/math/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/metrics/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/nest/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/nn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/nn/rnn_cell/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/profiler/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/python_io/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/quantization/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/queue/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/ragged/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/random/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/random/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/raw_ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/resource_loader/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/builder/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/constants/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/loader/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/main_op/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/signature_constants/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/signature_def_utils/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/tag_constants/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/saved_model/utils/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/sets/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/signal/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/sparse/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/spectral/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/strings/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/summary/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/sysconfig/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/test/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/tpu/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/tpu/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/train/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/train/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/train/queue_runner/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/user_ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/version/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/xla/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v1/xla/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/audio/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/autograph/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/autograph/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/bitwise/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/compat/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/config/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/config/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/config/optimizer/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/config/threading/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/data/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/data/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/debugging/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/distribute/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/distribute/cluster_resolver/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/distribute/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/dtypes/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/errors/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/feature_column/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/graph_util/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/image/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/io/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/io/gfile/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/linalg/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/lite/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/lookup/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/lookup/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/math/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/nest/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/nn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/quantization/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/queue/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/ragged/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/random/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/random/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/raw_ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/saved_model/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/sets/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/signal/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/sparse/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/strings/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/summary/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/summary/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/sysconfig/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/test/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/tpu/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/tpu/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/train/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/train/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/version/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/xla/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/compat/v2/xla/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/config/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/config/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/config/optimizer/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/config/threading/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/data/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/data/experimental/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/debugging/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/distribute/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/distribute/cluster_resolver/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/distribute/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/distributions/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/dtypes/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/errors/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/feature_column/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/gfile/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/graph_util/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/image/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/initializers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/io/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/io/gfile/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/layers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/layers/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/linalg/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/lite/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/lite/constants/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/lite/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/lite/experimental/nn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/logging/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/lookup/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/lookup/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/losses/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/manip/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/math/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/metrics/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/nest/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/nn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/nn/rnn_cell/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/profiler/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/python_io/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/quantization/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/queue/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/ragged/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/random/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/random/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/raw_ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/resource_loader/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/builder/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/constants/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/loader/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/main_op/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/signature_constants/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/signature_def_utils/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/tag_constants/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/saved_model/utils/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/sets/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/signal/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/sparse/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/spectral/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/strings/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/summary/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/sysconfig/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/test/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/tpu/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/tpu/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/train/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/train/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/train/queue_runner/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/user_ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/v1.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/version/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/xla/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/xla/experimental/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/compiler/tf2tensorrt/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/compiler/tf2tensorrt/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/compiler/tf2tensorrt/python/ops/libtftrt.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/compiler/tf2tensorrt/python/ops/trt_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/all_reduce/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/all_reduce/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/all_reduce/python/all_reduce.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/autograph/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/batching/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/batching/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/batching/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/batching/python/ops/batch_ops.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/batching/python/ops/batch_ops_test.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bayesflow/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bayesflow/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bayesflow/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bayesflow/python/ops/monte_carlo.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bayesflow/python/ops/monte_carlo_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bigtable/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bigtable/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bigtable/ops/gen_bigtable_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bigtable/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bigtable/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bigtable/python/ops/_bigtable.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/bigtable/python/ops/bigtable_api.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/custom_export_strategy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/custom_loss_head.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/distillation_loss.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/dnn_tree_combined_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/estimator_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/model.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/estimator_batch/trainer_hooks.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/lib/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/lib/learner/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/lib/learner/batch/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/lib/learner/batch/base_split_handler.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/lib/learner/batch/categorical_split_handler.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/lib/learner/batch/ordinal_split_handler.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/proto/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/proto/learner_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/proto/quantiles_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/proto/split_info_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/proto/tree_config_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/_boosted_trees_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/batch_ops_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/boosted_trees_ops_loader.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/gen_model_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/gen_prediction_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/gen_quantile_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/gen_split_handler_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/gen_stats_accumulator_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/gen_training_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/model_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/prediction_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/quantile_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/split_handler_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/stats_accumulator_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/ops/training_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/training/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/training/functions/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/training/functions/gbdt_batch.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/utils/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/boosted_trees/python/utils/losses.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/checkpoint/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/checkpoint/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/checkpoint/python/containers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/checkpoint/python/python_state.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/checkpoint/python/split_dependency.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/checkpoint/python/visualize.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cloud/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cloud/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cloud/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cloud/python/ops/bigquery_reader_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cloud/python/ops/gcs_config_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cloud/python/ops/gen_bigquery_reader_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cloud/python/ops/gen_gcs_config_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/python/training/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/python/training/cluster_resolver.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/python/training/gce_cluster_resolver.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/python/training/kubernetes_cluster_resolver.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/python/training/slurm_cluster_resolver.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/python/training/tfconfig_cluster_resolver.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cluster_resolver/python/training/tpu_cluster_resolver.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cmake/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cmake/tools/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cmake/tools/create_def_file.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/compiler/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/compiler/jit.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/compiler/xla.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/constrained_optimization/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/constrained_optimization/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/constrained_optimization/python/candidates.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/constrained_optimization/python/constrained_minimization_problem.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/constrained_optimization/python/constrained_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/constrained_optimization/python/external_regret_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/constrained_optimization/python/swap_regret_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/constrained_optimization/python/test_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/copy_graph/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/copy_graph/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/copy_graph/python/util/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/copy_graph/python/util/copy_elements.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/crf/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/crf/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/crf/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/crf/python/ops/crf.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cudnn_rnn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cudnn_rnn/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cudnn_rnn/python/layers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cudnn_rnn/python/layers/cudnn_rnn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cudnn_rnn/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/batching.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/counter.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/enumerate_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/error_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/get_single_element.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/grouping.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/interleave_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/iterator_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/parsing_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/prefetching_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/random_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/readers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/resampling.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/scan_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/shuffle_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/sliding.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/threadpool.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/unique.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/data/python/ops/writers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/decision_trees/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/decision_trees/proto/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/decision_trees/proto/generic_tree_model_extensions_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/decision_trees/proto/generic_tree_model_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/deprecated/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/python/collective_all_reduce_strategy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/python/keras_multi_worker_test_base.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/python/mirrored_strategy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/python/monitor.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/python/one_device_strategy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/python/parameter_server_strategy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distribute/python/tpu_strategy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/autoregressive.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/batch_reshape.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/absolute_value.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/affine.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/affine_linear_operator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/affine_scalar.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/batch_normalization.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/chain.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/cholesky_outer_product.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/conditional_bijector.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/exp.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/fill_triangular.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/gumbel.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/inline.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/invert.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/kumaraswamy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/masked_autoregressive.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/matrix_inverse_tril.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/ordered.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/permute.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/power_transform.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/real_nvp.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/reshape.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/scale_tril.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/sigmoid.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/sinh_arcsinh.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/softmax_centered.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/softplus.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/softsign.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/square.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/transform_diagonal.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/bijectors/weibull.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/binomial.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/cauchy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/chi2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/conditional_distribution.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/conditional_transformed_distribution.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/deterministic.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/distribution_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/geometric.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/gumbel.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/half_normal.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/independent.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/inverse_gamma.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/kumaraswamy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/logistic.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/mixture.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/mixture_same_family.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/moving_stats.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/mvn_diag.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/mvn_diag_plus_low_rank.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/mvn_full_covariance.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/mvn_linear_operator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/mvn_tril.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/negative_binomial.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/normal_conjugate_posteriors.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/onehot_categorical.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/poisson.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/poisson_lognormal.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/quantized_distribution.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/relaxed_bernoulli.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/relaxed_onehot_categorical.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/sample_stats.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/seed_stream.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/shape.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/sinh_arcsinh.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/statistical_testing.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/test_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/vector_diffeomixture.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/vector_exponential_diag.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/vector_exponential_linear_operator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/vector_laplace_diag.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/vector_laplace_linear_operator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/vector_sinh_arcsinh_diag.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/vector_student_t.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/distributions/python/ops/wishart.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/datasets.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/evaluator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/densenet/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/densenet/densenet.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/gan/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/gan/mnist.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/l2hmc/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/l2hmc/l2hmc.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/l2hmc/neural_nets.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/linear_regression/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/linear_regression/linear_regression.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/resnet50/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/resnet50/resnet50.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/revnet/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/revnet/blocks.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/revnet/config.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/revnet/ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/revnet/revnet.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/rnn_colorbot/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/rnn_colorbot/rnn_colorbot.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/rnn_ptb/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/rnn_ptb/rnn_ptb.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/spinn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/examples/spinn/data.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/metrics.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/metrics_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/network.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/parameter_server.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/saver.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/eager/python/tfe.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/boosted_trees.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/dnn_with_layer_annotations.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/early_stopping.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/export.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/exporter.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/extenders.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/head.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/hooks.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/logit_fns.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/multi_head.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/replicate_model_fn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/rnn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/estimator/python/estimator/saved_model_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/_factorization_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/clustering_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/factorization_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/factorization_ops_test_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/gen_factorization_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/gmm.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/gmm_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/kmeans.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/factorization/python/ops/wals.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/feature_column/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/feature_column/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/feature_column/python/feature_column/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/feature_column/python/feature_column/sequence_feature_column.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ffmpeg/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ffmpeg/ffmpeg.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ffmpeg/ffmpeg_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ffmpeg/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ffmpeg/ops/gen_decode_audio_op_py.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ffmpeg/ops/gen_decode_video_op_py.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ffmpeg/ops/gen_encode_audio_op_py.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/framework/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/framework/checkpoint_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/framework/experimental.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/framework/graph_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/framework/tensor_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/_variable_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/arg_scope.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/audio_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/checkpoint_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/gen_variable_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/prettyprint_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/script_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/sort_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/framework/python/ops/variables.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/ops/gen_fused_conv2d_bias_activation_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/python/ops/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/python/ops/_fused_conv2d_bias_activation_op.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_benchmark.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/fused_conv/python/ops/fused_conv2d_bias_activation_op_test_base.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/gan_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/gan_estimator_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/head.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/head_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/latent_gan_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/latent_gan_estimator_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/stargan_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/stargan_estimator_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/tpu_gan_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/estimator/python/tpu_gan_estimator_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/classifier_metrics.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/classifier_metrics_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/eval_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/eval_utils_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/sliced_wasserstein.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/sliced_wasserstein_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/summaries.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/eval/python/summaries_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/clip_weights.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/clip_weights_impl.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/conditioning_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/conditioning_utils_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/random_tensor_pool.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/random_tensor_pool_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/spectral_normalization.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/spectral_normalization_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/virtual_batchnorm.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/virtual_batchnorm_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/losses/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/losses/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/losses/python/losses_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/losses/python/losses_wargs.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/losses/python/tuple_losses.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/losses/python/tuple_losses_impl.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/namedtuples.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/train.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/edit.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/reroute.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/select.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/subgraph.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/tests/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/tests/match.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/transform.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/graph_editor/util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/grid_rnn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/grid_rnn/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/grid_rnn/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/grid_rnn/python/ops/grid_rnn_cell.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hadoop/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hadoop/_dataset_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hadoop/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hadoop/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hadoop/python/ops/gen_dataset_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hadoop/python/ops/hadoop_dataset_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hadoop/python/ops/hadoop_op_loader.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hooks/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hooks/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hooks/python/training/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/hooks/python/training/profiler_hook.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/_ignite_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/python/ops/gen_dataset_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/python/ops/gen_igfs_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/python/ops/igfs_op_loader.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/python/ops/igfs_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/python/ops/ignite_dataset_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/ignite/python/ops/ignite_op_loader.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/ops/gen_distort_image_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/ops/gen_image_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/ops/gen_single_image_random_dot_stereograms_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/_distort_image_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/_image_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/_single_image_random_dot_stereograms.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/dense_image_warp.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/distort_image_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/image_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/interpolate_spline.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/single_image_random_dot_stereograms.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/image/python/ops/sparse_image_warp.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/input_pipeline/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/input_pipeline/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/input_pipeline/ops/gen_input_pipeline_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/input_pipeline/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/input_pipeline/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/input_pipeline/python/ops/_input_pipeline_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/input_pipeline/python/ops/input_pipeline_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/input_pipeline/python/ops/input_pipeline_ops_test.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/integrate/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/integrate/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/integrate/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/integrate/python/ops/odes.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kafka/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kafka/_dataset_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kafka/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kafka/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kafka/python/ops/gen_dataset_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kafka/python/ops/kafka_dataset_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kafka/python/ops/kafka_op_loader.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/activations/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/applications/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/applications/inception_v3/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/applications/mobilenet/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/applications/resnet50/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/applications/vgg16/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/applications/vgg19/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/applications/xception/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/backend/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/callbacks/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/constraints/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/datasets/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/datasets/boston_housing/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/datasets/cifar10/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/datasets/cifar100/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/datasets/imdb/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/datasets/mnist/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/datasets/reuters/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/initializers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/layers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/losses/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/metrics/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/models/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/optimizers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/preprocessing/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/preprocessing/image/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/preprocessing/sequence/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/preprocessing/text/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/regularizers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/utils/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/wrappers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/keras/api/keras/wrappers/scikit_learn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kernel_methods/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kernel_methods/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kernel_methods/python/kernel_estimators.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kernel_methods/python/losses.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kernel_methods/python/mappers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kernel_methods/python/mappers/dense_kernel_mapper.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kernel_methods/python/mappers/random_fourier_features.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kinesis/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kinesis/_dataset_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kinesis/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kinesis/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kinesis/python/ops/gen_dataset_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kinesis/python/ops/kinesis_dataset_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/kinesis/python/ops/kinesis_op_loader.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/ops/_typecheck.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/ops/core.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/ops/io_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/ops/nn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/ops/ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/ops/sugar.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/labeled_tensor/python/ops/test_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/ops/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/ops/gen_sparse_feature_cross_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/embedding_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/encoders.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/feature_column.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/feature_column_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/initializers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/layers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/normalization.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/optimizers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/regularizers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/rev_block_lib.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/summaries.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/target_column.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/ops/_sparse_feature_cross_op.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/ops/bucketization_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/ops/sparse_feature_cross_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/ops/sparse_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/basic_session_run_hooks.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/data/boston_house_prices.csv\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/data/iris.csv\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/data/text_test.csv\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/data/text_train.csv\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/produce_small_datasets.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/synthetic.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/text_datasets.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/_sklearn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/composable_model.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/constants.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/debug.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/dnn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/dynamic_rnn_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator_test_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/head.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/head_test.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/kmeans.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/linear.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/logistic_regressor.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/metric_key.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/model_fn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/prediction_key.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/rnn_common.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/run_config.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/state_saving_rnn_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/svm.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/tensor_signature.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/estimators/test_data.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/evaluable.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/experiment.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/export_strategy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/graph_actions.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_io/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_io/dask_io.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_io/data_feeder.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_io/generator_io.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_io/graph_io.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_io/numpy_io.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_io/pandas_io.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_runner.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/learn_runner_lib.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/metric_spec.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/models.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/monitored_session.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/monitors.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/ops/embeddings_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/ops/losses_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/ops/seq2seq_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/categorical.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/categorical_vocabulary.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/tests/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/preprocessing/text.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/session_run_hook.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/summary_writer_cache.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/trainable.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/utils/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/utils/export.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/utils/gc.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/utils/input_fn_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/utils/inspect_checkpoint.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/utils/saved_model_export_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/legacy_seq2seq/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/legacy_seq2seq/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/legacy_seq2seq/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/libsvm/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/libsvm/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/libsvm/ops/gen_libsvm_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/libsvm/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/libsvm/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/libsvm/python/ops/_libsvm_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/libsvm/python/ops/libsvm_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/linear_optimizer/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/linear_optimizer/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/linear_optimizer/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/linear_optimizer/python/ops/sdca_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/linear_optimizer/python/ops/sharded_mutable_dense_hashtable.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/linear_optimizer/python/ops/sparse_feature_column.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/linear_optimizer/python/sdca_estimator.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/linear_optimizer/python/sdca_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/lookup/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/lookup/lookup_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/losses/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/losses/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/losses/python/losses/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/losses/python/losses/loss_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/losses/python/metric_learning/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/memory_stats/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/memory_stats/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/memory_stats/ops/gen_memory_stats_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/memory_stats/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/memory_stats/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/memory_stats/python/ops/_memory_stats_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/memory_stats/python/ops/memory_stats_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/meta_graph_transform/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/meta_graph_transform/meta_graph_transform.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/python/metrics/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/python/metrics/classification.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/python/ops/confusion_matrix_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/python/ops/histogram_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/python/ops/metric_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/metrics/python/ops/set_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/mixed_precision/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/mixed_precision/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/mixed_precision/python/loss_scale_manager.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/mixed_precision/python/loss_scale_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/layers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/layers/core_layers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/layers/layers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/layers/rnn_cells.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/learning.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/pruning.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/pruning_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/model_pruning/python/strip_pruning_vars_lib.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nearest_neighbor/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nearest_neighbor/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nearest_neighbor/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nearest_neighbor/python/ops/_nearest_neighbor_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nearest_neighbor/python/ops/nearest_neighbor_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nn/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nn/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nn/python/ops/alpha_dropout.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nn/python/ops/cross_entropy.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nn/python/ops/fwd_gradients.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nn/python/ops/sampling_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/nn/python/ops/scaled_softplus.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/adam_gs_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/adamax.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/addsign.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/agn_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/drop_stale_gradient_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/elastic_average_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/external_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/ggt.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/lars_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/lazy_adam_gs_optimizer.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/lazy_adam_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/matrix_functions.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/model_average_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/moving_average_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/multitask_optimizer_wrapper.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/nadam_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/powersign.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/reg_adagrad_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/shampoo.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/sign_decay.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/variable_clipping_optimizer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/opt/python/training/weight_decay_optimizers.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/adadelta.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/adagrad.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/adam.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/gradient_descent.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/momentum.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/optimizer_v2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/optimizer_v2_symbols.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/optimizer_v2/rmsprop.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/periodic_resample/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/periodic_resample/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/periodic_resample/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/periodic_resample/python/ops/_periodic_resample_op.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/periodic_resample/python/ops/gen_periodic_resample_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/periodic_resample/python/ops/periodic_resample_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/predictor/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/predictor/contrib_estimator_predictor.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/predictor/core_estimator_predictor.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/predictor/predictor.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/predictor/predictor_factories.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/predictor/saved_model_predictor.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/proto/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/proto/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/proto/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/proto/python/ops/decode_proto_op.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/proto/python/ops/encode_proto_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantization/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantization/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantization/python/array_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantization/python/math_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantization/python/nn_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/python/common.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/python/graph_matcher.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/python/input_to_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/python/quant_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/python/quantize.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/quantize/python/quantize_graph.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rate/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rate/rate.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/receptive_field/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/receptive_field/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/receptive_field/python/util/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/receptive_field/python/util/graph_compute_order.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/receptive_field/python/util/parse_layer_parameters.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/receptive_field/python/util/receptive_field.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/receptive_field/receptive_field_api.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/recurrent/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/recurrent/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/recurrent/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/recurrent/python/ops/functional_rnn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/recurrent/python/ops/recurrent.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/recurrent/python/recurrent_api.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/reduce_slice_ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/reduce_slice_ops/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/reduce_slice_ops/ops/gen_reduce_slice_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/reduce_slice_ops/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/reduce_slice_ops/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/reduce_slice_ops/python/ops/_reduce_slice_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/reduce_slice_ops/python/ops/reduce_slice_ops.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/remote_fused_graph/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/remote_fused_graph/pylib/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/remote_fused_graph/pylib/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/remote_fused_graph/pylib/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/remote_fused_graph/pylib/python/ops/gen_remote_fused_graph_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/remote_fused_graph/pylib/python/ops/remote_fused_graph_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/remote_fused_graph/pylib/python/ops/remote_fused_graph_ops_test.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/resampler/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/resampler/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/resampler/ops/gen_resampler_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/resampler/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/resampler/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/resampler/python/ops/_resampler_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/resampler/python/ops/resampler_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/resampler/python/ops/resampler_ops_test.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/ops/gen_gru_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/ops/gen_lstm_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/kernel_tests/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/kernel_tests/benchmarking.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/_gru_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/_lstm_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/fused_rnn_cell.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/gru_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/lstm_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/rnn.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/rnn_cell.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/tools/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/tools/checkpoint_convert.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/kernel_tests/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/kernel_tests/libtestexample.so\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/kernel_tests/rpc_op_test_base.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/kernel_tests/rpc_op_test_servicer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/kernel_tests/test_example_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/kernel_tests/test_example_pb2_grpc.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/ops/gen_rpc_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rpc/python/ops/rpc_op.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/saved_model/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/saved_model/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/saved_model/python/saved_model/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/saved_model/python/saved_model/keras_saved_model.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/saved_model/python/saved_model/reader.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/ops/gen_beam_search_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/_beam_search_ops.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/basic_decoder.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/beam_search_ops.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/decoder.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/helper.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/loss.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/seq2seq/python/ops/sampler.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/session_bundle/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/session_bundle/bundle_shim.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/session_bundle/constants.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/session_bundle/exporter.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/session_bundle/gc.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/session_bundle/manifest_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/session_bundle/session_bundle.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/signal/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/nets.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/data_decoder.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/data_provider.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/dataset.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/dataset_data_provider.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/parallel_reader.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/prefetch_queue.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/test_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/data/tfexample_decoder.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/evaluation.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/learning.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/model_analyzer.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/alexnet.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/inception.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/inception_v1.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/inception_v2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/inception_v3.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/overfeat.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/resnet_utils.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/resnet_v1.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/resnet_v2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/nets/vgg.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/queues.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/slim/python/slim/summaries.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/solvers/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/solvers/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/solvers/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/solvers/python/ops/lanczos.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/solvers/python/ops/least_squares.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/solvers/python/ops/linear_equations.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/solvers/python/ops/util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/sparsemax/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/sparsemax/python/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/sparsemax/python/ops/__init__.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/sparsemax/python/ops/sparsemax.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/sparsemax/python/ops/sparsemax_loss.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/specs/__init__.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/ConservativeSparseSparseProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/MappedSparseMatrix.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseAssign.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseBlock.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseColEtree.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseCompressedBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseCwiseBinaryOp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseCwiseUnaryOp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseDenseProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseDiagonalProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseDot.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseFuzzy.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseMap.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseMatrix.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseMatrixBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparsePermutation.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseRedux.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseRef.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseSelfAdjointView.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseSolverBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseSparseProductWithPruning.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseTranspose.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseTriangularView.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseUtil.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseView.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/TriangularSolver.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLUImpl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_Memory.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_Structs.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_SupernodalMatrix.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_Utils.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_column_bmod.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_column_dfs.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_copy_to_ucol.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_gemm_kernel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_heap_relax_snode.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_kernel_bmod.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_panel_bmod.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_panel_dfs.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_pivotL.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_pruneL.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_relax_snode.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SparseQR/SparseQR.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/StlSupport/StdDeque.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/StlSupport/StdList.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/StlSupport/StdVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/StlSupport/details.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/SuperLUSupport/SuperLUSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/UmfPackSupport/UmfPackSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/misc/Image.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/misc/Kernel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/misc/RealSvd2x2.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/misc/blas.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/misc/lapack.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/misc/lapacke.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/misc/lapacke_mangling.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/ArrayCwiseBinaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/ArrayCwiseUnaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/BlockMethods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/CommonCwiseBinaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/CommonCwiseUnaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/IndexedViewMethods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/MatrixCwiseBinaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/MatrixCwiseUnaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/Eigen/src/plugins/ReshapedMethods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/CMakeLists.txt\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/Tensor\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/TensorSymmetry\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/ThreadPool\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/README.md\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/Tensor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorArgMax.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorArgMaxSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorAssign.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorBlock.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorBroadcasting.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorChipping.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorConcatenation.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContraction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionBlocking.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionCuda.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionGpu.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionMapper.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionThreadPool.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorConversion.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorConvolution.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorConvolutionSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorCostModel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorCustomOp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDevice.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceCuda.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceDefault.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceGpu.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceThreadPool.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDimensionList.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDimensions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorEvalTo.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorEvaluator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorExecutor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorExpr.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFFT.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFixedSize.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorForcedEval.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorForwardDeclarations.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGenerator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGlobalFunctions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGpuHipCudaDefines.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGpuHipCudaUndefines.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIO.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorImagePatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIndexList.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorInflation.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorInitializer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIntDiv.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorLayoutSwap.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMacros.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMap.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMeta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMorphing.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorPadding.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorPatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorRandom.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReduction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionCuda.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorRef.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReverse.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorScan.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorShuffling.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorStorage.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorStriding.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclConvertToDeviceExpression.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExprConstructor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractAccessor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclLeafCount.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclPlaceHolderExpr.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclRun.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclTuple.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorTrace.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorTraits.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorUInt128.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorVolumePatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/DynamicSymmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/StaticSymmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/Symmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/util/TemplateGroupTheory.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/Barrier.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/EventCount.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/NonBlockingThreadPool.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/RunQueue.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadCancel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadEnvironment.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadLocal.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadPoolInterface.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadYield.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/CXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/CXX11Workarounds.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/EmulateArray.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/EmulateCXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/MaxSizeVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/FFT\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/KroneckerProduct\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/MatrixFunctions\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/SpecialFunctions\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/FFT/ei_fftw_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/FFT/ei_kissfft_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/KroneckerProduct/KroneckerTensorProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixExponential.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixFunction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixLogarithm.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixPower.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixSquareRoot.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/StemFunction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsArrayAPI.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsHalf.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsImpl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsPacketMath.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/arch/GPU/GpuSpecialFunctions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/fft2d/fft/readme.txt\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/gif_archive/COPYING\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/gif_archive/lib/gif_hash.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/gif_archive/lib/gif_lib.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/gif_archive/lib/gif_lib_private.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/grpc/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/grpc/third_party/address_sorting/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/hwloc/COPYING\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/LICENSE.md\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jccolext.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jchuff.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jconfig.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jconfigint.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdcoefct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdcol565.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdcolext.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdhuff.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdmainct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdmaster.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdmrg565.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdmrgext.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdsample.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jerror.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jinclude.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jmemsys.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jmorecfg.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jpeg_nbits_table.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jpegcomp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jpegint.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jpeglib.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jsimd.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jsimddct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jstdhuff.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jversion.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/simd/jsimd.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/assertions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/autolink.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/config.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/features.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/forwards.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/json.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/reader.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/value.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/version.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/writer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/src/lib_json/json_tool.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/src/lib_json/json_valueiterator.inl\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/kafka/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/keras_applications_archive/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/llvm/LICENSE.TXT\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/png_archive/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/python/google/protobuf/internal/_api_implementation.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/python/google/protobuf/pyext/_message.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/any.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/any.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/api.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/arena.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/arena_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/arena_test_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/arenastring.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/annotation_test_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/code_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/command_line_interface.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_enum.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_extension.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_file.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_helpers.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_map_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_message_layout_helper.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_options.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_padding_optimizer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_service.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_string_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_unittest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_unittest.inc\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_doc_comment.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_enum.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_field_base.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_helpers.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_map_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_message.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_names.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_options.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_reflection_class.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_repeated_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_repeated_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_repeated_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_source_generator_base.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_wrapper_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/importer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_context.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_doc_comment.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_enum.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_enum_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_enum_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_extension.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_extension_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_file.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_generator_factory.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_helpers.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_map_field.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_map_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_builder.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_builder_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_name_resolver.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_names.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_options.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_primitive_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_service.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_shared_code_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_string_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_string_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/js/js_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/js/well_known_types_embed.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/mock_code_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_enum.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_extension.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_file.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_helpers.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_map_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_nsobject_methods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_oneof.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/package_info.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/parser.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/php/php_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/plugin.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/plugin.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/python/python_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/ruby/ruby_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/scc.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/subprocess.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/zip_writer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/descriptor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/descriptor.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/descriptor_database.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/duration.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/dynamic_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/empty.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/extension_set.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/extension_set_inl.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Eigenvalues/SelfAdjointEigenSolver_LAPACKE.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Eigenvalues/Tridiagonalization.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/AlignedBox.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/AngleAxis.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/EulerAngles.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/Homogeneous.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/Hyperplane.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/OrthoMethods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/ParametrizedLine.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/Quaternion.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/Rotation2D.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/RotationBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/Scaling.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/Transform.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/Translation.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/Umeyama.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Geometry/arch/Geometry_SSE.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Householder/BlockHouseholder.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Householder/Householder.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Householder/HouseholderSequence.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/IterativeLinearSolvers/BasicPreconditioners.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/IterativeLinearSolvers/BiCGSTAB.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/IterativeLinearSolvers/ConjugateGradient.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/IterativeLinearSolvers/IncompleteCholesky.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/IterativeLinearSolvers/IncompleteLUT.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/IterativeLinearSolvers/IterativeSolverBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/IterativeLinearSolvers/LeastSquareConjugateGradient.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/IterativeLinearSolvers/SolveWithGuess.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/Jacobi/Jacobi.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/KLUSupport/KLUSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/LU/Determinant.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/LU/FullPivLU.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/LU/InverseImpl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/LU/PartialPivLU.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/LU/PartialPivLU_LAPACKE.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/LU/arch/Inverse_SSE.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/MetisSupport/MetisSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/OrderingMethods/Eigen_Colamd.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/OrderingMethods/Ordering.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/PaStiXSupport/PaStiXSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/PardisoSupport/PardisoSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/QR/ColPivHouseholderQR.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/QR/ColPivHouseholderQR_LAPACKE.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/QR/CompleteOrthogonalDecomposition.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/QR/FullPivHouseholderQR.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/QR/HouseholderQR.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/QR/HouseholderQR_LAPACKE.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SPQRSupport/SuiteSparseQRSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SVD/BDCSVD.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SVD/JacobiSVD.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SVD/JacobiSVD_LAPACKE.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SVD/SVDBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SVD/UpperBidiagonalization.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/AmbiVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/CompressedStorage.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/ConservativeSparseSparseProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/MappedSparseMatrix.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseAssign.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseBlock.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseColEtree.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseCompressedBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseCwiseBinaryOp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseCwiseUnaryOp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseDenseProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseDiagonalProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseDot.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseFuzzy.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseMap.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseMatrix.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseMatrixBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparsePermutation.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseRedux.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseRef.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseSelfAdjointView.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseSolverBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseSparseProductWithPruning.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseTranspose.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseTriangularView.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseUtil.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/SparseView.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseCore/TriangularSolver.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLUImpl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_Memory.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_Structs.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_SupernodalMatrix.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_Utils.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_column_bmod.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_column_dfs.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_copy_to_ucol.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_gemm_kernel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_heap_relax_snode.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_kernel_bmod.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_panel_bmod.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_panel_dfs.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_pivotL.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_pruneL.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseLU/SparseLU_relax_snode.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SparseQR/SparseQR.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/StlSupport/StdDeque.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/StlSupport/StdList.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/StlSupport/StdVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/StlSupport/details.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/SuperLUSupport/SuperLUSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/UmfPackSupport/UmfPackSupport.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/misc/Image.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/misc/Kernel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/misc/RealSvd2x2.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/misc/blas.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/misc/lapack.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/misc/lapacke.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/misc/lapacke_mangling.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/ArrayCwiseBinaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/ArrayCwiseUnaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/BlockMethods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/CommonCwiseBinaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/CommonCwiseUnaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/IndexedViewMethods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/MatrixCwiseBinaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/MatrixCwiseUnaryOps.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/Eigen/src/plugins/ReshapedMethods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/CMakeLists.txt\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/Tensor\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/TensorSymmetry\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/ThreadPool\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/README.md\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/Tensor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorArgMax.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorArgMaxSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorAssign.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorBase.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorBlock.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorBroadcasting.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorChipping.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorConcatenation.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContraction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionBlocking.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionCuda.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionGpu.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionMapper.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorContractionThreadPool.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorConversion.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorConvolution.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorConvolutionSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorCostModel.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorCustomOp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDevice.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceCuda.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceDefault.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceGpu.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceThreadPool.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDimensionList.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDimensions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorEvalTo.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorEvaluator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorExecutor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorExpr.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFFT.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFixedSize.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorForcedEval.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorForwardDeclarations.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGenerator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGlobalFunctions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGpuHipCudaDefines.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGpuHipCudaUndefines.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIO.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorImagePatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIndexList.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorInflation.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorInitializer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIntDiv.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorLayoutSwap.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMacros.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMap.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMeta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMorphing.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorPadding.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorPatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorRandom.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReduction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionCuda.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorRef.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReverse.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorScan.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorShuffling.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorStorage.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorStriding.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclConvertToDeviceExpression.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExprConstructor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractAccessor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclLeafCount.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclPlaceHolderExpr.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclRun.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclTuple.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorTrace.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorTraits.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorUInt128.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorVolumePatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/DynamicSymmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/StaticSymmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/Symmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/util/TemplateGroupTheory.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/Barrier.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/EventCount.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/NonBlockingThreadPool.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/RunQueue.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadCancel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadEnvironment.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadLocal.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadPoolInterface.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadYield.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/CXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/CXX11Workarounds.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/EmulateArray.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/EmulateCXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/MaxSizeVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/FFT\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/KroneckerProduct\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/MatrixFunctions\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/SpecialFunctions\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/FFT/ei_fftw_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/FFT/ei_kissfft_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/KroneckerProduct/KroneckerTensorProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixExponential.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixFunction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixLogarithm.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixPower.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixSquareRoot.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/StemFunction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsArrayAPI.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsHalf.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsImpl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsPacketMath.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/arch/GPU/GpuSpecialFunctions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/stream_executor/platform/thread_annotations.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorArgMaxSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclConvertToDeviceExpression.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExprConstructor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractAccessor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclLeafCount.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclPlaceHolderExpr.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclRun.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclTuple.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/util/EmulateCXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/libtensorflow_framework.so.1\n /usr/local/lib/python3.6/dist-packages/tensorflow/lite/toco/python/_tensorflow_wrap_toco.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/lite/toco/python/tensorflow_wrap_toco.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/converters/side_effect_guards.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/core/function_wrapping.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/pyct/compiler.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/experimental/ops/sleep.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/kernel_tests/filter_test_base.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/debug/examples/debug_errors.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/python/debug/examples/debug_fibonacci.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/debug/examples/debug_keras.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/debug/examples/debug_tflearn_iris.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execution_callbacks.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/indexed_slices_tensor_spec.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/sparse_tensor_spec.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/applications/resnet50.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/distribute/keras_lstm_model_correctness_test.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/saved_model.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ragged/ragged_tensor_spec.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ragged/ragged_test_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/spectral_ops_test_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/profiler/profile_context.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/object_identity.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/tools/graph_transforms/__init__.py\nProceed (y/n)? y\n Successfully uninstalled tensorflow-2.2.0rc2\nUninstalling tensorflow-gpu-1.14.0:\n Would remove:\n /usr/local/bin/freeze_graph\n /usr/local/lib/python3.6/dist-packages/tensorflow/_api/v1/*\n /usr/local/lib/python3.6/dist-packages/tensorflow/compiler/tf2tensorrt/python/*\n /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/*\n /usr/local/lib/python3.6/dist-packages/tensorflow/core/profiler/op_profile_pb2.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/examples/tutorials/*\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/Eigen/src/Core/arch/GPU/Half.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/Eigen/src/Core/arch/GPU/PacketMath.h.orig\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/Eigen/src/Core/arch/GPU/PacketMathHalf.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/KinesisClient.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/KinesisEndpoint.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/KinesisErrorMarshaller.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/KinesisErrors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/KinesisRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/Kinesis_EXPORTS.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/AddTagsToStreamRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/Consumer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/ConsumerDescription.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/ConsumerStatus.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/CreateStreamRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DecreaseStreamRetentionPeriodRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DeleteStreamRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DeregisterStreamConsumerRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DescribeLimitsRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DescribeLimitsResult.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DescribeStreamConsumerRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DescribeStreamConsumerResult.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DescribeStreamRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DescribeStreamResult.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DescribeStreamSummaryRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DescribeStreamSummaryResult.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DisableEnhancedMonitoringRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/DisableEnhancedMonitoringResult.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/EnableEnhancedMonitoringRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/EnableEnhancedMonitoringResult.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/EncryptionType.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/EnhancedMetrics.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/GetRecordsRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/GetRecordsResult.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/GetShardIteratorRequest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/GetShardIteratorResult.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/HashKeyRange.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/aws/aws-cpp-sdk-kinesis/include/aws/kinesis/model/IncreaseStreamRetentionPeriodRequest.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/RunQueue.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadCancel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadEnvironment.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadLocal.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadPoolInterface.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadYield.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/CXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/CXX11Workarounds.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/EmulateArray.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/EmulateCXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/MaxSizeVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/FFT\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/KroneckerProduct\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/MatrixFunctions\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/SpecialFunctions\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/FFT/ei_fftw_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/FFT/ei_kissfft_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/KroneckerProduct/KroneckerTensorProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixExponential.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixFunction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixLogarithm.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixPower.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixSquareRoot.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/StemFunction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsArrayAPI.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsFunctors.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsHalf.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsImpl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsPacketMath.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/arch/GPU/GpuSpecialFunctions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/fft2d/fft/readme.txt\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/gif_archive/COPYING\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/gif_archive/lib/gif_hash.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/gif_archive/lib/gif_lib.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/gif_archive/lib/gif_lib_private.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/grpc/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/grpc/third_party/address_sorting/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/hwloc/COPYING\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/LICENSE.md\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jccolext.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jchuff.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jconfig.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jconfigint.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdcoefct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdcol565.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdcolext.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdhuff.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdmainct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdmaster.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdmrg565.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdmrgext.c\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jdsample.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jerror.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jinclude.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jmemsys.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jmorecfg.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jpeg_nbits_table.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jpegcomp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jpegint.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jpeglib.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jsimd.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jsimddct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jstdhuff.c\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/jversion.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jpeg/simd/jsimd.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/assertions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/autolink.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/config.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/features.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/forwards.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/json.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/reader.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/value.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/version.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/include/json/writer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/src/lib_json/json_tool.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/jsoncpp_git/src/lib_json/json_valueiterator.inl\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/kafka/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/keras_applications_archive/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/llvm/LICENSE.TXT\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/png_archive/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/LICENSE\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/python/google/protobuf/internal/_api_implementation.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/python/google/protobuf/pyext/_message.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/any.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/any.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/api.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/arena.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/arena_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/arena_test_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/arenastring.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/annotation_test_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/code_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/command_line_interface.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_enum.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_extension.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_file.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_helpers.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_map_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_message_layout_helper.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_options.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_padding_optimizer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_service.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_string_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_unittest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/cpp/cpp_unittest.inc\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_doc_comment.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_enum.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_field_base.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_helpers.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_map_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_message.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_names.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_options.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_reflection_class.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_repeated_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_repeated_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_repeated_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_source_generator_base.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/csharp/csharp_wrapper_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/importer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_context.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_doc_comment.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_enum.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_enum_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_enum_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_extension.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_extension_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_file.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_generator_factory.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_helpers.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_map_field.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_map_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_builder.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_builder_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_message_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_name_resolver.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_names.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_options.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_primitive_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_service.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_shared_code_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_string_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/java/java_string_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/js/js_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/js/well_known_types_embed.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/mock_code_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_enum.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_enum_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_extension.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_file.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_helpers.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_map_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_message_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_nsobject_methods.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_oneof.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/objectivec/objectivec_primitive_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/package_info.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/parser.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/php/php_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/plugin.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/plugin.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/python/python_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/ruby/ruby_generator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/scc.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/subprocess.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/compiler/zip_writer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/descriptor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/descriptor.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/descriptor_database.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/duration.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/dynamic_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/empty.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/extension_set.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/extension_set_inl.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/field_mask.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/generated_enum_reflection.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/generated_enum_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/generated_message_reflection.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/generated_message_table_driven.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/generated_message_table_driven_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/generated_message_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/has_bits.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/implicit_weak_message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/inlined_string_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/coded_stream.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/coded_stream_inl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/gzip_stream.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/package_info.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/printer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/strtod.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/tokenizer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/zero_copy_stream.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/zero_copy_stream_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/io/zero_copy_stream_impl_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_entry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_entry_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_field_inl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_field_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_lite_test_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_test_util.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_test_util_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/map_type_handler.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/message.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/message_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/message_unittest.inc\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/metadata.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/metadata_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/package_info.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/parse_context.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/port.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/port_def.inc\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/port_undef.inc\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/proto3_lite_unittest.inc\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/reflection.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/reflection_internal.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/reflection_ops.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/repeated_field.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/service.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/source_context.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/struct.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/bytestream.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/callback.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/casts.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/common.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/fastmem.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/hash.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/int128.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/io_win32.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/logging.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/macros.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/map_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/mathlimits.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/mathutil.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/mutex.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/once.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/platform_macros.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/port.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/status.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/status_macros.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/statusor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/stl_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/stringpiece.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/stringprintf.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/strutil.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/substitute.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/template_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/stubs/time.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/test_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/test_util.inc\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/test_util2.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/test_util_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/testing/file.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/testing/googletest.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/text_format.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/timestamp.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/type.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/unknown_field_set.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/delimited_message_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/field_comparator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/field_mask_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/constants.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/datapiece.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/default_value_objectwriter.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/error_listener.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/expecting_objectwriter.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/field_mask_utility.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/json_escaping.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/json_objectwriter.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/json_stream_parser.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/location_tracker.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/mock_error_listener.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/object_location_tracker.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/object_source.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/object_writer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/proto_writer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/protostream_objectsource.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/protostream_objectwriter.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/structured_objectwriter.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/type_info.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/type_info_test_helper.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/internal/utility.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/json_util.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/message_differencer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/package_info.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/time_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/type_resolver.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/util/type_resolver_util.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/wire_format.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/wire_format_lite.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/wire_format_lite_inl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/protobuf_archive/src/google/protobuf/wrappers.pb.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/crc32.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/deflate.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/gzguts.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/inffast.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/inffixed.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/inflate.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/inftrees.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/trees.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/zconf.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/zlib.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/external/zlib_archive/zutil.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/google/protobuf/stubs/io_win32.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/google/protobuf/wire_format_lite_inl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/example/example.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/example/example.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/example/feature.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/example/feature.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/allocation_description.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/allocation_description.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/api_def.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/api_def.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/attr_value.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/attr_value.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/cost_graph.pb_text-impl.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/cost_graph.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/device_attributes.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/device_attributes.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/function.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/function.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/graph.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/graph.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/graph_transfer_info.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/graph_transfer_info.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/kernel_def.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/kernel_def.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/log_memory.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/log_memory.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/node_def.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/node_def.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/op_def.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/op_def.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/reader_base.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/reader_base.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/remote_fused_graph_execute_info.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/remote_fused_graph_execute_info.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/resource_handle.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/resource_handle.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/step_stats.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/step_stats.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/summary.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/summary.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/tensor.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/tensor.pb_text.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_description.pb_text-impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/tensor_description.pb_text.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorCustomOp.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDevice.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceCuda.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceDefault.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceGpu.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceThreadPool.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDimensionList.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorDimensions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorEvalTo.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorEvaluator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorExecutor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorExpr.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFFT.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFixedSize.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorForcedEval.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorForwardDeclarations.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGenerator.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGlobalFunctions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGpuHipCudaDefines.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorGpuHipCudaUndefines.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIO.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorImagePatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIndexList.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorInflation.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorInitializer.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorIntDiv.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorLayoutSwap.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMacros.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMap.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMeta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorMorphing.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorPadding.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorPatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorRandom.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReduction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionCuda.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionGpu.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReductionSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorRef.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorReverse.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorScan.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorShuffling.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorStorage.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorStriding.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclConvertToDeviceExpression.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExprConstructor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractAccessor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclLeafCount.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclPlaceHolderExpr.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclRun.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclTuple.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorTrace.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorTraits.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorUInt128.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorVolumePatch.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/DynamicSymmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/StaticSymmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/Symmetry.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/TensorSymmetry/util/TemplateGroupTheory.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/Barrier.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/EventCount.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/NonBlockingThreadPool.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/RunQueue.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadCancel.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadEnvironment.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadLocal.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadPoolInterface.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/ThreadPool/ThreadYield.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/CXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/CXX11Workarounds.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/EmulateArray.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/EmulateCXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/util/MaxSizeVector.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/FFT\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/KroneckerProduct\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/MatrixFunctions\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/SpecialFunctions\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/FFT/ei_fftw_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/FFT/ei_kissfft_impl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/KroneckerProduct/KroneckerTensorProduct.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixExponential.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixFunction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixLogarithm.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixPower.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/MatrixSquareRoot.h\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/MatrixFunctions/StemFunction.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsArrayAPI.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsHalf.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsImpl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/SpecialFunctionsPacketMath.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/src/SpecialFunctions/arch/GPU/GpuSpecialFunctions.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/stream_executor/platform/thread_annotations.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorArgMaxSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSycl.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclConvertToDeviceExpression.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExprConstructor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractAccessor.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclExtractFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclFunctors.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclLeafCount.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclPlaceHolderExpr.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclRun.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/Tensor/TensorSyclTuple.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/include/unsupported/Eigen/CXX11/src/util/EmulateCXX11Meta.h\n /usr/local/lib/python3.6/dist-packages/tensorflow/libtensorflow_framework.so.1\n /usr/local/lib/python3.6/dist-packages/tensorflow/lite/toco/python/_tensorflow_wrap_toco.so\n /usr/local/lib/python3.6/dist-packages/tensorflow/lite/toco/python/tensorflow_wrap_toco.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/converters/side_effect_guards.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/core/function_wrapping.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/pyct/compiler.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/experimental/ops/sleep.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/data/kernel_tests/filter_test_base.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/debug/examples/debug_errors.py\n 
/usr/local/lib/python3.6/dist-packages/tensorflow/python/debug/examples/debug_fibonacci.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/debug/examples/debug_keras.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/debug/examples/debug_tflearn_iris.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execution_callbacks.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/indexed_slices_tensor_spec.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/sparse_tensor_spec.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/applications/resnet50.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/distribute/keras_lstm_model_correctness_test.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/saved_model.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ragged/ragged_tensor_spec.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/ragged/ragged_test_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/spectral_ops_test_util.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/profiler/profile_context.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/object_identity.py\n /usr/local/lib/python3.6/dist-packages/tensorflow/tools/graph_transforms/*\n /usr/local/lib/python3.6/dist-packages/tensorflow_gpu-1.14.0.dist-info/*\nProceed (y/n)? y\n Successfully uninstalled tensorflow-gpu-1.14.0\nCollecting tensorflow-gpu==1.14.0\n Using cached https://files.pythonhosted.org/packages/76/04/43153bfdfcf6c9a4c38ecdb971ca9a75b9a791bb69a764d652c359aca504/tensorflow_gpu-1.14.0-cp36-cp36m-manylinux1_x86_64.whl\nRequirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (1.0.8)\nRequirement already satisfied: numpy<2.0,>=1.14.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (1.18.2)\nRequirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (0.9.0)\nRequirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (0.8.1)\nRequirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (1.12.0)\nRequirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (3.10.0)\nRequirement already satisfied: tensorboard<1.15.0,>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (1.14.0)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (1.1.0)\nRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (1.12.1)\nRequirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (0.3.3)\nRequirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (1.27.2)\nRequirement already satisfied: tensorflow-estimator<1.15.0rc0,>=1.14.0rc0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (1.14.0)\nRequirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (0.34.2)\nRequirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from 
tensorflow-gpu==1.14.0) (1.1.0)\nRequirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.14.0) (0.2.0)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.6->tensorflow-gpu==1.14.0) (2.10.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tensorflow-gpu==1.14.0) (46.0.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.15.0,>=1.14.0->tensorflow-gpu==1.14.0) (3.2.1)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.15.0,>=1.14.0->tensorflow-gpu==1.14.0) (1.0.0)\nInstalling collected packages: tensorflow-gpu\nSuccessfully installed tensorflow-gpu-1.14.0\n" ], [ "!pip install libml", "Collecting libml\n Downloading https://files.pythonhosted.org/packages/c3/10/3a58547058f5197bc19a9d252b9a68368d6707d52dac058f263c7201c294/LibML-0.1.03.tar.gz\nBuilding wheels for collected packages: libml\n Building wheel for libml (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for libml: filename=LibML-0.1.3-cp36-none-any.whl size=13943 sha256=6f23ea4c208b0d70794459aad82acdccf79fb977410ae3b00c4898fe1f1c1fdd\n Stored in directory: /root/.cache/pip/wheels/1a/ae/37/503c2ec3bede9b9b9ad23995f16eee1f2a36367b8feb411f4b\nSuccessfully built libml\nInstalling collected packages: libml\nSuccessfully installed libml-0.1.3\n" ], [ "import os\nos.environ['ML_DATA'] = './datasets'", "_____no_output_____" ], [ "%set_env PYTHONPATH=$PYTHONPATH:.", "env: PYTHONPATH=$PYTHONPATH:.\n" ], [ "!CUDA_VISIBLE_DEVICES= ./scripts/create_datasets.py\n!cp $ML_DATA/svhn-test.tfrecord $ML_DATA/svhn_noextra-test.tfrecord", "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource 
= np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\nPreparing cifar10\n2020-04-02 19:33:54.468793: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1\n2020-04-02 19:33:54.506662: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\n2020-04-02 19:33:54.506718: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: d1e534b23fc1\n2020-04-02 19:33:54.506731: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: d1e534b23fc1\n2020-04-02 19:33:54.506787: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 418.67.0\n2020-04-02 19:33:54.506818: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 418.67.0\n2020-04-02 19:33:54.506828: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 418.67.0\n2020-04-02 19:33:54.507172: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\n2020-04-02 19:33:54.511865: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz\n2020-04-02 19:33:54.512065: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x182ea00 executing computations on platform Host. Devices:\n2020-04-02 19:33:54.512094: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>\nWARNING:tensorflow:From ./scripts/create_datasets.py:47: The name tf.placeholder is deprecated. 
Please use tf.compat.v1.placeholder instead.\n\nW0402 19:33:54.512559 139865036998528 deprecation_wrapper.py:119] From ./scripts/create_datasets.py:47: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nSaving dataset: ./datasets/cifar10-train.tfrecord\nWARNING:tensorflow:From ./scripts/create_datasets.py:193: The name tf.python_io.TFRecordWriter is deprecated. Please use tf.io.TFRecordWriter instead.\n\nW0402 19:34:26.097883 139865036998528 deprecation_wrapper.py:119] From ./scripts/create_datasets.py:193: The name tf.python_io.TFRecordWriter is deprecated. Please use tf.io.TFRecordWriter instead.\n\nBuilding records: 100% 50000/50000 [00:02<00:00, 18869.20it/s]\nSaved: ./datasets/cifar10-train.tfrecord\nSaving dataset: ./datasets/cifar10-test.tfrecord\nBuilding records: 100% 10000/10000 [00:00<00:00, 17602.50it/s]\nSaved: ./datasets/cifar10-test.tfrecord\nPreparing cifar100\nSaving dataset: ./datasets/cifar100-train.tfrecord\nBuilding records: 100% 50000/50000 [00:02<00:00, 18726.10it/s]\nSaved: ./datasets/cifar100-train.tfrecord\nSaving dataset: ./datasets/cifar100-test.tfrecord\nBuilding records: 100% 10000/10000 [00:00<00:00, 17102.37it/s]\nSaved: ./datasets/cifar100-test.tfrecord\nPreparing svhn\ntcmalloc: large alloc 1631641600 bytes == 0x53082000 @ 0x7f34dd9591e7 0x5929fc 0x7f34d04ec6da 0x7f34d070e873 0x7f34d071302d 0x7f34d0717d1f 0x7f34d0723550 0x7f34d07104f5 0x50ac25 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50c5b9 0x509d48 0x50aa7d 0x50c5b9 0x509d48 0x50aa7d 0x50c5b9 0x509d48 0x50aa7d 0x50c5b9\nPNG Encoding: 76% 401725/531131 [03:25<01:07, 1930.26it/s]Traceback (most recent call last):\n File \"/usr/lib/python3.6/contextlib.py\", line 99, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py\", line 5652, in get_controller\n yield g\n File \"./scripts/create_datasets.py\", line 50, in _encode_png\n raw.append(sess.run(to_png, feed_dict={image_x: images[x]}))\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py\", line 950, in run\n run_metadata_ptr)\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py\", line 1173, in _run\n feed_dict_tensor, options, run_metadata)\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py\", line 1350, in _do_run\n run_metadata)\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py\", line 1356, in _do_call\n return fn(*args)\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py\", line 1341, in _run_fn\n options, feed_dict, fetch_list, target_list, run_metadata)\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py\", line 1429, in _call_tf_sessionrun\n run_metadata)\nKeyboardInterrupt\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"./scripts/create_datasets.py\", line 265, in <module>\n app.run(main)\n File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 299, in run\n _run_main(main, args)\n File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 250, in _run_main\n sys.exit(main(argv))\n File \"./scripts/create_datasets.py\", line 248, in main\n datas = config['loader']()\n File \"./scripts/create_datasets.py\", line 62, in _load_svhn\n dataset['images'] = _encode_png(dataset['images'])\n File 
\"./scripts/create_datasets.py\", line 50, in _encode_png\n raw.append(sess.run(to_png, feed_dict={image_x: images[x]}))\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py\", line 1616, in __exit__\n name='SessionCloseThread', target=self.close)\n File \"/usr/lib/python3.6/threading.py\", line 793, in __init__\n self._started = Event()\n File \"/usr/lib/python3.6/threading.py\", line 499, in __init__\n self._cond = Condition(Lock())\n File \"/usr/lib/python3.6/threading.py\", line 218, in __init__\n self._lock = lock\nKeyboardInterrupt\ncp: cannot stat './datasets/svhn-test.tfrecord': No such file or directory\n" ], [ "%%shell\n# Create unlabeled datasets\nCUDA_VISIBLE_DEVICES= scripts/create_unlabeled.py $ML_DATA/SSL2/svhn $ML_DATA/svhn-train.tfrecord $ML_DATA/svhn-extra.tfrecord &\nCUDA_VISIBLE_DEVICES= scripts/create_unlabeled.py $ML_DATA/SSL2/svhn_noextra $ML_DATA/svhn-train.tfrecord &\nCUDA_VISIBLE_DEVICES= scripts/create_unlabeled.py $ML_DATA/SSL2/cifar10 $ML_DATA/cifar10-train.tfrecord &\nCUDA_VISIBLE_DEVICES= scripts/create_unlabeled.py $ML_DATA/SSL2/cifar100 $ML_DATA/cifar100-train.tfrecord &\nCUDA_VISIBLE_DEVICES= scripts/create_unlabeled.py $ML_DATA/SSL2/stl10 $ML_DATA/stl10-train.tfrecord $ML_DATA/stl10-unlabeled.tfrecord &\nwait", "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a 
future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym 
of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: 
Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\nTraceback (most recent call last):\n File \"scripts/create_unlabeled.py\", line 105, in <module>\n app.run(main)\n File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 299, in run\n _run_main(main, args)\n File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 250, in 
_run_main\n sys.exit(main(argv))\n File \"scripts/create_unlabeled.py\", line 39, in main\n raise FileNotFoundError(argv[1:])\nFileNotFoundError: ['./datasets/svhn-train.tfrecord']\nComputing class distribution\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", 
np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\nTraceback (most recent call last):\n File \"scripts/create_unlabeled.py\", line 105, in <module>\n app.run(main)\n File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 299, in run\n _run_main(main, args)\n File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 250, in _run_main\n sys.exit(main(argv))\n File \"scripts/create_unlabeled.py\", line 39, in main\n raise FileNotFoundError(argv[1:])\nFileNotFoundError: ['./datasets/stl10-train.tfrecord', './datasets/stl10-unlabeled.tfrecord']\n2020-04-02 19:41:46.851812: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\n0it [00:00, ?it/s]Computing class distribution\n1024it [00:00, 8230.84it/s]/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n2020-04-02 19:41:47.126665: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\n7168it [00:00, 12348.00it/s]Traceback (most recent call last):\n File \"scripts/create_unlabeled.py\", line 105, in <module>\n app.run(main)\n File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 299, in run\n _run_main(main, args)\n File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 250, in _run_main\n sys.exit(main(argv))\n File \"scripts/create_unlabeled.py\", line 39, in main\n raise FileNotFoundError(argv[1:])\nFileNotFoundError: 
['./datasets/svhn-train.tfrecord', './datasets/svhn-extra.tfrecord']\n50000 records found\n Stats 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00\nCreating unlabeled dataset for in ./datasets/SSL2/cifar100\n50000 records found\n Stats 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00\nCreating unlabeled dataset for in ./datasets/SSL2/cifar10\nWriting records: 100% 50000/50000 [00:03<00:00, 16008.28it/s]\nWriting records: 100% 50000/50000 [00:03<00:00, 14353.14it/s]\n" ], [ "%%shell\n# Create semi-supervised subsets\nfor seed in 0 1 2 3 4 5; do\n for size in 10 20 30 40 100 250 1000 4000; do\n CUDA_VISIBLE_DEVICES= scripts/create_split.py --seed=$seed --size=$size $ML_DATA/SSL2/svhn $ML_DATA/svhn-train.tfrecord $ML_DATA/svhn-extra.tfrecord &\n CUDA_VISIBLE_DEVICES= scripts/create_split.py --seed=$seed --size=$size $ML_DATA/SSL2/svhn_noextra $ML_DATA/svhn-train.tfrecord &\n CUDA_VISIBLE_DEVICES= scripts/create_split.py --seed=$seed --size=$size $ML_DATA/SSL2/cifar10 $ML_DATA/cifar10-train.tfrecord &\n done\n for size in 400 1000 2500 10000; do\n CUDA_VISIBLE_DEVICES= scripts/create_split.py --seed=$seed --size=$size $ML_DATA/SSL2/cifar100 $ML_DATA/cifar100-train.tfrecord &\n done\n CUDA_VISIBLE_DEVICES= scripts/create_split.py --seed=$seed --size=1000 $ML_DATA/SSL2/stl10 $ML_DATA/stl10-train.tfrecord $ML_DATA/stl10-unlabeled.tfrecord &\n wait\ndone", "_____no_output_____" ], [ "!CUDA_VISIBLE_DEVICES= scripts/create_split.py --seed=1 --size=5000 $ML_DATA/SSL2/stl10 $ML_DATA/stl10-train.tfrecord $ML_DATA/stl10-unlabeled.tfrecord", "_____no_output_____" ], [ "#Run this for CIFAR-10 seed=3 and size=40 only\n!CUDA_VISIBLE_DEVICES= scripts/create_split.py --seed=3 --size=40 $ML_DATA/SSL2/cifar10 $ML_DATA/cifar10-train.tfrecord &", "_____no_output_____" ], [ "!CUDA_VISIBLE_DEVICES=0 python fixmatch.py --filters=32 --dataset=cifar10.3@40-1 --train_dir ./experiments/fixmatch", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d8f859fef5a2dccc6870b4083e4e8c85508882
93,441
ipynb
Jupyter Notebook
NFL (1).ipynb
humberhutch/NFLAnalysis
4e2fd11cb8df452f42327a0cff6e2792a13e27ec
[ "Apache-2.0" ]
null
null
null
NFL (1).ipynb
humberhutch/NFLAnalysis
4e2fd11cb8df452f42327a0cff6e2792a13e27ec
[ "Apache-2.0" ]
null
null
null
NFL (1).ipynb
humberhutch/NFLAnalysis
4e2fd11cb8df452f42327a0cff6e2792a13e27ec
[ "Apache-2.0" ]
null
null
null
42.981141
5,310
0.361886
[ [ [ "#Get NFL Player Data\r\n\r\n", "_____no_output_____" ] ], [ [ "import requests\r\nfrom pandas.io.json import json_normalize\r\nimport pandas as pd\r\nimport requests\r\n\r\n# https://sportsdata.io/developers/api-documentation/nfl\r\n\r\n# Player overall information\r\n#url = \"https://api.sportsdata.io/v3/nfl/scores/json/Players?key=d072122708d34423857116889b72f55b\"\r\n\r\n# Player Season stats for 2020\r\nurl = \"https://api.sportsdata.io/v3/nfl/stats/json/PlayerSeasonStats/2020?key=d072122708d34423857116889b72f55b\"\r\n\r\n# create a dataframe from data\r\ndf = pd.read_json(url)\r\n\r\nurl2019 = \"https://api.sportsdata.io/v3/nfl/stats/json/PlayerSeasonStats/2019?key=d072122708d34423857116889b72f55b\"\r\ndf2019 = pd.read_json(url)\r\ndf.append(df2019, ignore_index = True) \r\n\r\n\r\nurl2018 = \"https://api.sportsdata.io/v3/nfl/stats/json/PlayerSeasonStats/2018?key=d072122708d34423857116889b72f55b\"\r\ndf2018 = pd.read_json(url)\r\ndf.append(df2018, ignore_index = True) \r\n\r\n\r\ndf.shape[0] # number of players that played in 2018,2019, 2020", "_____no_output_____" ] ], [ [ "# Show the first few rows of data returned - All players", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ] ], [ [ "# Focus on Wide Receivers", "_____no_output_____" ] ], [ [ "wr = df[ df['Position'] =='WR' ]\r\nprint (wr.shape) # Number of players (rows) and attributes (columns)", "(278, 140)\n" ], [ "# remove players with few games played or less than 10 Receiving Yards\r\nwr = wr[ wr['Played'] >10]\r\nwr = wr[ wr['ReceivingYards'] >10]\r\n\r\nwr.describe()", "_____no_output_____" ], [ "yardsPerGame = wr['ReceivingYards']/wr['Played']\r\n\r\nwr['yardsPerGame'] = yardsPerGame", "_____no_output_____" ], [ "# sample 2 rows from the dataframe\r\nwr.sample(2)", "_____no_output_____" ] ], [ [ "# Create a histogram of the Yards Per Game", "_____no_output_____" ] ], [ [ "wr['yardsPerGame'].hist()\r\n", "_____no_output_____" ] ], [ [ "# Boxplot to show distribution ", "_____no_output_____" ] ], [ [ "wr['yardsPerGame'].plot.box();", "_____no_output_____" ] ], [ [ "# Keep the main columns for analysis", "_____no_output_____" ] ], [ [ "colsKeep = ['PlayerID', 'Season','Team', 'Activated','Played','Started','ReceivingTargets',\t'Receptions',\t'ReceivingYards',\t'ReceivingYardsPerReception','ReceivingTouchdowns','ReceivingLong','yardsPerGame']\r\n", "_____no_output_____" ], [ "new_wr = wr[colsKeep]\r\nnew_wr.head()", "_____no_output_____" ] ], [ [ "# Retrieve data for all players for past 3 years and add salary for analysis\r\n", "_____no_output_____" ] ], [ [ "new_wr.groupby(['Team']).mean()['yardsPerGame']", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7d8f8f73524a5267c6f1b92031ed9566e66d2bc
81,180
ipynb
Jupyter Notebook
lessons/ETLPipelines/12_dummyvariables_exercise/12_dummyvariables_exercise.ipynb
rabadzhiyski/Data_Science_Udacity
8285515a765b7b5737b55e02714c9b27da4201e7
[ "MIT" ]
null
null
null
lessons/ETLPipelines/12_dummyvariables_exercise/12_dummyvariables_exercise.ipynb
rabadzhiyski/Data_Science_Udacity
8285515a765b7b5737b55e02714c9b27da4201e7
[ "MIT" ]
null
null
null
lessons/ETLPipelines/12_dummyvariables_exercise/12_dummyvariables_exercise.ipynb
rabadzhiyski/Data_Science_Udacity
8285515a765b7b5737b55e02714c9b27da4201e7
[ "MIT" ]
null
null
null
49.021739
500
0.515447
[ [ [ "# Dummy Variables Exercise\n\nIn this exercise, you'll create dummy variables from the projects data set. The idea is to transform categorical data like this:\n\n| Project ID | Project Category |\n|------------|------------------|\n| 0 | Energy |\n| 1 | Transportation |\n| 2 | Health |\n| 3 | Employment |\n\ninto new features that look like this:\n\n| Project ID | Energy | Transportation | Health | Employment |\n|------------|--------|----------------|--------|------------|\n| 0 | 1 | 0 | 0 | 0 |\n| 1 | 0 | 1 | 0 | 0 |\n| 2 | 0 | 0 | 1 | 0 |\n| 3 | 0 | 0 | 0 | 1 |\n\n\n(Note if you were going to use this data with a model influenced by multicollinearity, you would want to eliminate one of the columns to avoid redundant information.) \n\nThe reasoning behind these transformations is that machine learning algorithms read in numbers not text. Text needs to be converted into numbers. You could assign a number to each category like 1, 2, 3, and 4. But a categorical variable has no inherent order.\n\nPandas makes it very easy to create dummy variables with the [get_dummies](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html) method. In this exercise, you'll create dummy variables from the World Bank projects data; however, there's a caveat. The World Bank data is not particularly clean, so you'll need to explore and wrangle the data first.\n\nYou'll focus on the text values in the sector variables.\n\nRun the code cells below to read in the World Bank projects data set and then to filter out the data for text variables. ", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\n# read in the projects data set and do basic wrangling \nprojects = pd.read_csv('../data/projects_data.csv', dtype=str)\nprojects.drop('Unnamed: 56', axis=1, inplace=True)\nprojects['totalamt'] = pd.to_numeric(projects['totalamt'].str.replace(',', ''))\nprojects['countryname'] = projects['countryname'].str.split(';', expand=True)[0]\nprojects['boardapprovaldate'] = pd.to_datetime(projects['boardapprovaldate'])\n\n# keep the project name, lending, sector and theme data\nsector = projects.copy()\nsector = sector[['project_name', 'lendinginstr', 'sector1', 'sector2', 'sector3', 'sector4', 'sector5', 'sector',\n 'mjsector1', 'mjsector2', 'mjsector3', 'mjsector4', 'mjsector5',\n 'mjsector', 'theme1', 'theme2', 'theme3', 'theme4', 'theme5', 'theme ',\n 'goal', 'financier', 'mjtheme1name', 'mjtheme2name', 'mjtheme3name',\n 'mjtheme4name', 'mjtheme5name']]", "_____no_output_____" ] ], [ [ "Run the code cell below. This cell shows the percentage of each variable that is null. Notice the mjsector1 through mjsector5 variables are all null. The mjtheme1name through mjtheme5name are also all null as well as the theme variable. \n\nBecause these variables contain so many null values, they're probably not very useful.", "_____no_output_____" ] ], [ [ "# output percentage of values that are missing\n100 * sector.isnull().sum() / sector.shape[0]", "_____no_output_____" ] ], [ [ "The sector1 variable looks promising; it doesn't contain any null values at all. In the next cell, store the unique sector1 values in a list and output the results. Use the sort_values() and unique() methods.", "_____no_output_____" ] ], [ [ "# TODO: Create a list of the unique values in sector1. Use the sort_values() and unique() pandas methods. 
\n# And then convert those results into a Python list\nuniquesectors1 = list(sector['sector1'].sort_values().unique())\nuniquesectors1", "_____no_output_____" ], [ "# run this code cell to see the number of unique values\nprint('Number of unique values in sector1:', len(uniquesectors1))", "Number of unique values in sector1: 3060\n" ] ], [ [ "3060 different categories is quite a lot! Remember that with dummy variables, if you have n categorical values, you need n - 1 new variables! That means 3059 extra columns! \n\n# Exercise 2\n\nThere are a few issues with this 'sector1' variable. First, there are values labeled '!$!0'. These should be substituted with NaN.\n\nFurthermore, each sector1 value ends with a ten or eleven character string like '!$!49!$!EP'. Some sectors show up twice in the list like:\n 'Other Industry; Trade and Services!$!70!$!YZ',\n 'Other Industry; Trade and Services!$!63!$!YZ',\n\nBut it seems like those are actually the same sector. You'll need to remove everything past the exclamation point. \n\nMany values in the sector1 variable start with the term '(Historic)'. Try removing that phrase as well.\n\n### replace() method\n\nWith pandas, you can use the replace() method to search for text and replace parts of a string with another string. If you know the exact string you're looking for, the replace() method is straightforward. For example, say you wanted to remove the string '(Trial)' from this data:\n\n| data                     |\n|--------------------------|\n| '(Trial) Banking'        |\n| 'Banking'                |\n| 'Farming'                |\n| '(Trial) Transportation' |\n\nYou could use `df['data'].replace('(Trial)', '')` to replace (Trial) with an empty string.\n\nWhat about this data?\n\n| data                                            |\n|------------------------------------------------|\n| 'Other Industry; Trade and Services?$ab'       |\n| 'Other Industry; Trade and Services?ceg'       |\n\nThis type of data is trickier. In this case, there's a pattern where you want to remove a string that starts with a question mark and then has an unknown number of characters after it. When you need to match patterns of characters, you can use [regular expressions](https://en.wikipedia.org/wiki/Regular_expression).\n\nThe replace method can take a regular expression. So\ndf['data'].replace('?.+', '', regex=True) finds any set of characters that starts with a question mark and is followed by one or more characters, and replaces each match with an empty string. 
You can see a [regular expression cheat sheet](https://medium.com/factory-mind/regex-tutorial-a-simple-cheatsheet-by-examples-649dc1c3f285) here.\n\nFix these issues in the code cell below.", "_____no_output_____" ] ], [ [ "# TODO: In the sector1 variable, replace the string '!$10' with nan\n# HINT: you can use the pandas replace() method and numpy.nan\nsector['sector1'] = sector['sector1'].replace('!$!0', np.nan)\n\n# TODO: In the sector1 variable, remove the last 10 or 11 characters from the sector1 variable.\n# HINT: There is more than one way to do this including the replace method\n# HINT: You can use a regex expression '!.+'\n# That regex expression looks for a string with an exclamation\n# point followed by one or more characters\n\nsector['sector1'] = sector['sector1'].replace('!.+', '', regex=True)\n\n# TODO: Remove the string '(Historic)' from the sector1 variable\n# HINT: You can use the replace method\nsector['sector1'] = sector['sector1'].replace('^(\\(Historic\\))', '', regex=True)\n\nprint('Number of unique sectors after cleaning:', len(list(sector['sector1'].unique())))\nprint('Percentage of null values after cleaning:', 100 * sector['sector1'].isnull().sum() / sector['sector1'].shape[0])", "Number of unique sectors after cleaning: 156\nPercentage of null values after cleaning: 3.4962735642262164\n" ] ], [ [ "Now there are 156 unique categorical values. That's better than 3060. If you were going to use this data with a supervised learning machine model, you could try converting these 156 values to dummy variables. You'd still have to train and test a model to see if those are good features.\n\nBut can you do anything else with the sector1 variable?\n\nThe percentage of null values for 'sector1' is now 3.49%. That turns out to be the same number as the null values for the 'sector' column. You can see this if you scroll back up to where the code calculated the percentage of null values for each variable. \n\nPerhaps the 'sector1' and 'sector' variable have the same information. If you look at the 'sector' variable, however, it also needs cleaning. The values look like this:\n\n'Urban Transport;Urban Transport;Public Administration - Transportation'\n\nIt turns out the 'sector' variable combines information from the 'sector1' through 'sector5' variables and the 'mjsector' variable. Run the code cell below to look at the sector variable.", "_____no_output_____" ] ], [ [ "sector['sector']", "_____no_output_____" ] ], [ [ "What else can you do? If you look at all of the diferent sector1 categories, it might be useful to combine a few of them together. For example, there are various categories with the term \"Energy\" in them. And then there are other categories that seem related to energy but don't have the word energy in them like \"Thermal\" and \"Hydro\". Some categories have the term \"Renewable Energy\", so perhaps you could make a separate \"Renewable Energy\" category.\n\nSimilarly, there are categories with the term \"Transportation\" in them, and then there are related categories like \"Highways\".\n\nIn the next cell, find all sector1 values with the term 'Energy' in them. For each of these rows, put the string 'energy' in a new column called 'sector1_aggregates'. Do the same for \"Transportation\". ", "_____no_output_____" ] ], [ [ "import re\n\n# Create the sector1_aggregates variable\nsector.loc[:,'sector1_aggregates'] = sector['sector1']\n\n# TODO: The code above created a new variable called sector1_aggregates. 
\n# Currently, sector1_aggregates has all of the same values as sector1\n# For this task, find all the rows in sector1_aggregates with the term 'Energy' in them, \n# For all of these rows, replace whatever is the value is with the term 'Energy'.\n# The idea is to simplify the category names by combining various categories together.\n# Then, do the same for the term 'Transportation\n# HINT: You can use the contains() methods. See the documentation for how to ignore case using the re library\n# HINT: You might get an error saying \"cannot index with vector containing NA / NaN values.\" \n# Try converting NaN values to something else like False or a string\n\nsector.loc[sector['sector1_aggregates'].str.contains('Energy', re.IGNORECASE).replace(np.nan, False),'sector1_aggregates'] = 'Energy'\nsector.loc[sector['sector1_aggregates'].str.contains('Transportation', re.IGNORECASE).replace(np.nan, False),'sector1_aggregates'] = 'Transportation'\n\nprint('Number of unique sectors after cleaning:', len(list(sector['sector1_aggregates'].unique())))", "Number of unique sectors after cleaning: 145\n" ] ], [ [ "The number of unique sectors continues to go down. Keep in mind that how much to consolidate will depend on your machine learning model performance and your hardware's ability to handle the extra features in memory. If your hardware's memory can handle 3060 new features and your machine learning algorithm performs better, then go for it!\n\nThere are still 638 entries with NaN values. How could you fill these in? You might try to determine an appropriate category from the 'project_name' or 'lendinginstr' variables. If you make dummy variables including NaN values, then you could consider a feature with all zeros to represent NaN. Or you could delete these records from the data set. Pandas will ignore NaN values by default. That means, for a given row, all dummy variables will have a value of 0 if the sector1 value was NaN.\n\nDon't forget about the bigger context! This data is being prepared for a machine learning algorithm. Whatever techniques you use to engineer new features, you'll need to use those when running your model on new data. So if your new data does not contain a sector1 value, you'll have to run whatever feature engineering processes you did on your training set.\n\nIn this final set, use the pandas pd.get_dummies() method to create dummy variables. Then use the concat() method to concatenate the dummy variables to a dataframe that contains the project totalamt variable and the project year from the boardapprovaldate.", "_____no_output_____" ] ], [ [ "# TODO: Create dummy variables from the sector1_aggregates data. Put the results into a dataframe called dummies\n# Hint: Use the get_dummies method\ndummies = pd.DataFrame(pd.get_dummies(sector['sector1_aggregates']))\n\n# TODO: Filter the projects data for the totalamt, the year from boardapprovaldate, and the dummy variables\nprojects['year'] = projects['boardapprovaldate'].dt.year\ndf = projects[['totalamt','year']]\ndf_final = pd.concat([df, dummies], axis=1)\n\ndf_final.head()", "_____no_output_____" ] ], [ [ "# Conclusion\n\nPandas makes it relatively easy to create dummy variables; however, oftentimes you'll need to clean the data first.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d8fb900721a9e86f2f6b454d1b8d63ebd61c31
46,776
ipynb
Jupyter Notebook
.ipynb_checkpoints/main-checkpoint.ipynb
reckless129/PointNet_Custom_Object_Detection
1603081af8eaf612cbc6ac13f66c27773a79f534
[ "MIT" ]
20
2020-01-06T08:59:34.000Z
2022-03-24T01:23:46.000Z
.ipynb_checkpoints/main-checkpoint.ipynb
reckless129/PointNet_Custom_Object_Detection
1603081af8eaf612cbc6ac13f66c27773a79f534
[ "MIT" ]
2
2020-02-07T09:53:27.000Z
2021-04-28T14:02:30.000Z
.ipynb_checkpoints/main-checkpoint.ipynb
reckless129/PointNet_Custom_Object_Detection
1603081af8eaf612cbc6ac13f66c27773a79f534
[ "MIT" ]
5
2021-01-26T13:03:14.000Z
2022-01-16T16:01:34.000Z
92.625743
9,108
0.660531
[ [ [ "import sys\nsys.executable", "_____no_output_____" ], [ "import argparse\nimport math\nimport h5py\nimport numpy as np\nimport tensorflow as tf\nimport socket\nimport glob\n\nimport os\nimport sys\n \nimport provider\nimport tf_util\nfrom model import *\nprint(\"success\")\n", "success\n" ], [ "BATCH_SIZE = 12\nBATCH_SIZE_EVAL = 12\nNUM_POINT = 4096\nMAX_EPOCH = 61\nBASE_LEARNING_RATE = 0.001\nGPU_INDEX = 0\nMOMENTUM = 0.9\nOPTIMIZER = 'adam'\nDECAY_STEP = 300000\nDECAY_RATE = 0.5\n\nLOG_DIR = 'log'\nif not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)\nos.system('cp model.py %s' % (LOG_DIR)) # bkp of model def\nos.system('cp train.py %s' % (LOG_DIR)) # bkp of train procedure\nLOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')\n# LOG_FOUT.write(str(FLAGS)+'\\n')\n\nMAX_NUM_POINT = 4096\nNUM_CLASSES = 2\n\nBN_INIT_DECAY = 0.5\nBN_DECAY_DECAY_RATE = 0.5\n#BN_DECAY_DECAY_STEP = float(DECAY_STEP * 2)\nBN_DECAY_DECAY_STEP = float(DECAY_STEP)\nBN_DECAY_CLIP = 0.99\n\nHOSTNAME = socket.gethostname()", "_____no_output_____" ], [ "\n\n \ntotal_data = np.zeros((1158,4096, 6))\ntotal_label = np.zeros((1158,4096))\nxmax = 3.0\nxmin = -3.0\n\nfor i in range (0,6):\n f = h5py.File('/home/atas/real_pcl_data/d'+str(i)+'.h5','r')\n data = f['data']\n label = f['label']\n total_data[i*len(data):(i+1)*len(data),:,0:3] = (data[:, :, 0:3] - xmin) / (xmax - xmin )\n total_data[i*len(data):(i+1)*len(data),:,3:6] = data[:, :, 3:6]/255\n total_label[i*len(data):(i+1)*len(data),:] = label[:, :]\n \n''' \nf = h5py.File('/home/atas/real_pcl_data/d6.h5','r')\ndata = f['data']\nlabel = f['label'] \ntotal_data[1200:1260,:,0:3] = (data[:, :, 0:3] - xmin) / (xmax - xmin )\ntotal_data[1200:1260,:,3:6] = data[:, :, 3:6]\ntotal_label[1200:1260,:] = label[:, :]\n''' \nprint(total_data.shape)\nprint(total_label.shape)", "(1158, 4096, 6)\n(1158, 4096)\n" ], [ "features = [\"x\",\"y\",\"z\",\"r\",\"g\",\"b\"]\nfor i in range(6): \n print(features[i] + \"_range :\", np.min(total_data[:, :, i]), np.max(total_data[:, :, i]))", "x_range : 0.27024004856745404 0.7425641020139059\ny_range : 0.30058085918426514 0.7210604945818583\nz_range : 0.5301859254638354 0.9228959878285726\nr_range : 0.0 0.004496253238004797\ng_range : 1.3209388171340904e-05 0.004500191819434072\nb_range : 0.0 0.004500191819434072\n" ], [ "X = total_data\ny = total_label\n\nX.shape, y.shape", "_____no_output_____" ], [ "features = [\"x\",\"y\",\"z\",\"r\",\"g\",\"b\"]\nfor i in range(6): \n print(features[i] + \"_range :\", np.min(total_data[:, :, i]), np.max(total_data[:, :, i]))", "x_range : 0.27024004856745404 0.7425641020139059\ny_range : 0.30058085918426514 0.7210604945818583\nz_range : 0.5301859254638354 0.9228959878285726\nr_range : 0.0 0.004496253238004797\ng_range : 1.3209388171340904e-05 0.004500191819434072\nb_range : 0.0 0.004500191819434072\n" ], [ "from sklearn.model_selection import train_test_split\n\ntrain_data, test_data, train_label, test_label = train_test_split(X, y, test_size=0.1, random_state=42)\n\nprint(train_data.shape, train_label.shape)\nprint(test_data.shape, test_label.shape)", "(1042, 4096, 6) (1042, 4096)\n(116, 4096, 6) (116, 4096)\n" ], [ "\ndef log_string(out_str):\n LOG_FOUT.write(out_str+'\\n')\n LOG_FOUT.flush()\n print(out_str)\n\n\ndef get_learning_rate(batch):\n learning_rate = tf.train.exponential_decay(\n BASE_LEARNING_RATE, # Base learning rate.\n batch * BATCH_SIZE, # Current index into the dataset.\n DECAY_STEP, # Decay step.\n DECAY_RATE, # Decay rate.\n staircase=True)\n learning_rate = 
tf.maximum(learning_rate, 0.00001) # CLIP THE LEARNING RATE!!\n return learning_rate \n\ndef get_bn_decay(batch):\n bn_momentum = tf.train.exponential_decay(\n BN_INIT_DECAY,\n batch*BATCH_SIZE,\n BN_DECAY_DECAY_STEP,\n BN_DECAY_DECAY_RATE,\n staircase=True)\n bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)\n return bn_decay\n\ndef train():\n with tf.Graph().as_default():\n with tf.device('/gpu:'+str(GPU_INDEX)):\n pointclouds_pl, labels_pl = placeholder_inputs(BATCH_SIZE, NUM_POINT)\n is_training_pl = tf.placeholder(tf.bool, shape=())\n \n # Note the global_step=batch parameter to minimize. \n # That tells the optimizer to helpfully increment the 'batch' parameter for you every time it trains.\n batch = tf.Variable(0)\n bn_decay = get_bn_decay(batch)\n tf.summary.scalar('bn_decay', bn_decay)\n\n # Get model and loss \n pred = get_model(pointclouds_pl, is_training_pl, bn_decay=bn_decay)\n loss = get_loss(pred, labels_pl)\n tf.summary.scalar('loss', loss)\n\n correct = tf.equal(tf.argmax(pred, 2), tf.to_int64(labels_pl))\n accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE*NUM_POINT)\n tf.summary.scalar('accuracy', accuracy)\n\n # Get training operator\n learning_rate = get_learning_rate(batch)\n tf.summary.scalar('learning_rate', learning_rate)\n if OPTIMIZER == 'momentum':\n optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)\n elif OPTIMIZER == 'adam':\n optimizer = tf.train.AdamOptimizer(learning_rate)\n train_op = optimizer.minimize(loss, global_step=batch)\n \n # Add ops to save and restore all the variables.\n saver = tf.train.Saver()\n \n # Create a session\n config = tf.ConfigProto()\n config.gpu_options.allow_growth = True\n config.allow_soft_placement = True\n config.log_device_placement = True\n sess = tf.Session(config=config)\n\n # Add summary writers\n merged = tf.summary.merge_all()\n train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'),\n sess.graph)\n test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'))\n\n # Init variables\n init = tf.global_variables_initializer()\n sess.run(init, {is_training_pl:True})\n\n ops = {'pointclouds_pl': pointclouds_pl,\n 'labels_pl': labels_pl,\n 'is_training_pl': is_training_pl,\n 'pred': pred,\n 'loss': loss,\n 'train_op': train_op,\n 'merged': merged,\n 'step': batch}\n\n for epoch in range(MAX_EPOCH):\n log_string('**** EPOCH %03d ****' % (epoch))\n sys.stdout.flush()\n \n train_one_epoch(sess, ops, train_writer)\n eval_one_epoch(sess, ops, test_writer)\n \n # Save the variables to disk.\n if epoch % 10 == 0:\n save_path = saver.save(sess, os.path.join(LOG_DIR, \"model.ckpt\"))\n log_string(\"Model saved in file: %s\" % save_path)\n\n\n\ndef train_one_epoch(sess, ops, train_writer):\n \"\"\" ops: dict mapping from string to tf ops \"\"\"\n is_training = True\n \n log_string('----')\n current_data, current_label, _ = provider.shuffle_data(train_data[:,0:NUM_POINT,:], train_label) \n \n file_size = current_data.shape[0]\n num_batches = file_size // BATCH_SIZE\n \n total_correct = 0\n total_seen = 0\n loss_sum = 0\n \n for batch_idx in range(num_batches):\n if batch_idx % 100 == 0:\n print('Current batch/total batch num: %d/%d'%(batch_idx,num_batches))\n start_idx = batch_idx * BATCH_SIZE\n end_idx = (batch_idx+1) * BATCH_SIZE\n \n feed_dict = {ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :],\n ops['labels_pl']: current_label[start_idx:end_idx],\n ops['is_training_pl']: is_training,}\n summary, step, _, loss_val, pred_val = sess.run([ops['merged'], 
ops['step'], ops['train_op'], ops['loss'], ops['pred']],\n feed_dict=feed_dict)\n train_writer.add_summary(summary, step)\n pred_val = np.argmax(pred_val, 2)\n correct = np.sum(pred_val == current_label[start_idx:end_idx])\n total_correct += correct\n total_seen += (BATCH_SIZE*NUM_POINT)\n loss_sum += loss_val\n \n log_string('mean loss: %f' % (loss_sum / float(num_batches)))\n log_string('accuracy: %f' % (total_correct / float(total_seen)))\n\n \ndef eval_one_epoch(sess, ops, test_writer):\n \"\"\" ops: dict mapping from string to tf ops \"\"\"\n is_training = False\n total_correct = 0\n total_seen = 0\n loss_sum = 0\n total_seen_class = [0 for _ in range(NUM_CLASSES)]\n total_correct_class = [0 for _ in range(NUM_CLASSES)]\n \n log_string('----')\n current_data = test_data[:,0:NUM_POINT,:]\n current_label = np.squeeze(test_label)\n \n file_size = current_data.shape[0]\n num_batches = file_size // BATCH_SIZE_EVAL\n \n for batch_idx in range(num_batches):\n start_idx = batch_idx * BATCH_SIZE_EVAL\n end_idx = (batch_idx+1) * BATCH_SIZE_EVAL\n\n feed_dict = {ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :],\n ops['labels_pl']: current_label[start_idx:end_idx],\n ops['is_training_pl']: is_training}\n summary, step, loss_val, pred_val = sess.run([ops['merged'], ops['step'], ops['loss'], ops['pred']],\n feed_dict=feed_dict)\n test_writer.add_summary(summary, step)\n pred_val = np.argmax(pred_val, 2)\n correct = np.sum(pred_val == current_label[start_idx:end_idx])\n total_correct += correct\n total_seen += (BATCH_SIZE_EVAL*NUM_POINT)\n loss_sum += (loss_val*BATCH_SIZE_EVAL)\n for i in range(start_idx, end_idx):\n for j in range(NUM_POINT):\n l = int(current_label[i, j])\n total_seen_class[l] += 1\n total_correct_class[l] += (pred_val[i-start_idx, j] == l)\n \n log_string('eval mean loss: %f' % (loss_sum / float(total_seen/NUM_POINT)))\n log_string('eval accuracy: %f'% (total_correct / float(total_seen)))\n log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class,dtype=np.float))))\n \n\nif __name__ == \"__main__\":\n train()\n LOG_FOUT.close()", "WARNING:tensorflow:From /home/atas/PointNet-SemSeg-VKITTI3D/model.py:13: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nWARNING:tensorflow:From /home/atas/PointNet-SemSeg-VKITTI3D/tf_util.py:145: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.\n\nWARNING:tensorflow:\nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\n * https://github.com/tensorflow/io (for I/O related ops)\nIf you depend on functionality not listed there, please file an issue.\n\nWARNING:tensorflow:From /home/atas/PointNet-SemSeg-VKITTI3D/tf_util.py:21: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.\n\nWARNING:tensorflow:From /home/atas/PointNet-SemSeg-VKITTI3D/tf_util.py:48: The name tf.add_to_collection is deprecated. Please use tf.compat.v1.add_to_collection instead.\n\nWARNING:tensorflow:From /home/atas/PointNet-SemSeg-VKITTI3D/tf_util.py:368: The name tf.nn.max_pool is deprecated. 
Please use tf.nn.max_pool2d instead.\n\nTensor(\"fc2/Relu:0\", shape=(16, 128), dtype=float32, device=/device:GPU:0)\nWARNING:tensorflow:From /home/atas/PointNet-SemSeg-VKITTI3D/tf_util.py:573: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\nWARNING:tensorflow:From <ipython-input-10-454625cf90e3>:44: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n**** EPOCH 000 ****\n----\nCurrent batch/total batch num: 0/65\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d910461029d7ac26e46d30ceef36807c6700d3
7,576
ipynb
Jupyter Notebook
markov_asset/asset_solutions_jl.ipynb
ginjuuu/QuantEcon
7379c9cb1265e3cb0baec58fcfa322f8d8c573bd
[ "BSD-3-Clause" ]
11
2018-05-02T22:12:14.000Z
2021-11-18T01:07:33.000Z
markov_asset/asset_solutions_jl.ipynb
ginjuuu/QuantEcon
7379c9cb1265e3cb0baec58fcfa322f8d8c573bd
[ "BSD-3-Clause" ]
null
null
null
markov_asset/asset_solutions_jl.ipynb
ginjuuu/QuantEcon
7379c9cb1265e3cb0baec58fcfa322f8d8c573bd
[ "BSD-3-Clause" ]
13
2017-11-11T22:38:22.000Z
2022-02-21T20:33:03.000Z
22.087464
279
0.4967
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7d913bece852f9c3bb92e3362d2da5afaac4686
9,471
ipynb
Jupyter Notebook
notebooks/02.5-make-projection-dfs/higher-spread/.ipynb_checkpoints/cassins-umap-checkpoint.ipynb
xingjeffrey/avgn_paper
412e95dabc7b7b13a434b85cc54a21c06efe4e2b
[ "MIT" ]
null
null
null
notebooks/02.5-make-projection-dfs/higher-spread/.ipynb_checkpoints/cassins-umap-checkpoint.ipynb
xingjeffrey/avgn_paper
412e95dabc7b7b13a434b85cc54a21c06efe4e2b
[ "MIT" ]
null
null
null
notebooks/02.5-make-projection-dfs/higher-spread/.ipynb_checkpoints/cassins-umap-checkpoint.ipynb
xingjeffrey/avgn_paper
412e95dabc7b7b13a434b85cc54a21c06efe4e2b
[ "MIT" ]
null
null
null
25.256
253
0.479464
[ [ [ "%load_ext autoreload\n%autoreload 2\n%env CUDA_DEVICE_ORDER=PCI_BUS_ID\n%env CUDA_VISIBLE_DEVICES=2", "env: CUDA_DEVICE_ORDER=PCI_BUS_ID\nenv: CUDA_VISIBLE_DEVICES=2\n" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom tqdm.autonotebook import tqdm\nimport pandas as pd\nfrom cuml.manifold.umap import UMAP as cumlUMAP\nfrom avgn.utils.paths import DATA_DIR, most_recent_subdirectory, ensure_dir\nfrom avgn.signalprocessing.create_spectrogram_dataset import flatten_spectrograms", "/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n \" (e.g. in jupyter console)\", TqdmExperimentalWarning)\n" ] ], [ [ "### load data", "_____no_output_____" ] ], [ [ "DATASET_ID = 'BIRD_DB_Vireo_cassinii'\ndf_loc = DATA_DIR / 'syllable_dfs' / DATASET_ID / 'cassins.pickle'", "_____no_output_____" ], [ "syllable_df = pd.read_pickle(df_loc)\ndel syllable_df['audio']", "_____no_output_____" ], [ "syllable_df[:3]", "_____no_output_____" ], [ "np.shape(syllable_df.spectrogram.values[0])", "_____no_output_____" ] ], [ [ "### project", "_____no_output_____" ] ], [ [ "specs = list(syllable_df.spectrogram.values)\nspecs = [i/np.max(i) for i in tqdm(specs)]\nspecs_flattened = flatten_spectrograms(specs)\nnp.shape(specs_flattened)", "_____no_output_____" ], [ "cuml_umap = cumlUMAP(min_dist = 0.5)\nembedding = cuml_umap.fit_transform(specs_flattened)", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.scatter(embedding[:,0], embedding[:,1], s=1, color='k', alpha = 0.005)\nax.set_xlim([-8,8])\nax.set_ylim([-8,8])", "_____no_output_____" ], [ "syllable_df['umap'] = list(embedding)", "_____no_output_____" ] ], [ [ "### Save", "_____no_output_____" ] ], [ [ "ensure_dir(DATA_DIR / 'embeddings' / DATASET_ID / 'full')", "_____no_output_____" ], [ "syllable_df.to_pickle(DATA_DIR / 'embeddings' / DATASET_ID / (str(min_dist) + '_full.pickle'))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7d9217ddb30ac78ec155461c220f19c8d182678
92,878
ipynb
Jupyter Notebook
elman_pytorch.ipynb
qihongl/demo-elman-1990
f7008c89d4a8f105a01c91e50c87476b19880e62
[ "MIT" ]
null
null
null
elman_pytorch.ipynb
qihongl/demo-elman-1990
f7008c89d4a8f105a01c91e50c87476b19880e62
[ "MIT" ]
null
null
null
elman_pytorch.ipynb
qihongl/demo-elman-1990
f7008c89d4a8f105a01c91e50c87476b19880e62
[ "MIT" ]
null
null
null
207.316964
71,452
0.905327
[ [ [ "Qualitatively replicate: \n\n[1] Elman, J. L. (1990). Finding structure in time. \nCognitive Science, 14(2), 179–211. \nhttps://doi.org/10.1016/0364-0213(90)90002-E\n\n[2] Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. \nScience, 274(5294), 1926–1928. \nhttps://doi.org/10.1126/science.274.5294.1926", "_____no_output_____" ] ], [ [ "import os \nimport time \nimport warnings\nimport itertools\nimport numpy as np\nfrom sklearn.preprocessing import OneHotEncoder\n\nimport torch\nimport torch.nn as nn\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nwarnings.filterwarnings(\"ignore\")\nsns.set(style='white', context='poster', font_scale=.8, rc={\"lines.linewidth\": 2})\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nprint(f'device = {device}')\n\nseed_val = 0\ntorch.manual_seed(seed_val)\nnp.random.seed(seed_val)\n\n%matplotlib inline \n%autosave 5", "device = cpu\n" ], [ "import string \nall_letters = string.ascii_lowercase\n\n# define all vocabs\nchunk_size = 4\nall_vocabs = [\n all_letters[i:i + chunk_size]\n for i in range(0, len(all_letters), chunk_size) \n]\nprint(f'All vocabs:\\n{all_vocabs}')", "All vocabs:\n['abcd', 'efgh', 'ijkl', 'mnop', 'qrst', 'uvwx', 'yz']\n" ], [ "# gen seqs, given some vocabs\ndef gen_story(all_vocabs, seq_len): \n n_vocabs = len(all_vocabs)\n seq_ids = np.random.randint(n_vocabs, size=seq_len)\n seq = [all_vocabs[i] for i in seq_ids]\n # integer representation\n seq_int = [\n [all_letters.index(letter) for letter in vocab]\n for vocab in seq\n ]\n return seq, seq_int\n\nseq_len = 12\nseq, seq_int = gen_story(all_vocabs, seq_len)\nprint(f'Here\\'s a \"story\":\\n{seq}')\nprint(f'The corresponding int representation:\\n{seq_int}')", "Here's a \"story\":\n['qrst', 'uvwx', 'abcd', 'mnop', 'mnop', 'mnop', 'efgh', 'mnop', 'uvwx', 'ijkl', 'qrst', 'yz']\nThe corresponding int representation:\n[[16, 17, 18, 19], [20, 21, 22, 23], [0, 1, 2, 3], [12, 13, 14, 15], [12, 13, 14, 15], [12, 13, 14, 15], [4, 5, 6, 7], [12, 13, 14, 15], [20, 21, 22, 23], [8, 9, 10, 11], [16, 17, 18, 19], [24, 25]]\n" ], [ "# vectorize the input \ndef onehot_transform(seq_int_): \n # get the unit of representation\n n_letters = len(all_letters)\n all_letters_ohe_template = np.reshape(np.arange(n_letters),newshape=(-1,1))\n # init one hot encoder\n ohe = OneHotEncoder(sparse=False)\n ohe.fit(all_letters_ohe_template)\n # reformat the sequence\n seq_int_ = [np.reshape(vocab, newshape=(-1,1)) for vocab in seq_int_]\n # transform to one hot \n seq_ohe = [ohe.transform(vocab) for vocab in seq_int_]\n return seq_ohe\n\nseq_ohe = onehot_transform(seq_int)", "_____no_output_____" ], [ "f, ax = plt.subplots(1,1, figsize=(9,5))\n\nvocab_id = 0\n\nax.imshow(seq_ohe[vocab_id], cmap='bone')\nax.set_xlabel('Feature dim')\nax.set_yticks([])\nax.set_title(f'The one hot representation of \"{seq[vocab_id]}\"')", "_____no_output_____" ], [ "# generate training data\ndef gen_data(seq_len): \n seq, seq_int = gen_story(all_vocabs, seq_len)\n seq_ohe = onehot_transform(seq_int)\n\n # to sequence to pytorch format\n seq_ohe_merged = list(itertools.chain(*seq_ohe))\n # X = np.expand_dims(seq_ohe_merged, axis=-1)\n X = np.array(seq_ohe_merged)\n X = torch.from_numpy(X).type(torch.FloatTensor)\n return X, seq\n\n \n# how to use `gen_data`\nseq_len = 25\nX, seq = gen_data(seq_len)\nn_time_steps = X.size()[0]\nprint(X.size())", "torch.Size([94, 26])\n" ], [ "# model params \ndim_output = dim_input = X.size()[1]\ndim_hidden = 
32\n\n# training params\nseq_len = 25\nlearning_rate = 3e-3\nn_epochs = 10\n\n# init model \nmodel = nn.RNN(dim_input, dim_hidden)\nreadout = nn.Linear(dim_hidden, dim_output)\n\n# init optimizer\ncriterion = nn.MSELoss()\noptimizer = torch.optim.Adam(\n list(model.parameters())+list(readout.parameters()), \n lr=learning_rate\n)", "_____no_output_____" ], [ "# loop over epoch\nlosses_torch = np.zeros(n_epochs,)\n\nfor i in range(n_epochs):\n\n # gen data \n X, _ = gen_data(seq_len)\n n_time_steps = X.size()[0]\n time_start = time.time() \n \n # feed seq\n out, hidden_T = model(X.unsqueeze(0))\n xhat = readout(out)\n\n # compute loss\n out_sqed = torch.squeeze(xhat, dim=0)\n loss = criterion(out_sqed, X)\n\n losses_torch[i] += loss.item()\n\n # update weights\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n # print out some stuff\n time_end = time.time()\n print(f'Epoch {i} : \\t loss = {losses_torch[i]}, \\t time = {time_end - time_start}')", "Epoch 0 : \t loss = 0.060347530990839005, \t time = 0.004294872283935547\nEpoch 1 : \t loss = 0.05690968409180641, \t time = 0.0013539791107177734\nEpoch 2 : \t loss = 0.05126382037997246, \t time = 0.0013630390167236328\nEpoch 3 : \t loss = 0.04948564991354942, \t time = 0.0015859603881835938\nEpoch 4 : \t loss = 0.04546681419014931, \t time = 0.0014979839324951172\nEpoch 5 : \t loss = 0.04320373758673668, \t time = 0.0014946460723876953\nEpoch 6 : \t loss = 0.041097912937402725, \t time = 0.0013561248779296875\nEpoch 7 : \t loss = 0.03981594368815422, \t time = 0.001332998275756836\nEpoch 8 : \t loss = 0.037393804639577866, \t time = 0.0012009143829345703\nEpoch 9 : \t loss = 0.03604215383529663, \t time = 0.0014851093292236328\n" ], [ "# gen some new data \nseq_len_test = 25\nX_test, seq = gen_data(seq_len_test)\nn_time_steps = X_test.size()[0]\n\nloss_test = np.zeros(n_time_steps,)\n\nh_0 = torch.randn(1, 1, dim_hidden)\nc_0 = torch.randn(1, 1, dim_hidden)\n# loop over time, for one training example\nfor t, x_t in enumerate(X_test):\n if t == 0: \n h_t = h_0\n\n # recurrent computation at time t\n out, (h_t) = model(x_t.view(1, 1, -1), h_t)\n xhat = readout(out)\n\n # compute loss\n out_sqed = torch.squeeze(xhat, dim=0)\n loss = criterion(out_sqed, x_t)\n loss_test[t] = loss.item()", "_____no_output_____" ], [ "\"\"\"\nin general, the error function over time peaks right after event(word) boundaries.\n\"\"\"\n\nword_boundaries = np.cumsum([len(vocab) for vocab in seq])-1\nseq_letters = list(itertools.chain(*seq))\nseq_len_test = len(seq_letters)\n\nf,ax = plt.subplots(1,1, figsize=(16, 5))\n\nax.plot(np.arange(0,seq_len_test,1), loss_test)\n\nax.set_title('Instantaneous prediction error')\nax.set_xlabel('Time')\nax.set_ylabel('Error')\n\nfor i, letter in enumerate(seq_letters):\n ax.annotate(letter, (i, loss_test[i]), fontsize=14)\n\nfor wb in word_boundaries: \n ax.axvline(wb, color='grey', linestyle='--')\n \nsns.despine()\nf.tight_layout()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d92d7d3819a093f16e91be738197ede740aa84
36,505
ipynb
Jupyter Notebook
Lecture 4/Lecture 4 - Pandas II (Template).ipynb
iEvidently/ihme-python-course
436144bd2458a0fa4117b2f6f7e848d9421f8476
[ "CC-BY-3.0" ]
24
2016-11-04T00:22:50.000Z
2022-03-23T15:50:50.000Z
Lecture 4/Lecture 4 - Pandas II (Template).ipynb
iEvidently/ihme-python-course
436144bd2458a0fa4117b2f6f7e848d9421f8476
[ "CC-BY-3.0" ]
null
null
null
Lecture 4/Lecture 4 - Pandas II (Template).ipynb
iEvidently/ihme-python-course
436144bd2458a0fa4117b2f6f7e848d9421f8476
[ "CC-BY-3.0" ]
19
2016-10-27T00:11:26.000Z
2021-12-07T20:04:43.000Z
21.626185
776
0.549925
[ [ [ "# Pandas II", "_____no_output_____" ], [ "## More indexing tricks", "_____no_output_____" ], [ "We'll start out with some data from Beer Advocate (see [Tom Augspurger](https://github.com/TomAugspurger/pydata-chi-h2t/blob/master/3-Indexing.ipynb) for some cool details on how he extracted this data)", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\npd.options.display.max_rows = 10", "_____no_output_____" ], [ "df = pd.read_csv('data/beer_subset.csv.gz', parse_dates=['time'], compression='gzip')", "_____no_output_____" ] ], [ [ "### Boolean indexing\n\nLike a where clause in SQL. \n\nThe indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.", "_____no_output_____" ] ], [ [ "df.loc[((df['abv'] < 5) & (df['time'] > pd.Timestamp('2009-06'))) | \n (df['review_overall'] >= 4.5)].head()", "_____no_output_____" ] ], [ [ "Be careful with the order of operations...", "_____no_output_____" ], [ "Safest to use parentheses...", "_____no_output_____" ], [ "Select just the rows where the `beer_style` contains `'IPA'`:", "_____no_output_____" ], [ "Find the rows where the beer style is either `'American IPA'` or `'Pilsner'`:", "_____no_output_____" ] ], [ [ "(df['beer_style'] == 'American IPA')", "_____no_output_____" ] ], [ [ "Or more succinctly:", "_____no_output_____" ] ], [ [ "df[df['beer_style'].isin(['American IPA', 'Pilsner'])].head()", "_____no_output_____" ] ], [ [ "#### Mini Exercise\n\n- Select the rows where the scores of the 5 review_cols ('review_appearance', 'review_aroma', 'review_overall', 'review_palate', 'review_taste') are all at least 4.0.\n\n- _Hint_: Like NumPy arrays, DataFrames have an any and all methods that check whether it contains any or all True values. These methods also take an axis argument for the dimension to remove.\n - 0 or 'index' removes (or aggregates over) the vertical dimension\n - 1 or 'columns' removes (aggregates over) the horizontal dimension.", "_____no_output_____" ], [ "Or the short way:", "_____no_output_____" ], [ "Now select rows where the _average_ of the 5 `review_cols` is at least 4.", "_____no_output_____" ], [ "## Hierarchical Indexing", "_____no_output_____" ], [ "- One of the most powerful and most complicated features of pandas\n- Let's you represent high-dimensional datasets in a table", "_____no_output_____" ] ], [ [ "reviews = df.set_index(['profile_name', 'beer_id', 'time'])", "_____no_output_____" ] ], [ [ "### Top Reviewers\n\nLet's select all the reviews by the top reviewers, by label.", "_____no_output_____" ], [ "The syntax is a bit trickier when you want to specify a row Indexer *and* a column Indexer:", "_____no_output_____" ] ], [ [ "reviews.loc[(top_reviewers, 99, :), ['beer_name', 'brewer_name']]", "_____no_output_____" ], [ "reviews.loc[pd.IndexSlice[top_reviewers, 99, :], ['beer_name', 'brewer_id']]", "_____no_output_____" ] ], [ [ "Use `.loc` to select the `beer_name` and `beer_style` for the 10 most popular beers, as measured by number of reviews:", "_____no_output_____" ], [ "### Beware \"chained indexing\"\n\nYou can sometimes get away with using `[...][...]`, but try to avoid it!", "_____no_output_____" ] ], [ [ "df.loc[df['beer_style'].str.contains('IPA')]['beer_name']", "_____no_output_____" ], [ "df.loc[df['beer_style'].str.contains('IPA')]['beer_name'] = 'yummy'", "_____no_output_____" ], [ "df.loc[df['beer_style'].str.contains('IPA')]['beer_name']", "_____no_output_____" ] ], [ [ "## Dates and Times", "_____no_output_____" ], [ "- Date and time 
data are inherently problematic\n - An unequal number of days in every month\n - An unequal number of days in a year (due to leap years)\n - Time zones that vary over space\n - etc\n \n- The datetime built-in library handles temporal information down to the nanosecond", "_____no_output_____" ], [ "Having a custom data type for dates and times is convenient because we can perform operations on them easily. \n\nFor example, we may want to calculate the difference between two times:", "_____no_output_____" ], [ "See [the docs](http://pandas.pydata.org/pandas-docs/stable/timeseries.html) for more information on Pandas' complex time and date functionalities...", "_____no_output_____" ], [ "## Example\n\nIn this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed. \n\nThe International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information.\n\nFor our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc.", "_____no_output_____" ] ], [ [ "segments = pd.read_csv('data/AIS/transit_segments.csv')", "_____no_output_____" ] ], [ [ "For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram:", "_____no_output_____" ], [ "Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. \n\nThis is where a transformation is useful:", "_____no_output_____" ], [ "We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as `datetime`.", "_____no_output_____" ], [ "Our first order of business will be to convert these data to `datetime`. 
\n\nThe `strptime` method parses a string representation of a date and/or time field, according to the expected format of this information.", "_____no_output_____" ] ], [ [ "datetime.strptime(segments['st_time'].ix[0], '%m/%d/%y %H:%M')", "_____no_output_____" ] ], [ [ "As a convenience, Pandas has a `to_datetime` method that will parse and convert an entire Series of formatted strings into `datetime` objects.", "_____no_output_____", "Pandas also has a custom NA value for missing datetime objects, `NaT`.", "_____no_output_____" ] ], [ [ "pd.to_datetime([None])", "_____no_output_____" ] ], [ [ "Finally, if `to_datetime()` has problems parsing any particular date/time format, you can pass the spec in using the `format=` argument.", "_____no_output_____", "## Merging and joining `DataFrame`s", "_____no_output_____", "In Pandas, we can combine tables according to the value of one or more *keys* that are used to identify rows, much like an index.", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame({'id': range(4), \n                    'age': np.random.randint(18, 31, size=4)})", "_____no_output_____" ], [ "df2 = pd.DataFrame({'id': list(range(3))*2, \n                    'score': np.random.random(size=6)})", "_____no_output_____" ] ], [ [ "Notice that without any information about which column to use as a key, Pandas did the right thing and used the `id` column in both tables. Unless specified otherwise, `merge` will use any common column names as keys for merging the tables. ", "_____no_output_____", "Notice also that `id=3` from `df1` was omitted from the merged table. This is because, by default, `merge` performs an **inner join** on the tables, meaning that the merged table represents an intersection of the two tables.", "_____no_output_____", "The **outer join** above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. \n\nOne can also perform **right** and **left** joins to include all rows of the right or left table (*i.e.* first or second argument to `merge`), but not necessarily the other.", "_____no_output_____", "### Back to the example\n\nNow that we have the vessel transit information as we need it, we may want a little more information regarding the vessels themselves. \n\nIn the `data/AIS` folder there is a second table that contains information about each of the ships that traveled the segments in the `segments` table.", "_____no_output_____" ] ], [ [ "vessels = pd.read_csv('data/AIS/vessel_information.csv', index_col='mmsi')", "_____no_output_____" ] ], [ [ "We see that there is a `mmsi` value (a vessel identifier) in each table, but it is used as an index for the `vessels` table. In this case, we have to specify to join on the index for this table, and on the `mmsi` column for the other.", "_____no_output_____", "Notice that the `mmsi` field that was an index on the `vessels` table is no longer an index on the merged table.", "_____no_output_____", "Each `DataFrame` also has a `.merge()` method that could have been used:", "_____no_output_____", "Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. 
\n\nIn this case, Pandas will by default append suffixes `_x` and `_y` to the columns to uniquely identify them.", "_____no_output_____" ], [ "This behavior can be overridden by specifying a `suffixes` argument, containing a list of the suffixes to be used for the columns of the left and right columns, respectively.", "_____no_output_____" ], [ "## Reshaping `DataFrame`s", "_____no_output_____" ], [ "This dataset in from Table 6.9 of [Statistical Methods for the Analysis of Repeated Measurements](http://www.amazon.com/Statistical-Methods-Analysis-Repeated-Measurements/dp/0387953701) by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.\n\n* Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)\n* Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)\n* TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began", "_____no_output_____" ] ], [ [ "cdystonia = pd.read_csv('data/cdystonia.csv', index_col=None)", "_____no_output_____" ] ], [ [ "This dataset includes repeated measurements of the same individuals (longitudinal data). Its possible to present such information in (at least) two ways: showing each repeated measurement in their own row, or in multiple columns representing multiple measurements.", "_____no_output_____" ], [ "`.stack()` rotates the data frame so that columns are represented in rows:", "_____no_output_____" ], [ "And there's a corresponding `.unstack()` which pivots back into columns:", "_____no_output_____" ], [ "For this dataset, it makes sense to create a hierarchical index based on the patient and observation:", "_____no_output_____" ] ], [ [ "cdystonia2 = cdystonia.set_index(['patient','obs'])", "_____no_output_____" ] ], [ [ "If we want to transform this data so that repeated measurements are in columns, we can `unstack` the `twstrs` measurements according to `obs`:", "_____no_output_____" ], [ "And if we want to keep the other variables:", "_____no_output_____" ] ], [ [ "cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]\n .drop_duplicates()\n .merge(twstrs_wide, right_index=True, left_on='patient', how='inner')\n .head())", "_____no_output_____" ] ], [ [ "Or to simplify things, we can set the patient-level information as an index before unstacking:", "_____no_output_____" ] ], [ [ "(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']\n .unstack('week').head())", "_____no_output_____" ] ], [ [ "### [`.melt()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html)\n\n- To convert our \"wide\" format back to long, we can use the `melt` function. \n- This function is useful for `DataFrame`s where one or more columns are identifier variables (`id_vars`), with the remaining columns being measured variables (`value_vars`). 
\n- The measured variables are \"unpivoted\" to the row axis, leaving just two non-identifier columns, a *variable* and its corresponding *value*, which can both be renamed using optional arguments.", "_____no_output_____" ] ], [ [ "pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'], \n var_name='obs', value_name='twsters').head()", "_____no_output_____" ] ], [ [ "## Pivoting", "_____no_output_____" ], [ "The `pivot` method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. \n\nIt takes three arguments: `index`, `columns` and `values`, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.", "_____no_output_____" ], [ "For example, we may want the `twstrs` variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:", "_____no_output_____" ] ], [ [ "cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()", "_____no_output_____" ] ], [ [ "If we omit the `values` argument, we get a `DataFrame` with hierarchical columns, just as when we applied `unstack` to the hierarchically-indexed table:", "_____no_output_____" ], [ "A related method, `pivot_table`, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.", "_____no_output_____" ] ], [ [ "cdystonia.head()", "_____no_output_____" ], [ "cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs', \n aggfunc=max).head(20)", "_____no_output_____" ] ], [ [ "## Crosstabs and Summaries", "_____no_output_____" ], [ "For a simple cross-tabulation of group frequencies, the `crosstab` function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.", "_____no_output_____" ], [ "And the `.describe()` method gives some useful summary information on the `DataFrame`:", "_____no_output_____" ], [ "## Exercise 4\n\nOpen up [Lecture 4/Exercise 4.ipynb](./Exercise 4.ipynb) in your Jupyter notebook server.\n\nSolutions are at [Lecture 4/Exercise 4 - Solutions.ipynb](./Exercise 4 - Solutions.ipynb)", "_____no_output_____" ], [ "## References\n\nSlide materials inspired by and adapted from [Chris Fonnesbeck](https://github.com/fonnesbeck/statistical-analysis-python-tutorial) and [Tom Augspurger](https://github.com/TomAugspurger/pydata-chi-h2t)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7d9454f5cd456efd1a9a38582bff7be0e018e0d
29,819
ipynb
Jupyter Notebook
9 google customer revenue prediction/exploratory-google-store-analysis.ipynb
MLVPRASAD/KaggleProjects
379e062cf58d83ff57a456552bb956df68381fdd
[ "MIT" ]
2
2020-01-25T08:31:14.000Z
2022-03-23T18:24:03.000Z
9 google customer revenue prediction/exploratory-google-store-analysis.ipynb
MLVPRASAD/KaggleProjects
379e062cf58d83ff57a456552bb956df68381fdd
[ "MIT" ]
null
null
null
9 google customer revenue prediction/exploratory-google-store-analysis.ipynb
MLVPRASAD/KaggleProjects
379e062cf58d83ff57a456552bb956df68381fdd
[ "MIT" ]
null
null
null
29,819
29,819
0.725678
[ [ [ "<img src=\"https://www.amazeemetrics.com/sites/default/files/Getting-Started-with-Google-Analytics.jpg\">", "_____no_output_____" ], [ "<a href='#'>Preface</a><br>\n<a href='#desc'>description</a><br>\n<a href='#about'>About notebook</a><br>\n<a href='#load_lib'>Load libraries</a><br>\n<a href='#load_data'>Load Dataset</a><br>\n<a href='#description'>Column Description</a><br>\n<a href='#cleaning'>Cleaning Dataset</a><br>\n<a href='#eda'>EDA</a><br>\n- <a href='#null'>Null values</a><br>\n- <a href='#channel'>Via which channel did user visited </a><br>\n- <a href='#mobile'>Mobile users</a><br>\n- <a href='#browser'>Browser based </a><br>\n- <a href='#device'>Device Category</a><br>\n- <a href='#os'>Operating system</a><br>\n- <a href='#continent'>Continent Based</a><br>\n- <a href='#metro'>Metro Based</a><br>\n- <a href='#networkdomain'>Network Domain</a><br>\n- <a href='#region'>Region</a><br>\n- <a href='#country'>Country Based</a><br>\n- <a href='#subcountry'>Sub Continent Based</a><br>\n- <a href='#pVb'>Page view V/S bounces</a><br>\n- <a href='#customers'>new cusotmer or old customer</a><br>\n- <a href='#minmax'>Minmum & maximum revenu on daily basis</a><br>\n- <a href='#month'>Revenue based on month</a><br>\n- <a href='#day'>Revenue based on day</a><br>\n- <a href='#weekday'>Revenue based on weekday</a><br>\n- <a href='#adcontent'>Most Ad Content</a><br>\n- <a href='#keywords'>Keywords used by users</a><br>\n- <a href='#source'>Source from where users came</a><br>", "_____no_output_____" ], [ "# <a id='desc'> Description</a>", "_____no_output_____" ], [ "In this competition, you’re challenged to analyze a Google Merchandise Store (also known as GStore, where Google swag is sold) customer dataset to predict revenue per customer. ", "_____no_output_____" ], [ "# <a id='about'>About notebook</a>", "_____no_output_____" ], [ "In this notebook we will look into the Dataset provided in the competetion and we will analyze the users of GStore.", "_____no_output_____" ], [ "# <a id='load_lib'>Load libraries</a>", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport plotly.graph_objs as go\nimport plotly.offline as py\nfrom plotly.offline import init_notebook_mode, iplot, download_plotlyjs\nimport plotly.graph_objs as go\nfrom plotly import tools\nimport matplotlib.pyplot as plt\ninit_notebook_mode(connected=True)\nfrom plotly.tools import FigureFactory as ff\nimport random\nfrom collections import Counter\nimport warnings\nimport json\nimport os\nimport datetime\nfrom pandas.io.json import json_normalize\nwarnings.filterwarnings('ignore')\npd.set_option('display.max_columns', 500)\nimport pycountry\nfrom wordcloud import WordCloud\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ] ], [ [ "# <a id='load_data'>Load Dataset</a>", "_____no_output_____" ] ], [ [ "train = pd.read_csv(\"../input/train.csv\")\ntest = pd.read_csv(\"../input/test.csv\")\n# train_df = pd.read_csv('flatten_train.csv')\n# test_df = pd.read_csv('flatten_test.csv')", "_____no_output_____" ], [ "# helper functions\ndef constant_cols(df):\n cols = []\n columns = df.columns.values\n for col in columns:\n if df[col].nunique(dropna = False) == 1:\n cols.append(col)\n return cols\n\ndef diff_cols(df1,df2):\n columns1 = df1.columns.values\n columns2 = df2.columns.values\n print(list(set(columns1) - set(columns2)))\n \n\ndef count_mean(col,color1,color2):\n col_count = train_df[col].value_counts()\n col_count_chart = go.Bar(x = col_count.head(10).index, y = 
col_count.head(10).values, name=\"Count\",marker = dict(color=color1))\n\n col_mean_count = train_df[[col,'totals.transactionRevenue']][(train_df['totals.transactionRevenue'] >1)]\n col_mean_count = col_mean_count.groupby(col)['totals.transactionRevenue'].mean().sort_values(ascending=False)\n col_mean_count_chart = go.Bar(x = col_mean_count.head(10).index, y = col_mean_count.head(10).values, name=\"Mean\",marker = dict(color=color2))\n\n fig = tools.make_subplots(rows = 1, cols = 2,subplot_titles=('Total Count','Mean Revenue'))\n fig.append_trace(col_count_chart, 1,1)\n fig.append_trace(col_mean_count_chart,1,2)\n py.iplot(fig)\n", "_____no_output_____" ], [ "train.head()", "_____no_output_____" ] ], [ [ "# <a id='description'>Column Description</a><br>", "_____no_output_____" ], [ "- fullVisitorId- A unique identifier for each user of the Google Merchandise Store.\n- channelGrouping - The channel via which the user came to the Store.\n- date - The date on which the user visited the Store.\n- device - The specifications for the device used to access the Store.\n- geoNetwork - This section contains information about the geography of the user.\n- sessionId - A unique identifier for this visit to the store.\n- socialEngagementType - Engagement type, either \"Socially Engaged\" or \"Not Socially Engaged\".\n- totals - This section contains aggregate values across the session.\n- trafficSource - This section contains information about the Traffic Source from which the session originated.\n- visitId - An identifier for this session. This is part of the value usually stored as the _utmb cookie. This is only unique to the user. For a completely unique ID, you should use a combination of fullVisitorId and visitId.\n- visitNumber - The session number for this user. If this is the first session, then this is set to 1.\n-visitStartTime - The timestamp (expressed as POSIX time)", "_____no_output_____" ], [ "Since few columns have json values lets convert flatten them", "_____no_output_____" ], [ "Source from where i got the code to flatten the json columns<br>\nhttps://www.kaggle.com/julian3833/1-quick-start-read-csv-and-flatten-json-fields/notebook", "_____no_output_____" ] ], [ [ "def load_df(csv_path='../input/train.csv', nrows=None):\n JSON_COLUMNS = ['device', 'geoNetwork', 'totals', 'trafficSource']\n \n df = pd.read_csv(csv_path, \n converters={column: json.loads for column in JSON_COLUMNS}, \n dtype={'fullVisitorId': 'str'}, # Important!!\n nrows=nrows)\n \n for column in JSON_COLUMNS:\n column_as_df = json_normalize(df[column])\n column_as_df.columns = [f\"{column}.{subcolumn}\" for subcolumn in column_as_df.columns]\n df = df.drop(column, axis=1).merge(column_as_df, right_index=True, left_index=True)\n print(f\"Loaded {os.path.basename(csv_path)}. 
Shape: {df.shape}\")\n return df", "_____no_output_____" ], [ "train_df = load_df()", "_____no_output_____" ], [ "train_df.head()", "_____no_output_____" ], [ "test_df = load_df(\"../input/test.csv\")", "_____no_output_____" ], [ "# train_df.to_csv('flatten_train.csv')\n# test_df.to_csv('flatten_test.csv')\ndiff_cols(train_df,test_df)", "_____no_output_____" ] ], [ [ "Since totals transaction Revenue is what we are going to predict.\nand there is no campaignCode in test set", "_____no_output_____" ], [ "# <a id='cleaning'>Cleaning Dataset</a>", "_____no_output_____" ] ], [ [ "train_constants = constant_cols(train_df)\ntest_constants = constant_cols(test_df)\nprint(train_constants)\nprint(test_constants)", "_____no_output_____" ], [ "train_df[\"totals.transactionRevenue\"] = train_df[\"totals.transactionRevenue\"].astype('float')\ntrain_df['totals.transactionRevenue'] = train_df['totals.transactionRevenue'].fillna(0)\ntrain_df['date'] = train_df['date'].astype(str)\ntrain_df[\"date\"] = train_df[\"date\"].apply(lambda x : x[:4] + \"-\" + x[4:6] + \"-\" + x[6:])\ntrain_df[\"date\"] = pd.to_datetime(train_df[\"date\"])", "_____no_output_____" ] ], [ [ "## both the df has same cols with constant values lets remove them ", "_____no_output_____" ] ], [ [ "train_constants = constant_cols(train_df)\ntest_constants = constant_cols(test_df)\ntrain_df = train_df.drop(columns=train_constants,axis = 1)\ntest_df = test_df.drop(columns=test_constants, axis = 1)", "_____no_output_____" ] ], [ [ "# <a id='eda'>EDA</a><br>", "_____no_output_____" ], [ "# <a id='null'>Null values</a><br>", "_____no_output_____" ] ], [ [ "null_values = train_df.isna().sum(axis = 0).reset_index()\nnull_values = null_values[null_values[0] > 50]\nnull_chart = [go.Bar(y = null_values['index'],x = null_values[0]*100/len(train_df), orientation = 'h')]\npy.iplot(null_chart)", "_____no_output_____" ] ], [ [ "**Summary**<br>\n- So many coloumns has null values<br>\n- we will find why these columns are null and we will also see how we can manage them.", "_____no_output_____" ], [ "# <a id='channel'>Via which channel did user visited </a><br>", "_____no_output_____" ] ], [ [ "data = train_df[['channelGrouping','totals.transactionRevenue']]\ntemp = data['channelGrouping'].value_counts()\nchart = [go.Pie(labels = temp.index, values = temp.values)]\npy.iplot(chart)", "_____no_output_____" ] ], [ [ "**Summary**<br>\n- Most of the users came via organic search.<br>\n- Paid search and affilate users are very less.", "_____no_output_____" ], [ "# <a id='mobile'>Mobile users</a><br>", "_____no_output_____" ] ], [ [ "temp = train_df['device.isMobile'].value_counts()\nchart = go.Bar(x = [\"False\",\"True\"], y = temp.values)\npy.iplot([chart])", "_____no_output_____" ] ], [ [ "**Summary**\n- Many users browse the site from desktop or tablet", "_____no_output_____" ], [ "# <a id='browser'>Browser based</a><br>", "_____no_output_____" ] ], [ [ "count_mean('device.browser',\"#7FDBFF\",\"#3D9970\")", "_____no_output_____" ] ], [ [ "**Summary**\n- The user visit count is very high for Chrome but Firefox users have genereted more revenue", "_____no_output_____" ], [ "# <a id='device'>Device Category</a><br>", "_____no_output_____" ] ], [ [ "count_mean('device.deviceCategory',\"#FF851B\",\"#FF4136\")", "_____no_output_____" ] ], [ [ "**Summary**\n- Desktop site has generated more user count as well as more revenue\n- One thing to note is tablet users have generated almost same revenue as mobile users ", "_____no_output_____" ], [ "# <a id='os'>Operating 
system</a><br>", "_____no_output_____" ] ], [ [ "count_mean('device.operatingSystem',\"#80DEEA\",\"#0097A7\")", "_____no_output_____" ] ], [ [ "**Summary**\n- Less chrome OS users but high revenue is generated.\n-For windows phone users also generated good revenue.", "_____no_output_____" ], [ "# <a id='continent'>Continent Based</a><br>", "_____no_output_____" ] ], [ [ "count_mean('geoNetwork.continent',\"#F48FB1\",\"#C2185B\")", "_____no_output_____" ] ], [ [ "**Summary**\n- African Users have generated more than a billion mean revenue.\n- Users from american have used the website a lot but didnt purchased products.", "_____no_output_____" ], [ "# <a id='country'>Country Based</a><br>", "_____no_output_____" ] ], [ [ "data = train_df[['geoNetwork.country','totals.transactionRevenue']][(train_df['totals.transactionRevenue'] >1)]\ntemp = data.groupby('geoNetwork.country',as_index=False)['totals.transactionRevenue'].mean()\ntemp['code'] = 'sample'\nfor i,country in enumerate(temp['geoNetwork.country']):\n mapping = {country.name: country.alpha_3 for country in pycountry.countries}\n temp.set_value(i,'code',mapping.get(country))\nchart = [ dict(\n type = 'choropleth',\n locations = temp['code'],\n z = temp['totals.transactionRevenue'],\n text = temp['geoNetwork.country'],\n autocolorscale = True,\n reversescale = True,\n marker = dict(\n line = dict (\n color = 'rgb(180,180,180)',\n width = 0.5\n ) ),\n colorbar = dict(\n autotick = True,\n title = 'Mean Revenue'),\n ) ]\n\nlayout = dict(\n title = 'Mean revenue based on country',\n geo = dict(\n showframe = True,\n showcoastlines = True,\n showocean = True,\n projection = dict(\n type = 'Mercator'\n )\n )\n)\n\nfig = dict( data=chart, layout=layout )\npy.iplot( fig, validate=False)", "_____no_output_____" ] ], [ [ "**Summary**\n- We can see the revenue generated based on the countries\n- only 4 countries in africa have generated huge mean revenue.", "_____no_output_____" ], [ "# <a id='metro'>Metro Based</a><br>", "_____no_output_____" ] ], [ [ "count_mean('geoNetwork.metro',\"#CE93D8\", \"#7B1FA2\")", "_____no_output_____" ] ], [ [ "**Summary**\n- Most of the users metro location is not present in the dataset.\n- may be we have to assume some hypothesis here", "_____no_output_____" ], [ "# <a id='networkdomain'>Network Domain</a><br>", "_____no_output_____" ] ], [ [ "count_mean('geoNetwork.networkDomain','#90CAF9','#1976D2')", "_____no_output_____" ] ], [ [ "**Summary**\n- Most users who generate revenue uses digitalwest.net\n- Again same we have huge users count from unknown source/ (not set)", "_____no_output_____" ], [ "# <a id='region'>Region</a><br>", "_____no_output_____" ] ], [ [ "count_mean('geoNetwork.region','#DCE775','#AFB42B')", "_____no_output_____" ] ], [ [ "** Summary**\n- Tokyo has generated more than 1 Billion mean revenue", "_____no_output_____" ], [ "# <a id='subcountry'>Sub Continent Based</a><br>", "_____no_output_____" ] ], [ [ "count_mean('geoNetwork.subContinent','#FFE082','#FFA000')", "_____no_output_____" ] ], [ [ "**Summary**\n- No surprise Africa stands top in mean revenue ", "_____no_output_____" ], [ "# <a id='pVb'>Page view V/S bounces</a><br>", "_____no_output_____" ] ], [ [ "train_df['totals.pageviews'] = train_df['totals.pageviews'].fillna(0).astype('int32')\ntrain_df['totals.bounces'] = train_df['totals.bounces'].fillna(0).astype('int32')\n\npageview = train_df.groupby('date')['totals.pageviews'].apply(lambda x:x[x >= 1].count()).reset_index()\nbounce = 
train_df.groupby('date')['totals.bounces'].apply(lambda x:x[x >= 1].count()).reset_index()\n\npageviews = go.Scatter(x = pageview['date'],y= pageview['totals.pageviews'], name = 'Pageview',marker=dict(color = \"#B0BEC5\"))\n\n\nbounces = go.Scatter(x = bounce['date'],y= bounce['totals.bounces'],name = 'Bounce',marker=dict(color = \"#37474F\"))\n\npy.iplot([pageviews,bounces])", "_____no_output_____" ] ], [ [ "**Summary**\n- We can see that the bounce count rises and falls with the pageview count.", "_____no_output_____" ], [ "# <a id='customers'>New customer or old customer</a><br>", "_____no_output_____" ] ], [ [ "train_df['totals.newVisits'] = train_df['totals.newVisits'].fillna(0).astype('int32')\ntrain_df['totals.hits'] = train_df['totals.hits'].fillna(0).astype('int32')\n\nnewvisit = train_df.groupby('date')['totals.newVisits'].apply(lambda x:x[x == 1].count()).reset_index()\noldVisit = train_df.groupby('date')['totals.newVisits'].apply(lambda x:x[x == 0].count()).reset_index()\nhit = train_df.groupby('date')['totals.hits'].apply(lambda x:x[x >= 1].count()).reset_index()\n\n\nhits = go.Scatter(x = hit['date'],y = hit['totals.hits'], name = 'total hits', marker=dict(color = '#FFEE58'))\n\nnew_vist = go.Scatter(x = newvisit['date'],y= newvisit['totals.newVisits'],name = 'New Vists', marker=dict(color = '#F57F17'))\n\noldvisit = go.Scatter(x = oldVisit['date'],y = oldVisit['totals.newVisits'], name = 'Old Visit', marker=dict(color = '#FFD600'))\n\npy.iplot([hits, new_vist, oldvisit])", "_____no_output_____" ] ], [ [ "** Summary **\n- Out of all the hits, there are more new visits than old visits.\n- That means returning customers are far fewer than new customers.\n- Or there may be another explanation.", "_____no_output_____" ], [ "# <a id='minmax'>Minimum & maximum revenue on daily basis</a><br>", "_____no_output_____" ] ], [ [ "temp = train_df[(train_df['totals.transactionRevenue'] >0)]\ndata = temp[['totals.transactionRevenue','date']].groupby('date')['totals.transactionRevenue'].agg(['min','max']).reset_index()\nmean = go.Scatter(x = data['date'], y = data['min'],name = \"Min\",marker = dict(color = '#00E676'))\ncount = go.Scatter(x = data['date'],y = data['max'], name = \"Max\",marker = dict(color = '#00838F'))\npy.iplot([mean,count])", "_____no_output_____" ] ], [ [ "** Summary **\n- I have kept only the non-zero transactions.\n- This graph is to understand the minimum and maximum revenue the company generates.\n- On 5th April the company generated the maximum revenue.\n- There are a few days where the maximum and minimum revenue are the same (e.g. 15 Jan 2017).", "_____no_output_____" ], [ "# <a id='month'>Revenue based on month</a><br>", "_____no_output_____" ] ], [ [ "train_df['month'] = train_df['date'].dt.month\ntrain_df['day'] = train_df['date'].dt.day\ntrain_df['weekday'] = train_df['date'].dt.weekday", "_____no_output_____" ], [ "temp = train_df.groupby('month')['totals.transactionRevenue'].agg(['count','mean']).reset_index()\ncount_chart = go.Bar(x = temp['month'], y = temp['count'],name = 'Count',marker = dict(color = \"#E6EE9C\"))\nmean_chart = go.Bar(x = temp['month'],y = temp['mean'], name = 'Mean',marker = dict(color = \"#AFB42B\"))\n\nfig = tools.make_subplots(rows = 1, cols = 2, subplot_titles = ('Total Count', 'Mean Count'))\nfig.append_trace(count_chart,1,1)\nfig.append_trace(mean_chart, 1,2)\npy.iplot(fig)", "_____no_output_____" ] ], [ [ "** Summary **\n- It is seen that November has the highest number of visitors, but the revenue generated in that month is very low.\n- 

April has generated the highest mean revenue even though its visitor count is not high.", "_____no_output_____" ], [ "# <a id='day'>Revenue based on day</a><br>", "_____no_output_____" ] ], [ [ "temp = train_df.groupby('day')['totals.transactionRevenue'].agg(['count','mean']).reset_index()\ncount_chart = go.Bar(x = temp['day'], y = temp['count'],name = 'Count', marker = dict(color = '#1DE9B6'))\nmean_chart = go.Bar(x = temp['day'],y = temp['mean'], name = 'Mean', marker = dict(color = '#00796B'))\n\nfig = tools.make_subplots(rows = 1, cols = 2, subplot_titles = ('Total Count', 'Mean Count'))\nfig.append_trace(count_chart,1,1)\nfig.append_trace(mean_chart, 1,2)\npy.iplot(fig)", "_____no_output_____" ] ], [ [ "** Summary **\n- It is seen that the view count on day 31 is quite low, which is expected because only Jan, Mar, May, Jul, Aug, Oct and Dec have 31 days.\n- But the interesting fact is that the 2nd highest mean revenue is on day 31.\n- I think it can either be because of Jan or Dec, as they are the start and end of the year.", "_____no_output_____" ], [ "# <a id='weekday'>Revenue based on weekday</a><br>", "_____no_output_____" ] ], [ [ "temp = train_df.groupby('weekday')['totals.transactionRevenue'].agg(['count','mean']).reset_index()\ncount_chart = go.Bar(x = temp['weekday'], y = temp['count'],name = 'Count', marker = dict(color = '#9575CD'))\nmean_chart = go.Bar(x = temp['weekday'],y = temp['mean'], name = 'Mean', marker = dict(color = '#B388FF'))\n\nfig = tools.make_subplots(rows = 1, cols = 2, subplot_titles = ('Total Count', 'Mean Count'))\nfig.append_trace(count_chart,1,1)\nfig.append_trace(mean_chart, 1,2)\npy.iplot(fig)", "_____no_output_____" ] ], [ [ "** Summary **\n- Most of the revenue is generated on Monday.\n- Very little revenue is generated on weekends.", "_____no_output_____" ], [ "# <a id='adcontent'>Most Ad Content</a><br>", "_____no_output_____" ] ], [ [ "train_df['trafficSource.adContent'] = train_df['trafficSource.adContent'].fillna('')\nwordcloud2 = WordCloud(width=800, height=400).generate(' '.join(train_df['trafficSource.adContent']))\nplt.figure( figsize=(15,20))\nplt.imshow(wordcloud2)\nplt.axis(\"off\")\nplt.show()", "_____no_output_____" ] ], [ [ "** Summary **\n- The image speaks for itself.", "_____no_output_____" ], [ "# <a id='keywords'>Keywords used by users</a><br>", "_____no_output_____" ] ], [ [ "train_df['trafficSource.keyword'] = train_df['trafficSource.keyword'].fillna('')\nwordcloud2 = WordCloud(width=800, height=400).generate(' '.join(train_df['trafficSource.keyword']))\nplt.figure( figsize=(20,20) )\nplt.imshow(wordcloud2)\nplt.axis(\"off\")\nplt.show()", "_____no_output_____" ] ], [ [ "# <a id='source'>Source from where users came</a><br>", "_____no_output_____" ] ], [ [ "train_df['trafficSource.source'] = train_df['trafficSource.source'].fillna('')\nwordcloud2 = WordCloud(width=800, height=400).generate(' '.join(train_df['trafficSource.source']))\nplt.figure( figsize=(15,20) )\nplt.imshow(wordcloud2)\nplt.axis(\"off\")\nplt.show()", "_____no_output_____" ] ], [ [ "# Glad that you made it till the end.\n# Please upvote to boost me :-).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d94935566829f83517ddb9369533e6c100e52d
551,188
ipynb
Jupyter Notebook
1. Load and Visualize Data.ipynb
royveshovda/P1_Facial_Keypoints
12ede5c8855abca7671b9d15392a7df021049bcc
[ "MIT" ]
null
null
null
1. Load and Visualize Data.ipynb
royveshovda/P1_Facial_Keypoints
12ede5c8855abca7671b9d15392a7df021049bcc
[ "MIT" ]
null
null
null
1. Load and Visualize Data.ipynb
royveshovda/P1_Facial_Keypoints
12ede5c8855abca7671b9d15392a7df021049bcc
[ "MIT" ]
null
null
null
799.982583
167,076
0.949554
[ [ [ "# Facial Keypoint Detection\n \nThis project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with. \n\nLet's take a look at some examples of images and corresponding facial keypoints.\n\n<img src='images/key_pts_example.png' width=50% height=50%/>\n\nFacial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.\n\n<img src='images/landmarks_numbered.jpg' width=30% height=30%/>\n\n---", "_____no_output_____" ], [ "## Load and Visualize Data\n\nThe first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which includes videos of people in YouTube videos. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints.\n\n#### Training and Testing Data\n\nThis facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.\n\n* 3462 of these images are training images, for you to use as you create a model to predict keypoints.\n* 2308 are test images, which will be used to test the accuracy of your model.\n\nThe information about the images and keypoints in this dataset are summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).\n\n---", "_____no_output_____" ] ], [ [ "# import the required libraries\nimport glob\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\nimport cv2", "_____no_output_____" ], [ "key_pts_frame = pd.read_csv('data/training_frames_keypoints.csv')\n\nn = 0\nimage_name = key_pts_frame.iloc[n, 0]\nkey_pts = key_pts_frame.iloc[n, 1:].as_matrix()\nkey_pts = key_pts.astype('float').reshape(-1, 2)\n\nprint('Image name: ', image_name)\nprint('Landmarks shape: ', key_pts.shape)\nprint('First 4 key pts: {}'.format(key_pts[:4]))", "Image name: Luis_Fonsi_21.jpg\nLandmarks shape: (68, 2)\nFirst 4 key pts: [[ 45. 98.]\n [ 47. 106.]\n [ 49. 110.]\n [ 53. 119.]]\n" ], [ "# print out some stats about the data\nprint('Number of images: ', key_pts_frame.shape[0])", "Number of images: 3462\n" ] ], [ [ "## Look at some images\n\nBelow, is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! 
To eventually train a neural network on these images, we'll need to standardize their shape.", "_____no_output_____" ] ], [ [ "def show_keypoints(image, key_pts):\n \"\"\"Show image with keypoints\"\"\"\n plt.imshow(image)\n plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')\n", "_____no_output_____" ], [ "# Display a few different types of images by changing the index n\n\n# select an image by index in our data frame\nn = 5\nimage_name = key_pts_frame.iloc[n, 0]\nkey_pts = key_pts_frame.iloc[n, 1:].as_matrix()\nkey_pts = key_pts.astype('float').reshape(-1, 2)\n\nplt.figure(figsize=(5, 5))\nshow_keypoints(mpimg.imread(os.path.join('data/training/', image_name)), key_pts)\nplt.show()", "/home/roy/anaconda3/envs/cvnd/lib/python3.6/site-packages/ipykernel_launcher.py:6: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.\n \n" ] ], [ [ "## Dataset class and Transformations\n\nTo prepare our data for training, we'll be using PyTorch's Dataset class. Much of this this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).\n\n#### Dataset class\n\n``torch.utils.data.Dataset`` is an abstract class representing a\ndataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.\n\n\nYour custom dataset should inherit ``Dataset`` and override the following\nmethods:\n\n- ``__len__`` so that ``len(dataset)`` returns the size of the dataset.\n- ``__getitem__`` to support the indexing such that ``dataset[i]`` can\n be used to get the i-th sample of image/keypoint data.\n\nLet's create a dataset class for our face keypoints dataset. We will\nread the CSV file in ``__init__`` but leave the reading of images to\n``__getitem__``. This is memory efficient because all the images are not\nstored in the memory at once but read as required.\n\nA sample of our dataset will be a dictionary\n``{'image': image, 'keypoints': key_pts}``. Our dataset will take an\noptional argument ``transform`` so that any required processing can be\napplied on the sample. 
We will see the usefulness of ``transform`` in the\nnext section.\n", "_____no_output_____" ] ], [ [ "from torch.utils.data import Dataset, DataLoader\n\nclass FacialKeypointsDataset(Dataset):\n \"\"\"Face Landmarks dataset.\"\"\"\n\n def __init__(self, csv_file, root_dir, transform=None):\n \"\"\"\n Args:\n csv_file (string): Path to the csv file with annotations.\n root_dir (string): Directory with all the images.\n transform (callable, optional): Optional transform to be applied\n on a sample.\n \"\"\"\n self.key_pts_frame = pd.read_csv(csv_file)\n self.root_dir = root_dir\n self.transform = transform\n\n def __len__(self):\n return len(self.key_pts_frame)\n\n def __getitem__(self, idx):\n image_name = os.path.join(self.root_dir,\n self.key_pts_frame.iloc[idx, 0])\n \n image = mpimg.imread(image_name)\n \n # if image has an alpha color channel, get rid of it\n if(image.shape[2] == 4):\n image = image[:,:,0:3]\n \n key_pts = self.key_pts_frame.iloc[idx, 1:].as_matrix()\n key_pts = key_pts.astype('float').reshape(-1, 2)\n sample = {'image': image, 'keypoints': key_pts}\n\n if self.transform:\n sample = self.transform(sample)\n\n return sample", "_____no_output_____" ] ], [ [ "Now that we've defined this class, let's instantiate the dataset and display some images.", "_____no_output_____" ] ], [ [ "# Construct the dataset\nface_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',\n root_dir='data/training/')\n\n# print some stats about the dataset\nprint('Length of dataset: ', len(face_dataset))", "Length of dataset: 3462\n" ], [ "# Display a few of the images from the dataset\nnum_to_display = 3\n\nfor i in range(num_to_display):\n \n # define the size of images\n fig = plt.figure(figsize=(20,10))\n \n # randomly select a sample\n rand_i = np.random.randint(0, len(face_dataset))\n sample = face_dataset[rand_i]\n\n # print the shape of the image and keypoints\n print(i, sample['image'].shape, sample['keypoints'].shape)\n\n ax = plt.subplot(1, num_to_display, i + 1)\n ax.set_title('Sample #{}'.format(i))\n \n # Using the same display function, defined earlier\n show_keypoints(sample['image'], sample['keypoints'])\n", "0 (157, 136, 3) (68, 2)\n1 (295, 261, 3) (68, 2)\n2 (147, 165, 3) (68, 2)\n" ] ], [ [ "## Transforms\n\nNow, the images above are not of the same size, and neural networks often expect images that are standardized; a fixed size, with a normalized range for color ranges and coordinates, and (for PyTorch) converted from numpy lists and arrays to Tensors.\n\nTherefore, we will need to write some pre-processing code.\nLet's create four transforms:\n\n- ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]\n- ``Rescale``: to rescale an image to a desired size.\n- ``RandomCrop``: to crop an image randomly.\n- ``ToTensor``: to convert numpy images to torch images.\n\n\nWe will write them as callable classes instead of simple functions so\nthat parameters of the transform need not be passed everytime it's\ncalled. For this, we just need to implement ``__call__`` method and \n(if we require parameters to be passed in), the ``__init__`` method. 
\nWe can then use a transform like this:\n\n tx = Transform(params)\n transformed_sample = tx(sample)\n\nObserve below how these transforms are generally applied to both the image and its keypoints.\n\n", "_____no_output_____" ] ], [ [ "import torch\nfrom torchvision import transforms, utils\n# tranforms\n\nclass Normalize(object):\n \"\"\"Convert a color image to grayscale and normalize the color range to [0,1].\"\"\" \n\n def __call__(self, sample):\n image, key_pts = sample['image'], sample['keypoints']\n \n image_copy = np.copy(image)\n key_pts_copy = np.copy(key_pts)\n\n # convert image to grayscale\n image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n \n # scale color range from [0, 255] to [0, 1]\n image_copy= image_copy/255.0\n \n # scale keypoints to be centered around 0 with a range of [-1, 1]\n # mean = 100, sqrt = 50, so, pts should be (pts - 100)/50\n key_pts_copy = (key_pts_copy - 100)/50.0\n\n\n return {'image': image_copy, 'keypoints': key_pts_copy}\n\n\nclass Rescale(object):\n \"\"\"Rescale the image in a sample to a given size.\n\n Args:\n output_size (tuple or int): Desired output size. If tuple, output is\n matched to output_size. If int, smaller of image edges is matched\n to output_size keeping aspect ratio the same.\n \"\"\"\n\n def __init__(self, output_size):\n assert isinstance(output_size, (int, tuple))\n self.output_size = output_size\n\n def __call__(self, sample):\n image, key_pts = sample['image'], sample['keypoints']\n\n h, w = image.shape[:2]\n if isinstance(self.output_size, int):\n if h > w:\n new_h, new_w = self.output_size * h / w, self.output_size\n else:\n new_h, new_w = self.output_size, self.output_size * w / h\n else:\n new_h, new_w = self.output_size\n\n new_h, new_w = int(new_h), int(new_w)\n\n img = cv2.resize(image, (new_w, new_h))\n \n # scale the pts, too\n key_pts = key_pts * [new_w / w, new_h / h]\n\n return {'image': img, 'keypoints': key_pts}\n\n\nclass RandomCrop(object):\n \"\"\"Crop randomly the image in a sample.\n\n Args:\n output_size (tuple or int): Desired output size. If int, square crop\n is made.\n \"\"\"\n\n def __init__(self, output_size):\n assert isinstance(output_size, (int, tuple))\n if isinstance(output_size, int):\n self.output_size = (output_size, output_size)\n else:\n assert len(output_size) == 2\n self.output_size = output_size\n\n def __call__(self, sample):\n image, key_pts = sample['image'], sample['keypoints']\n\n h, w = image.shape[:2]\n new_h, new_w = self.output_size\n\n top = np.random.randint(0, h - new_h)\n left = np.random.randint(0, w - new_w)\n\n image = image[top: top + new_h,\n left: left + new_w]\n\n key_pts = key_pts - [left, top]\n\n return {'image': image, 'keypoints': key_pts}\n\n\nclass ToTensor(object):\n \"\"\"Convert ndarrays in sample to Tensors.\"\"\"\n\n def __call__(self, sample):\n image, key_pts = sample['image'], sample['keypoints']\n \n # if image has no grayscale color channel, add one\n if(len(image.shape) == 2):\n # add that third color dim\n image = image.reshape(image.shape[0], image.shape[1], 1)\n \n # swap color axis because\n # numpy image: H x W x C\n # torch image: C X H X W\n image = image.transpose((2, 0, 1))\n \n return {'image': torch.from_numpy(image),\n 'keypoints': torch.from_numpy(key_pts)}", "_____no_output_____" ] ], [ [ "## Test out the transforms\n\nLet's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. 
For example, you cannot crop a image using a value smaller than the original image (and the orginal images vary in size!), but, if you first rescale the original image, you can then crop it to any size smaller than the rescaled size.", "_____no_output_____" ] ], [ [ "# test out some of these transforms\nrescale = Rescale(100)\ncrop = RandomCrop(50)\ncomposed = transforms.Compose([Rescale(250),\n RandomCrop(224)])\n\n# apply the transforms to a sample image\ntest_num = 500\nsample = face_dataset[test_num]\n\nfig = plt.figure()\nfor i, tx in enumerate([rescale, crop, composed]):\n transformed_sample = tx(sample)\n\n ax = plt.subplot(1, 3, i + 1)\n plt.tight_layout()\n ax.set_title(type(tx).__name__)\n show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])\n\nplt.show()", "/home/roy/anaconda3/envs/cvnd/lib/python3.6/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.\n" ] ], [ [ "## Create the transformed dataset\n\nApply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).", "_____no_output_____" ] ], [ [ "# define the data tranform\n# order matters! i.e. rescaling should come before a smaller crop\ndata_transform = transforms.Compose([Rescale(250),\n RandomCrop(224),\n Normalize(),\n ToTensor()])\n\n# create the transformed dataset\ntransformed_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',\n root_dir='data/training/',\n transform=data_transform)\n", "_____no_output_____" ], [ "# print some stats about the transformed data\nprint('Number of images: ', len(transformed_dataset))\n\n# make sure the sample tensors are the expected size\nfor i in range(5):\n sample = transformed_dataset[i]\n print(i, sample['image'].size(), sample['keypoints'].size())\n", "Number of images: 3462\n0 torch.Size([1, 224, 224]) torch.Size([68, 2])\n1 torch.Size([1, 224, 224]) torch.Size([68, 2])\n2 torch.Size([1, 224, 224]) torch.Size([68, 2])\n3 torch.Size([1, 224, 224]) torch.Size([68, 2])\n4 torch.Size([1, 224, 224]) torch.Size([68, 2])\n" ] ], [ [ "## Data Iteration and Batching\n\nRight now, we are iterating over this data using a ``for`` loop, but we are missing out on a lot of PyTorch's dataset capabilities, specifically the abilities to:\n\n- Batch the data\n- Shuffle the data\n- Load the data in parallel using ``multiprocessing`` workers.\n\n``torch.utils.data.DataLoader`` is an iterator which provides all these\nfeatures, and we'll see this in use in the *next* notebook, Notebook 2, when we load data in batches to train a neural network!\n\n---\n\n", "_____no_output_____" ], [ "## Ready to Train!\n\nNow that you've seen how to load and transform our data, you're ready to build a neural network to train on this data.\n\nIn the next notebook, you'll be tasked with creating a CNN for facial keypoint detection.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]