notebook/ieee-fraud-detection-preprocessing.ipynb — Jupyter Notebook
Repo: DataCampM2DSSAF/suivi-du-data-camp-equipe-tchouacheu_toure_niang @ c2cb99f87cda6aa82ca6f8479ac1f5382d1e5e57 (MIT license)
[
[
[
"<h1 align=\"center\" style=\"color:#6699ff\"> DataCamp IEEE Fraud Detection </h1>",
"_____no_output_____"
],
[
"<img src=\"https://github.com/DataCampM2DSSAF/suivi-du-data-camp-equipe-tchouacheu-niang-chokki/blob/master/img/credit-card-fraud-detection.png?raw=true\" width=\"800\" align=\"center\">",
"_____no_output_____"
],
[
"# <a style=\"color:#6699ff\"> Team </a>\n- <a style=\"color:#6699ff\">Mohamed NIANG </a>\n- <a style=\"color:#6699ff\">Fernanda Tchouacheu </a>\n- <a style=\"color:#6699ff\">Hypolite Chokki </a>",
"_____no_output_____"
],
[
"# <a style=\"color:#6699ff\"> Table of Contents</a> \n\n<a style=\"color:#6699ff\"> I. Introduction</a>\n\n<a style=\"color:#6699ff\"> II. Descriptive Statistics & Visualization</a>\n\n<a style=\"color:#6699ff\"> III. Preprocessing</a>\n\n<a style=\"color:#6699ff\"> IV. Machine Learning Models</a>",
"_____no_output_____"
],
[
"# <a style=\"color:#6699ff\"> I. Introduction</a>",
"_____no_output_____"
],
[
"**Why fraud detection?**\n> Fraud is a billion-dollar business, and it grows every year. PwC's 2018 Global Economic Crime Survey found that half (49%) of the 7,200 companies surveyed had been victims of some form of fraud. That is up from PwC's 2016 study, in which just over a third of the organizations surveyed (36%) had been victims of economic crime.\n\n\nThis competition is a **binary classification** problem: our target variable is a binary attribute (is a given transaction fraudulent or not?), and our goal is to classify transactions as \"fraudulent\" or \"not fraudulent\" as accurately as possible.",
"_____no_output_____"
]
],
[
[
"import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n\nfrom sklearn import preprocessing\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import LabelEncoder\nimport matplotlib.gridspec as gridspec\n%matplotlib inline\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport gc\ngc.enable()\n\nimport os\nos.chdir('/kaggle/input/ieeecis-fraud-detection') # Set working directory\nprint(os.listdir('/kaggle/input/ieeecis-fraud-detection'))",
"['test_identity.csv', 'train_identity.csv', 'test_transaction.csv', 'sample_submission.csv', 'train_transaction.csv']\n"
]
],
[
[
"**Load data**",
"_____no_output_____"
]
],
[
[
"%%time\ntrain_transaction = pd.read_csv('train_transaction.csv', index_col='TransactionID')\ntest_transaction = pd.read_csv('test_transaction.csv', index_col='TransactionID')\ntrain_identity = pd.read_csv('train_identity.csv', index_col='TransactionID')\ntest_identity = pd.read_csv('test_identity.csv', index_col='TransactionID')\nprint (\"Data is loaded!\")",
"Data is loaded!\nCPU times: user 50.8 s, sys: 3.36 s, total: 54.1 s\nWall time: 54.7 s\n"
],
[
"print('train_transaction shape is {}'.format(train_transaction.shape))\nprint('test_transaction shape is {}'.format(test_transaction.shape))\nprint('train_identity shape is {}'.format(train_identity.shape))\nprint('test_identity shape is {}'.format(test_identity.shape))",
"train_transaction shape is (590540, 393)\ntest_transaction shape is (506691, 392)\ntrain_identity shape is (144233, 40)\ntest_identity shape is (141907, 40)\n"
]
],
[
[
"# <a style=\"color:#6699ff\"> III. Preprocessing</a>",
"_____no_output_____"
],
[
"## Merge transaction & identity ",
"_____no_output_____"
]
],
[
[
"%%time\ntrain_df = pd.merge(train_transaction, train_identity, on = \"TransactionID\", how = \"left\")\nprint(\"Train: \",train_df.shape)\ndel train_transaction, train_identity\ngc.collect()",
"Train:  (590540, 433)\nCPU times: user 4.32 s, sys: 2.21 s, total: 6.54 s\nWall time: 6.55 s\n"
],
[
"%%time\ntest_df = pd.merge(test_transaction, test_identity, on = \"TransactionID\", how = \"left\")\nprint(\"Test: \",test_df.shape)\ntest_df[\"isFraud\"] = 0\ndel test_transaction, test_identity\ngc.collect()",
"Test: (506691, 432)\nCPU times: user 3.92 s, sys: 2.01 s, total: 5.93 s\nWall time: 5.96 s\n"
]
],
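A quick sketch of why `how="left"` keeps every transaction row even when no identity record matches. The miniature frames below are hypothetical stand-ins, not the competition data; like the real tables, they carry `TransactionID` as their index, which `pd.merge` can join on via `on=`:

```python
import pandas as pd

# Toy stand-ins for the transaction and identity tables (assumed data).
tx = pd.DataFrame({"amt": [10.0, 25.0, 7.5]},
                  index=pd.Index([1, 2, 3], name="TransactionID"))
ident = pd.DataFrame({"device": ["mobile", "desktop"]},
                     index=pd.Index([1, 3], name="TransactionID"))

# Left merge: every transaction survives; missing identity info becomes NaN.
merged = pd.merge(tx, ident, on="TransactionID", how="left")

print(merged.shape)                   # (3, 2) — all 3 transactions kept
print(merged["device"].isna().sum())  # 1 — transaction 2 has no identity row
```

This matches the shapes above: 590,540 transaction rows go in, 590,540 come out, with 433 = 393 + 40 columns.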
[
[
"## Pipeline of preprocessing",
"_____no_output_____"
]
],
[
[
"emails = {\n'gmail': 'google', \n'att.net': 'att', \n'twc.com': 'spectrum', \n'scranton.edu': 'other', \n'optonline.net': 'other', \n'hotmail.co.uk': 'microsoft',\n'comcast.net': 'other', \n'yahoo.com.mx': 'yahoo', \n'yahoo.fr': 'yahoo',\n'yahoo.es': 'yahoo', \n'charter.net': 'spectrum', \n'live.com': 'microsoft', \n'aim.com': 'aol', \n'hotmail.de': 'microsoft', \n'centurylink.net': 'centurylink',\n'gmail.com': 'google', \n'me.com': 'apple', \n'earthlink.net': 'other', \n'gmx.de': 'other',\n'web.de': 'other', \n'cfl.rr.com': 'other', \n'hotmail.com': 'microsoft', \n'protonmail.com': 'other', \n'hotmail.fr': 'microsoft', \n'windstream.net': 'other', \n'outlook.es': 'microsoft', \n'yahoo.co.jp': 'yahoo', \n'yahoo.de': 'yahoo',\n'servicios-ta.com': 'other', \n'netzero.net': 'other', \n'suddenlink.net': 'other',\n'roadrunner.com': 'other', \n'sc.rr.com': 'other', \n'live.fr': 'microsoft',\n'verizon.net': 'yahoo', \n'msn.com': 'microsoft', \n'q.com': 'centurylink', \n'prodigy.net.mx': 'att', \n'frontier.com': 'yahoo', \n'anonymous.com': 'other', \n'rocketmail.com': 'yahoo',\n'sbcglobal.net': 'att',\n'frontiernet.net': 'yahoo', \n'ymail.com': 'yahoo',\n'outlook.com': 'microsoft',\n'mail.com': 'other', \n'bellsouth.net': 'other',\n'embarqmail.com': 'centurylink',\n'cableone.net': 'other', \n'hotmail.es': 'microsoft', \n'mac.com': 'apple',\n'yahoo.co.uk': 'yahoo',\n'netzero.com': 'other', \n'yahoo.com': 'yahoo', \n'live.com.mx': 'microsoft',\n'ptd.net': 'other',\n'cox.net': 'other',\n'aol.com': 'aol',\n'juno.com': 'other',\n'icloud.com': 'apple'\n}\n\n# number types for filtering the columns\nint_types = [\"int8\", \"int16\", \"int32\", \"int64\", \"float\"]",
"_____no_output_____"
],
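The mapping above is consumed with `Series.map`, which leaves domains absent from the dict as NaN. A minimal sketch with a hypothetical slice of the mapping:

```python
import pandas as pd

# A tiny slice of the domain-to-provider mapping defined above.
emails = {"gmail.com": "google", "ymail.com": "yahoo", "hotmail.fr": "microsoft"}

domains = pd.Series(["gmail.com", "ymail.com", "unknown.org"])
providers = domains.map(emails)

print(providers.tolist())  # ['google', 'yahoo', nan]
```

Unmapped domains surface as NaN, which the categorical imputer later fills with "Other".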
[
"# Let's check how many missing values each column has.\n\ndef check_nan(df, limit):\n    '''\n    Check how many values are missing in each column.\n    If the fraction of missing values is higher than limit, we drop the column.\n    '''\n    \n    total_rows = df.shape[0]\n    total_cols = df.shape[1]\n    \n    total_dropped = 0\n    col_to_drop = []\n    \n    for col in df.columns:\n\n        null_sum = df[col].isnull().sum()\n        perc_over_total = round((null_sum/total_rows), 2)\n        \n        if perc_over_total > limit:\n            \n            print(\"The col {} contains {} null values.\\nThis represents {} of total rows.\"\\\n                  .format(col, null_sum, perc_over_total))\n            \n            print(\"Dropping column {} from the df.\\n\".format(col))\n            \n            col_to_drop.append(col)\n            total_dropped += 1 \n    \n    df.drop(col_to_drop, axis = 1, inplace = True)\n    print(\"We have dropped a total of {} columns.\\nThat's {} of the total columns.\"\\\n          .format(total_dropped, round((total_dropped/total_cols), 2)))\n    \n    return df",
"_____no_output_____"
],
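The core of `check_nan` is a per-column null fraction compared against a limit. The same drop rule can be sketched in vectorized form on a toy frame (assumed data, for illustration only):

```python
import numpy as np
import pandas as pd

# Toy frame: column "b" is 75% null, above a 0.7 limit, so it gets dropped.
df = pd.DataFrame({"a": [1, 2, 3, 4], "b": [np.nan, np.nan, np.nan, 4]})

limit = 0.7
null_frac = df.isnull().mean()                      # fraction of NaN per column
to_drop = null_frac[null_frac > limit].index.tolist()

df = df.drop(columns=to_drop)
print(to_drop, df.columns.tolist())  # ['b'] ['a']
```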
[
"def binarizer(df_train, df_test):\n    '''\n    Label-encode the categorical features.\n    Works with 2 dataframes at a time and returns a tuple of both.\n    '''\n    cat_cols = df_train.select_dtypes(exclude=int_types).columns\n\n    for col in cat_cols:\n        \n        # fit on the union of train and test values so we don't get an unseen-label error\n        unique_train = list(df_train[col].unique())\n        unique_test = list(df_test[col].unique())\n        unique_values = list(set(unique_train + unique_test))\n        \n        enc = LabelEncoder()\n        enc.fit(unique_values)\n        \n        # LabelEncoder.transform expects a 1-D array, so no reshape is needed\n        df_train[col] = enc.transform(df_train[col].values)\n        df_test[col] = enc.transform(df_test[col].values)\n        \n    return (df_train, df_test)",
"_____no_output_____"
],
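The trick in `binarizer` is fitting the `LabelEncoder` on the union of train and test values, so transforming either split never hits an unseen label. A minimal sketch with hypothetical category values:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# Hypothetical categorical values; "discover" appears only in test.
train_vals = np.array(["visa", "mastercard", "visa"])
test_vals = np.array(["visa", "discover"])

enc = LabelEncoder()
enc.fit(np.concatenate([train_vals, test_vals]))  # fit on the union

# Classes are sorted: discover=0, mastercard=1, visa=2.
print(enc.transform(test_vals))  # [2 0]
```

Fitting on `train_vals` alone would raise a ValueError when transforming `test_vals`.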
[
"def cathegorical_imputer(df_train, df_test, strategy, fill_value):\n    '''\n    Replace missing values in the categorical features with a constant\n    or with the most frequent value.\n    '''\n    cat_cols = df_train.select_dtypes(exclude=int_types).columns\n    \n    for col in cat_cols:\n        print(\"Working with column {}\".format(col))\n        \n        # select the correct imputer\n        if strategy == \"constant\":\n            # fill every null with the given fill_value\n            imputer = SimpleImputer(strategy=strategy, fill_value=fill_value)\n        elif strategy == \"most_frequent\":\n            imputer = SimpleImputer(strategy=strategy)\n        \n        # replace the nulls in train and test\n        df_train[col] = imputer.fit_transform(X = (df_train[col].values).reshape(-1, 1))\n        df_test[col] = imputer.transform(X = (df_test[col].values).reshape(-1, 1))\n        \n    return (df_train, df_test)",
"_____no_output_____"
],
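The constant strategy used above simply substitutes a fixed token for every missing categorical value. A self-contained sketch on a hypothetical column:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical categorical column with a missing entry.
col = pd.Series(["visa", np.nan, "mastercard"], dtype=object)

# Constant fill, as the pipeline does with fill_value="Other".
imputer = SimpleImputer(strategy="constant", fill_value="Other")
filled = imputer.fit_transform(col.values.reshape(-1, 1)).ravel()

print(filled.tolist())  # ['visa', 'Other', 'mastercard']
```

`SimpleImputer` expects a 2-D array, hence the `reshape(-1, 1)` before and `ravel()` after.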
[
"def numerical_inputer(df_train, df_test, strategy, fill_value):\n    '''\n    Replace NaN in the numerical features.\n    Works with 2 dataframes at a time (train & test).\n    Returns a tuple of both.\n    '''\n    \n    # assert valid strategy\n    message = \"Please select a valid strategy (mean, median, most_frequent, or constant with a fill_value)\"\n    assert strategy in [\"constant\", \"most_frequent\", \"mean\", \"median\"], message\n    \n    # int_types defined earlier in the kernel\n    num_cols = df_train.select_dtypes(include = int_types).columns\n    \n    for col in num_cols:\n\n        print(\"Working with column {}\".format(col))\n\n        # only the constant strategy takes a fill_value\n        if strategy == \"constant\":\n            imputer = SimpleImputer(strategy=strategy, fill_value=fill_value)\n        else:\n            imputer = SimpleImputer(strategy=strategy)\n\n        # replace the nulls in train and test\n        try:\n            df_train[col] = imputer.fit_transform(X = (df_train[col].values).reshape(-1, 1))\n            df_test[col] = imputer.transform(X = (df_test[col].values).reshape(-1, 1))\n        except Exception:\n            print(\"Col {} gave an error.\".format(col))\n    \n    return (df_train, df_test)",
"_____no_output_____"
],
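For the numeric columns the same constant strategy sentinels missing values as -999, a value far outside the observed range that tree-based models can split on directly. A minimal sketch with assumed data:

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Hypothetical numeric column with one missing value.
X = np.array([[1.0], [np.nan], [3.0]])

imputer = SimpleImputer(strategy="constant", fill_value=-999)
out = imputer.fit_transform(X).ravel()

print(out.tolist())  # [1.0, -999.0, 3.0]
```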
[
"def pipeline(df_train, df_test):\n    '''\n    A personal pipeline that chains the processing functions above.\n    NOTE: modifies the dataframes in place.\n    '''\n    print(\"Shape of train is {}\".format(df_train.shape))\n    print(\"Shape of test is {}\".format(df_test.shape))\n    # We set the limit at 70%: if a column contains more than 70% NaN/missing values we drop it,\n    # since it's very unlikely to help our future model.\n    print(\"Checking for nan values\\n\")\n    df_train = check_nan(df_train, limit=0.7)\n    \n    # Keep only the surviving columns of df_train and align df_test to them.\n    df_test = df_test[list(df_train.columns)]\n    \n    print(\"Shape of train is {}\".format(df_train.shape))\n    print(\"Shape of test is {}\".format(df_test.shape))\n    \n    # mapping emails\n    print(\"Mapping emails \\n\")\n    df_train[\"EMAILP\"] = df_train[\"P_emaildomain\"].map(emails)\n    df_test[\"EMAILP\"] = df_test[\"P_emaildomain\"].map(emails)\n\n    print(\"Shape of train is {}\".format(df_train.shape))\n    print(\"Shape of test is {}\".format(df_test.shape))\n    \n    # replace nulls in the categorical columns with the value \"Other\"\n    print(\"Working with categorical values\\n\")\n    df_train, df_test = cathegorical_imputer(df_train, df_test, strategy = \"constant\", fill_value = \"Other\")\n    \n    print(\"Shape of train is {}\".format(df_train.shape))\n    print(\"Shape of test is {}\".format(df_test.shape))\n    \n    # label-encode the categorical columns\n    print(\"Binarizing values\\n\")\n    df_train, df_test = binarizer(df_train, df_test)\n    \n    print(\"Shape of train is {}\".format(df_train.shape))\n    print(\"Shape of test is {}\".format(df_test.shape))\n    \n    # working with null values in numeric columns\n    print(\"Working with numerical columns. NaN values\\n\")\n    df_train, df_test = numerical_inputer(df_train, df_test, strategy = \"constant\", fill_value=-999)\n    \n    print(\"Shape of train is {}\".format(df_train.shape))\n    print(\"Shape of test is {}\".format(df_test.shape))\n    \n    return (df_train, df_test)",
"_____no_output_____"
],
[
"# before preprocessing\nprint(\"Train before preprocessing: \",train_df.shape)\nprint(\"Test before preprocessing: \",test_df.shape)\n\ntrain_df, test_df = pipeline(train_df, test_df)\n\n# after preprocessing\nprint(\"Train after preprocessing: \",train_df.shape)\nprint(\"Test after preprocessing: \",test_df.shape)",
"Train before preprocesing: (590540, 433)\nTest before preprocesing: (506691, 433)\nShape of train is (590540, 433)\nShape of test is (506691, 433)\nChecking for nan values\n\nThe col dist2 contains 552913 null values.\nThis represents 0.94 of total rows.\nDropping column dist2 from the df.\n\nThe col R_emaildomain contains 453249 null values.\nThis represents 0.77 of total rows.\nDropping column R_emaildomain from the df.\n\nThe col D6 contains 517353 null values.\nThis represents 0.88 of total rows.\nDropping column D6 from the df.\n\nThe col D7 contains 551623 null values.\nThis represents 0.93 of total rows.\nDropping column D7 from the df.\n\nThe col D8 contains 515614 null values.\nThis represents 0.87 of total rows.\nDropping column D8 from the df.\n\nThe col D9 contains 515614 null values.\nThis represents 0.87 of total rows.\nDropping column D9 from the df.\n\nThe col D12 contains 525823 null values.\nThis represents 0.89 of total rows.\nDropping column D12 from the df.\n\nThe col D13 contains 528588 null values.\nThis represents 0.9 of total rows.\nDropping column D13 from the df.\n\nThe col D14 contains 528353 null values.\nThis represents 0.89 of total rows.\nDropping column D14 from the df.\n\nThe col V138 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V138 from the df.\n\nThe col V139 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V139 from the df.\n\nThe col V140 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V140 from the df.\n\nThe col V141 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V141 from the df.\n\nThe col V142 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V142 from the df.\n\nThe col V143 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V143 from the df.\n\nThe col V144 contains 508589 null values.\nThis represents 0.86 of 
total rows.\nDropping column V144 from the df.\n\nThe col V145 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V145 from the df.\n\nThe col V146 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V146 from the df.\n\nThe col V147 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V147 from the df.\n\nThe col V148 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V148 from the df.\n\nThe col V149 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V149 from the df.\n\nThe col V150 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V150 from the df.\n\nThe col V151 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V151 from the df.\n\nThe col V152 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V152 from the df.\n\nThe col V153 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V153 from the df.\n\nThe col V154 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V154 from the df.\n\nThe col V155 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V155 from the df.\n\nThe col V156 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V156 from the df.\n\nThe col V157 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V157 from the df.\n\nThe col V158 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V158 from the df.\n\nThe col V159 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V159 from the df.\n\nThe col V160 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V160 from the df.\n\nThe col V161 contains 508595 null values.\nThis represents 0.86 of total 
rows.\nDropping column V161 from the df.\n\nThe col V162 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V162 from the df.\n\nThe col V163 contains 508595 null values.\nThis represents 0.86 of total rows.\nDropping column V163 from the df.\n\nThe col V164 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V164 from the df.\n\nThe col V165 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V165 from the df.\n\nThe col V166 contains 508589 null values.\nThis represents 0.86 of total rows.\nDropping column V166 from the df.\n\nThe col V167 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V167 from the df.\n\nThe col V168 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V168 from the df.\n\nThe col V169 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V169 from the df.\n\nThe col V170 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V170 from the df.\n\nThe col V171 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V171 from the df.\n\nThe col V172 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V172 from the df.\n\nThe col V173 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V173 from the df.\n\nThe col V174 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V174 from the df.\n\nThe col V175 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V175 from the df.\n\nThe col V176 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V176 from the df.\n\nThe col V177 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V177 from the df.\n\nThe col V178 contains 450909 null values.\nThis represents 0.76 of total 
rows.\nDropping column V178 from the df.\n\nThe col V179 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V179 from the df.\n\nThe col V180 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V180 from the df.\n\nThe col V181 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V181 from the df.\n\nThe col V182 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V182 from the df.\n\nThe col V183 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V183 from the df.\n\nThe col V184 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V184 from the df.\n\nThe col V185 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V185 from the df.\n\nThe col V186 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V186 from the df.\n\nThe col V187 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V187 from the df.\n\nThe col V188 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V188 from the df.\n\nThe col V189 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V189 from the df.\n\nThe col V190 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V190 from the df.\n\nThe col V191 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V191 from the df.\n\nThe col V192 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V192 from the df.\n\nThe col V193 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V193 from the df.\n\nThe col V194 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V194 from the df.\n\nThe col V195 contains 450721 null values.\nThis represents 0.76 of total 
rows.\nDropping column V195 from the df.\n\nThe col V196 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V196 from the df.\n\nThe col V197 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V197 from the df.\n\nThe col V198 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V198 from the df.\n\nThe col V199 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V199 from the df.\n\nThe col V200 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V200 from the df.\n\nThe col V201 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V201 from the df.\n\nThe col V202 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V202 from the df.\n\nThe col V203 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V203 from the df.\n\nThe col V204 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V204 from the df.\n\nThe col V205 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V205 from the df.\n\nThe col V206 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V206 from the df.\n\nThe col V207 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V207 from the df.\n\nThe col V208 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V208 from the df.\n\nThe col V209 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V209 from the df.\n\nThe col V210 contains 450721 null values.\nThis represents 0.76 of total rows.\nDropping column V210 from the df.\n\nThe col V211 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V211 from the df.\n\nThe col V212 contains 450909 null values.\nThis represents 0.76 of total 
rows.\nDropping column V212 from the df.\n\nThe col V213 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V213 from the df.\n\nThe col V214 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V214 from the df.\n\nThe col V215 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V215 from the df.\n\nThe col V216 contains 450909 null values.\nThis represents 0.76 of total rows.\nDropping column V216 from the df.\n\nThe col V217 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V217 from the df.\n\nThe col V218 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V218 from the df.\n\nThe col V219 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V219 from the df.\n\nThe col V220 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V220 from the df.\n\nThe col V221 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V221 from the df.\n\nThe col V222 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V222 from the df.\n\nThe col V223 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V223 from the df.\n\nThe col V224 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V224 from the df.\n\nThe col V225 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V225 from the df.\n\nThe col V226 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V226 from the df.\n\nThe col V227 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V227 from the df.\n\nThe col V228 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V228 from the df.\n\nThe col V229 contains 460110 null values.\nThis represents 0.78 of total 
rows.\nDropping column V229 from the df.\n\nThe col V230 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V230 from the df.\n\nThe col V231 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V231 from the df.\n\nThe col V232 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V232 from the df.\n\nThe col V233 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V233 from the df.\n\nThe col V234 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V234 from the df.\n\nThe col V235 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V235 from the df.\n\nThe col V236 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V236 from the df.\n\nThe col V237 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V237 from the df.\n\nThe col V238 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V238 from the df.\n\nThe col V239 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V239 from the df.\n\nThe col V240 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V240 from the df.\n\nThe col V241 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V241 from the df.\n\nThe col V242 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V242 from the df.\n\nThe col V243 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V243 from the df.\n\nThe col V244 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V244 from the df.\n\nThe col V245 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V245 from the df.\n\nThe col V246 contains 460110 null values.\nThis represents 0.78 of total 
rows.\nDropping column V246 from the df.\n\nThe col V247 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V247 from the df.\n\nThe col V248 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V248 from the df.\n\nThe col V249 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V249 from the df.\n\nThe col V250 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V250 from the df.\n\nThe col V251 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V251 from the df.\n\nThe col V252 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V252 from the df.\n\nThe col V253 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V253 from the df.\n\nThe col V254 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V254 from the df.\n\nThe col V255 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V255 from the df.\n\nThe col V256 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V256 from the df.\n\nThe col V257 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V257 from the df.\n\nThe col V258 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V258 from the df.\n\nThe col V259 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V259 from the df.\n\nThe col V260 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V260 from the df.\n\nThe col V261 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V261 from the df.\n\nThe col V262 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V262 from the df.\n\nThe col V263 contains 460110 null values.\nThis represents 0.78 of total 
rows.\nDropping column V263 from the df.\n\nThe col V264 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V264 from the df.\n\nThe col V265 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V265 from the df.\n\nThe col V266 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V266 from the df.\n\nThe col V267 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V267 from the df.\n\nThe col V268 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V268 from the df.\n\nThe col V269 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V269 from the df.\n\nThe col V270 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V270 from the df.\n\nThe col V271 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V271 from the df.\n\nThe col V272 contains 449124 null values.\nThis represents 0.76 of total rows.\nDropping column V272 from the df.\n\nThe col V273 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V273 from the df.\n\nThe col V274 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V274 from the df.\n\nThe col V275 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V275 from the df.\n\nThe col V276 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V276 from the df.\n\nThe col V277 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V277 from the df.\n\nThe col V278 contains 460110 null values.\nThis represents 0.78 of total rows.\nDropping column V278 from the df.\n\nThe col V322 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V322 from the df.\n\nThe col V323 contains 508189 null values.\nThis represents 0.86 of total 
rows.\nDropping column V323 from the df.\n\nThe col V324 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V324 from the df.\n\nThe col V325 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V325 from the df.\n\nThe col V326 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V326 from the df.\n\nThe col V327 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V327 from the df.\n\nThe col V328 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V328 from the df.\n\nThe col V329 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V329 from the df.\n\nThe col V330 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V330 from the df.\n\nThe col V331 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V331 from the df.\n\nThe col V332 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V332 from the df.\n\nThe col V333 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V333 from the df.\n\nThe col V334 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V334 from the df.\n\nThe col V335 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V335 from the df.\n\nThe col V336 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V336 from the df.\n\nThe col V337 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V337 from the df.\n\nThe col V338 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V338 from the df.\n\nThe col V339 contains 508189 null values.\nThis represents 0.86 of total rows.\nDropping column V339 from the df.\n\nThe col id_01 contains 446307 null values.\nThis represents 0.76 of total 
rows.\nDropping column id_01 from the df.\n\nThe col id_02 contains 449668 null values.\nThis represents 0.76 of total rows.\nDropping column id_02 from the df.\n\nThe col id_03 contains 524216 null values.\nThis represents 0.89 of total rows.\nDropping column id_03 from the df.\n\nThe col id_04 contains 524216 null values.\nThis represents 0.89 of total rows.\nDropping column id_04 from the df.\n\nThe col id_05 contains 453675 null values.\nThis represents 0.77 of total rows.\nDropping column id_05 from the df.\n\nThe col id_06 contains 453675 null values.\nThis represents 0.77 of total rows.\nDropping column id_06 from the df.\n\nThe col id_07 contains 585385 null values.\nThis represents 0.99 of total rows.\nDropping column id_07 from the df.\n\nThe col id_08 contains 585385 null values.\nThis represents 0.99 of total rows.\nDropping column id_08 from the df.\n\nThe col id_09 contains 515614 null values.\nThis represents 0.87 of total rows.\nDropping column id_09 from the df.\n\nThe col id_10 contains 515614 null values.\nThis represents 0.87 of total rows.\nDropping column id_10 from the df.\n\nThe col id_11 contains 449562 null values.\nThis represents 0.76 of total rows.\nDropping column id_11 from the df.\n\nThe col id_12 contains 446307 null values.\nThis represents 0.76 of total rows.\nDropping column id_12 from the df.\n\nThe col id_13 contains 463220 null values.\nThis represents 0.78 of total rows.\nDropping column id_13 from the df.\n\nThe col id_14 contains 510496 null values.\nThis represents 0.86 of total rows.\nDropping column id_14 from the df.\n\nThe col id_15 contains 449555 null values.\nThis represents 0.76 of total rows.\nDropping column id_15 from the df.\n\nThe col id_16 contains 461200 null values.\nThis represents 0.78 of total rows.\nDropping column id_16 from the df.\n\nThe col id_17 contains 451171 null values.\nThis represents 0.76 of total rows.\nDropping column id_17 from the df.\n\nThe col id_18 contains 545427 null values.\nThis 
represents 0.92 of total rows.\nDropping column id_18 from the df.\n\nThe col id_19 contains 451222 null values.\nThis represents 0.76 of total rows.\nDropping column id_19 from the df.\n\nThe col id_20 contains 451279 null values.\nThis represents 0.76 of total rows.\nDropping column id_20 from the df.\n\nThe col id_21 contains 585381 null values.\nThis represents 0.99 of total rows.\nDropping column id_21 from the df.\n\nThe col id_22 contains 585371 null values.\nThis represents 0.99 of total rows.\nDropping column id_22 from the df.\n\nThe col id_23 contains 585371 null values.\nThis represents 0.99 of total rows.\nDropping column id_23 from the df.\n\nThe col id_24 contains 585793 null values.\nThis represents 0.99 of total rows.\nDropping column id_24 from the df.\n\nThe col id_25 contains 585408 null values.\nThis represents 0.99 of total rows.\nDropping column id_25 from the df.\n\nThe col id_26 contains 585377 null values.\nThis represents 0.99 of total rows.\nDropping column id_26 from the df.\n\nThe col id_27 contains 585371 null values.\nThis represents 0.99 of total rows.\nDropping column id_27 from the df.\n\nThe col id_28 contains 449562 null values.\nThis represents 0.76 of total rows.\nDropping column id_28 from the df.\n\nThe col id_29 contains 449562 null values.\nThis represents 0.76 of total rows.\nDropping column id_29 from the df.\n\nThe col id_30 contains 512975 null values.\nThis represents 0.87 of total rows.\nDropping column id_30 from the df.\n\nThe col id_31 contains 450258 null values.\nThis represents 0.76 of total rows.\nDropping column id_31 from the df.\n\nThe col id_32 contains 512954 null values.\nThis represents 0.87 of total rows.\nDropping column id_32 from the df.\n\nThe col id_33 contains 517251 null values.\nThis represents 0.88 of total rows.\nDropping column id_33 from the df.\n\nThe col id_34 contains 512735 null values.\nThis represents 0.87 of total rows.\nDropping column id_34 from the df.\n\nThe col id_35 contains 
449555 null values.\nThis represents 0.76 of total rows.\nDropping column id_35 from the df.\n\nThe col id_36 contains 449555 null values.\nThis represents 0.76 of total rows.\nDropping column id_36 from the df.\n\nThe col id_37 contains 449555 null values.\nThis represents 0.76 of total rows.\nDropping column id_37 from the df.\n\nThe col id_38 contains 449555 null values.\nThis represents 0.76 of total rows.\nDropping column id_38 from the df.\n\nThe col DeviceType contains 449730 null values.\nThis represents 0.76 of total rows.\nDropping column DeviceType from the df.\n\nThe col DeviceInfo contains 471874 null values.\nThis represents 0.8 of total rows.\nDropping column DeviceInfo from the df.\n\nWe have dropped a total of 208 columns.\nIt's 0.48 of the total\nShape of train is (590540, 225)\nShape of test is (506691, 225)\nMapping emails \n\nShape of train is (590540, 226)\nShape of test is (506691, 226)\nWorking with categorical values\n\nWorking with column ProductCD\nWorking with column card4\nWorking with column card6\nWorking with column P_emaildomain\nWorking with column M1\nWorking with column M2\nWorking with column M3\nWorking with column M4\nWorking with column M5\nWorking with column M6\nWorking with column M7\nWorking with column M8\nWorking with column M9\nWorking with column EMAILP\nShape of train is (590540, 226)\nShape of test is (506691, 226)\nBinarizing values\n\nShape of train is (590540, 226)\nShape of test is (506691, 226)\nWorking with numerical columns. 
NAN values\n\nWorking with column isFraud\nWorking with column TransactionDT\nWorking with column TransactionAmt\nWorking with column ProductCD\nWorking with column card1\nWorking with column card2\nWorking with column card3\nWorking with column card4\nWorking with column card5\nWorking with column card6\nWorking with column addr1\nWorking with column addr2\nWorking with column dist1\nWorking with column P_emaildomain\nWorking with column C1\nWorking with column C2\nWorking with column C3\nWorking with column C4\nWorking with column C5\nWorking with column C6\nWorking with column C7\nWorking with column C8\nWorking with column C9\nWorking with column C10\nWorking with column C11\nWorking with column C12\nWorking with column C13\nWorking with column C14\nWorking with column D1\nWorking with column D2\nWorking with column D3\nWorking with column D4\nWorking with column D5\nWorking with column D10\nWorking with column D11\nWorking with column D15\nWorking with column M1\nWorking with column M2\nWorking with column M3\nWorking with column M4\nWorking with column M5\nWorking with column M6\nWorking with column M7\nWorking with column M8\nWorking with column M9\nWorking with column V1\nWorking with column V2\nWorking with column V3\nWorking with column V4\nWorking with column V5\nWorking with column V6\nWorking with column V7\nWorking with column V8\nWorking with column V9\nWorking with column V10\nWorking with column V11\nWorking with column V12\nWorking with column V13\nWorking with column V14\nWorking with column V15\nWorking with column V16\nWorking with column V17\nWorking with column V18\nWorking with column V19\nWorking with column V20\nWorking with column V21\nWorking with column V22\nWorking with column V23\nWorking with column V24\nWorking with column V25\nWorking with column V26\nWorking with column V27\nWorking with column V28\nWorking with column V29\nWorking with column V30\nWorking with column V31\nWorking with column V32\nWorking with column V33\nWorking 
with column V34\nWorking with column V35\nWorking with column V36\nWorking with column V37\nWorking with column V38\nWorking with column V39\nWorking with column V40\nWorking with column V41\nWorking with column V42\nWorking with column V43\nWorking with column V44\nWorking with column V45\nWorking with column V46\nWorking with column V47\nWorking with column V48\nWorking with column V49\nWorking with column V50\nWorking with column V51\nWorking with column V52\nWorking with column V53\nWorking with column V54\nWorking with column V55\nWorking with column V56\nWorking with column V57\nWorking with column V58\nWorking with column V59\nWorking with column V60\nWorking with column V61\nWorking with column V62\nWorking with column V63\nWorking with column V64\nWorking with column V65\nWorking with column V66\nWorking with column V67\nWorking with column V68\nWorking with column V69\nWorking with column V70\nWorking with column V71\nWorking with column V72\nWorking with column V73\nWorking with column V74\nWorking with column V75\nWorking with column V76\nWorking with column V77\nWorking with column V78\nWorking with column V79\nWorking with column V80\nWorking with column V81\nWorking with column V82\nWorking with column V83\nWorking with column V84\nWorking with column V85\nWorking with column V86\nWorking with column V87\nWorking with column V88\nWorking with column V89\nWorking with column V90\nWorking with column V91\nWorking with column V92\nWorking with column V93\nWorking with column V94\nWorking with column V95\nWorking with column V96\nWorking with column V97\nWorking with column V98\nWorking with column V99\nWorking with column V100\nWorking with column V101\nWorking with column V102\nWorking with column V103\nWorking with column V104\nWorking with column V105\nWorking with column V106\nWorking with column V107\nWorking with column V108\nWorking with column V109\nWorking with column V110\nWorking with column V111\nWorking with column V112\nWorking with column 
V113\nWorking with column V114\nWorking with column V115\nWorking with column V116\nWorking with column V117\nWorking with column V118\nWorking with column V119\nWorking with column V120\nWorking with column V121\nWorking with column V122\nWorking with column V123\nWorking with column V124\nWorking with column V125\nWorking with column V126\nWorking with column V127\nWorking with column V128\nWorking with column V129\nWorking with column V130\nWorking with column V131\nWorking with column V132\nWorking with column V133\nWorking with column V134\nWorking with column V135\nWorking with column V136\nWorking with column V137\nWorking with column V279\nWorking with column V280\nWorking with column V281\nWorking with column V282\nWorking with column V283\nWorking with column V284\nWorking with column V285\nWorking with column V286\nWorking with column V287\nWorking with column V288\nWorking with column V289\nWorking with column V290\nWorking with column V291\nWorking with column V292\nWorking with column V293\nWorking with column V294\nWorking with column V295\nWorking with column V296\nWorking with column V297\nWorking with column V298\nWorking with column V299\nWorking with column V300\nWorking with column V301\nWorking with column V302\nWorking with column V303\nWorking with column V304\nWorking with column V305\nWorking with column V306\nWorking with column V307\nWorking with column V308\nWorking with column V309\nWorking with column V310\nWorking with column V311\nWorking with column V312\nWorking with column V313\nWorking with column V314\nWorking with column V315\nWorking with column V316\nWorking with column V317\nWorking with column V318\nWorking with column V319\nWorking with column V320\nWorking with column V321\nWorking with column EMAILP\nShape of train is (590540, 226)\nShape of test is (506691, 226)\nTrain after preprocessing: (590540, 226)\nTest after preprocessing: (506691, 226)\n"
],
[
"# check for null values\ncolumns = train_df.columns\nfor col in columns:\n total_nulls = train_df[col].isnull().sum()\n if total_nulls > 0:\n print(col, total_nulls)\n \ncolumns = test_df.select_dtypes(exclude=int_types).columns\ntrain_df[columns]\n\ncolumns = test_df.select_dtypes(include=int_types).columns\ntrain_df[columns]",
"_____no_output_____"
],
[
"train_df.to_pickle('/kaggle/working/train_df.pkl')\ntest_df.to_pickle('/kaggle/working/test_df.pkl')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd6a0291180de45b22e09e48aaf7dbf8864ffe4 | 18,642 | ipynb | Jupyter Notebook | key_words.ipynb | ISCB-Academy/keywords | dbbeb3bc8ebbda6a3f28b96366dc2a77a83e6c4d | [
"MIT"
] | null | null | null | key_words.ipynb | ISCB-Academy/keywords | dbbeb3bc8ebbda6a3f28b96366dc2a77a83e6c4d | [
"MIT"
] | null | null | null | key_words.ipynb | ISCB-Academy/keywords | dbbeb3bc8ebbda6a3f28b96366dc2a77a83e6c4d | [
"MIT"
] | null | null | null | 44.8125 | 509 | 0.427476 | [
[
[
"<a href=\"https://colab.research.google.com/github/ISCB-Academy/keywords/blob/main/key_words.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"!pip install git+https://github.com/LIAAD/yake",
"Collecting git+https://github.com/LIAAD/yake\n Cloning https://github.com/LIAAD/yake to /tmp/pip-req-build-vbp0sr1m\n Running command git clone -q https://github.com/LIAAD/yake /tmp/pip-req-build-vbp0sr1m\nRequirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from yake==0.4.8) (0.8.9)\nRequirement already satisfied: click>=6.0 in /usr/local/lib/python3.7/dist-packages (from yake==0.4.8) (7.1.2)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from yake==0.4.8) (1.19.5)\nCollecting segtok\n Downloading segtok-1.5.11-py3-none-any.whl (24 kB)\nRequirement already satisfied: networkx in /usr/local/lib/python3.7/dist-packages (from yake==0.4.8) (2.6.3)\nCollecting jellyfish\n Downloading jellyfish-0.9.0.tar.gz (132 kB)\n\u001b[K |████████████████████████████████| 132 kB 35.4 MB/s \n\u001b[?25hRequirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from segtok->yake==0.4.8) (2019.12.20)\nBuilding wheels for collected packages: yake, jellyfish\n Building wheel for yake (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for yake: filename=yake-0.4.8-py2.py3-none-any.whl size=60189 sha256=7e194862292e9ee62a68e53f1dc58e76513899e3570abbf3569e97f7f163996d\n Stored in directory: /tmp/pip-ephem-wheel-cache-91tpvaao/wheels/52/79/f4/dae9309f60266aa3767a4381405002b6f2955fbcf038d804da\n Building wheel for jellyfish (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for jellyfish: filename=jellyfish-0.9.0-cp37-cp37m-linux_x86_64.whl size=73970 sha256=f2b7ab80c78112a01d6ddee135375ef6f4afd557134db9014f2c83d77e7e485c\n Stored in directory: /root/.cache/pip/wheels/fe/99/4e/646ce766df0d070b0ef04db27aa11543e2767fda3075aec31b\nSuccessfully built yake jellyfish\nInstalling collected packages: segtok, jellyfish, yake\nSuccessfully installed jellyfish-0.9.0 segtok-1.5.11 yake-0.4.8\n"
],
[
"import yake\nimport pandas as pd",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"text = pd.read_csv(\"/content/drive/MyDrive/Education_ISCB/Bioschemas/ISCB_events.csv\")\ntext['abstract'] = text['abstract'].astype('string')",
"_____no_output_____"
],
[
"language = \"en\"\nmax_ngram_size = 2\ndeduplication_threshold = 0.3\ndeduplication_algo = 'seqm'\nwindowsSize=1\nnumOfKeywords = 5\n\ncustom_kw_extractor = yake.KeywordExtractor(lan=language, \n n=max_ngram_size, \n dedupFunc=deduplication_algo, dedupLim=deduplication_threshold,\n top=numOfKeywords, \n features=None)\n#keywords = custom_kw_extractor.extract_keywords(abstract)\n\nextract_keywords = lambda x: ', '.join(k[0] for k in custom_kw_extractor.extract_keywords(x))\n\ntext['TopKeyword'] = text['abstract'].apply(extract_keywords)\n\ntext.to_csv('/content/drive/MyDrive/Education_ISCB/Bioschemas/keywords.csv') \n",
"_____no_output_____"
],
[
"text.head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd6a0352cb1d554d647fa7042b71caa5b6d0f73 | 62,478 | ipynb | Jupyter Notebook | notebooks/ExperimentWithPerStationData.ipynb | isabelladegen/mlp-2021 | f577d97d060156bc09cdd03635cdefa7a7f7d839 | [
"BSD-2-Clause"
] | null | null | null | notebooks/ExperimentWithPerStationData.ipynb | isabelladegen/mlp-2021 | f577d97d060156bc09cdd03635cdefa7a7f7d839 | [
"BSD-2-Clause"
] | null | null | null | notebooks/ExperimentWithPerStationData.ipynb | isabelladegen/mlp-2021 | f577d97d060156bc09cdd03635cdefa7a7f7d839 | [
"BSD-2-Clause"
] | null | null | null | 67.91087 | 10,787 | 0.451423 | [
[
[
"# Phase 1\nData Investigation",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndata_201= pd.read_csv('../data/Train/station_201_deploy.csv')\ndata_275= pd.read_csv('../data/Train/station_275_deploy.csv')\n",
"_____no_output_____"
],
[
"data_201.head()",
"_____no_output_____"
],
[
"data_275.head()",
"_____no_output_____"
],
[
"#station, latitude, longitude and numDocks are per station\ndata_201.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 745 entries, 0 to 744\nData columns (total 25 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 station 745 non-null int64 \n 1 latitude 745 non-null float64\n 2 longitude 745 non-null float64\n 3 numDocks 745 non-null int64 \n 4 timestamp 745 non-null float64\n 5 year 745 non-null int64 \n 6 month 745 non-null int64 \n 7 day 745 non-null int64 \n 8 hour 745 non-null int64 \n 9 weekday 745 non-null object \n 10 weekhour 745 non-null int64 \n 11 isHoliday 745 non-null int64 \n 12 windMaxSpeed.m.s 744 non-null float64\n 13 windMeanSpeed.m.s 744 non-null float64\n 14 windDirection.grades 740 non-null float64\n 15 temperature.C 744 non-null float64\n 16 relHumidity.HR 744 non-null float64\n 17 airPressure.mb 744 non-null float64\n 18 precipitation.l.m2 744 non-null float64\n 19 bikes_3h_ago 741 non-null float64\n 20 full_profile_3h_diff_bikes 574 non-null float64\n 21 full_profile_bikes 577 non-null float64\n 22 short_profile_3h_diff_bikes 574 non-null float64\n 23 short_profile_bikes 577 non-null float64\n 24 bikes 744 non-null float64\ndtypes: float64(16), int64(8), object(1)\nmemory usage: 145.6+ KB\n"
],
[
"data_275.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 745 entries, 0 to 744\nData columns (total 25 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 station 745 non-null int64 \n 1 latitude 745 non-null float64\n 2 longitude 745 non-null float64\n 3 numDocks 745 non-null int64 \n 4 timestamp 745 non-null float64\n 5 year 745 non-null int64 \n 6 month 745 non-null int64 \n 7 day 745 non-null int64 \n 8 hour 745 non-null int64 \n 9 weekday 745 non-null object \n 10 weekhour 745 non-null int64 \n 11 isHoliday 745 non-null int64 \n 12 windMaxSpeed.m.s 744 non-null float64\n 13 windMeanSpeed.m.s 744 non-null float64\n 14 windDirection.grades 740 non-null float64\n 15 temperature.C 744 non-null float64\n 16 relHumidity.HR 744 non-null float64\n 17 airPressure.mb 744 non-null float64\n 18 precipitation.l.m2 744 non-null float64\n 19 bikes_3h_ago 741 non-null float64\n 20 full_profile_3h_diff_bikes 574 non-null float64\n 21 full_profile_bikes 577 non-null float64\n 22 short_profile_3h_diff_bikes 574 non-null float64\n 23 short_profile_bikes 577 non-null float64\n 24 bikes 744 non-null float64\ndtypes: float64(16), int64(8), object(1)\nmemory usage: 145.6+ KB\n"
],
[
"data_201[['isHoliday',\n 'windMaxSpeed.m.s', 'windMeanSpeed.m.s', 'windDirection.grades',\n 'temperature.C', 'relHumidity.HR', 'airPressure.mb',\n 'precipitation.l.m2', 'bikes_3h_ago', 'full_profile_3h_diff_bikes',\n 'full_profile_bikes', 'short_profile_3h_diff_bikes',\n 'short_profile_bikes', 'bikes']].describe()\n\n",
"_____no_output_____"
],
[
"data_201.isHoliday.value_counts()",
"_____no_output_____"
],
[
"data_201.weekhour.value_counts()",
"_____no_output_____"
],
[
"data_201.hour.value_counts().head(n=10)",
"_____no_output_____"
],
[
"data_201.month.value_counts()",
"_____no_output_____"
],
[
"data_275.year.value_counts()",
"_____no_output_____"
],
[
"data_201.weekday.value_counts()",
"_____no_output_____"
],
[
"data_201.bikes.value_counts().head(n=10)",
"_____no_output_____"
],
[
"data_201.full_profile_3h_diff_bikes.value_counts().head(n=10)",
"_____no_output_____"
],
[
"data_201.short_profile_3h_diff_bikes.value_counts().head(n=10)",
"_____no_output_____"
],
[
"data_201[:20]",
"_____no_output_____"
]
],
[
[
"# First conclusions:\nGoal: predict the number of bikes available at each station for each hour, which makes it possible to forecast 3h in advance.\n\nML:\n- This looks like a regression problem -> regression algorithms are probably the first to try\n\nData:\n- There are NaN fields that need to be handled\n- Not sure yet what full_profile_3h_diff_bikes, full_profile_bikes, short_profile_3h_diff_bikes and short_profile_bikes mean\n- Need some further investigation of which features are most correlated with the number of bikes\n\nEvaluation:\nMean absolute error between the predicted and true values",
"_____no_output_____"
]
],
[
[
"#reading all csv files for training into one dataframe\nimport glob\nimport os\ndf = pd.concat(map(pd.read_csv, glob.glob(os.path.join('../Data/Train', \"*.csv\"))), ignore_index=True)",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.info()\n",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 55875 entries, 0 to 55874\nData columns (total 25 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 station 55875 non-null int64 \n 1 latitude 55875 non-null float64\n 2 longitude 55875 non-null float64\n 3 numDocks 55875 non-null int64 \n 4 timestamp 55875 non-null float64\n 5 year 55875 non-null int64 \n 6 month 55875 non-null int64 \n 7 day 55875 non-null int64 \n 8 hour 55875 non-null int64 \n 9 weekday 55875 non-null object \n 10 weekhour 55875 non-null int64 \n 11 isHoliday 55875 non-null int64 \n 12 windMaxSpeed.m.s 55800 non-null float64\n 13 windMeanSpeed.m.s 55800 non-null float64\n 14 windDirection.grades 55500 non-null float64\n 15 temperature.C 55800 non-null float64\n 16 relHumidity.HR 55800 non-null float64\n 17 airPressure.mb 55800 non-null float64\n 18 precipitation.l.m2 55800 non-null float64\n 19 bikes_3h_ago 55575 non-null float64\n 20 full_profile_3h_diff_bikes 43050 non-null float64\n 21 full_profile_bikes 43275 non-null float64\n 22 short_profile_3h_diff_bikes 43050 non-null float64\n 23 short_profile_bikes 43275 non-null float64\n 24 bikes 55800 non-null float64\ndtypes: float64(16), int64(8), object(1)\nmemory usage: 10.7+ MB\n"
],
[
"#75 rows don't have a label! -> Probably best to remove those for training\ndf[df['bikes'].isnull()].shape",
"_____no_output_____"
],
[
"#most rows have some NaN values\nis_NaN = df.isnull()\nrow_has_NaN = is_NaN.any(axis=1)\nrows_with_NaN = df[row_has_NaN]\nrows_with_NaN.shape",
"_____no_output_____"
],
[
"#all stations have docks\ndf[df['numDocks'].isnull()]",
"_____no_output_____"
],
[
"df.dropna(subset=['bikes']).shape",
"_____no_output_____"
],
[
"# figure out how many rows don't have bikes from 3h ago \ndf[df.bikes_3h_ago.isnull()].shape",
"_____no_output_____"
],
[
"# find station that has all bikes\nstations_with_all_bikes = df[~df['bikes'].isna()]\nstations_with_nan_in_bikes = df[df['bikes'].isna()]\nprint(f'All Stations:\\n {sorted(set(stations_with_all_bikes[\"station\"]))}')\nprint(f'Stations that don\\'t have nan in bikes:\\n {sorted(set(stations_with_all_bikes[\"station\"]))}')\nprint(f'Stations that have nan in bikes:\\n {sorted(set(stations_with_nan_in_bikes[\"station\"]))}')\n\n# all station have one bike null",
"All Stations:\n [201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275]\nStations that don't have nan in bikes:\n [201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275]\nStations that have nan in bikes:\n [201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275]\n"
],
[
"#Number of examples per station\nstation_ids = sorted(set(df['station']))\nfor station in station_ids:\n number_of_bikes = df.loc[df['station'] == station].shape[0]\n print(f'Number of examples for station {station}: {number_of_bikes}')",
"Number of examples for station 201: 745\nNumber of examples for station 202: 745\nNumber of examples for station 203: 745\nNumber of examples for station 204: 745\nNumber of examples for station 205: 745\nNumber of examples for station 206: 745\nNumber of examples for station 207: 745\nNumber of examples for station 208: 745\nNumber of examples for station 209: 745\nNumber of examples for station 210: 745\nNumber of examples for station 211: 745\nNumber of examples for station 212: 745\nNumber of examples for station 213: 745\nNumber of examples for station 214: 745\nNumber of examples for station 215: 745\nNumber of examples for station 216: 745\nNumber of examples for station 217: 745\nNumber of examples for station 218: 745\nNumber of examples for station 219: 745\nNumber of examples for station 220: 745\nNumber of examples for station 221: 745\nNumber of examples for station 222: 745\nNumber of examples for station 223: 745\nNumber of examples for station 224: 745\nNumber of examples for station 225: 745\nNumber of examples for station 226: 745\nNumber of examples for station 227: 745\nNumber of examples for station 228: 745\nNumber of examples for station 229: 745\nNumber of examples for station 230: 745\nNumber of examples for station 231: 745\nNumber of examples for station 232: 745\nNumber of examples for station 233: 745\nNumber of examples for station 234: 745\nNumber of examples for station 235: 745\nNumber of examples for station 236: 745\nNumber of examples for station 237: 745\nNumber of examples for station 238: 745\nNumber of examples for station 239: 745\nNumber of examples for station 240: 745\nNumber of examples for station 241: 745\nNumber of examples for station 242: 745\nNumber of examples for station 243: 745\nNumber of examples for station 244: 745\nNumber of examples for station 245: 745\nNumber of examples for station 246: 745\nNumber of examples for station 247: 745\nNumber of examples for station 248: 745\nNumber of examples for station 
249: 745\nNumber of examples for station 250: 745\nNumber of examples for station 251: 745\nNumber of examples for station 252: 745\nNumber of examples for station 253: 745\nNumber of examples for station 254: 745\nNumber of examples for station 255: 745\nNumber of examples for station 256: 745\nNumber of examples for station 257: 745\nNumber of examples for station 258: 745\nNumber of examples for station 259: 745\nNumber of examples for station 260: 745\nNumber of examples for station 261: 745\nNumber of examples for station 262: 745\nNumber of examples for station 263: 745\nNumber of examples for station 264: 745\nNumber of examples for station 265: 745\nNumber of examples for station 266: 745\nNumber of examples for station 267: 745\nNumber of examples for station 268: 745\nNumber of examples for station 269: 745\nNumber of examples for station 270: 745\nNumber of examples for station 271: 745\nNumber of examples for station 272: 745\nNumber of examples for station 273: 745\nNumber of examples for station 274: 745\nNumber of examples for station 275: 745\n"
],
[
"df[df['weekhour'].isnull()].shape",
"_____no_output_____"
],
[
"df[df['precipitation.l.m2'].isnull()].shape",
"_____no_output_____"
],
[
"df.loc[df['precipitation.l.m2'] == 0.0].shape",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd6a06195cc1c9f508c2d2e152379bd6d48a5d9 | 270,592 | ipynb | Jupyter Notebook | Trainer-Collaboratories/Ensamble/EVALUASI_MODEL_(k=10) (1).ipynb | MarioTiara/MarioTiara-RD-Detection-Ensemble-CNN | 95101b942a7e51078bfe643021e745e51e82516f | [
"MIT"
] | 1 | 2022-03-24T18:16:33.000Z | 2022-03-24T18:16:33.000Z | Trainer-Collaboratories/Ensamble/EVALUASI_MODEL_(k=10) (1).ipynb | MarioTiara/MarioTiara-RD-Detection-Ensemble-CNN | 95101b942a7e51078bfe643021e745e51e82516f | [
"MIT"
] | null | null | null | Trainer-Collaboratories/Ensamble/EVALUASI_MODEL_(k=10) (1).ipynb | MarioTiara/MarioTiara-RD-Detection-Ensemble-CNN | 95101b942a7e51078bfe643021e745e51e82516f | [
"MIT"
] | 2 | 2022-02-08T05:41:17.000Z | 2022-03-31T06:56:41.000Z | 150.162042 | 112,962 | 0.86337 | [
[
[
"### **Import Google Drive**",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n"
]
],
[
[
"### **Import Library**",
"_____no_output_____"
]
],
[
[
"import glob\nimport numpy as np\nimport os\nimport shutil\nnp.random.seed(42)\nfrom sklearn.preprocessing import LabelEncoder\nimport cv2\nimport tensorflow as tf\nimport keras\nimport shutil\nimport random\nimport warnings\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.utils import class_weight\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, cohen_kappa_score\nfrom sklearn.metrics import accuracy_score, classification_report",
"Using TensorFlow backend.\n/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n"
]
],
[
[
"### **Load Data**",
"_____no_output_____"
]
],
[
[
"os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')\nTrain = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/K-means Cluster/Train_K-means(10)/*')\nVal=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/K-means Cluster/Val_K-means(10)/*')\nTest=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/K-means Cluster/Test_K-means(10)/*')",
"_____no_output_____"
],
[
"import matplotlib.image as mpimg\nfor ima in Train[600:601]:\n img=mpimg.imread(ima)\n imgplot = plt.imshow(img)\n plt.show()",
"_____no_output_____"
]
],
[
[
"### **Data Preparation**",
"_____no_output_____"
]
],
[
[
"nrows = 224\nncolumns = 224\nchannels = 3 \n\ndef read_and_process_image(list_of_images):\n \n X = [] # images\n y = [] # labels\n \n for image in list_of_images:\n X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image\n #get the labels\n if 'Normal' in image:\n y.append(0)\n elif 'Mild' in image:\n y.append(1)\n elif 'Moderate' in image:\n y.append(2)\n elif 'Severe' in image:\n y.append(3)\n\n \n return X, y",
"_____no_output_____"
],
[
"X_train, y_train = read_and_process_image(Train)\nX_val, y_val = read_and_process_image(Val)\nX_test, y_test = read_and_process_image(Test)",
"_____no_output_____"
],
[
"import seaborn as sns\nimport gc \ngc.collect()\n\n#Convert list to numpy array\nX_train = np.array(X_train)\ny_train= np.array(y_train)\n\nX_val = np.array(X_val)\ny_val= np.array(y_val)\n\nX_test = np.array(X_test)\ny_test= np.array(y_test)\n\nprint('Train:',X_train.shape,y_train.shape)\nprint('Val:',X_val.shape,y_val.shape)\nprint('Test',X_test.shape,y_test.shape)",
"Train: (6000, 224, 224, 3) (6000,)\nVal: (1500, 224, 224, 3) (1500,)\nTest (500, 224, 224, 3) (500,)\n"
],
[
"sns.countplot(y_train)\nplt.title('Total Data Training')",
"_____no_output_____"
],
[
"sns.countplot(y_val)\nplt.title('Total Data Validasi')",
"_____no_output_____"
],
[
"sns.countplot(y_test)\nplt.title('Total Data Test')",
"_____no_output_____"
],
[
"y_train_ohe = pd.get_dummies(y_train)\ny_val_ohe=pd.get_dummies(y_val)\ny_test_ohe=pd.get_dummies(y_test)\n\ny_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape",
"_____no_output_____"
]
],
[
[
"### **Model Parameters**",
"_____no_output_____"
]
],
[
[
"batch_size = 16\nEPOCHS = 100\nWARMUP_EPOCHS = 2\nLEARNING_RATE = 1e-4\nWARMUP_LEARNING_RATE = 1e-3\nHEIGHT = 224\nWIDTH = 224\nCANAL = 3\nN_CLASSES = 4\nES_PATIENCE = 5\nRLROP_PATIENCE = 3\nDECAY_DROP = 0.5",
"_____no_output_____"
]
],
[
[
"### **Data Generator**",
"_____no_output_____"
]
],
[
[
"train_datagen =tf.keras.preprocessing.image.ImageDataGenerator(\n rotation_range=360,\n horizontal_flip=True,\n vertical_flip=True)\n\ntest_datagen=tf.keras.preprocessing.image.ImageDataGenerator()",
"_____no_output_____"
],
[
"train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)\nval_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)\ntest_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)",
"_____no_output_____"
]
],
[
[
"### **DENSENET EVALUATION**",
"_____no_output_____"
]
],
[
[
"input_shape = (224,224,3)\nmodel_input =tf.keras.Input(shape=input_shape)\n\nBase_model1 =tf.keras.applications.DenseNet201(input_shape=input_shape, input_tensor=model_input, include_top=False, weights=None)\nfor layer in Base_model1.layers:\n layer.trainable = True",
"_____no_output_____"
],
[
"Base_model1_last_layer = Base_model1.get_layer('relu')\nprint('last layer output shape:',Base_model1_last_layer.output_shape)\nBase_model1_last_output = Base_model1_last_layer.output",
"last layer output shape: (None, 7, 7, 1920)\n"
],
[
"x1 =tf.keras.layers.GlobalAveragePooling2D()(Base_model1_last_output)\nx1 =tf.keras.layers.Dropout(0.25)(x1)\nx1 =tf.keras.layers.Dense(512, activation='relu')(x1)\nx1 =tf.keras.layers.Dropout(0.25)(x1)\nfinal_output1 =tf.keras.layers.Dense(4, activation='softmax', name='final_output')(x1)\nDensNet201_model =tf.keras.models.Model(model_input, final_output1)\nmetric_list = [\"accuracy\"]\noptimizer =tf.keras.optimizers.Adam(lr=2.5000e-05)\nDensNet201_model.compile(optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=metric_list)\nDensNet201_model.load_weights('/content/drive/My Drive/Colab Notebooks/DATA RD/MODEL/Weights/K=10/Weight_DensNet201_Optimal_(k=10).h5')",
"_____no_output_____"
]
],
[
[
"**DENSENET EVALUATION ON VALIDATION DATA**",
"_____no_output_____"
]
],
[
[
"loss_val, acc_val = DensNet201_model.evaluate(X_val,y_val_ohe,batch_size=1, verbose=1)\nprint(\"Validation: accuracy = %f ; loss_v = %f\" % (acc_val, loss_val))",
"1500/1500 [==============================] - 39s 26ms/step - loss: 0.2945 - accuracy: 0.8853\nValidation: accuracy = 0.885333 ; loss_v = 0.294495\n"
],
[
"Validation_pred = DensNet201_model.predict(X_val)\nPrdict_label = np.argmax(Validation_pred, -1)\nActual_label = y_val\n\nprint('Accuracy on Validation Data: %2.2f%%' % (100*accuracy_score(Actual_label, Prdict_label)))\nprint(classification_report(Actual_label, Prdict_label))",
"Accuracy on Validation Data: 88.53%\n precision recall f1-score support\n\n 0 0.97 0.99 0.98 375\n 1 0.87 0.89 0.88 375\n 2 0.82 0.80 0.81 375\n 3 0.88 0.86 0.87 375\n\n accuracy 0.89 1500\n macro avg 0.88 0.89 0.88 1500\nweighted avg 0.88 0.89 0.88 1500\n\n"
],
[
"import seaborn as sns\nfrom sklearn.metrics import confusion_matrix\nsns.heatmap(confusion_matrix(Actual_label , Prdict_label), \n annot=True, fmt=\"d\", cbar = True, cmap = plt.cm.Blues, vmax = X_val.shape[0]//16)",
"_____no_output_____"
]
],
[
[
"**DENSENET EVALUATION ON TEST DATA**",
"_____no_output_____"
]
],
[
[
"loss_test, acc_test =DensNet201_model.evaluate(X_test,y_test_ohe,batch_size=1, verbose=1)\nprint(\"Test: accuracy = %f ; loss_v = %f\" % (acc_test, loss_test))",
"500/500 [==============================] - 13s 26ms/step - loss: 0.3163 - accuracy: 0.8840\nTest: accuracy = 0.884000 ; loss_v = 0.316331\n"
],
[
"Test_predict = DensNet201_model.predict(X_test)\n\nPrdict_label = np.argmax(Test_predict, -1)\nActual_label = y_test\n\nprint('Accuracy on Test Data: %2.2f%%' % (100*accuracy_score(Actual_label, Prdict_label)))\nprint(classification_report(Actual_label, Prdict_label))",
"Accuracy on Test Data: 88.40%\n precision recall f1-score support\n\n 0 0.97 0.99 0.98 125\n 1 0.86 0.86 0.86 125\n 2 0.80 0.82 0.81 125\n 3 0.90 0.86 0.88 125\n\n accuracy 0.88 500\n macro avg 0.88 0.88 0.88 500\nweighted avg 0.88 0.88 0.88 500\n\n"
],
[
"import seaborn as sns\nfrom sklearn.metrics import confusion_matrix\nsns.heatmap(confusion_matrix(Actual_label, Prdict_label), \n annot=True, fmt=\"d\", cbar = True, cmap = plt.cm.Blues, vmax = X_test.shape[0]//16)",
"_____no_output_____"
]
],
[
[
"### **INCEPTIONV3 EVALUATION**",
"_____no_output_____"
]
],
[
[
"Base_model2 =tf.keras.applications.InceptionV3(input_shape=input_shape, input_tensor=model_input, include_top=False, weights=None)\nfor layer in Base_model2.layers:\n layer.trainable = True",
"_____no_output_____"
],
[
"Base_model2_last_layer = Base_model2.get_layer('mixed10')\nprint('last layer output shape:', Base_model2_last_layer.output_shape)\nBase_model2_last_output = Base_model2_last_layer.output",
"last layer output shape: (None, 5, 5, 2048)\n"
],
[
"x2 =tf.keras.layers.GlobalAveragePooling2D()(Base_model2_last_output)\nx2 =tf.keras.layers.Dropout(0.25)(x2)\nx2 =tf.keras.layers.Dense(1024, activation='relu')(x2)\nx2 =tf.keras.layers.Dropout(0.25)(x2)\nfinal_output2 =tf.keras.layers.Dense(4, activation='softmax', name='final_output2')(x2)\nInceptionV3_model =tf.keras.models.Model(model_input, final_output2)\nmetric_list = [\"accuracy\"]\noptimizer = tf.keras.optimizers.Adam(1.2500e-05)\nInceptionV3_model.compile(optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=metric_list)\nInceptionV3_model.load_weights('/content/drive/My Drive/Colab Notebooks/DATA RD/MODEL/Weights/K=10/Weight_InceptionV3_Optimal_(k=10).h5')",
"_____no_output_____"
]
],
[
[
"**INCEPTIONV3 EVALUATION ON VALIDATION DATA**",
"_____no_output_____"
]
],
[
[
"loss_val, acc_val = InceptionV3_model.evaluate(X_val,y_val_ohe,batch_size=1, verbose=1)\nprint(\"Validation: accuracy = %f ; loss_v = %f\" % (acc_val, loss_val))",
"1500/1500 [==============================] - 22s 14ms/step - loss: 0.2652 - accuracy: 0.9100\nValidation: accuracy = 0.910000 ; loss_v = 0.265206\n"
],
[
"Validation_pred = InceptionV3_model.predict(X_val)\nPrdict_label = np.argmax(Validation_pred, -1)\nActual_label = y_val\n\nprint('Accuracy on Validation Data: %2.2f%%' % (100*accuracy_score(Actual_label, Prdict_label)))\nprint(classification_report(Actual_label, Prdict_label))",
"Accuracy on Validation Data: 91.00%\n precision recall f1-score support\n\n 0 0.99 0.97 0.98 375\n 1 0.95 0.88 0.91 375\n 2 0.83 0.87 0.85 375\n 3 0.88 0.91 0.89 375\n\n accuracy 0.91 1500\n macro avg 0.91 0.91 0.91 1500\nweighted avg 0.91 0.91 0.91 1500\n\n"
],
[
"sns.heatmap(confusion_matrix(Actual_label , Prdict_label), \n annot=True, fmt=\"d\", cbar = True, cmap = plt.cm.Blues, vmax = X_val.shape[0]//16)",
"_____no_output_____"
]
],
[
[
"**INCEPTIONV3 EVALUATION ON TEST DATA**",
"_____no_output_____"
]
],
[
[
"loss_test, acc_test = InceptionV3_model.evaluate(X_test,y_test_ohe,batch_size=1, verbose=1)\nprint(\"Test: accuracy = %f ; loss_v = %f\" % (acc_test, loss_test))",
"500/500 [==============================] - 7s 14ms/step - loss: 0.2953 - accuracy: 0.9060\nTest: accuracy = 0.906000 ; loss_v = 0.295299\n"
],
[
"Test_predict = InceptionV3_model.predict(X_test)\n\nPrdict_label = np.argmax(Test_predict, -1)\nActual_label = y_test\n\nprint('Accuracy on Test Data: %2.2f%%' % (100*accuracy_score(Actual_label, Prdict_label)))\nprint(classification_report(Actual_label, Prdict_label))",
"Accuracy on Test Data: 90.60%\n precision recall f1-score support\n\n 0 0.98 0.98 0.98 125\n 1 0.94 0.89 0.91 125\n 2 0.83 0.84 0.84 125\n 3 0.87 0.91 0.89 125\n\n accuracy 0.91 500\n macro avg 0.91 0.91 0.91 500\nweighted avg 0.91 0.91 0.91 500\n\n"
],
[
"sns.heatmap(confusion_matrix(Actual_label, Prdict_label), \n annot=True, fmt=\"d\", cbar = True, cmap = plt.cm.Blues, vmax = X_test.shape[0]//16)",
"_____no_output_____"
]
],
[
[
"### **MOBILENETV2 EVALUATION**",
"_____no_output_____"
]
],
[
[
"Base_model3 =tf.keras.applications.MobileNetV2(input_shape=input_shape, input_tensor=model_input, include_top=False, weights=None)\nfor layer in Base_model3.layers:\n layer.trainable = True",
"_____no_output_____"
],
[
"Base_model3_last_layer = Base_model3.get_layer('out_relu')\nprint('last layer output shape:', Base_model3_last_layer.output_shape)\nBase_model3_last_output = Base_model3_last_layer.output",
"last layer output shape: (None, 7, 7, 1280)\n"
],
[
"x3 =tf.keras.layers.GlobalAveragePooling2D()(Base_model3_last_output)\nx3 =tf.keras.layers.Dropout(0.5)(x3)\nx3 =tf.keras.layers.Dense(512, activation='relu')(x3)\nx3 =tf.keras.layers.Dropout(0.5)(x3)\nfinal_output3 =tf.keras.layers.Dense(4, activation='softmax', name='final_output3')(x3)\nMobileNetV2_model =tf.keras.models.Model(model_input, final_output3)\nmetric_list = [\"accuracy\"]\noptimizer = tf.keras.optimizers.Adam(lr=5.0000e-05)\nMobileNetV2_model.compile(optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=metric_list)\nMobileNetV2_model.load_weights('/content/drive/My Drive/Colab Notebooks/DATA RD/MODEL/Weights/K=10/Weight_MobileNetV2_Optimal_(10).h5')",
"_____no_output_____"
]
],
[
[
"**MOBILENETV2 EVALUATION ON VALIDATION DATA**",
"_____no_output_____"
]
],
[
[
"loss_val, acc_val = MobileNetV2_model.evaluate(X_val,y_val_ohe,batch_size=1, verbose=1)\nprint(\"Validation: accuracy = %f ; loss_v = %f\" % (acc_val, loss_val))",
"1500/1500 [==============================] - 7s 5ms/step - loss: 0.5138 - accuracy: 0.8040\nValidation: accuracy = 0.804000 ; loss_v = 0.513802\n"
],
[
"Validation_pred = MobileNetV2_model.predict(X_val)\nPrdict_label = np.argmax(Validation_pred, -1)\nActual_label = y_val\n\nprint('Accuracy on Validation Data: %2.2f%%' % (100*accuracy_score(Actual_label, Prdict_label)))\nprint(classification_report(Actual_label, Prdict_label))",
"Accuracy on Validation Data: 80.40%\n precision recall f1-score support\n\n 0 0.99 0.94 0.96 375\n 1 0.80 0.79 0.79 375\n 2 0.62 0.87 0.72 375\n 3 0.95 0.61 0.74 375\n\n accuracy 0.80 1500\n macro avg 0.84 0.80 0.81 1500\nweighted avg 0.84 0.80 0.81 1500\n\n"
],
[
"sns.heatmap(confusion_matrix(Actual_label , Prdict_label), \n annot=True, fmt=\"d\", cbar = True, cmap = plt.cm.Blues, vmax = X_val.shape[0]//16)",
"_____no_output_____"
]
],
[
[
"**MOBILENETV2 EVALUATION ON TEST DATA**",
"_____no_output_____"
]
],
[
[
"loss_test, acc_test = MobileNetV2_model.evaluate(X_test,y_test_ohe,batch_size=1, verbose=1)\nprint(\"Test: accuracy = %f ; loss_v = %f\" % (acc_test, loss_test))",
"500/500 [==============================] - 2s 5ms/step - loss: 0.5002 - accuracy: 0.8240\nTest: accuracy = 0.824000 ; loss_v = 0.500181\n"
],
[
"Test_predict = MobileNetV2_model.predict(X_test)\n\nPrdict_label = np.argmax(Test_predict, -1)\nActual_label = y_test\n\nprint('Accuracy on Test Data: %2.2f%%' % (100*accuracy_score(Actual_label, Prdict_label)))\nprint(classification_report(Actual_label, Prdict_label))",
"Accuracy on Test Data: 82.40%\n precision recall f1-score support\n\n 0 0.98 0.99 0.99 125\n 1 0.80 0.74 0.77 125\n 2 0.65 0.90 0.75 125\n 3 0.97 0.66 0.79 125\n\n accuracy 0.82 500\n macro avg 0.85 0.82 0.83 500\nweighted avg 0.85 0.82 0.83 500\n\n"
],
[
"sns.heatmap(confusion_matrix(Actual_label, Prdict_label), \n annot=True, fmt=\"d\", cbar = True, cmap = plt.cm.Blues, vmax = X_test.shape[0]//16)",
"_____no_output_____"
]
],
[
[
"### **ENSEMBLE**",
"_____no_output_____"
]
],
[
[
"def ensemble(models, model_input):\n outputs = [model.outputs[0] for model in models]\n y =tf.keras.layers.Average()(outputs)\n model =tf.keras.Model(model_input,y,name='ensemble')\n return model",
"_____no_output_____"
],
[
"ensemble_model = ensemble([DensNet201_model,InceptionV3_model,MobileNetV2_model], model_input)\nmetric_list = [\"accuracy\"]\noptimizer =tf.keras.optimizers.Adam(lr=2.5000e-05)\nensemble_model.compile(optimizer=optimizer, loss=\"categorical_crossentropy\", metrics=metric_list)",
"_____no_output_____"
]
],
[
[
"**ENSEMBLE EVALUATION ON VALIDATION DATA**",
"_____no_output_____"
]
],
[
[
"loss_val, acc_val = ensemble_model.evaluate(X_val,y_val_ohe,batch_size=1, verbose=1)\nprint(\"Validation: accuracy = %f ; loss_v = %f\" % (acc_val, loss_val))",
"1500/1500 [==============================] - 67s 44ms/step - loss: 0.2590 - accuracy: 0.9087\nValidation: accuracy = 0.908667 ; loss_v = 0.258961\n"
],
[
"Validation_pred = ensemble_model.predict(X_val)\nPrdict_label = np.argmax(Validation_pred, -1)\nActual_label = y_val\n\nprint('Accuracy on Validation Data: %2.2f%%' % (100*accuracy_score(Actual_label, Prdict_label)))\nprint(classification_report(Actual_label, Prdict_label))",
"Accuracy on Validation Data: 90.87%\n precision recall f1-score support\n\n 0 0.99 0.98 0.99 375\n 1 0.93 0.89 0.91 375\n 2 0.80 0.90 0.85 375\n 3 0.93 0.85 0.89 375\n\n accuracy 0.91 1500\n macro avg 0.91 0.91 0.91 1500\nweighted avg 0.91 0.91 0.91 1500\n\n"
],
[
"sns.heatmap(confusion_matrix(Actual_label, Prdict_label), \n annot=True, fmt=\"d\", cbar = True, cmap = plt.cm.Blues, vmax = X_test.shape[0]//16)",
"_____no_output_____"
]
],
[
[
"**ENSEMBLE EVALUATION ON TEST DATA**",
"_____no_output_____"
]
],
[
[
"loss_test, acc_test = ensemble_model.evaluate(X_test,y_test_ohe,batch_size=1, verbose=1)\nprint(\"Test: accuracy = %f ; loss_v = %f\" % (acc_test, loss_test))",
"500/500 [==============================] - 22s 45ms/step - loss: 0.2692 - accuracy: 0.9140\nTest: accuracy = 0.914000 ; loss_v = 0.269187\n"
],
[
"Test_predict = ensemble_model.predict(X_test)\n\nPrdict_label = np.argmax(Test_predict, -1)\nActual_label = y_test\n\nprint('Accuracy on Test Data: %2.2f%%' % (100*accuracy_score(Actual_label, Prdict_label)))\nprint(classification_report(Actual_label, Prdict_label))",
"Accuracy on Test Data: 91.40%\n precision recall f1-score support\n\n 0 0.98 0.99 0.99 125\n 1 0.92 0.90 0.91 125\n 2 0.83 0.87 0.85 125\n 3 0.93 0.89 0.91 125\n\n accuracy 0.91 500\n macro avg 0.91 0.91 0.91 500\nweighted avg 0.91 0.91 0.91 500\n\n"
],
[
"sns.heatmap(confusion_matrix(Actual_label, Prdict_label), \n annot=True, fmt=\"d\", cbar = True, cmap = plt.cm.Blues, vmax = X_test.shape[0]//16)",
"_____no_output_____"
],
[
"ensemble_model.save('Ensamble_Model(k=10).h5')\nensemble_model.save_weights('Weight_Ensamble_Model(k=10).h5')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecd6b080d07291329f7897db542291685b0dc0b8 | 65,775 | ipynb | Jupyter Notebook | intrinsic/inBias.ipynb | MSR-LIT/MultilingualBias | 37dbc7ff4ec2d35edd4f3fe19feb872f2dde128e | [
"MIT"
] | 2 | 2020-07-15T04:18:20.000Z | 2020-08-03T02:50:21.000Z | intrinsic/inBias.ipynb | MSR-LIT/MultilingualBias | 37dbc7ff4ec2d35edd4f3fe19feb872f2dde128e | [
"MIT"
] | null | null | null | intrinsic/inBias.ipynb | MSR-LIT/MultilingualBias | 37dbc7ff4ec2d35edd4f3fe19feb872f2dde128e | [
"MIT"
] | null | null | null | 46.682044 | 23,888 | 0.721946 | [
[
[
"# Gender Bias in Multilingual Embeddings\n\nThis notebook is for the intrinsic bias analysis. For extrinsic bias analysis, please refer to the \"Extrinsic Bias Analysis\" section in our paper.",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import json\nimport numpy as np\nimport os\nimport random\nrandom.seed(2)\nimport scipy\nfrom collections import defaultdict\nimport tqdm\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
]
],
[
[
"We adopt the `WordEmbedding` (WE) class from [here](https://github.com/tolga-b/debiaswe)",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.insert(0, \"/home/jyzhao/Github/debiaswe/debiaswe\")\nimport we",
"_____no_output_____"
],
[
"def sim(E, x, y):\n return 1 - scipy.spatial.distance.cosine(E.v(x), E.v(y))\n\ndef dis(E, x, y):\n return scipy.spatial.distance.cosine(E.v(x), E.v(y))\n\ndef sim_v(u, v):\n return 1 - scipy.spatial.distance.cosine(u, v)",
"_____no_output_____"
],
[
"def cal_bias_paired(E, paired_m, paired_f, gender_def_pairs):\n assert len(paired_m) == len(paired_f) \n O, M, F = [], [], []\n \n for idx in range(len(paired_m)):\n overall = np.average([abs(dis(E, paired_m[idx], m) - dis(E, paired_f[idx], f))\\\n for f,m in gender_def_pairs])\n O.append(overall) \n return O, np.average(O)",
"_____no_output_____"
],
[
"#read occ list from other languages. format: en-occ es-occ\ndef read_occs(f_file, m_file, E):\n occ_def_f_raw = []\n occ_def_m_raw = []\n with open(f_file, 'r') as es_f, open(m_file, 'r') as es_m:\n for line in es_f.readlines():\n tokens = line.strip().split('\\t')\n if tokens[0].lower() not in en_occ_def_f:\n print(\"wrong line:\", tokens)\n break\n occ_def_f_raw.append(tokens[1].lower())\n for line in es_m.readlines():\n tokens = line.strip().split('\\t')\n if tokens[0].lower() not in en_occ_def_m:\n print(\"wrong line:\", tokens)\n break\n occ_def_m_raw.append(tokens[1].lower())\n\n print(len(occ_def_m_raw), len(occ_def_f_raw)) \n ids = []\n for idx in range(len(occ_def_f_raw)):\n if occ_def_f_raw[idx] in E.words and occ_def_m_raw[idx] in E.words:\n ids.append(idx)\n print(f'{len(ids)} pairs in the embeddings')\n occ_def_f = [occ_def_f_raw[x] for x in ids]\n occ_def_m = [occ_def_m_raw[x] for x in ids]\n return occ_def_f, occ_def_m",
"_____no_output_____"
]
],
[
[
"# 1. inBias in English",
"_____no_output_____"
],
[
"#### - Read embeddings\nGenerate a debiased EN embedding from [here](https://github.com/tolga-b/debiaswe)",
"_____no_output_____"
]
],
[
[
"en_ori_file = '/home/jyzhao/git7/fastText/alignment/data/wiki.en.vec'\nen_debiased_file = '/home/jyzhao/git7/fastText/alignment/data/wiki.en_debias.vec'\nen_sup_es_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.en-es.vec' #fasttext EN-->Es\nen_sup_de_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.en-de.vec' #fasttext EN-->de\nen_sup_fr_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.en-fr.vec' \nen_sup_tr_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.en-tr.vec'",
"_____no_output_____"
],
[
"gender_definitions_files = 'inBias/en_gender_pairs.json'\ngender_pairs = json.load(open(gender_definitions_files, 'r'))\ngender_pairs = [[x.lower(), y.lower()] for x,y in gender_pairs]",
"_____no_output_____"
],
[
"#read original EN\nen_ori_E = we.WordEmbedding(en_ori_file)\n#gender direction \nen_ori_gd = we.doPCA(gender_pairs, en_ori_E).components_[0] ",
"0it [00:00, ?it/s]"
],
[
"#reading debiased EN\nen_deb_E = we.WordEmbedding(en_debiased_file)\nen_deb_gd = we.doPCA(gender_pairs, en_deb_E).components_[0]",
"_____no_output_____"
],
[
"#read EN-ES\nen_sup_es_E = we.WordEmbedding(en_sup_es_file)\nen_sup_es_gd = we.doPCA(gender_pairs, en_sup_es_E).components_[0]",
"_____no_output_____"
],
[
"#read EN-DE\nen_sup_de_E = we.WordEmbedding(en_sup_de_file)\nen_sup_de_gd = we.doPCA(gender_pairs, en_sup_de_E).components_[0]",
"_____no_output_____"
],
[
"#read EN-FR\nen_sup_fr_E = we.WordEmbedding(en_sup_fr_file)\nen_sup_fr_gd = we.doPCA(gender_pairs, en_sup_fr_E).components_[0]",
"_____no_output_____"
],
[
"#read EN-TR\nen_sup_tr_E = we.WordEmbedding(en_sup_tr_file)\nen_sup_tr_gd = we.doPCA(gender_pairs, en_sup_tr_E).components_[0]",
"_____no_output_____"
]
],
[
[
"\n#### - Read masculine/feminine occupations. The occupation lists are adopted from [Learning Gender-Neutral Word Embeddings](https://github.com/uclanlp/gn_glove) and extended by us. ",
"_____no_output_____"
]
],
[
[
"vocab = en_ori_E.words",
"_____no_output_____"
],
[
"en_occ_def_f_raw = [x.strip().split('\\t')[0] for x in open('inBias/en_female_def_occ').readlines()]\nen_occ_def_m_raw = [x.strip().split('\\t')[0] for x in open('inBias/en_male_def_occ').readlines()]",
"_____no_output_____"
],
[
"#make sure both feminine and masculine versions are in the vocab\nids = []\nfor x in range(len(en_occ_def_m_raw)):\n if en_occ_def_f_raw[x] in vocab and en_occ_def_m_raw[x] in vocab:\n ids.append(x) \n else:\n print(x)\nen_occ_def_f = [en_occ_def_f_raw[x] for x in ids]\nen_occ_def_m = [en_occ_def_m_raw[x] for x in ids]",
"_____no_output_____"
]
],
[
[
"- **inBias score in EN-xx embeddings**",
"_____no_output_____"
]
],
[
[
"#Using paired English occupations such as actor/actress\nO_en_ori_p, O = cal_bias_paired(en_ori_E, en_occ_def_m[:137], en_occ_def_f[:137], gender_pairs)\nprint(f\"in original EN, average bias in strong gendered words:{O}\")\nO_en_ori_s, O = cal_bias_paired(en_ori_E, en_occ_def_m[137:], en_occ_def_f[137:], gender_pairs)\nprint(f\"in original EN, average bias in weak gendered words:{O}\")\nO_en_ori, O = cal_bias_paired(en_ori_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in original EN, average bias across all occupation words:{O}\")",
"in original EN, average bias in strong gendered words:0.113837761300656\nin original EN, average bias in weak gendered words:0.047707567879007436\nin original EN, average bias across all occupation words:0.08295984997537262\n"
],
[
"O_en_sup_es_p, O = cal_bias_paired(en_sup_es_E, en_occ_def_m[:137], en_occ_def_f[:137], gender_pairs)\nprint(f\"in EN-ES, average bias in strong gendered words:{O}\")\nO_en_sup_es_s, O, = cal_bias_paired(en_sup_es_E, en_occ_def_m[137:], en_occ_def_f[137:], gender_pairs)\nprint(f\"in EN-ES, average bias in weak gendered words:{O}\")\nO_en_sup_es, O = cal_bias_paired(en_sup_es_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in EN-ES, average bias:{O}\")",
"in EN-ES, average bias in strong gendered words:0.08481298356233141\nin EN-ES, average bias in weak gendered words:0.040029569166815944\nin EN-ES, average bias:0.06390243987570941\n"
],
[
"scipy.stats.ttest_ind(O_en_ori, O_en_sup_es)",
"_____no_output_____"
],
[
"scipy.stats.ttest_ind(O_en_ori_p, O_en_sup_es_p)",
"_____no_output_____"
],
[
"O_en_sup_de_p, O = cal_bias_paired(en_sup_de_E, en_occ_def_m[:137], en_occ_def_f[:137],gender_pairs)\nprint(f\"in DE-EN, average bias:{O}\")\nO_en_sup_de_s, O = cal_bias_paired(en_sup_de_E, en_occ_def_m[137:], en_occ_def_f[137:], gender_pairs)\nprint(f\"in DE-EN, average bias:{O}\")\nO_en_sup_de, O = cal_bias_paired(en_sup_de_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in DE-EN, average bias:{O}\")",
"in DE-EN, average bias:0.0934872714574513\nin DE-EN, average bias:0.043027333163276865\nin DE-EN, average bias:0.06992621077534651\n"
],
[
"scipy.stats.ttest_ind(O_en_ori, O_en_sup_de), scipy.stats.ttest_ind(O_en_ori_p, O_en_sup_de_p)",
"_____no_output_____"
],
[
"O_en_sup_fr_p, O = cal_bias_paired(en_sup_fr_E, en_occ_def_m[:137], en_occ_def_f[:137],gender_pairs)\nprint(f\"in FR-EN, average bias:{O}\")\nO_en_sup_fr_s, O, = cal_bias_paired(en_sup_fr_E, en_occ_def_m[137:], en_occ_def_f[137:], gender_pairs)\nprint(f\"in FR-EN, average bias:{O} \")\nO_en_sup_fr, O = cal_bias_paired(en_sup_fr_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in FR-EN, average bias:{O} \")",
"in FR-EN, average bias:0.08329827981394476\nin FR-EN, average bias:0.03947033618611318 \nin FR-EN, average bias:0.06283387033791445 \n"
],
[
"scipy.stats.ttest_ind(O_en_ori, O_en_sup_fr), scipy.stats.ttest_ind(O_en_ori_p, O_en_sup_fr_p)",
"_____no_output_____"
],
[
"O_en_sup_tr, O = cal_bias_paired(en_sup_tr_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in tr-aligned EN, average bias:{O}\")",
"in tr-aligned EN, average bias:0.059185084214831034\n"
]
],
[
[
"## - inBias in ENDEB",
"_____no_output_____"
]
],
[
[
"O_en_deb_p, O = cal_bias_paired(en_deb_E, en_occ_def_m[:137], en_occ_def_f[:137], gender_pairs)\nprint(f\"in debiased EN, average bias:{O}\")\nO_en_deb_s, O = cal_bias_paired(en_deb_E, en_occ_def_m[137:], en_occ_def_f[137:], gender_pairs)\nprint(f\"in debiased EN, average bias:{O}\")\nO_en_deb, O = cal_bias_paired(en_deb_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in debiased EN, average bias:{O}\")",
"in debiased EN, average bias:0.08297640271635917\nin debiased EN, average bias:0.01260097629791643\nin debiased EN, average bias:0.05011628143148319\n"
],
[
"endeb_sup_es_file = '/local/jyzhao/Github/fastText/alignment/res/wiki.endeb-es.vec' #fasttext EN-->Es\nendeb_sup_de_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.endeb-de.vec' #fasttext EN-->de\nendeb_sup_fr_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.endeb-fr.vec' ",
"_____no_output_____"
],
[
"endeb_sup_es_E = we.WordEmbedding(endeb_sup_es_file)\nendeb_sup_de_E = we.WordEmbedding(endeb_sup_de_file)\nendeb_sup_fr_E = we.WordEmbedding(endeb_sup_fr_file)",
"_____no_output_____"
],
[
"O_endeb_es_p, O = cal_bias_paired(endeb_sup_es_E, en_occ_def_m[:137], en_occ_def_f[:137], gender_pairs)\nprint(f\"in ENDEB-ES, average bias for strong gendered words:{O}\")\nO_endeb_es_s, O = cal_bias_paired(endeb_sup_es_E, en_occ_def_m[137:], en_occ_def_f[137:], gender_pairs)\nprint(f\"in ENDEB-ES, average bias for weak gendered words:{O}\")\nO_endeb_es, O = cal_bias_paired(endeb_sup_es_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in ENDEB-ES, average bias:{O}\")",
"in ENDEB-ES, average bias for strong gendered words:0.06830389713694772\nin ENDEB-ES, average bias for weak gendered words:0.020136128379790872\nin ENDEB-ES, average bias:0.04581311016862546\n"
],
[
"O_endeb_de_p, O= cal_bias_paired(endeb_sup_de_E, en_occ_def_m[:137], en_occ_def_f[:137], gender_pairs)\nprint(f\"in ENDEB-DE, average bias for strong gendered words:{O}, \")\nO_endeb_de_s, O = cal_bias_paired(endeb_sup_de_E, en_occ_def_m[137:], en_occ_def_f[137:], gender_pairs)\nprint(f\"in ENDEB-DE, average bias for weak gendered words:{O}, \")\nO_endeb_de, O = cal_bias_paired(endeb_sup_de_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in ENDEB-DE, average bias:{O}, \")",
"in ENDEB-DE, average bias for strong gendered words:0.07469259366235981, \nin ENDEB-DE, average bias for weak gendered words:0.02694967444986105, \nin ENDEB-DE, average bias:0.05240018002228257, \n"
],
[
"O_endeb_fr_p, O = cal_bias_paired(endeb_sup_fr_E, en_occ_def_m[:137], en_occ_def_f[:137], gender_pairs)\nprint(f\"in FR-ENDEB, average bias for strong gendered words:{O} \")\nO_endeb_fr_s, O = cal_bias_paired(endeb_sup_fr_E, en_occ_def_m[137:], en_occ_def_f[137:], gender_pairs)\nprint(f\"in FR-ENDEB, average bias for weak gendered words:{O} \")\nO_endeb_fr, O = cal_bias_paired(endeb_sup_fr_E, en_occ_def_m, en_occ_def_f, gender_pairs)\nprint(f\"in FR-ENDEB, average bias:{O} \")",
"in FR-ENDEB, average bias for strong gendered words:0.06851245712455656 \nin FR-ENDEB, average bias for weak gendered words:0.016227787953835948 \nin FR-ENDEB, average bias:0.04409938202538741 \n"
],
[
"scipy.stats.ttest_ind(O_en_ori, O_en_deb), \\\nscipy.stats.ttest_ind(O_en_ori, O_endeb_es),\\\nscipy.stats.ttest_ind(O_en_ori, O_endeb_de), \\\nscipy.stats.ttest_ind(O_en_ori, O_endeb_fr)",
"_____no_output_____"
]
],
[
[
"----\n# 2. inBias in ES",
"_____no_output_____"
]
],
[
[
"es_ori_file = '/home/jyzhao/git7/fastText/alignment/data/wiki.es.vec'\nes_ali_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.es.align.vec'\nes_sup_de_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.es-de.vec'\nes_sup_fr_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.es-fr.vec'\nes_sup_endeb_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.es-endeb.vec'",
"_____no_output_____"
],
[
"es_ori_E = we.WordEmbedding(es_ori_file)\nes_sup_endeb_E = we.WordEmbedding(es_sup_endeb_file)\nes_ali_E = we.WordEmbedding(es_ali_file)\nes_sup_de_E = we.WordEmbedding(es_sup_de_file)\nes_sup_fr_E = we.WordEmbedding(es_sup_fr_file)",
"_____no_output_____"
],
[
"es_gender_pairs = json.load(open('inBias/es_gender_pairs.json', 'r'))",
"_____no_output_____"
],
[
"es_ori_gd = we.doPCA(es_gender_pairs, es_ori_E).components_[0]\nes_ali_gd = we.doPCA(es_gender_pairs, es_ali_E).components_[0]\nes_sup_endeb_gd = we.doPCA(es_gender_pairs, es_sup_endeb_E).components_[0]\nes_sup_de_gd = we.doPCA(es_gender_pairs, es_sup_de_E).components_[0]\nes_sup_fr_gd = we.doPCA(es_gender_pairs, es_sup_fr_E).components_[0]",
"_____no_output_____"
]
],
[
[
"- Read the occupation list for ES\nThe list is translated from EN.",
"_____no_output_____"
]
],
[
[
"es_occ_def_f, es_occ_def_m = read_occs('inBias/es_female_def_occ', 'inBias/es_male_def_occ', es_ori_E)",
"257 257\n251 pairs in the embeddings\n"
]
],
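`read_occs` is defined earlier in the notebook and not shown in this chunk. Judging from its printed output (`257 257` followed by `251 pairs in the embeddings`), it most likely filters the occupation pairs down to those whose female and male forms both exist in the embedding vocabulary. A minimal sketch under that assumption (the function name and signature here are ours, not the notebook's):

```python
def read_occs_sketch(female_words, male_words, vocab):
    """Keep only occupation pairs whose female AND male forms are both
    present in the embedding vocabulary (hypothetical reconstruction)."""
    kept_f, kept_m = [], []
    for f, m in zip(female_words, male_words):
        if f in vocab and m in vocab:
            kept_f.append(f)
            kept_m.append(m)
    print(len(female_words), len(male_words))
    print(f"{len(kept_f)} pairs in the embeddings")
    return kept_f, kept_m

# Toy example: "doctora" is missing from the vocabulary, so its pair is dropped
occ_f, occ_m = read_occs_sketch(["actriz", "doctora"], ["actor", "doctor"],
                                {"actriz", "actor", "doctor"})
```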
[
[
"- #### inBias score changes",
"_____no_output_____"
]
],
[
[
"o_es_ori, o = cal_bias_paired(es_ori_E, es_occ_def_m, es_occ_def_f, es_gender_pairs)\nprint(f\"in original ES, average bias:{o}\")\n\no_es_ali, o = cal_bias_paired(es_ali_E, es_occ_def_m, es_occ_def_f, es_gender_pairs)\nprint(f\"in ES-EN, average bias:{o}\")\n\no_es_sup_de, o = cal_bias_paired(es_sup_de_E,es_occ_def_m, es_occ_def_f,es_gender_pairs)\nprint(f\"in ES-DE, average bias:{o}\")\n\no_es_sup_fr, o = cal_bias_paired(es_sup_fr_E,es_occ_def_m, es_occ_def_f, es_gender_pairs)\nprint(f\"in ES-FR, average bias:{o}\")\n\no_es_sup_endeb, o, = cal_bias_paired(es_sup_endeb_E, es_occ_def_m, es_occ_def_f, es_gender_pairs)\nprint(f\"in ES-ENDEB, average bias:{o}\")",
"in original ES, average bias:0.08032530358274752\nin ES-EN, average bias:0.08891596550644666\nin ES-DE, average bias:0.06340341033160872\nin ES-FR, average bias:0.06417109799786128\nin ES-ENDEB, average bias:0.06649101964532406\n"
],
[
"scipy.stats.ttest_ind(o_es_ori, o_es_ali)",
"_____no_output_____"
],
[
"scipy.stats.ttest_ind(o_es_ori, o_es_sup_de)",
"_____no_output_____"
],
[
"scipy.stats.ttest_ind(o_es_ori, o_es_sup_fr)",
"_____no_output_____"
],
[
"scipy.stats.ttest_ind(o_es_ori, o_es_sup_endeb)",
"_____no_output_____"
]
],
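`cal_bias_paired` is also defined outside this chunk. A plausible sketch of such a paired bias score: average the (male - female) differences of the seed gender pairs into a gender direction, then score each occupation pair by the absolute gap between its two projections onto that direction. The exact definition used by the notebook is an assumption here; only the numbers printed above are authoritative.

```python
def dot(a, b):
    # plain dot product over equal-length sequences
    return sum(x * y for x, y in zip(a, b))

def cal_bias_paired_sketch(vecs, occ_m, occ_f, gender_pairs):
    """Per-pair bias scores and their average along an averaged
    gender direction. `vecs` maps word -> vector; the bias
    definition itself is an assumption."""
    dim = len(next(iter(vecs.values())))
    gd = [0.0] * dim
    for m, f in gender_pairs:  # assumed (male, female) order
        for i in range(dim):
            gd[i] += (vecs[m][i] - vecs[f][i]) / len(gender_pairs)
    scores = [abs(dot(vecs[m], gd) - dot(vecs[f], gd))
              for m, f in zip(occ_m, occ_f)]
    return scores, sum(scores) / len(scores)

# Toy 2-d vectors purely for illustration
vecs = {"he": (1.0, 0.0), "she": (-1.0, 0.0),
        "actor": (0.5, 0.2), "actress": (-0.3, 0.2)}
scores, avg = cal_bias_paired_sketch(vecs, ["actor"], ["actress"], [("he", "she")])
```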
[
[
"----\n# 3. inBias in German",
"_____no_output_____"
]
],
[
[
"de_ori_file = '/home/jyzhao/git7/fastText/alignment/data/wiki.de.vec'\nde_ali_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.de.align.vec'\nde_sup_endeb_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.de-endeb.vec'\nde_sup_es_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.de-es.vec'\nde_sup_fr_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.de-fr.vec'",
"_____no_output_____"
],
[
"de_ori_E = we.WordEmbedding(de_ori_file)\nde_ali_E = we.WordEmbedding(de_ali_file)\nde_sup_endeb_E = we.WordEmbedding(de_sup_endeb_file)\nde_sup_es_E = we.WordEmbedding(de_sup_es_file)\nde_sup_fr_E = we.WordEmbedding(de_sup_fr_file)",
"_____no_output_____"
],
[
"de_gender_pairs = json.load(open('inBias/de_gender_pairs.json'))",
"_____no_output_____"
],
[
"de_ori_gd = we.doPCA(de_gender_pairs, de_ori_E).components_[0]\nde_ali_gd = we.doPCA(de_gender_pairs, de_ali_E).components_[0]\nde_sup_endeb_gd = we.doPCA(de_gender_pairs, de_sup_endeb_E).components_[0]\nde_sup_es_gd = we.doPCA(de_gender_pairs, de_sup_es_E).components_[0]\nde_sup_fr_gd = we.doPCA(de_gender_pairs, de_sup_fr_E).components_[0]",
"_____no_output_____"
],
[
"de_occ_def_f, de_occ_def_m = read_occs('inBias/de_female_def_occ', 'inBias/de_male_def_occ', de_ori_E)",
"257 257\n227 pairs in the embeddings\n"
],
[
"O_de_ori, O = cal_bias_paired(de_ori_E, de_occ_def_m, de_occ_def_f, de_gender_pairs)\nprint(f\"in original DE, average bias:{O}\")\nO_de_ali, O = cal_bias_paired(de_ali_E, de_occ_def_m, de_occ_def_f, de_gender_pairs)\nprint(f\"in DE-EN, average bias:{O}\")\nO_de_sup_es, O = cal_bias_paired(de_sup_es_E, de_occ_def_m, de_occ_def_f, de_gender_pairs)\nprint(f\"in DE-ES, average bias:{O},\")\nO_de_sup_fr, O = cal_bias_paired(de_sup_fr_E, de_occ_def_m, de_occ_def_f, de_gender_pairs)\nprint(f\"in DE-FR, average bias:{O}\")\nO_de_sup_endeb, O = cal_bias_paired(de_sup_endeb_E, de_occ_def_m, de_occ_def_f, de_gender_pairs)\nprint(f\"in DE-ENDEB, average bias:{O}\")",
"in original DE, average bias:0.10794044646415794\nin DE-EN, average bias:0.11236809847518123\nin DE-ES, average bias:0.0715921776614467,\nin DE-FR, average bias:0.08045753194051108\nin DE-ENDEB, average bias:0.08760217654168227\n"
],
[
"scipy.stats.ttest_ind(O_de_ori, O_de_ali),\\\nscipy.stats.ttest_ind(O_de_ori, O_de_sup_es), \\\nscipy.stats.ttest_ind(O_de_ori, O_de_sup_fr),\\\nscipy.stats.ttest_ind(O_de_ori, O_de_sup_endeb)",
"_____no_output_____"
]
],
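The `scipy.stats.ttest_ind` calls above test whether the per-pair bias scores of two embeddings differ significantly in the mean. For reference, this is the pooled-variance two-sample t statistic that `ttest_ind` computes by default (`equal_var=True`), sketched in plain Python:

```python
from statistics import mean, variance

def t_ind(a, b):
    """Pooled-variance two-sample t statistic, the quantity that
    scipy.stats.ttest_ind computes with its default equal_var=True."""
    na, nb = len(a), len(b)
    # pooled sample variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

t_same = t_ind([0.10, 0.12, 0.11], [0.10, 0.12, 0.11])  # identical samples
t_diff = t_ind([0.10, 0.12, 0.11], [0.06, 0.08, 0.07])  # shifted mean
```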
[
[
"---\n# 4. inBias in French",
"_____no_output_____"
]
],
[
[
"fr_ori_file = '/home/jyzhao/git7/fastText/alignment/data/wiki.fr.vec'\nfr_ali_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.fr.align.vec'\nfr_sup_es_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.fr-es.vec' \nfr_sup_de_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.fr-de.vec' \nfr_sup_endeb_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.fr-endeb.vec' ",
"_____no_output_____"
],
[
"fr_ori_E = we.WordEmbedding(fr_ori_file)\nfr_ali_E = we.WordEmbedding(fr_ali_file)\nfr_sup_es_E = we.WordEmbedding(fr_sup_es_file)\nfr_sup_de_E = we.WordEmbedding(fr_sup_de_file)\nfr_sup_endeb_E = we.WordEmbedding(fr_sup_endeb_file)",
"1673it [00:00, 8560.17it/s]"
],
[
"fr_gender_pairs = json.load(open('inBias/fr_gender_pairs.json'))",
"_____no_output_____"
],
[
"fr_occ_def_f, fr_occ_def_m = read_occs('inBias/fr_female_def_occ', 'inBias/fr_male_def_occ', fr_ori_E)",
"257 257\n239 pairs in the embeddings\n"
],
[
"fr_ori_gd = we.doPCA(fr_gender_pairs, fr_ori_E).components_[0]\nfr_sup_ali_gd = we.doPCA(fr_gender_pairs, fr_ali_E).components_[0]\nfr_sup_es_gd = we.doPCA(fr_gender_pairs, fr_sup_es_E).components_[0]\nfr_sup_de_gd = we.doPCA(fr_gender_pairs, fr_sup_de_E).components_[0]\nfr_sup_endeb_gd = we.doPCA(fr_gender_pairs, fr_sup_endeb_E).components_[0]",
"_____no_output_____"
],
[
"O_fr_ori, O = cal_bias_paired(fr_ori_E, fr_occ_def_m, fr_occ_def_f, fr_gender_pairs)\nprint(f\"in original fr, average bias:{O}\")\nO_fr_ali, O = cal_bias_paired(fr_ali_E, fr_occ_def_m, fr_occ_def_f, fr_gender_pairs)\nprint(f\"in FR-EN, average bias:{O}\")\nO_fr_sup_es, O = cal_bias_paired(fr_sup_es_E, fr_occ_def_m, fr_occ_def_f, fr_gender_pairs)\nprint(f\"in FR-ES, average bias:{O}\")\nO_fr_sup_de, O = cal_bias_paired(fr_sup_de_E, fr_occ_def_m, fr_occ_def_f, fr_gender_pairs)\nprint(f\"in FR-DE, average bias:{O} \")\nO_fr_sup_endeb, O = cal_bias_paired(fr_sup_endeb_E, fr_occ_def_m, fr_occ_def_f, fr_gender_pairs)\nprint(f\"in FR-ENDEB, average bias:{O} \")",
"in original fr, average bias:0.09398277639833535\nin FR-EN, average bias:0.102673281450014\nin FR-ES, average bias:0.07680788650062682\nin FR-DE, average bias:0.07817990581887316 \nin FR-ENDEB, average bias:0.09054989769867773 \n"
],
[
"scipy.stats.ttest_ind(O_fr_ori, O_fr_ali), \\\nscipy.stats.ttest_ind(O_fr_ori, O_fr_sup_es), \\\nscipy.stats.ttest_ind(O_fr_ori, O_fr_sup_de), \\\nscipy.stats.ttest_ind(O_fr_ori, O_fr_sup_endeb)",
"_____no_output_____"
]
],
[
[
"# 5. Bias in TR",
"_____no_output_____"
]
],
[
[
"tr_ori_file = '/home/jyzhao/git7/fastText/alignment/data/wiki.tr.vec'\ntr_ali_file = '/home/jyzhao/git7/fastText/alignment/res/wiki.tr.align.vec'",
"_____no_output_____"
],
[
"tr_ori_E = we.WordEmbedding(tr_ori_file)\ntr_ali_E = we.WordEmbedding(tr_ali_file)",
"_____no_output_____"
],
[
"tr_gender_pairs = json.load(open('tr_gender_pairs.json', 'r'))",
"_____no_output_____"
],
[
"#We generate the occupations in TR by using the bilingual dictionary provided in fastText\ndef get_dic(dic_file):\n en2lg = defaultdict(list)\n with open(dic_file, 'r') as f:\n for line in f:\n en, lg = line.strip().split()\n en2lg[en].append(lg)\n return en2lg",
"_____no_output_____"
],
[
"en2tr = get_dic('/home/jyzhao/git7/fastText/alignment/data/en-tr.txt')",
"_____no_output_____"
],
[
"def get_tr_prof(en_occ_def_f, en_occ_def_m): # keep only occupations whose female and male forms both appear in the dictionary\n tr_profs = []\n for idx in range(len(en_occ_def_f)):\n if len(en2tr[en_occ_def_f[idx]]) < 1 or len(en2tr[en_occ_def_m[idx]]) < 1:\n continue\n tr_profs.append([en2tr[en_occ_def_f[idx]][0], en2tr[en_occ_def_m[idx]][0]])\n return tr_profs",
"_____no_output_____"
],
[
"tr_profs = get_tr_prof(en_occ_def_f_raw, en_occ_def_m_raw)\ntr_prof_m = [x[1] for x in tr_profs]\ntr_prof_f = [x[0] for x in tr_profs]\nO_list, O_ori = cal_bias_paired(tr_ori_E, tr_prof_m, tr_prof_f, tr_gender_pairs)\nprint(f\"in original TR, average bias:{O_ori}\")\nO_ali_list, O_ali = cal_bias_paired(tr_ali_E, tr_prof_m, tr_prof_f, tr_gender_pairs)\nprint(f\"in aligned TR, average bias:{O_ali} \")\nprint(f\"delta(Bias) = {(O_ali - O_ori) / O_ori * 100} \")",
"in original TR, average bias:0.07187426452453417\nin aligned TR, average bias:0.0711953996520027 \ndelta(Bias) = -0.9445173137037797 \n"
]
],
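To make the dictionary-filtering step concrete, here is the same `get_tr_prof` logic run against a toy `en2tr` dictionary (the translation entries below are made up for illustration):

```python
from collections import defaultdict

# Toy bilingual dictionary; real entries come from en-tr.txt
en2tr = defaultdict(list)
en2tr["actor"].append("aktor")
en2tr["actress"].append("aktris")
en2tr["waiter"].append("garson")
# "waitress" has no entry, so its pair must be dropped

def get_tr_prof(en_occ_def_f, en_occ_def_m):
    # keep a pair only if both gendered forms have a dictionary entry
    tr_profs = []
    for idx in range(len(en_occ_def_f)):
        if len(en2tr[en_occ_def_f[idx]]) < 1 or len(en2tr[en_occ_def_m[idx]]) < 1:
            continue
        tr_profs.append([en2tr[en_occ_def_f[idx]][0], en2tr[en_occ_def_m[idx]][0]])
    return tr_profs

pairs = get_tr_prof(["actress", "waitress"], ["actor", "waiter"])
```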
[
[
"----\n# 6. Embedding changes along the gender direction before/after alignment\n- Using ES as an example",
"_____no_output_____"
]
],
[
[
"m_f_w = [['beau', 'belle'],\n ['dudes', 'gals'],\n ['governor', 'governess'],\n ['dude', 'chick'],\n ['tailor', 'seamstress'],\n ['stewards', 'stewardesses'],\n ]",
"_____no_output_____"
],
[
"plt.rcParams.update({'font.size': 15})",
"_____no_output_____"
],
[
"def plot_titles_gd(E, gender_pairs, titles_m, titles_f):\n gd = np.average([(E.v(gender_pairs[idx][0]) - E.v(gender_pairs[idx][1])) for idx in range(len(gender_pairs))], axis = 0)\n x_m = [np.dot(E.v(x), gd) for x in titles_m if x in E.words]\n x_f = [np.dot(E.v(x), gd) for x in titles_f if x in E.words]\n x_seed_m = np.average([np.dot(E.v(x[1]), gd) for x in gender_pairs if x[1] in E.words])\n x_seed_f = np.average([np.dot(E.v(x[0]), gd) for x in gender_pairs if x[0] in E.words])\n y = [x for x in range(len(titles_m))]\n fig= plt.figure(figsize=(6.4,4.8))\n plt.xlim(-0.12, 0.17)\n plt.yticks([])\n# plt.autoscale(enable=True, axis='y', tight=True)\n plt.plot(x_m, y, 'go', label='M.')\n plt.plot(x_f, y, 'rs', label = 'F.')\n plt.plot(x_seed_m, np.average(y), 'gP', )\n plt.plot(x_seed_f, np.average(y), 'rD')\n plt.annotate('Avg-M', (x_seed_m-0.01, np.average(y)+0.15), fontsize=15)\n plt.annotate('Avg-F', (x_seed_f-0.03, np.average(y) + 0.15), fontsize=15)\n for idx in range(len(titles_m)):\n if titles_m[idx] in ['stewards']:\n plt.annotate(titles_m[idx], (x_m[idx]-0.04, y[idx]-0.15), fontsize=15)\n else:\n plt.annotate(titles_m[idx], (x_m[idx]-0.02, y[idx]-0.2), fontsize=15)\n if titles_f[idx] in ['seamstress']:\n plt.annotate(titles_f[idx], (x_f[idx]-0.03, y[idx]-0.2), fontsize=15)\n elif titles_f[idx] in ['stewardesses']:\n plt.annotate(titles_f[idx], (x_f[idx]+0.003, y[idx]-0.3), fontsize=15)\n else:\n plt.annotate(titles_f[idx], (x_f[idx]-0.005, y[idx]-0.2), fontsize=15)\n plt.legend()\n plt.savefig('es-ori_titles.pdf')\n plt.show()\n plt.close()",
"_____no_output_____"
],
[
"plot_titles_gd(es_ori_E, es_gender_pairs, [x[0] for x in m_f_w], [x[1] for x in m_f_w])\n# plot_titles_gd(es_ali_E, es_gender_pairs, [x[0] for x in m_f_w], [x[1] for x in m_f_w])\n# plot_titles_gd(es_sup_de_E, es_gender_pairs, [x[0] for x in m_f_w], [x[1] for x in m_f_w])\n# plot_titles_gd(es_sup_endeb_E, es_gender_pairs, [x[0] for x in m_f_w], [x[1] for x in m_f_w])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecd6b8899beebc094901e2de146e67e33406e2bb | 5,974 | ipynb | Jupyter Notebook | boards/Pynq-Z2/base/notebooks/microblaze/microblaze_c_libraries.ipynb | jackrosenthal/PYNQ | 788bf18529bc7a0564af4033ef3e246c03fc5b10 | [
"BSD-3-Clause"
] | 1,537 | 2016-09-26T22:51:50.000Z | 2022-03-31T13:33:54.000Z | boards/Pynq-Z2/base/notebooks/microblaze/microblaze_c_libraries.ipynb | MakarenaLabs/PYNQ | 6f3113278e62b23315cf4e000df8f57fb53c4f6d | [
"BSD-3-Clause"
] | 414 | 2016-10-03T21:12:10.000Z | 2022-03-21T14:55:02.000Z | boards/Pynq-Z2/base/notebooks/microblaze/microblaze_c_libraries.ipynb | MakarenaLabs/PYNQ | 6f3113278e62b23315cf4e000df8f57fb53c4f6d | [
"BSD-3-Clause"
] | 826 | 2016-09-23T22:29:43.000Z | 2022-03-29T11:02:09.000Z | 28.721154 | 430 | 0.570305 | [
[
[
"# PYNQ Microblaze Libraries in C\n\nThis document describes the various libraries that ship with PYNQ Microblaze.\n\n## `pynqmb`\n\nThe main library is `pynqmb`, which consists of functions for interacting with a variety of I/O devices. `pynqmb` is split into separate `i2c.h`, `gpio.h`, `spi.h`, `timer.h` and `uart.h` header files, with each one being self-contained. In this notebook we will look just at the I2C and GPIO headers; however, the full function reference for all of the components can be found at http://pynq.readthedocs.io \n\nAll of the components follow the same pattern in having `_open` function calls that take one or more pins depending on the protocol. These functions use an I/O switch in the subsystem to connect the protocol controller to the output pins. For devices not connected to output pins there are `_open_device` functions which take either the base address of the controller or its index as defined in the board support package.\n\nFor this example we are going to use a Grove ADC connected via a Pmod-Grove adapter and using the I2C protocol. One ancillary header file that is useful when using the Pmod-Grove adapter is `pmod_grove.h`, which includes the pin definitions for the adapter board. In this case we are using the G4 port on the adapter, which is connected to pins 6 and 2 of the Pmod connector.",
"_____no_output_____"
]
],
[
[
"from pynq.overlays.base import BaseOverlay\nbase = BaseOverlay('base.bit')",
"_____no_output_____"
],
[
"%%microblaze base.PMODA\n#include <i2c.h>\n#include <pmod_grove.h>\n\nint read_adc() {\n i2c device = i2c_open(PMOD_G4_B, PMOD_G4_A);\n unsigned char buf[2];\n buf[0] = 0;\n i2c_write(device, 0x50, buf, 1);\n i2c_read(device, 0x50, buf, 2);\n return ((buf[0] & 0x0F) << 8) | buf[1];\n}",
"_____no_output_____"
],
[
"read_adc()",
"_____no_output_____"
]
],
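The C `read_adc` function above reassembles a 12-bit sample from the two I2C bytes returned by the Grove ADC: the low nibble of the first byte holds bits 11..8, the second byte holds bits 7..0. The same bit manipulation expressed in Python for reference (the helper name is ours):

```python
def adc_value(high_byte, low_byte):
    """Reassemble the 12-bit ADC sample exactly as the C expression
    ((buf[0] & 0x0F) << 8) | buf[1] does."""
    return ((high_byte & 0x0F) << 8) | low_byte

# e.g. bytes 0xFA, 0xCE give sample 0x0ACE (the top nibble is masked off)
sample = adc_value(0xFA, 0xCE)
```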
[
[
"We can use the `gpio` and `timer` components in concert to flash an LED connected to G1. The `timer` header provides PWM and program delay functionality, although only one of the two can be used at a time.",
"_____no_output_____"
]
],
[
[
"%%microblaze base.PMODA\n#include <timer.h>\n#include <gpio.h>\n#include <pmod_grove.h>\n\nvoid flash_led() {\n gpio led = gpio_open(PMOD_G1_A);\n gpio_set_direction(led, GPIO_OUT);\n int state = 0;\n while (1) {\n gpio_write(led, state);\n state = !state;\n delay_ms(500);\n }\n}",
"_____no_output_____"
],
[
"flash_led()",
"_____no_output_____"
]
],
[
[
"## `pyprintf`\n\nThe `pyprintf` library exposes a single `pyprintf` function which acts similarly to a regular `printf` function but forwards the arguments to Python for formatting and display. This results in far lower code overhead than a regular `printf` and does not require access to standard input and output.",
"_____no_output_____"
]
],
[
[
"%%microblaze base.PMODA\n#include <pyprintf.h>\n\nint test_print(float value) {\n pyprintf(\"Printing %f from the microblaze!\\n\", value);\n return 0;\n}",
"_____no_output_____"
],
[
"test_print(1.5)",
"Printing 1.500000 from the microblaze!\n"
]
],
[
[
"At present, `pyprintf` supports the common subset of data types between Python and C - in particular `%{douxXfFgGeEsc}`. Long data types and additional format modifiers are not supported yet.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecd6b9ded0b3f2a9ea135a417a818055a12c220d | 1,510 | ipynb | Jupyter Notebook | notebooks/what_is_an_epoch.ipynb | yngtodd/cnn-text-classification-tf | a090e7bbfb621e9fa75d5953b8632967fe89b31a | [
"Apache-2.0"
] | null | null | null | notebooks/what_is_an_epoch.ipynb | yngtodd/cnn-text-classification-tf | a090e7bbfb621e9fa75d5953b8632967fe89b31a | [
"Apache-2.0"
] | 1 | 2019-08-15T18:45:25.000Z | 2019-08-15T18:45:39.000Z | notebooks/what_is_an_epoch.ipynb | yngtodd/cnn-text-classification-tf | a090e7bbfb621e9fa75d5953b8632967fe89b31a | [
"Apache-2.0"
] | null | null | null | 27.962963 | 357 | 0.556954 | [
[
[
"from cnntext.data_helpers import *",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ecd6baecad82fc9319df400f9097413bb29f6909 | 18,030 | ipynb | Jupyter Notebook | examples/semeval2010task8.ipynb | RobinKa/SemanticRelationClassification | 2427a27c92dccecd672484fc670aaec81508772e | [
"MIT"
] | 1 | 2019-10-26T16:52:36.000Z | 2019-10-26T16:52:36.000Z | examples/semeval2010task8.ipynb | waelkht/DeepOnto.KOM-ISWC2019 | 5d27a871720f7e9a13eb029401783f0eaa2fc1c4 | [
"MIT"
] | null | null | null | examples/semeval2010task8.ipynb | waelkht/DeepOnto.KOM-ISWC2019 | 5d27a871720f7e9a13eb029401783f0eaa2fc1c4 | [
"MIT"
] | 1 | 2019-04-02T07:18:44.000Z | 2019-04-02T07:18:44.000Z | 30.353535 | 121 | 0.518802 | [
[
[
"relation_ids = {\n \"Other\": 0,\n \"Cause-Effect(e1,e2)\": 1,\n \"Instrument-Agency(e1,e2)\": 2,\n \"Product-Producer(e1,e2)\": 3,\n \"Content-Container(e1,e2)\": 4,\n \"Entity-Origin(e1,e2)\": 5,\n \"Entity-Destination(e1,e2)\": 6,\n \"Component-Whole(e1,e2)\": 7,\n \"Member-Collection(e1,e2)\": 8,\n \"Message-Topic(e1,e2)\": 9,\n}",
"_____no_output_____"
],
[
"from os.path import join\n\ndef find_between(s, a, b):\n return s.split(a)[1].split(b)[0].strip().replace(\" \", \"\")\n\ndef preprocess_data(path, out_path, save_opposites=False, num_rel=None):\n print(\"Loading data from\", path, \"and saving preprocessed files to\", out_path)\n in_file = open(path)\n\n relation_files = {}\n for relation_id in relation_ids:\n relation_files[relation_id] = open(\n out_path % relation_id, \"w\", encoding=\"utf-8\")\n\n while True:\n line = in_file.readline()\n\n if num_rel is not None:\n num_rel -= 1\n if num_rel < 0:\n break\n\n if not line:\n break\n\n text_id, text = line.split(\"\\t\")\n\n word_a = find_between(text, \"<e1>\", \"</e1>\")\n word_b = find_between(text, \"<e2>\", \"</e2>\")\n\n relation = in_file.readline().strip()\n comment = in_file.readline()\n\n # Swap words if the relation-order is not word_1, word_2\n if relation != \"Other\" and not \"(e1,e2)\" in relation:\n word_a, word_b = word_b, word_a\n relation = relation.replace(\"(e2,e1)\", \"(e1,e2)\")\n\n word_a = word_a.replace(\" \", \"_\").lower()\n word_b = word_b.replace(\" \", \"_\").lower()\n\n relation_files[relation].write(\"%s %s\\n\" % (word_a, word_b))\n\n if save_opposites:\n relation_files[\"Other\"].write(\"%s %s\\n\" % (word_b, word_a))\n\n # New line\n in_file.readline()\n \ntrain_file = \"TRAIN_FILE.txt\"\ntest_file = \"TEST_FILE_FULL.txt\"\nout_path = \".\"\n\n# Train\n\npreprocess_data(train_file, join(\n out_path, \"train_%s_1000.csv\"), num_rel=1000)\npreprocess_data(train_file, join(\n out_path, \"train_%s_2000.csv\"), num_rel=2000)\npreprocess_data(train_file, join(\n out_path, \"train_%s_4000.csv\"), num_rel=4000)\npreprocess_data(train_file, join(\n out_path, \"train_%s_8000.csv\"), num_rel=8000)\n\n# Test\npreprocess_data(test_file, join(out_path, \"test_%s.csv\"))",
"Loading data from TRAIN_FILE.txt and saving preprocessed files to .\\train_%s_1000.csv\nLoading data from TRAIN_FILE.txt and saving preprocessed files to .\\train_%s_2000.csv\nLoading data from TRAIN_FILE.txt and saving preprocessed files to .\\train_%s_4000.csv\nLoading data from TRAIN_FILE.txt and saving preprocessed files to .\\train_%s_8000.csv\nLoading data from TEST_FILE_FULL.txt and saving preprocessed files to .\\test_%s.csv\n"
],
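To see exactly what `find_between` extracts from a SemEval-style line, a small standalone run (the sentence is made up). Note that the helper also strips internal spaces, so multi-word entities come out concatenated, which makes the later `.replace(" ", "_")` in `preprocess_data` a no-op:

```python
def find_between(s, a, b):
    # same helper as in preprocess_data above
    return s.split(a)[1].split(b)[0].strip().replace(" ", "")

line = 'The <e1>coffee maker</e1> produced a jet of <e2>steam</e2>.'
e1 = find_between(line, "<e1>", "</e1>")  # multi-word entity, spaces removed
e2 = find_between(line, "<e2>", "</e2>")
```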
[
"from ontokom.embeddings import create_relation_dataset, DataFrameEmbeddings\nfrom glob import glob\n\nembeddings = DataFrameEmbeddings(\"embeddings_acm_wiki_glove_300.h5\")\nembeddings.load()\n\nrelation_paths_train = glob(\"train_*_8000.csv\")\nrelation_paths_test = glob(\"test_*.csv\")\n\ncreate_relation_dataset(embeddings, \"relations_train_8000.h5\", \"labels_train_8000.h5\", relation_paths_train,\n unknown_word=\"<unk>\")\ncreate_relation_dataset(embeddings, \"relations_test.h5\", \"labels_test.h5\", relation_paths_test,\n unknown_word=\"<unk>\")",
"Processing relations at train_Cause-Effect(e1,e2)_8000.csv\n"
],
[
"from ontokom.classification import RelationClassifier, load_relations, load_labels\nimport numpy as np\nfrom sklearn.metrics import classification_report\n\ntrain_relations = load_relations(\"relations_train_8000.h5\")\ntrain_labels = load_labels(\"labels_train_8000.h5\")\nassert train_relations.shape[0] == train_labels.shape[0]\n\ntest_relations = load_relations(\"relations_test.h5\")\ntest_labels = load_labels(\"labels_test.h5\")\ntest_labels = np.argmax(test_labels, 1)\nassert test_relations.shape[0] == test_labels.shape[0]\n\nclassifier = RelationClassifier()\nclassifier.new(train_relations.shape[1], train_labels.shape[1], one_hot=True,\n filters=64, max_filters=256,\n optimizer=\"rmsprop\", learn_rate=0.01,\n dropout=0.0, kernel_size=5)\n\nclassifier.train(train_relations, train_labels,\n epochs=20, validation_split=0, verbose=0)\n\npredicted_labels = np.argmax(classifier.predict(test_relations), 1)\n\nprint(classification_report(test_labels, predicted_labels))",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nreshape_5 (Reshape) (None, 2, 300, 1) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 1, 147, 64) 960 \n_________________________________________________________________\nbatch_normalization_17 (Batc (None, 1, 147, 64) 256 \n_________________________________________________________________\nreshape_6 (Reshape) (None, 147, 64) 0 \n_________________________________________________________________\nconv1d_15 (Conv1D) (None, 74, 128) 41088 \n_________________________________________________________________\nbatch_normalization_18 (Batc (None, 74, 128) 512 \n_________________________________________________________________\nconv1d_16 (Conv1D) (None, 37, 256) 164096 \n_________________________________________________________________\nbatch_normalization_19 (Batc (None, 37, 256) 1024 \n_________________________________________________________________\nconv1d_17 (Conv1D) (None, 19, 256) 327936 \n_________________________________________________________________\nbatch_normalization_20 (Batc (None, 19, 256) 1024 \n_________________________________________________________________\nconv1d_18 (Conv1D) (None, 10, 256) 327936 \n_________________________________________________________________\nbatch_normalization_21 (Batc (None, 10, 256) 1024 \n_________________________________________________________________\nconv1d_19 (Conv1D) (None, 5, 256) 327936 \n_________________________________________________________________\nbatch_normalization_22 (Batc (None, 5, 256) 1024 \n_________________________________________________________________\nconv1d_20 (Conv1D) (None, 3, 256) 327936 \n_________________________________________________________________\nbatch_normalization_23 (Batc (None, 3, 256) 1024 
\n_________________________________________________________________\nconv1d_21 (Conv1D) (None, 1, 256) 196864 \n_________________________________________________________________\nbatch_normalization_24 (Batc (None, 1, 256) 1024 \n_________________________________________________________________\nflatten_3 (Flatten) (None, 256) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 10) 2570 \n=================================================================\nTotal params: 1,724,234\nTrainable params: 1,720,778\nNon-trainable params: 3,456\n_________________________________________________________________\nNone\n precision recall f1-score support\n\n 0 0.66 0.86 0.74 325\n 1 0.69 0.66 0.67 302\n 2 0.60 0.61 0.61 183\n 3 0.39 0.38 0.38 292\n 4 0.40 0.51 0.44 251\n 5 0.56 0.40 0.46 156\n 6 0.71 0.70 0.71 226\n 7 0.67 0.72 0.69 261\n 8 0.40 0.29 0.34 450\n 9 0.63 0.63 0.63 225\n\navg / total 0.56 0.56 0.56 2671\n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ecd6c21c0f30cd7f6e7b0f8ca59a5ce87ab03a3f | 105,377 | ipynb | Jupyter Notebook | sagemaker_edge_manager/sagemaker_edge_example/sagemaker_edge_example.ipynb | arindam999/amazon-sagemaker-examples | 17dd42d45449eebd650493b2799287a6762186d8 | [
"Apache-2.0"
] | null | null | null | sagemaker_edge_manager/sagemaker_edge_example/sagemaker_edge_example.ipynb | arindam999/amazon-sagemaker-examples | 17dd42d45449eebd650493b2799287a6762186d8 | [
"Apache-2.0"
] | null | null | null | sagemaker_edge_manager/sagemaker_edge_example/sagemaker_edge_example.ipynb | arindam999/amazon-sagemaker-examples | 17dd42d45449eebd650493b2799287a6762186d8 | [
"Apache-2.0"
] | null | null | null | 40.54521 | 11,908 | 0.602247 | [
[
[
"# SageMaker Edge Manager Example",
"_____no_output_____"
],
[
"## Introduction\n\nSageMaker Edge Manager is a service from Amazon SageMaker that lets you:\n\n+ prepare custom models for edge device hardware\n+ include a runtime for running machine learning inference efficiently on edge devices\n+ enable the device to send samples of data from each model securely to SageMaker for relabeling and retraining.\n\nThere are two main components to this service:\n+ SageMaker Edge Manager in the Cloud \n+ SageMaker Edge Agent on the Edge device\n\nThis notebook demonstrates the end-to-end workflow for getting a running SageMaker Edge Agent on the edge device. This will involve the following steps:\n\n+ Compile the model using SageMaker Neo\n+ Package the compiled model with SageMaker Edge Manager\n+ Deploy with SageMaker Edge Manager Agent\n+ Run inference with the model\n+ Capture the model's input and output data to S3\n\n**Note**:\nTypically, the SageMaker Edge Agent is run on an edge device. For the sake of this notebook, we will run the Agent on an EC2 instance. We show how to package the compiled model and then load it to the Agent on the edge Device to make predictions with. Finally, we show how to capture model's input and output to S3 via the Agent.\n\nThis notebook is intended only for notebook instances. When you run this notebook, choose the kernel: `conda_tensorflow_p36`",
"_____no_output_____"
],
[
"**Please note**: There are pricing implications to the use of this notebook. Please refer to [Edge Manager](https://aws.amazon.com/sagemaker/edge-manager/pricing) for more information.",
"_____no_output_____"
],
[
"## Demo Setup",
"_____no_output_____"
],
[
"We need an AWS account role with SageMaker access. This role is used to give SageMaker access to S3, launch an EC2 instance, and send commands with Systems Manager.",
"_____no_output_____"
]
],
[
[
"import sagemaker\nfrom sagemaker import get_execution_role\nimport boto3\nimport botocore\nimport json\n\nrole = get_execution_role()\nsess = sagemaker.Session()\nregion = boto3.Session().region_name",
"_____no_output_____"
],
[
"print(role)",
"arn:aws:iam::254903919824:role/service-role/AmazonSageMakerServiceCatalogProductsUseRole\n"
]
],
[
[
"Locate the SageMaker role printed above in the [IAM console](https://console.aws.amazon.com/iam), then find and attach the following policies to the role:\n\n- AmazonEC2FullAccess \n- AmazonEC2RoleforSSM \n- AmazonS3FullAccess \n- AmazonSSMManagedInstanceCore \n- AmazonSSMFullAccess \n- AWSIoTFullAccess \n\nYou can find more information about how to attach policies to a role [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console).\n\n**If you try this example with a real device, only attach AWSIoTFullAccess to create certificates on AWS IoT.**",
"_____no_output_____"
]
],
[
[
"#User Input: Set up names that will be used later in this example \n\nEC2_role_name = \"smex_ec2_role_9\"\nEC2_profile_name = \"smex_ec2_profile_9\"\nDevice_role_name = \"SageMaker_smex_device_role_9\"\n",
"_____no_output_____"
]
],
[
[
"We then need an S3 bucket that will be used for storing the model artifacts generated after compilation and the packaged artifacts generated by the edge packaging job.",
"_____no_output_____"
]
],
[
[
"# S3 bucket and folders for saving model artifacts.\n# Feel free to specify different bucket/folders here if you wish.\nbucket = sess.default_bucket()\nfolder = \"DEMO-Sagemaker-Edge\"\ncompilation_output_sub_folder = folder + \"/compilation-output\"\niot_folder = folder + \"/iot\"\n\n# S3 Location to save the model artifact after compilation\ns3_compilation_output_location = \"s3://{}/{}\".format(bucket, compilation_output_sub_folder)",
"_____no_output_____"
]
],
[
[
"Finally, we upload the test image to the S3 bucket. This image will be used for inference later.",
"_____no_output_____"
]
],
[
[
"keras_img_path = sess.upload_data(\"keras.bmp\", bucket, iot_folder)",
"_____no_output_____"
]
],
[
[
"### Launch EC2 Instance",
"_____no_output_____"
],
[
"As mentioned earlier, this EC2 instance is used in place of an edge device for running the agent software.",
"_____no_output_____"
]
],
[
[
"ec2_client = boto3.client(\"ec2\", region_name=region)",
"_____no_output_____"
],
[
"# Copyright 2021 Amazon.com.\n# SPDX-License-Identifier: MIT\n\n",
"_____no_output_____"
]
],
[
[
"Generate a key pair for the EC2 instance, and save the key PEM file. We could use this key with SSH to connect to the instance, but in this notebook example we will not use SSH; instead, we will use AWS Systems Manager to send commands to the instance.",
"_____no_output_____"
]
],
[
[
"import datetime\n\nec2_key_name = \"edge-manager-key-\" + str(datetime.datetime.now())\nec2_key_pair = ec2_client.create_key_pair(\n KeyName=ec2_key_name,\n)\n\nkey_pair = str(ec2_key_pair[\"KeyMaterial\"])\nkey_pair_file = open(\"ec2-key-pair.pem\", \"w\")\nkey_pair_file.write(key_pair)\nkey_pair_file.close()",
"_____no_output_____"
]
],
[
[
"Create a role for the EC2 instance we are going to use. Read for detailed information about [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).\n\nFollow steps here to [create an IAM role](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#create-iam-role). Note down the role name and role ARN; the role name will be used when we launch the EC2 instance, and role ARN will be needed to create inline policy.\n\nAfter creation, make sure the following policies are attached to role:\n\n- AmazonS3FullAccess \n- AmazonSSMManagedInstanceCore \n- CloudWatchAgentAdminPolicy \n\n\nLocate the same SageMaker role used for running this notebook in [Demo Setup](#Demo-Setup) in [IAM console](https://console.aws.amazon.com/iam), click `Add inline policy` button on the role summary page, choose JSON format and replace the content with below statement:\n\nBefore copying the following content, make sure you use the EC2 role ARN you just created in the `Resource` field for `iam:PassRole` action.\n\n```\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": \"iam:PassRole\",\n \"Resource\": \"arn:aws:iam::<account>:role/<role-name>\"\n }\n ]\n}\n```",
"_____no_output_____"
]
],
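The inline policy above can also be built programmatically instead of hand-editing JSON in the console. A small sketch (the account ID below is a placeholder; the role name matches `EC2_role_name` from earlier):

```python
import json

# Placeholder account ID; substitute your own account and role ARN
ec2_role_arn = "arn:aws:iam::123456789012:role/smex_ec2_role_9"

pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": ec2_role_arn,
        }
    ],
}
policy_json = json.dumps(pass_role_policy, indent=4)
```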
[
[
"!aws iam create-role --role-name $EC2_role_name --assume-role-policy-document file://ec2-role-trust-policy.json\n",
"{\r\n \"Role\": {\r\n \"Path\": \"/\",\r\n \"RoleName\": \"smex_ec2_role_9\",\r\n \"RoleId\": \"AROATWWLVETIFVBDCHELB\",\r\n \"Arn\": \"arn:aws:iam::254903919824:role/smex_ec2_role_9\",\r\n \"CreateDate\": \"2021-11-17T02:19:53Z\",\r\n \"AssumeRolePolicyDocument\": {\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Effect\": \"Allow\",\r\n \"Principal\": {\r\n \"Service\": \"ec2.amazonaws.com\"\r\n },\r\n \"Action\": \"sts:AssumeRole\"\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n"
],
[
"!aws iam put-role-policy --role-name $EC2_role_name --policy-name Pass-Permissions --policy-document file://ec2-role-access-policy4.json",
"_____no_output_____"
],
[
"!aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --role-name $EC2_role_name",
"_____no_output_____"
],
[
"!aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore --role-name $EC2_role_name",
"_____no_output_____"
],
[
"!aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentAdminPolicy --role-name $EC2_role_name",
"_____no_output_____"
],
[
"!aws iam create-instance-profile --instance-profile-name $EC2_profile_name",
"{\r\n \"InstanceProfile\": {\r\n \"Path\": \"/\",\r\n \"InstanceProfileName\": \"smex_ec2_profile_9\",\r\n \"InstanceProfileId\": \"AIPATWWLVETIFZRJBEWGL\",\r\n \"Arn\": \"arn:aws:iam::254903919824:instance-profile/smex_ec2_profile_9\",\r\n \"CreateDate\": \"2021-11-17T02:19:58Z\",\r\n \"Roles\": []\r\n }\r\n}\r\n"
],
[
"!aws iam add-role-to-instance-profile --instance-profile-name $EC2_profile_name --role-name $EC2_role_name",
"_____no_output_____"
],
[
"import time\ntime.sleep(20)",
"_____no_output_____"
]
],
[
[
"Launch an EC2 C5 instance. In this example, we will use an AWS Deep Learning AMI.",
"_____no_output_____"
]
],
[
[
"ami = ec2_client.describe_images(Filters=[{'Name': 'name', 'Values': ['Deep Learning AMI (Ubuntu 18.04) Version 36.0']}])['Images'][0]['ImageId']\nami",
"_____no_output_____"
],
[
"ec2_profile_name = EC2_profile_name # the name of the role created for EC2\n\nec2_instance = ec2_client.run_instances(\n ImageId=ami,\n MinCount=1,\n MaxCount=1,\n InstanceType=\"c5.large\",\n KeyName=ec2_key_name,\n IamInstanceProfile={\n \"Name\": ec2_profile_name,\n },\n TagSpecifications=[\n {\n \"ResourceType\": \"instance\",\n \"Tags\": [{\"Key\": \"Name\", \"Value\": \"edge-manager-notebook\"}],\n }\n ],\n)",
"_____no_output_____"
],
[
"instance_id = ec2_instance[\"Instances\"][0][\"InstanceId\"]\nprint(instance_id)",
"i-03880ae1c0078460b\n"
]
],
[
[
"## Compile Model using SageMaker Neo\n\nCreate a SageMaker client.",
"_____no_output_____"
]
],
[
[
"sagemaker_client = boto3.client(\"sagemaker\", region_name=region)",
"_____no_output_____"
]
],
[
[
"### Download pretrained Keras model",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\nmodel = tf.keras.applications.MobileNetV2()\nmodel.save(\"mobilenet_v2.h5\")",
"WARNING:tensorflow:From /home/ec2-user/anaconda3/envs/tensorflow_p36/cpu/lib/python3.6/site-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.\n\nWARNING:tensorflow:From /home/ec2-user/anaconda3/envs/tensorflow_p36/cpu/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\nDownloading data from https://github.com/JonathanCMitchell/mobilenet_v2_keras/releases/download/v1.1/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224.h5\n14540800/14536120 [==============================] - 0s 0us/step\nWARNING:tensorflow:OMP_NUM_THREADS is no longer used by the default Keras config. To configure the number of threads, use tf.config.threading APIs.\n"
],
[
"import tarfile\n\nwith tarfile.open(\"mobilenet_v2.tar.gz\", mode=\"w:gz\") as archive:\n archive.add(\"mobilenet_v2.h5\")",
"_____no_output_____"
],
[
"keras_model_path = sess.upload_data(\"mobilenet_v2.tar.gz\", bucket, folder)",
"_____no_output_____"
]
],
[
[
"**Note**: When calling ``create_compilation_job()`` the user is expected to provide all the correct input shapes required by the model for successful compilation. If using a different model, you will need to specify the framework and data shape correctly.",
"_____no_output_____"
]
],
[
[
"keras_model_data_shape = '{\"input_1\":[1,3,224,224]}'\nkeras_model_framework = \"keras\"\ntarget_os = \"LINUX\"\ntarget_arch = \"X86_64\"",
"_____no_output_____"
],
[
"import time\n\nkeras_compilation_job_name = \"Sagemaker-Edge-\" + str(time.time()).split(\".\")[0]\nprint(\"Compilation job for %s started\" % keras_compilation_job_name)\n\nresponse = sagemaker_client.create_compilation_job(\n CompilationJobName=keras_compilation_job_name,\n RoleArn=role,\n InputConfig={\n \"S3Uri\": keras_model_path,\n \"DataInputConfig\": keras_model_data_shape,\n \"Framework\": keras_model_framework.upper(),\n },\n OutputConfig={\n \"S3OutputLocation\": s3_compilation_output_location,\n \"TargetPlatform\": {\"Arch\": target_arch, \"Os\": target_os},\n },\n StoppingCondition={\"MaxRuntimeInSeconds\": 900},\n)\n\nprint(response)\n\n# Poll every 30 sec\nwhile True:\n response = sagemaker_client.describe_compilation_job(\n CompilationJobName=keras_compilation_job_name\n )\n if response[\"CompilationJobStatus\"] == \"COMPLETED\":\n break\n elif response[\"CompilationJobStatus\"] == \"FAILED\":\n print(response)\n raise RuntimeError(\"Compilation failed\")\n print(\"Compiling ...\")\n time.sleep(30)\nprint(\"Done!\")",
"Compilation job for Sagemaker-Edge-1637115655 started\n{'CompilationJobArn': 'arn:aws:sagemaker:us-east-2:254903919824:compilation-job/Sagemaker-Edge-1637115655', 'ResponseMetadata': {'RequestId': 'b1c7d75e-599e-4c73-9523-cc6a90afdc79', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'b1c7d75e-599e-4c73-9523-cc6a90afdc79', 'content-type': 'application/x-amz-json-1.1', 'content-length': '106', 'date': 'Wed, 17 Nov 2021 02:20:55 GMT'}, 'RetryAttempts': 0}}\nCompiling ...\nCompiling ...\nCompiling ...\nCompiling ...\nCompiling ...\nCompiling ...\nCompiling ...\nCompiling ...\nCompiling ...\nCompiling ...\nDone!\n"
]
],
[
[
"### Package Keras Model",
"_____no_output_____"
]
],
[
[
"keras_packaged_model_name = \"keras-model\"\nkeras_model_version = \"1.0\"\nkeras_model_package = \"{}-{}.tar.gz\".format(keras_packaged_model_name, keras_model_version)",
"_____no_output_____"
],
[
"keras_packaging_job_name = keras_compilation_job_name + \"-packaging\"\nresponse = sagemaker_client.create_edge_packaging_job(\n RoleArn=role,\n OutputConfig={\n \"S3OutputLocation\": s3_compilation_output_location,\n },\n ModelName=keras_packaged_model_name,\n ModelVersion=keras_model_version,\n EdgePackagingJobName=keras_packaging_job_name,\n CompilationJobName=keras_compilation_job_name,\n)\n\nprint(response)\n\n# Poll every 30 sec\nwhile True:\n job_status = sagemaker_client.describe_edge_packaging_job(\n EdgePackagingJobName=keras_packaging_job_name\n )\n if job_status[\"EdgePackagingJobStatus\"] == \"COMPLETED\":\n break\n elif job_status[\"EdgePackagingJobStatus\"] == \"FAILED\":\n print(job_status)\n raise RuntimeError(\"Edge Packaging failed\")\n print(\"Packaging ...\")\n time.sleep(30)\nprint(\"Done!\")",
"{'ResponseMetadata': {'RequestId': '3ceb1bdd-a926-4151-931c-05db4b002827', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '3ceb1bdd-a926-4151-931c-05db4b002827', 'content-type': 'application/x-amz-json-1.1', 'content-length': '0', 'date': 'Wed, 17 Nov 2021 02:25:57 GMT'}, 'RetryAttempts': 0}}\nPackaging ...\nDone!\n"
],
[
"keras_model_data = job_status[\"ModelArtifact\"]",
"_____no_output_____"
]
],
[
[
"### Create AWS IoT thing\n\nSageMaker Edge Manager uses AWS IoT Core to authenticate the device in order to make calls to SageMaker Edge Manager endpoints in the AWS Cloud. \n\nBefore an edge device can use AWS services, it must first authenticate. We recommend doing this via AWS IoT based authentication; for more details, refer [here](https://docs.aws.amazon.com/iot/latest/developerguide/authorizing-direct-aws.html) and [here](https://aws.amazon.com/blogs/security/how-to-eliminate-the-need-for-hardcoded-aws-credentials-in-devices-by-using-the-aws-iot-credentials-provider/).",
"_____no_output_____"
]
],
[
[
"iot_client = boto3.client(\"iot\", region_name=region)",
"_____no_output_____"
],
[
"iot_thing_name = \"sagemaker-edge-thing-demo\"\niot_thing_type = \"SagemakerEdgeDemo\"",
"_____no_output_____"
],
[
"iot_client.create_thing_type(thingTypeName=iot_thing_type)",
"_____no_output_____"
],
[
"iot_client.create_thing(thingName=iot_thing_name, thingTypeName=iot_thing_type)",
"_____no_output_____"
]
],
[
[
"### Create Device Fleet",
"_____no_output_____"
],
[
"#### Create IAM role for device fleet",
"_____no_output_____"
],
[
"Configure an IAM role in your AWS account that will be assumed by the credentials provider on behalf of the devices in your device fleet. \n\n**Note**: The name of the role must start with `SageMaker`.\n\nGo to the [IAM console](https://console.aws.amazon.com/iam), create a role for IoT, and attach the following policy:\n\n- AmazonSageMakerEdgeDeviceFleetPolicy\n\nAdd this statement to the trust relationship:\n```\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\"Service\": \"credentials.iot.amazonaws.com\"},\n      \"Action\": \"sts:AssumeRole\"\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\"Service\": \"sagemaker.amazonaws.com\"},\n      \"Action\": \"sts:AssumeRole\"\n    }\n  ]\n}\n```\n\nNote down the role ARN; it will be used later when creating the device fleet.",
"_____no_output_____"
]
],
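[
[
"The `aws iam create-role` command below reads its trust policy from a local file, `device-role-trust-policy.json`, which is not shown in this notebook. Based on the trust relationship described above, the file is expected to contain:\n\n```\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Principal\": {\"Service\": \"credentials.iot.amazonaws.com\"},\n            \"Action\": \"sts:AssumeRole\"\n        },\n        {\n            \"Effect\": \"Allow\",\n            \"Principal\": {\"Service\": \"sagemaker.amazonaws.com\"},\n            \"Action\": \"sts:AssumeRole\"\n        }\n    ]\n}\n```",
"_____no_output_____"
]
],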
[
[
"Output_of_create = !aws iam create-role --role-name $Device_role_name --assume-role-policy-document file://device-role-trust-policy.json\n",
"_____no_output_____"
],
[
"#print(Output_of_create)",
"_____no_output_____"
],
[
"import json\n\n# Parse the JSON returned by the create-role call instead of slicing fixed character positions\ndevice_fleet_role_arn = json.loads(\"\".join(Output_of_create))[\"Role\"][\"Arn\"]\nName_to_str = device_fleet_role_arn",
"_____no_output_____"
],
[
"print(Name_to_str)",
"arn:aws:iam::254903919824:role/SageMaker_smex_device_role_9\n"
],
[
"!aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSageMakerEdgeDeviceFleetPolicy --role-name $Device_role_name",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"device_fleet_name = \"demo-device-fleet\" + str(time.time()).split(\".\")[0]\n\n# Use the device fleet role ARN extracted above\nsagemaker_client.create_device_fleet(\n    DeviceFleetName=device_fleet_name,\n    RoleArn=device_fleet_role_arn,\n    OutputConfig={\"S3OutputLocation\": s3_compilation_output_location},\n)",
"_____no_output_____"
]
],
[
[
"#### Register device to the fleet",
"_____no_output_____"
]
],
[
[
"device_name = (\n    \"sagemaker-edge-demo-device\" + str(time.time()).split(\".\")[0]\n)  # device name should be 36 characters\n\nsagemaker_client.register_devices(\n    DeviceFleetName=device_fleet_name,\n    Devices=[\n        {\n            \"DeviceName\": device_name,\n            \"IotThingName\": iot_thing_name,\n            \"Description\": \"this is a sample virtual device\",\n        }\n    ],\n)",
"_____no_output_____"
]
],
[
[
"### Create and register client certificate with AWS IoT",
"_____no_output_____"
],
[
"Create private key, public key, and X.509 certificate files to register and activate the certificate with AWS IoT. ",
"_____no_output_____"
]
],
[
[
"iot_cert = iot_client.create_keys_and_certificate(setAsActive=True)",
"_____no_output_____"
]
],
[
[
"Save the files and upload them to the S3 bucket; these files will be used to provide credentials on the device for communicating with AWS services.",
"_____no_output_____"
]
],
[
[
"with open(\"./iot.pem.crt\", \"w\") as f:\n for line in iot_cert[\"certificatePem\"].split(\"\\n\"):\n f.write(line)\n f.write(\"\\n\")",
"_____no_output_____"
],
[
"with open(\"./iot_key.pem.key\", \"w\") as f:\n for line in iot_cert[\"keyPair\"][\"PrivateKey\"].split(\"\\n\"):\n f.write(line)\n f.write(\"\\n\")",
"_____no_output_____"
],
[
"with open(\"./iot_key_pair.pem.key\", \"w\") as f:\n for line in iot_cert[\"keyPair\"][\"PublicKey\"].split(\"\\n\"):\n f.write(line)\n f.write(\"\\n\")",
"_____no_output_____"
]
],
[
[
"Associate the role alias generated from `create_device_fleet()` with AWS IoT.",
"_____no_output_____"
]
],
[
[
"role_alias_name = \"SageMakerEdge-\" + device_fleet_name\n\nrole_alias = iot_client.describe_role_alias(roleAlias=role_alias_name)",
"_____no_output_____"
]
],
[
[
"We created and registered a certificate with AWS IoT earlier for successful authentication of your device. Now, we need to create and attach a policy to the certificate to authorize the request for the security token.",
"_____no_output_____"
]
],
[
[
"alias_policy = {\n \"Version\": \"2012-10-17\",\n \"Statement\": {\n \"Effect\": \"Allow\",\n \"Action\": \"iot:AssumeRoleWithCertificate\",\n \"Resource\": role_alias[\"roleAliasDescription\"][\"roleAliasArn\"],\n },\n}",
"_____no_output_____"
],
[
"policy_name = \"aliaspolicy-\" + str(time.time()).split(\".\")[0]\naliaspolicy = iot_client.create_policy(\n policyName=policy_name,\n policyDocument=json.dumps(alias_policy),\n)",
"_____no_output_____"
],
[
"iot_client.attach_policy(policyName=policy_name, target=iot_cert[\"certificateArn\"])",
"_____no_output_____"
],
[
"iot_endpoint = iot_client.describe_endpoint(endpointType=\"iot:CredentialProvider\")",
"_____no_output_____"
],
[
"endpoint = \"https://{}/role-aliases/{}/credentials\".format(\n iot_endpoint[\"endpointAddress\"], role_alias_name\n)",
"_____no_output_____"
]
],
[
[
"Get the official Amazon Root CA file and upload it to the S3 bucket. ",
"_____no_output_____"
]
],
[
[
"!wget https://www.amazontrust.com/repository/AmazonRootCA1.pem",
"--2021-11-17 02:26:50-- https://www.amazontrust.com/repository/AmazonRootCA1.pem\nResolving www.amazontrust.com (www.amazontrust.com)... 99.86.61.68, 99.86.61.97, 99.86.61.114, ...\nConnecting to www.amazontrust.com (www.amazontrust.com)|99.86.61.68|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 1188 (1.2K) [text/plain]\nSaving to: ‘AmazonRootCA1.pem.6’\n\nAmazonRootCA1.pem.6 100%[===================>] 1.16K --.-KB/s in 0s \n\n2021-11-17 02:26:51 (145 MB/s) - ‘AmazonRootCA1.pem.6’ saved [1188/1188]\n\n"
]
],
[
[
"Use the endpoint to make an HTTPS request to the credentials provider, which returns a security token. The following example command uses curl, but you can use any HTTP client.\n\n**Optional: verify the credentials.**\n",
"_____no_output_____"
]
],
[
[
"!curl --cert iot.pem.crt --key iot_key.pem.key --cacert AmazonRootCA1.pem $endpoint",
"{\"credentials\":{\"accessKeyId\":\"ASIA****************\",\"secretAccessKey\":\"****REDACTED****\",\"sessionToken\":\"****REDACTED****\",\"expiration\":\"2021-11-17T03:26:51Z\"}}"
]
],
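[
[
"The same exchange can also be done from Python instead of curl — for example with the `requests` library (a minimal sketch; it assumes `requests` is installed and that the certificate files created above are in the working directory):\n\n```python\nimport requests\n\n# Present the X.509 client certificate to the credentials provider\n# and receive temporary AWS credentials in return\nresponse = requests.get(\n    endpoint,\n    cert=(\"iot.pem.crt\", \"iot_key.pem.key\"),\n    verify=\"AmazonRootCA1.pem\",\n)\ncredentials = response.json()[\"credentials\"]\n```",
"_____no_output_____"
]
],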
[
[
"If the certificate can be verified against the endpoint without error, upload the certificate files to the S3 bucket.\n\nThese files will be used in the [Setup SageMaker Edge Manager Agent](#Setup-Sagemaker-Edge-Manager-Agent) section on the EC2 instance/device as the credential provider.",
"_____no_output_____"
]
],
[
[
"root_ca_path = sess.upload_data(\"AmazonRootCA1.pem\", bucket, iot_folder)\ndevice_cert_path = sess.upload_data(\"iot.pem.crt\", bucket, iot_folder)\ndevice_key_path = sess.upload_data(\"iot_key.pem.key\", bucket, iot_folder)",
"_____no_output_____"
]
],
[
[
"## Inference on the Edge",
"_____no_output_____"
],
[
"In our example, we will use [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) to remotely perform actions on the EC2 instance. To see the SSM logs in CloudWatch, refer to the [Install CloudWatch Agent section](#(Optional)Install-CloudWatch-Agent). \n\nThe execution status of `send_command` is available in the [AWS Systems Manager console](https://console.aws.amazon.com/systems-manager/run-command/complete-commands) command history.",
"_____no_output_____"
]
],
[
[
"ssm_client = boto3.client(\"ssm\", region_name=region)",
"_____no_output_____"
]
],
[
[
"### Setup SageMaker Edge Agent",
"_____no_output_____"
],
[
"Download the SageMaker Edge Agent binary examples to the EC2 instance.\n\nPlease fetch the latest version of the binaries from the SageMaker Edge release bucket. For more information, see [Inference engine (Edge Manager agent)](https://docs.aws.amazon.com/sagemaker/latest/dg/edge-device-fleet-about.html).",
"_____no_output_____"
]
],
[
[
"release_bucket_map = {\n \"armv8\": \"sagemaker-edge-release-store-us-west-2-linux-armv8\",\n \"linux\": \"sagemaker-edge-release-store-us-west-2-linux-x64\",\n \"win64\": \"sagemaker-edge-release-store-us-west-2-windows-x64\",\n \"win32\": \"sagemaker-edge-release-store-us-west-2-windows-x86\",\n}",
"_____no_output_____"
],
[
"# In this example, we will run inference on Linux\nrelease_bucket = release_bucket_map[\"linux\"]",
"_____no_output_____"
]
],
[
[
"To download the artifacts, specify the `VERSION`. The `VERSION` is made up of three components: `<MAJOR_VERSION>.<YYYYMMDD>.<SHA-7>`, where:\n\n- `MAJOR_VERSION`: The release version, currently set to 1.\n\n- `<YYYYMMDD>`: The time stamp of the artifacts release.\n\n- `SHA-7`: The repository commit ID the release was built from.\n\nWe suggest you use the latest artifact release time stamp. Use the following to get the latest time stamp.",
"_____no_output_____"
]
],
[
[
"pick_version = !aws s3 ls s3://$release_bucket/Releases/ | sort -r",
"_____no_output_____"
],
[
"# The listing is sorted in reverse, so the first entry is the latest release;\n# take the last whitespace-separated field and strip the trailing slash\nversion = pick_version[0].split()[-1].rstrip(\"/\")\nprint(version)",
"1.20210820.e20fa3a\n"
],
[
"# A version string from the above cell\n#version = \"1.20210820.e20fa3a\"",
"_____no_output_____"
],
[
"response = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"#!/bin/bash\",\n \"mkdir /demo\",\n \"aws s3 cp s3://{}/Releases/{}/{}.tgz demo.tgz\".format(\n release_bucket, version, version\n ),\n \"tar -xf demo.tgz -C /demo\",\n \"cd /demo/bin\",\n \"chmod +x sagemaker_edge_agent_binary\",\n \"chmod +x sagemaker_edge_agent_client_example\",\n ]\n },\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=response[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"Get model signing root certificates from the release bucket.",
"_____no_output_____"
]
],
[
[
"response = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"#!/bin/bash\",\n \"cd /demo\",\n \"mkdir certificates\",\n \"aws s3 cp s3://{}/Certificates/{}/{}.pem certificates\".format(\n release_bucket, region, region\n ),\n \"chmod 400 certificates/*\",\n ]\n },\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=response[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"Download IoT certificates, private key, models, and test images to the EC2 instance.",
"_____no_output_____"
]
],
[
[
"response = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"#!/bin/bash\",\n \"cd /demo\",\n \"mkdir iot-credentials\",\n \"cd iot-credentials\",\n \"aws s3 cp \" + root_ca_path + \" .\",\n \"aws s3 cp \" + device_cert_path + \" .\",\n \"aws s3 cp \" + device_key_path + \" .\",\n \"cd /demo\",\n \"aws s3 cp \" + keras_img_path + \" .\",\n \"aws s3 cp \" + keras_model_data + \" .\",\n \"mkdir keras_model\",\n \"tar -xf \" + keras_model_package + \" -C keras_model\",\n ]\n },\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=response[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"#### Configure SageMaker Edge Agent\n\nGenerate the SageMaker Edge Agent configuration file. [For more information, see the documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/edge-device-fleet-about.html#edge-device-fleet-running-agent).",
"_____no_output_____"
]
],
[
[
"sagemaker_edge_config = {\n \"sagemaker_edge_core_device_name\": device_name,\n \"sagemaker_edge_core_device_fleet_name\": device_fleet_name,\n \"sagemaker_edge_core_capture_data_buffer_size\": 30,\n \"sagemaker_edge_core_capture_data_batch_size\": 10,\n \"sagemaker_edge_core_capture_data_push_period_seconds\": 4,\n \"sagemaker_edge_core_folder_prefix\": \"demo_capture\",\n \"sagemaker_edge_core_region\": region,\n \"sagemaker_edge_core_root_certs_path\": \"/demo/certificates\",\n \"sagemaker_edge_provider_aws_ca_cert_file\": \"/demo/iot-credentials/AmazonRootCA1.pem\",\n \"sagemaker_edge_provider_aws_cert_file\": \"/demo/iot-credentials/iot.pem.crt\",\n \"sagemaker_edge_provider_aws_cert_pk_file\": \"/demo/iot-credentials/iot_key.pem.key\",\n \"sagemaker_edge_provider_aws_iot_cred_endpoint\": endpoint,\n \"sagemaker_edge_provider_provider\": \"Aws\",\n \"sagemaker_edge_provider_provider_path\": \"/demo/lib/libprovider_aws.so\",\n \"sagemaker_edge_provider_s3_bucket_name\": bucket,\n \"sagemaker_edge_core_capture_data_destination\": \"Cloud\",\n}",
"_____no_output_____"
],
[
"edge_config_file = open(\"sagemaker_edge_config.json\", \"w\")\njson.dump(sagemaker_edge_config, edge_config_file, indent=6)\nedge_config_file.close()",
"_____no_output_____"
]
],
[
[
"Upload SageMaker Edge Agent configuration to an S3 bucket.",
"_____no_output_____"
]
],
[
[
"config_path = sess.upload_data(\"sagemaker_edge_config.json\", bucket, iot_folder)",
"_____no_output_____"
]
],
[
[
"Download the config file to the EC2 instance.",
"_____no_output_____"
]
],
[
[
"response = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\"commands\": [\"#!/bin/bash\", \"aws s3 cp \" + config_path + \" /demo\"]},\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=response[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"#### Launch SageMaker Edge Agent",
"_____no_output_____"
],
[
"Initialize the SageMaker Edge Manager agent. Note that we are using a long timeout on this command. In production, we recommend running the agent as a daemon, using a service like systemd.",
"_____no_output_____"
]
],
[
[
"agent_out = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n TimeoutSeconds=24 * 60 * 60,\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"cd /demo\",\n \"rm -f /tmp/sagemaker_edge_agent_example.sock\",\n \"./bin/sagemaker_edge_agent_binary -a /tmp/sagemaker_edge_agent_example.sock -c sagemaker_edge_config.json\",\n ]\n },\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
]
],
[
[
"### Load Model\n\nIn this section, we show the model management capabilities offered by SageMaker Edge Manager. We will load the compiled and packaged model using the Agent, which keeps it ready to run inference with. Once a model is loaded, you can run as many inferences as necessary until the model is unloaded. This relieves the client applications of the logic and operational burden of managing models separately; a loaded model is simply an API call away from running inference.\n\nWhen loading the model with the SageMaker Edge Agent, the argument to the API points the Agent to a directory containing the packaged model (without any extraneous files within the directory). ",
"_____no_output_____"
],
[
"#### Load Keras model\n\n`keras_model` is the path containing the packaged model in this notebook. `demo-keras` is the name given to this model. This name will be used later to refer to this model when making predictions, capturing data, and unloading.",
"_____no_output_____"
]
],
[
[
"load_keras_model_out = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"cd /demo\",\n \"./bin/sagemaker_edge_agent_client_example LoadModel keras_model demo-keras\",\n ]\n },\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=load_keras_model_out[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"### List Models\n\nThis API simply lists all the models, and their names, that are loaded by the Agent. Note that the names shown here are the same as the ones provided during `LoadModel` in the previous section.",
"_____no_output_____"
]
],
[
[
"list_model_out = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\"commands\": [\"cd /demo\", \"./bin/sagemaker_edge_agent_client_example ListModels\"]},\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=list_model_out[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"### Run Predict\n\nIn this API, we pass the model name, the input data file that will be fed directly into the neural network, and the input tensor name that was specified earlier during the compilation phase, along with its size and shape.",
"_____no_output_____"
],
[
"#### Run prediction on Keras model",
"_____no_output_____"
]
],
[
[
"keras_predict_out = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"cd /demo\",\n \"./bin/sagemaker_edge_agent_client_example Predict demo-keras keras.bmp input_1 224 224 3\",\n ]\n },\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=keras_predict_out[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"### Capture Data\n\nCapture the inputs and outputs of an inference call to the cloud or to disk. The specific parameters were configured earlier in the config file. ",
"_____no_output_____"
]
],
[
[
"keras_capture_out = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"cd /demo\",\n \"./bin/sagemaker_edge_agent_client_example PredictAndCapture demo-keras keras.bmp input_1 224 224 3\",\n ]\n },\n)",
"_____no_output_____"
],
[
"time.sleep(20)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=keras_capture_out[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"### Unload Model\n\nAfter unloading a model, the same name can be reused for future `LoadModel` API calls.",
"_____no_output_____"
]
],
[
[
"unload_model_out = ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"cd /demo\",\n \"./bin/sagemaker_edge_agent_client_example UnloadModel demo-keras\",\n ]\n },\n)",
"_____no_output_____"
],
[
"ssm_client.get_command_invocation(\n CommandId=unload_model_out[\"Command\"][\"CommandId\"],\n InstanceId=instance_id,\n)",
"_____no_output_____"
]
],
[
[
"## Clean Up",
"_____no_output_____"
],
[
"Stop the Agent",
"_____no_output_____"
]
],
[
[
"ssm_client.cancel_command(CommandId=agent_out[\"Command\"][\"CommandId\"], InstanceIds=[instance_id])",
"_____no_output_____"
]
],
[
[
"Stop the EC2 instance",
"_____no_output_____"
]
],
[
[
"ec2_client.stop_instances(InstanceIds=[instance_id])",
"_____no_output_____"
]
],
[
[
"Detach and delete policy",
"_____no_output_____"
]
],
[
[
"iot_client.detach_policy(policyName=policy_name, target=iot_cert[\"certificateArn\"])\n\niot_client.delete_policy(policyName=policy_name)",
"_____no_output_____"
]
],
[
[
"Deregister device and delete device fleet",
"_____no_output_____"
]
],
[
[
"sagemaker_client.deregister_devices(DeviceFleetName=device_fleet_name, DeviceNames=[device_name])\n\nsagemaker_client.delete_device_fleet(DeviceFleetName=device_fleet_name)",
"_____no_output_____"
]
],
[
[
"## Appendix",
"_____no_output_____"
],
[
"### (Optional)Install CloudWatch Agent ",
"_____no_output_____"
]
],
[
[
"CW_log_config = {\n \"agent\": {\n \"metrics_collection_interval\": 10,\n \"logfile\": \"/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log\",\n },\n \"logs\": {\n \"logs_collected\": {\n \"files\": {\n \"collect_list\": [\n {\n \"file_path\": \"/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log\",\n \"log_group_name\": \"amazon-cloudwatch-agent.log\",\n \"log_stream_name\": \"amazon-cloudwatch-agent.log\",\n \"timezone\": \"UTC\",\n },\n {\n \"file_path\": \"/opt/aws/amazon-cloudwatch-agent/logs/test.log\",\n \"log_group_name\": \"test.log\",\n \"log_stream_name\": \"test.log\",\n \"timezone\": \"Local\",\n },\n ]\n }\n },\n \"log_stream_name\": \"my_log_stream_name\",\n \"force_flush_interval\": 15,\n },\n}",
"_____no_output_____"
],
[
"CW_file = open(\"cloudwatch.json\", \"w\")\njson.dump(CW_log_config, CW_file, indent=6)\nCW_file.close()",
"_____no_output_____"
],
[
"CW_config_path = sess.upload_data(\"cloudwatch.json\", bucket, iot_folder)",
"_____no_output_____"
],
[
"ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"#!/bin/bash\",\n \"aws s3 cp \"\n + CW_config_path\n + \" /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json\",\n ]\n },\n)",
"_____no_output_____"
],
[
"ssm_client.send_command(\n InstanceIds=[instance_id],\n DocumentName=\"AWS-RunShellScript\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Parameters={\n \"commands\": [\n \"#!/bin/bash\",\n \"wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb\",\n \"sudo dpkg -i -E ./amazon-cloudwatch-agent.deb\",\n ]\n },\n)",
"_____no_output_____"
]
],
[
[
"Install the CloudWatch agent package through the SSM agent.",
"_____no_output_____"
]
],
[
[
"ssm_client.send_command(\n DocumentName=\"AWS-ConfigureAWSPackage\",\n DocumentVersion=\"1\",\n OutputS3BucketName=bucket,\n OutputS3KeyPrefix=folder,\n Targets=[\n {\"Key\": \"InstanceIds\", \"Values\": [instance_id]},\n ],\n TimeoutSeconds=600,\n Parameters={\"action\": [\"Install\"], \"name\": [\"AmazonCloudWatchAgent\"]},\n MaxConcurrency=\"50\",\n MaxErrors=\"0\",\n)",
"_____no_output_____"
]
],
[
[
"To debug with CloudWatch, add the `CloudWatchOutputConfig` parameter to `send_command`:\n```\nCloudWatchOutputConfig={\n    'CloudWatchOutputEnabled': True\n}\n```\n\nExample:\n```\nssm_client.send_command(\n    InstanceIds=[instance_id],\n    DocumentName=\"AWS-RunShellScript\",\n    OutputS3BucketName=bucket,\n    OutputS3KeyPrefix=folder,\n    CloudWatchOutputConfig={\n        'CloudWatchOutputEnabled': True\n    },\n    Parameters={\n        'commands':[\n            \"cd /demo\",\n            \"./bin/neo_agent_binary -a /tmp/sagemaker_edge_agent_example.sock -c neo_config.json\" \n        ]\n    }\n)\n```\n\nThe running log can then be found in the CloudWatch log group `/aws/ssm/AWS-RunShellScript`.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecd6c8d36dbf6ded58b5c8f3e8d81d89f975d554 | 14,147 | ipynb | Jupyter Notebook | Week 8 Texts and Databases/Week-8-NLP-Databases/Working with Databases.ipynb | ebishwaraj/ebishwaraj-PythonForDataScience_DSE200X_UCSD | db4e82a02395aebfdfa1c02d11267f16369c020d | [
"MIT"
] | 1 | 2020-12-24T19:29:28.000Z | 2020-12-24T19:29:28.000Z | Week 8 Texts and Databases/Week-8-NLP-Databases/Working with Databases.ipynb | ebishwaraj/ebishwaraj-PythonForDataScience_DSE200X_UCSD | db4e82a02395aebfdfa1c02d11267f16369c020d | [
"MIT"
] | null | null | null | Week 8 Texts and Databases/Week-8-NLP-Databases/Working with Databases.ipynb | ebishwaraj/ebishwaraj-PythonForDataScience_DSE200X_UCSD | db4e82a02395aebfdfa1c02d11267f16369c020d | [
"MIT"
] | null | null | null | 28.989754 | 601 | 0.509578 | [
[
[
"# Access a Database with Python - Iris Dataset\n\nThe Iris dataset is a popular dataset, especially in the Machine Learning community: it is a set of measurements of 150 Iris flowers (50 from each of 3 species) together with their species labels. It is often used to introduce classification Machine Learning algorithms.\n\nFirst let's download the dataset in `SQLite` format from Kaggle:\n\n<https://www.kaggle.com/uciml/iris/>\n\nDownload `database.sqlite` and save it in the `data/iris` folder.",
"_____no_output_____"
],
[
"<p><img src=\"https://upload.wikimedia.org/wikipedia/commons/4/49/Iris_germanica_%28Purple_bearded_Iris%29%2C_Wakehurst_Place%2C_UK_-_Diliff.jpg\" alt=\"Iris germanica (Purple bearded Iris), Wakehurst Place, UK - Diliff.jpg\" height=\"145\" width=\"114\"></p>\n\n<p><br> From <a href=\"https://commons.wikimedia.org/wiki/File:Iris_germanica_(Purple_bearded_Iris),_Wakehurst_Place,_UK_-_Diliff.jpg#/media/File:Iris_germanica_(Purple_bearded_Iris),_Wakehurst_Place,_UK_-_Diliff.jpg\">Wikimedia</a>, by <a href=\"//commons.wikimedia.org/wiki/User:Diliff\" title=\"User:Diliff\">Diliff</a> - <span class=\"int-own-work\" lang=\"en\">Own work</span>, <a href=\"http://creativecommons.org/licenses/by-sa/3.0\" title=\"Creative Commons Attribution-Share Alike 3.0\">CC BY-SA 3.0</a>, <a href=\"https://commons.wikimedia.org/w/index.php?curid=33037509\">Link</a></p>",
"_____no_output_____"
],
[
"First let's check that the sqlite database is available, and display an error message if the file is not there (`assert` checks if the expression is `True`; otherwise it raises `AssertionError` with the error message string provided):",
"_____no_output_____"
]
],
[
[
"import os\ndata_iris_folder_content = os.listdir(\"data/iris\")",
"_____no_output_____"
],
[
"error_message = \"Error: sqlite file not available, check instructions above to download it\"\nassert \"database.sqlite\" in data_iris_folder_content, error_message",
"_____no_output_____"
]
],
[
[
"## Access the Database with the sqlite3 Package",
"_____no_output_____"
],
[
"We can use the `sqlite3` package from the Python standard library to connect to the `sqlite` database:",
"_____no_output_____"
]
],
[
[
"import sqlite3",
"_____no_output_____"
],
[
"conn = sqlite3.connect('data/iris/database.sqlite')",
"_____no_output_____"
],
[
"cursor = conn.cursor()",
"_____no_output_____"
],
[
"type(cursor)",
"_____no_output_____"
]
],
[
[
"A `sqlite3.Cursor` object is our interface to the database, mostly through the `execute` method, which allows us to run any `SQL` query on our database.\n\nFirst of all, we can get a list of all the tables saved in the database. This is done by reading the column `name` from the `sqlite_master` metadata table with:\n\n    SELECT name FROM sqlite_master\n\nThe output of the `execute` method is an iterator that can be used in a `for` loop to print the value of each row.",
"_____no_output_____"
]
],
[
[
"for row in cursor.execute(\"SELECT name FROM sqlite_master\"):\n print(row)",
"('Iris',)\n"
]
],
[
[
"A shortcut to directly execute the query and gather all the results is the `fetchall` method:",
"_____no_output_____"
]
],
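The same calls can be tried as a self-contained sketch using an in-memory database (the table name mirrors the Iris one, but the table itself is made up here, so the snippet runs without the Kaggle file):

```python
import sqlite3

# In-memory database instead of 'data/iris/database.sqlite', so this is self-contained
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE Iris (Id INTEGER, Species TEXT)")

# fetchall() gathers every row of the result set into a list of tuples
tables = cursor.execute("SELECT name FROM sqlite_master").fetchall()
print(tables)  # [('Iris',)]
conn.close()
```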
[
[
"cursor.execute(\"SELECT name FROM sqlite_master\").fetchall()",
"_____no_output_____"
]
],
[
[
"**Notice**: this way of finding the available tables in a database is specific to `sqlite`, other databases like `MySQL` or `PostgreSQL` have different syntax.",
"_____no_output_____"
],
[
"Then we can execute standard `SQL` queries on the database. `SQL` is a language designed to interact with data stored in a relational database; it has a standard specification, so the commands below work on any database.\n\nIf you need to connect to another database, you would use another package instead of `sqlite3`, for example:\n\n* [MySQL Connector](https://dev.mysql.com/doc/connector-python/en/) for MySQL\n* [Psycopg](http://initd.org/psycopg/docs/install.html) for PostgreSQL\n* [pymssql](http://pymssql.org/en/stable/) for Microsoft MS SQL\n\nYou would then connect to the database using its specific host, port and authentication credentials, but you could execute the exact same `SQL` statements.\n\nLet's take a look, for example, at the first 20 rows of the Iris table:",
"_____no_output_____"
]
],
[
[
"sample_data = cursor.execute(\"SELECT * FROM Iris LIMIT 20\").fetchall()",
"_____no_output_____"
],
[
"print(type(sample_data))\nsample_data",
"<class 'list'>\n"
],
[
"[row[0] for row in cursor.description]",
"_____no_output_____"
]
],
[
[
"The interface provided by `sqlite3` is evidently low-level; for data exploration purposes we would like to import the data directly into a more user-friendly library like `pandas`.",
"_____no_output_____"
],
[
"## Import data from a database to `pandas`",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"iris_data = pd.read_sql_query(\"SELECT * FROM Iris\", conn)",
"_____no_output_____"
],
[
"iris_data.head()",
"_____no_output_____"
],
[
"iris_data.dtypes",
"_____no_output_____"
]
],
[
[
"`pandas.read_sql_query` takes a `SQL` query and a connection object and imports the data into a `DataFrame`, keeping the same data types as the database columns. `pandas` provides a lot of the same functionality as `SQL`, with a more user-friendly interface.\n\nHowever, `sqlite3` is extremely useful for downselecting data **before** importing it into `pandas`.\n\nFor example, you might have 1 TB of data in a table stored in a database on a server machine. If you are interested in a subset of the data based on some criterion, it would be impossible to first load all the data into `pandas` and then filter it; instead, we should tell the database to perform the filtering and load only the downsized dataset into `pandas`.",
"_____no_output_____"
]
],
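The pattern of pushing the filter into `SQL` can be sketched end-to-end with a toy in-memory table (the column values below are made up, not the real Iris data):

```python
import sqlite3

import pandas as pd

# A toy in-memory stand-in for a table that would be too large to load whole
conn = sqlite3.connect(":memory:")
pd.DataFrame({
    "Species": ["Iris-setosa", "Iris-setosa", "Iris-virginica"],
    "PetalLengthCm": [1.4, 1.3, 5.1],
}).to_sql("Iris", conn, index=False)

# The database performs the filtering; only matching rows reach pandas
setosa = pd.read_sql_query(
    "SELECT * FROM Iris WHERE Species = 'Iris-setosa'", conn)
print(setosa.shape)  # (2, 2)
conn.close()
```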
[
[
"iris_setosa_data = pd.read_sql_query(\"SELECT * FROM Iris WHERE Species == 'Iris-setosa'\", conn)",
"_____no_output_____"
],
[
"iris_setosa_data\nprint(iris_setosa_data.shape)\nprint(iris_data.shape)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecd6ddd4950b00432452dd1febb2a165ab0ef7c3 | 632,879 | ipynb | Jupyter Notebook | docs_src/data_block.ipynb | superaja/fastai | 82e9388ed4a69adf8f20c654b9d097c3a9418a8f | [
"Apache-2.0"
] | 1 | 2018-11-18T11:50:14.000Z | 2018-11-18T11:50:14.000Z | docs_src/data_block.ipynb | superaja/fastai | 82e9388ed4a69adf8f20c654b9d097c3a9418a8f | [
"Apache-2.0"
] | null | null | null | docs_src/data_block.ipynb | superaja/fastai | 82e9388ed4a69adf8f20c654b9d097c3a9418a8f | [
"Apache-2.0"
] | 1 | 2019-02-04T16:10:31.000Z | 2019-02-04T16:10:31.000Z | 169.309524 | 182,880 | 0.890017 | [
[
[
"# The data block API",
"_____no_output_____"
]
],
[
[
"from fastai.gen_doc.nbdoc import *\nfrom fastai.tabular import *\nfrom fastai.text import *\nfrom fastai.vision import * \nnp.random.seed(42)",
"_____no_output_____"
]
],
[
[
"The data block API lets you customize the creation of a [`DataBunch`](/basic_data.html#DataBunch) by isolating the underlying parts of that process in separate blocks, mainly:\n 1. Where are the inputs and how to create them?\n 1. How to split the data into training and validation sets?\n 1. How to label the inputs?\n 1. What transforms to apply?\n 1. How to add a test set?\n 1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.html#DataBunch)?\n \nEach of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but they may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally, you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.html#DataBunch) (batch size, collate function...)\n\nThe data block API is called as such because you can mix and match each one of those blocks with the others, allowing total flexibility to create your customized [`DataBunch`](/basic_data.html#DataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.html#DataBunch) are great for beginners but you can't always make your data fit in the tracks they require.\n\n<img src=\"imgs/mix_match.png\" alt=\"Mix and match\" width=\"200\">\n\nAs usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.",
"_____no_output_____"
],
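The chained calls work because each step returns an object exposing the next step's methods. A stripped-down sketch of that fluent pattern (the class and method names here are illustrative, not fastai's real API):

```python
# Hypothetical sketch of the fluent, step-by-step pattern the data block API
# uses; names are illustrative only, not fastai's actual classes.
class ItemListSketch:
    def __init__(self, items):
        self.items = items

    def split_by_pct(self, pct=0.2):
        # Last pct of the items becomes the validation set
        cut = int(len(self.items) * (1 - pct))
        self.train, self.valid = self.items[:cut], self.items[cut:]
        return self  # returning self is what makes the chaining work

    def label_with(self, func):
        # Pair every input with its label in both splits
        self.train = [(x, func(x)) for x in self.train]
        self.valid = [(x, func(x)) for x in self.valid]
        return self

data = (ItemListSketch(list(range(10)))
        .split_by_pct(0.2)
        .label_with(lambda x: x % 2))
print(len(data.train), len(data.valid))  # 8 2
```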
[
"## Examples of use",
"_____no_output_____"
],
[
"Let's begin with our traditional MNIST example.",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_TINY)\ntfms = get_transforms(do_flip=False)\npath.ls()",
"_____no_output_____"
],
[
"(path/'train').ls()",
"_____no_output_____"
]
],
[
[
"In [`vision.data`](/vision.data.html#vision.data), we create an easy [`DataBunch`](/basic_data.html#DataBunch) suitable for classification by simply typing:",
"_____no_output_____"
]
],
[
[
"data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)",
"_____no_output_____"
]
],
[
[
"This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.html#train) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this:",
"_____no_output_____"
]
],
[
[
"data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders\n .split_by_folder() #How to split in train/valid? -> use the folders\n .label_from_folder() #How to label? -> depending on the folder of the filenames\n .add_test_folder() #Optionally add a test set (here default name is test)\n .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64\n .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch",
"_____no_output_____"
],
[
"data.show_batch(3, figsize=(6,6), hide_axis=False)",
"_____no_output_____"
]
],
[
[
"Let's look at another example from [`vision.data`](/vision.data.html#vision.data) with the planet dataset. This time, it's a multi-label classification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is:",
"_____no_output_____"
]
],
[
[
"planet = untar_data(URLs.PLANET_TINY)\nplanet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)",
"_____no_output_____"
],
[
"data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms)",
"_____no_output_____"
]
],
[
[
"With the data block API we can rewrite this like that:",
"_____no_output_____"
]
],
[
[
"data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')\n #Where to find the data? -> in planet 'train' folder\n .random_split_by_pct()\n #How to split in train/valid? -> randomly with the default 20% in valid\n .label_from_df(label_delim=' ')\n #How to label? -> use the csv file\n .transform(planet_tfms, size=128)\n #Data augmentation? -> use tfms with a size of 128\n .databunch()) \n #Finally -> use the defaults for conversion to databunch",
"_____no_output_____"
],
[
"data.show_batch(rows=2, figsize=(9,7))",
"_____no_output_____"
]
],
[
[
"The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.html#DataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder.",
"_____no_output_____"
]
],
[
[
"camvid = untar_data(URLs.CAMVID_TINY)\npath_lbl = camvid/'labels'\npath_img = camvid/'images'",
"_____no_output_____"
]
],
[
[
"We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...)",
"_____no_output_____"
]
],
[
[
"codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes",
"_____no_output_____"
]
],
[
[
"And we define the following function that infers the mask filename from the image filename.",
"_____no_output_____"
]
],
[
[
"get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'",
"_____no_output_____"
]
],
[
[
"Then we can easily define a [`DataBunch`](/basic_data.html#DataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image.",
"_____no_output_____"
]
],
[
[
"data = (SegmentationItemList.from_folder(path_img)\n .random_split_by_pct()\n .label_from_func(get_y_fn, classes=codes)\n .transform(get_transforms(), tfm_y=True, size=128)\n .databunch())",
"_____no_output_____"
],
[
"data.show_batch(rows=2, figsize=(7,5))",
"_____no_output_____"
]
],
[
[
"Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/#home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename.",
"_____no_output_____"
]
],
[
[
"coco = untar_data(URLs.COCO_TINY)\nimages, lbl_bbox = get_annotations(coco/'train.json')\nimg2bbox = dict(zip(images, lbl_bbox))\nget_y_func = lambda o:img2bbox[o.name]",
"_____no_output_____"
]
],
[
[
"The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes.",
"_____no_output_____"
]
],
[
[
"data = (ObjectItemList.from_folder(coco)\n #Where are the images? -> in coco\n .random_split_by_pct() \n #How to split in train/valid? -> randomly with the default 20% in valid\n .label_from_func(get_y_func)\n #How to find the labels? -> use get_y_func\n .transform(get_transforms(), tfm_y=True)\n #Data augmentation? -> Standard transforms with tfm_y=True\n .databunch(bs=16, collate_fn=bb_pad_collate)) \n #Finally we convert to a DataBunch and we use bb_pad_collate",
"_____no_output_____"
],
[
"data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6))",
"_____no_output_____"
]
],
[
[
"But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model.",
"_____no_output_____"
]
],
[
[
"imdb = untar_data(URLs.IMDB_SAMPLE)",
"_____no_output_____"
],
[
"data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text')\n #Where are the inputs? Column 'text' of this csv\n .random_split_by_pct()\n #How to split it? Randomly with the default 20%\n .label_for_lm()\n #Label it for a language model\n .databunch())",
"_____no_output_____"
],
[
"data_lm.show_batch()",
"_____no_output_____"
]
],
[
[
"For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`.",
"_____no_output_____"
]
],
[
[
"data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text')\n .split_from_df(col='is_valid')\n .label_from_df(cols='label')\n .databunch())",
"_____no_output_____"
],
[
"data_clas.show_batch()",
"_____no_output_____"
]
],
[
[
"Lastly, for tabular data, we just have to pass the names of our categorical and continuous variables as extra arguments. We also add some [`PreProcessor`](/data_block.html#PreProcessor)s that are going to be applied to our data once the splitting and labelling is done.",
"_____no_output_____"
]
],
[
[
"adult = untar_data(URLs.ADULT_SAMPLE)\ndf = pd.read_csv(adult/'adult.csv')\ndep_var = 'salary'\ncat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']\ncont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']\nprocs = [FillMissing, Categorify, Normalize]",
"_____no_output_____"
],
[
"data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)\n .split_by_idx(valid_idx=range(800,1000))\n .label_from_df(cols=dep_var)\n .databunch())",
"_____no_output_____"
],
[
"data.show_batch()",
"_____no_output_____"
]
],
[
[
"## Step 1: Provide inputs",
"_____no_output_____"
],
[
"The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.html#ItemList)).",
"_____no_output_____"
]
],
[
[
"show_doc(ItemList, title_level=3)",
"_____no_output_____"
]
],
[
[
"This class groups the inputs for our model in `items` and saves a `path` attribute, which is where it will look for any files (image files, a csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe), and `processor` is applied to the inputs after the splitting and labelling.",
"_____no_output_____"
],
[
"It has multiple subclasses depending on the type of data you're handling. Here is a quick list:\n - [`CategoryList`](/data_block.html#CategoryList) for labels in classification\n - [`MultiCategoryList`](/data_block.html#MultiCategoryList) for labels in a multi classification problem\n - [`FloatList`](/data_block.html#FloatList) for float labels in a regression problem\n - [`ImageItemList`](/vision.data.html#ImageItemList) for data that are images\n - [`SegmentationItemList`](/vision.data.html#SegmentationItemList) like [`ImageItemList`](/vision.data.html#ImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.html#SegmentationLabelList)\n - [`SegmentationLabelList`](/vision.data.html#SegmentationLabelList) for segmentation masks\n - [`ObjectItemList`](/vision.data.html#ObjectItemList) like [`ImageItemList`](/vision.data.html#ImageItemList) but will default labels to `ObjectLabelList`\n - `ObjectLabelList` for object detection\n - [`PointsItemList`](/vision.data.html#PointsItemList) for points (of the type [`ImagePoints`](/vision.image.html#ImagePoints))\n - [`ImageImageList`](/vision.data.html#ImageImageList) for image to image tasks\n - [`TextList`](/text.data.html#TextList) for text data\n - [`TextFilesList`](/text.data.html#TextFilesList) for text data stored in files\n - [`TabularList`](/tabular.data.html#TabularList) for tabular data\n - [`CollabList`](/collab.html#CollabList) for collaborative filtering",
"_____no_output_____"
],
[
"Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods",
"_____no_output_____"
]
],
[
[
"show_doc(ItemList.from_folder)",
"_____no_output_____"
],
[
"show_doc(ItemList.from_df)",
"_____no_output_____"
],
[
"show_doc(ItemList.from_csv)",
"_____no_output_____"
]
],
[
[
"### Optional step: filter your data",
"_____no_output_____"
],
[
"The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods.",
"_____no_output_____"
]
],
[
[
"show_doc(ItemList.filter_by_func)",
"_____no_output_____"
],
[
"show_doc(ItemList.filter_by_folder)",
"_____no_output_____"
],
[
"show_doc(ItemList.filter_by_rand)",
"_____no_output_____"
],
[
"show_doc(ItemList.to_text)",
"_____no_output_____"
],
[
"show_doc(ItemList.use_partial_data)",
"_____no_output_____"
]
],
[
[
"### Writing your own [`ItemList`](/data_block.html#ItemList)",
"_____no_output_____"
],
[
"First check if you can't easily customize one of the existing subclasses by:\n- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)\n- applying a custom `processor` (see step 4)\n- changing the default `label_cls` for the label creation\n- adding a default [`PreProcessor`](/data_block.html#PreProcessor) with the `_processor` class variable\n\nIf this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed.",
"_____no_output_____"
]
],
[
[
"show_doc(ItemList.analyze_pred)",
"_____no_output_____"
],
[
"show_doc(ItemList.get)",
"_____no_output_____"
],
[
"show_doc(ItemList.new)",
"_____no_output_____"
]
],
[
[
"You'll normally never need to subclass this; just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`.",
"_____no_output_____"
]
],
[
[
"show_doc(ItemList.reconstruct)",
"_____no_output_____"
]
],
[
[
"## Step 2: Split the data between the training and the validation set",
"_____no_output_____"
],
[
"This step is normally straightforward: you just have to pick one of the following functions depending on what you need.",
"_____no_output_____"
]
],
[
[
"show_doc(ItemList.no_split)",
"_____no_output_____"
],
[
"show_doc(ItemList.random_split_by_pct)",
"_____no_output_____"
],
[
"show_doc(ItemList.split_by_files)",
"_____no_output_____"
],
[
"show_doc(ItemList.split_by_fname_file)",
"_____no_output_____"
],
[
"show_doc(ItemList.split_by_folder)",
"_____no_output_____"
],
[
"jekyll_note(\"This method looks at the folder immediately after `self.path` for `valid` and `train`.\")",
"_____no_output_____"
],
[
"show_doc(ItemList.split_by_idx)",
"_____no_output_____"
],
[
"show_doc(ItemList.split_by_idxs)",
"_____no_output_____"
],
[
"show_doc(ItemList.split_by_list)",
"_____no_output_____"
],
[
"show_doc(ItemList.split_by_valid_func)",
"_____no_output_____"
],
[
"show_doc(ItemList.split_from_df)",
"_____no_output_____"
],
[
"jekyll_warn(\"This method assumes the data has been created from a csv file or a dataframe.\")",
"_____no_output_____"
]
],
[
[
"## Step 3: Label the inputs",
"_____no_output_____"
],
[
"To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.html#ItemList), and if there is none, it will go to [`CategoryList`](/data_block.html#CategoryList), [`MultiCategoryList`](/data_block.html#MultiCategoryList) or [`FloatList`](/data_block.html#FloatList) depending on the type of the labels). This is implemented in the following function:",
"_____no_output_____"
]
],
[
[
"show_doc(ItemList.get_label_cls)",
"_____no_output_____"
]
],
[
[
"The first example in these docs created labels as follows:",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_TINY)\nll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train",
"_____no_output_____"
]
],
[
[
"If you want to save the data necessary to recreate your [`LabelList`](/data_block.html#LabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:\n\n```python\nll.train.to_csv('tmp.csv')\n```\n\nOr just grab a `pd.DataFrame` directly:",
"_____no_output_____"
]
],
[
[
"ll.to_df().head()",
"_____no_output_____"
],
[
"show_doc(ItemList.label_empty)",
"_____no_output_____"
],
[
"show_doc(ItemList.label_from_list)",
"_____no_output_____"
],
[
"show_doc(ItemList.label_from_df)",
"_____no_output_____"
],
[
"jekyll_warn(\"This method only works with data objects created with either `from_csv` or `from_df` methods.\")",
"_____no_output_____"
],
[
"show_doc(ItemList.label_const)",
"_____no_output_____"
],
[
"show_doc(ItemList.label_from_folder)",
"_____no_output_____"
],
[
"jekyll_note(\"This method looks at the last subfolder in the path to determine the classes.\")",
"_____no_output_____"
],
[
"show_doc(ItemList.label_from_func)",
"_____no_output_____"
],
[
"show_doc(ItemList.label_from_re)",
"_____no_output_____"
],
[
"show_doc(CategoryList, title_level=3)",
"_____no_output_____"
]
],
[
[
"An [`ItemList`](/data_block.html#ItemList) suitable for storing labels in `items` belonging to `classes`. If `None` is passed, `classes` will be determined from the unique labels. `processor` will default to [`CategoryProcessor`](/data_block.html#CategoryProcessor).",
"_____no_output_____"
]
],
[
[
"show_doc(MultiCategoryList, title_level=3)",
"_____no_output_____"
]
],
[
[
"It will store a list of labels in `items` belonging to `classes`. If `None` is passed, `classes` will be determined from the unique labels. `sep` is used to split the content of `items` into a list of tags.\n\nIf `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as they can't be inferred from the one-hot-encoded items).",
"_____no_output_____"
]
],
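As a tiny illustration of what one-hot encoding against a fixed class list means, here is a hypothetical helper (the class names and tags below are made up, not from the planet dataset):

```python
# Hypothetical one-hot encoding of multi-label tags against a fixed class list.
classes = ["clear", "cloudy", "haze"]

def one_hot(tags, classes):
    # 1.0 where the class appears in the item's tags, 0.0 elsewhere
    return [1.0 if c in tags else 0.0 for c in classes]

print(one_hot(["clear", "haze"], classes))  # [1.0, 0.0, 1.0]
```

Note that the mapping is lossless only while `classes` is known: from `[1.0, 0.0, 1.0]` alone you can't recover the label names, which is why `classes` must be passed explicitly in the one-hot case.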
[
[
"show_doc(FloatList, title_level=3)",
"_____no_output_____"
],
[
"show_doc(EmptyLabelList, title_level=3)",
"_____no_output_____"
]
],
[
[
"## Invisible step: preprocessing",
"_____no_output_____"
],
[
"This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.html#ItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.html#PreProcessor) classes).\n\nA processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.\n\nAnother example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.html#PreProcessor) and applied on the validation set.\n\nThis is the generic class for all processors.",
"_____no_output_____"
]
],
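To make the train/valid asymmetry concrete, here is a hypothetical, stripped-down processor (not fastai's actual class) that fills missing values with a median computed on the training items only, then reuses that stored state on validation items:

```python
# Hypothetical sketch of the PreProcessor idea: state (here, the median) is
# computed once on the training items and reused unchanged on validation items.
class MedianFillProcessor:
    def process(self, items, train=True):
        if train:
            # Compute and store the state from the training set only
            present = sorted(x for x in items if x is not None)
            self.median = present[len(present) // 2]
        # Apply the stored state to fill missing values
        return [self.median if x is None else x for x in items]

proc = MedianFillProcessor()
train = proc.process([1, None, 3, 5], train=True)   # median computed here: 3
valid = proc.process([None, 10], train=False)       # reuses the training median
print(train, valid)  # [1, 3, 3, 5] [3, 10]
```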
[
[
"show_doc(PreProcessor, title_level=3)",
"_____no_output_____"
],
[
"show_doc(PreProcessor.process_one)",
"_____no_output_____"
]
],
[
[
"Process one `item`. This method needs to be written in any subclass.",
"_____no_output_____"
]
],
[
[
"show_doc(PreProcessor.process)",
"_____no_output_____"
]
],
[
[
"Process a dataset. This defaults to applying `process_one` to every `item` of `ds`.",
"_____no_output_____"
]
],
[
[
"show_doc(CategoryProcessor, title_level=3)",
"_____no_output_____"
],
[
"show_doc(CategoryProcessor.generate_classes)",
"_____no_output_____"
],
[
"show_doc(MultiCategoryProcessor, title_level=3)",
"_____no_output_____"
],
[
"show_doc(MultiCategoryProcessor.generate_classes)",
"_____no_output_____"
]
],
[
[
"## Optional steps",
"_____no_output_____"
],
[
"### Add transforms",
"_____no_output_____"
],
[
"Transforms differ from processors in the sense that they are applied on the fly when we grab one item. In the case of random transforms, they may also change each time we ask for the same item.",
"_____no_output_____"
]
],
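A plain-Python sketch of this on-the-fly behaviour (hypothetical names, not the fastai API): because the transform runs inside item access rather than at initialization, asking twice for the same index can give different values when the transform is random.

```python
import random

class TransformedList:
    """Toy item list: `tfm` is applied at access time, not at initialization."""

    def __init__(self, items, tfm):
        self.items, self.tfm = items, tfm

    def __getitem__(self, i):
        return self.tfm(self.items[i])  # re-applied on every access

random.seed(0)
jitter = lambda x: x + random.choice([-1, 0, 1])  # a random transform
data = TransformedList([10, 20, 30], jitter)
first, second = data[0], data[0]  # same index, transform drawn twice
```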
[
[
"show_doc(LabelLists.transform)",
"_____no_output_____"
]
],
[
[
"This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them; if set to `True`, the transforms will be applied to both input and target.",
"_____no_output_____"
],
[
"### Add a test set",
"_____no_output_____"
],
[
"To add a test set, you can use one of the two following methods.",
"_____no_output_____"
]
],
[
[
"show_doc(LabelLists.add_test)",
"_____no_output_____"
],
[
"jekyll_note(\"Here `items` can be an `ItemList` or a collection.\")",
"_____no_output_____"
],
[
"show_doc(LabelLists.add_test_folder)",
"_____no_output_____"
]
],
[
[
"**Important**! No labels will be collected, even if they are available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. \n\nIn the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.\n\nIf you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:\n\n```\ndata_test = (ImageItemList.from_folder(path)\n .split_by_folder(train='train', valid='test')\n .label_from_folder()\n ...)\n```\n\nAnother approach: use a normal validation set during training, and then, once training is over, validate the labeled test set by loading it as a new validation set:\n\n```\ntfms = []\npath = Path('data').resolve()\ndata = (ImageItemList.from_folder(path)\n .split_by_pct()\n .label_from_folder()\n .transform(tfms)\n .databunch()\n .normalize() ) \nlearn = create_cnn(data, models.resnet50, metrics=accuracy)\nlearn.fit_one_cycle(5,1e-2)\n\n# now replace the validation dataset entry with the test dataset as a new validation dataset: \n# everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder` \n# (or perhaps you were already using the latter, so simply switch to valid='test')\ndata_test = (ImageItemList.from_folder(path)\n .split_by_folder(train='train', valid='test')\n .label_from_folder()\n .transform(tfms)\n .databunch()\n .normalize()\n ) \nlearn.data = data_test\nlearn.validate()\n```\nOf course, your data block can be totally different; this is just an example.",
"_____no_output_____"
],
[
"## Step 4: convert to a [`DataBunch`](/basic_data.html#DataBunch)",
"_____no_output_____"
],
[
"This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.html#DataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.html#DataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you.",
"_____no_output_____"
]
],
[
[
"show_doc(LabelLists.databunch)",
"_____no_output_____"
]
],
[
[
"## Inner classes",
"_____no_output_____"
]
],
[
[
"show_doc(LabelList, title_level=3)",
"_____no_output_____"
]
],
[
[
"Optionally apply `tfms` to `y` if `tfm_y` is `True`. ",
"_____no_output_____"
]
],
[
[
"show_doc(LabelList.export)",
"_____no_output_____"
],
[
"show_doc(LabelList.transform_y)",
"_____no_output_____"
],
[
"show_doc(LabelList.get_state)",
"_____no_output_____"
],
[
"show_doc(LabelList.load_empty)",
"_____no_output_____"
],
[
"show_doc(LabelList.load_state)",
"_____no_output_____"
],
[
"show_doc(LabelList.process)",
"_____no_output_____"
],
[
"show_doc(LabelList.set_item)",
"_____no_output_____"
],
[
"show_doc(LabelList.to_df)",
"_____no_output_____"
],
[
"show_doc(LabelList.to_csv)",
"_____no_output_____"
],
[
"show_doc(LabelList.transform)",
"_____no_output_____"
],
[
"show_doc(ItemLists, title_level=3)",
"_____no_output_____"
],
[
"show_doc(ItemLists.label_from_lists)",
"_____no_output_____"
],
[
"show_doc(ItemLists.transform)",
"_____no_output_____"
],
[
"show_doc(ItemLists.transform_y)",
"_____no_output_____"
],
[
"show_doc(LabelLists, title_level=3)",
"_____no_output_____"
],
[
"show_doc(LabelLists.get_processors)",
"_____no_output_____"
],
[
"show_doc(LabelLists.load_empty)",
"_____no_output_____"
],
[
"show_doc(LabelLists.load_state)",
"_____no_output_____"
],
[
"show_doc(LabelLists.process)",
"_____no_output_____"
]
],
[
[
"## Helper functions",
"_____no_output_____"
]
],
[
[
"show_doc(get_files)",
"_____no_output_____"
]
],
[
[
"## Undocumented Methods - Methods moved below this line will intentionally be hidden",
"_____no_output_____"
]
],
[
[
"show_doc(CategoryList.new)",
"_____no_output_____"
],
[
"show_doc(LabelList.new)",
"_____no_output_____"
],
[
"show_doc(CategoryList.get)",
"_____no_output_____"
],
[
"show_doc(LabelList.predict)",
"_____no_output_____"
],
[
"show_doc(ItemList.new)",
"_____no_output_____"
],
[
"show_doc(ItemList.process_one)",
"_____no_output_____"
],
[
"show_doc(ItemList.process)",
"_____no_output_____"
],
[
"show_doc(MultiCategoryProcessor.process_one)",
"_____no_output_____"
],
[
"show_doc(FloatList.get)",
"_____no_output_____"
],
[
"show_doc(CategoryProcessor.process_one)",
"_____no_output_____"
],
[
"show_doc(CategoryProcessor.create_classes)",
"_____no_output_____"
],
[
"show_doc(CategoryProcessor.process)",
"_____no_output_____"
],
[
"show_doc(MultiCategoryList.get)",
"_____no_output_____"
],
[
"show_doc(FloatList.new)",
"_____no_output_____"
],
[
"show_doc(FloatList.reconstruct)",
"_____no_output_____"
],
[
"show_doc(MultiCategoryList.analyze_pred)",
"_____no_output_____"
],
[
"show_doc(MultiCategoryList.reconstruct)",
"_____no_output_____"
],
[
"show_doc(CategoryList.reconstruct)",
"_____no_output_____"
],
[
"show_doc(CategoryList.analyze_pred)",
"_____no_output_____"
],
[
"show_doc(EmptyLabelList.reconstruct)",
"_____no_output_____"
],
[
"show_doc(EmptyLabelList.get)",
"_____no_output_____"
],
[
"show_doc(LabelList.databunch)",
"_____no_output_____"
]
],
[
[
"## New Methods - Please document or move to the undocumented section",
"_____no_output_____"
]
],
[
[
"show_doc(ItemList.add)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd6e8177d48d822131f74ec6711ef1fa7acc8d7 | 28,555 | ipynb | Jupyter Notebook | Test/Dataset_test_result.ipynb | Miguel-Jiahao-Wang/Connected_Components_in_Graphs | 9e4a3236eccf06decf3a54b609066acb5c1b5936 | [
"MIT"
] | null | null | null | Test/Dataset_test_result.ipynb | Miguel-Jiahao-Wang/Connected_Components_in_Graphs | 9e4a3236eccf06decf3a54b609066acb5c1b5936 | [
"MIT"
] | null | null | null | Test/Dataset_test_result.ipynb | Miguel-Jiahao-Wang/Connected_Components_in_Graphs | 9e4a3236eccf06decf3a54b609066acb5c1b5936 | [
"MIT"
] | null | null | null | 22.53749 | 199 | 0.416074 | [
[
[
"import python_version\nfrom python_version import Graph, Tree, Cracker_python\nimport os",
"_____no_output_____"
],
[
"from pyspark import SparkContext, SparkConf\nconf = SparkConf().setAppName(\"pyspark\")\nsc = SparkContext(conf=conf)",
"_____no_output_____"
],
[
"def Min_Selection_Step(G): #dictionary format RDD\n v_min = G.map(lambda x: (x[0], min(x[1] | {x[0]})))\n NN_G_u = G.map(lambda x: (x[0], (x[1] | {x[0]})))\n addEdge1 = v_min.cogroup(NN_G_u).map(lambda x :(x[0], ( list(x[1][0]), list(x[1][1])))) #if it is possible to reduce to one MapReduce job\n H = addEdge1.flatMap(lambda x: [(x[1][0][0], y) for y in x[1][1][0]]).map(lambda x: (x[1], x[0])).groupByKey().map(lambda x: (x[0], set(x[1])))#.filter(lambda x: len(x[1]) > 1)\n return H\n\ndef Pruning_Step(H, T):\n H_filtered = H.filter(lambda x: len(x[1]) > 1)\n v_min_filtered = H_filtered.map(lambda x: (x[0], min(x[1])))\n NN_H_u = H_filtered.map(lambda x: (x[0], x[1] - {min(x[1])} ))\n addEdge2 = v_min_filtered.cogroup(NN_H_u).map(lambda x :(x[0], ( list(x[1][0]), list(x[1][1]))))\n G = addEdge2.flatMap(lambda x: [(x[1][0][0], y) for y in x[1][1][0]]).flatMap(lambda x: [x, (x[1], x[0])]).groupByKey().map(lambda x: (x[0], set(x[1])))\n \n \n #deactiviation\n deactiveNodes = H.filter(lambda x: x[0] not in x[1]).map(lambda x: (x[0], None))\n v_min = H.map(lambda x: (x[0], min(x[1])))\n addEdge3 = deactiveNodes.join(v_min).map(lambda x: (x[1][1], x[0]))\n T = T.union(addEdge3)\n\n \n return [G, T]\n\ndef findSeeds(T):\n T_rev = T.map(lambda x:(x[1], x[0]))\n A = T.keys().distinct().map(lambda x:(x,1))\n B = T_rev.keys().distinct().map(lambda x:(x,1))\n return A.leftOuterJoin(B).filter(lambda x: not x[1][1]).map(lambda x:x[0])\n\ndef Cracker_pyspark(G):\n n = 0\n T = sc.parallelize([])\n while G.take(1):\n n += 1\n print(n)\n H = Min_Selection_Step(G)\n G, T = Pruning_Step(H, T)\n \n return T\n\ndef Seed_Propragation(T, seed): \n seed = seed.map(lambda x: (x, x)) \n T_seed = sc.parallelize([(-1, (None, -1))]) \n \n while T_seed.map(lambda x: (x[1])).lookup(None):\n T_seed = seed.rightOuterJoin(T)\n seed = T_seed.map(lambda x: (x[1][1], x[1][0])).union(seed)\n \n return T_seed",
"_____no_output_____"
]
],
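The Min-Selection step in the cell above is easier to follow on a toy undirected graph without Spark. Here is a dictionary-based sketch of the same logic (pure Python, no RDDs): every node u computes v_min(u) = min(NN(u) | {u}) and proposes it to each node in that inclusive neighbourhood; proposals are then regrouped per receiving node.

```python
def min_selection_step(G):
    """G maps each node to its neighbour set; returns H, the regrouped proposals."""
    H = {}
    for u, nbrs in G.items():
        v_min = min(nbrs | {u})          # minimum of the inclusive neighbourhood
        for v in nbrs | {u}:
            H.setdefault(v, set()).add(v_min)
    return H

# Path graph 1-2-3: node 2 forwards the global minimum (1) to node 3.
H = min_selection_step({1: {2}, 2: {1, 3}, 3: {2}})
```

The Spark version expresses the same regrouping with `cogroup`/`flatMap`/`groupByKey`; here the `setdefault` loop plays the role of `groupByKey`.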
[
[
"## simulated_blockmodel_graph_100_nodes",
"_____no_output_____"
]
],
[
[
"os.path.getsize('graph_datasets/simulated_blockmodel_graph_100_nodes.tsv')",
"_____no_output_____"
],
[
"G_100 = Graph() \nfor line in open('graph_datasets/simulated_blockmodel_graph_100_nodes.tsv',\"r\"):\n node_1, node_2, c = line.strip().split(\"\\t\")\n G_100.addEdge(int(node_1), int(node_2))\n\nT_100 = Cracker_python(G_100)\nT_100.seed",
"_____no_output_____"
],
[
"G_100_spark_raw = sc.textFile('graph_datasets/simulated_blockmodel_graph_100_nodes.tsv')\nG_100_spark = G_100_spark_raw.map(lambda x: x.split('\\t')).map(lambda x: (int(x[0]),int(x[1]))).flatMap(lambda x: [x, (x[1], x[0])]).groupByKey().map(lambda x: (x[0], set(x[1])))\nG_100_spark.lookup(100)",
"_____no_output_____"
],
[
"T_100_spark = Cracker_pyspark(G_100_spark)\nfindSeeds(T_100_spark).collect()",
"1\n2\n"
]
],
[
[
"## facebook/107.edges",
"_____no_output_____"
]
],
[
[
"os.path.getsize('graph_datasets/facebook/107.edges')",
"_____no_output_____"
],
[
"G_fb = Graph()\nfor line in open('graph_datasets/facebook/107.edges',\"r\"): \n node_1, node_2 = line.strip().split(\" \")\n G_fb.addEdge(int(node_1), int(node_2))\n \nT_fb = Cracker_python(G_fb)\nT_fb.seed",
"_____no_output_____"
],
[
"G_fb_spark_raw = sc.textFile('graph_datasets/facebook/107.edges')\nG_fb_spark = G_100_spark_raw.map(lambda x: x.split(\" \")).map(lambda x: (int(x[0]),int(x[1]))).flatMap(lambda x: [x, (x[1], x[0])]).groupByKey().map(lambda x: (x[0], set(x[1])))\n#G_fb_spark.lookup(960)",
"_____no_output_____"
],
[
"T_fb_spark = Cracker_pyspark(G_fb_spark)\nfindSeeds(T_fb_spark).collect()",
"1\n2\n3\n"
]
],
[
[
"## soc-sign-bitcoinalpha.csv",
"_____no_output_____"
]
],
[
[
"os.path.getsize('graph_datasets/soc-sign-bitcoinalpha.csv')",
"_____no_output_____"
],
[
"G_btc = Graph()\nfor line in open('graph_datasets/soc-sign-bitcoinalpha.csv',\"r\"):\n nodes = line.strip().split(\",\")\n #print(edge)\n G_btc.addEdge(int(nodes[0]), int(nodes[1]))\n \nT_btc = Cracker_python(G_btc)\nT_btc.seed",
"_____no_output_____"
],
[
"G_btc_spark_raw = sc.textFile(\"graph_datasets/soc-sign-bitcoinalpha.csv\")\nG_btc_spark = G_btc_spark_raw.map(lambda x: x.split(\",\")).map(lambda x: (int(x[0]), int(x[1]))).flatMap(lambda x: [x, (x[1], x[0])]).groupByKey().map(lambda x: (x[0], set(x[1])))\nG_btc_spark.lookup(1000)",
"_____no_output_____"
],
[
"T_btc_spark = Cracker_pyspark(G_btc_spark)\nfindSeeds(T_btc_spark).collect()",
"1\n2\n3\n"
]
],
[
[
"## as-skitter.txt",
"_____no_output_____"
]
],
[
[
"os.path.getsize('graph_datasets/as-skitter.txt')\n#maximum recursion depth exceeded in comparison",
"_____no_output_____"
],
[
"#G_skitter = Graph()\n#for line in open('graph_datasets/as-skitter.txt',\"r\"):\n# nodes = line.strip().split(\"\\t\")\n #print(edge)\n# G_skitter.addEdge(int(nodes[0]), int(nodes[1]))\n \n#T_skitter = Cracker_python(G_skitter)\n#T_skitter.seed",
"_____no_output_____"
],
[
"G_skitter_spark_raw = sc.textFile(\"graph_datasets/as-skitter.txt\")\nG_skitter_spark = G_skitter_spark_raw.map(lambda x: x.split(\"\\t\")).map(lambda x: (int(x[0]), int(x[1]))).flatMap(lambda x: [x, (x[1], x[0])]).groupByKey().map(lambda x: (x[0], set(x[1])))\nG_skitter_spark.lookup(10000)",
"_____no_output_____"
],
[
"T_skitter_spark = Cracker_pyspark(G_skitter_spark)\nfindSeeds(T_skitter_spark).collect()",
"1\n2\n3\n4\n5\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecd6ec54e6dc570247fefc845c1c4221d78f9589 | 21,730 | ipynb | Jupyter Notebook | how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb | swaticolab/MachineLearningNotebooks | 3588eb9665cab356b19c20d712286b58928018cc | [
"MIT"
] | 1 | 2020-01-30T18:26:19.000Z | 2020-01-30T18:26:19.000Z | how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb | swaticolab/MachineLearningNotebooks | 3588eb9665cab356b19c20d712286b58928018cc | [
"MIT"
] | null | null | null | how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb | swaticolab/MachineLearningNotebooks | 3588eb9665cab356b19c20d712286b58928018cc | [
"MIT"
] | 1 | 2020-04-22T10:58:57.000Z | 2020-04-22T10:58:57.000Z | 37.33677 | 439 | 0.564565 | [
[
[
"",
"_____no_output_____"
],
[
"Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.",
"_____no_output_____"
],
[
"# Logging\n\n_**This notebook showcases various ways to use the Azure Machine Learning service run logging APIs, and view the results in the Azure portal.**_\n\n---\n---\n\n## Table of Contents\n\n1. [Introduction](#Introduction)\n1. [Setup](#Setup)\n 1. Validate Azure ML SDK installation\n 1. Initialize workspace\n 1. Set experiment\n1. [Logging](#Logging)\n 1. Starting a run\n 1. Viewing a run in the portal\n 1. Viewing the experiment in the portal\n 1. Logging metrics\n 1. Logging string metrics\n 1. Logging numeric metrics\n 1. Logging vectors\n 1. Logging tables\n 1. Uploading files\n1. [Analyzing results](#Analyzing-results)\n 1. Tagging a run\n1. [Next steps](#Next-steps)\n",
"_____no_output_____"
],
[
"## Introduction\n\nLogging metrics from runs in your experiments allows you to track results from one run to another, determine trends in your outputs, and understand how your inputs correspond to your model and script performance. The Azure Machine Learning service (AzureML) allows you to track various types of metrics, including images and arbitrary files, in order to understand, analyze, and audit your experimental progress. \n\nTypically you should log all parameters for your experiment and all of its numerical and string outputs. This will allow you to analyze the performance of your experiments across multiple runs, correlate inputs to outputs, and filter runs based on interesting criteria.\n\nThe experiment's Run History report page automatically creates a report that can be customized to show the KPI's, charts, and column sets that are interesting to you. \n\n|  |  |\n|:--:|:--:|\n| *Run Details* | *Run History* |\n\n---\n\n## Setup\n\nIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. Also make sure you have tqdm and matplotlib installed in the current kernel.\n\n```\n(myenv) $ conda install -y tqdm matplotlib\n```",
"_____no_output_____"
],
[
"### Validate Azure ML SDK installation and get version number for debugging purposes",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment, Workspace, Run\nimport azureml.core\nimport numpy as np\nfrom tqdm import tqdm\n\n# Check core SDK version number\n\nprint(\"This notebook was created using SDK version 1.0.85, you are currently running version\", azureml.core.VERSION)",
"_____no_output_____"
]
],
[
[
"### Initialize workspace\n\nInitialize a workspace object from persisted configuration.",
"_____no_output_____"
]
],
[
[
"ws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep='\\n')",
"_____no_output_____"
]
],
[
[
"### Set experiment\nCreate a new experiment (or get the one with the specified name). An *experiment* is a container for an arbitrary set of *runs*. ",
"_____no_output_____"
]
],
[
[
"experiment = Experiment(workspace=ws, name='logging-api-test')",
"_____no_output_____"
]
],
[
[
"---\n\n## Logging\nIn this section we will explore the various logging mechanisms.\n\n### Starting a run\n\nA *run* is a singular experimental trial. In this notebook we will create a run directly on the experiment by calling `run = experiment.start_logging()`. If you were experimenting by submitting a script file as an experiment using ``experiment.submit()``, you would call `run = Run.get_context()` in your script to access the run context of your code. In either case, the logging methods on the returned run object work the same.\n\nThis cell also stores the run id for use later in this notebook. The run_id is not necessary for logging.",
"_____no_output_____"
]
],
[
[
"# start logging for the run\nrun = experiment.start_logging()\n\n# access the run id for use later\nrun_id = run.id\n\n# change the scale factor on different runs to see how you can compare multiple runs\nscale_factor = 2\n\n# change the category on different runs to see how to organize data in reports\ncategory = 'Red'",
"_____no_output_____"
]
],
[
[
"#### Viewing a run in the Portal\nOnce a run is started you can see the run in the portal by simply typing ``run``. Clicking on the \"Link to Portal\" link will take you to the Run Details page that shows the metrics you have logged and other run properties. You can refresh this page after each logging statement to see the updated results.",
"_____no_output_____"
]
],
[
[
"run",
"_____no_output_____"
]
],
[
[
"### Viewing an experiment in the portal\nYou can also view an experiement similarly by typing `experiment`. The portal link will take you to the experiment's Run History page that shows all runs and allows you to analyze trends across multiple runs.",
"_____no_output_____"
]
],
[
[
"experiment",
"_____no_output_____"
]
],
[
[
"## Logging metrics\nMetrics are visible in the run details page in the AzureML portal and also can be analyzed in experiment reports. The run details page looks as below and contains tabs for Details, Outputs, Logs, and Snapshot. \n* The Details page displays attributes about the run, plus logged metrics and images. Metrics that are vectors appear as charts. \n* The Outputs page contains any files, such as models, that you uploaded from your run into the \"outputs\" directory in storage. If you place files in the \"outputs\" directory locally, the files are automatically uploaded on your behalf when the run is completed.\n* The Logs page allows you to view any log files created by your run. Logging runs created in notebooks typically do not generate log files.\n* The Snapshot page contains a snapshot of the directory specified in the ``start_logging`` statement, plus the notebook at the time of the ``start_logging`` call. This snapshot and notebook can be downloaded from the Run Details page to continue or reproduce an experiment.\n\n### Logging string metrics\nThe following cell logs a string metric. A string metric is simply a string value associated with a name. String metrics are useful for labelling runs and organizing your data. Typically you should log all string parameters as metrics for later analysis - even information such as paths can help to understand how individual experiments perform differently.\n\nString metrics can be used in the following ways:\n* Plot in histograms\n* Group by indicators for numerical plots\n* Filter runs\n\nString metrics appear in the **Tracked Metrics** section of the Run Details page and can be added as a column in Run History reports.",
"_____no_output_____"
]
],
[
[
"# log a string metric\nrun.log(name='Category', value=category)",
"_____no_output_____"
]
],
[
[
"### Logging numerical metrics\nThe following cell logs some numerical metrics. Numerical metrics can include metrics such as AUC or MSE. You should log any parameter or significant output measure in order to understand trends across multiple experiments. Numerical metrics appear in the **Tracked Metrics** section of the Run Details page, and can be used in charts or KPI's in experiment Run History reports.",
"_____no_output_____"
]
],
[
[
"# log numerical values\nrun.log(name=\"scale factor\", value = scale_factor)\nrun.log(name='Magic Number', value=42 * scale_factor)",
"_____no_output_____"
]
],
[
[
"### Logging vectors\nVectors are good for recording information such as loss curves. You can log a vector by creating a list of numbers, calling ``log_list()`` and supplying a name and the list, or by repeatedly logging a value using the same name.\n\nVectors are presented in Run Details as a chart, and are directly comparable in experiment reports when placed in a chart. \n\n**Note:** vectors logged into the run are expected to be relatively small. Logging very large vectors into Azure ML can result in reduced performance. If you need to store large amounts of data associated with the run, you can write the data to file that will be uploaded.",
"_____no_output_____"
]
],
[
[
"fibonacci_values = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]\nscaled_values = (i * scale_factor for i in fibonacci_values)\n\n# Log a list of values. Note this will generate a single-variable line chart.\nrun.log_list(name='Fibonacci', value=scaled_values)\n\nfor i in tqdm(range(-10, 10)):\n # log a metric value repeatedly, this will generate a single-variable line chart.\n run.log(name='Sigmoid', value=1 / (1 + np.exp(-i)))\n ",
"_____no_output_____"
]
],
[
[
"### Logging tables\nTables are good for recording related sets of information such as accuracy tables, confusion matrices, etc. \nYou can log a table in two ways:\n* Create a dictionary of lists where each list represents a column in the table and call ``log_table()``\n* Repeatedly call ``log_row()`` providing the same table name with a consistent set of named args as the column values\n\nTables are presented in Run Details as a chart using the first two columns of the table \n\n**Note:** tables logged into the run are expected to be relatively small. Logging very large tables into Azure ML can result in reduced performance. If you need to store large amounts of data associated with the run, you can write the data to file that will be uploaded.",
"_____no_output_____"
]
],
[
[
"# create a dictionary to hold a table of values\nsines = {}\nsines['angle'] = []\nsines['sine'] = []\n\nfor i in tqdm(range(-10, 10)):\n angle = i / 2.0 * scale_factor\n \n # log a 2 (or more) values as a metric repeatedly. This will generate a 2-variable line chart if you have 2 numerical columns.\n run.log_row(name='Cosine Wave', angle=angle, cos=np.cos(angle))\n \n sines['angle'].append(angle)\n sines['sine'].append(np.sin(angle))\n\n# log a dictionary as a table, this will generate a 2-variable chart if you have 2 numerical columns\nrun.log_table(name='Sine Wave', value=sines)",
"_____no_output_____"
]
],
[
[
"### Logging images\nYou can directly log _matplotlib_ plots and arbitrary images to your run record. This code logs a _matplotlib_ pyplot object. Images show up in the run details page in the Azure ML Portal.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\n# Create a plot\nimport matplotlib.pyplot as plt\nangle = np.linspace(-3, 3, 50) * scale_factor\nplt.plot(angle,np.tanh(angle), label='tanh')\nplt.legend(fontsize=12)\nplt.title('Hyperbolic Tangent', fontsize=16)\nplt.grid(True)\n\n# Log the plot to the run. To log an arbitrary image, use the form run.log_image(name, path='./image_path.png')\nrun.log_image(name='Hyperbolic Tangent', plot=plt)",
"_____no_output_____"
]
],
[
[
"### Uploading files\n\nFiles can also be uploaded explicitly and stored as artifacts along with the run record. These files are also visible in the *Outputs* tab of the Run Details page.\n",
"_____no_output_____"
]
],
[
[
"file_name = 'outputs/myfile.txt'\n\nwith open(file_name, \"w\") as f:\n f.write('This is an output file that will be uploaded.\\n')\n\n# Upload the file explicitly into artifacts \nrun.upload_file(name = file_name, path_or_stream = file_name)",
"_____no_output_____"
]
],
[
[
"### Completing the run\n\nCalling `run.complete()` marks the run as completed and triggers the output file collection. If for any reason you need to indicate the run failed or simply need to cancel the run you can call `run.fail()` or `run.cancel()`.",
"_____no_output_____"
]
],
[
[
"run.complete()",
"_____no_output_____"
]
],
[
[
"---\n\n## Analyzing results",
"_____no_output_____"
],
[
"You can refresh the run in the Azure portal to see all of your results. In many cases you will want to analyze runs that were performed previously to inspect the contents or compare results. Runs can be fetched from their parent Experiment object using the ``Run()`` constructor or the ``experiment.get_runs()`` method. ",
"_____no_output_____"
]
],
[
[
"fetched_run = Run(experiment, run_id)\nfetched_run",
"_____no_output_____"
]
],
[
[
"Call ``run.get_metrics()`` to retrieve all the metrics from a run.",
"_____no_output_____"
]
],
[
[
"fetched_run.get_metrics()",
"_____no_output_____"
]
],
[
[
"Call ``run.get_metrics(name = <metric name>)`` to retrieve a metric value by name. Retrieving a single metric can be faster, especially if the run contains many metrics.",
"_____no_output_____"
]
],
[
[
"fetched_run.get_metrics(name = \"scale factor\")",
"_____no_output_____"
]
],
[
[
"See the files uploaded for this run by calling ``run.get_file_names()``",
"_____no_output_____"
]
],
[
[
"fetched_run.get_file_names()",
"_____no_output_____"
]
],
[
[
"Once you know the file names in a run, you can download the files using the ``run.download_file()`` method",
"_____no_output_____"
]
],
[
[
"import os\nos.makedirs('files', exist_ok=True)\n\nfor f in run.get_file_names():\n dest = os.path.join('files', f.split('/')[-1])\n print('Downloading file {} to {}...'.format(f, dest))\n fetched_run.download_file(f, dest) ",
"_____no_output_____"
]
],
[
[
"### Tagging a run\nOften when you analyze the results of a run, you may need to tag that run with important personal or external information. You can add a tag to a run using the ``run.tag()`` method. AzureML supports valueless and valued tags.",
"_____no_output_____"
]
],
[
[
"fetched_run.tag(\"My Favorite Run\")\nfetched_run.tag(\"Competition Rank\", 1)\n\nfetched_run.get_tags()",
"_____no_output_____"
]
],
[
[
"## Next steps\nTo experiment more with logging and to understand how metrics can be visualized, go back to the *Start a run* section, try changing the category and scale_factor values and going through the notebook several times. Play with the KPI, charting, and column selection options on the experiment's Run History reports page to see how the various metrics can be combined and visualized.\n\nAfter learning about all of the logging options, go to the [train on remote vm](..\\train-on-remote-vm\\train-on-remote-vm.ipynb) notebook and experiment with logging from remote compute contexts.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecd70af591aa6bba3d9338a5412cd551a6e3aa3b | 33,160 | ipynb | Jupyter Notebook | ScratchPad.ipynb | Gearlux/football-predictor | 28025c50f063dc9794a1190e1ba335ad33a18523 | [
"MIT"
] | 7 | 2019-06-13T17:37:07.000Z | 2020-09-27T19:32:28.000Z | ScratchPad.ipynb | Gearlux/football-predictor | 28025c50f063dc9794a1190e1ba335ad33a18523 | [
"MIT"
] | null | null | null | ScratchPad.ipynb | Gearlux/football-predictor | 28025c50f063dc9794a1190e1ba335ad33a18523 | [
"MIT"
] | 3 | 2019-10-10T14:42:00.000Z | 2021-05-28T17:38:26.000Z | 102.662539 | 14,312 | 0.859288 | [
[
[
"# For another notebook",
"_____no_output_____"
]
],
[
[
"# https://medium.com/@media_73863/machine-learning-for-sports-betting-not-a-basic-classification-problem-b42ae4900782\n# https://medium.com/vantageai/beating-the-bookies-with-machine-learning-7b429a0b5980\n\nfrom keras.layers import BatchNormalization, Dense, Input, Dropout\nfrom keras.models import Model\nfrom keras import backend as K\nimport pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom keras.callbacks import EarlyStopping, ModelCheckpoint\nimport matplotlib.pyplot as plt\n\ndef get_model(input_dim, output_dim, base=1000, multiplier=0.25, p=0.2):\n inputs = Input(shape=(input_dim,))\n l = BatchNormalization()(inputs)\n l = Dropout(p)(l)\n n = base\n l = Dense(n, activation='relu')(l)\n l = BatchNormalization()(l)\n l = Dropout(p)(l)\n n = int(n * multiplier)\n l = Dense(n, activation='relu')(l)\n l = BatchNormalization()(l)\n l = Dropout(p)(l)\n n = int(n * multiplier)\n l = Dense(n, activation='relu')(l)\n outputs = Dense(output_dim, activation='softmax')(l)\n model = Model(inputs=inputs, outputs=outputs)\n model.compile(optimizer='Nadam', loss=bet_loss)\n return model\n",
"Using TensorFlow backend.\n"
],
[
"## 2D binning of profits",
"_____no_output_____"
],
[
"def histo_profits(models, prob_range=np.arange(0, 0.15, 0.01), odd_range=np.arange(1, 3, 0.1)):\n # odd_range was used below but missing from the original signature; the default here is\n # an assumption chosen to match the 0.1-wide odds bins used in the inner loop.\n preds = np.concatenate(models[1])\n odds = pd.concat(models[2])\n \n probs = 1./ odds.abs() \n \n profits = np.zeros( (len(prob_range), len(odd_range), 3))\n for i, ii in zip(prob_range, range(len(prob_range))):\n pr = ((preds - probs) >= i) & ((preds-probs)<i+0.01)\n for th, tt in zip(odd_range, range(len(odd_range))):\n\n oa = odds.abs()\n tr = ( oa >= th) & (oa < th+0.1)\n\n sel = pr & tr\n\n profits[ii, tt, :] = (sel * (odds.clip(0,np.inf) - 1)).sum().values\n \n labels = 'HAD'\n r = pd.concat({ labels[i]: pd.DataFrame(profits[:,:,i], columns=odd_range, index=prob_range) for i in range(3)}, axis=0) \n r.index = r.index.rename(['Location','Probability'])\n \n return r",
"_____no_output_____"
],
[
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nplt.imshow(pr.loc['H'].values, cmap='PuOr')\nplt.colorbar()",
"_____no_output_____"
]
],
[
[
"## More loss functions",
"_____no_output_____"
]
],
[
[
"_EPSILON = 10e-8\n\ndef cat_loss(b_true, y_pred):\n prob_true = K.clip(b_true, 0., 1.)\n prob = K.clip(y_pred, _EPSILON, 1. - _EPSILON)\n res = K.sum(prob_true * -K.log(prob), axis=-1)\n return res\n\ndef weighted_cat_loss(b_true, y_pred):\n prob_true = K.clip(b_true, 0., np.inf)\n prob = K.clip(y_pred, _EPSILON, 1. - _EPSILON)\n res = K.sum(prob_true * -K.log(prob), axis=-1)\n return res\n\ndef bet_loss(b_true, y_pred):\n profit = K.clip(b_true, 0., np.inf) - 1\n prob = K.clip(y_pred, _EPSILON, 1. - _EPSILON)\n res2 = K.sum(profit * prob, axis=-1)\n return -res2\n",
"_____no_output_____"
],
[
"import math\nimport numpy as np\nfrom matplotlib import pylab as plt\n%matplotlib inline\nplt.plot(np.exp(-np.arange(0,5,0.01)))",
"_____no_output_____"
],
[
"plt.plot(-np.log(np.arange(0.01,1,0.01)) )\nplt.plot(-np.log(0.5 * np.arange(0.01,1,0.01)))",
"_____no_output_____"
]
],
[
[
"# Python version",
"_____no_output_____"
]
],
[
[
"from platform import python_version\nprint(python_version())",
"3.6.5\n"
],
[
"!where python",
"C:\\Users\\amjmr\\GitHub\\MakingMoneyML\\venv\\Scripts\\python.exe\nC:\\Python36\\python.exe\n"
]
],
[
[
"# Disable GPU",
"_____no_output_____"
]
],
[
[
"import os\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"-1\" \nimport tensorflow as tf",
"_____no_output_____"
],
[
"import torch\ntorch.cuda.is_available()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecd70d1e36e3c86258d05c17f806b6b0317d0c7c | 156,726 | ipynb | Jupyter Notebook | Notebooks/9_Perceptron_MLPN/N4_MLP_MNIST.ipynb | shouhaiel1/Machine_Learning_Labworks | 7eccfe371cba56880db2891b73e0238cd69498ed | [
"Apache-2.0"
] | 1 | 2022-02-11T10:41:41.000Z | 2022-02-11T10:41:41.000Z | Notebooks/9_Perceptron_MLPN/N4_MLP_MNIST.ipynb | shouhaiel1/Machine_Learning_Labworks | 7eccfe371cba56880db2891b73e0238cd69498ed | [
"Apache-2.0"
] | null | null | null | Notebooks/9_Perceptron_MLPN/N4_MLP_MNIST.ipynb | shouhaiel1/Machine_Learning_Labworks | 7eccfe371cba56880db2891b73e0238cd69498ed | [
"Apache-2.0"
] | null | null | null | 194.690683 | 44,536 | 0.878916 | [
[
[
"*Adapted from the keras example https://github.com/keras-team/keras/blob/master/examples/mnist_mlp.py*\n\n\nThis notebook can be run on mybinder: [](https://mybinder.org/v2/git/https%3A%2F%2Fgricad-gitlab.univ-grenoble-alpes.fr%2Fchatelaf%2Fml-sicom3a/master?urlpath=lab/tree/notebooks/X_deep_learning/)\n\nGiven the computational load, an efficient alternative is to use the UGA's jupyterhub service https://jupyterhub.u-ga.fr/ .\nIn this case, to install tensorflow 2.X, just type\n\n !pip install --user --upgrade tensorflow\n\nin a code cell, then restart the notebook (or just restart the kernel)",
"_____no_output_____"
],
[
"# Logistic regression and multiple layer perceptron to process MNIST dataset with Keras\n\nThe objective of this notebook is to code/train/test some neural net models with the [TensorFlow](https://www.tensorflow.org/) ML platform developped by Google using the [Keras API](https://keras.io/) (integrated in TensorFlow).\n\n\n\nMNIST is a simple computer vision dataset. It consists of images of handwritten digits. It also includes labels for each image, telling us which digit it is. In this lab session, we're going to train a model to look at images and predict what digits they are.\n\nThe cells below allow us to\n- [Load/Format/Display MNIST data](#I.-Load-and-format-MNIST-data)\n- [Code a Multinomial Logistic Regression](#II.-Multinomial-Logistic-Regression)\n- [Code a Multi-Layers Network to reach at least 98% of correct classification](#III.-Multiple-layer-network)",
"_____no_output_____"
],
[
"## I. Load and format MNIST data\n\nFirst, start here with these lines of code which will download and read in the data automatically:",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow.keras.datasets import mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()",
"_____no_output_____"
]
],
[
[
"The MNIST data is split into two parts: 60,000 data points of training data, 10,000 points of test data. It's essential in machine learning that we have separate data which we don't learn from so that we can make sure that what we've learned actually generalizes!\n\nEvery MNIST data point has two parts: an image of a handwritten digit and a corresponding label. \"x\" corresponds to images and \"y\" to labels. Both the training set and test set contain images and their corresponding labels.\n\nFirst, we will visualize some of the data:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\n\nplt.figure()\nfor i in range(10):\n plt.subplot(2, 5, i + 1)\n plt.axis('off')\n index = np.where(y_train == i)[0][0]\n plt.imshow(x_train[index,:,:], cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title('Training: %i' % y_train[index])\nplt.show()",
"_____no_output_____"
]
],
[
[
"Each image is 28 pixels by 28 pixels. In this lab session, we flatten this array into a vector of 28x28 = 784 numbers.\n\n### Write the code to flatten the data:",
"_____no_output_____"
]
],
[
[
"x_train = x_train.reshape(60000, 784)\nx_test = x_test.reshape(10000, 784)",
"_____no_output_____"
]
],
[
[
"### Write the code to normalize the pixel intensities of the images to between 0 and 1:",
"_____no_output_____"
]
],
[
[
"x_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255",
"_____no_output_____"
]
],
[
[
"Each image in MNIST has a corresponding label, a number between 0 and 9 representing the digit drawn in the image.\n\nIn this lab session, we're going to code our labels as \"one-hot vectors\".\n\n**What is one-hot encoding?**\n\nA one-hot vector is a vector which is 0 in most dimensions, and 1 in a single dimension. In this case, the $n^{th}$ digit will be represented as a vector which is 1 in the $n^{th}$ dimension. For example, the label for digit '3' would be $[0,0,0,1,0,0,0,0,0,0]$. \nOne advantage of \"one-hot encoding\" is that it corresponds to the true probability distribution of a sample (the probability is one for the true class, zero otherwise). This makes it easier to compare with the predicted probability distribution.\n\n\n### 3) Convert the labels to one-hot vectors (using the function \"to_categorical\" available in Keras):",
"_____no_output_____"
]
],
[
[
"num_classes = 10\nz_train = tf.keras.utils.to_categorical(y_train, num_classes)\nz_test = tf.keras.utils.to_categorical(y_test, num_classes)",
"_____no_output_____"
]
],
[
[
"## II. Multinomial Logistic Regression\n\nEvery image in MNIST is of a handwritten digit between zero and nine. So there are only ten possible things that a given image can be. For a given image, we want to compute the probabilities for it being each digit. In this part, we will use a softmax regression model:\n\n$$y = \\mathrm{softmax}(Wx+b)$$\n\nwhere $\\mathrm{softmax}(\\cdot)$ is the [normalized exponential function](https://en.wikipedia.org/wiki/Softmax_function) which is used for multinomial logistic regression.",
"_____no_output_____"
],
[
"### Define a Keras network architecture for Multinomial Logistic Regression\nBased on the [Sequential API](https://keras.io/models/sequential/) to stack the layers in the tf.keras model, \nwe use a [Dense layer](https://keras.io/api/layers/core_layers/dense/) to define the softmax regression.\nThe sequential Keras model is a linear stack of layers. You can create a sequential model by passing a list of layer instances to the constructor. Among the layer instances, you can use:\n - regular densely-connected layer using [Dense](https://keras.io/api/layers/core_layers/dense/) \n - Regularization layer such that [L1/L2 penalty for input activities](https://keras.io/api/layers/regularization_layers/activity_regularization/) or [Dropout](https://keras.io/api/layers/regularization_layers/dropout/)\n\n**Why Dropout regularization?**:\nHere we are going to use a\n[Dropout layer](https://keras.io/api/layers/regularization_layers/dropout/) as regularization one. Unlike L1 and L2 regularization, dropout doesn't rely on penalizing the cost function. Instead, in dropout we modify the network itself by randomly setting some input units to 0 with at each step during training time. The rationale behind dropout is that only the weights of the most useful units will be significantly updated during the training (while the less significant will stay close to zero). This forces the network to learn more robust features that are useful in conjunction with many different random subsets of the other units, which helps to prevent overfitting.\n\nOne advantage of dropout is that it is very computationally cheap, and quite versatile for neural nets (it can be directly applied to different networks architecture, distributed or not), while it can be a very effective regularizer.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\n# remove deprecated warning for tensorflow 2.0\nimport logging\nlogging.getLogger('tensorflow').disabled = True\n\n\n# y = softmax (Wx+b)\nmodel = Sequential()\nmodel.add(Dense(num_classes, activation='softmax', input_shape=(784,)))",
"_____no_output_____"
]
],
[
[
"### How many trainable parameters are there?",
"_____no_output_____"
]
],
[
[
"model.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense (Dense) (None, 10) 7850 \n=================================================================\nTotal params: 7,850\nTrainable params: 7,850\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"### Specify the loss function and the optimization algorithm (+ the metrics)\n\nWe now have to define the loss function. We try to minimize that error, and the smaller the error margin, the better our model is. Here the categorical \"cross-entropy\" is used as the loss of the model. It's defined as:\n\n$$H_{y}(z) = -\\sum_{i=1}^K y_i \\log(z_i)$$\n\nwhere $z$ is our predicted probability distribution (thus $z_i$ is the predicted probability for the $i$th class), and $y$ is the true distribution (the one-hot vector with the digit labels, thus $y_i=1$ if the sample belongs to the $i$th class, $0$ otherwise). This loss is averaged over all samples in the training set.\n\n\n\n**Why using cross-entropy?** \n\nThis loss function is commonly used to train neural networks:\n- other usual loss derived from the global accuracy like the miss-classification rate are stepwise (non-smooth) functions harder to optimize than cross-entropy,\n- cross-entropy is based on the predicted class probabilities, which is more informative than the only predicted labels\n- compared to the quadratic loss, where we square the difference between the true probabily (the one-hot-vector) and the predicted one, cross-entropy loss function for a classification problem often leads to faster training as well as improved generalization.\n\nNote also than the cross-entropy function leads to the same loss function than the logistic regression one (derived from the likelihood of the logistic model), called the \"_log-loss_\".\n\n\n**Which optimization algorithm:** now we need to specify the optimization algorithm that will be used to minimized the loss function. 
There exists many variants of stochastic gradient descent especially to select and adapt the learning rate (the list of the optimizers available in keras is [here](https://keras.io/api/optimizers/)).\nHere, we will use [RMSprop](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp).\n\n**Which metric:** \nWe will also specify a metric to follow the convergence of the training step. Here `accuracy`, which is the rate of correct classification (one minus the miss-classification rate)\n\nWe can now config the Keras model with the loss function, the optimization algorithm and the metric with the `compile()`method on the model (see https://keras.io/api/models/model_training_apis/#compile-method): ",
"_____no_output_____"
]
],
[
[
"optimizer= tf.keras.optimizers.RMSprop()\n# config the model with losses, optimize and metrics \nmodel.compile(loss='categorical_crossentropy',\n optimizer=optimizer,\n metrics=['accuracy'])\n",
"_____no_output_____"
]
],
[
[
"### Write the code to train the model:\nThis is simply done with the `fit()` method (see \nhttps://keras.io/api/models/model_training_apis/#fit-method)",
"_____no_output_____"
]
],
[
[
"# Stochastic gradient descent parameters\nbatch_size = 128 # number of samples to average to compute the gradient\nepochs = 20 # number of sweep over the whole training set\n\nhistory = model.fit(x_train, z_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n validation_data=(x_test, z_test))",
"Epoch 1/20\n469/469 [==============================] - 2s 5ms/step - loss: 0.5968 - accuracy: 0.8496 - val_loss: 0.3414 - val_accuracy: 0.9081\nEpoch 2/20\n469/469 [==============================] - 4s 8ms/step - loss: 0.3306 - accuracy: 0.9082 - val_loss: 0.3007 - val_accuracy: 0.9156\nEpoch 3/20\n469/469 [==============================] - 4s 8ms/step - loss: 0.3017 - accuracy: 0.9163 - val_loss: 0.2875 - val_accuracy: 0.9178\nEpoch 4/20\n469/469 [==============================] - 5s 11ms/step - loss: 0.2885 - accuracy: 0.9191 - val_loss: 0.2780 - val_accuracy: 0.9226\nEpoch 5/20\n469/469 [==============================] - 3s 6ms/step - loss: 0.2804 - accuracy: 0.9216 - val_loss: 0.2752 - val_accuracy: 0.9244\nEpoch 6/20\n469/469 [==============================] - 6s 13ms/step - loss: 0.2748 - accuracy: 0.9230 - val_loss: 0.2726 - val_accuracy: 0.9241\nEpoch 7/20\n469/469 [==============================] - 7s 15ms/step - loss: 0.2707 - accuracy: 0.9251 - val_loss: 0.2709 - val_accuracy: 0.9247\nEpoch 8/20\n469/469 [==============================] - 3s 6ms/step - loss: 0.2675 - accuracy: 0.9262 - val_loss: 0.2692 - val_accuracy: 0.9257\nEpoch 9/20\n469/469 [==============================] - 3s 7ms/step - loss: 0.2649 - accuracy: 0.9270 - val_loss: 0.2705 - val_accuracy: 0.9253\nEpoch 10/20\n469/469 [==============================] - 5s 11ms/step - loss: 0.2630 - accuracy: 0.9280 - val_loss: 0.2692 - val_accuracy: 0.9252\nEpoch 11/20\n469/469 [==============================] - 7s 14ms/step - loss: 0.2613 - accuracy: 0.9284 - val_loss: 0.2680 - val_accuracy: 0.9265\nEpoch 12/20\n469/469 [==============================] - 4s 9ms/step - loss: 0.2596 - accuracy: 0.9291 - val_loss: 0.2670 - val_accuracy: 0.9267\nEpoch 13/20\n469/469 [==============================] - 3s 6ms/step - loss: 0.2581 - accuracy: 0.9297 - val_loss: 0.2698 - val_accuracy: 0.9269\nEpoch 14/20\n469/469 [==============================] - 5s 10ms/step - loss: 0.2574 - accuracy: 0.9300 - val_loss: 
0.2669 - val_accuracy: 0.9275\nEpoch 15/20\n469/469 [==============================] - 4s 9ms/step - loss: 0.2564 - accuracy: 0.9306 - val_loss: 0.2694 - val_accuracy: 0.9267\nEpoch 16/20\n469/469 [==============================] - 2s 5ms/step - loss: 0.2554 - accuracy: 0.9308 - val_loss: 0.2683 - val_accuracy: 0.9258\nEpoch 17/20\n469/469 [==============================] - 1s 3ms/step - loss: 0.2545 - accuracy: 0.9312 - val_loss: 0.2691 - val_accuracy: 0.9275\nEpoch 18/20\n469/469 [==============================] - 2s 3ms/step - loss: 0.2538 - accuracy: 0.9314 - val_loss: 0.2702 - val_accuracy: 0.9276\nEpoch 19/20\n469/469 [==============================] - 1s 3ms/step - loss: 0.2531 - accuracy: 0.9319 - val_loss: 0.2690 - val_accuracy: 0.9274\nEpoch 20/20\n469/469 [==============================] - 2s 3ms/step - loss: 0.2524 - accuracy: 0.9320 - val_loss: 0.2701 - val_accuracy: 0.9272\n"
]
],
[
[
"To study the convergence of the training step, we will plot the evolution of the accuracy for both training and testing data with respect to the epochs. The code to do this is provided below.\n\n### Study the convergence figure and the evaluation score.",
"_____no_output_____"
]
],
[
[
"# list all data in history\nprint(history.history.keys())\n\n#Visualize history (loss vs epochs)\nplt.figure()\nplt.plot(1-np.asarray(history.history['accuracy']))\nplt.plot(1-np.asarray(history.history['val_accuracy']))\nplt.title('model accuracy')\nplt.ylabel('misclassification rate') \nplt.xlabel('epochs')\nplt.legend(['train','test'], loc='upper left')\nplt.grid('on')\nplt.show()\n\nscore = model.evaluate(x_test, z_test, verbose=0)\nprint('Test cross-entropy loss:', score[0])\nprint('Test misclassification rate:', 1-score[1])\nprint('Test accuracy:', score[1])\n",
"dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])\n"
]
],
[
[
"Is that good? Compare your results with the score of the current best models: https://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results",
"_____no_output_____"
],
[
"### Visualize the incorrect predictions:",
"_____no_output_____"
]
],
[
[
"z_pred = model.predict(x_test)\n\n\ny_pred = np.argmax(z_pred,axis=1)\nindex = np.where(y_pred - y_test != 0)\nim_test = x_test.reshape(10000, 28, 28)\n\nplt.figure()\nfor i in range(10):\n plt.subplot(2, 5, i + 1)\n plt.axis('off')\n plt.imshow(im_test[index[0][i],:,:], cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title(str(y_test[index[0][i]])+','+str(y_pred[index[0][i]]) )\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Study the confusion matrix:",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import confusion_matrix\nimport itertools\n\nclass_names= ['0','1','2','3','4','5','6','7','8','9']\n\ndef plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n #print(cm)\n\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title, fontsize=14)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.3f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label', fontsize=14)\n plt.xlabel('Predicted label', fontsize=14)\n\n# Compute confusion matrix\ncnf_matrix = confusion_matrix(y_test, y_pred)\nnp.set_printoptions(precision=2)\n\n# Plot normalized confusion matrix\nplt.figure(figsize=(7, 7))\nplot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,\n title='Normalized confusion matrix')\n\nplt.show()",
"Normalized confusion matrix\n"
]
],
[
[
"## III. Multiple layer network\n\nYou have tested above a linear classification rule with the logistic regression model. To see if you can improve the results, we are going to consider a nonlinear classification algorithm by adding some hidden layers. \n\nNeural nets use nonlinear activation functions for the units, i.e. the neurons, of the hidden layers to define the output of these units given the input. Logistic function (sometimes called *sigmoid*) or other sigmoïdal functions as the hyperbolic tangent 'tanh' are typically used since the 90's and 2000's. However these functions appear to be badly adapted to some deep neural net architectures like convolutional networks.\nAdoption of the rectified linear unit (ReLU) activation function in the 2010's may be considered one of the few milestones that now permit the routine development of very deep neural networks, for several reasons:\n\n1. **Counter the _vanishing gradient problem_**\n\n A general problem with the logistic or hyperbolic tangent 'tanh' functions is that they saturate. For instance the logistic snap to 1.0 for large positive input and snap to -1 for large negative input, and is only really sensitive to changes when the input is near 0. \n\n Layers deep in large networks using these nonlinear activation functions fail to receive useful gradient information. Error is back propagated through the network from the outputs and used to update the weights. The amount of error decreases dramatically with each additional layer through which it is propagated, given the derivative of the chosen activation function. This is called the *vanishing gradient problem* and prevents deep networks from learning effectively. \n \n Because ReLU is piecewise linear, it preserves many of the properties that make linear models easy to optimize with gradient-based methods. In particular, it is linear for large positive values which prevent from the *vanishing gradient problem*. 
It also preserves many of the properties that make linear models generalize well. Yet, it is a nonlinear function as negative values are always output as zero: this allows one to obtain more flexible prediction rules than just linear ones. This yields universal function approximators.\n\n2. **Make computation cheaper!**\n\n ReLU is very cheap to compute: no need for any multiplication or call of complex function (and the gradient is super simple: 1 for positive value and 0 for negative value). This is useful when the number of units is tens of millions or more in deep architectures!\n\n\n### Build a multiple layer dense network with ReLU activation for the hidden layer",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import Dropout\n\nprint(x_test.shape)\n\nmodel = Sequential()\nmodel.add(Dense(512, activation='relu', input_shape=(784,)))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(512, activation='relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(num_classes, activation='softmax'))\n\nmodel.summary()\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer=optimizer,\n metrics=['accuracy'])\n\nhistory = model.fit(x_train, z_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n validation_data=(x_test, z_test))\nscore = model.evaluate(x_test, z_test, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n\n#Visualize history (loss vs epochs)\nplt.figure()\nplt.plot(1-np.asarray(history.history['accuracy']))\nplt.plot(1-np.asarray(history.history['val_accuracy']))\nplt.title('model accuracy')\nplt.ylabel('misclassification') \nplt.xlabel('epochs')\nplt.legend(['train','test'], loc='upper left')\nplt.grid('on')\nplt.show()\n\nscore = model.evaluate(x_test, z_test, verbose=0)\nprint('Test cross-entropy loss:', score[0])\nprint('Test misclassification rate:', 1-score[1])\nprint('Test accuracy:', score[1])",
"(10000, 784)\nModel: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_1 (Dense) (None, 512) 401920 \n_________________________________________________________________\ndropout (Dropout) (None, 512) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 512) 262656 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 512) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 10) 5130 \n=================================================================\nTotal params: 669,706\nTrainable params: 669,706\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/20\n469/469 [==============================] - 9s 18ms/step - loss: 0.2444 - accuracy: 0.9246 - val_loss: 0.1120 - val_accuracy: 0.9653\nEpoch 2/20\n469/469 [==============================] - 9s 19ms/step - loss: 0.1022 - accuracy: 0.9694 - val_loss: 0.0778 - val_accuracy: 0.9771\nEpoch 3/20\n469/469 [==============================] - 7s 15ms/step - loss: 0.0745 - accuracy: 0.9775 - val_loss: 0.0845 - val_accuracy: 0.9749\nEpoch 4/20\n469/469 [==============================] - 7s 15ms/step - loss: 0.0606 - accuracy: 0.9816 - val_loss: 0.0851 - val_accuracy: 0.9771\nEpoch 5/20\n469/469 [==============================] - 8s 17ms/step - loss: 0.0493 - accuracy: 0.9845 - val_loss: 0.0754 - val_accuracy: 0.9804\nEpoch 6/20\n469/469 [==============================] - 8s 16ms/step - loss: 0.0436 - accuracy: 0.9871 - val_loss: 0.0971 - val_accuracy: 0.9768\nEpoch 7/20\n469/469 [==============================] - 8s 17ms/step - loss: 0.0371 - accuracy: 0.9891 - val_loss: 0.0898 - val_accuracy: 0.9791\nEpoch 8/20\n469/469 [==============================] - 7s 16ms/step - loss: 0.0354 - accuracy: 0.9900 - val_loss: 
0.0957 - val_accuracy: 0.9800\nEpoch 9/20\n469/469 [==============================] - 8s 17ms/step - loss: 0.0300 - accuracy: 0.9910 - val_loss: 0.0836 - val_accuracy: 0.9835\nEpoch 10/20\n469/469 [==============================] - 7s 16ms/step - loss: 0.0282 - accuracy: 0.9917 - val_loss: 0.0955 - val_accuracy: 0.9833\nEpoch 11/20\n469/469 [==============================] - 8s 16ms/step - loss: 0.0246 - accuracy: 0.9923 - val_loss: 0.1124 - val_accuracy: 0.9794\nEpoch 12/20\n469/469 [==============================] - 8s 17ms/step - loss: 0.0243 - accuracy: 0.9931 - val_loss: 0.0992 - val_accuracy: 0.9830\nEpoch 13/20\n469/469 [==============================] - 8s 17ms/step - loss: 0.0209 - accuracy: 0.9937 - val_loss: 0.1126 - val_accuracy: 0.9813\nEpoch 14/20\n469/469 [==============================] - 8s 16ms/step - loss: 0.0194 - accuracy: 0.9942 - val_loss: 0.1072 - val_accuracy: 0.9821\nEpoch 15/20\n469/469 [==============================] - 7s 16ms/step - loss: 0.0217 - accuracy: 0.9942 - val_loss: 0.1170 - val_accuracy: 0.9804\nEpoch 16/20\n469/469 [==============================] - 8s 17ms/step - loss: 0.0208 - accuracy: 0.9945 - val_loss: 0.1132 - val_accuracy: 0.9825\nEpoch 17/20\n469/469 [==============================] - 8s 16ms/step - loss: 0.0189 - accuracy: 0.9950 - val_loss: 0.1216 - val_accuracy: 0.9831\nEpoch 18/20\n469/469 [==============================] - 8s 18ms/step - loss: 0.0174 - accuracy: 0.9955 - val_loss: 0.1170 - val_accuracy: 0.9827\nEpoch 19/20\n469/469 [==============================] - 8s 18ms/step - loss: 0.0180 - accuracy: 0.9953 - val_loss: 0.1174 - val_accuracy: 0.9841\nEpoch 20/20\n469/469 [==============================] - 8s 17ms/step - loss: 0.0179 - accuracy: 0.9952 - val_loss: 0.1230 - val_accuracy: 0.9841\nTest loss: 0.12300080806016922\nTest accuracy: 0.9840999841690063\n"
]
],
[
[
"We now reach 98.4% correct classification!\n\n### Visualize the incorrect predictions:",
"_____no_output_____"
]
],
[
[
"z_pred = model.predict(x_test)\n\n\ny_pred = np.argmax(z_pred,axis=1)\nindex = np.where(y_pred - y_test != 0)\nim_test = x_test.reshape(10000, 28, 28)\n\nplt.figure()\nfor i in range(10):\n plt.subplot(2, 5, i + 1)\n plt.axis('off')\n plt.imshow(im_test[index[0][i],:,:], cmap=plt.cm.gray_r, interpolation='nearest')\n plt.title(str(y_test[index[0][i]])+','+str(y_pred[index[0][i]]) )\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd72531f0b194d200f75f517bee0484aecf406e | 172,123 | ipynb | Jupyter Notebook | tutorials/Certification_Trainings/Public/databricks_notebooks/6. Using T5 for 17 different NLP tasks .ipynb | hatrungduc/spark-nlp-workshop | 4a4ec0195d1d3d847261df9ef2df7aa5f95bbaec | [
"Apache-2.0"
] | 687 | 2018-09-07T03:45:39.000Z | 2022-03-20T17:11:20.000Z | tutorials/Certification_Trainings/Public/databricks_notebooks/6. Using T5 for 17 different NLP tasks .ipynb | hatrungduc/spark-nlp-workshop | 4a4ec0195d1d3d847261df9ef2df7aa5f95bbaec | [
"Apache-2.0"
] | 89 | 2018-09-18T02:04:42.000Z | 2022-02-24T18:22:27.000Z | tutorials/Certification_Trainings/Public/databricks_notebooks/6. Using T5 for 17 different NLP tasks .ipynb | hatrungduc/spark-nlp-workshop | 4a4ec0195d1d3d847261df9ef2df7aa5f95bbaec | [
"Apache-2.0"
] | 407 | 2018-09-07T03:45:44.000Z | 2022-03-20T05:12:25.000Z | 86,061.5 | 172,122 | 0.504628 | [
[
[
"",
"_____no_output_____"
],
[
"# **6. Using T5 for 17 different NLP tasks**",
"_____no_output_____"
],
[
"---\n\n\n\n# Overview of every task available with T5\n[The T5 model](https://arxiv.org/pdf/1910.10683.pdf) is trained on various datasets for 17 different tasks which fall into 8 categories.\n\n\n\n1. Text Summarization\n2. Question Answering\n3. Translation\n4. Sentiment analysis\n5. Natural Language Inference\n6. Coreference Resolution\n7. Sentence Completion\n8. Word Sense Disambiguation\n\n# Every T5 Task with explanation:\n|Task Name | Explanation | \n|----------|--------------|\n|[1.CoLA](https://nyu-mll.github.io/CoLA/) | Classify if a sentence is grammatically correct|\n|[2.RTE](https://dl.acm.org/doi/10.1007/11736790_9) | Classify whether a statement can be deduced from a sentence|\n|[3.MNLI](https://arxiv.org/abs/1704.05426) | Classify for a hypothesis and premise whether they contradict or imply each other or neither of both (3 class).|\n|[4.MRPC](https://www.aclweb.org/anthology/I05-5002.pdf) | Classify whether a pair of sentences is a re-phrasing of each other (semantically equivalent)|\n|[5.QNLI](https://arxiv.org/pdf/1804.07461.pdf) | Classify whether the answer to a question can be deduced from an answer candidate.|\n|[6.QQP](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | Classify whether a pair of questions is a re-phrasing of each other (semantically equivalent)|\n|[7.SST2](https://www.aclweb.org/anthology/D13-1170.pdf) | Classify the sentiment of a sentence as positive or negative|\n|[8.STSB](https://www.aclweb.org/anthology/S17-2001/) | Rate the semantic similarity of two sentences on a scale from 0 to 5 (21 classes)|\n|[9.CB](https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/601) | Classify for a premise and a hypothesis whether they contradict each other or not (binary).|\n|[10.COPA](https://www.aaai.org/ocs/index.php/SSS/SSS11/paper/view/2418/0) | Classify for a question, premise, and 2 choices which of the two choices is correct 
(binary).|\n|[11.MultiRc](https://www.aclweb.org/anthology/N18-1023.pdf) | Classify for a question, a paragraph of text, and an answer candidate, if the answer is correct (binary).|\n|[12.WiC](https://arxiv.org/abs/1808.09121) | Classify for a pair of sentences and an ambiguous word if the word has the same meaning in both sentences.|\n|[13.WSC/DPR](https://www.aaai.org/ocs/index.php/KR/KR12/paper/view/4492/0) | Predict for an ambiguous pronoun in a sentence what it is referring to. |\n|[14.Summarization](https://arxiv.org/abs/1506.03340) | Summarize text into a shorter representation.|\n|[15.SQuAD](https://arxiv.org/abs/1606.05250) | Answer a question for a given context.|\n|[16.WMT1.](https://arxiv.org/abs/1706.03762) | Translate English to German|\n|[17.WMT2.](https://arxiv.org/abs/1706.03762) | Translate English to French|\n|[18.WMT3.](https://arxiv.org/abs/1706.03762) | Translate English to Romanian|\n\n\n# Information about pre-processing for T5 tasks\n\n## Tasks that require no pre-processing\nThe following tasks work fine without any additional pre-processing, only setting the `task parameter` on the T5 model is required:\n\n- CoLA\n- Summarization\n- SST2\n- WMT1.\n- WMT2.\n- WMT3.\n\n\n## Tasks that require pre-processing with 1 tag\nThe following tasks require `exactly 1 additional tag` added by manual pre-processing.\nSet the `task parameter` and then join the sentences on the `tag` for these tasks.\n\n- RTE\n- MNLI\n- MRPC\n- QNLI\n- QQP\n- STSB\n- CB\n\n\n## Tasks that require pre-processing with multiple tags\nThe following tasks require `more than 1 additional tag` added manually by pre-processing.\nSet the `task parameter` and then prefix sentences with their corresponding tags and join them for these tasks:\n\n- COPA\n- MultiRc\n- WiC\n\n\n## WSC/DPR is a special case that requires `*` surrounding\nThe task WSC/DPR requires highlighting a pronoun with `*` and configuring a `task parameter`.\n<br><br><br><br><br>\n\n\n\n\n\nThe following 
sections describe each task in detail, with an example and also a pre-processed example.\n\n***NOTE:*** Linebreaks are added to the `pre-processed examples` in the following section. The T5 model also works with linebreaks, but it can hinder the performance and it is not recommended to intentionally add them.\n\n\n\n# Task 1 [CoLA - Binary Grammatical Sentence acceptability classification](https://nyu-mll.github.io/CoLA/)\nJudges if a sentence is grammatically acceptable. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n\n\n## Example\n\n|sentence | prediction|\n|------------|------------|\n| Anna and Mike is going skiing and they is liked is | unacceptable | \n| Anna and Mike like to dance | acceptable | \n\n\n## How to configure T5 task for CoLA\n`.setTask('cola sentence:')` prefix.\n\n### Example pre-processed input for T5 CoLA sentence acceptability judgement:\n```\ncola \nsentence: Anna and Mike is going skiing and they is liked is\n```\n\n# Task 2 [RTE - Natural language inference Deduction Classification](https://dl.acm.org/doi/10.1007/11736790_9)\nThe RTE task is defined as recognizing, given two text fragments, whether the meaning of one text can be inferred (entailed) from the other or not. \nClassification of sentence pairs as entailment and not_entailment \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf) and [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n\n\n## Example\n\n|sentence 1 | sentence 2 | prediction|\n|------------|------------|----------|\nKessler ’s team conducted 60,643 interviews with adults in 14 countries. | Kessler ’s team interviewed more than 60,000 adults in 14 countries | entailment\nPeter loves New York, it is his favorite city| Peter loves new York. | entailment\nRecent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. 
|Johnny is a millionare | entailment|\nRecent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. |Johnny is a poor man | not_entailment | \n| It was raining in England for the last 4 weeks | England was very dry yesterday | not_entailment|\n\n## How to configure T5 task for RTE\n`.setTask('rte sentence1:')` and prefix second sentence with `sentence2:`\n\n\n### Example pre-processed input for T5 RTE - 2 Class Natural language inference\n```\nrte \nsentence1: Recent report say Peter makes he alot of money, he earned 10 million USD each year for the last 5 years. \nsentence2: Peter is a millionare.\n```\n\n### References\n- https://arxiv.org/abs/2010.03061\n\n\n# Task 3 [MNLI - 3 Class Natural Language Inference 3-class contradiction classification](https://arxiv.org/abs/1704.05426)\nClassification of sentence pairs with the labels `entailment`, `contradiction`, and `neutral`. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n\nThis classifier predicts for two sentences :\n- Whether the first sentence logically and semantically follows from the second sentence as entailment\n- Whether the first sentence is a contradiction to the second sentence as a contradiction\n- Whether the first sentence does not entail or contradict the second sentence as neutral\n\n| Hypothesis | Premise | prediction|\n|------------|------------|----------|\n| Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. | Johnny is a poor man. | contradiction|\n|It rained in England the last 4 weeks.| It was snowing in New York last week| neutral | \n\n## How to configure T5 task for MNLI\n`.setTask('mnli hypothesis:')` and prefix second sentence with `premise:`\n\n### Example pre-processed input for T5 MNLI - 3 Class Natural Language Inference\n\n```\nmnli \nhypothesis: At 8:34, the Boston Center controller received a third, transmission from American 11. 
\npremise: The Boston Center controller got a third transmission from American 11.\n```\n\n*ISSUE:* Can only get neutral and contradiction as prediction results for tested samples but no entailment predictions.\n\n\n# Task 4 [MRPC - Binary Paraphrasing/ sentence similarity classification ](https://www.aclweb.org/anthology/I05-5002.pdf)\nDetect whether one sentence is a re-phrasing or similar to another sentence. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n\n| Sentence1 | Sentence2 | prediction|\n|------------|------------|----------|\n|We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , \" Rumsfeld said .| Rather , the US acted because the administration saw \" existing evidence in a new light , through the prism of our experience on September 11 \" . | equivalent | \n| I like to eat peanutbutter for breakfast| I like to play football | not_equivalent | \n\n\n## How to configure T5 task for MRPC\n`.setTask('mrpc sentence1:')` and prefix second sentence with `sentence2:`\n\n### Example pre-processed input for T5 MRPC - Binary Paraphrasing/ sentence similarity\n\n```\nmrpc \nsentence1: We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , \" Rumsfeld said . \nsentence2: Rather , the US acted because the administration saw \" existing evidence in a new light , through the prism of our experience on September 11\"\n```\n\n\n# Task 5 [QNLI - Natural Language Inference question answered classification](https://arxiv.org/pdf/1804.07461.pdf)\nClassify whether a question is answered by a sentence (`entailment`). 
\nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n| Question | Answer | prediction|\n|------------|------------|----------|\n|Where did Jebe die?| Ghenkis Khan recalled Subtai back to Mongolia soon afterward, and Jebe died on the road back to Samarkand | entailment|\n|What does Steve like to eat? | Steve watches TV all day | not_entailment|\n\n## How to configure T5 task for QNLI - Natural Language Inference question answered classification\n`.setTask('qnli question:')` and prefix the sentence with `sentence:`\n\n### Example pre-processed input for T5 QNLI - Natural Language Inference question answered classification\n\n```\nqnli\nquestion: Where did Jebe die? \nsentence: Ghenkis Khan recalled Subtai back to Mongolia soon afterwards, and Jebe died on the road back to Samarkand\n```\n\n\n# Task 6 [QQP - Binary Question Similarity/Paraphrasing](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs)\nBased on a quora dataset, determine whether a pair of questions are semantically equivalent. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n| Question1 | Question2 | prediction|\n|------------|------------|----------|\n|What attributes would have made you highly desirable in ancient Rome? | How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER? | not_duplicate | \n|What was it like in Ancient rome? | What was Ancient rome like?| duplicate | \n\n\n## How to configure T5 task for QQP\n`.setTask('qqp question1:')` and prefix second question with `question2:`\n\n\n### Example pre-processed input for T5 QQP - Binary Question Similarity/Paraphrasing\n\n```\nqqp \nquestion1: What attributes would have made you highly desirable in ancient Rome? \nquestion2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?\n```\n\n# Task 7 [SST2 - Binary Sentiment Analysis](https://www.aclweb.org/anthology/D13-1170.pdf)\nBinary sentiment classification. 
\nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n| Sentence1 | Prediction | \n|-----------|-----------|\n|it confirms fincher ’s status as a film maker who artfully bends technical know-how to the service of psychological insight | positive| \n|I really hated that movie | negative | \n\n\n## How to configure T5 task for SST2\n`.setTask('sst2 sentence: ')`\n\n### Example pre-processed input for T5 SST2 - Binary Sentiment Analysis\n\n```\nsst2\nsentence: I hated that movie\n```\n\n\n\n# Task 8 [STSB - Regressive semantic sentence similarity](https://www.aclweb.org/anthology/S17-2001/)\nMeasures how similar two sentences are on a scale from 0 to 5, with 21 classes representing a regressive label. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n\n| Question1 | Question2 | prediction|\n|------------|------------|----------|\n|What attributes would have made you highly desirable in ancient Rome? | How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER? | 0 | \n|What was it like in Ancient rome? | What was Ancient rome like?| 5.0 | \n|What was live like as a King in Ancient Rome?? | What is it like to live in Rome? | 3.2 | \n\n## How to configure T5 task for STSB\n`.setTask('stsb sentence1:')` and prefix second sentence with `sentence2:`\n\n\n### Example pre-processed input for T5 STSB - Regressive semantic sentence similarity\n\n```\nstsb\nsentence1: What attributes would have made you highly desirable in ancient Rome? \nsentence2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?\n```\n\n\n# Task 9 [CB - Natural language inference contradiction classification](https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/601)\nClassify whether a Premise contradicts a Hypothesis. 
\nPredicts entailment, neutral and contradiction. \nThis is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n\n| Hypothesis | Premise | Prediction | \n|--------|-------------|----------|\n|Valence was helping | Valence the void-brain, Valence the virtuous valet. Why couldn’t the figger choose his own portion of titanic anatomy to shaft? Did he think he was helping| Contradiction|\n\n\n## How to configure T5 task for CB\n`.setTask('cb hypothesis:')` and prefix premise with `premise:`\n\n### Example pre-processed input for T5 CB - Natural language inference contradiction classification\n\n```\ncb \nhypothesis: Valence was helping \npremise: Valence the void-brain, Valence the virtuous valet. Why couldn’t the figger choose his own portion of titanic anatomy to shaft? Did he think he was helping\n```\n\n\n# Task 10 [COPA - Sentence Completion/ Binary choice selection](https://www.aaai.org/ocs/index.php/SSS/SSS11/paper/view/2418/0)\nThe Choice of Plausible Alternatives (COPA) task by Roemmele et al. (2011) evaluates causal reasoning between events, which requires commonsense knowledge about what usually takes place in the world. Each example provides a premise and either asks for the correct cause or effect from two choices, thus testing either `backward` or `forward causal reasoning`. COPA data, which consists of 1,000 examples total, can be downloaded at https://people.ict.usc.e\n\nThis is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\nThis classifier selects, from a choice of `2 options`, which one is correct based on a `premise`.\n\n\n## forward causal reasoning\nPremise: The man lost his balance on the ladder. \nquestion: What happened as a result? \nAlternative 1: He fell off the ladder. \nAlternative 2: He climbed up the ladder.\n## backwards causal reasoning\nPremise: The man fell unconscious. What was the cause of this? 
\nAlternative 1: The assailant struck the man in the head. \nAlternative 2: The assailant took the man’s wallet.\n\n\n| Question | Premise | Choice 1 | Choice 2 | Prediction | \n|--------|-------------|----------|---------|-------------|\n|effect | Political Violence broke out in the nation. | many citizens relocated to the capitol. | Many citizens took refuge in other territories | Choice 1 | \n|cause| The man fell unconscious | The assailant struck the man in the head | The assailant took the man's wallet. | Choice 1 | \n\n\n## How to configure T5 task for COPA\n`.setTask('copa choice1:')`, prefix choice2 with `choice2:` , prefix premise with `premise:` and prefix the question with `question:`\n\n### Example pre-processed input for T5 COPA - Sentence Completion/ Binary choice selection\n\n```\ncopa \nchoice1: He fell off the ladder \nchoice2: He climbed up the ladder \npremise: The man lost his balance on the ladder \nquestion: effect\n```\n\n\n\n\n# Task 11 [MultiRc - Question Answering](https://www.aclweb.org/anthology/N18-1023.pdf)\nEvaluates an `answer` for a `question` as `true` or `false` based on an input `paragraph`.\nThe T5 model predicts for a `question` and a `paragraph` of `sentences` whether an `answer` is true or not,\nbased on the semantic contents of the paragraph. \nThis is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n\n\n**Exceeds human performance by a large margin**\n\n\n\n| Question | Answer | Prediction | paragraph|\n|--------------------------------------------------------------|---------------------------------------------------------------------|------------|----------|\n| Why was Joey surprised the morning he woke up for breakfast? | There was only pie to eat, rather than traditional breakfast foods | True |Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. 
One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. He couldn’t find anything to eat except for pie! Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed., |\n| Why was Joey surprised the morning he woke up for breakfast? | There was a T-Rex in his garden | False |Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. He couldn’t find anything to eat except for pie! Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. 
The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. |\n\n## How to configure T5 task for MultiRC\n`.setTask('multirc questions:')` followed by `answer:` prefix for the answer to evaluate, followed by `paragraph:` and then a series of sentences, where each sentence is prefixed with `Sent n:`.\n\n\n### Example pre-processed input for T5 MultiRc task:\n```\nmultirc questions: Why was Joey surprised the morning he woke up for breakfast? \nanswer: There was a T-REX in his garden. \nparagraph: \nSent 1: Once upon a time, there was a squirrel named Joey. \nSent 2: Joey loved to go outside and play with his cousin Jimmy. \nSent 3: Joey and Jimmy played silly games together, and were always laughing. \nSent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. \nSent 5: Joey woke up early in the morning to eat some food before they left. \nSent 6: He couldn’t find anything to eat except for pie! \nSent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. \nSent 8: After he ate, he and Jimmy went to the pond. \nSent 9: On their way there they saw their friend Jack Rabbit. \nSent 10: They dove into the water and swam for several hours. \nSent 11: The sun was out, but the breeze was cold. \nSent 12: Joey and Jimmy got out of the water and started walking home. \nSent 13: Their fur was wet, and the breeze chilled them. \nSent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. \nSent 15: Joey put on a blue shirt with red and green dots. \nSent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. \n```\n\n\n# Task 12 [WiC - Word sense disambiguation](https://arxiv.org/abs/1808.09121)\nDecide for `two sentences` with a shared `ambiguous word` whether the target word has the same `semantic meaning` in both sentences. 
\nThis is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n\n|Predicted | ambiguous word| Sentence 1 | Sentence 2 | \n|----------|-----------------|------------|------------|\n| False | kill | He totally killed that rock show! | The airplane crash killed his family | \n| True | window | The expanded window will give us time to catch the thieves.|You have a two-hour window for turning in your homework. | \n| False | window | He jumped out of the window.|You have a two-hour window for turning in your homework. | \n\n\n## How to configure T5 task for WiC\n`.setTask('wic pos:')` followed by `sentence1:` prefix for the first sentence, followed by `sentence2:` prefix for the second sentence.\n\n\n### Example pre-processed input for T5 WiC task:\n\n```\nwic pos:\nsentence1: The expanded window will give us time to catch the thieves.\nsentence2: You have a two-hour window of turning in your homework.\nword : window\n```\n\n\n\n# Task 13 [WSC and DPR - Coreference resolution/ Pronoun ambiguity resolver ](https://www.aaai.org/ocs/index.php/KR/KR12/paper/view/4492/0)\nPredict for an `ambiguous pronoun` which `noun` it refers to. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf) and [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n|Prediction| Text | \n|----------|-------|\n| stable | The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy. 
| \n\n\n\n## How to configure T5 task for WSC/DPR\n`.setTask('wsc:')` and surround the pronoun with asterisk symbols.\n\n\n### Example pre-processed input for T5 WSC/DPR task:\nThe `ambiguous pronoun` should be surrounded with `*` symbols.\n\n***Note*** Read [Appendix A.](https://arxiv.org/pdf/1910.10683.pdf#page=64&zoom=100,84,360) for more info\n```\nwsc: \nThe stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy.\n```\n\n\n# Task 14 [Text summarization](https://arxiv.org/abs/1506.03340)\n`Summarizes` a paragraph into a shorter version with the same semantic meaning.\n\n| Predicted summary| Text | \n|------------------|-------|\n| manchester united face newcastle in the premier league on wednesday . louis van gaal's side currently sit two points clear of liverpool in fourth . the belgian duo took to the dance floor on monday night with some friends . | the belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth . | \n\n\n## How to configure T5 task for summarization\n`.setTask('summarize:')`\n\n\n### Example pre-processed input for T5 summarization task:\nThis task requires no pre-processing, setting the task to `summarize` is sufficient.\n```\nthe belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . 
louis van gaal’s side currently sit two points clear of liverpool in fourth .\n```\n\n# Task 15 [SQuAD - Context based question answering](https://arxiv.org/abs/1606.05250)\nPredict an `answer` to a `question` based on input `context`.\n\n|Predicted Answer | Question | Context | \n|-----------------|----------|------|\n|carbon monoxide| What does increased oxygen concentrations in the patient’s lungs displace? | Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment.\n|pie| What did Joey eat for breakfast?| Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. 
When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. | \n\n## How to configure T5 task parameter for Squad Context based question answering\n`.setTask('question:')` and prefix the context which can be made up of multiple sentences with `context:`\n\n### Example pre-processed input for T5 Squad Context based question answering:\n```\nquestion: What does increased oxygen concentrations in the patient’s lungs displace? \ncontext: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. 
Increasing the pressure of O 2 as soon as possible is part of the treatment.\n```\n\n\n\n# Task 16 [WMT1 Translate English to German](https://arxiv.org/abs/1706.03762)\nFor translation tasks use the `marian` model.\n## How to configure T5 task parameter for WMT Translate English to German\n`.setTask('translate English to German:')`\n\n# Task 17 [WMT2 Translate English to French](https://arxiv.org/abs/1706.03762)\nFor translation tasks use the `marian` model.\n## How to configure T5 task parameter for WMT Translate English to French\n`.setTask('translate English to French:')`\n\n\n# Task 18 [WMT3 - Translate English to Romanian](https://arxiv.org/abs/1706.03762)\nFor translation tasks use the `marian` model.\n## How to configure T5 task parameter for English to Romanian\n`.setTask('translate English to Romanian:')`",
"_____no_output_____"
],
[
"# Spark-NLP Example for every Task:",
"_____no_output_____"
]
],
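The MultiRc pre-processing described in the overview (numbering each paragraph sentence with a `Sent n:` prefix and joining it with the question and candidate answer) can be sketched in plain Python. This `build_multirc_input` helper is our own convenience, not a Spark NLP API; the resulting string is what goes into the `text` column when the model is configured with `t5.setTask('multirc questions:')`.

```python
def build_multirc_input(question: str, answer: str, sentences: list) -> str:
    """Build the MultiRc input string: question, candidate answer, and a
    paragraph whose sentences are numbered with 'Sent n:' prefixes."""
    numbered = " ".join(
        f"Sent {i}: {s}" for i, s in enumerate(sentences, start=1)
    )
    return f"{question} answer: {answer} paragraph: {numbered}"

text = build_multirc_input(
    "Why was Joey surprised the morning he woke up for breakfast?",
    "There was a T-REX in his garden.",
    ["Once upon a time, there was a squirrel named Joey.",
     "He couldn't find anything to eat except for pie!"],
)
# text can then be placed in the "text" column of the DataFrame
# that is fed through the pipeline below.
```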
[
[
"import sparknlp\n\nspark = sparknlp.start()\n\nprint(\"Spark NLP version:\", sparknlp.version())\n\nprint(\"Apache Spark version:\", spark.version)",
"_____no_output_____"
]
],
[
[
"## Define Document assembler and T5 model for running the tasks",
"_____no_output_____"
]
],
[
[
"import pandas as pd\npd.set_option('display.width', 100000)\npd.set_option('max_colwidth', 8000)\npd.set_option('display.max_rows', 500)\npd.set_option('display.max_columns', 500)",
"_____no_output_____"
],
[
"from sparknlp.annotator import *\nimport sparknlp\nfrom sparknlp.common import *\nfrom sparknlp.base import *\nfrom pyspark.ml import Pipeline\n\n\ndocumentAssembler = DocumentAssembler() \\\n .setInputCol(\"text\") \\\n .setOutputCol(\"document\") \n\n# Can take in document or sentence columns\nt5 = T5Transformer.pretrained(name='t5_base',lang='en')\\\n .setInputCols('document')\\\n .setOutputCol(\"T5\")\n",
"_____no_output_____"
]
],
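Several of the sentence-pair tasks below (RTE, MRPC, QQP, STSB, ...) require joining the two inputs into one string with a tag such as `sentence2:` before the text reaches the `DocumentAssembler`. As a minimal plain-Python sketch (the `join_with_tag` helper is our own convenience, not part of Spark NLP), that pre-processing can be factored out like this:

```python
# Sketch of the 1-tag pre-processing used by sentence-pair tasks.
# Spark NLP itself only needs the final joined string in the "text" column.

def join_with_tag(sentence1: str, sentence2: str, tag: str = "sentence2:") -> str:
    """Join a sentence pair into the single-string form the T5 tasks expect."""
    return f"{sentence1} {tag} {sentence2}"

pairs = [
    ("Peter loves New York, it is his favorite city", "Peter loves new York."),
    ("It was raining in England for the last 4 weeks", "England was very dry yesterday"),
]
rows = [[join_with_tag(s1, s2)] for s1, s2 in pairs]
# rows can now be turned into a DataFrame for the pipeline:
# df = spark.createDataFrame(rows).toDF("text")
```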
[
[
"# Task 1 [CoLA - Binary Grammatical Sentence acceptability classification](https://nyu-mll.github.io/CoLA/)\nJudges if a sentence is grammatically acceptable. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n\n\n## Example\n\n|sentence | prediction|\n|------------|------------|\n| Anna and Mike is going skiing and they is liked is | unacceptable | \n| Anna and Mike like to dance | acceptable | \n\n## How to configure T5 task for CoLA\n`.setTask('cola sentence:')` prefix.\n\n### Example pre-processed input for T5 CoLA sentence acceptability judgement:\n```\ncola \nsentence: Anna and Mike is going skiing and they is liked is\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('cola sentence:')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data\nsentences = [['Anna and Mike is going skiing and they is liked is'],['Anna and Mike like to dance']]\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 2 [RTE - Natural language inference Deduction Classification](https://dl.acm.org/doi/10.1007/11736790_9)\nThe RTE task is defined as recognizing, given two text fragments, whether the meaning of one text can be inferred (entailed) from the other or not. \nClassification of sentence pairs as entailment and not_entailment \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf) and [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n\n\n## Example\n\n|sentence 1 | sentence 2 | prediction|\n|------------|------------|----------|\nKessler ’s team conducted 60,643 interviews with adults in 14 countries. | Kessler ’s team interviewed more than 60,000 adults in 14 countries | entailment\nPeter loves New York, it is his favorite city| Peter loves new York. | entailment\nRecent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. |Johnny is a millionare | entailment|\nRecent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. |Johnny is a poor man | not_entailment | \n| It was raining in England for the last 4 weeks | England was very dry yesterday | not_entailment|\n\n## How to configure T5 task for RTE\n`.setTask('rte sentence1:')` and prefix second sentence with `sentence2:`\n\n\n### Example pre-processed input for T5 RTE - 2 Class Natural language inference\n```\nrte \nsentence1: Recent report say Peter makes he alot of money, he earned 10 million USD each year for the last 5 years. \nsentence2: Peter is a millionare.\n```\n\n### References\n- https://arxiv.org/abs/2010.03061",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('rte sentence1:')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n ['Recent report say Peter makes he alot of money, he earned 10 million USD each year for the last 5 years. sentence2: Peter is a millionare'],\n ['Recent report say Peter makes he alot of money, he earned 10 million USD each year for the last 5 years. sentence2: Peter is a poor man']\n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 3 [MNLI - 3 Class Natural Language Inference 3-class contradiction classification](https://arxiv.org/abs/1704.05426)\nClassification of sentence pairs with the labels `entailment`, `contradiction`, and `neutral`. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n\nThis classifier predicts for two sentences :\n- Whether the first sentence logically and semantically follows from the second sentence as entailment\n- Whether the first sentence is a contradiction to the second sentence as a contradiction\n- Whether the first sentence does not entail or contradict the second sentence as neutral\n\n| Hypothesis | Premise | prediction|\n|------------|------------|----------|\n| Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years. | Johnny is a poor man. | contradiction|\n|It rained in England the last 4 weeks.| It was snowing in New York last week| neutral | \n\n## How to configure T5 task for MNLI\n`.setTask('mnli hypothesis:')` and prefix second sentence with `premise:`\n\n### Example pre-processed input for T5 MNLI - 3 Class Natural Language Inference\n\n```\nmnli \nhypothesis: At 8:34, the Boston Center controller received a third, transmission from American 11. \npremise: The Boston Center controller got a third transmission from American 11.\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('mnli ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n ''' hypothesis: At 8:34, the Boston Center controller received a third, transmission from American 11.\n premise: The Boston Center controller got a third transmission from American 11.\n '''\n ],\n [''' \n hypothesis: Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years.\n premise: Johnny is a poor man.\n ''']\n\n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show()#.toPandas().head(5) <-- for better vis of result data frame",
"_____no_output_____"
]
],
[
[
"# Task 4 [MRPC - Binary Paraphrasing/ sentence similarity classification ](https://www.aclweb.org/anthology/I05-5002.pdf)\nDetect whether one sentence is a re-phrasing or similar to another sentence \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n\n| Sentence1 | Sentence2 | prediction|\n|------------|------------|----------|\n|We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , \" Rumsfeld said .| Rather , the US acted because the administration saw \" existing evidence in a new light , through the prism of our experience on September 11 \" . | equivalent | \n| I like to eat peanutbutter for breakfast| I like to play football | not_equivalent | \n\n\n## How to configure T5 task for MRPC\n`.setTask('mrpc sentence1:)` and prefix second sentence with `sentence2:`\n\n### Example pre-processed input for T5 MRPC - Binary Paraphrasing/ sentence similarity\n\n```\nmrpc \nsentence1: We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , \" Rumsfeld said . \nsentence2: Rather , the US acted because the administration saw \" existing evidence in a new light , through the prism of our experience on September 11\",\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('mrpc ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n ''' sentence1: We acted because we saw the existing evidence in a new light , through the prism of our experience on 11 September , \" Rumsfeld said .\n sentence2: Rather , the US acted because the administration saw \" existing evidence in a new light , through the prism of our experience on September 11 \" \n '''\n ],\n [''' \n sentence1: I like to eat peanutbutter for breakfast\n sentence2: \tI like to play football.\n ''']\n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).toPandas()#show()",
"_____no_output_____"
]
],
[
[
"# Task 5 [QNLI - Natural Language Inference question answered classification](https://arxiv.org/pdf/1804.07461.pdf)\nClassify whether a question is answered by a sentence (`entailed`). \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n| Question | Answer | prediction|\n|------------|------------|----------|\n|Where did Jebe die?| Ghenkis Khan recalled Subtai back to Mongolia soon afterward, and Jebe died on the road back to Samarkand | entailment|\n|What does Steve like to eat? | Steve watches TV all day | not_netailment\n\n## How to configure T5 task for QNLI - Natural Language Inference question answered classification\n`.setTask('QNLI sentence1:)` and prefix question with `question:` sentence with `sentence:`:\n\n### Example pre-processed input for T5 QNLI - Natural Language Inference question answered classification\n\n```\nqnli\nquestion: Where did Jebe die? \nsentence: Ghenkis Khan recalled Subtai back to Mongolia soon afterwards, and Jebe died on the road back to Samarkand,\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('QNLI ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n ''' question: Where did Jebe die? \n sentence: Ghenkis Khan recalled Subtai back to Mongolia soon afterwards, and Jebe died on the road back to Samarkand,\n '''\n ],\n [''' \n question: What does Steve like to eat?\t\n sentence: \tSteve watches TV all day\n ''']\n\n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).toPandas()#.show()",
"_____no_output_____"
]
],
[
[
"# Task 6 [QQP - Binary Question Similarity/Paraphrasing](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs)\nBased on a quora dataset, determine whether a pair of questions are semantically equivalent. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n| Question1 | Question2 | prediction|\n|------------|------------|----------|\n|What attributes would have made you highly desirable in ancient Rome? | How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER? | not_duplicate | \n|What was it like in Ancient rome? | What was Ancient rome like?| duplicate | \n\n\n## How to configure T5 task for QQP\n.setTask('qqp question1:) and\nprefix second sentence with question2:\n\n\n### Example pre-processed input for T5 QQP - Binary Question Similarity/Paraphrasing\n\n```\nqqp \nquestion1: What attributes would have made you highly desirable in ancient Rome? \nquestion2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?',\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('qqp ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n ''' question1: What attributes would have made you highly desirable in ancient Rome? \n question2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?'\n '''\n ],\n [''' \n question1: What was it like in Ancient rome?\n question2: \tWhat was Ancient rome like?\n ''']\n\n ]\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).toPandas()#.show()",
"_____no_output_____"
]
],
[
[
"# Task 7 [SST2 - Binary Sentiment Analysis](https://www.aclweb.org/anthology/D13-1170.pdf)\nBinary sentiment classification. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n| Sentence1 | Prediction | \n|-----------|-----------|\n|it confirms fincher ’s status as a film maker who artfully bends technical know-how to the service of psychological insight | positive| \n|I really hated that movie | negative | \n\n\n## How to configure T5 task for SST2\n`.setTask('sst2 sentence: ')`\n\n### Example pre-processed input for T5 SST2 - Binary Sentiment Analysis\n\n```\nsst2\nsentence: I hated that movie\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('sst2 sentence: ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n ''' I really hated that movie'''],\n [''' it confirms fincher ’s status as a film maker who artfully bends technical know-how to the service of psychological insight''']]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).toPandas()#show()",
"_____no_output_____"
]
],
[
[
"# Task8 [STSB - Regressive semantic sentence similarity](https://www.aclweb.org/anthology/S17-2001/)\nMeasures how similar two sentences are on a scale from 0 to 5 with 21 classes representing a regressive label. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf).\n\n\n| Question1 | Question2 | prediction|\n|------------|------------|----------|\n|What attributes would have made you highly desirable in ancient Rome? | How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER? | 0 | \n|What was it like in Ancient rome? | What was Ancient rome like?| 5.0 | \n|What was live like as a King in Ancient Rome?? | What is it like to live in Rome? | 3.2 | \n\n## How to configure T5 task for STSB\n`.setTask('stsb sentence1:)` and prefix second sentence with `sentence2:`\n\n\n### Example pre-processed input for T5 STSB - Regressive semantic sentence similarity\n\n```\nstsb\nsentence1: What attributes would have made you highly desirable in ancient Rome? \nsentence2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?',\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('stsb ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n ''' sentence1: What attributes would have made you highly desirable in ancient Rome? \n sentence2: How I GET OPPERTINUTY TO JOIN IT COMPANY AS A FRESHER?'\n '''\n ],\n [''' \n sentence1: What was it like in Ancient rome?\n sentence2: \tWhat was Ancient rome like?\n '''],\n [''' \n sentence1: What was live like as a King in Ancient Rome??\n sentence2: \tWhat was Ancient rome like?\n ''']\n ]\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).toPandas()#show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 9[ CB - Natural language inference contradiction classification](https://ojs.ub.uni-konstanz.de/sub/index.php/sub/article/view/601)\nClassify whether a Premise contradicts a Hypothesis. \nPredicts entailment, neutral and contradiction \nThis is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n\n| Hypothesis | Premise | Prediction | \n|--------|-------------|----------|\n|Valence was helping | Valence the void-brain, Valence the virtuous valet. Why couldn’t the figger choose his own portion of titanic anatomy to shaft? Did he think he was helping'| Contradiction|\n\n\n## How to configure T5 task for CB\n`.setTask('cb hypothesis:)` and prefix premise with `premise:`\n\n### Example pre-processed input for T5 CB - Natural language inference contradiction classification\n\n```\ncb \nhypothesis: Valence was helping \npremise: Valence the void-brain, Valence the virtuous valet. Why couldn’t the figger choose his own portion of titanic anatomy to shaft? Did he think he was helping,\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('cb ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n '''\n hypothesis: Recent report say Johnny makes he alot of money, he earned 10 million USD each year for the last 5 years.\n premise: Johnny is a poor man.\n ''']\n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).toPandas()#show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 10 [COPA - Sentence Completion/ Binary choice selection](https://www.aaai.org/ocs/index.php/SSS/SSS11/paper/view/2418/0)\nThe Choice of Plausible Alternatives (COPA) task by Roemmele et al. (2011) evaluates\ncausal reasoning between events, which requires commonsense knowledge about what usually takes\nplace in the world. Each example provides a premise and either asks for the correct cause or effect\nfrom two choices, thus testing either ``backward`` or `forward causal reasoning`. COPA data, which\nconsists of 1,000 examples total, can be downloaded at https://people.ict.usc.e\n\nThis is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\nThis classifier selects from a choice of `2 options` which one the correct is based on a `premise`.\n\n\n## forward causal reasoning\nPremise: The man lost his balance on the ladder. \nquestion: What happened as a result? \nAlternative 1: He fell off the ladder. \nAlternative 2: He climbed up the ladder.\n## backwards causal reasoning\nPremise: The man fell unconscious. What was the cause\nof this? \nAlternative 1: The assailant struck the man in the head. \nAlternative 2: The assailant took the man’s wallet.\n\n\n| Question | Premise | Choice 1 | Choice 2 | Prediction | \n|--------|-------------|----------|---------|-------------|\n|effect | Politcal Violence broke out in the nation. | many citizens relocated to the capitol. | Many citizens took refuge in other territories | Choice 1 | \n|correct| The men fell unconscious | The assailant struckl the man in the head | he assailant s took the man's wallet. 
| choice1 | \n\n\n## How to configure T5 task for COPA\n`.setTask('copa choice1:)`, prefix choice2 with `choice2:` , prefix premise with `premise:` and prefix the question with `question`\n\n### Example pre-processed input for T5 COPA - Sentence Completion/ Binary choice selection\n\n```\ncopa \nchoice1: He fell off the ladder \nchoice2: He climbed up the lader \npremise: The man lost his balance on the ladder \nquestion: effect\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('copa ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n '''\n choice1: He fell off the ladder \n choice2: He climbed up the lader \n premise: The man lost his balance on the ladder \n question: effect\n\n ''']\n ]\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).toPandas()#show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 11 [MultiRc - Question Answering](https://www.aclweb.org/anthology/N18-1023.pdf)\nEvaluates an `answer` for a `question` as `true` or `false` based on an input `paragraph`\nThe T5 model predicts for a `question` and a `paragraph` of `sentences` wether an `answer` is true or not,\nbased on the semantic contents of the paragraph. \nThis is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n\n\n**Exceeds human performance by a large margin**\n\n\n\n| Question | Answer | Prediction | paragraph|\n|--------------------------------------------------------------|---------------------------------------------------------------------|------------|----------|\n| Why was Joey surprised the morning he woke up for breakfast? | There was only pie to eat, rather than traditional breakfast foods | True |Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. He couldn’t find anything to eat except for pie! Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed., |\n| Why was Joey surprised the morning he woke up for breakfast? | There was a T-Rex in his garden | False |Once upon a time, there was a squirrel named Joey. 
Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. He couldn’t find anything to eat except for pie! Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed., |\n\n## How to configure T5 task for MultiRC\n`.setTask('multirc questions:)` followed by `answer:` prefix for the answer to evaluate, followed by `paragraph:` and then a series of sentences, where each sentence is prefixed with `Sent n:`prefix second sentence with sentence2:\n\n\n### Example pre-processed input for T5 MultiRc task:\n```\nmultirc questions: Why was Joey surprised the morning he woke up for breakfast? \nanswer: There was a T-REX in his garden. \nparagraph: \nSent 1: Once upon a time, there was a squirrel named Joey. \nSent 2: Joey loved to go outside and play with his cousin Jimmy. \nSent 3: Joey and Jimmy played silly games together, and were always laughing. \nSent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. \nSent 5: Joey woke up early in the morning to eat some food before they left. \nSent 6: He couldn’t find anything to eat except for pie! \nSent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. \nSent 8: After he ate, he and Jimmy went to the pond. 
\nSent 9: On their way there they saw their friend Jack Rabbit. \nSent 10: They dove into the water and swam for several hours. \nSent 11: The sun was out, but the breeze was cold. \nSent 12: Joey and Jimmy got out of the water and started walking home. \nSent 13: Their fur was wet, and the breeze chilled them. \nSent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. \nSent 15: Joey put on a blue shirt with red and green dots. \nSent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. \n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('multirc ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n '''\nquestions: Why was Joey surprised the morning he woke up for breakfast? \nanswer: There was a T-REX in his garden. \nparagraph: \nSent 1: Once upon a time, there was a squirrel named Joey. \nSent 2: Joey loved to go outside and play with his cousin Jimmy. \nSent 3: Joey and Jimmy played silly games together, and were always laughing. \nSent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. \nSent 5: Joey woke up early in the morning to eat some food before they left. \nSent 6: He couldn’t find anything to eat except for pie! \nSent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. \nSent 8: After he ate, he and Jimmy went to the pond. \nSent 9: On their way there they saw their friend Jack Rabbit. \nSent 10: They dove into the water and swam for several hours. \nSent 11: The sun was out, but the breeze was cold. \nSent 12: Joey and Jimmy got out of the water and started walking home. \nSent 13: Their fur was wet, and the breeze chilled them. \nSent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. \nSent 15: Joey put on a blue shirt with red and green dots. \nSent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. \n\n '''],\n \n [\n '''\nquestions: Why was Joey surprised the morning he woke up for breakfast? \nanswer: There was only pie for breakfast. \nparagraph: \nSent 1: Once upon a time, there was a squirrel named Joey. \nSent 2: Joey loved to go outside and play with his cousin Jimmy. \nSent 3: Joey and Jimmy played silly games together, and were always laughing. \nSent 4: One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. 
\nSent 5: Joey woke up early in the morning to eat some food before they left. \nSent 6: He couldn’t find anything to eat except for pie! \nSent 7: Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. \nSent 8: After he ate, he and Jimmy went to the pond. \nSent 9: On their way there they saw their friend Jack Rabbit. \nSent 10: They dove into the water and swam for several hours. \nSent 11: The sun was out, but the breeze was cold. \nSent 12: Joey and Jimmy got out of the water and started walking home. \nSent 13: Their fur was wet, and the breeze chilled them. \nSent 14: When they got home, they dried off, and Jimmy put on his favorite purple shirt. \nSent 15: Joey put on a blue shirt with red and green dots. \nSent 16: The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed. \n\n ''']\n \n \n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 12 [WiC - Word sense disambiguation](https://arxiv.org/abs/1808.09121)\nDecide for `two sentence`s with a shared `disambigous word` wether they have the target word has the same `semantic meaning` in both sentences. \nThis is a sub-task of [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n\n|Predicted | disambigous word| Sentence 1 | Sentence 2 | \n|----------|-----------------|------------|------------|\n| False | kill | He totally killed that rock show! | The airplane crash killed his family | \n| True | window | The expanded window will give us time to catch the thieves.|You have a two-hour window for turning in your homework. | \n| False | window | He jumped out of the window.|You have a two-hour window for turning in your homework. | \n\n\n## How to configure T5 task for MultiRC\n`.setTask('wic pos:)` followed by `sentence1:` prefix for the first sentence, followed by `sentence2:` prefix for the second sentence.\n\n\n### Example pre-processed input for T5 WiC task:\n\n```\nwic pos:\nsentence1: The expanded window will give us time to catch the thieves.\nsentence2: You have a two-hour window of turning in your homework.\nword : window\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('wic ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n '''\npos:\nsentence1: The expanded window will give us time to catch the thieves.\nsentence2: You have a two-hour window of turning in your homework.\nword : window\n\n '''],]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=180)",
"_____no_output_____"
]
],
[
[
"# Task 13 [WSC and DPR - Coreference resolution/ Pronoun ambiguity resolver ](https://www.aaai.org/ocs/index.php/KR/KR12/paper/view/4492/0)\nPredict for an `ambiguous pronoun` to which `noun` it is referring to. \nThis is a sub-task of [GLUE](https://arxiv.org/pdf/1804.07461.pdf) and [SuperGLUE](https://w4ngatang.github.io/static/papers/superglue.pdf).\n\n|Prediction| Text | \n|----------|-------|\n| stable | The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy. | \n\n\n\n## How to configure T5 task for WSC/DPR\n`.setTask('wsc:)` and surround pronoun with asteriks symbols..\n\n\n### Example pre-processed input for T5 WSC/DPR task:\nThe `ambiguous pronous` should be surrounded with `*` symbols.\n\n***Note*** Read [Appendix A.](https://arxiv.org/pdf/1910.10683.pdf#page=64&zoom=100,84,360) for more info\n```\nwsc: \nThe stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy.\n```",
"_____no_output_____"
]
],
[
[
"# Does not work yet 100% correct\n# Set the task on T5\nt5.setTask('wsc')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [['''The stable was very roomy, with four good stalls; a large swinging window opened into the yard , which made *it* pleasant and airy.'''],]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 14 [Text summarization](https://arxiv.org/abs/1506.03340)\n`Summarizes` a paragraph into a shorter version with the same semantic meaning.\n\n| Predicted summary| Text | \n|------------------|-------|\n| manchester united face newcastle in the premier league on wednesday . louis van gaal's side currently sit two points clear of liverpool in fourth . the belgian duo took to the dance floor on monday night with some friends . | the belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth . | \n\n\n## How to configure T5 task for summarization\n`.setTask('summarize:)`\n\n\n### Example pre-processed input for T5 summarization task:\nThis task requires no pre-processing, setting the task to `summarize` is sufficient.\n```\nthe belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth .\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('summarize ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n '''\nThe belgian duo took to the dance floor on monday night with some friends . manchester united face newcastle in the premier league on wednesday . red devils will be looking for just their second league away win in seven . louis van gaal’s side currently sit two points clear of liverpool in fourth .\n '''],\n [''' Calculus, originally called infinitesimal calculus or \"the calculus of infinitesimals\", is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while integral calculus concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.[1] Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz.[2][3] Today, calculus has widespread uses in science, engineering, and economics.[4] In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word calculus (plural calculi) is a Latin word, meaning originally \"small pebble\" (this meaning is kept in medicine – see Calculus (medicine)). Because such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. 
It is therefore used for naming specific methods of calculation and related theories, such as propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus.''']\n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 15 [SQuAD - Context based question answering](https://arxiv.org/abs/1606.05250)\nPredict an `answer` to a `question` based on input `context`.\n\n|Predicted Answer | Question | Context | \n|-----------------|----------|------|\n|carbon monoxide| What does increased oxygen concentrations in the patient’s lungs displace? | Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment.\n|pie| What did Joey eat for breakfast?| Once upon a time, there was a squirrel named Joey. Joey loved to go outside and play with his cousin Jimmy. Joey and Jimmy played silly games together, and were always laughing. One day, Joey and Jimmy went swimming together 50 at their Aunt Julie’s pond. Joey woke up early in the morning to eat some food before they left. Usually, Joey would eat cereal, fruit (a pear), or oatmeal for breakfast. After he ate, he and Jimmy went to the pond. On their way there they saw their friend Jack Rabbit. They dove into the water and swam for several hours. The sun was out, but the breeze was cold. Joey and Jimmy got out of the water and started walking home. Their fur was wet, and the breeze chilled them. When they got home, they dried off, and Jimmy put on his favorite purple shirt. Joey put on a blue shirt with red and green dots. 
The two squirrels ate some food that Joey’s mom, Jasmine, made and went off to bed,'| \n\n## How to configure T5 task parameter for Squad Context based question answering\n`.setTask('question:)` and prefix the context which can be made up of multiple sentences with `context:`\n\n## Example pre-processed input for T5 Squad Context based question answering:\n```\nquestion: What does increased oxygen concentrations in the patient’s lungs displace? \ncontext: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment.\n```",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('question: ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n '''\nWhat does increased oxygen concentrations in the patient’s lungs displace? \ncontext: Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O 2 around the patient and, when needed, the medical staff. Carbon monoxide poisoning, gas gangrene, and decompression sickness (the ’bends’) are sometimes treated using these devices. Increased O 2 concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin. Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial pressure helps kill them. Decompression sickness occurs in divers who decompress too quickly after a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing the pressure of O 2 as soon as possible is part of the treatment.\n ''']\n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 16 [WMT1 Translate English to German](https://arxiv.org/abs/1706.03762)\nFor translation tasks, use the `marian` model.\n## How to configure T5 task parameter for WMT Translate English to German\n`.setTask('translate English to German: ')`",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('translate English to German: ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [\n '''I like sausage and Tea for breakfast with potatoes'''],]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 17 [WMT2 Translate English to French](https://arxiv.org/abs/1706.03762)\nFor translation tasks, use the `marian` model.\n## How to configure T5 task parameter for WMT Translate English to French\n`.setTask('translate English to French: ')`",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('translate English to French: ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n [ '''I like sausage and Tea for breakfast with potatoes''']\n ]\n\n\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
],
[
[
"# Task 18 [WMT3 Translate English to Romanian](https://arxiv.org/abs/1706.03762)\nFor translation tasks, use the `marian` model.\n## How to configure T5 task parameter for WMT Translate English to Romanian\n`.setTask('translate English to Romanian: ')`",
"_____no_output_____"
]
],
[
[
"# Set the task on T5\nt5.setTask('translate English to Romanian: ')\n\n# Build pipeline with T5\npipe_components = [documentAssembler,t5]\npipeline = Pipeline().setStages( pipe_components)\n\n# define Data, add additional tags between sentences\nsentences = [\n ['''I like sausage and Tea for breakfast with potatoes''']\n ]\n\ndf = spark.createDataFrame(sentences).toDF(\"text\")\n\n#Predict on text data with T5\nmodel = pipeline.fit(df)\nannotated_df = model.transform(df)\nannotated_df.select(['text','t5.result']).show(truncate=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd73d5be1ab6d7d23148e68350c4374e7c1c9d2 | 5,021 | ipynb | Jupyter Notebook | PythonChallenge/PyBank/Main.ipynb | yohokr7/python-challenge | e0c49d9abad15886b5e4763a177ae481ccdb1880 | [
"MIT"
] | null | null | null | PythonChallenge/PyBank/Main.ipynb | yohokr7/python-challenge | e0c49d9abad15886b5e4763a177ae481ccdb1880 | [
"MIT"
] | null | null | null | PythonChallenge/PyBank/Main.ipynb | yohokr7/python-challenge | e0c49d9abad15886b5e4763a177ae481ccdb1880 | [
"MIT"
] | null | null | null | 34.627586 | 111 | 0.517028 | [
[
[
"#Import Module\nimport os\nimport csv\n\n#Create CSV Pathway\nbudgetcsvpath = os.path.join(\".\", \"Resources\", \"budget_data.csv\")\n\n#Open CSV, Create CSV Reader, Skip Header\nwith open(budgetcsvpath, newline='') as budgetcsv:\n budgetreader = csv.reader(budgetcsv, delimiter = ',')\n budgetheader = next(budgetreader)\n \n #Define Variables and Lists\n totalamount = 0\n totalmonths = 0\n monthlyamount = []\n changeamount = []\n months = []\n \n #Calculate Total Months and Total P&L, Create Separate Lists for Months and Profits/Losses\n for row in budgetreader:\n amount = int(row[1])\n totalmonths = totalmonths + 1\n totalamount = totalamount + amount\n monthlyamount.append(amount)\n months.append(row[0])\n \n #Calculate Change Between Each Month Into a List\n for amountindex in range(len(monthlyamount)):\n if amountindex == 0:\n current = 0\n past = 0\n else:\n current = int(monthlyamount[amountindex])\n past = int(monthlyamount[amountindex - 1])\n \n changeamount.append(current - past)\n \n #Calculate the Average Change, Biggest Increase, and Biggest Decrease\n avgchange = sum(changeamount)/ (len(changeamount)-1)\n incchange = max(changeamount)\n decchange = min(changeamount)\n \n #Capture the List Index for Biggest Increase and Biggest Decrease\n for changeindex in range(len(changeamount)):\n if changeamount[changeindex] == incchange:\n incindex = changeindex\n if changeamount[changeindex] == decchange:\n decindex = changeindex\n \n #Find the Months Linked to the Biggest Increase and Biggest Decrease\n for monthindex in range(len(months)):\n if monthindex == incindex:\n incmonth = months[monthindex]\n if monthindex == decindex:\n decmonth = months[monthindex]\n \n #Making Financial Analysis String\n line1 = f'Financial Analysis'\n line2 = f'----------------------------'\n line3 = f'Total Months: {totalmonths}'\n line4 = f'Total: ${totalamount}'\n line5 = f'Average Change: ${avgchange}'\n line6 = f'Greatest Increase in Profits: {incmonth} (${incchange})'\n 
    line7 = f'Greatest Decrease in Profits: {decmonth} (${decchange})'\n\n    #Printing Financial Analysis in Terminal\n    print(f'{line1}\\n{line2}\\n{line3}\\n{line4}\\n{line5}\\n{line6}\\n{line7}')\n    \n    #Determining if Analysis.txt exists in Analysis SubDirectory\n    filepath = os.path.join(\".\",\"analysis\",\"Analysis.txt\")\n    isfile = os.path.isfile(filepath)\n    path = os.path.join(\".\",\"analysis\")\n    \n    #If Analysis.txt Does Not Exist, Creates Analysis.txt and Populates\n    if isfile == False:\n        with open(os.path.join(path, \"Analysis.txt\"), \"w\") as analysis:\n            analysis.write(f'{line1}\\n')\n            analysis.write(f'{line2}\\n')\n            analysis.write(f'{line3}\\n')\n            analysis.write(f'{line4}\\n')\n            analysis.write(f'{line5}\\n')\n            analysis.write(f'{line6}\\n')\n            analysis.write(f'{line7}')",
"Financial Analysis\n----------------------------\nTotal Months: 86\nTotal: $38382578\nAverage Change: $-2315.1176470588234\nGreatest Increase in Profits: Feb-2012 ($1926159)/nGreatest Decrease in Profits: Sep-2013 ($-2196167)\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
ecd7429cdef4be893a2a478a86d532e16fb704c6 | 9,641 | ipynb | Jupyter Notebook | models/GBM - 10-fold SKF 866 Cols 02.06.ipynb | susantamoh84/Kaggle-Quora | d38a87a4e9bc5786dc43f8641ce08b50ff3d1075 | [
"MIT"
] | 99 | 2017-06-11T16:39:21.000Z | 2022-03-27T15:51:59.000Z | models/GBM - 10-fold SKF 866 Cols 02.06.ipynb | susantamoh84/Kaggle-Quora | d38a87a4e9bc5786dc43f8641ce08b50ff3d1075 | [
"MIT"
] | 3 | 2018-05-24T07:13:45.000Z | 2018-06-07T06:57:28.000Z | models/GBM - 10-fold SKF 866 Cols 02.06.ipynb | susantamoh84/Kaggle-Quora | d38a87a4e9bc5786dc43f8641ce08b50ff3d1075 | [
"MIT"
] | 37 | 2017-06-11T16:47:22.000Z | 2021-08-01T22:13:39.000Z | 32.570946 | 266 | 0.548698 | [
[
[
"import nltk\nimport difflib\nimport time\nimport gc\nimport itertools\nimport multiprocessing\nimport pandas as pd\nimport numpy as np\nimport xgboost as xgb\nimport lightgbm as lgb\n\nimport warnings\nwarnings.filterwarnings('ignore')\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\n\nfrom sklearn.metrics import log_loss\nfrom sklearn.model_selection import train_test_split, StratifiedKFold\n\nfrom models_utils_fe import *\nfrom models_utils_skf import *",
"_____no_output_____"
],
[
"src = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/scripts/features/'\nfeats_src = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/data/features/uncleaned/'\noof_src = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/scripts/models/OOF_preds/train/'\n\nX_train = pd.read_pickle('Xtrain_866BestColsDropped.pkl')\nmlp = pd.read_pickle(oof_src + 'train_preds_MLP_1sttry.pkl')\nX_train = pd.concat([X_train, mlp], axis = 1)\nX_train = X_train.astype('float32')\nprint(X_train.shape)\n\nxgb_feats = pd.read_csv(feats_src + '/the_1owl/owl_train.csv')\ny_train = xgb_feats[['is_duplicate']]\n\ndel xgb_feats\ngc.collect()",
"(404290, 867)\n"
],
[
"def xgb_foldrun_ooftr(X, y, params, name, save = True):\n skf = StratifiedKFold(n_splits = 10, random_state = 111, shuffle = True)\n if isinstance(X, pd.core.frame.DataFrame):\n X = X.values\n if isinstance(y, pd.core.frame.DataFrame):\n y = y.is_duplicate.values\n if isinstance(y, pd.core.frame.Series):\n y = y.values\n print('Running XGB model with parameters:', params)\n \n i = 0\n losses = []\n oof_train = np.zeros((X.shape[0]))\n os.makedirs('saved_models/XGB/SKF/{}'.format(name), exist_ok = True)\n for tr_index, val_index in skf.split(X, y):\n X_tr, X_val = X[tr_index], X[val_index]\n y_tr, y_val = y[tr_index], y[val_index]\n t = time.time()\n \n dtrain = xgb.DMatrix(X_tr, label = y_tr)\n dval = xgb.DMatrix(X_val, label = y_val)\n watchlist = [(dtrain, 'train'), (dval, 'valid')]\n print('Start training on fold: {}'.format(i))\n gbm = xgb.train(params, dtrain, 10000, watchlist, \n early_stopping_rounds = 200, verbose_eval = 100)\n print('Start predicting...')\n val_pred = gbm.predict(xgb.DMatrix(X_val), ntree_limit=gbm.best_ntree_limit)\n oof_train[val_index] = val_pred\n score = log_loss(y_val, val_pred)\n losses.append(score)\n print('Final score for fold {} :'.format(i), score, '\\n',\n 'Time it took to train and predict on fold:', time.time() - t, '\\n')\n gbm.save_model('saved_models/XGB/SKF/{}/XGB_10SKF_loss{:.5f}_fold{}.txt'.format(name, score, i))\n i += 1\n print('Mean logloss for model in 10-folds SKF:', np.array(losses).mean(axis = 0))\n oof_train = pd.DataFrame(oof_train)\n oof_train.columns = ['{}_prob'.format(name)]\n if save:\n oof_train.to_pickle('OOF_preds/train/train_preds_{}.pkl'.format(name))\n return oof_train",
"_____no_output_____"
],
[
"xgb_params1 = {\n 'seed': 1337,\n 'colsample_bytree': 0.46,\n 'silent': 1,\n 'subsample': 0.89,\n 'eta': 0.02,\n 'objective': 'binary:logistic',\n 'eval_metric': 'logloss',\n 'max_depth': 8,\n 'min_child_weight': 21,\n 'nthread': 4,\n 'tree_method': 'hist',\n }\n\nxgb_params2 = {\n 'seed': 1337,\n 'colsample_bytree': 0.43,\n 'silent': 1,\n 'subsample': 0.88,\n 'eta': 0.02,\n 'objective': 'binary:logistic',\n 'eval_metric': 'logloss',\n 'max_depth': 5,\n 'min_child_weight': 30,\n 'nthread': 4,\n 'tree_method': 'hist',\n }",
"_____no_output_____"
],
[
"oof_train1 = xgb_foldrun_ooftr(X_train, y_train, xgb_params1, '866cols_xgbparams1_MLPadded', save = False)",
"Running XGB model with parameters: {'min_child_weight': 21, 'objective': 'binary:logistic', 'seed': 1337, 'subsample': 0.89, 'nthread': 4, 'eval_metric': 'logloss', 'eta': 0.02, 'max_depth': 8, 'tree_method': 'hist', 'silent': 1, 'colsample_bytree': 0.46}\nStart training on fold: 0\n[0]\ttrain-logloss:0.678835\tvalid-logloss:0.678743\nMultiple eval metrics have been passed: 'valid-logloss' will be used for early stopping.\n\nWill train until valid-logloss hasn't improved in 200 rounds.\n[100]\ttrain-logloss:0.241704\tvalid-logloss:0.242783\n[200]\ttrain-logloss:0.193455\tvalid-logloss:0.197241\n[300]\ttrain-logloss:0.181924\tvalid-logloss:0.188527\n[400]\ttrain-logloss:0.17482\tvalid-logloss:0.185142\n[500]\ttrain-logloss:0.168908\tvalid-logloss:0.183354\n[600]\ttrain-logloss:0.163067\tvalid-logloss:0.182058\n[700]\ttrain-logloss:0.158119\tvalid-logloss:0.18118\n[800]\ttrain-logloss:0.153389\tvalid-logloss:0.180357\n[900]\ttrain-logloss:0.149153\tvalid-logloss:0.179934\n"
],
[
"oof_train2 = xgb_foldrun_ooftr(X_train, y_train, xgb_params2, '866cols_xgbparams2')",
"_____no_output_____"
],
[
"xgb_params3 = {\n 'seed': 1337,\n 'colsample_bytree': 0.38,\n 'silent': 1,\n 'subsample': 0.87,\n 'eta': 0.02,\n 'objective': 'binary:logistic',\n 'eval_metric': 'logloss',\n 'max_depth': 10,\n 'min_child_weight': 16,\n 'nthread': 4,\n 'tree_method': 'hist',\n }\n\nxgb_params4 = {\n 'seed': 1337,\n 'colsample_bytree': 0.46,\n 'silent': 1,\n 'subsample': 0.88,\n 'eta': 0.02,\n 'objective': 'binary:logistic',\n 'eval_metric': 'logloss',\n 'max_depth': 7,\n 'min_child_weight': 23,\n 'nthread': 4,\n 'tree_method': 'hist',\n }\n\n\noof_train3 = xgb_foldrun_ooftr(X_train, y_train, xgb_params3, '866cols_xgbparams3')\noof_train4 = xgb_foldrun_ooftr(X_train, y_train, xgb_params4, '866cols_xgbparams4')",
"_____no_output_____"
],
[
"gbm = xgb.Booster(model_file = 'saved_models/XGB/XGB_10SKF_FredFeatsGRU_loss0.17917_fold1.txt')\ndtrain = xgb.DMatrix(X_train, label = y_train)\n\nmapper = {'f{0}'.format(i): v for i, v in enumerate(dtrain.feature_names)}\nimportance = {mapper[k]: v for k, v in gbm.get_fscore().items()}\nimportance = sorted(importance.items(), key=lambda x:x[1], reverse=True)[:20]\n\ndf_importance = pd.DataFrame(importance, columns=['feature', 'fscore'])\ndf_importance['fscore'] = df_importance['fscore'] / df_importance['fscore'].sum()\n\nplt.figure()\ndf_importance.plot()\ndf_importance.plot(kind='barh', x='feature', y='fscore', legend=False, figsize=(10, 18))\nplt.title('XGBoost Feature Importance')\nplt.xlabel('relative importance')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd7525d295c0e1da0070705e59878b4321c860f | 32,061 | ipynb | Jupyter Notebook | Tennis.ipynb | saidulislam/deep-reinforcement-collaboration-and-Competition | 0d2ebb4177944a5a655e23e2e36a010139cc2901 | [
"Apache-2.0"
] | null | null | null | Tennis.ipynb | saidulislam/deep-reinforcement-collaboration-and-Competition | 0d2ebb4177944a5a655e23e2e36a010139cc2901 | [
"Apache-2.0"
] | null | null | null | Tennis.ipynb | saidulislam/deep-reinforcement-collaboration-and-Competition | 0d2ebb4177944a5a655e23e2e36a010139cc2901 | [
"Apache-2.0"
] | null | null | null | 82.419023 | 18,552 | 0.78778 | [
[
[
"# Collaboration and Competition\n\n---\n\nYou are welcome to use this coding environment to train your agent for the project. Follow the instructions below to get started!\n\n### 1. Start the Environment\n\nRun the next code cell to install a few packages. This line will take a few minutes to run!",
"_____no_output_____"
]
],
[
[
"!pip -q install ./python",
"\u001b[31mtensorflow 1.7.1 has requirement numpy>=1.13.3, but you'll have numpy 1.12.1 which is incompatible.\u001b[0m\r\n\u001b[31mipython 6.5.0 has requirement prompt-toolkit<2.0.0,>=1.0.15, but you'll have prompt-toolkit 3.0.5 which is incompatible.\u001b[0m\r\n"
]
],
[
[
"The environment is already saved in the Workspace and can be accessed at the file path provided below. ",
"_____no_output_____"
]
],
[
[
"from unityagents import UnityEnvironment\nimport numpy as np\n\nenv = UnityEnvironment(file_name=\"/data/Tennis_Linux_NoVis/Tennis\")",
"INFO:unityagents:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of External Brains : 1\n Lesson number : 0\n Reset Parameters :\n\t\t\nUnity brain name: TennisBrain\n Number of Visual Observations (per agent): 0\n Vector Observation space type: continuous\n Vector Observation space size (per agent): 8\n Number of stacked Vector Observation: 3\n Vector Action space type: continuous\n Vector Action space size (per agent): 2\n Vector Action descriptions: , \n"
]
],
[
[
"Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.",
"_____no_output_____"
]
],
[
[
"# get the default brain\nbrain_name = env.brain_names[0]\nbrain = env.brains[brain_name]",
"_____no_output_____"
]
],
[
[
"### 2. Examine the State and Action Spaces\n\nRun the code cell below to print some information about the environment.",
"_____no_output_____"
]
],
[
[
"# reset the environment\nenv_info = env.reset(train_mode=True)[brain_name]\n\n# number of agents \nnum_agents = len(env_info.agents)\nprint('Number of agents:', num_agents)\n\n# size of each action\naction_size = brain.vector_action_space_size\nprint('Size of each action:', action_size)\n\n# examine the state space \nstates = env_info.vector_observations\nstate_size = states.shape[1]\nprint('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))\nprint('The state for the first agent looks like:', states[0])",
"Number of agents: 2\nSize of each action: 2\nThere are 2 agents. Each observes a state with length: 24\nThe state for the first agent looks like: [ 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0.\n 0. 0. -6.65278625 -1.5 -0. 0.\n 6.83172083 6. -0. 0. ]\n"
]
],
[
[
"### 3. Take Random Actions in the Environment\n\nIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.\n\nNote that **in this coding environment, you will not be able to watch the agents while they are training**, and you should set `train_mode=True` to restart the environment.",
"_____no_output_____"
]
],
[
[
"for i in range(5): # play game for 5 episodes\n env_info = env.reset(train_mode=False)[brain_name] # reset the environment \n states = env_info.vector_observations # get the current state (for each agent)\n scores = np.zeros(num_agents) # initialize the score (for each agent)\n while True:\n actions = np.random.randn(num_agents, action_size) # select an action (for each agent)\n actions = np.clip(actions, -1, 1) # all actions between -1 and 1\n env_info = env.step(actions)[brain_name] # send all actions to the environment\n next_states = env_info.vector_observations # get next state (for each agent)\n rewards = env_info.rewards # get reward (for each agent)\n dones = env_info.local_done # see if episode finished\n scores += env_info.rewards # update the score (for each agent)\n states = next_states # roll over states to next time step\n if np.any(dones): # exit loop if episode finished\n break\n print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))",
"Total score (averaged over agents) this episode: -0.004999999888241291\nTotal score (averaged over agents) this episode: 0.04500000085681677\nTotal score (averaged over agents) this episode: -0.004999999888241291\nTotal score (averaged over agents) this episode: 0.04500000085681677\nTotal score (averaged over agents) this episode: 0.1450000023469329\n"
]
],
[
[
"### 4. It's Your Turn!\n\nNow it's your turn to train your own agent to solve the environment! A few **important notes**:\n- When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:\n```python\nenv_info = env.reset(train_mode=True)[brain_name]\n```\n- To structure your work, you're welcome to work directly in this Jupyter notebook, or you might like to start over with a new file! You can see the list of files in the workspace by clicking on **_Jupyter_** in the top left corner of the notebook.\n- In this coding environment, you will not be able to watch the agents while they are training. However, **_after training the agents_**, you can download the saved model weights to watch the agents on your own machine! ",
"_____no_output_____"
]
],
[
[
"from maddpg_agent import MADDPG\nimport torch\nfrom collections import deque\nfrom matplotlib import pyplot as plt\n\nimport workspace_utils # temp. to keep session alive while I train ",
"_____no_output_____"
],
[
"from workspace_utils import active_session\n\nn_episodes = 10000\nmax_t = 1000\nscores = []\nscores_deque = deque(maxlen=100)\nscores_avg = []\n\n\n# buffer_size=100000,\n# batch_size=256,\n# gamma=0.999,\n# update_every=4,\n# noise_start=1.0,\n# noise_decay=1.0,\n# t_stop_noise=30000,\n\n\nagent = MADDPG(seed=42, \n buffer_size=10000, \n batch_size=64,\n noise_start=0.5,\n noise_decay=1.0,\n update_every=2, \n gamma=.9999, \n t_stop_noise=4000)\n\nwith active_session():\n for i_episode in range(1, n_episodes+1):\n rewards = []\n \n # reset the environment\n env_info = env.reset(train_mode=False)[brain_name]\n \n # get the current state\n state = env_info.vector_observations\n \n # looping over the steps\n for t in range(max_t):\n # select an action\n action = agent.act(state)\n # take action in the environment\n env_info = env.step(action)[brain_name]\n next_state = env_info.vector_observations\n rewards_vec = env_info.rewards\n done = env_info.local_done\n agent.step(state, action, rewards_vec, next_state, done)\n state = next_state\n rewards.append(rewards_vec)\n if any(done):\n break\n\n # calculate reward for the episode\n episode_reward = np.max(np.sum(np.array(rewards),axis=0))\n\n scores.append(episode_reward) # save most recent score to overall score\n scores_deque.append(episode_reward) # save most recent score to running window of 100 last scores\n current_avg_score = np.mean(scores_deque)\n scores_avg.append(current_avg_score) # save average of last 100 scores to average score array\n\n print('\\rEpisode {}\\tAverage Score: {:.3f}'.format(i_episode, current_avg_score),end=\"\")\n\n # print score every 200 episodes\n if i_episode % 200 == 0:\n print('\\rEpisode {}\\tAverage Score: {:.3f}'.format(i_episode, current_avg_score))\n agent.save_agents()\n\n # exit the loop, if environment is solved\n if np.mean(scores_deque)>=.5:\n print('\\nEnvironment solved in {:d} episodes!\\tAverage Score: {:.3f}'.format(i_episode, np.mean(scores_deque)))\n 
agent.save_agents()\n break",
"Episode 200\tAverage Score: 0.010\nEpisode 400\tAverage Score: 0.009\nEpisode 600\tAverage Score: 0.010\nEpisode 800\tAverage Score: 0.015\nEpisode 1000\tAverage Score: 0.041\nEpisode 1200\tAverage Score: 0.082\nEpisode 1400\tAverage Score: 0.055\nEpisode 1600\tAverage Score: 0.047\nEpisode 1800\tAverage Score: 0.067\nEpisode 2000\tAverage Score: 0.092\nEpisode 2200\tAverage Score: 0.123\nEpisode 2400\tAverage Score: 0.312\nEpisode 2541\tAverage Score: 0.505\nEnvironment solved in 2541 episodes!\tAverage Score: 0.505\n"
],
[
"# plot the scores\nfig = plt.figure()\nax = fig.add_subplot(111)\nplt.plot(np.arange(len(scores)), scores, label='MADDPG')\nplt.plot(np.arange(len(scores)), scores_avg, c='r', label='average')\nplt.ylabel('Score')\nplt.xlabel('Episode #')\nplt.legend(loc='upper left');\nplt.show()",
"_____no_output_____"
]
],
[
[
"When finished, close the environment.",
"_____no_output_____"
]
],
[
[
"env.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd762b74c6202533246e063fdac25b416b7d08f | 219,150 | ipynb | Jupyter Notebook | assignments/assignment2/PyTorch.ipynb | ChechkovEugene/dlcourse_ai | e96a86266799a3b1b163aff3d249cab6c6256e57 | [
"MIT"
] | 1 | 2020-01-09T12:28:41.000Z | 2020-01-09T12:28:41.000Z | assignments/assignment2/PyTorch.ipynb | ChechkovEugene/dlcourse_ai | e96a86266799a3b1b163aff3d249cab6c6256e57 | [
"MIT"
] | null | null | null | assignments/assignment2/PyTorch.ipynb | ChechkovEugene/dlcourse_ai | e96a86266799a3b1b163aff3d249cab6c6256e57 | [
"MIT"
] | null | null | null | 225.462963 | 64,384 | 0.895081 | [
[
[
"# Assignment 2.2 - Introduction to PyTorch\n\nFor this assignment you will need to install PyTorch version 1.0\n\nhttps://pytorch.org/get-started/locally/\n\nIn this assignment we will get familiar with the main components of PyTorch and train a few small models.<br>\nWe won't need a GPU for now.\n\nMain links: \nhttps://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html \nhttps://pytorch.org/docs/stable/nn.html \nhttps://pytorch.org/docs/stable/torchvision/index.html ",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torchvision.datasets as dset\nfrom torch.utils.data.sampler import SubsetRandomSampler, Sampler\n\nfrom torchvision import transforms\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## As always, we start by loading the data\n\nPyTorch supports loading SVHN out of the box.",
"_____no_output_____"
]
],
[
[
"# First, lets load the dataset\ndata_train = dset.SVHN('./data/', split='train',\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize(mean=[0.43,0.44,0.47],\n std=[0.20,0.20,0.20]) \n ])\n )\ndata_test = dset.SVHN('./data/', split='test', \n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize(mean=[0.43,0.44,0.47],\n std=[0.20,0.20,0.20]) \n ]))",
"_____no_output_____"
]
],
[
[
"Now we split the data into training and validation sets using the `SubsetRandomSampler` and `DataLoader` classes.\n\n`DataLoader` loads the data provided by a `Dataset` class during training and groups it into batches.\nIt lets you specify a `Sampler` that chooses which examples from the dataset to use for training. We use this to split the data into training and validation.\n\nMore details: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html",
"_____no_output_____"
]
],
[
[
"batch_size = 64\n\ndata_size = data_train.data.shape[0]\nvalidation_split = .2\nsplit = int(np.floor(validation_split * data_size))\nindices = list(range(data_size))\nnp.random.shuffle(indices)\n\ntrain_indices, val_indices = indices[split:], indices[:split]\n\ntrain_sampler = SubsetRandomSampler(train_indices)\nval_sampler = SubsetRandomSampler(val_indices)\n\ntrain_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size, \n sampler=train_sampler)\nval_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,\n sampler=val_sampler)",
"_____no_output_____"
]
],
[
[
"In our task the inputs are images, but we work with them as one-dimensional arrays. To turn a multi-dimensional array into a one-dimensional one, we will use a very simple helper module, `Flattener`.",
"_____no_output_____"
]
],
[
[
"sample, label = data_train[0]\nprint(\"SVHN data sample shape: \", sample.shape)\n# As you can see, the data is shaped like an image\n\n# We'll use a special helper module to shape it into a tensor\nclass Flattener(nn.Module):\n def forward(self, x):\n batch_size, *_ = x.shape\n return x.view(batch_size, -1)",
"SVHN data sample shape: torch.Size([3, 32, 32])\n"
]
],
[
[
"Finally, we create the main PyTorch objects:\n- `nn_model` - the neural network model itself\n- `loss` - the loss function, in our case `CrossEntropyLoss`\n- `optimizer` - the optimization algorithm, in our case plain `SGD`",
"_____no_output_____"
]
],
[
[
"nn_model = nn.Sequential(\n Flattener(),\n nn.Linear(3*32*32, 100),\n nn.ReLU(inplace=True),\n nn.Linear(100, 10), \n )\nnn_model.type(torch.FloatTensor)\n\n# We will minimize cross-entropy between the ground truth and\n# network predictions using an SGD optimizer\nloss = nn.CrossEntropyLoss().type(torch.FloatTensor)\noptimizer = optim.SGD(nn_model.parameters(), lr=1e-2, weight_decay=1e-1)",
"_____no_output_____"
]
],
[
[
"## Let's train!\n\nBelow is the `train_model` function, which implements the main PyTorch training loop.\n\nEvery epoch this function calls `compute_accuracy`, which computes accuracy on the validation set; you are asked to implement this latter function yourself.",
"_____no_output_____"
]
],
[
[
"# This is how to implement the same main train loop in PyTorch. Pretty easy, right?\n\ndef train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, scheduler = None): \n loss_history = []\n train_history = []\n val_history = []\n for epoch in range(num_epochs):\n \n model.train() # Enter train mode\n \n loss_accum = 0\n correct_samples = 0\n total_samples = 0\n for i_step, (x, y) in enumerate(train_loader):\n prediction = model(x) \n loss_value = loss(prediction, y)\n optimizer.zero_grad()\n loss_value.backward()\n optimizer.step()\n \n _, indices = torch.max(prediction, 1)\n correct_samples += torch.sum(indices == y)\n total_samples += y.shape[0]\n \n loss_accum += loss_value\n \n if epoch%2 == 0 and not scheduler is None:\n scheduler.step()\n \n ave_loss = loss_accum / (i_step + 1)\n train_accuracy = float(correct_samples) / total_samples\n val_accuracy = compute_accuracy(model, val_loader)\n \n loss_history.append(float(ave_loss))\n train_history.append(train_accuracy)\n val_history.append(val_accuracy)\n \n print(\"Average loss: %f, Train accuracy: %f, Val accuracy: %f\" % (ave_loss, train_accuracy, val_accuracy))\n \n return loss_history, train_history, val_history\n \ndef compute_accuracy(model, loader):\n \"\"\"\n Computes accuracy on the dataset wrapped in a loader\n \n Returns: accuracy as a float value between 0 and 1\n \"\"\"\n model.eval() # Evaluation mode\n # TODO: Implement the inference of the model on all of the batches from loader,\n # and compute the overall accuracy.\n # Hint: PyTorch has the argmax function!\n correct_samples = 0\n total_samples = 0\n \n for i_step, (x, y) in enumerate(loader):\n prediction = model(x) \n \n _, indices = torch.max(prediction, 1)\n correct_samples += torch.sum(indices == y)\n total_samples += y.shape[0]\n \n val_accuracy = float(correct_samples) / total_samples\n \n return val_accuracy\n\nloss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 
3)",
"Average loss: 1.822343, Train accuracy: 0.417602, Val accuracy: 0.547608\nAverage loss: 1.456002, Train accuracy: 0.583012, Val accuracy: 0.609378\nAverage loss: 1.380273, Train accuracy: 0.619049, Val accuracy: 0.638591\n"
]
],
[
[
"## After the main loop\n\nLet's look at other features and optimizations that PyTorch provides.\n\nAdd one more hidden layer of 100 neurons to the model.",
"_____no_output_____"
]
],
[
[
"# Since it's so easy to add layers, let's add some!\n\n# TODO: Implement a model with 2 hidden layers of the size 100\nnn_model = nn.Sequential(\n Flattener(),\n nn.Linear(3*32*32, 100),\n nn.ReLU(inplace=True),\n nn.Linear(100, 100),\n nn.ReLU(inplace=True),\n nn.Linear(100, 10), \n )\nnn_model.type(torch.FloatTensor)\n\noptimizer = optim.SGD(nn_model.parameters(), lr=1e-2, weight_decay=1e-1)\nloss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)",
"Average loss: 2.171983, Train accuracy: 0.208989, Val accuracy: 0.247355\nAverage loss: 1.931843, Train accuracy: 0.319814, Val accuracy: 0.391031\nAverage loss: 1.740896, Train accuracy: 0.405436, Val accuracy: 0.442086\nAverage loss: 1.693064, Train accuracy: 0.426543, Val accuracy: 0.443110\nAverage loss: 1.676226, Train accuracy: 0.436474, Val accuracy: 0.455327\n"
]
],
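Since it is so easy to add layers, it also helps to know how many parameters each `Linear` layer contributes. A small sketch that counts weights and biases by hand (the `[3072, 100, 100, 10]` sizes match the model above; `mlp_param_count` is a hypothetical helper, not part of the assignment code):

```python
def mlp_param_count(layer_sizes):
    # Each Linear(in_f, out_f) holds in_f * out_f weights plus out_f biases.
    return sum(in_f * out_f + out_f
               for in_f, out_f in zip(layer_sizes, layer_sizes[1:]))

# 3*32*32 = 3072 inputs (flattened SVHN image) -> 100 -> 100 -> 10 classes
print(mlp_param_count([3 * 32 * 32, 100, 100, 10]))  # 318410
```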
[
[
"Добавьте слой с Batch Normalization",
"_____no_output_____"
]
],
[
[
"# We heard batch normalization is powerful, let's use it!\n# TODO: Add batch normalization after each of the hidden layers of the network, before or after non-linearity\n# Hint: check out torch.nn.BatchNorm1d\n\nnn_model = nn.Sequential(\n Flattener(),\n nn.Linear(3*32*32, 100),\n nn.ReLU(inplace=True),\n nn.BatchNorm1d(100),\n nn.Linear(100, 100),\n nn.ReLU(inplace=True),\n nn.BatchNorm1d(100),\n nn.Linear(100, 10),\n )\n\noptimizer = optim.SGD(nn_model.parameters(), lr=1e-3, weight_decay=1e-1)\nloss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)",
"Average loss: 1.897655, Train accuracy: 0.398901, Val accuracy: 0.562897\nAverage loss: 1.462757, Train accuracy: 0.603317, Val accuracy: 0.655245\nAverage loss: 1.296343, Train accuracy: 0.657441, Val accuracy: 0.667599\nAverage loss: 1.207463, Train accuracy: 0.683735, Val accuracy: 0.699065\nAverage loss: 1.154683, Train accuracy: 0.699894, Val accuracy: 0.696676\n"
]
],
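`BatchNorm1d` normalizes each feature to zero mean and unit variance over the batch before applying its learnable scale and shift. A NumPy sketch of just that normalization step (omitting the learnable gamma/beta and the running statistics used at eval time):

```python
import numpy as np

def batch_norm_forward(x, eps=1e-5):
    # x: (batch, features). Normalize each feature column over the batch,
    # as BatchNorm1d does before applying its learnable gamma and beta.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

batch = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
normed = batch_norm_forward(batch)  # each column now ~zero mean, unit variance
```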
[
[
"Добавьте уменьшение скорости обучения по ходу тренировки.",
"_____no_output_____"
]
],
[
[
"# Learning rate annealing\n# Reduce your learning rate 2x every 2 epochs\n# Hint: look up learning rate schedulers in PyTorch. You might need to extend train_model function a little bit too!\nfrom torch.optim.lr_scheduler import StepLR\n\nnn_model = nn.Sequential(\n Flattener(),\n nn.Linear(3*32*32, 100),\n nn.ReLU(inplace=True),\n nn.BatchNorm1d(100),\n nn.Linear(100, 100),\n nn.ReLU(inplace=True),\n nn.BatchNorm1d(100),\n nn.Linear(100, 10),\n )\n\noptimizer = optim.SGD(nn_model.parameters(), lr=1e-3, weight_decay=1e-1)\nscheduler = StepLR(optimizer, step_size=2, gamma=0.1)\nloss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 10, scheduler)",
"Average loss: 1.889557, Train accuracy: 0.398662, Val accuracy: 0.586103\nAverage loss: 1.451407, Train accuracy: 0.610091, Val accuracy: 0.631971\nAverage loss: 1.292893, Train accuracy: 0.659557, Val accuracy: 0.690670\nAverage loss: 1.200272, Train accuracy: 0.691977, Val accuracy: 0.705140\nAverage loss: 1.185031, Train accuracy: 0.697249, Val accuracy: 0.712170\nAverage loss: 1.176684, Train accuracy: 0.698700, Val accuracy: 0.713808\nAverage loss: 1.166874, Train accuracy: 0.702966, Val accuracy: 0.715105\nAverage loss: 1.158139, Train accuracy: 0.706003, Val accuracy: 0.715173\nAverage loss: 1.157390, Train accuracy: 0.706515, Val accuracy: 0.715241\nAverage loss: 1.156957, Train accuracy: 0.705389, Val accuracy: 0.716538\n"
]
],
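`StepLR` multiplies the learning rate by `gamma` once every `step_size` calls to `scheduler.step()`. Note that `train_model` above only steps the scheduler on even-numbered epochs, so the effective decay is slower than one step per epoch. A sketch of the resulting schedule as a function of the number of scheduler steps taken:

```python
def stepped_lr(base_lr, gamma, step_size, n_steps):
    # Learning rate after n_steps scheduler steps under a StepLR-style rule.
    return base_lr * gamma ** (n_steps // step_size)

# gamma=0.1, step_size=2 as in the cell above
schedule = [stepped_lr(1e-3, 0.1, 2, s) for s in range(6)]
# [0.001, 0.001, 0.0001, 0.0001, 1e-05, 1e-05]
```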
[
[
"# Визуализируем ошибки модели\n\nПопробуем посмотреть, на каких изображениях наша модель ошибается.\nДля этого мы получим все предсказания модели на validation set и сравним их с истинными метками (ground truth).\n\nПервая часть - реализовать код на PyTorch, который вычисляет все предсказания модели на validation set. \nЧтобы это сделать мы приводим код `SubsetSampler`, который просто проходит по всем заданным индексам последовательно и составляет из них батчи. \n\nРеализуйте функцию `evaluate_model`, которая прогоняет модель через все сэмплы validation set и запоминает предсказания модели и истинные метки.",
"_____no_output_____"
]
],
[
[
"class SubsetSampler(Sampler):\n r\"\"\"Samples elements with given indices sequentially\n\n Arguments:\n indices (ndarray): indices of the samples to take\n \"\"\"\n\n def __init__(self, indices):\n self.indices = indices\n\n def __iter__(self):\n return (self.indices[i] for i in range(len(self.indices)))\n\n def __len__(self):\n return len(self.indices)\n \n \ndef evaluate_model(model, dataset, indices):\n \"\"\"\n Computes predictions and ground truth labels for the indices of the dataset\n \n Returns: \n predictions: np array of ints - model predictions\n grount_truth: np array of ints - actual labels of the dataset\n \"\"\"\n model.eval() # Evaluation mode\n \n # TODO: Evaluate model on the list of indices and capture predictions\n # and ground truth labels\n # Hint: SubsetSampler above could be useful!\n \n eval_sampler = SubsetSampler(indices)\n eval_loader = torch.utils.data.DataLoader(dataset,\n sampler=eval_sampler , batch_size=len(indices))\n predictions = np.array([])\n ground_truth = np.array([])\n \n for i_step, (x, y) in enumerate(eval_loader):\n prediction = model(x) \n loss_value = loss(prediction, y)\n \n _, preds = torch.max(prediction, 1)\n ground_truth = np.concatenate((ground_truth, y))\n predictions = np.concatenate((predictions, preds))\n\n\n return predictions, ground_truth\n\n# Evaluate model on validation\npredictions, gt = evaluate_model(nn_model, data_train, val_indices)\nassert len(predictions) == len(val_indices)\nassert len(gt) == len(val_indices)\nassert gt[100] == data_train[val_indices[100]][1]\nassert np.any(np.not_equal(gt, predictions))",
"_____no_output_____"
]
],
[
[
"## Confusion matrix\nПервая часть визуализации - вывести confusion matrix (https://en.wikipedia.org/wiki/Confusion_matrix ).\n\nConfusion matrix - это матрица, где каждой строке соответствуют классы предсказанный, а столбцу - классы истинных меток (ground truth). Число с координатами `i,j` - это количество сэмплов класса `j`, которые модель считает классом `i`.\n\n\n\nДля того, чтобы облегчить вам задачу, ниже реализована функция `visualize_confusion_matrix` которая визуализирует такую матрицу. \nВам осталось реализовать функцию `build_confusion_matrix`, которая ее вычислит.\n\nРезультатом должна быть матрица 10x10.",
"_____no_output_____"
]
],
[
[
"def visualize_confusion_matrix(confusion_matrix):\n \"\"\"\n Visualizes confusion matrix\n \n confusion_matrix: np array of ints, x axis - predicted class, y axis - actual class\n [i][j] should have the count of samples that were predicted to be class i,\n but have j in the ground truth\n \n \"\"\"\n # Adapted from \n # https://stackoverflow.com/questions/2897826/confusion-matrix-with-number-of-classified-misclassified-instances-on-it-python\n assert confusion_matrix.shape[0] == confusion_matrix.shape[1]\n size = confusion_matrix.shape[0]\n fig = plt.figure(figsize=(10,10))\n plt.title(\"Confusion matrix\")\n plt.ylabel(\"predicted\")\n plt.xlabel(\"ground truth\")\n res = plt.imshow(confusion_matrix, cmap='GnBu', interpolation='nearest')\n cb = fig.colorbar(res)\n plt.xticks(np.arange(size))\n plt.yticks(np.arange(size))\n for i, row in enumerate(confusion_matrix):\n for j, count in enumerate(row):\n plt.text(j, i, count, fontsize=14, horizontalalignment='center', verticalalignment='center')\n \ndef build_confusion_matrix(predictions, ground_truth):\n \"\"\"\n Builds confusion matrix from predictions and ground truth\n\n predictions: np array of ints, model predictions for all validation samples\n ground_truth: np array of ints, ground truth for all validation samples\n \n Returns:\n np array of ints, (10,10), counts of samples for predicted/ground_truth classes\n \"\"\"\n preds = np.array(predictions, np.int)\n gt = np.array(ground_truth, np.int)\n \n confusion_matrix = np.zeros((10,10), np.int)\n \n for pc in range(10):\n for ac in range(10):\n confusion_matrix[pc, ac] = np.sum(np.logical_and(preds == pc, gt == ac))\n \n # TODO: Implement filling the prediction matrix\n return confusion_matrix\n\nconfusion_matrix = build_confusion_matrix(predictions, gt)\nvisualize_confusion_matrix(confusion_matrix)",
"_____no_output_____"
]
],
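The double loop in `build_confusion_matrix` scans the prediction arrays once per matrix entry. The same 10x10 count matrix can be accumulated in a single pass with `np.add.at`; a sketch, keeping the row-is-predicted, column-is-ground-truth convention used above:

```python
import numpy as np

def build_confusion_matrix_fast(predictions, ground_truth, n_classes=10):
    preds = np.asarray(predictions, dtype=int)
    gt = np.asarray(ground_truth, dtype=int)
    matrix = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(matrix, (preds, gt), 1)  # unbuffered scatter-add, one count per sample
    return matrix

# Tiny hypothetical example with 3 classes
cm = build_confusion_matrix_fast([0, 0, 1, 2], [0, 1, 1, 2], n_classes=3)
```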
[
[
"Наконец, посмотрим на изображения, соответствующие некоторым элементам этой матрицы.\n\nКак и раньше, вам дана функция `visualize_images`, которой нужно воспрользоваться при реализации функции `visualize_predicted_actual`. Эта функция должна вывести несколько примеров, соответствующих заданному элементу матрицы.\n\nВизуализируйте наиболее частые ошибки и попробуйте понять, почему модель их совершает.",
"_____no_output_____"
]
],
[
[
"data_train_images = dset.SVHN('./data/', split='train')\n\ndef visualize_images(indices, data, title='', max_num=10):\n \"\"\"\n Visualizes several images from the dataset\n \n indices: array of indices to visualize\n data: torch Dataset with the images\n title: string, title of the plot\n max_num: int, max number of images to display\n \"\"\"\n to_show = min(len(indices), max_num)\n fig = plt.figure(figsize=(10,1.5))\n fig.suptitle(title)\n for i, index in enumerate(indices[:to_show]):\n plt.subplot(1,to_show, i+1)\n plt.axis('off')\n sample = data[index][0]\n plt.imshow(sample)\n \ndef visualize_predicted_actual(predicted_class, gt_class, predictions, groud_truth, val_indices, data):\n \"\"\"\n Visualizes images of a ground truth class which were predicted as the other class \n \n predicted: int 0-9, index of the predicted class\n gt_class: int 0-9, index of the ground truth class\n predictions: np array of ints, model predictions for all validation samples\n ground_truth: np array of ints, ground truth for all validation samples\n val_indices: np array of ints, indices of validation samples\n \"\"\"\n\n # TODO: Implement visualization using visualize_images above\n # predictions and ground_truth are provided for validation set only, defined by val_indices\n # Hint: numpy index arrays might be helpful\n # https://docs.scipy.org/doc/numpy/user/basics.indexing.html#index-arrays\n # Please make the title meaningful!\n indexes = val_indices[(groud_truth == gt_class)&(predictions == predicted_class)]\n visualize_images(indexes, \n data_train_images, \n title ='Misclassified images for class {} misclassified as {}'.format(gt_class, predicted_class))\n\nvisualize_predicted_actual(6, 8, predictions, gt, np.array(val_indices), data_train_images)\nvisualize_predicted_actual(1, 7, predictions, gt, np.array(val_indices), data_train_images)",
"_____no_output_____"
]
],
[
[
"# Переходим к свободным упражнениям!\n\nНатренируйте модель как можно лучше - экспериментируйте сами!\nЧто следует обязательно попробовать:\n- перебор гиперпараметров с помощью валидационной выборки\n- другие оптимизаторы вместо SGD\n- изменение количества слоев и их размеров\n- наличие Batch Normalization\n\nНо ограничиваться этим не стоит!\n\nТочность на тестовой выборке должна быть доведена до **80%**",
"_____no_output_____"
]
],
[
[
"# Experiment here!\n\nnn_model = nn.Sequential(\n Flattener(),\n nn.Linear(3*32*32, 100),\n nn.ReLU(inplace=True),\n nn.BatchNorm1d(100),\n nn.Linear(100, 100),\n nn.ReLU(inplace=True),\n nn.BatchNorm1d(100),\n nn.Linear(100, 10), \n )\n\nlearning_rates = [1e-3, 1e-4, 1e-5]\ngamma = [0.2, 0.5, 0.7, 0.9]\nstep = [1, 2]\n\nbest_classifier = ()\nbest_val_accuracy = 0\n\nfor lr in learning_rates:\n for gm in gamma:\n for st in step:\n print('Training for learning rate = {}, gamma = {}, step = {}'.format(lr, gm, st))\n optimizer = optim.SGD(nn_model.parameters(), lr=lr, weight_decay=1e-1)\n scheduler = StepLR(optimizer, step_size=st, gamma=gm)\n loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5, scheduler)\n if val_history[-1] > best_val_accuracy:\n best_val_accuracy = val_history[-1]\n best_classifier = (lr, gm, st) \n print('best validation accuracy achieved: {}'.format(best_val_accuracy))",
"Training for learning rate = 0.001, gamma = 0.2, step = 1\nAverage loss: 1.932258, Train accuracy: 0.373750, Val accuracy: 0.561532\nAverage loss: 1.583728, Train accuracy: 0.569003, Val accuracy: 0.603167\nAverage loss: 1.520537, Train accuracy: 0.595980, Val accuracy: 0.620982\nAverage loss: 1.483051, Train accuracy: 0.610108, Val accuracy: 0.630401\nAverage loss: 1.472424, Train accuracy: 0.614289, Val accuracy: 0.632107\nbest validation accuracy achieved: 0.6321070234113713\nTraining for learning rate = 0.001, gamma = 0.2, step = 2\nAverage loss: 1.404644, Train accuracy: 0.628588, Val accuracy: 0.661866\nAverage loss: 1.280162, Train accuracy: 0.663362, Val accuracy: 0.688895\nAverage loss: 1.207017, Train accuracy: 0.685083, Val accuracy: 0.704457\nAverage loss: 1.132377, Train accuracy: 0.715183, Val accuracy: 0.729097\nAverage loss: 1.118779, Train accuracy: 0.719278, Val accuracy: 0.734284\nbest validation accuracy achieved: 0.7342843491911815\nTraining for learning rate = 0.001, gamma = 0.5, step = 1\nAverage loss: 1.147991, Train accuracy: 0.702505, Val accuracy: 0.721043\nAverage loss: 1.087714, Train accuracy: 0.727963, Val accuracy: 0.747526\nAverage loss: 1.071230, Train accuracy: 0.734396, Val accuracy: 0.747526\nAverage loss: 1.044945, Train accuracy: 0.746494, Val accuracy: 0.758037\nAverage loss: 1.036218, Train accuracy: 0.749616, Val accuracy: 0.757081\nbest validation accuracy achieved: 0.7570814278888813\nTraining for learning rate = 0.001, gamma = 0.5, step = 2\nAverage loss: 1.095695, Train accuracy: 0.721940, Val accuracy: 0.700362\nAverage loss: 1.074398, Train accuracy: 0.735044, Val accuracy: 0.752645\nAverage loss: 1.061996, Train accuracy: 0.743644, Val accuracy: 0.751962\nAverage loss: 1.016958, Train accuracy: 0.768590, Val accuracy: 0.777626\nAverage loss: 1.012977, Train accuracy: 0.772924, Val accuracy: 0.764248\nbest validation accuracy achieved: 0.7642481741860624\nTraining for learning rate = 0.001, gamma = 0.7, step = 
1\nAverage loss: 1.063376, Train accuracy: 0.750913, Val accuracy: 0.757627\nAverage loss: 1.031042, Train accuracy: 0.772310, Val accuracy: 0.765613\nAverage loss: 1.031141, Train accuracy: 0.774033, Val accuracy: 0.789093\nAverage loss: 1.013273, Train accuracy: 0.784800, Val accuracy: 0.783906\nAverage loss: 1.012729, Train accuracy: 0.788196, Val accuracy: 0.785407\nbest validation accuracy achieved: 0.7854071394444065\nTraining for learning rate = 0.001, gamma = 0.7, step = 2\nAverage loss: 1.076068, Train accuracy: 0.758625, Val accuracy: 0.774077\nAverage loss: 1.070304, Train accuracy: 0.768181, Val accuracy: 0.778445\nAverage loss: 1.074000, Train accuracy: 0.771662, Val accuracy: 0.786294\nAverage loss: 1.049283, Train accuracy: 0.788435, Val accuracy: 0.786567\nAverage loss: 1.054367, Train accuracy: 0.790363, Val accuracy: 0.768343\nbest validation accuracy achieved: 0.7854071394444065\nTraining for learning rate = 0.001, gamma = 0.9, step = 1\nAverage loss: 1.088445, Train accuracy: 0.771525, Val accuracy: 0.775032\nAverage loss: 1.081740, Train accuracy: 0.783060, Val accuracy: 0.792233\nAverage loss: 1.083250, Train accuracy: 0.783196, Val accuracy: 0.773531\nAverage loss: 1.079929, Train accuracy: 0.787513, Val accuracy: 0.795236\nAverage loss: 1.082930, Train accuracy: 0.788110, Val accuracy: 0.760426\nbest validation accuracy achieved: 0.7854071394444065\nTraining for learning rate = 0.001, gamma = 0.9, step = 2\nAverage loss: 1.109028, Train accuracy: 0.774887, Val accuracy: 0.742953\nAverage loss: 1.106447, Train accuracy: 0.779511, Val accuracy: 0.779947\nAverage loss: 1.110628, Train accuracy: 0.778077, Val accuracy: 0.777080\nAverage loss: 1.104702, Train accuracy: 0.784152, Val accuracy: 0.798171\nAverage loss: 1.105245, Train accuracy: 0.788008, Val accuracy: 0.801652\nbest validation accuracy achieved: 0.8016517643846837\nTraining for learning rate = 0.0001, gamma = 0.2, step = 1\nAverage loss: 1.055307, Train accuracy: 0.815804, Val 
accuracy: 0.816463\nAverage loss: 1.048198, Train accuracy: 0.820752, Val accuracy: 0.822265\nAverage loss: 1.047107, Train accuracy: 0.823346, Val accuracy: 0.821855\nAverage loss: 1.045431, Train accuracy: 0.823943, Val accuracy: 0.816122\nAverage loss: 1.045499, Train accuracy: 0.823090, Val accuracy: 0.822196\nbest validation accuracy achieved: 0.8221964371032694\nTraining for learning rate = 0.0001, gamma = 0.2, step = 2\nAverage loss: 1.048922, Train accuracy: 0.822561, Val accuracy: 0.818238\nAverage loss: 1.046829, Train accuracy: 0.822305, Val accuracy: 0.824858\nAverage loss: 1.045841, Train accuracy: 0.824114, Val accuracy: 0.824244\nAverage loss: 1.040933, Train accuracy: 0.826980, Val accuracy: 0.820422\nAverage loss: 1.039246, Train accuracy: 0.827065, Val accuracy: 0.824585\nbest validation accuracy achieved: 0.8245853525356631\nTraining for learning rate = 0.0001, gamma = 0.5, step = 1\nAverage loss: 1.042257, Train accuracy: 0.824421, Val accuracy: 0.820285\nAverage loss: 1.039634, Train accuracy: 0.826502, Val accuracy: 0.826019\nAverage loss: 1.038986, Train accuracy: 0.826417, Val accuracy: 0.826087\nAverage loss: 1.036068, Train accuracy: 0.827577, Val accuracy: 0.826223\nAverage loss: 1.036452, Train accuracy: 0.828652, Val accuracy: 0.824312\nbest validation accuracy achieved: 0.8245853525356631\nTraining for learning rate = 0.0001, gamma = 0.5, step = 2\nAverage loss: 1.040094, Train accuracy: 0.825376, Val accuracy: 0.822742\nAverage loss: 1.036705, Train accuracy: 0.825820, Val accuracy: 0.827657\nAverage loss: 1.035858, Train accuracy: 0.827356, Val accuracy: 0.822469\nAverage loss: 1.033181, Train accuracy: 0.828755, Val accuracy: 0.824176\nAverage loss: 1.033221, Train accuracy: 0.829352, Val accuracy: 0.826087\nbest validation accuracy achieved: 0.8260869565217391\nTraining for learning rate = 0.0001, gamma = 0.7, step = 1\nAverage loss: 1.033502, Train accuracy: 0.828891, Val accuracy: 0.824449\nAverage loss: 1.031038, Train accuracy: 
0.828004, Val accuracy: 0.825268\nAverage loss: 1.030073, Train accuracy: 0.831024, Val accuracy: 0.827998\nAverage loss: 1.027060, Train accuracy: 0.831417, Val accuracy: 0.824449\nAverage loss: 1.027180, Train accuracy: 0.832133, Val accuracy: 0.825473\nbest validation accuracy achieved: 0.8260869565217391\nTraining for learning rate = 0.0001, gamma = 0.7, step = 2\nAverage loss: 1.029785, Train accuracy: 0.828140, Val accuracy: 0.828544\nAverage loss: 1.028430, Train accuracy: 0.829369, Val accuracy: 0.824995\nAverage loss: 1.027220, Train accuracy: 0.829898, Val accuracy: 0.825677\nAverage loss: 1.023921, Train accuracy: 0.831519, Val accuracy: 0.827657\nAverage loss: 1.022448, Train accuracy: 0.832577, Val accuracy: 0.828681\nbest validation accuracy achieved: 0.8286806361340523\nTraining for learning rate = 0.0001, gamma = 0.9, step = 1\nAverage loss: 1.024645, Train accuracy: 0.830853, Val accuracy: 0.823766\nAverage loss: 1.023279, Train accuracy: 0.830137, Val accuracy: 0.827998\nAverage loss: 1.021574, Train accuracy: 0.831655, Val accuracy: 0.827247\nAverage loss: 1.020632, Train accuracy: 0.831843, Val accuracy: 0.829227\nAverage loss: 1.018295, Train accuracy: 0.833413, Val accuracy: 0.832708\nbest validation accuracy achieved: 0.8327076650058016\nTraining for learning rate = 0.0001, gamma = 0.9, step = 2\nAverage loss: 1.020448, Train accuracy: 0.831843, Val accuracy: 0.828612\nAverage loss: 1.019672, Train accuracy: 0.832133, Val accuracy: 0.826565\nAverage loss: 1.017434, Train accuracy: 0.832099, Val accuracy: 0.823357\nAverage loss: 1.015636, Train accuracy: 0.833908, Val accuracy: 0.820217\nAverage loss: 1.015084, Train accuracy: 0.834880, Val accuracy: 0.831274\nbest validation accuracy achieved: 0.8327076650058016\nTraining for learning rate = 1e-05, gamma = 0.2, step = 1\nAverage loss: 1.007052, Train accuracy: 0.838464, Val accuracy: 0.835916\nAverage loss: 1.005932, Train accuracy: 0.839027, Val accuracy: 0.834755\nAverage loss: 1.007423, 
Train accuracy: 0.839249, Val accuracy: 0.832708\nAverage loss: 1.006988, Train accuracy: 0.838617, Val accuracy: 0.833117\nAverage loss: 1.005612, Train accuracy: 0.839419, Val accuracy: 0.832503\nbest validation accuracy achieved: 0.8327076650058016\nTraining for learning rate = 1e-05, gamma = 0.2, step = 2\nAverage loss: 1.007944, Train accuracy: 0.839010, Val accuracy: 0.831684\n"
],
[
"# Как всегда, в конце проверяем на test set\ntest_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)\ntest_accuracy = compute_accuracy(nn_model, test_loader)\nprint(\"Test accuracy: %2.4f\" % test_accuracy)",
"Test accuracy: 0.8144\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecd770c771849b53edb70ff4a9028f9aed45be8e | 23,218 | ipynb | Jupyter Notebook | xgboost/MushroomClassification/mushroom_data_preparation.ipynb | thiago1080/sagemaker | 0e59c8052f5a3ae7067e9ee5afe1aa3ac023407e | [
"Apache-2.0"
] | null | null | null | xgboost/MushroomClassification/mushroom_data_preparation.ipynb | thiago1080/sagemaker | 0e59c8052f5a3ae7067e9ee5afe1aa3ac023407e | [
"Apache-2.0"
] | null | null | null | xgboost/MushroomClassification/mushroom_data_preparation.ipynb | thiago1080/sagemaker | 0e59c8052f5a3ae7067e9ee5afe1aa3ac023407e | [
"Apache-2.0"
] | null | null | null | 31.08166 | 396 | 0.332371 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn import preprocessing",
"_____no_output_____"
]
],
[
[
"<h2>Mushroom Classification Dataset - All Categorical Features</h2>\n\nInput Features: \n'cap-shape', 'cap-surface', 'cap-color', 'bruises','odor', 'gill-attachment', 'gill-spacing', 'gill-size', 'gill-color','stalk-shape', 'stalk-root', 'stalk-surface-above-ring','stalk-surface-below-ring', 'stalk-color-above-ring','stalk-color-below-ring', 'veil-type', 'veil-color', 'ring-number', 'ring-type', 'spore-print-color', 'population', 'habitat'<br>\n\nTarget Feature:<br>\n'class_edible'<br>\n\nObjective: Predict if a mushroom is edible or inedible<br>\n<h4>Data source: https://archive.ics.uci.edu/ml/datasets/mushroom</h4>",
"_____no_output_____"
]
],
[
[
"columns = ['class_edible', 'cap-shape', 'cap-surface', 'cap-color', 'bruises',\n 'odor', 'gill-attachment', 'gill-spacing', 'gill-size', 'gill-color',\n 'stalk-shape', 'stalk-root', 'stalk-surface-above-ring',\n 'stalk-surface-below-ring', 'stalk-color-above-ring',\n 'stalk-color-below-ring', 'veil-type', 'veil-color', 'ring-number',\n 'ring-type', 'spore-print-color', 'population', 'habitat']",
"_____no_output_____"
],
[
"df = pd.read_csv('mushroom_data_all.csv')",
"_____no_output_____"
],
[
"df['class_edible'].value_counts()",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"# https://stackoverflow.com/questions/24458645/label-encoding-across-multiple-columns-in-scikit-learn\nfrom collections import defaultdict\nd = defaultdict(preprocessing.LabelEncoder)",
"_____no_output_____"
],
[
"# Encoding the variable\ndf = df.apply(lambda x: d[x.name].fit_transform(x))",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"d.keys()",
"_____no_output_____"
],
[
"for key in d.keys():\n print(key, d[key].classes_)",
"class_edible ['e' 'p']\ncap-shape ['b' 'c' 'f' 'k' 's' 'x']\ncap-surface ['f' 'g' 's' 'y']\ncap-color ['b' 'c' 'e' 'g' 'n' 'p' 'r' 'u' 'w' 'y']\nbruises ['f' 't']\nodor ['a' 'c' 'f' 'l' 'm' 'n' 'p' 's' 'y']\ngill-attachment ['a' 'f']\ngill-spacing ['c' 'w']\ngill-size ['b' 'n']\ngill-color ['b' 'e' 'g' 'h' 'k' 'n' 'o' 'p' 'r' 'u' 'w' 'y']\nstalk-shape ['e' 't']\nstalk-root ['?' 'b' 'c' 'e' 'r']\nstalk-surface-above-ring ['f' 'k' 's' 'y']\nstalk-surface-below-ring ['f' 'k' 's' 'y']\nstalk-color-above-ring ['b' 'c' 'e' 'g' 'n' 'o' 'p' 'w' 'y']\nstalk-color-below-ring ['b' 'c' 'e' 'g' 'n' 'o' 'p' 'w' 'y']\nveil-type ['p']\nveil-color ['n' 'o' 'w' 'y']\nring-number ['n' 'o' 't']\nring-type ['e' 'f' 'l' 'n' 'p']\nspore-print-color ['b' 'h' 'k' 'n' 'o' 'r' 'u' 'w' 'y']\npopulation ['a' 'c' 'n' 's' 'v' 'y']\nhabitat ['d' 'g' 'l' 'm' 'p' 'u' 'w']\n"
],
[
"df['class_edible'].value_counts()",
"_____no_output_____"
],
[
"df.to_csv('mushroom_encoded_all.csv'\n ,index=False)",
"_____no_output_____"
]
],
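`LabelEncoder` assigns integer codes in sorted order of the observed classes, which is why `class_edible` maps `'e'` to 0 and `'p'` to 1 in the output above. A dependency-free sketch of the same fit/transform behavior (`fit_transform_labels` is a hypothetical helper, shown only to make the mapping explicit):

```python
def fit_transform_labels(values):
    """Encode categorical values as integers in sorted-class order."""
    classes = sorted(set(values))
    code = {c: i for i, c in enumerate(classes)}
    return classes, [code[v] for v in values]

classes, encoded = fit_transform_labels(['p', 'e', 'p', 'e', 'e'])
# classes == ['e', 'p'], encoded == [1, 0, 1, 0, 0]
```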
[
[
"## Training and Validation Set\n### Target Variable as first column followed by input features:\n'class_edible', 'cap-shape', 'cap-surface', 'cap-color', 'bruises',\n 'odor', 'gill-attachment', 'gill-spacing', 'gill-size', 'gill-color',\n 'stalk-shape', 'stalk-root', 'stalk-surface-above-ring',\n 'stalk-surface-below-ring', 'stalk-color-above-ring',\n 'stalk-color-below-ring', 'veil-type', 'veil-color', 'ring-number',\n 'ring-type', 'spore-print-color', 'population', 'habitat'\n### Training, Validation files do not have a column header",
"_____no_output_____"
]
],
[
[
"# Training = 70% of the data\n# Validation = 30% of the data\n# Randomize the datset\nnp.random.seed(5)\nl = list(df.index)\nnp.random.shuffle(l)\ndf = df.iloc[l]",
"_____no_output_____"
],
[
"rows = df.shape[0]\ntrain = int(.7 * rows)\ntest = rows-train",
"_____no_output_____"
],
[
"rows, train, test",
"_____no_output_____"
],
[
"# Write Training Set\ndf[:train].to_csv('mushroom_train.csv'\n ,index=False,index_label='Row',header=False\n ,columns=columns)",
"_____no_output_____"
],
[
"# Write Validation Set\ndf[train:].to_csv('mushroom_validation.csv'\n ,index=False,index_label='Row',header=False\n ,columns=columns)",
"_____no_output_____"
],
[
"# Write Column List\nwith open('mushroom_train_column_list.txt','w') as f:\n f.write(','.join(columns))",
"_____no_output_____"
]
]
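The shuffle-then-slice split above can be wrapped in a small reusable helper. A sketch that uses a local `RandomState` rather than the global `np.random.seed`, so repeated calls stay reproducible (`shuffled_split` is a hypothetical helper name):

```python
import numpy as np

def shuffled_split(n_rows, train_frac=0.7, seed=5):
    # Mirrors the cells above: seed, shuffle the index, then slice 70/30.
    rng = np.random.RandomState(seed)
    idx = np.arange(n_rows)
    rng.shuffle(idx)
    n_train = int(train_frac * n_rows)
    return idx[:n_train], idx[n_train:]

train_idx, val_idx = shuffled_split(10)  # 7 training rows, 3 validation rows
```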
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd7799060658501b960b8fb90694ca11d2c54e4 | 412,711 | ipynb | Jupyter Notebook | lab1.ipynb | Bhargavcheekati/IBM-Quantum-Challenge---Africa-2021 | d3ec4eb78aef63e58eb183c357a6848b98680e12 | [
"MIT"
] | null | null | null | lab1.ipynb | Bhargavcheekati/IBM-Quantum-Challenge---Africa-2021 | d3ec4eb78aef63e58eb183c357a6848b98680e12 | [
"MIT"
] | null | null | null | lab1.ipynb | Bhargavcheekati/IBM-Quantum-Challenge---Africa-2021 | d3ec4eb78aef63e58eb183c357a6848b98680e12 | [
"MIT"
] | null | null | null | 63.028558 | 53,860 | 0.739908 | [
[
[
"<h1>IBM Quantum Challenge Africa 2021</h1>\n<p style=\"font-size:xx-large;\">Introduction and the Crop-Yield Problem</p>",
"_____no_output_____"
],
[
"Quantum Computing has the potential to revolutionize computing, as it can solve problems that are not possible to solve on a classical computer. This extra ability that quantum computers have is called quantum advantage. To achieve this goal, the world needs passionate and competent end-users: those who know how to apply the technology to their field.\n\nIn this challenge you will be exposed, at a high-level, to quantum computing through Qiskit. As current or future users of quantum computers, you need to know what problems are appropriate for quantum computation, how to structure the problem model/inputs so that they are compatible with your chosen algorithm, and how to execute a given algorithm and quantum solution to solve the problem.\n\nThis is the first notebook for the IBM Quantum Challenge Africa. Before starting here, ensure you have completed the Week 0 content in preparation for the following exercises.",
"_____no_output_____"
],
[
"## Initialization",
"_____no_output_____"
],
[
"To ensure the demonstrations and exercises have the required Python modules and libraries, run the following cell before continuing.",
"_____no_output_____"
]
],
[
[
"# Import auxiliary libraries\nimport numpy as np\n\n# Import Qiskit\nfrom qiskit import IBMQ, Aer\nfrom qiskit.algorithms import QAOA, VQE, NumPyMinimumEigensolver\nfrom qiskit.algorithms.optimizers import COBYLA\nfrom qiskit.utils import QuantumInstance, algorithm_globals\nfrom qiskit.providers.aer.noise.noise_model import NoiseModel\n\nfrom qiskit_optimization import QuadraticProgram\nfrom qiskit_optimization.algorithms import MinimumEigenOptimizer\nfrom qiskit_optimization.converters import QuadraticProgramToQubo\n\nimport qiskit.test.mock as Fake",
"_____no_output_____"
]
],
[
[
"## Table of Contents",
"_____no_output_____"
],
[
"The notebook is structured as follows:",
"_____no_output_____"
],
[
"1. Initialization\n2. Table of Contents\n3. Qiskit and its parts\n4. Setting up the Qiskit Environment\n5. Quadratic Problems\n6. Crop-Yield Problem as a Quadratic Problem\n7. Solving the Crop-Yield Problem using Quantum Computing\n8. Simulating a Real Quantum Computer for the Crop-Yield Problem",
"_____no_output_____"
],
[
"## Qiskit and its parts",
"_____no_output_____"
],
[
"Qiskit is divided into multiple modules for different purposes, with Terra at the core of the Qiskit ecosystem. It helps to be familiar with what each module can do and whether you will need to utilize the software contained within them for your specific problem. There are four modules that deal with the application of quantum computing; with tools and algorithms built specifically for their fields.\n\nHave a quick look at the [Qiskit documentation](https://qiskit.org/overview) and this IBM Research [blog post](https://www.ibm.com/blogs/research/2021/04/qiskit-application-modules/) to see what these modules are called. Once you have done so, you can complete the first exercise of the challenge: replace the ficticious module names, in the python cell below, with the correct module names. Though it would be fun to have a Qiskit Gaming module, it has not yet been developed. If you are interested in contributing to the open-source Qiskit community, have a look at the [contribution guide](https://qiskit.org/documentation/contributing_to_qiskit.html).",
"_____no_output_____"
],
[
"### Exercise 1a: Qiskit Applications Modules",
"_____no_output_____"
]
],
[
[
"# Definitely real Qiskit module names\nqiskit_module_names = [\n \"Qiskit Nature\",\n \"Qiskit Finance\" , \n \"Qiskit Optimization\", \n \"Qiskit Machine Learning\"\n]",
"_____no_output_____"
]
],
[
[
"Run the following python cell to check if you have the correct module names.",
"_____no_output_____"
]
],
[
[
"from qc_grader import grade_ex1a\n\ngrade_ex1a(qiskit_module_names)",
"Submitting your answer for ex1/partA. Please wait...\nCongratulations 🎉! Your answer is correct and has been submitted.\n"
]
],
[
[
"### Qiskit Applications Modules",
"_____no_output_____"
],
[
"In this notebook, we will use the `Qiskit Optimization` applications module for a specific problem to illustrate how\n\n1. classical and mathematical problem definitions are represented in Qiskit,\n2. some quantum algorithms are defined,\n3. to execute a quantum algorithm for a given problem definition,\n4. and algorithms and Qiskit problems are executed on real and simulated quantum computers.\n\nWe will also be using Qiskit Terra and Aer as they provide the foundation of Qiskit and high-performance quantum simulators respectively.",
"_____no_output_____"
],
[
"## Quadratic Problems",
"_____no_output_____"
],
[
"Some computational problems can be formulated as quadratic equations such that the minimum of the quadratic is the optimal solution, if one exists. These problems are encountered in finance, agriculture, operations and production management, and economics.\n\nQuadratic programming is also used to identify an optimal financial portfolio with minimum risk and to optimize the layout of production components in a factory to minimize the travel distance of resources. This notebook focuses on agriculture as it is a relevant application of quantum computing to problems facing the African continent. However, all of these applications share two common characteristics: the system can be modelled as a quadratic equation and the system variables may be constrained, with their values limited to within a given range.\n\n---\n\nQuadratic problems take on the following structure. Given a vector of $n$ variables $x\\in\\mathbb{R}^n$, the quadratic function to minimize is as follows.\n\n$$\n\\begin{align}\n\\text{minimize}\\quad & f\\left(x\\right)=\\frac{1}{2}x^\\top{}\\mathbf{Q}x + c^\\top{}x &\\\\\n\\text{subject to}\\quad & \\mathbf{A}x\\leq{}b&\\\\\n& x^\\top{}\\mathbf{Q}_ix + c_{i}^\\top{}x\\leq{}r_i,\\quad&\\forall{}i\\in[1,k_q]\\\\\n& l_i\\leq{}x_i\\leq{}u_i,\\quad&\\forall{}i\\in[1,k_l]\\\\\n\\end{align}\n$$\n\n$\\mathbf{Q}$, $\\mathbf{Q}_i$, and $\\mathbf{A}$ are $n\\times{}n$ symmetric matrices. $c$ and $c_i$ are $n\\times{}1$ column vectors. $\\mathbf{Q}_i$, $\\mathbf{A}$, $c_i$, $l_i$, and $u_i$ define constraints on the variables in $x$. The quadratic equation at the core of the quadratic problem is found by multiplying out the matrices in the minimization function. Though '$\\leq{}$' is used in the constraint equations above, any relational operator may be used for any number of constraints: i.e. \"$<$\", \"$=$\", \"$>$\", \"$\\geq$\", or \"$\\leq$\".\n\nA valid solution to the quadratic must satisfy all of the problem's constraints. Examples of some constraints are given below. The first two are linear constraints whereas the third example is a quadratic constraint.\n\n$$ x_1 + x_4 \\leq{} 10$$\n\n$$ x_2 - 3x_6 = 10$$\n\n$$x_1x_2 - 4x_3x_4 + x_5 \\leq{} 15 $$\n",
"_____no_output_____"
],
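[
"As a sketch of the matrix form above, the objective can be evaluated directly with NumPy. The values of `Q` and `c` below are purely illustrative, not taken from any particular problem:

```python
import numpy as np

# Illustrative (made-up) coefficients for a 2-variable quadratic objective
Q = np.array([[2.0, -1.0],
              [-1.0, 2.0]])  # symmetric quadratic-term matrix
c = np.array([0.0, -6.0])    # linear-term vector

def f(x):
    # f(x) = 1/2 x^T Q x + c^T x
    return 0.5 * x @ Q @ x + c @ x

print(f(np.array([1.0, 2.0])))  # -9.0
```

A solver's job is then to find the `x`, within the constraints, that minimizes this value.",
"_____no_output_____"
],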
[
"Qiskit has Python code that allows you to implement a quadratic problem as a `QuadraticProgram` instance. Though our definition above used matrices to define the coefficients, `QuadraticProgram` allows you to define the objective (the function over which to minimize) directly. To illustrate how to use `QuadraticProgram`, we will use the following quadratic problem definition, with three integer variables.\n\n$$\\begin{align}\n\\text{minimize}\\quad{} & f(x)=(x_1)^2 + (x_2)^2 - x_1x_2 - 6x_3 \\\\\n\\text{subject to}\\quad{} & x_1 + x_2 = 2 \\\\\n & x_2x_3 \\geq{} 1 \\\\\n & -2 \\leq{} x_2 \\leq{} 2 \\\\\n & -2 \\leq{} x_3 \\leq{} 4 \\\\\n\\end{align}$$",
"_____no_output_____"
],
[
"The figure below shows the constraints on $x_1$ and $x_3$, with some simplification. The shaded area denotes valid values for $x_1$ and $x_3$, within which $f(x)$ must be minimized.",
"_____no_output_____"
],
[
"<img src=\"quadratic_example.svg\" width=512px/>",
"_____no_output_____"
],
[
"In the following code, the above quadratic problem is defined as a `QuadraticProgram` instance. Have a look at the [Qiskit documentation for `QuadraticProgram`](https://qiskit.org/documentation/stubs/qiskit.optimization.QuadraticProgram.html), as it can be very useful in helping you understand its interface.\n\nThe quadratic to minimize, called an objective, is implemented using dictionaries. This allows you, the developer, to explicitly define coefficients for specific variables and terms. The keys in the dictionaries are the variable names identifying a term in $f(x)$. For example, `(\"x_1\",\"x_2\")` is for $x_1x_2$. The values for each item are the coefficients for said terms. Terms that are subtracted in $f(x)$ must have a negative coefficient.",
"_____no_output_____"
]
],
[
[
"quadprog = QuadraticProgram(name=\"example 1\")\nquadprog.integer_var(name=\"x_1\", lowerbound=0, upperbound=4)\nquadprog.integer_var(name=\"x_2\", lowerbound=-2, upperbound=2)\nquadprog.integer_var(name=\"x_3\", lowerbound=-2, upperbound=4)\nquadprog.minimize(\n linear={\"x_3\": -6},\n quadratic={(\"x_1\", \"x_1\"): 1, (\"x_2\", \"x_2\"): 1, (\"x_1\", \"x_2\"): -1},\n)\nquadprog.linear_constraint(linear={\"x_1\": 1, \"x_2\": 1}, sense=\"=\", rhs=2)\nquadprog.quadratic_constraint(quadratic={(\"x_2\", \"x_3\"): 1}, sense=\">=\", rhs=1)",
"_____no_output_____"
]
],
[
[
"A `QuadraticProgram` can have three types of variables: binary, integer, and continuous. The Qiskit implementation of the algorithms we are going to use currently only support binary and integer variables. There are other algorithms that allow for the simulation of continuous variables, but they are not covered in this notebook. If you want to know more about them though, have a look at this Qiskit tutorial on [algorithms to solve mixed-variable quadratic problems](https://qiskit.org/documentation/tutorials/optimization/5_admm_optimizer.html).",
"_____no_output_____"
],
[
"We can visualize the `QuadraticProgram` as an LP string, a portable text-based format that represents the model as a **L**inear **P**rogramming problem.",
"_____no_output_____"
]
],
[
[
"print(quadprog.export_as_lp_string())",
"\\ This file has been generated by DOcplex\n\\ ENCODING=ISO-8859-1\n\\Problem name: example 1\n\nMinimize\n obj: - 6 x_3 + [ 2 x_1^2 - 2 x_1*x_2 + 2 x_2^2 ]/2\nSubject To\n c0: x_1 + x_2 = 2\n q0: [ x_2*x_3 ] >= 1\n\nBounds\n x_1 <= 4\n -2 <= x_2 <= 2\n -2 <= x_3 <= 4\n\nGenerals\n x_1 x_2 x_3\nEnd\n\n"
]
],
[
[
"Any optimization problem that can be represented as a single second-order equation, in that the greatest exponent of any term is 2, can be transformed into a quadratic problem or program of the form given above. The above example is arbitrary and does not necessarily represent a given real-world problem. The main problem of this notebook focuses on optimizing the yield of a farm, though only the problem definition need be changed to apply this technique to other quadratic problem applications.",
"_____no_output_____"
],
[
"## Crop-Yield Problem as a Quadratic Problem",
"_____no_output_____"
],
[
"To show how to solve your quadratic program using a quantum computer, we will use two algorithms to solve the crop-yield problem. It is a common need to optimize the crops and management of a farm to reduce risk while increasing profits. One of the big challenges facing Africa and the whole world is how to produce enough food for everyone. The problem here focuses not on profits but on the tonnage of crops harvested. Imagine you have a farm with three hectares of land suitable for farming. You need to choose which crops to plant from a selection of four. Furthermore, you also need to determine how many hectares of each you should plant. The four crops you can plant are wheat, soybeans, maize, and a push-pull crop. The fourth cannot be sold once harvested but it can help increase the yield of the other crops.\n\n<table>\n <tr>\n <th>\n <img src=\"farm_template.svg\" width=\"384px\"/>\n </th>\n </tr>\n <tr>\n <th>\n Our beautiful three hectare farm\n </th>\n </tr>\n</table>\n\n<table>\n <tr>\n <th>\n <img src=\"crop_wheat.svg\" width=\"256px\"/>\n </th>\n <th>\n <img src=\"crop_soybeans.svg\" width=\"256px\"/>\n </th>\n <th>\n <img src=\"crop_maize.svg\" width=\"256px\"/>\n </th>\n <th>\n <img src=\"crop_pushpull.svg\" width=\"256px\"/>\n </th>\n </tr>\n <tr>\n <th>\n Wheat\n </th>\n <th>\n Soybeans\n </th>\n <th>\n Maize\n </th>\n <th>\n Push-Pull\n </th>\n<!-- <th>\n <p align=\"right\" style=\"height:32px;padding-top:10px;\">Wheat<img src=\"wheat.svg\" width=\"32px\" style=\"float:left;margin-top:-10px;margin-right:8px;\"/></p>\n </th>\n <th>\n <p style=\"height:32px;padding-top:10px;\">Soybeans<img src=\"soybeans.svg\" width=\"32px\" style=\"float:left;margin-top:-10px;margin-right:8px;\"/></p>\n </th>\n <th>\n <p style=\"height:32px;padding-top:10px;\">Maize<img src=\"maize.svg\" width=\"32px\" style=\"float:left;margin-top:-10px;margin-right:8px;\"/></p>\n </th>\n <th>\n <p style=\"height:32px;padding-top:10px;\">Push-Pull<img src=\"pushpull.svg\" 
width=\"32px\" style=\"float:left;margin-top:-10px;margin-right:8px;\"/></p>\n </th> -->\n </tr>\n</table>\n\nThere are three types of farming methods we can use: monocropping, intercropping, and push-pull farming. These are shown below. Monocropping is where only one crop is farmed. This can make the farm susceptible to disease and pests as the entire yield would be affected. In some instances, growing two different plants near each other will increase the yield of both, though sometimes it can decrease the yield. Intercropping is the process where different plants are chosen to _increase_ the yield. Push-Pull crops are pairs of plants that repel and attract pests, respectively. Integrating them into a larger farm increases the yield of harvested food but at the cost of not necessarily being able to use the harvest of Push-Pull crops as part of the total yield. This is because the Push-Pull crop may not be usable or even edible.\n\n<table>\n <tr>\n <th>\n <img src=\"farm_mono.svg\" width=\"256px\"/>\n </th>\n <th>\n <img src=\"farm_intercrop.svg\" width=\"256px\"/>\n </th>\n <th>\n <img src=\"farm_intercrop_pushpull.svg\" width=\"256px\"/>\n </th>\n </tr>\n <tr>\n <th>\n Monocropping\n </th>\n <th>\n Intercropping\n </th>\n <th>\n Push-Pull farming\n </th>\n </tr>\n</table>",
"_____no_output_____"
],
[
"---\nOnly in certain cases can quadratic programming problems be solved easily using classical methods. In their general sense, they are NP-Hard; a class of problems that is difficult to solve using classical computational methods. In fact, the best classical methods for solving these problems involve heuristics, techniques that find approximate solutions. Quantum computers have been shown to provide significant speed-up and better scaling for some of these problems. The crop-yield problem is a combinatorial problem, in that the solution is a specific combination of input parameters. Though the problem shown here is small enough to solve classically, larger problems become intractable on a classical computer owing to the number of combinations over which to optimize.\n\n\nSolving the above problem using quantum computing involves three components:\n\n1. Defining the problem\n2. Defining the algorithm\n3. Executing the algorithm on a backend\n\nMany problems in Qiskit follow this structure as the algorithm you use can typically be swapped for another without significantly redefining your problem. Execution on different backends is the easiest, as long as the device has sufficient resources. The first component is given below, with the second and third in sections 1.5 and 1.6.",
"_____no_output_____"
],
[
"### Define the Crop-Yield problem",
"_____no_output_____"
],
[
"The following problem is defined for you but the `QuadraticProgram` is not implemented. Your task at the end of this section is to implement the `QuadraticProgram` for the given crop-yield model.\n\nYour farm has three hectares available, $3~ha$, with each crop taking up $0~ha$ or $1~ha$. We define the yield of the farm as a quadratic function where the influence of each crop on the others is represented by the quadratic coefficients. The variables in this quadratic are the number of hectares of each crop to be planted, and the objective function to maximize is the yield of usable crops in tons. Here is the mathematical model for the problem. In this scenario, all crops increase the yield of other crops. However, the problem to solve is which crops to use to achieve the maximum yield.",
"_____no_output_____"
],
[
"<img src=\"qubo_problem_graphical_variables.svg\" width=\"534px\"/>\n\nThe farm yield, in tons, is modelled as a quadratic equation, given below, with constraints on the hectares used by each crop and the total hectares available. Each crop is shown using a different symbol, as shown above, representing the number of hectares to be planted of said plant. Note that we can only plant up to 1 hectare of each crop and that our farm is constrained to 3 hectares.\n\n<img src=\"qubo_problem_graphical.svg\" width=\"400px\"/>",
"_____no_output_____"
],
[
"----\n#### Non-graphical notation\nHere is a non-graphical representation of the above model, if you are struggling to interpret the above graphic.\n\n$$\n\\begin{align}\n \\text{maximize} \\quad & 2(\\operatorname{Wheat}) + \\operatorname{Soybeans} + 4(\\operatorname{Maize}) \\\\\n & + 2.4(\\operatorname{Wheat}\\times\\operatorname{Soybeans}) \\\\ & + 4(\\operatorname{Wheat}\\times\\operatorname{Maize})\\\\\n &+ 4(\\operatorname{Wheat}\\times\\operatorname{PushPull}) \\\\ & + 2(\\operatorname{Soybeans}\\times\\operatorname{Maize}) \\\\\n & + (\\operatorname{Soybeans}\\times\\operatorname{PushPull}) \\\\ & + 5(\\operatorname{Maize}\\times\\operatorname{PushPull})\n\\end{align}\n$$\n\n$$\n\\begin{align}\n\\text{subject to} \\quad & \\operatorname{Wheat} + \\operatorname{Soybeans} + \\operatorname{Maize} + \\operatorname{PushPull} \\leq{} 3\\\\\n& 0\\leq{}\\operatorname{Wheat}\\leq{}1\\\\\n& 0\\leq{}\\operatorname{Soybeans}\\leq{}1\\\\\n& 0\\leq{}\\operatorname{Maize}\\leq{}1\\\\\n& 0\\leq{}\\operatorname{PushPull}\\leq{}1\n\\end{align}\n$$",
"_____no_output_____"
],
[
"### Exercise 1b: Create Quadratic Program from crop-yield variables",
"_____no_output_____"
],
[
"Your first exercise is to create a `QuadraticProgram` that represents the above model. Write your implementation in the `cropyield_quadratic_program` function below. Remember to use the example as a guide, and to look at the [QuadraticProgram documentation](https://qiskit.org/documentation/tutorials/optimization/1_quadratic_program.html?highlight=quadraticprogram) and [Qiskit reference](https://qiskit.org/documentation/stubs/qiskit.optimization.QuadraticProgram.html?highlight=quadraticprogram#qiskit.optimization.QuadraticProgram).\n\n**Note:** Ensure your variables are named `Wheat`, `Soybeans`, `Maize,` and `PushPull`. This is necessary for the grader to work.",
"_____no_output_____"
]
],
[
[
"def cropyield_quadratic_program():\n cropyield = QuadraticProgram(name=\"Crop Yield\")\n ##############################\n # Put your implementation here\n cropyield.binary_var(name=\"Wheat\")\n cropyield.binary_var(name=\"Soybeans\")\n cropyield.binary_var(name=\"Maize\")\n cropyield.binary_var(name=\"PushPull\")\n cropyield.maximize(\n linear={\"Wheat\": 2,\"Soybeans\" : 1 ,\"Maize\" : 4},\n quadratic={(\"Wheat\", \"Soybeans\"): 2.4, (\"Wheat\", \"Maize\"): 4, (\"Wheat\", \"PushPull\"): 4,(\"Soybeans\",\"Maize\"): 2, (\"Soybeans\", \"PushPull\"): 1, (\"Maize\", \"PushPull\"):5}\n )\n cropyield.linear_constraint(linear={\"Wheat\": 1, \"Soybeans\": 1,\"Maize\": 1, \"PushPull\": 1}, sense=\"<=\", rhs=3)\n #\n ##############################\n return cropyield",
"_____no_output_____"
],
[
"cropyield = cropyield_quadratic_program()\ncropyield",
"_____no_output_____"
]
],
[
[
"Once you feel your implementation is correct, you can grade your solution in the following cell.",
"_____no_output_____"
]
],
[
[
"# Execute this cell to grade your solution\nfrom qc_grader import grade_ex1b\n\ncropyield = cropyield_quadratic_program()\ngrade_ex1b(cropyield)",
"Submitting your answer for ex1/partB. Please wait...\nCongratulations 🎉! Your answer is correct and has been submitted.\n"
]
],
[
[
"### Converting QuadraticPrograms",
"_____no_output_____"
],
[
"If we want to estimate how many qubits this quadratic program requires, we can convert it to an Ising model and print the `num_qubits` parameter. An [Ising model](https://qiskit.org/documentation/apidoc/qiskit.optimization.applications.ising.html?highlight=ising) is a special system model type that is well suited for quantum computing. Though we will not be using an Ising model explicitly, the algorithms and Qiskit classes we are using do this conversion internally.",
"_____no_output_____"
]
],
[
[
"# Estimate the number of qubits required\nising_operations, _ = (\n QuadraticProgramToQubo()\n .convert(\n cropyield,\n )\n .to_ising()\n)\nprint(f\"Number of qubits required is {ising_operations.num_qubits}\")",
"Number of qubits required is 6\n"
]
],
[
[
"Even though quadratic programs are widely used in Qiskit, the algorithms we are going to use require binary variables. Qiskit provides an automated method for converting our integer variables into binary variables. The binary-only form is called a _Quadratic Unconstrained Binary Optimization_ problem, or `QUBO`. The conversion is done using `QuadraticProgramToQubo` from the Qiskit optimization module. Each integer variable, and its associated constraints, is transformed into binary variables.\n\nRun the following code to see how the QUBO version of the crop-yield problem looks. Notice how the quadratic becomes longer and more variables are added. This is to account for the bits in each variable, including the constraints. When we run our quantum algorithm to solve this QuadraticProgram, it is converted to a QUBO instance implicitly within the Qiskit algorithm implementation.",
"_____no_output_____"
]
],
[
[
"QuadraticProgramToQubo().convert(cropyield)",
"_____no_output_____"
]
],
[
[
"## Solving the Crop-Yield Problem using a Quantum Computer",
"_____no_output_____"
],
[
"There are three ways to _run_ a quantum algorithm using Qiskit:\n1. on a simulator locally on your own machine\n2. on a simulator hosted in the cloud by IBM\n3. on an actual quantum computer accessible through IBM Quantum.\n\nAll of these are called backends. In all cases, the _backend_ can easily be swapped for another as long as the simulator or device has appropriate resources (number of qubits etc.). In the code below, we show how to access different backends. We demonstrate this using the local Aer QASM simulator from Qiskit. The Aer QASM simulator models the physical properties of a real quantum computer so that you, researchers, and developers can test quantum computing code and algorithms before running them on real devices.",
"_____no_output_____"
]
],
[
[
"# We will use the Aer provided QASM simulator\nbackend = Aer.get_backend(\"qasm_simulator\")\n\n# Given we are using a simulator, we will fix the algorithm seed to ensure our results are reproducible\nalgorithm_globals.random_seed = 271828",
"_____no_output_____"
]
],
[
[
"We would like to compare our quantum solution to that obtained classically. Secondly, we also want to try different algorithms. The following three subsections show how these different methods for solving the Crop-Yield problem are implemented in Qiskit. The two algorithms used are the [_Quantum Approximate Optimization Algorithm_](https://qiskit.org/documentation/stubs/qiskit.algorithms.QAOA.html?highlight=qaoa#qiskit.algorithms.QAOA) `QAOA` and the [_Variational Quantum Eigensolver_](https://qiskit.org/documentation/stubs/qiskit.algorithms.VQE.html?highlight=vqe#qiskit.algorithms.VQE) `VQE`.\n\nBoth of these algorithms are hybrid, in that they use a classical _optimizer_ to alter parameters that affect the quantum computation. The VQE algorithm is used to find the lowest eigenvalue of a matrix, which can describe a system to optimize. The QAOA also finds the lowest eigenvalue, but achieves this in a different way to VQE. Both are very popular algorithms, with varying applications and strengths.",
"_____no_output_____"
],
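[
"To build some intuition for this hybrid loop, here is a minimal classical sketch (deliberately not Qiskit code): a simple `energy` function stands in for the expectation value a quantum circuit would return, and a naive shrinking-step search plays the role of the classical optimizer tuning the circuit parameter:

```python
import math

def energy(theta):
    # Stand-in for the cost (expectation value) measured on a quantum device
    return math.cos(theta) + 1.0  # minimum of 0 at theta = +/- pi

# Naive shrinking-step search acting as the classical optimizer
theta, step = 0.0, 1.0
for _ in range(50):
    # Try a step in each direction and keep the best candidate
    theta = min([theta - step, theta, theta + step], key=energy)
    step *= 0.9

print(theta, energy(theta))
```

In `QAOA` and `VQE`, the circuit parameters and the measured cost play these two roles, with the classical optimizer iterating until the cost converges.",
"_____no_output_____"
],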
[
"### Classical Solution",
"_____no_output_____"
],
[
"The classical solution to the crop-yield problem can easily be found using NumPy and Qiskit. The QUBO problem can be solved by finding the minimum eigenvalue of its underlying matrix representation. Fortunately, we don't have to know what this matrix looks like. We only need to pass it to a `MinimumEigensolver` and `MinimumEigenOptimizer`.\n\nThe optimizer translates the provided problem into a parameterised representation which is then passed to the solver. By optimizing the parameters, the solver will eventually give the minimum eigenvalue for the parameterised representation and thus the solution to the original problem. Here we use a classical solver from NumPy, the `NumPyMinimumEigensolver`.",
"_____no_output_____"
]
],
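[
[
"Before running the solver, it helps to see what 'solving the QUBO' means concretely. For a binary problem, the minimum eigenvalue of the underlying (diagonal) matrix representation corresponds to the best objective value over all binary assignments, which for a tiny, made-up QUBO we can simply enumerate:

```python
import itertools
import numpy as np

# Illustrative (made-up) QUBO matrix; minimise x^T Q x over binary x
Q = np.array([[-1.0, 2.0],
              [2.0, -3.0]])

best_x, best_val = None, float('inf')
for bits in itertools.product([0, 1], repeat=2):
    x = np.array(bits)
    val = float(x @ Q @ x)
    if val < best_val:
        best_x, best_val = bits, val

print(best_x, best_val)  # (0, 1) -3.0
```

This brute-force enumeration scales exponentially in the number of variables, which is exactly why eigensolvers, classical or quantum, are used instead.",
"_____no_output_____"
]
],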
[
[
"def get_classical_solution_for(quadprog: QuadraticProgram):\n # Create solver\n solver = NumPyMinimumEigensolver()\n\n # Create optimizer for solver\n optimizer = MinimumEigenOptimizer(solver)\n\n # Return result from optimizer\n return optimizer.solve(quadprog)",
"_____no_output_____"
]
],
[
[
"If we execute the classical method for our crop-yield problem, we get a valid solution that maximises the yield.",
"_____no_output_____"
]
],
[
[
"# Get classical result\nclassical_result = get_classical_solution_for(cropyield)\n\n# Format and print result\nprint(\"Solution found using the classical method:\\n\")\nprint(f\"Maximum crop-yield is {classical_result.fval} tons\")\nprint(f\"Crops used are: \")\n\n_crops = [v.name for v in cropyield.variables]\nfor cropIndex, cropHectares in enumerate(classical_result.x):\n print(f\"\\t{cropHectares} ha of {_crops[cropIndex]}\")",
"Solution found using the classical method:\n\nMaximum crop-yield is 19.0 tons\nCrops used are: \n\t1.0 ha of Wheat\n\t0.0 ha of Soybeans\n\t1.0 ha of Maize\n\t1.0 ha of PushPull\n"
]
],
[
[
"### QAOA Solution",
"_____no_output_____"
],
[
"To solve our problem using QAOA, we need only replace the classical solver with a `QAOA` class instance. Now that we are running a quantum algorithm, we need to tell the solver where to execute the quantum component. We use a `QuantumInstance` to store the backend information. The QAOA is an iterative algorithm, and thus is run multiple times with different internal parameters. The parameters are tuned classically during the optimization step of the computation by `optimizer`. If we leave `optimizer` as `None`, our algorithms will use the default optimization algorithm. To determine how many iterations there are, we define a callback function that runs for each iteration and stores the number of evaluations thus far. At the end of our algorithm execution, we return the result and the number of iterations.",
"_____no_output_____"
]
],
[
[
"def get_QAOA_solution_for(\n quadprog: QuadraticProgram, quantumInstance: QuantumInstance, optimizer=None,\n):\n _eval_count = 0\n\n def callback(eval_count, parameters, mean, std):\n nonlocal _eval_count\n _eval_count = eval_count\n\n # Create solver\n solver = QAOA(\n optimizer=optimizer, quantum_instance=quantumInstance, callback=callback,\n )\n\n # Create optimizer for solver\n optimizer = MinimumEigenOptimizer(solver)\n\n # Get result from optimizer\n result = optimizer.solve(quadprog)\n return result, _eval_count",
"_____no_output_____"
]
],
[
[
"If we execute the QAOA method for our crop-yield problem, we get the same result as the classical method, showing that 1) the quantum solution is correct and 2) that you now know how to use a quantum algorithm! 🌟",
"_____no_output_____"
]
],
[
[
"# Create a QuantumInstance\nsimulator_instance = QuantumInstance(\n backend=backend,\n seed_simulator=algorithm_globals.random_seed,\n seed_transpiler=algorithm_globals.random_seed,\n)\n\n# Get QAOA result\nqaoa_result, qaoa_eval_count = get_QAOA_solution_for(cropyield, simulator_instance)\n\n# Format and print result\nprint(\"Solution found using the QAOA method:\\n\")\nprint(f\"Maximum crop-yield is {qaoa_result.fval} tons\")\nprint(f\"Crops used are: \")\nfor cropHectares, cropName in zip(qaoa_result.x, qaoa_result.variable_names):\n print(f\"\\t{cropHectares} ha of {cropName}\")\n\nprint(f\"\\nThe solution was found within {qaoa_eval_count} evaluations of QAOA.\")",
"Solution found using the QAOA method:\n\nMaximum crop-yield is 19.0 tons\nCrops used are: \n\t1.0 ha of Wheat\n\t0.0 ha of Soybeans\n\t1.0 ha of Maize\n\t1.0 ha of PushPull\n\nThe solution was found within 3 evaluations of QAOA.\n"
]
],
[
[
"### VQE Solution",
"_____no_output_____"
],
[
"The `VQE` algorithm works in a very similar way to `QAOA`, not only from a mathematical-modelling and algorithmic perspective, but also programmatically. There is a quantum solver and a classical optimizer. The `VQE` instance is also iterative, and so we can measure how many iterations are needed to find a solution to the Crop-Yield problem.",
"_____no_output_____"
]
],
[
[
"def get_VQE_solution_for(\n quadprog: QuadraticProgram, quantumInstance: QuantumInstance, optimizer=None,\n):\n _eval_count = 0\n\n def callback(eval_count, parameters, mean, std):\n nonlocal _eval_count\n _eval_count = eval_count\n\n # Create solver and optimizer\n solver = VQE(\n optimizer=optimizer, quantum_instance=quantumInstance, callback=callback\n )\n\n # Create optimizer for solver\n optimizer = MinimumEigenOptimizer(solver)\n\n # Get result from optimizer\n result = optimizer.solve(quadprog)\n return result, _eval_count",
"_____no_output_____"
]
],
[
[
"And we should get the exact same answer as before.",
"_____no_output_____"
]
],
[
[
"# Create a QuantumInstance\nsimulator_instance = QuantumInstance(\n backend=backend,\n seed_simulator=algorithm_globals.random_seed,\n seed_transpiler=algorithm_globals.random_seed,\n)\n\n# Get VQE result\nvqe_result, vqe_eval_count = get_VQE_solution_for(cropyield, simulator_instance)\n\n# Format and print result\nprint(\"Solution found using the VQE method:\\n\")\nprint(f\"Maximum crop-yield is {vqe_result.fval} tons\")\nprint(f\"Crops used are: \")\nfor cropHectares, cropName in zip(vqe_result.x, vqe_result.variable_names):\n print(f\"\\t{cropHectares} ha of {cropName}\")\n\nprint(f\"\\nThe solution was found within {vqe_eval_count} evaluations of VQE\")",
"Solution found using the VQE method:\n\nMaximum crop-yield is 19.0 tons\nCrops used are: \n\t1.0 ha of Wheat\n\t0.0 ha of Soybeans\n\t1.0 ha of Maize\n\t1.0 ha of PushPull\n\nThe solution was found within 25 evaluations of VQE\n"
]
],
[
[
"### Exercise 1c: Classical and Quantum Computational Results",
"_____no_output_____"
],
[
"From the above computations you obtained the maximum crop-yield from each of the three methods, along with the number of evaluations used by QAOA and VQE. The maximum yield values should be the same. If your yield values aren't all the same, rerun the algorithms. Sometimes the optimization process can miss the correct answer because of the randomness used to initialize the algorithm parameters.\n\nRun the code cell below to see if the maximum yields you computed are correct.",
"_____no_output_____"
]
],
[
[
"from qc_grader import grade_ex1c\n\nmax_yield_qaoa = qaoa_result.fval\nmax_yield_vqe = vqe_result.fval\n\ngrade_ex1c(tonnage_qaoa=max_yield_qaoa, tonnage_vqe=max_yield_vqe)",
"Submitting your answer for ex1/partC. Please wait...\nCongratulations 🎉! Your answer is correct and has been submitted.\n"
]
],
[
[
"_You could always verify your result with the classical method, though this is only possible here given the size of the problem. Larger problems become more difficult to verify._",
"_____no_output_____"
],
[
"## Simulating a Real Quantum Computer for the Crop-Yield Problem",
"_____no_output_____"
],
[
"Sometimes one would want to _simulate_ a real quantum computer to see how the actual hardware may impact the performance of the algorithm. All quantum computers have an underlying architecture, different noise characteristics, and error rates. These three aspects impact how well the algorithm can perform on a given device. To test the impact a given quantum computer has on the QAOA instance, we can utilize a _fake_ instance of the device in Qiskit to tell our simulator what parameters to use. In this example we will be simulating `ibmq_johannesburg`, a device named after the city of Johannesburg in South Africa.",
"_____no_output_____"
]
],
[
[
"fake_device = Fake.FakeJohannesburg()",
"_____no_output_____"
]
],
[
[
"We can inspect what this device _looks_ like using the Qiskit Jupyter tools, shown below. You do not need to know about this structure to execute quantum programs on a device, but it is useful to visualize the parameters.",
"_____no_output_____"
]
],
[
[
"import qiskit.tools.jupyter\n\nfake_device",
"_____no_output_____"
]
],
[
[
"The three aforementioned components of a quantum computer are represented as a noise model, coupling map, and the basis gate set. The noise model is a representation of how the noise and errors in the computer behave. The coupling map and basis gate set are core to the architecture of the device. The coupling map represents how the physical qubits can interact whereas the basis gate set is analogous to the set of fundamental computational instructions we can use. You can see the coupling map in the above widget as the lines connecting each qubit in the architecture diagram.\n\nTo simulate `ibmq_johannesburg`, we must pass these three components to our Aer simulator.",
"_____no_output_____"
]
],
[
[
"# Create the noise model, which contains the basis gate set\nnoise_model = NoiseModel.from_backend(fake_device)\n\n# Get the coupling map\ncoupling_map = fake_device.configuration().coupling_map",
"_____no_output_____"
]
],
[
[
"Next we create a new `QuantumInstance` with these parameters",
"_____no_output_____"
]
],
[
[
"fake_instance = QuantumInstance(\n backend=backend,\n basis_gates=noise_model.basis_gates,\n coupling_map=coupling_map,\n noise_model=noise_model,\n seed_simulator=algorithm_globals.random_seed,\n seed_transpiler=algorithm_globals.random_seed,\n)",
"_____no_output_____"
]
],
[
[
"We can then execute the `QAOA` from before on this new _fake_ quantum device.",
"_____no_output_____"
]
],
[
[
"# Get QAOA result\nqaoa_result, qaoa_eval_count = get_QAOA_solution_for(cropyield, fake_instance)\n\n# Format and print result\nprint(\"Solution found using the QAOA method:\\n\")\nprint(f\"Maximum crop-yield is {qaoa_result.fval} tons\")\nprint(f\"Crops used are: \")\nfor cropHectares, cropName in zip(qaoa_result.x, qaoa_result.variable_names):\n print(f\"\\t{cropHectares} ha of {cropName}\")\n\nprint(f\"\\nThe solution was found within {qaoa_eval_count} evaluations of QAOA.\")",
"Solution found using the QAOA method:\n\nMaximum crop-yield is 19.0 tons\nCrops used are: \n\t1.0 ha of Wheat\n\t0.0 ha of Soybeans\n\t1.0 ha of Maize\n\t1.0 ha of PushPull\n\nThe solution was found within 3 evaluations of QAOA.\n"
]
],
[
[
"### Scaling of the Quantum Solution vs Classical",
"_____no_output_____"
],
[
"When we created our quadratic program for the crop-yield problem, we saw that the Ising model required 6 qubits. We had constrained our problem such that we could only plant up to 1 hectare per crop. However, we could change the model so that we can plant up to 3 hectares per crop, up to our maximum available farm area of 3 hectares.\n\nHow many qubits would this Ising model require?\n\n---\n\nFurthermore, what if we had more land to farm? We know that this problem is NP-Hard and thus classical solutions are mostly found using heuristics. This is the core reason why quantum computers are promising for solving these kinds of problems. But what are the resource requirements for the quantum solution, with a larger farm and crops that can be planted in more hectares?\n\nTo illustrate this, we've provided a function that returns the number of qubits required by the underlying Ising model for the Crop-Yield Problem. We then see the estimated number of qubits needed for different problem parameters. Feel free to modify the variables being used to see how the qubit resource requirements change.",
"_____no_output_____"
]
],
[
[
"# Function to estimate the number of qubits required\ndef estimate_number_of_qubits_required_for(max_hectares_per_crop, hectares_available):\n return int(\n 4 * np.ceil(np.log2(max_hectares_per_crop + 1))\n + np.ceil(np.log2(hectares_available + 1))\n )\n\n\n# Our new problem parameters\nhectares_available = 10\nmax_hectares_per_crop = 10\n\n# Retrieving the number of qubits required\nnumber_of_qubits_required = estimate_number_of_qubits_required_for(\n max_hectares_per_crop=max_hectares_per_crop, hectares_available=hectares_available\n)\n\nprint(\n f\"Optimizing a {hectares_available} ha farm with each crop taking up to {max_hectares_per_crop} ha each,\",\n f\"the computation is estimated to require {number_of_qubits_required} qubits.\",\n)",
"Optimizing a 10 ha farm with each crop taking up to 10 ha each, the computation is estimated to require 20 qubits.\n"
]
],
[
[
"The number of qubits required is related to the constraints in the quadratic program and how the integer variables are converted to binary variables. In fact, the scaling of the number of qubits, as a function of the hectares available, is logarithmic in nature, owing to this conversion.",
"_____no_output_____"
],
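[
"To see this logarithmic scaling concretely, the estimate can be reimplemented as a standalone sketch (the formula mirrors the `estimate_number_of_qubits_required_for` function above: four crops, each encoded in a number of binary variables logarithmic in the per-crop maximum, plus slack bits for the total-area constraint):

```python
import math

def qubits_needed(max_hectares_per_crop, hectares_available):
    # 4 crops x ceil(log2(h+1)) bits each, plus constraint slack bits
    return int(4 * math.ceil(math.log2(max_hectares_per_crop + 1))
               + math.ceil(math.log2(hectares_available + 1)))

for ha in [1, 3, 10, 100, 1000]:
    print(ha, qubits_needed(ha, ha))  # e.g. 10 -> 20, matching the estimate above
```

Doubling the farm size adds only a handful of qubits, which is what makes larger instances plausible targets for quantum hardware.",
"_____no_output_____"
],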
[
"## Running on real quantum hardware",
"_____no_output_____"
],
[
"To use the IBM Quantum platform is easy. First you need to load the account you enabled in the week 0 content. If you didn't complete this, follow this [quick guide](https://quantum-computing.ibm.com/lab/docs/iql/manage/account/ibmq) on connecting your IBM Quantum account with Qiskit in python and Jupyter.",
"_____no_output_____"
]
],
[
[
"IBMQ.load_account()",
"_____no_output_____"
]
],
[
[
"IBM Quantum backends are accessed through a provider, which manages the devices to which you have access. For this challenge, you have access to the new `ibm_perth` quantum computer! Typically, you would find your provider details under your [IBM Quantum account details](https://quantum-computing.ibm.com/account). Under your account you can see the different hubs, groups, and projects you are a part of. Qiskit allows us to retrieve a provider using just the hub, group, and project as follows:\n\n```python\nprovider = IBMQ.get_provider(hub=\"ibm-q\", group=\"open\", project=\"main\")\n```\n\nHowever, because we have given you special access for this challenge, we are going to retrieve the provider using a different method. Execute the code cell below to retrieve the correct provider.",
"_____no_output_____"
]
],
[
[
"provider = None\nfor prov in IBMQ.providers():\n if (\n \"iqc-africa-21\" in prov.credentials.hub\n and \"q-challenge\" in prov.credentials.group\n and \"ex1\" in prov.credentials.project\n ):\n # Correct provider found\n provider = prov\n \nif provider == None:\n print(\"ERROR: The expected provider was not found!\")\nelse:\n print(\"Yay! The expected provider was found!\")",
"Yay! The expected provider was found!\n"
]
],
[
[
"If the above code cell returned an error, you may not yet have access to the real quantum computer. The list of participants is updated daily, so you may have to wait some time before the correct provider appears. If you need assistance, send a message to the challenge Slack channel [#challenge-africa-2021](https://qiskit.slack.com/archives/C02C8MKP153) and make sure to tag the admin team with [@africa_admin](#).",
"_____no_output_____"
],
[
"------\n\nTo retrieve a backend from the provider, one needs only request it by name. For example, we can request `ibm_perth` as follows.",
"_____no_output_____"
]
],
[
[
"backend_real = provider.get_backend(\"ibm_perth\")",
"_____no_output_____"
]
],
[
[
"We can also list all backends available through a given backend. In this example we use the _open_ provider as it has access to all open devices and simulators, instead of the limited few for the challenge.",
"_____no_output_____"
]
],
[
[
"for _backend in IBMQ.get_provider(hub='ibm-q', group='open', project='main').backends():\n print(_backend.name())",
"ibmq_qasm_simulator\nibmq_armonk\nibmq_santiago\nibmq_bogota\nibmq_lima\nibmq_belem\nibmq_quito\nsimulator_statevector\nsimulator_mps\nsimulator_extended_stabilizer\nsimulator_stabilizer\nibmq_manila\n"
]
],
[
[
"Qiskit provides visual tools to view backend information in a jupyter notebook. To accomplish this, one needs to import the `jupyter` submodule and call the appropriate _magic comment_. With `qiskit_backend_overview` you can view all devices accessible by the current IBMQ account. Notice how it does not include simulators. Furthermore, you should see that all devices available through the _open group_ have at most 5 qubits. This is a problem for solving the crop-yield problem we created earlier, as we showed it requires 6 qubits.\n\nTo demonstrate how a real quantum device is used, a smaller `QuadraticProgram` is provided, requiring a maximum of 4 qubits.",
"_____no_output_____"
]
],
[
[
"%qiskit_backend_overview",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-block alert-warning\">\n \nIf you want access to larger and more sophisticated quantum computers through IBM, see if your university or company is part of the [IBM Quantum Network](https://www.ibm.com/quantum-computing/network/members/). Researchers are institutions that are part of the [African Research Universities Alliance (ARUA)](https://arua.org.za/) can also apply for access through the University of the Witwatersrand, in South Africa; which is a member of the IBM Quantum Network. If you are a researcher, you can also apply for access through the [IBM Quantum Researchers Program](https://www.ibm.com/quantum-computing/researchers-program/). If you're a student at a highschool or university, you can ask your teachers or lecturers to apply for access through the [IBM Quantum Educators Program](https://www.ibm.com/quantum-computing/educators-program/).\n\n</div>",
"_____no_output_____"
],
[
"Given that we have imported `qiskit.tools.jupyter`, Jupyter will now display a helpful widget when an IBMQ backend is displayed. We do not have to use a _magic comment_ here as the jupyter submodule defines how some variables are displayed in a jupyter notebook, without requiring a _magic comment_.",
"_____no_output_____"
]
],
[
[
"backend_real",
"_____no_output_____"
]
],
[
[
"We can create a new `QuantumInstance` object to contain our real quantum computer backend, similar to how we created one to manage our simulator. For real devices, there is an extra parameter we can set: `shots`. The output of a quantum computing algorithm is probabilistic. Therefore, we must execute the quantum computation multiple times, sampling the outputs to estimate their probabilities. The number of `shots` is the number of executions of the quantum computation. Here we set out `QuantumInstance` to use the least busy backend with 2048 shots.",
"_____no_output_____"
]
],
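[
[
"To build intuition for the `shots` parameter, here is a purely classical sketch (not Qiskit; the 30% outcome probability and the helper name are made up) of how a probability estimated from samples behaves as the number of shots grows:

```python
import random

random.seed(0)  # reproducible toy example

# Estimate a 'true' outcome probability from a finite number of shots,
# as one would when sampling a quantum circuit's measurement outcomes.
def estimate_probability(p_true, shots):
    hits = sum(random.random() < p_true for _ in range(shots))
    return hits / shots

for shots in (16, 256, 2048):
    print(shots, estimate_probability(0.3, shots))
```

With more shots the estimate clusters more tightly around the true probability, which is why we use 2048 shots rather than a handful.",
"_____no_output_____"
]
],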
[
[
"quantum_instance_real = QuantumInstance(backend_real, shots=2048)",
"_____no_output_____"
]
],
[
[
"The VQE algorithm and QAOA are iterative, meaning that they incorporate a classical-quantum loop which repeats certain computations, _hopefully_ converging to a valid solution. In each iteration, or evaluation, the quantum backend will execute the quantum operations 2048 times. Each shot is quite fast, so we do not have to worry about a significant increase in processing time by using more shots.",
"_____no_output_____"
],
[
"----\nWe now define our small crop-yield problem, which only requires 4 qubits. In this example, only Wheat and Maize are used. The model is altered to illustrate the impact of growing too much of a single crop, with the yield decreasing as the number of hectares of a single crop is increased. However, utilizing both wheat and maize increases yield, showing the benefits of intercropping.\n\n**NB:** The maximum number of hectares available is 4, but given that the model would never exceed this limit, the linear constraint defining the maximum number of hectares is not included. This reduces the number of qubits required from 6 to 4.",
"_____no_output_____"
]
],
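[
[
"Because this small model has only nine candidate plantings (0-2 ha of each crop), we can brute-force the objective classically to see what the quantum solver should find. This is a sketch (the `small_yield` helper is made up) of the objective 3w + 3m + wm - 2w^2 - 2m^2 used by the model:

```python
from itertools import product

# Linear terms reward planting, quadratic self-terms penalize growing too
# much of one crop, and the cross-term rewards intercropping wheat (w)
# and maize (m).
def small_yield(w, m):
    return 3 * w + 3 * m + w * m - 2 * w**2 - 2 * m**2

best = max(product(range(3), repeat=2), key=lambda wm: small_yield(*wm))
print(best, small_yield(*best))  # -> (1, 1) 3
```

Planting one hectare of each crop maximizes the yield, illustrating the intercropping benefit built into the model.",
"_____no_output_____"
]
],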
[
[
"# Create a small crop-yield example quadratic program\ncropyield_small = QuadraticProgram(name=\"Small Crop-Yield\")\n\n# Add two variables, indicating whether we grow 0, 1, or 2 hectares for two different crops\ncropyield_small.integer_var(lowerbound=0, upperbound=2, name=\"Wheat\")\ncropyield_small.integer_var(lowerbound=0, upperbound=2, name=\"Maize\")\n\n# Add the objective function defining the yield in tonnes\ncropyield_small.maximize(\n linear={\"Wheat\": 3, \"Maize\": 3},\n quadratic={(\"Maize\", \"Wheat\"): 1, (\"Maize\", \"Maize\"): -2, (\"Wheat\", \"Wheat\"): -2},\n)\n\n# This linear constraint is not used as the model never reaches this. This is because the\n# sum of the upperbounds on both variables is 4 already. If this constraint is applied, the\n# model would require 6 qubits instead of 4.\n# cropyield_small.linear_constraint(linear={\"Wheat\": 1, \"Maize\": 1}, sense=\"<=\", rhs=4)\n\nprint(cropyield_small)",
"\\ This file has been generated by DOcplex\n\\ ENCODING=ISO-8859-1\n\\Problem name: Small Crop-Yield\n\nMaximize\n obj: 3 Wheat + 3 Maize + [ - 4 Wheat^2 + 2 Wheat*Maize - 4 Maize^2 ]/2\nSubject To\n\nBounds\n Wheat <= 2\n Maize <= 2\n\nGenerals\n Wheat Maize\nEnd\n\n"
]
],
[
[
"Here we verify that our small crop-yield problem requires only 4 qubits.",
"_____no_output_____"
]
],
[
[
"# Estimate the number of qubits required\nising_operations_small, _ = (\n QuadraticProgramToQubo()\n .convert(\n cropyield_small,\n )\n .to_ising()\n)\nprint(f\"Number of qubits required is {ising_operations_small.num_qubits}\")",
"Number of qubits required is 4\n"
]
],
[
[
"### Exercise 1d: Submitting a job to a real quantum computer",
"_____no_output_____"
],
[
"Now that we know the problem can be run on our chosen device, we can execute the VQE algorithm. In this case we will set the optimizer, with a maximum number of iterations of 1, so that we do not occupy the device for too long. Our answer will be incorrect, but we only want to see how to send a quantum program to a real quantum computer.",
"_____no_output_____"
]
],
[
[
"# Create our optimizer\noptimizer = COBYLA(maxiter=1)\n\n## Get result from real device with VQE\nvqe_result_real, vqe_eval_count_real = get_VQE_solution_for(\n cropyield_small, quantum_instance_real, optimizer=optimizer\n)",
"/opt/conda/lib/python3.8/site-packages/qiskit/utils/run_circuits.py:695: UserWarning: max_credits is not a recognized runtime option and may be ignored by the backend.\n return backend.run(circuits, **run_kwargs)\n"
]
],
[
[
"Qiskit uses `jobs` to track computations and their results on remote devices and simulators. We can query the backend object for the jobs it received, which would be those created by the VQE algorithm.",
"_____no_output_____"
]
],
[
[
"# Retrieve the VQE job sent\njob_real = backend_real.jobs()[0]\n\nprint(f\"VQE job created at {job_real.creation_date()} and has a job id of {job_real.job_id()}\")",
"VQE job created at 2021-09-18 06:21:10.850000+00:00 and has a job id of 614585562be3692c38586b9e\n"
]
],
[
[
"Put the job id for your the above job into the cell below and execute the code cell.",
"_____no_output_____"
]
],
[
[
"from qc_grader import grade_ex1d\n\njob_id = '614585562be3692c38586b9e'\n\ngrade_ex1d(job_id)",
"Submitting your answer for ex1/partD. Please wait...\nCongratulations 🎉! Your answer is correct and has been submitted.\n"
]
],
[
[
"You have now completed the first lab of the IBM Quantum Challenge Africa 2021! Make sure that you are on the [Qiskit Slack channel](https://ibm.co/Africa_Slack) so you can ask questions and talk to other participants. There are two more labs left in the challenge, which are more difficult than this introductory lab, covering quantum computing for finance and HIV.",
"_____no_output_____"
],
[
"## References",
"_____no_output_____"
],
[
"[1] A. A. Nel, ‘Crop rotation in the summer rainfall area of South Africa’, South African Journal of Plant and Soil, vol. 22, no. 4, pp. 274–278, Jan. 2005, doi: 10.1080/02571862.2005.10634721.\n\n[2] H. Ritchie and M. Roser, ‘Crop yields’, Our World in Data, 2013, [Online]. Available: https://ourworldindata.org/crop-yields.\n\n[3] G. Brion, ‘Controlling Pests with Plants: The power of intercropping’, UVM Food Feed, Jan. 09, 2014. https://learn.uvm.edu/foodsystemsblog/2014/01/09/controlling-pests-with-plants-the-power-of-intercropping/ (accessed Feb. 15, 2021).\n\n[4] N. O. Ogot, J. O. Pittchar, C. A. O. Midega, and Z. R. Khan, ‘Attributes of push-pull technology in enhancing food and nutrition security’, African Journal of Agriculture and Food Security, vol. 6, pp. 229–242, Mar. 2018.",
"_____no_output_____"
]
],
[
[
"import qiskit.tools.jupyter\n\n%qiskit_version_table\n%qiskit_copyright",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
ecd791f2259ce121ba3ff69f99297ea5be35d33b | 94,198 | ipynb | Jupyter Notebook | examples/Example1.ipynb | tsmcgrath/OOH | 4a7be760fb26c466f7f6df94fb33b18e4c8111e9 | [
"MIT"
] | null | null | null | examples/Example1.ipynb | tsmcgrath/OOH | 4a7be760fb26c466f7f6df94fb33b18e4c8111e9 | [
"MIT"
] | null | null | null | examples/Example1.ipynb | tsmcgrath/OOH | 4a7be760fb26c466f7f6df94fb33b18e4c8111e9 | [
"MIT"
] | null | null | null | 432.100917 | 88,843 | 0.645438 | [
[
[
"# Illustration of some ideas for Owning Our History",
"_____no_output_____"
]
],
[
[
"# install dependencies\n%pip install ipyleaflet",
"_____no_output_____"
],
[
"import os\nimport json\nimport random\nimport requests\n\nfrom ipyleaflet import Map, basemaps, Marker, LayersControl, WidgetControl, GeoJSON",
"_____no_output_____"
],
[
"if not os.path.exists('europe_110.geo.json'):\n url = 'https://github.com/jupyter-widgets/ipyleaflet/raw/master/examples/europe_110.geo.json'\n r = requests.get(url)\n with open('europe_110.geo.json', 'w') as f:\n f.write(r.content.decode(\"utf-8\"))\n\nwith open('europe_110.geo.json', 'r') as f:\n data = json.load(f)\n\ndef random_color(feature):\n return {\n 'color': 'black',\n 'fillColor': random.choice(['red', 'yellow', 'green', 'orange']),\n }\n\nm = Map(center=(31.522699, -84.648262), zoom=3)\n\ngeo_json = GeoJSON(\n data=data,\n style={\n 'opacity': 1, 'dashArray': '9', 'fillOpacity': 0.1, 'weight': 1\n },\n hover_style={\n 'color': 'white', 'dashArray': '0', 'fillOpacity': 0.5\n },\n style_callback=random_color\n)\nm.add_layer(geo_json)\n\nm",
"_____no_output_____"
],
[
"with open('../data/sample_point.geojson', 'r') as f:\n data = json.load(f)\ncenter = [31.522699, -84.648262]\nzoom = 7\n\nm = Map(center=center, zoom=zoom)\n\ngeo_json = GeoJSON(\n style={\n 'color': \"#FF0000\", 'radius': 8,\n },\n data=data\n)\nm.add_layer(geo_json)\n\ncontrol = LayersControl(position='topright')\nm.add_control(control)\n\nm",
"_____no_output_____"
],
[
"clusterMap = Map(center=center, zoom=zoom)\nclusterMap.add_layer(MarkerCluster(\n markers=[Marker(location=geolocation.coords[0][::-1]) for geolocation in ../data/sample_point.geojson(1000).geometry])\n )\nclusterMap",
"_____no_output_____"
],
[
"print(geo_json)",
"GeoJSON(data={'type': 'FeatureCollection', 'name': 'sample_point', 'crs': {'type': 'name', 'properties': {'name': 'urn:ogc:def:crs:EPSG::3857'}}, 'features': [{'type': 'Feature', 'properties': {'sample_id': 'SA001', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1882, 'month_sour': 1, 'day_source': 17, 'mwt_date_t': '1882/01/17', 'name': 'Fred Britton', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Belgreen', 'county': 'Franklin', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Unnamed White man', 'mwt_notes_': None, 'mwt_county': 'als_franklin', 'mwt_coun_1': 5, 'mwt_coun_2': 'als_franklin.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17548, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9780903.3814, 4093672.826700002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA002', 'mwt_en_mas': 2, 'state': 'AL', 'year_sourc': 1882, 'month_sour': 4, 'day_source': 13, 'mwt_date_t': '1882/04/13', 'name': 'Henry Ivy', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Selma', 'county': 'Dallas', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Two unnamed Negro men', 'mwt_notes_': None, 'mwt_county': 'als_dallas', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_dallas.13', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17634, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9687336.0098, 3817618.5358000025]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA003', 'mwt_en_mas': 2, 'state': 'AL', 'year_sourc': 1882, 'month_sour': 4, 'day_source': 13, 'mwt_date_t': '1882/04/13', 'name': 'Jim Acoff', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Selma', 
'county': 'Dallas', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Two unnamed Negro men', 'mwt_notes_': 'noted by Source2 as Uncertain', 'mwt_county': 'als_dallas', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_dallas.13', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17634, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9687336.0098, 3817618.5358000025]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA004', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1882, 'month_sour': 8, 'day_source': 19, 'mwt_date_t': '1882/08/19', 'name': 'Jack Turner', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Incendiarism', 'mwt_allege': 'Crime against property', 'town': 'Butler', 'county': 'Choctaw', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Unnamed Negro man', 'mwt_notes_': None, 'mwt_county': 'als_choctaw', 'mwt_coun_1': 1, 'mwt_coun_2': 'als_choctaw.1', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17762, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9820747.9667, 3775044.925999999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA005', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1882, 'month_sour': 8, 'day_source': 25, 'mwt_date_t': '1882/08/25', 'name': 'Leonard Coker', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder & rape', 'mwt_allege': 'Murder-Rape', 'town': None, 'county': 'Macon', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_macon', 'mwt_coun_1': 9, 'mwt_coun_2': 'als_macon.9', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17768, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9539254.3985, 3814077.809600003]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA006', 'mwt_en_mas': 0, 
'state': 'AL', 'year_sourc': 1882, 'month_sour': 10, 'day_source': 4, 'mwt_date_t': '1882/10/04', 'name': 'John Brooks', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Anniston', 'county': 'Calhoun', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_calhoun', 'mwt_coun_1': 11, 'mwt_coun_2': 'als_calhoun.11', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17808, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9553317.3616, 3982854.0182000026]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA007', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1883, 'month_sour': 4, 'day_source': 28, 'mwt_date_t': '1883/04/28', 'name': 'George Ware', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Colbert', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_colbert', 'mwt_coun_1': 2, 'mwt_coun_2': 'als_colbert.2', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18014, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9774898.7374, 4123000.8550999984]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA008', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1883, 'month_sour': 6, 'day_source': 13, 'mwt_date_t': '1883/06/13', 'name': 'Jordan Corbin', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Coosa', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_coosa', 'mwt_coun_1': 5, 'mwt_coun_2': 'als_coosa.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18060, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 
'coordinates': [-9601125.4642, 3886916.230899997]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA009', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1883, 'month_sour': 10, 'day_source': 9, 'mwt_date_t': '1883/10/09', 'name': 'Wes Brown', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Huntsville', 'county': 'Madison', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Unnamed Negro man', 'mwt_notes_': None, 'mwt_county': 'als_madison', 'mwt_coun_1': 6, 'mwt_coun_2': 'als_madison.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18178, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9638609.2423, 4127153.3202000037]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA010', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1883, 'month_sour': 11, 'day_source': 24, 'mwt_date_t': '1883/11/24', 'name': 'Lewis Houston', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Birmingham', 'county': 'Jefferson', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 12, 'mwt_coun_2': 'als_jefferson.12', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18224, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9663840.9181, 3964621.221500002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA011', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1884, 'month_sour': 7, 'day_source': 18, 'mwt_date_t': '1884/07/18', 'name': 'Andy Burke', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Tuscaloosa', 'county': 'Tuscaloosa', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 
'mwt_notes_': None, 'mwt_county': 'als_tuscaloosa', 'mwt_coun_1': 16, 'mwt_coun_2': 'als_tuscaloosa.16', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18461, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9747830.3607, 3923267.1208000034]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA012', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1884, 'month_sour': 9, 'day_source': 18, 'mwt_date_t': '1884/09/18', 'name': '— Short', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Carthage', 'county': 'Hale', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects race from White', 'mwt_notes_': None, 'mwt_county': 'als_hale', 'mwt_coun_1': 2, 'mwt_coun_2': 'als_hale.2', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18523, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9754821.3948, 3863837.3123999983]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA013', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1885, 'month_sour': 4, 'day_source': 4, 'mwt_date_t': '1885/04/04', 'name': 'George Roose', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Vienna', 'county': 'Pickens', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'duplicate record for lynching of George Rouse in Vienna, Dooly Co, GA on 3/29/1885', 'mwt_notes_': None, 'mwt_county': 'als_pickens', 'mwt_coun_1': 6, 'mwt_coun_2': 'als_pickens.6', 'mwt_includ': '0', 'mwt_incl_1': 'duplicate', 'mwt_find_d': 'match.4', 'mwt_sequen': 18721, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9810980.7946, 3906610.3347999975]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA014', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1885, 'month_sour': 5, 'day_source': 1, 'mwt_date_t': '1885/05/01', 
'name': '— Woods', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Langston', 'county': 'Jackson', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_jackson', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_jackson.13', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 18748, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9582723.5183, 4100827.1462000012]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA015', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1885, 'month_sour': 5, 'day_source': 9, 'mwt_date_t': '1885/05/09', 'name': '— Jordan', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted assault (rape)', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Tuscumbia', 'county': 'Colbert', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_colbert', 'mwt_coun_1': 2, 'mwt_coun_2': 'als_colbert.2', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18756, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9763038.8295, 4127433.7040000036]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA016', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1885, 'month_sour': 10, 'day_source': 21, 'mwt_date_t': '1885/10/21', 'name': 'George Ward', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Eufaula', 'county': 'Barbour', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_barbour', 'mwt_coun_1': 9, 'mwt_coun_2': 'als_barbour.9', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18921, 'mwt_mob_ra': None}, 'geometry': {'type': 
'Point', 'coordinates': [-9478392.6652, 3749533.0011000037]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA017', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1885, 'month_sour': 12, 'day_source': 28, 'mwt_date_t': '1885/12/28', 'name': 'Alexander Reed', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Coffeeville', 'county': 'Clarke', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_clarke', 'mwt_coun_1': 5, 'mwt_coun_2': 'als_clarke.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18989, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9806259.735, 3731449.9613]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA018', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1886, 'month_sour': 7, 'day_source': 13, 'mwt_date_t': '1886/07/13', 'name': 'John Renfroe', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Attempted murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Livingston', 'county': 'Sumter', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_sumter', 'mwt_coun_1': 2, 'mwt_coun_2': 'als_sumter.2', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19186, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9816898.5387, 3840216.207199998]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA019', 'mwt_en_mas': 3, 'state': 'AL', 'year_sourc': 1886, 'month_sour': 10, 'day_source': 21, 'mwt_date_t': '1886/10/21', 'name': 'Unnamed #1', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'U', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': None, 'county': 'Pickens', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_pickens', 'mwt_coun_1': 6, 
'mwt_coun_2': 'als_pickens.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19286, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9805987.9774, 3932659.3264999986]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA020', 'mwt_en_mas': 3, 'state': 'AL', 'year_sourc': 1886, 'month_sour': 10, 'day_source': 21, 'mwt_date_t': '1886/10/21', 'name': 'Unnamed #2', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'U', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': None, 'county': 'Pickens', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_pickens', 'mwt_coun_1': 6, 'mwt_coun_2': 'als_pickens.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19286, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9805987.9774, 3932659.3264999986]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA021', 'mwt_en_mas': 3, 'state': 'AL', 'year_sourc': 1886, 'month_sour': 10, 'day_source': 21, 'mwt_date_t': '1886/10/21', 'name': 'Unnamed #3', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'U', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': None, 'county': 'Pickens', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_pickens', 'mwt_coun_1': 6, 'mwt_coun_2': 'als_pickens.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19286, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9805987.9774, 3932659.3264999986]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA022', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1886, 'month_sour': 11, 'day_source': 3, 'mwt_date_t': '1886/11/03', 'name': 'John Hart', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Lee', 'source1': 1, 'source2': 
5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'noted by Source2 as Uncertain', 'mwt_county': 'als_lee', 'mwt_coun_1': 8, 'mwt_coun_2': 'als_lee.8', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19299, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9501729.2521, 3842466.9968999997]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA023', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1886, 'month_sour': 11, 'day_source': 23, 'mwt_date_t': '1886/11/23', 'name': 'John Davis', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Assault of woman (rape)', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Randolph', 'county': 'Bibb', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_bibb', 'mwt_coun_1': 11, 'mwt_coun_2': 'als_bibb.11', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19319, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9675306.8256, 3882347.068599999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA024', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1887, 'month_sour': 6, 'day_source': 2, 'mwt_date_t': '1887/06/02', 'name': 'Unnamed', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted criminal assault', 'mwt_allege': 'Assault/Threat against Persons', 'town': 'Blockton Mines', 'county': 'Bibb', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_bibb', 'mwt_coun_1': 11, 'mwt_coun_2': 'als_bibb.11', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19510, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9698041.6052, 3910627.432599999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA025', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1887, 'month_sour': 8, 'day_source': 23, 'mwt_date_t': '1887/08/23', 'name': 
'Jack Myrick', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Assault of woman (rape)', 'mwt_allege': 'Alleged Sexual crime', 'town': None, 'county': 'Henry', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_henry', 'mwt_coun_1': 7, 'mwt_coun_2': 'als_henry.7', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19592, 'mwt_mob_ra': 'Possibly non-white perpetrators'}, 'geometry': {'type': 'Point', 'coordinates': [-9489177.7899, 3699592.8159999996]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA026', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1887, 'month_sour': 9, 'day_source': 18, 'mwt_date_t': '1887/09/18', 'name': 'Monroe Johnson', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Leeds', 'county': 'Jefferson', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_jefferson.13', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19618, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9633678.902, 3967531.1369]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA027', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1887, 'month_sour': 11, 'day_source': 5, 'mwt_date_t': '1887/11/05', 'name': 'George Hart', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Opelika', 'county': 'Lee', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_lee', 'mwt_coun_1': 8, 'mwt_coun_2': 'als_lee.8', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19666, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9504530.4816, 3849005.039499998]}}, {'type': 
'Feature', 'properties': {'sample_id': 'SA028', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 1, 'day_source': 1, 'mwt_date_t': '1888/01/01', 'name': 'Oscar Coger', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted arson', 'mwt_allege': 'Crime against property', 'town': 'Cherokee', 'county': 'Colbert', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_colbert', 'mwt_coun_1': 2, 'mwt_coun_2': 'als_colbert.2', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19723, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9793152.9782, 4130704.0122999996]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA029', 'mwt_en_mas': 2, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 1, 'day_source': 10, 'mwt_date_t': '1888/01/10', 'name': 'George King', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Baldwin', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_baldwin', 'mwt_coun_1': 7, 'mwt_coun_2': 'als_baldwin.7', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 19732, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9765338.1497, 3598496.874499999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA030', 'mwt_en_mas': 2, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 1, 'day_source': 10, 'mwt_date_t': '1888/01/10', 'name': 'Unnamed', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Baldwin', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 
'mwt_county': 'als_baldwin', 'mwt_coun_1': 7, 'mwt_coun_2': 'als_baldwin.7', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 19732, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9765338.1497, 3598496.874499999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA031', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 1, 'day_source': 27, 'mwt_date_t': '1888/01/27', 'name': 'David Dunce', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Russellville', 'county': 'Franklin', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_franklin', 'mwt_coun_1': 5, 'mwt_coun_2': 'als_franklin.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19749, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9765881.9293, 4097142.6982000023]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA032', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 3, 'day_source': 18, 'mwt_date_t': '1888/03/18', 'name': 'Jeff Curry', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Threats to kill', 'mwt_allege': 'Assault/Threat against Persons', 'town': 'Bessemer', 'county': 'Jefferson', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_jefferson.13', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19800, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9679673.8892, 3948789.1125999987]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA033', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 3, 'day_source': 29, 'mwt_date_t': '1888/03/29', 'name': 'Theo Calloway', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 
'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Lowndes', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_lowndes', 'mwt_coun_1': 3, 'mwt_coun_2': 'als_lowndes.3', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19811, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9645845.4436, 3783637.3874000013]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA034', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 4, 'day_source': 23, 'mwt_date_t': '1888/04/23', 'name': 'Hardy Posey', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Bessemer', 'county': 'Jefferson', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_jefferson.13', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19836, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9679673.8892, 3948789.1125999987]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA035', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 5, 'day_source': 1, 'mwt_date_t': '1888/05/01', 'name': 'George Martin', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Birmingham', 'county': 'Jefferson', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_jefferson.13', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19844, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9663840.9181, 3964621.221500002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA036', 'mwt_en_mas': 0, 'state': 
'AL', 'year_sourc': 1888, 'month_sour': 7, 'day_source': 13, 'mwt_date_t': '1888/07/13', 'name': 'Frank Stone', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Pell City', 'county': 'St. Clair', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_stclair', 'mwt_coun_1': 15, 'mwt_coun_2': 'als_stclair.15', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19917, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9605359.2236, 3973357.1785999984]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA037', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1888, 'month_sour': 7, 'day_source': 13, 'mwt_date_t': '1888/07/13', 'name': 'Jim Torney', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Eloped with white girl', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Greenport', 'county': 'St. 
Clair', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'noted by Source2 as Uncertain', 'mwt_county': 'als_stclair', 'mwt_coun_1': 15, 'mwt_coun_2': 'als_stclair.15', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19917, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9608504.1125, 3990684.641099997]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA038', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1889, 'month_sour': 1, 'day_source': 15, 'mwt_date_t': '1889/01/15', 'name': 'George Meadows', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder & Sexual assault', 'mwt_allege': 'Murder-Rape', 'town': 'Pratt Mines', 'county': 'Jefferson', 'source1': 1, 'source2': 5, 'source3': '23', 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_jefferson.13', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20103, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9671576.5095, 3967386.892499998]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA039', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1889, 'month_sour': 5, 'day_source': 21, 'mwt_date_t': '1889/05/21', 'name': 'Noah Dickson', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Walnut Grove', 'county': 'Etowah', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_etowah', 'mwt_coun_1': 4, 'mwt_coun_2': 'als_etowah.4', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20229, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9607393.0307, 4037885.358000003]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA040', 'mwt_en_mas': 2, 'state': 'AL', 'year_sourc': 1889, 'month_sour': 9, 'day_source': 
2, 'mwt_date_t': '1889/09/02', 'name': 'Unnamed #1', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Montevallo', 'county': 'Shelby', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_shelby', 'mwt_coun_1': 16, 'mwt_coun_2': 'als_shelby.16', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20333, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9669499.2878, 3908665.8850999996]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA041', 'mwt_en_mas': 2, 'state': 'AL', 'year_sourc': 1889, 'month_sour': 9, 'day_source': 2, 'mwt_date_t': '1889/09/02', 'name': 'Unnamed #2', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Montevallo', 'county': 'Shelby', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_shelby', 'mwt_coun_1': 16, 'mwt_coun_2': 'als_shelby.16', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20333, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9669499.2878, 3908665.8850999996]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA042', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1889, 'month_sour': 9, 'day_source': 27, 'mwt_date_t': '1889/09/27', 'name': 'John Sleet', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Birmingham', 'county': 'Jefferson', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_jefferson.13', 'mwt_includ': '1', 
'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20358, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9663840.9181, 3964621.221500002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA043', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1889, 'month_sour': 10, 'day_source': 4, 'mwt_date_t': '1889/10/04', 'name': '— Starke', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Moss Point', 'county': 'Marengo', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_marengo', 'mwt_coun_1': 6, 'mwt_coun_2': 'als_marengo.6', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20365, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9772691.1755, 3795857.1431]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA044', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1889, 'month_sour': 12, 'day_source': 26, 'mwt_date_t': '1889/12/26', 'name': 'Bud Wilson', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Tuscaloosa', 'county': 'Tuscaloosa', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_tuscaloosa', 'mwt_coun_1': 16, 'mwt_coun_2': 'als_tuscaloosa.16', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20448, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9747830.3607, 3923267.1208000034]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA045', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1890, 'month_sour': 3, 'day_source': 21, 'mwt_date_t': '1890/03/21', 'name': 'Robert Moseley', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual 
crime', 'town': 'Huntsville', 'county': 'Madison', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_madison', 'mwt_coun_1': 6, 'mwt_coun_2': 'als_madison.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20533, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9638609.2423, 4127153.3202000037]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA046', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1890, 'month_sour': 3, 'day_source': 29, 'mwt_date_t': '1890/03/29', 'name': 'Frank Griffin', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Stanton', 'county': 'Chilton', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_chilton', 'mwt_coun_1': 5, 'mwt_coun_2': 'als_chilton.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20541, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9673660.4104, 3860266.9359000027]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA047', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1890, 'month_sour': 4, 'day_source': 2, 'mwt_date_t': '1890/04/02', 'name': 'Unnamed', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Brantley', 'county': 'Crenshaw', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_crenshaw', 'mwt_coun_1': 9, 'mwt_coun_2': 'als_crenshaw.9', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20545, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9602071.959, 3708622.0247]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA048', 'mwt_en_mas': 0, 'state': 'AL', 
'year_sourc': 1890, 'month_sour': 7, 'day_source': 13, 'mwt_date_t': '1890/07/13', 'name': 'John Jones', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Robbery', 'mwt_allege': 'Crime against property', 'town': 'Anniston', 'county': 'Calhoun', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_calhoun', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_calhoun.13', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20647, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9553317.3616, 3982854.0182000026]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA049', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1890, 'month_sour': 7, 'day_source': 25, 'mwt_date_t': '1890/07/25', 'name': 'Unnamed', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Miscegenation', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Riverton', 'county': 'Colbert', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_colbert', 'mwt_coun_1': 2, 'mwt_coun_2': 'als_colbert.2', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20659, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9804674.5455, 4147710.399599999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA050', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1890, 'month_sour': 8, 'day_source': 9, 'mwt_date_t': '1890/08/09', 'name': 'Ike Cook', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Terrorism', 'mwt_allege': 'Other', 'town': 'Montgomery', 'county': 'Montgomery', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'noted by Source2 as Uncertain', 'mwt_county': 'als_montgomery', 
'mwt_coun_1': 20, 'mwt_coun_2': 'als_montgomery.20', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20674, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9606880.961, 3813315.3188999966]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA051', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1890, 'month_sour': 11, 'day_source': 16, 'mwt_date_t': '1890/11/16', 'name': 'Henry Smith', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Assault of woman (rape)', 'mwt_allege': 'Alleged Sexual crime', 'town': None, 'county': 'Jefferson', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': "Source2 corrects location from Wood's Station, Catoosa Co, GA", 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 13, 'mwt_coun_2': 'als_jefferson.13', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20773, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9673272.9205, 3969107.7529999986]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA052', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 3, 'day_source': 31, 'mwt_date_t': '1891/03/31', 'name': 'Zachariah Graham', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Whistler', 'county': 'Mobile', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_mobile', 'mwt_coun_1': 8, 'mwt_coun_2': 'als_mobile.8', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20908, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9807690.1905, 3601038.0024999976]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA053', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 4, 'day_source': 15, 'mwt_date_t': '1891/04/15', 'name': 'Roxie Elliott', 'race_sourc': 'Negro', 'mwt_race': 'Black', 
'sex': 'U', 'alleged': 'Cause unknown', 'mwt_allege': '(Unknown)', 'town': 'Centerville', 'county': 'Bibb', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_bibb', 'mwt_coun_1': 11, 'mwt_coun_2': 'als_bibb.11', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20923, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9699791.5476, 3887989.9886]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA054', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 4, 'day_source': 25, 'mwt_date_t': '1891/04/25', 'name': '— Randall', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder & robbery', 'mwt_allege': 'Murder or Attempted', 'town': 'Winfield', 'county': 'Marion', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_marion', 'mwt_coun_1': 10, 'mwt_coun_2': 'als_marion.10', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20933, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9775712.5535, 4019253.6576]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA055', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 7, 'day_source': 26, 'mwt_date_t': '1891/07/26', 'name': 'Jesse Underwood', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Tuscumbia', 'county': 'Colbert', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_colbert', 'mwt_coun_1': 2, 'mwt_coun_2': 'als_colbert.2', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 
'mwt_sequen': 21025, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9763038.8295, 4127433.7040000036]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA056', 'mwt_en_mas': 4, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 8, 'day_source': 6, 'mwt_date_t': '1891/08/06', 'name': 'Belle Williams', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'U', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': None, 'county': 'Henry', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_henry', 'mwt_coun_1': 7, 'mwt_coun_2': 'als_henry.7', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21036, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9489177.7899, 3699592.8159999996]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA057', 'mwt_en_mas': 4, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 8, 'day_source': 6, 'mwt_date_t': '1891/08/06', 'name': 'Ella Williams', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'F', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': None, 'county': 'Henry', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_henry', 'mwt_coun_1': 7, 'mwt_coun_2': 'als_henry.7', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21036, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9489177.7899, 3699592.8159999996]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA058', 'mwt_en_mas': 4, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 8, 'day_source': 6, 'mwt_date_t': '1891/08/06', 'name': 'Lizzie Lowe', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'F', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': None, 'county': 'Henry', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 
'als_henry', 'mwt_coun_1': 7, 'mwt_coun_2': 'als_henry.7', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21036, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9489177.7899, 3699592.8159999996]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA059', 'mwt_en_mas': 4, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 8, 'day_source': 6, 'mwt_date_t': '1891/08/06', 'name': 'Willis Lowe', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': None, 'county': 'Henry', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_henry', 'mwt_coun_1': 7, 'mwt_coun_2': 'als_henry.7', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21036, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9489177.7899, 3699592.8159999996]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA060', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 8, 'day_source': 21, 'mwt_date_t': '1891/08/21', 'name': 'Ray Porter', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Clanton', 'county': 'Chilton', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_chilton', 'mwt_coun_1': 5, 'mwt_coun_2': 'als_chilton.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21051, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9643832.3528, 3874198.465499997]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA061', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 9, 'day_source': 3, 'mwt_date_t': '1891/09/03', 'name': 'James Sims', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Choctaw', 
'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_choctaw', 'mwt_coun_1': 1, 'mwt_coun_2': 'als_choctaw.1', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 21064, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9825412.8409, 3765886.050999999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA062', 'mwt_en_mas': 2, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 9, 'day_source': 25, 'mwt_date_t': '1891/09/25', 'name': 'Unnamed #1', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Georgiana', 'county': 'Butler', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_butler', 'mwt_coun_1': 12, 'mwt_coun_2': 'als_butler.12', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21086, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9655824.8015, 3715588.938199997]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA063', 'mwt_en_mas': 2, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 9, 'day_source': 25, 'mwt_date_t': '1891/09/25', 'name': 'Unnamed #2', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Georgiana', 'county': 'Butler', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_butler', 'mwt_coun_1': 12, 'mwt_coun_2': 'als_butler.12', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21086, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9655824.8015, 3715588.938199997]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA064', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 10, 
'day_source': 15, 'mwt_date_t': '1891/10/15', 'name': 'Sam Wright', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Helena', 'county': 'Shelby', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_shelby', 'mwt_coun_1': 16, 'mwt_coun_2': 'als_shelby.16', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 21106, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9667422.0661, 3933905.4575000033]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA065', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1891, 'month_sour': 12, 'day_source': 22, 'mwt_date_t': '1891/12/22', 'name': 'Jesse Miller', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Outlaw', 'mwt_allege': 'Other', 'town': None, 'county': 'Bibb', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Alabama', 'mwt_notes_': None, 'mwt_county': 'als_bibb', 'mwt_coun_1': 11, 'mwt_coun_2': 'als_bibb.11', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 21174, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9698869.8754, 3895121.279799998]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA066', 'mwt_en_mas': 0, 'state': 'GA', 'year_sourc': 1885, 'month_sour': 3, 'day_source': 29, 'mwt_date_t': '1885/03/29', 'name': 'George Rouse', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder & rape', 'mwt_allege': 'Murder-Rape', 'town': 'Vienna', 'county': 'Dooly', 'source1': 1, 'source2': 5, 'source3': '13', 'source4': None, 'mwt_source': 'Source2 clarifies offense; Source2 and Source3 do not corroborate a town', 'mwt_notes_': None, 'mwt_county': 'gas_dooly', 'mwt_coun_1': 12, 'mwt_coun_2': 
'gas_dooly.12', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': 'match.4', 'mwt_sequen': 18715, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9328038.9949, 3775299.8325999975]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA067', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1882, 'month_sour': 3, 'day_source': 6, 'mwt_date_t': '1882/03/06', 'name': 'C.D. Owens', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Tampa', 'county': 'Hillsborough', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Unnamed White man', 'mwt_notes_': None, 'mwt_county': 'fls_hillsborough', 'mwt_coun_1': 9, 'mwt_coun_2': 'fls_hillsborough.9', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17596, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9179323.9476, 3242234.1169999987]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA068', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1882, 'month_sour': 8, 'day_source': 25, 'mwt_date_t': '1882/08/25', 'name': '— James', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Madison', 'county': 'Madison', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Two unnamed Negro men', 'mwt_notes_': None, 'mwt_county': 'fls_madison', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_madison.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17768, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9285162.0667, 3563989.3341]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA069', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1882, 'month_sour': 8, 'day_source': 25, 'mwt_date_t': '1882/08/25', 'name': '— Savage', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 
'mwt_allege': 'Murder or Attempted', 'town': 'Madison', 'county': 'Madison', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Two unnamed Negro men', 'mwt_notes_': None, 'mwt_county': 'fls_madison', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_madison.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 17768, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9285162.0667, 3563989.3341]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA070', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1883, 'month_sour': 12, 'day_source': 25, 'mwt_date_t': '1883/12/25', 'name': '— Fagan (brother #1)', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murderous assault', 'mwt_allege': 'Assault/Threat against Persons', 'town': 'Brooksville', 'county': 'Hernando', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_hernando', 'mwt_coun_1': 6, 'mwt_coun_2': 'fls_hernando.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18255, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9171370.17, 3319084.1525000036]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA071', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1883, 'month_sour': 12, 'day_source': 25, 'mwt_date_t': '1883/12/25', 'name': '— Fagan (brother #2)', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murderous assault', 'mwt_allege': 'Assault/Threat against Persons', 'town': 'Brooksville', 'county': 'Hernando', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_hernando', 'mwt_coun_1': 6, 'mwt_coun_2': 'fls_hernando.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18255, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9171370.17, 3319084.1525000036]}}, {'type': 
'Feature', 'properties': {'sample_id': 'SA072', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1884, 'month_sour': 6, 'day_source': 6, 'mwt_date_t': '1884/06/06', 'name': 'Unnamed', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': None, 'county': 'Santa Rosa', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_santarosa', 'mwt_coun_1': 3, 'mwt_coun_2': 'fls_santarosa.3', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18419, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9686756.4642, 3597925.086000003]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA073', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1884, 'month_sour': 11, 'day_source': 22, 'mwt_date_t': '1884/11/22', 'name': 'Armstead Williams', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Attempted rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Madison', 'county': 'Madison', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_madison', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_madison.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18588, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9285162.0667, 3563989.3341]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA074', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1884, 'month_sour': 12, 'day_source': 26, 'mwt_date_t': '1884/12/26', 'name': 'Charles E Abbe', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Cause unknown', 'mwt_allege': '(Unknown)', 'town': None, 'county': 'Manatee', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects date by 22 days from 1/17/1885; Source2 corrects location from Sarasota, Sarasota Co', 'mwt_notes_': 'noted by Source2 as Uncertain', 
'mwt_county': 'fls_manatee', 'mwt_coun_1': 4, 'mwt_coun_2': 'fls_manatee.4', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18622, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9161812.101, 3182549.5124999993]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA075', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1885, 'month_sour': 6, 'day_source': 8, 'mwt_date_t': '1885/06/08', 'name': 'John Evans', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Live Oak', 'county': 'Suwannee', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_suwannee', 'mwt_coun_1': 2, 'mwt_coun_2': 'fls_suwannee.2', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 18786, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9237787.831, 3541654.9092999995]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA076', 'mwt_en_mas': 3, 'state': 'FL', 'year_sourc': 1885, 'month_sour': 7, 'day_source': 6, 'mwt_date_t': '1885/07/06', 'name': 'Unnamed #1', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Incendiarism', 'mwt_allege': 'Crime against property', 'town': 'Citra', 'county': 'Marion', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Florida', 'mwt_notes_': None, 'mwt_county': 'fls_marion', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_marion.5', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 18814, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9140453.4078, 3428115.5247]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA077', 'mwt_en_mas': 3, 'state': 'FL', 'year_sourc': 1885, 'month_sour': 7, 'day_source': 6, 'mwt_date_t': '1885/07/06', 'name': 'Unnamed #2', 'race_sourc': 'Negro', 'mwt_race': 'Black', 
'sex': 'M', 'alleged': 'Incendiarism', 'mwt_allege': 'Crime against property', 'town': 'Citra', 'county': 'Marion', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Florida', 'mwt_notes_': None, 'mwt_county': 'fls_marion', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_marion.5', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 18814, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9140453.4078, 3428115.5247]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA078', 'mwt_en_mas': 3, 'state': 'FL', 'year_sourc': 1885, 'month_sour': 7, 'day_source': 6, 'mwt_date_t': '1885/07/06', 'name': 'Unnamed #3', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Incendiarism', 'mwt_allege': 'Crime against property', 'town': 'Citra', 'county': 'Marion', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Florida', 'mwt_notes_': None, 'mwt_county': 'fls_marion', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_marion.5', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 18814, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9140453.4078, 3428115.5247]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA079', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1886, 'month_sour': 5, 'day_source': 15, 'mwt_date_t': '1886/05/15', 'name': 'Daniel Mann', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Bartow', 'county': 'Polk', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_polk', 'mwt_coun_1': 6, 'mwt_coun_2': 'fls_polk.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19127, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': 
[-9110560.7849, 3236259.8132]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA080', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1886, 'month_sour': 5, 'day_source': 15, 'mwt_date_t': '1886/05/15', 'name': 'Lon Mann', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Bartow', 'county': 'Polk', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_polk', 'mwt_coun_1': 6, 'mwt_coun_2': 'fls_polk.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19127, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9110560.7849, 3236259.8132]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA081', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1886, 'month_sour': 10, 'day_source': 3, 'mwt_date_t': '1886/10/03', 'name': '— Buckly', 'race_sourc': 'U', 'mwt_race': 'Unknown', 'sex': 'M', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': 'Quincy', 'county': 'Gadsden', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_gadsden', 'mwt_coun_1': 6, 'mwt_coun_2': 'fls_gadsden.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19268, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9415479.3417, 3579376.0042999983]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA082', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1886, 'month_sour': 10, 'day_source': 3, 'mwt_date_t': '1886/10/03', 'name': 'F.L. 
Harris', 'race_sourc': 'U', 'mwt_race': 'Unknown', 'sex': 'M', 'alleged': 'Arson', 'mwt_allege': 'Crime against property', 'town': 'Quincy', 'county': 'Gadsden', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_gadsden', 'mwt_coun_1': 6, 'mwt_coun_2': 'fls_gadsden.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19268, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9415479.3417, 3579376.0042999983]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA083', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1886, 'month_sour': 10, 'day_source': 24, 'mwt_date_t': '1886/10/24', 'name': 'John Renew', 'race_sourc': 'U', 'mwt_race': 'Unknown', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Blountstown', 'county': 'Calhoun', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_calhoun', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_calhoun.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19289, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9467211.7355, 3560645.942900002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA084', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1886, 'month_sour': 10, 'day_source': 24, 'mwt_date_t': '1886/10/24', 'name': 'Lot Renew', 'race_sourc': 'U', 'mwt_race': 'Unknown', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Blountstown', 'county': 'Calhoun', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_calhoun', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_calhoun.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19289, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9467211.7355, 3560645.942900002]}}, {'type': 'Feature', 'properties': 
{'sample_id': 'SA085', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1887, 'month_sour': 11, 'day_source': 26, 'mwt_date_t': '1887/11/26', 'name': 'William Williams', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Oakland', 'county': 'Orange', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'noted by Source2 as Uncertain', 'mwt_county': 'fls_orange', 'mwt_coun_1': 16, 'mwt_coun_2': 'fls_orange.16', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19687, 'mwt_mob_ra': 'Possibly non-white perpetrators'}, 'geometry': {'type': 'Point', 'coordinates': [-9086794.0736, 3319394.6577999964]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA086', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1887, 'month_sour': 12, 'day_source': 11, 'mwt_date_t': '1887/12/11', 'name': 'Unnamed #1', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Assault of woman', 'mwt_allege': 'Assault/Attack upon Women', 'town': None, 'county': 'Pasco', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects location from Pemberton, Sumter Co; Source2 corrects offense from Rape', 'mwt_notes_': None, 'mwt_county': 'fls_pasco', 'mwt_coun_1': 1, 'mwt_coun_2': 'fls_pasco.1', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19702, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9171782.2418, 3288016.161899999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA087', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1887, 'month_sour': 12, 'day_source': 11, 'mwt_date_t': '1887/12/11', 'name': 'Unnamed #2', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Assault of woman', 'mwt_allege': 'Assault/Attack upon Women', 'town': None, 'county': 'Pasco', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects 
location from Pemberton, Sumter Co; Source2 corrects offense from Rape', 'mwt_notes_': None, 'mwt_county': 'fls_pasco', 'mwt_coun_1': 1, 'mwt_coun_2': 'fls_pasco.1', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19702, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9171782.2418, 3288016.161899999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA088', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1887, 'month_sour': 12, 'day_source': 12, 'mwt_date_t': '1887/12/12', 'name': 'George Green', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Theft', 'mwt_allege': 'Crime against property', 'town': 'Ocala', 'county': 'Marion', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_marion', 'mwt_coun_1': 6, 'mwt_coun_2': 'fls_marion.6', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19703, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9143828.6148, 3399564.1746999994]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA089', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1888, 'month_sour': 1, 'day_source': 27, 'mwt_date_t': '1888/01/27', 'name': '— Clark', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Floral City', 'county': 'Citrus', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Florida', 'mwt_notes_': None, 'mwt_county': 'fls_citrus', 'mwt_coun_1': 1, 'mwt_coun_2': 'fls_citrus.1', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 19749, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9161353.6422, 3343578.0009]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA090', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1888, 'month_sour': 8, 'day_source': 15, 'mwt_date_t': 
'1888/08/15', 'name': 'Nash Griffin', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Insulting notes to white woman', 'mwt_allege': 'Insult to White Persons', 'town': None, 'county': 'Calhoun', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects location from Ocheesee, Jackson Co', 'mwt_notes_': None, 'mwt_county': 'fls_calhoun', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_calhoun.5', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 19950, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9484115.0183, 3555834.027999997]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA091', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1890, 'month_sour': 7, 'day_source': 17, 'mwt_date_t': '1890/07/17', 'name': 'Green Jackson', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Ft. White', 'county': 'Columbia', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_columbia', 'mwt_coun_1': 7, 'mwt_coun_2': 'fls_columbia.7', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20651, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9207484.4392, 3493667.6059999987]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA092', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 2, 'day_source': 17, 'mwt_date_t': '1891/02/17', 'name': 'Michael Kelly', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Complicity in murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Gainesville', 'county': 'Alachua', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_alachua', 'mwt_coun_1': 12, 'mwt_coun_2': 'fls_alachua.12', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20866, 'mwt_mob_ra': None}, 
'geometry': {'type': 'Point', 'coordinates': [-9164129.9503, 3458889.2575]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA093', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 2, 'day_source': 17, 'mwt_date_t': '1891/02/17', 'name': 'Tony Compion', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Complicity in murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Gainesville', 'county': 'Alachua', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_alachua', 'mwt_coun_1': 12, 'mwt_coun_2': 'fls_alachua.12', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20866, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9164129.9503, 3458889.2575]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA094', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 6, 'day_source': 17, 'mwt_date_t': '1891/06/17', 'name': 'Charles Griffin', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Accessory to murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Suwannee', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects location from New Branford, Lafayette Co', 'mwt_notes_': 'noted by Source2 as Uncertain', 'mwt_county': 'fls_suwannee', 'mwt_coun_1': 3, 'mwt_coun_2': 'fls_suwannee.3', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 20986, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9238570.7774, 3528714.9197999984]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA095', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 6, 'day_source': 17, 'mwt_date_t': '1891/06/17', 'name': 'Unnamed', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Cause unknown', 'mwt_allege': '(Unknown)', 'town': 'Ft. 
White', 'county': 'Columbia', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Florida', 'mwt_notes_': None, 'mwt_county': 'fls_columbia', 'mwt_coun_1': 7, 'mwt_coun_2': 'fls_columbia.7', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 20986, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9207484.4392, 3493667.6059999987]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA096', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 8, 'day_source': 24, 'mwt_date_t': '1891/08/24', 'name': 'Andy Ford', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': 'Desperado', 'mwt_allege': 'Other', 'town': 'Gainesville', 'county': 'Alachua', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_alachua', 'mwt_coun_1': 12, 'mwt_coun_2': 'fls_alachua.12', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21054, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9164129.9503, 3458889.2575]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA097', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 9, 'day_source': 26, 'mwt_date_t': '1891/09/26', 'name': 'Lee Bailey', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Rape', 'mwt_allege': 'Alleged Sexual crime', 'town': 'Deland', 'county': 'Volusia', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_volusia', 'mwt_coun_1': 3, 'mwt_coun_2': 'fls_volusia.3', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21087, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9050880.1795, 3379161.911899999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA098', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1891, 
'month_sour': 12, 'day_source': 13, 'mwt_date_t': '1891/12/13', 'name': 'John R Ely, Jr.', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': \"Entered girl's room\", 'mwt_allege': 'Alleged Sexual crime', 'town': None, 'county': 'Jackson', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects location from Holloway, Rapides Parish, LA', 'mwt_notes_': None, 'mwt_county': 'fls_jackson', 'mwt_coun_1': 17, 'mwt_coun_2': 'fls_jackson.17', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21165, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9486463.2792, 3606249.074]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA099', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 12, 'day_source': 15, 'mwt_date_t': '1891/12/15', 'name': 'Unnamed', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Robbery', 'mwt_allege': 'Crime against property', 'town': 'New Branford', 'county': 'Lafayette', 'source1': 1, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': 'Not similarly corroborated by a Source2 for Florida', 'mwt_notes_': None, 'mwt_county': 'fls_lafayette', 'mwt_coun_1': 5, 'mwt_coun_2': 'fls_lafayette.5', 'mwt_includ': '0', 'mwt_incl_1': 'no Source2 unlike the others', 'mwt_find_d': None, 'mwt_sequen': 21167, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9259672.7667, 3501681.0535999984]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA100', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 12, 'day_source': 17, 'mwt_date_t': '1891/12/17', 'name': 'Alfred Jones', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Live Oak', 'county': 'Suwannee', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Two unnamed Negro men', 'mwt_notes_': None, 'mwt_county': 
'fls_suwannee', 'mwt_coun_1': 3, 'mwt_coun_2': 'fls_suwannee.3', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21169, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9237787.831, 3541654.9092999995]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA101', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1891, 'month_sour': 12, 'day_source': 17, 'mwt_date_t': '1891/12/17', 'name': 'Brady Young', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Live Oak', 'county': 'Suwannee', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Two unnamed Negro men', 'mwt_notes_': None, 'mwt_county': 'fls_suwannee', 'mwt_coun_1': 3, 'mwt_coun_2': 'fls_suwannee.3', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21169, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9237787.831, 3541654.9092999995]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA102', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1892, 'month_sour': 1, 'day_source': 12, 'mwt_date_t': '1892/01/12', 'name': 'Henry Hinson', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Micanopy', 'county': 'Alachua', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_alachua', 'mwt_coun_1': 12, 'mwt_coun_2': 'fls_alachua.12', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21195, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9159307.5899, 3440735.764200002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA103', 'mwt_en_mas': 0, 'state': 'FL', 'year_sourc': 1892, 'month_sour': 2, 'day_source': 15, 'mwt_date_t': '1892/02/15', 'name': 'Walter Austin', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 
'Murder', 'mwt_allege': 'Murder or Attempted', 'town': 'Arcadia', 'county': 'DeSoto', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'fls_desoto', 'mwt_coun_1': 1, 'mwt_coun_2': 'fls_desoto.1', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21229, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9112906.2866, 3150663.1165000014]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA104', 'mwt_en_mas': 4, 'state': 'FL', 'year_sourc': 1892, 'month_sour': 4, 'day_source': 19, 'mwt_date_t': '1892/04/19', 'name': 'Albert Robinson', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Citrus', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects location from Inverness, Bullock Co, AL', 'mwt_notes_': None, 'mwt_county': 'fls_citrus', 'mwt_coun_1': 1, 'mwt_coun_2': 'fls_citrus.1', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21293, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9180067.4127, 3356667.2070000023]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA105', 'mwt_en_mas': 4, 'state': 'FL', 'year_sourc': 1892, 'month_sour': 4, 'day_source': 19, 'mwt_date_t': '1892/04/19', 'name': 'George Davis', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Citrus', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects location from Inverness, Bullock Co, AL', 'mwt_notes_': None, 'mwt_county': 'fls_citrus', 'mwt_coun_1': 1, 'mwt_coun_2': 'fls_citrus.1', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21293, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9180067.4127, 3356667.2070000023]}}, {'type': 'Feature', 
'properties': {'sample_id': 'SA106', 'mwt_en_mas': 4, 'state': 'FL', 'year_sourc': 1892, 'month_sour': 4, 'day_source': 19, 'mwt_date_t': '1892/04/19', 'name': 'Jerry Williams', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Citrus', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects location from Inverness, Bullock Co, AL', 'mwt_notes_': None, 'mwt_county': 'fls_citrus', 'mwt_coun_1': 1, 'mwt_coun_2': 'fls_citrus.1', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21293, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9180067.4127, 3356667.2070000023]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA107', 'mwt_en_mas': 4, 'state': 'FL', 'year_sourc': 1892, 'month_sour': 4, 'day_source': 19, 'mwt_date_t': '1892/04/19', 'name': 'William Williams', 'race_sourc': 'Negro', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Murder', 'mwt_allege': 'Murder or Attempted', 'town': None, 'county': 'Citrus', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects location from Inverness, Bullock Co, AL', 'mwt_notes_': None, 'mwt_county': 'fls_citrus', 'mwt_coun_1': 1, 'mwt_coun_2': 'fls_citrus.1', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21293, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9180067.4127, 3356667.2070000023]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA108', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1892, 'month_sour': 5, 'day_source': 25, 'mwt_date_t': '1892/05/25', 'name': 'Henry E Bedgood', 'race_sourc': 'U', 'mwt_race': 'Unknown', 'sex': 'M', 'alleged': 'Murder & robbery', 'mwt_allege': 'Murder or Attempted', 'town': 'Buffalo Bluff', 'county': 'Putnam', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': 'Source2 corrects name from Unnamed Negro man', 
'mwt_notes_': 'noted by Source2 as Uncertain', 'mwt_county': 'fls_putnam', 'mwt_coun_1': 7, 'mwt_coun_2': 'fls_putnam.7', 'mwt_includ': '0.5', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21329, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9089870.9444, 3447647.520800002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA109', 'mwt_en_mas': 2, 'state': 'FL', 'year_sourc': 1892, 'month_sour': 5, 'day_source': 25, 'mwt_date_t': '1892/05/25', 'name': 'James Williams', 'race_sourc': 'U', 'mwt_race': 'Unknown', 'sex': 'M', 'alleged': 'Murder & robbery', 'mwt_allege': 'Murder or Attempted', 'town': 'Buffalo Bluff', 'county': 'Putnam', 'source1': 1, 'source2': 5, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'noted by Source2 as Uncertain', 'mwt_county': 'fls_putnam', 'mwt_coun_1': 7, 'mwt_coun_2': 'fls_putnam.7', 'mwt_includ': '0.5', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 21329, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9089870.9444, 3447647.520800002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA110', 'mwt_en_mas': 0, 'state': 'IL', 'year_sourc': 1917, 'month_sour': 7, 'day_source': 2, 'mwt_date_t': '1917/07/02', 'name': 'Edward Cook', 'race_sourc': 'black', 'mwt_race': 'Black', 'sex': 'M', 'alleged': None, 'mwt_allege': None, 'town': 'East St. Louis', 'county': 'St. Clair', 'source1': 71, 'source2': 77, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'Riot at East St. 
Louis of 1917', 'mwt_county': 'ils_stclair', 'mwt_coun_1': 12, 'mwt_coun_2': 'ils_stclair.12', 'mwt_includ': '0.25', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 30497, 'mwt_mob_ra': 'White'}, 'geometry': {'type': 'Point', 'coordinates': [-10036957.1345, 4668306.966499999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA111', 'mwt_en_mas': 0, 'state': 'IL', 'year_sourc': 1917, 'month_sour': 7, 'day_source': 2, 'mwt_date_t': '1917/07/02', 'name': 'Edward Cook’s son', 'race_sourc': 'black', 'mwt_race': 'Black', 'sex': 'M', 'alleged': None, 'mwt_allege': None, 'town': 'East St. Louis', 'county': 'St. Clair', 'source1': 71, 'source2': 77, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'Riot at East St. Louis of 1917', 'mwt_county': 'ils_stclair', 'mwt_coun_1': 12, 'mwt_coun_2': 'ils_stclair.12', 'mwt_includ': '0.25', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 30497, 'mwt_mob_ra': 'White'}, 'geometry': {'type': 'Point', 'coordinates': [-10036957.1345, 4668306.966499999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA112', 'mwt_en_mas': 0, 'state': 'IL', 'year_sourc': 1917, 'month_sour': 7, 'day_source': 2, 'mwt_date_t': '1917/07/02', 'name': 'Frank Wadley', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': None, 'mwt_allege': None, 'town': 'East St. Louis', 'county': 'St. Clair', 'source1': 71, 'source2': 77, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'Riot at East St. 
Louis of 1917', 'mwt_county': 'ils_stclair', 'mwt_coun_1': 12, 'mwt_coun_2': 'ils_stclair.12', 'mwt_includ': '0.25', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 30497, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-10036957.1345, 4668306.966499999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA113', 'mwt_en_mas': 0, 'state': 'IL', 'year_sourc': 1917, 'month_sour': 7, 'day_source': 2, 'mwt_date_t': '1917/07/02', 'name': 'Samuel Coppedge', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': None, 'mwt_allege': None, 'town': 'East St. Louis', 'county': 'St. Clair', 'source1': 71, 'source2': 77, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'Riot at East St. Louis of 1917', 'mwt_county': 'ils_stclair', 'mwt_coun_1': 12, 'mwt_coun_2': 'ils_stclair.12', 'mwt_includ': '0.25', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 30497, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-10036957.1345, 4668306.966499999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA114', 'mwt_en_mas': 0, 'state': 'IL', 'year_sourc': 1917, 'month_sour': 7, 'day_source': 0, 'mwt_date_t': '1917/07/02', 'name': 'Scott Clark', 'race_sourc': 'black', 'mwt_race': 'Black', 'sex': 'M', 'alleged': None, 'mwt_allege': None, 'town': 'East St. Louis', 'county': 'St. Clair', 'source1': 71, 'source2': 77, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'Riot at East St. 
Louis of 1917', 'mwt_county': 'ils_stclair', 'mwt_coun_1': 12, 'mwt_coun_2': 'ils_stclair.12', 'mwt_includ': '0.25', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 30497, 'mwt_mob_ra': 'White'}, 'geometry': {'type': 'Point', 'coordinates': [-10036957.1345, 4668306.966499999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA115', 'mwt_en_mas': 0, 'state': 'IL', 'year_sourc': 1905, 'month_sour': 5, 'day_source': 27, 'mwt_date_t': '1905/05/27', 'name': 'Bernard Engstrani', 'race_sourc': 'white', 'mwt_race': 'White', 'sex': 'M', 'alleged': None, 'mwt_allege': None, 'town': 'Chicago', 'county': 'Cook', 'source1': 77, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'Chicago riot of 1905', 'mwt_county': 'ils_cook', 'mwt_coun_1': 4, 'mwt_coun_2': 'ils_cook.4', 'mwt_includ': '0.25', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 26078, 'mwt_mob_ra': None}, 'geometry': {'type': 'Point', 'coordinates': [-9755199.711, 5143656.394299999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA116', 'mwt_en_mas': 0, 'state': 'IL', 'year_sourc': 1905, 'month_sour': 5, 'day_source': 27, 'mwt_date_t': '1905/05/27', 'name': 'James Gray', 'race_sourc': 'black', 'mwt_race': 'Black', 'sex': 'M', 'alleged': None, 'mwt_allege': None, 'town': 'Chicago', 'county': 'Cook', 'source1': 77, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': 'Chicago riot of 1905', 'mwt_county': 'ils_cook', 'mwt_coun_1': 4, 'mwt_coun_2': 'ils_cook.4', 'mwt_includ': '0.25', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 26078, 'mwt_mob_ra': 'White'}, 'geometry': {'type': 'Point', 'coordinates': [-9755199.711, 5143656.394299999]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA117', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1934, 'month_sour': 6, 'day_source': 18, 'mwt_date_t': '1934/06/18', 'name': 'Otis Parham', 'race_sourc': 'black', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Race Prejudice', 'mwt_allege': 
'Absence of crime', 'town': 'Pine Level', 'county': 'Autauga', 'source1': 38, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_autauga', 'mwt_coun_1': 8, 'mwt_coun_2': 'als_autauga.8', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 36692, 'mwt_mob_ra': 'White'}, 'geometry': {'type': 'Point', 'coordinates': [-9627726.6489, 3848776.3149999976]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA118', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1933, 'month_sour': 7, 'day_source': 5, 'mwt_date_t': '1933/07/05', 'name': 'Elizabeth Lawrence', 'race_sourc': 'black', 'mwt_race': 'Black', 'sex': 'F', 'alleged': 'Scolded white children', 'mwt_allege': 'Insult to White Persons', 'town': 'Birmingham', 'county': 'Jefferson', 'source1': 38, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_jefferson', 'mwt_coun_1': 16, 'mwt_coun_2': 'als_jefferson.16', 'mwt_includ': '1', 'mwt_incl_1': None, 'mwt_find_d': None, 'mwt_sequen': 36344, 'mwt_mob_ra': 'White'}, 'geometry': {'type': 'Point', 'coordinates': [-9663840.9181, 3964621.221500002]}}, {'type': 'Feature', 'properties': {'sample_id': 'SA119', 'mwt_en_mas': 0, 'state': 'AL', 'year_sourc': 1947, 'month_sour': 12, 'day_source': 4, 'mwt_date_t': '1947/12/04', 'name': 'Elmore Bolling', 'race_sourc': 'black', 'mwt_race': 'Black', 'sex': 'M', 'alleged': 'Resentment of economic success', 'mwt_allege': 'Absence of crime', 'town': 'Lowndesboro', 'county': 'Lowndes', 'source1': 38, 'source2': 0, 'source3': None, 'source4': None, 'mwt_source': None, 'mwt_notes_': None, 'mwt_county': 'als_lowndes', 'mwt_coun_1': 3, 'mwt_coun_2': 'als_lowndes.3', 'mwt_includ': '3', 'mwt_incl_1': 'Tuskegee lynching definition unmet', 'mwt_find_d': None, 'mwt_sequen': 41609, 'mwt_mob_ra': 'White'}, 'geometry': {'type': 'Point', 'coordinates': [-9641521.3602, 3799501.2119000033]}}]}, style={'color': '#FF0000', 'radius': 
8})\n"
],
[
"type(geo_json)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd79f034a8b097d9b5b45a09b443eecf4ea978e | 207,448 | ipynb | Jupyter Notebook | training/memorize-seed-results.ipynb | nauralcodinglab/linear-nonlinear-dendrites | 310d236b52e0fa6a1138e4450e821af4c74397ee | [
"CC-BY-4.0"
] | 1 | 2021-09-12T20:09:45.000Z | 2021-09-12T20:09:45.000Z | training/memorize-seed-results.ipynb | nauralcodinglab/linear-nonlinear-dendrites | 310d236b52e0fa6a1138e4450e821af4c74397ee | [
"CC-BY-4.0"
] | null | null | null | training/memorize-seed-results.ipynb | nauralcodinglab/linear-nonlinear-dendrites | 310d236b52e0fa6a1138e4450e821af4c74397ee | [
"CC-BY-4.0"
] | null | null | null | 345.746667 | 100,328 | 0.924034 | [
[
[
"# Training networks of PRC neurons to memorize random input\n\nThis notebook contains code for Fig. 7 of [Harkin, Shen _et al_. (2021)](https://doi.org/10.1101/2021.03.25.437091).\n\nTo reproduce these findings, first run `memorize.py`.",
"_____no_output_____"
]
],
[
[
"import os\nfrom copy import deepcopy\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gs\nfrom matplotlib.patheffects import Normal, Stroke\nimport numpy as np\nimport seaborn as sns\nfrom ezephys import pltools\n\nimport memorize",
"_____no_output_____"
]
],
[
[
"## Illustrate training of one network",
"_____no_output_____"
]
],
[
[
"FIGURE_WIDTH = 7\n\ndef savefig(fname, **pltargs):\n plt.savefig(fname + '.png', dpi=300, bbox_inches='tight', **pltargs)\n plt.savefig(fname + '.svg', dpi=300, bbox_inches='tight', **pltargs)",
"_____no_output_____"
],
[
"memorize.torch.manual_seed(42)\n\nDEMO_NET = 'BAP'\n\ndemo_net_before_train = memorize.get_networks()[DEMO_NET]\ndemo_net_after_train = deepcopy(demo_net_before_train)\ndemo_optimizer = memorize.get_optimizers({DEMO_NET: demo_net_after_train})[DEMO_NET]\ndemo_x, demo_y = memorize.generate_data()\n\ndemo_optimizer.optimize(demo_x, demo_y, memorize.EPOCHS, progress_bar='notebook')",
"_____no_output_____"
],
[
"def tensor_to_events(tensor):\n return [np.nonzero(tensor[i, :]).flatten() for i in range(tensor.shape[0])]\n\ndef _visual_spike(voltage, spikes, spike_height=50):\n assert voltage.shape == spikes.shape\n voltage = deepcopy(voltage)\n voltage[spikes > 0.0] = spike_height\n return voltage\n\ndef plot_voltage_with_spikes(recorder, example_nb: int, unit_nb: int, ax=None, **pltargs):\n if ax is None:\n ax = plt.gca()\n \n ax.plot(\n _visual_spike(\n recorder.recorded['somatic_subunit.linear'][example_nb, :, unit_nb].detach().numpy(),\n recorder.recorded['output'][example_nb, :, unit_nb].detach().numpy()\n ),\n **pltargs\n )\n\ny_palette = {0: 'xkcd:violet', 1: 'xkcd:forest green'}",
"_____no_output_____"
],
[
"spec_outer = gs.GridSpec(1, 3)\nspec_hidden = gs.GridSpecFromSubplotSpec(demo_net_before_train.nb_units_by_layer[1], 1, spec_outer[:, 1])\nspec_output = gs.GridSpecFromSubplotSpec(demo_net_before_train.nb_units_by_layer[2], 1, spec_outer[:, 2])\n\nplt.figure(figsize=(FIGURE_WIDTH, 2), dpi=120)\n\nplt.subplot(spec_outer[:, 0])\nplt.title('Input layer')\nplt.eventplot(tensor_to_events(demo_x[0, ...].T), linelengths=4, color='black')\nplt.ylabel('Input units')\npltools.hide_border('trb', trim=True)\n\n_, demo_activity = demo_net_before_train.run_snn(demo_x)\n\nfor i in range(demo_net_before_train.nb_units_by_layer[1]):\n plt.subplot(spec_hidden[i, :])\n plot_voltage_with_spikes(demo_activity['l1'], 0, i, color='black', clip_on=False)\n plt.ylim(-23, 52)\n \n if i == 0:\n plt.title('Hidden layer')\n \n pltools.hide_border()\n \nfor i in range(demo_net_before_train.nb_units_by_layer[2]):\n plt.subplot(spec_output[i, :])\n plt.plot(\n demo_activity['l2'].recorded['output'][0, :, i].detach().numpy(),\n color=y_palette[i],\n clip_on=False\n )\n plt.ylim(-12, 12)\n \n if i == 0:\n plt.title('Output layer')\n \n pltools.hide_border()\n\nsavefig(os.path.join('..', 'data', 'single_example_BAP'))\n\ndel spec_outer, spec_hidden, spec_output, i",
"/home/efharkin/.miniconda3/envs/lnldend/lib/python3.8/site-packages/numpy/core/fromnumeric.py:58: UserWarning: This overload of nonzero is deprecated:\n\tnonzero()\nConsider using one of the following signatures instead:\n\tnonzero(*, bool as_tuple) (Triggered internally at /tmp/pip-req-build-ojg3q6e4/torch/csrc/utils/python_arg_parser.cpp:882.)\n return bound(*args, **kwds)\n"
],
[
"def predicted_label(recording, example_nb):\n return recording.recorded['output'][example_nb, ...].max(axis=0)[0].max(axis=0)[1]\n\ndef annotate_correctness(recording, labels, example_nb, ax=None):\n if ax is None:\n ax = plt.gca()\n \n if predicted_label(demo_activity['l2'], example_nb) == demo_y[i]:\n mark, color = '✓', 'green'\n else:\n mark, color = '✗', 'red'\n \n text = ax.text(1, 1, mark, color=color, transform=ax.transAxes, ha='center', va='top')\n text.set_path_effects([Stroke(linewidth=2, foreground='white'), Normal()])",
"_____no_output_____"
],
[
"EXAMPLE_GRID_SHAPE = (3, 3)\n\nspec_outer = gs.GridSpec(1, 4, hspace=0.2)\nspec_input = gs.GridSpecFromSubplotSpec(*EXAMPLE_GRID_SHAPE, spec_outer[:, 0])\nspec_before = gs.GridSpecFromSubplotSpec(*EXAMPLE_GRID_SHAPE, spec_outer[:, 1])\nspec_loss = gs.GridSpecFromSubplotSpec(2, 2, spec_outer[:, 2], width_ratios=[0.4, 1])\nspec_after = gs.GridSpecFromSubplotSpec(*EXAMPLE_GRID_SHAPE, spec_outer[:, 3])\n\nplt.figure(figsize=(FIGURE_WIDTH, 2), dpi=120)\n\n# Show input\n\nfor i in range(np.prod(EXAMPLE_GRID_SHAPE)):\n if i == 0:\n ax0 = plt.subplot(spec_input[i])\n else:\n plt.subplot(spec_input[i], sharey=ax0)\n plt.eventplot(tensor_to_events(demo_x[i, ...].T), linelengths=4, color=y_palette[int(demo_y[i])])\n pltools.hide_border()\n\n if i == EXAMPLE_GRID_SHAPE[0] // 2:\n plt.title('Input')\n\ndel ax0\n\n# Show output before training\n\n_, demo_activity = demo_net_before_train.run_snn(demo_x)\n\nfor i in range(np.prod(EXAMPLE_GRID_SHAPE)):\n if i == 0:\n ax0 = plt.subplot(spec_before[i])\n else:\n plt.subplot(spec_before[i], sharey=ax0)\n \n for j in range(demo_net_before_train.nb_units_by_layer[2]):\n plt.plot(\n demo_activity['l2'].recorded['output'][i, :, j].detach().numpy(),\n color=y_palette[j],\n clip_on=False\n )\n pltools.hide_border()\n \n annotate_correctness(demo_activity['l2'], demo_y, i)\n \n if i == EXAMPLE_GRID_SHAPE[0] // 2:\n plt.title('Before training')\n \n# Show loss and accuracy\n\nplt.subplot(spec_loss[0, 1])\nplt.plot(demo_optimizer.loss_history, 'k-')\nplt.xticks([0, 300, 600], ['', '', ''])\nplt.ylim(0, plt.ylim()[1])\nplt.ylabel('Loss')\npltools.hide_border('tr')\n\nplt.subplot(spec_loss[1, 1])\nplt.plot(demo_optimizer.accuracy_history, 'k-')\nplt.xticks([0, 300, 600])\nplt.ylim(0.4, 1)\nplt.ylabel('Accuracy')\nplt.xlabel('Epoch')\npltools.hide_border('tr')\n \n# Show output after training\n\n_, demo_activity = demo_net_after_train.run_snn(demo_x)\n\n\nfor i in range(np.prod(EXAMPLE_GRID_SHAPE)):\n if i == 0:\n ax0 = 
plt.subplot(spec_after[i])\n else:\n plt.subplot(spec_after[i], sharey=ax0)\n \n for j in range(demo_net_after_train.nb_units_by_layer[2]):\n plt.plot(\n demo_activity['l2'].recorded['output'][i, :, j].detach().numpy(),\n color=y_palette[j],\n clip_on=False\n )\n pltools.hide_border()\n \n if i == EXAMPLE_GRID_SHAPE[0] // 2:\n plt.title('After training')\n \n annotate_correctness(demo_activity['l2'], demo_y, i)\n \nsavefig(os.path.join('..', 'data', 'multi_example_BAP'))",
"_____no_output_____"
]
],
[
[
"# Performance comparison\n\nRun `memorize.py` to train five different PRC models several times with different seeds. The loss and accuracy during training are saved to CSV files that we'll inspect here.",
"_____no_output_____"
]
],
[
[
"DATA_FILE_PREFIX = os.path.join('..', 'data', 'memorization_training_results_')\n\nperformance = []\n\nfor i in range(10):\n single_seed_performance = pd.read_csv(DATA_FILE_PREFIX + str(i) + '.csv')\n single_seed_performance['seed'] = i\n\n performance.append(single_seed_performance)\n\nperformance = pd.concat(performance).reset_index(drop=True)",
"_____no_output_____"
],
[
"performance.head()",
"_____no_output_____"
],
[
"def performance_bandplot(model_name, metric_loss=True, ax=None, **pltargs):\n if ax is None:\n ax = plt.gca()\n\n if metric_loss:\n metric = 'loss'\n else:\n metric = 'accuracy'\n\n alpha = min(pltargs.pop('alpha', 1), 0.8)\n\n this_model = performance.query('model_name == @model_name').sort_values(\n 'epoch'\n )\n if 'label' not in pltargs:\n pltargs['label'] = model_name\n\n this_std = (\n this_model.groupby(['model_name', 'epoch']).std().sort_values('epoch')\n )\n this_mean = (\n this_model.groupby(['model_name', 'epoch']).mean().sort_values('epoch')\n )\n\n label = pltargs.pop('label', None)\n maincolor = pltargs.pop('color', None)\n ax.fill_between(\n this_mean.index.get_level_values('epoch'),\n this_mean[metric] - this_std[metric],\n this_mean[metric] + this_std[metric],\n alpha=alpha,\n facecolor=maincolor,\n edgecolor='none',\n **pltargs\n )\n ax.plot(\n this_mean.index.get_level_values('epoch'),\n this_mean[metric],\n label=label,\n color=maincolor,\n **pltargs\n )",
"_____no_output_____"
],
[
"palette = {\n 'One compartment': 'gray',\n 'No BAP': 'xkcd:ocean',\n 'BAP': 'xkcd:cherry',\n 'Parallel subunits, no BAP': 'xkcd:iris',\n 'Parallel subunits + BAP (full PRC model)': 'xkcd:blood orange'\n}",
"_____no_output_____"
],
[
"spec_outer = gs.GridSpec(1, 3, width_ratios=(0.5, 1, 1))\n\nplt.figure(figsize=(FIGURE_WIDTH, 2), dpi=120)\n\nplt.subplot(spec_outer[:, 1])\nplt.axhline(0.5, color='k', ls='--', dashes=(10, 5), lw=0.7, zorder=-1)\nfor model_name, colour in palette.items():\n this_mean = performance_bandplot(\n model_name, metric_loss=False, color=colour, alpha=0.3\n )\nplt.legend(loc='upper right', bbox_to_anchor=(-0.2, 1))\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.ylim(0.36, 1.05)\npltools.hide_border('tr')\n\nplt.subplot(spec_outer[:, 2])\nplt.axhline(0.5, color='k', ls='--', dashes=(10, 5), lw=0.7, zorder=-1)\nbox_data = performance.query('epoch in [0, 1999]').copy()\nbox_data.loc[:, 'Epoch'] = box_data.epoch.astype(str)\nbox_data.replace('0', 'Before training', inplace=True)\nbox_data.replace('1999', 'After training', inplace=True)\nsns.boxplot(\n x='model_name',\n y='accuracy',\n hue='Epoch',\n data=box_data,\n)\nplt.legend(title='', loc='upper left', bbox_to_anchor=(1, 1))\nplt.xticks(rotation=45, ha='right')\nplt.ylabel('')\nplt.ylim(0.36, 1.05)\npltools.hide_border('tr')\nplt.gca().set_yticklabels([])\n\nsavefig(os.path.join('..', 'data', 'performance_comparison'))\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecd7a1277c2a74d7634e0bb827bb0c7320281a04 | 10,956 | ipynb | Jupyter Notebook | ImageCollection/mosaicking.ipynb | YuePanEdward/earthengine-py-notebooks | cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5 | [
"MIT"
] | 1 | 2020-11-16T08:00:11.000Z | 2020-11-16T08:00:11.000Z | ImageCollection/mosaicking.ipynb | mllzl/earthengine-py-notebooks | cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5 | [
"MIT"
] | null | null | null | ImageCollection/mosaicking.ipynb | mllzl/earthengine-py-notebooks | cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5 | [
"MIT"
] | null | null | null | 48.264317 | 1,031 | 0.580686 | [
[
[
"<table class=\"ee-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://github.com/giswqs/earthengine-py-notebooks/tree/master/ImageCollection/mosaicking.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td>\n <td><a target=\"_blank\" href=\"https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/ImageCollection/mosaicking.ipynb\"><img width=26px src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png\" />Notebook Viewer</a></td>\n <td><a target=\"_blank\" href=\"https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=ImageCollection/mosaicking.ipynb\"><img width=58px src=\"https://mybinder.org/static/images/logo_social.png\" />Run in binder</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/ImageCollection/mosaicking.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a></td>\n</table>",
"_____no_output_____"
],
[
"## Install Earth Engine API and geemap\nInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.\nThe following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.\n\n**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).",
"_____no_output_____"
]
],
[
[
"# Installs geemap package\nimport subprocess\n\ntry:\n import geemap\nexcept ImportError:\n print('geemap package not installed. Installing ...')\n subprocess.check_call([\"python\", '-m', 'pip', 'install', 'geemap'])\n\n# Checks whether this notebook is running on Google Colab\ntry:\n import google.colab\n import geemap.eefolium as emap\nexcept:\n import geemap as emap\n\n# Authenticates and initializes Earth Engine\nimport ee\n\ntry:\n ee.Initialize()\nexcept Exception as e:\n ee.Authenticate()\n ee.Initialize() ",
"_____no_output_____"
]
],
[
[
"## Create an interactive map \nThe default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function. ",
"_____no_output_____"
]
],
[
[
"Map = emap.Map(center=[40,-100], zoom=4)\nMap.add_basemap('ROADMAP') # Add Google Map\nMap",
"_____no_output_____"
]
],
[
[
"## Add Earth Engine Python script ",
"_____no_output_____"
]
],
[
[
"# Add Earth Engine dataset\n# Load three NAIP quarter quads in the same location, different times.\nnaip2004_2012 = ee.ImageCollection('USDA/NAIP/DOQQ') \\\n .filterBounds(ee.Geometry.Point(-71.08841, 42.39823)) \\\n .filterDate('2004-07-01', '2012-12-31') \\\n .select(['R', 'G', 'B'])\n\n# Temporally composite the images with a maximum value function.\ncomposite = naip2004_2012.max()\nMap.setCenter(-71.12532, 42.3712, 12)\nMap.addLayer(composite, {}, 'max value composite')\n\n\n# Load four 2012 NAIP quarter quads, different locations.\nnaip2012 = ee.ImageCollection('USDA/NAIP/DOQQ') \\\n .filterBounds(ee.Geometry.Rectangle(-71.17965, 42.35125, -71.08824, 42.40584)) \\\n .filterDate('2012-01-01', '2012-12-31')\n\n# Spatially mosaic the images in the collection and display.\nmosaic = naip2012.mosaic()\nMap.setCenter(-71.12532, 42.3712, 12)\nMap.addLayer(mosaic, {}, 'spatial mosaic')\n\n\n# Load a NAIP quarter quad, display.\nnaip = ee.Image('USDA/NAIP/DOQQ/m_4207148_nw_19_1_20120710')\nMap.setCenter(-71.0915, 42.3443, 14)\nMap.addLayer(naip, {}, 'NAIP DOQQ')\n\n# Create the NDVI and NDWI spectral indices.\nndvi = naip.normalizedDifference(['N', 'R'])\nndwi = naip.normalizedDifference(['G', 'N'])\n\n# Create some binary images from thresholds on the indices.\n# This threshold is designed to detect bare land.\nbare1 = ndvi.lt(0.2).And(ndwi.lt(0.3))\n# This detects bare land with lower sensitivity. It also detects shadows.\nbare2 = ndvi.lt(0.2).And(ndwi.lt(0.8))\n\n# Define visualization parameters for the spectral indices.\nndviViz = {'min': -1, 'max': 1, 'palette': ['FF0000', '00FF00']}\nndwiViz = {'min': 0.5, 'max': 1, 'palette': ['00FFFF', '0000FF']}\n\n# Mask and mosaic visualization images. The last layer is on top.\nmosaic = ee.ImageCollection([\n # NDWI > 0.5 is water. Visualize it with a blue palette.\n ndwi.updateMask(ndwi.gte(0.5)).visualize(**ndwiViz),\n # NDVI > 0.2 is vegetation. 
Visualize it with a green palette.\n ndvi.updateMask(ndvi.gte(0.2)).visualize(**ndviViz),\n # Visualize bare areas with shadow (bare2 but not bare1) as gray.\n bare2.updateMask(bare2.And(bare1.Not())).visualize(**{'palette': ['AAAAAA']}),\n # Visualize the other bare areas as white.\n bare1.updateMask(bare1).visualize(**{'palette': ['FFFFFF']}),\n]).mosaic()\nMap.addLayer(mosaic, {}, 'Visualization mosaic')\n\n\n\n# # This function masks clouds in Landsat 8 imagery.\n# maskClouds = function(image) {\n# scored = ee.Algorithms.Landsat.simpleCloudScore(image)\n# return image.updateMask(scored.select(['cloud']).lt(20))\n# }\n\n# # This function masks clouds and adds quality bands to Landsat 8 images.\n# addQualityBands = function(image) {\n# return maskClouds(image)\n# # NDVI \\\n# .addBands(image.normalizedDifference(['B5', 'B4']))\n# # time in days \\\n# .addBands(image.metadata('system:time_start'))\n# }\n\n# # Load a 2014 Landsat 8 ImageCollection.\n# # Map the cloud masking and quality band function over the collection.\n# collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \\\n# .filterDate('2014-06-01', '2014-12-31') \\\n# .map(addQualityBands)\n\n# # Create a cloud-free, most recent value composite.\n# recentValueComposite = collection.qualityMosaic('system:time_start')\n\n# # Create a greenest pixel composite.\n# greenestPixelComposite = collection.qualityMosaic('nd')\n\n# # Display the results.\n# Map.setCenter(-122.374, 37.8239, 12) # San Francisco Bay\n# vizParams = {'bands': ['B5', 'B4', 'B3'], 'min': 0, 'max': 0.4}\n# Map.addLayer(recentValueComposite, vizParams, 'recent value composite')\n# Map.addLayer(greenestPixelComposite, vizParams, 'greenest pixel composite')\n\n# # Compare to a cloudy image in the collection.\n# cloudy = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140825')\n# Map.addLayer(cloudy, vizParams, 'cloudy')\n\n",
"_____no_output_____"
]
],
[
[
"## Display Earth Engine data layers ",
"_____no_output_____"
]
],
[
[
"Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.\nMap",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd7d795cc14c10095b07a7081854e4b0095c299 | 67,026 | ipynb | Jupyter Notebook | 2. CNN/ThreeX_6D_combine_3X.ipynb | nikhil-mathews/MastersPr_Predicting-Human-Pathogen-PPIs-using-Natural-Language-Processing-methods | 78bbaaf5e4e52939a522fe14aedbf5acfd29e10c | [
"MIT"
] | null | null | null | 2. CNN/ThreeX_6D_combine_3X.ipynb | nikhil-mathews/MastersPr_Predicting-Human-Pathogen-PPIs-using-Natural-Language-Processing-methods | 78bbaaf5e4e52939a522fe14aedbf5acfd29e10c | [
"MIT"
] | null | null | null | 2. CNN/ThreeX_6D_combine_3X.ipynb | nikhil-mathews/MastersPr_Predicting-Human-Pathogen-PPIs-using-Natural-Language-Processing-methods | 78bbaaf5e4e52939a522fe14aedbf5acfd29e10c | [
"MIT"
] | null | null | null | 67,026 | 67,026 | 0.854997 | [
[
[
"import pandas as pd\n#Google colab does not have pickle5\ntry:\n import pickle5 as pickle\nexcept:\n !pip install pickle5\n import pickle5 as pickle\nimport os\nimport seaborn as sns\n\nimport sys\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences\nfrom keras.layers import Dense, Input, GlobalMaxPooling1D,Flatten\nfrom keras.layers import Conv1D, MaxPooling1D, Embedding,Concatenate\nfrom keras.models import Model\nfrom sklearn.metrics import roc_auc_score,confusion_matrix,roc_curve, auc\nfrom numpy import random\nfrom keras.layers import LSTM, Bidirectional, GlobalMaxPool1D, Dropout\nfrom keras.optimizers import Adam\nfrom keras.utils.vis_utils import plot_model\n\nimport sys\nsys.path.insert(0,'/content/drive/MyDrive/ML_Data/')\nimport functions as f",
"_____no_output_____"
],
[
"def load_data(D=1,randomize=False):\n try:\n with open('/content/drive/MyDrive/ML_Data/df_train_'+str(D)+'D.pickle', 'rb') as handle:\n df_train = pickle.load(handle)\n except:\n df_train = pd.read_pickle(\"C:/Users/nik00/py/proj/hyppi-train.pkl\")\n try:\n with open('/content/drive/MyDrive/ML_Data/df_test_'+str(D)+'D.pickle', 'rb') as handle:\n df_test = pickle.load(handle)\n except:\n df_test = pd.read_pickle(\"C:/Users/nik00/py/proj/hyppi-independent.pkl\")\n if randomize:\n return shuff_together(df_train,df_test)\n else:\n return df_train,df_test\n\ndf_train,df_test = load_data(6)\nprint('The data used will be:')\ndf_train[['Human','Yersinia']]",
"The data used will be:\n"
],
[
"lengths = sorted(len(s) for s in df_train['Human'])\nprint(\"Median length of Human sequence is\",lengths[len(lengths)//2])\n_ = sns.displot(lengths)\n_=plt.title(\"Most Human sequences seem to be less than 2000 in length\")",
"Median length of Human sequence is 477\n"
],
[
"lengths = sorted(len(s) for s in df_train['Yersinia'])\nprint(\"Median length of Yersinia sequence is\",lengths[len(lengths)//2])\n_ = sns.displot(lengths)\n_=plt.title(\"Most Yersinia sequences seem to be less than 1000 in length\")",
"Median length of Yersinia sequence is 334\n"
],
[
"rows = df_train['Joined'].shape[0]\nlengths = sorted(len(s) for s in df_train['Joined'])\nprint(\"Median length of Joined sequence is\",lengths[len(lengths)//2])\n_ = sns.displot(lengths)\n_=plt.title(\"Most Joined sequences seem to be less than 2000 in length\")",
"Median length of Joined sequence is 877\n"
],
[
"data1_6D_doubleip_pre,data2_6D_doubleip_pre,data1_test_6D_doubleip_pre,data2_test_6D_doubleip_pre,num_words_6D,MAX_SEQUENCE_LENGTH_6D_dIP,MAX_VOCAB_SIZE_6D = f.get_seq_data_doubleip(850000,500,df_train,df_test, pad='pre')",
"MAX_VOCAB_SIZE is 850000\nMAX_SEQUENCE_LENGTH is 500\nmax sequences1_train length: 5630\nmin sequences1_train length: 0\nmedian sequences1_train length: 199\nmax word index sequences1_train: 849999\nmax sequences2_train length: 3705\nmin sequences2_train length: 0\nmedian sequences2_train length: 329\nmax word index sequences2_train: 849999\nFound 2438322 unique tokens in tokenizer1.\nFound 864366 unique tokens in tokenizer2.\npre padding\nShape of data1 tensor: (6270, 500)\nShape of data2 tensor: (6270, 500)\nmax test_sequences1 length: 5630\nmin test_sequences1 length: 0\nmedian test_sequences1 length: 70\nmax test_sequences2 length: 3705\nmin test_sequences2 length: 0\nmedian test_sequences2 length: 284\npre padding for test seq.\nShape of test_data1 tensor: (1514, 500)\nShape of test_data2 tensor: (1514, 500)\nnum_words is 850000\n"
],
[
"data1_6D_doubleip_center,data2_6D_doubleip_center,data1_test_6D_doubleip_center,data2_test_6D_doubleip_center,num_words_6D,MAX_SEQUENCE_LENGTH_6D_dIP,MAX_VOCAB_SIZE_6D = f.get_seq_data_doubleip(850000,500,df_train,df_test, pad='center')",
"MAX_VOCAB_SIZE is 850000\nMAX_SEQUENCE_LENGTH is 500\nmax sequences1_train length: 5630\nmin sequences1_train length: 0\nmedian sequences1_train length: 199\nmax word index sequences1_train: 849999\nmax sequences2_train length: 3705\nmin sequences2_train length: 0\nmedian sequences2_train length: 329\nmax word index sequences2_train: 849999\nFound 2438322 unique tokens in tokenizer1.\nFound 864366 unique tokens in tokenizer2.\nCenter padding\nShape of data1 tensor: (6270, 500)\nShape of data2 tensor: (6270, 500)\nmax test_sequences1 length: 5630\nmin test_sequences1 length: 0\nmedian test_sequences1 length: 70\nmax test_sequences2 length: 3705\nmin test_sequences2 length: 0\nmedian test_sequences2 length: 284\nCenter padding for test seq.\nShape of test_data1 tensor: (1514, 500)\nShape of test_data2 tensor: (1514, 500)\nnum_words is 850000\n"
],
[
"data1_6D_doubleip_post,data2_6D_doubleip_post,data1_test_6D_doubleip_post,data2_test_6D_doubleip_post,num_words_6D,MAX_SEQUENCE_LENGTH_6D_dIP,MAX_VOCAB_SIZE_6D = f.get_seq_data_doubleip(850000,500,df_train,df_test, pad='post')",
"MAX_VOCAB_SIZE is 850000\nMAX_SEQUENCE_LENGTH is 500\nmax sequences1_train length: 5630\nmin sequences1_train length: 0\nmedian sequences1_train length: 199\nmax word index sequences1_train: 849999\nmax sequences2_train length: 3705\nmin sequences2_train length: 0\nmedian sequences2_train length: 329\nmax word index sequences2_train: 849999\nFound 2438322 unique tokens in tokenizer1.\nFound 864366 unique tokens in tokenizer2.\npost padding\nShape of data1 tensor: (6270, 500)\nShape of data2 tensor: (6270, 500)\nmax test_sequences1 length: 5630\nmin test_sequences1 length: 0\nmedian test_sequences1 length: 70\nmax test_sequences2 length: 3705\nmin test_sequences2 length: 0\nmedian test_sequences2 length: 284\npost padding for test seq.\nShape of test_data1 tensor: (1514, 500)\nShape of test_data2 tensor: (1514, 500)\nnum_words is 850000\n"
],
[
"data_6D_join_pre,data_test_6D_join_pre,num_words_6D,MAX_SEQUENCE_LENGTH_6D_J,MAX_VOCAB_SIZE_6D = f.get_seq_data_join(850000,500,df_train,df_test,pad='pre')",
"MAX_VOCAB_SIZE is 850000\nMAX_SEQUENCE_LENGTH is 500\nmax sequence_data length: 6856\nmin sequence_data length: 5\nmedian sequence_data length: 436\nmax word index: 849999\nFound 3205693 unique tokens.\npre padding.\nShape of data tensor: (6270, 500)\nmax sequences_test length: 5558\nmin sequences_test length: 4\nmedian sequences_test length: 261\npre padding for test seq.\nShape of data_test tensor: (1514, 500)\nnum_words is 850000\n"
],
[
"data_6D_join_center,data_test_6D_join_center,num_words_6D,MAX_SEQUENCE_LENGTH_6D_J,MAX_VOCAB_SIZE_6D = f.get_seq_data_join(850000,500,df_train,df_test,pad='center')",
"MAX_VOCAB_SIZE is 850000\nMAX_SEQUENCE_LENGTH is 500\nmax sequence_data length: 6856\nmin sequence_data length: 5\nmedian sequence_data length: 436\nmax word index: 849999\nFound 3205693 unique tokens.\nCenter padding.\nShape of data tensor: (6270, 500)\nmax sequences_test length: 5558\nmin sequences_test length: 4\nmedian sequences_test length: 261\nCenter padding for test seq.\nShape of data_test tensor: (1514, 500)\nnum_words is 850000\n"
],
[
"data_6D_join_post,data_test_6D_join_post,num_words_6D,MAX_SEQUENCE_LENGTH_6D_J,MAX_VOCAB_SIZE_6D = f.get_seq_data_join(850000,500,df_train,df_test,pad='post')",
"MAX_VOCAB_SIZE is 850000\nMAX_SEQUENCE_LENGTH is 500\nmax sequence_data length: 6856\nmin sequence_data length: 5\nmedian sequence_data length: 436\nmax word index: 849999\nFound 3205693 unique tokens.\npost padding.\nShape of data tensor: (6270, 500)\nmax sequences_test length: 5558\nmin sequences_test length: 4\nmedian sequences_test length: 261\npost padding for test seq.\nShape of data_test tensor: (1514, 500)\nnum_words is 850000\n"
],
[
"EMBEDDING_DIM_6D = 15\nVALIDATION_SPLIT = 0.2\nBATCH_SIZE = 128\nEPOCHS = 5\nDROP = 0.5\nx1_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx2_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx3_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\n\nx1_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx2_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx3_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx4_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx5_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx6_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\n\nconcatenator = Concatenate(axis=1)\nx = concatenator([x1_join.output, x2_join.output, x3_join.output, x1_doubleip.output, x2_doubleip.output, x3_doubleip.output, x4_doubleip.output, x5_doubleip.output, x6_doubleip.output])\nx = Dense(128)(x)\nx = Dropout(0.3)(x)\noutput = Dense(1, activation=\"sigmoid\",name=\"Final\")(x)\nmodel6D_CNN_combine = Model(inputs=[x1_join.input, x2_join.input, x3_join.input, x1_doubleip.input, x2_doubleip.input, x3_doubleip.input, x4_doubleip.input, x5_doubleip.input, x6_doubleip.input], outputs=output)\n\nmodel6D_CNN_combine.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\n\ntrains = [data_6D_join_pre,data_6D_join_center,data_6D_join_post, data1_6D_doubleip_pre,data1_6D_doubleip_center,data1_6D_doubleip_post, data2_6D_doubleip_pre,data2_6D_doubleip_center,data2_6D_doubleip_post]\ntests = [data_test_6D_join_pre,data_test_6D_join_center,data_test_6D_join_post, data1_test_6D_doubleip_pre,data1_test_6D_doubleip_center,data1_test_6D_doubleip_post, 
data2_test_6D_doubleip_pre,data2_test_6D_doubleip_center,data2_test_6D_doubleip_post]\n\nmodel6D_CNN_combine.fit(trains, df_train['label'].values, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(tests,df_test['label'].values))\nprint(roc_auc_score(df_test['label'].values, model6D_CNN_combine.predict(tests)))\n\n#batch_size=BATCH_SIZE,",
"Epoch 1/5\n49/49 [==============================] - 55s 1s/step - loss: 0.5595 - accuracy: 0.7042 - val_loss: 0.4239 - val_accuracy: 0.8124\nEpoch 2/5\n49/49 [==============================] - 51s 1s/step - loss: 0.2838 - accuracy: 0.8787 - val_loss: 0.4225 - val_accuracy: 0.8104\nEpoch 3/5\n49/49 [==============================] - 52s 1s/step - loss: 0.0550 - accuracy: 0.9841 - val_loss: 0.4822 - val_accuracy: 0.8342\nEpoch 4/5\n49/49 [==============================] - 51s 1s/step - loss: 0.0061 - accuracy: 0.9992 - val_loss: 0.5706 - val_accuracy: 0.8342\nEpoch 5/5\n49/49 [==============================] - 51s 1s/step - loss: 0.0010 - accuracy: 0.9999 - val_loss: 0.5941 - val_accuracy: 0.8362\n0.8978952934216795\n"
],
[
"EMBEDDING_DIM_6D = 15\nVALIDATION_SPLIT = 0.2\nBATCH_SIZE = 128\nEPOCHS = 5\nDROP = 0.6\nx1_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx2_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx3_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\n\nx1_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx2_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx3_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx4_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx5_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx6_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\n\nconcatenator = Concatenate(axis=1)\nx = concatenator([x1_join.output, x2_join.output, x3_join.output, x1_doubleip.output, x2_doubleip.output, x3_doubleip.output, x4_doubleip.output, x5_doubleip.output, x6_doubleip.output])\nx = Dense(128)(x)\nx = Dropout(DROP)(x)\noutput = Dense(1, activation=\"sigmoid\",name=\"Final\")(x)\nmodel6D_CNN_combine = Model(inputs=[x1_join.input, x2_join.input, x3_join.input, x1_doubleip.input, x2_doubleip.input, x3_doubleip.input, x4_doubleip.input, x5_doubleip.input, x6_doubleip.input], outputs=output)\n\nmodel6D_CNN_combine.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\n\ntrains = [data_6D_join_pre,data_6D_join_center,data_6D_join_post, data1_6D_doubleip_pre,data1_6D_doubleip_center,data1_6D_doubleip_post, data2_6D_doubleip_pre,data2_6D_doubleip_center,data2_6D_doubleip_post]\ntests = [data_test_6D_join_pre,data_test_6D_join_center,data_test_6D_join_post, data1_test_6D_doubleip_pre,data1_test_6D_doubleip_center,data1_test_6D_doubleip_post, 
data2_test_6D_doubleip_pre,data2_test_6D_doubleip_center,data2_test_6D_doubleip_post]\n\nmodel6D_CNN_combine.fit(trains, df_train['label'].values, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(tests,df_test['label'].values))\nprint(roc_auc_score(df_test['label'].values, model6D_CNN_combine.predict(tests)))\n\n#batch_size=BATCH_SIZE,",
"Epoch 1/5\n49/49 [==============================] - 54s 1s/step - loss: 0.5560 - accuracy: 0.7249 - val_loss: 0.4310 - val_accuracy: 0.8190\nEpoch 2/5\n49/49 [==============================] - 51s 1s/step - loss: 0.2973 - accuracy: 0.8787 - val_loss: 0.4525 - val_accuracy: 0.8058\nEpoch 3/5\n49/49 [==============================] - 51s 1s/step - loss: 0.0874 - accuracy: 0.9689 - val_loss: 0.4943 - val_accuracy: 0.8091\nEpoch 4/5\n49/49 [==============================] - 51s 1s/step - loss: 0.0158 - accuracy: 0.9948 - val_loss: 0.5412 - val_accuracy: 0.8316\nEpoch 5/5\n49/49 [==============================] - 51s 1s/step - loss: 0.0072 - accuracy: 0.9971 - val_loss: 0.6270 - val_accuracy: 0.8336\n0.8969232997527262\n"
],
[
"EMBEDDING_DIM_6D = 15\nVALIDATION_SPLIT = 0.2\nBATCH_SIZE = 128\nEPOCHS = 5\nDROP = 0.5\nx1_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx2_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx3_join = f.conv_model(MAX_SEQUENCE_LENGTH_6D_J,EMBEDDING_DIM_6D,num_words_6D,DROP)\n\nx1_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx2_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx3_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx4_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx5_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\nx6_doubleip = f.conv_model(MAX_SEQUENCE_LENGTH_6D_dIP,EMBEDDING_DIM_6D,num_words_6D,DROP)\n\nconcatenator = Concatenate(axis=1)\nx = concatenator([x1_join.output, x2_join.output, x3_join.output, x1_doubleip.output, x2_doubleip.output, x3_doubleip.output, x4_doubleip.output, x5_doubleip.output, x6_doubleip.output])\nx = Dense(128)(x)\nx = Dropout(0.3)(x)\noutput = Dense(1, activation=\"sigmoid\",name=\"Final\")(x)\nmodel6D_CNN_combine = Model(inputs=[x1_join.input, x2_join.input, x3_join.input, x1_doubleip.input, x2_doubleip.input, x3_doubleip.input, x4_doubleip.input, x5_doubleip.input, x6_doubleip.input], outputs=output)\n\nmodel6D_CNN_combine.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\n\ntrains = [data_6D_join_pre,data_6D_join_center,data_6D_join_post, data1_6D_doubleip_pre,data1_6D_doubleip_center,data1_6D_doubleip_post, data2_6D_doubleip_pre,data2_6D_doubleip_center,data2_6D_doubleip_post]\ntests = [data_test_6D_join_pre,data_test_6D_join_center,data_test_6D_join_post, data1_test_6D_doubleip_pre,data1_test_6D_doubleip_center,data1_test_6D_doubleip_post, 
data2_test_6D_doubleip_pre,data2_test_6D_doubleip_center,data2_test_6D_doubleip_post]\n\nmodel6D_CNN_combine.fit(trains, df_train['label'].values, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(tests,df_test['label'].values))\nprint(roc_auc_score(df_test['label'].values, model6D_CNN_combine.predict(tests)))\n\n#batch_size=BATCH_SIZE,",
"Epoch 1/5\n196/196 [==============================] - 147s 652ms/step - loss: 0.5561 - accuracy: 0.7148 - val_loss: 0.4054 - val_accuracy: 0.8111\nEpoch 2/5\n196/196 [==============================] - 127s 649ms/step - loss: 0.1430 - accuracy: 0.9490 - val_loss: 0.4004 - val_accuracy: 0.8355\nEpoch 3/5\n196/196 [==============================] - 127s 650ms/step - loss: 0.0155 - accuracy: 0.9949 - val_loss: 0.5931 - val_accuracy: 0.8487\nEpoch 4/5\n196/196 [==============================] - 127s 648ms/step - loss: 0.0025 - accuracy: 0.9992 - val_loss: 0.7275 - val_accuracy: 0.8289\nEpoch 5/5\n196/196 [==============================] - 127s 647ms/step - loss: 0.0014 - accuracy: 0.9998 - val_loss: 0.7876 - val_accuracy: 0.8243\n0.9139663449373437\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd7dc3d9e9e9f62d474ede965bfa5402f53be70 | 5,792 | ipynb | Jupyter Notebook | tests/boostrap.ipynb | oscarpimentel/astro-lightcurves-handler | 436dd17967df82e1f79d92f330220a0627dcecd1 | [
"MIT"
] | 1 | 2020-11-17T19:54:20.000Z | 2020-11-17T19:54:20.000Z | tests/boostrap.ipynb | oscarpimentel/astro-lightcurves-handler | 436dd17967df82e1f79d92f330220a0627dcecd1 | [
"MIT"
] | null | null | null | tests/boostrap.ipynb | oscarpimentel/astro-lightcurves-handler | 436dd17967df82e1f79d92f330220a0627dcecd1 | [
"MIT"
] | null | null | null | 33.097143 | 450 | 0.504662 | [
[
[
"import sys\nsys.path.append('../') # or just install the module\nsys.path.append('../../fuzzy-tools') # or just install the module",
"_____no_output_____"
],
[
"from fuzzytools.files import search_for_filedirs\nfrom lchandler import C_\n\nroot_folder = '../../surveys-save'\nfiledirs = search_for_filedirs(root_folder, fext=C_.EXT_RAW_LIGHTCURVE)\nfiledirs",
"_____no_output_____"
],
[
"%load_ext autoreload\n%autoreload 2\nimport numpy as np\nfrom fuzzytools.files import load_pickle, save_pickle, get_dict_from_filedir\n\nfiledir = '../../surveys-save/survey=alerceZTFv7.1~bands=gr~mode=onlySNe.ralcds'\nfiledict = get_dict_from_filedir(filedir)\nroot_folder = filedict['_rootdir']\ncfilename = filedict['_cfilename']\nsurvey = filedict['survey']\nlcdataset = load_pickle(filedir)\nprint(lcdataset)",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\nLCDataset:\n[outliers - samples 10]\n(.) obs_samples=541 - min_len=14 - max_dur=408.0[days] - dur(p50)=133.8[days] - cadence(p50)=1.0[days]\n(g) obs_samples=260 - min_len=6 - max_dur=408.0[days] - dur(p50)=133.8[days] - cadence(p50)=3.0[days]\n(r) obs_samples=281 - min_len=8 - max_dur=376.0[days] - dur(p50)=128.7[days] - cadence(p50)=3.0[days]\n |█▌ | SLSN - 2/10 (20.00%)\n |▊ | SNIa - 1/10 (10.00%)\n |█▌ | SNIbc - 2/10 (20.00%)\n |████ | allSNII - 5/10 (50.00%)\n────────────────────────────────────────────────────────────────────────────────────────────────────\n[faint - samples 48]\n(.) obs_samples=1,107 - min_len=7 - max_dur=298.1[days] - dur(p50)=40.5[days] - cadence(p50)=1.0[days]\n(g) obs_samples=450 - min_len=0 - max_dur=221.7[days] - dur(p50)=28.0[days] - cadence(p50)=2.9[days]\n(r) obs_samples=657 - min_len=4 - max_dur=298.1[days] - dur(p50)=38.9[days] - cadence(p50)=2.9[days]\n |█▏ | SLSN - 7/48 (14.58%)\n |███▊ | SNIa - 23/48 (47.92%)\n |▌ | SNIbc - 3/48 (6.25%)\n |██▌ | allSNII - 15/48 (31.25%)\n────────────────────────────────────────────────────────────────────────────────────────────────────\n[raw - samples 1,940]\n(.) obs_samples=53,326 - min_len=6 - max_dur=538.8[days] - dur(p50)=53.0[days] - cadence(p50)=1.0[days]\n(g) obs_samples=23,566 - min_len=0 - max_dur=538.7[days] - dur(p50)=39.0[days] - cadence(p50)=3.0[days]\n(r) obs_samples=29,760 - min_len=0 - max_dur=538.7[days] - dur(p50)=51.0[days] - cadence(p50)=3.0[days]\n | | SLSN - 22/1,940 (1.13%)\n |██████ | SNIa - 1,477/1,940 (76.13%)\n |▍ | SNIbc - 95/1,940 (4.90%)\n |█▍ | allSNII - 346/1,940 (17.84%)\n────────────────────────────────────────────────────────────────────────────────────────────────────\n\n"
],
[
"%load_ext autoreload\n%autoreload 2\nlcdataset['raw'].reset_boostrap() # fixme",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"%load_ext autoreload\n%autoreload 2\nlcdataset['raw'].get_boostrap_samples('SLSN', 5)\n",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n{'ZTF18ablwafp': 21, 'ZTF18abshezu': 21, 'ZTF18abxbmqh': 27, 'ZTF18acnnevs': 25, 'ZTF18acqyvag': 25, 'ZTF18acxgqxq': 27, 'ZTF18acyxnyw': 24, 'ZTF19aaeopgw': 26, 'ZTF19aafljiq': 22, 'ZTF19aalbrgu': 22, 'ZTF19aamrais': 24, 'ZTF19aanesgt': 26, 'ZTF19aarphwc': 27, 'ZTF19aaserwb': 29, 'ZTF19abaeyqw': 25, 'ZTF19abclykm': 27, 'ZTF19abnacvf': 31, 'ZTF19abpbopt': 30, 'ZTF19acfwynw': 30, 'ZTF19adcfsoc': 32, 'ZTF20aahbfmf': 29, 'ZTF20aayprqz': 25}\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
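The `get_boostrap_samples('SLSN', 5)` call in the record above prints a dict mapping each ZTF object name to how many times it was drawn. Assuming this is ordinary resampling with replacement, the per-object counting step can be sketched with NumPy as follows (the function name and signature here are illustrative, not the `lchandler` library's API):

```python
import numpy as np

def bootstrap_counts(object_names, n_per_object, seed=0):
    """Draw len(object_names) * n_per_object samples with replacement
    and return how many times each object was picked."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(object_names, size=len(object_names) * n_per_object, replace=True)
    names, counts = np.unique(draws, return_counts=True)
    return dict(zip(names.tolist(), counts.tolist()))

counts = bootstrap_counts(['ZTF18a', 'ZTF19b', 'ZTF20c'], n_per_object=5)
print(counts)
```

The total of the counts always equals the number of draws, while the per-object counts fluctuate around `n_per_object` — consistent with the roughly-equal counts (21 to 32) seen in the printed dict above.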
ecd7f8235ba2aa5340fa46ea9027581366dc44d1 | 29,257 | ipynb | Jupyter Notebook | notebooks_paper_2022/BERT_finetuning_v2/BERT_finetune_IMDB.ipynb | PlaytikaResearch/esntorch | 585369853e2bb7c46d782fd10469dd30597de2e3 | [
"MIT"
] | 1 | 2021-10-06T07:42:01.000Z | 2021-10-06T07:42:01.000Z | notebooks_paper_2022/BERT_finetuning_v2/BERT_finetune_IMDB.ipynb | PlaytikaResearch/esntorch | 585369853e2bb7c46d782fd10469dd30597de2e3 | [
"MIT"
] | null | null | null | notebooks_paper_2022/BERT_finetuning_v2/BERT_finetune_IMDB.ipynb | PlaytikaResearch/esntorch | 585369853e2bb7c46d782fd10469dd30597de2e3 | [
"MIT"
] | null | null | null | 37.460948 | 422 | 0.50217 | [
[
[
"# IMDB\n# BERT finetuning",
"_____no_output_____"
],
[
"## Librairy",
"_____no_output_____"
]
],
[
[
"# !pip install transformers==4.8.2\n# !pip install datasets==1.7.0",
"_____no_output_____"
],
[
"import os\nimport time\nimport pickle\n\nimport numpy as np\nimport torch\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score\n\nfrom transformers import BertTokenizer, BertTokenizerFast\nfrom transformers import BertForSequenceClassification, AdamW\nfrom transformers import Trainer, TrainingArguments\nfrom transformers import EarlyStoppingCallback\nfrom transformers.data.data_collator import DataCollatorWithPadding\n\nfrom datasets import load_dataset, Dataset, concatenate_datasets",
"_____no_output_____"
],
[
"device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\ndevice",
"_____no_output_____"
]
],
[
[
"## Global variables",
"_____no_output_____"
]
],
[
[
"BATCH_SIZE = 24 # cf. paper Sun et al.\nNB_EPOCHS = 4 # cf. paper Sun et al.",
"_____no_output_____"
],
[
"CURRENT_PATH = '~/Results/BERT_finetune' # put your path here",
"_____no_output_____"
],
[
"RESULTS_FILE = os.path.join(CURRENT_PATH, 'imdb_results.pkl')\nRESULTS_DIR = os.path.join(CURRENT_PATH,'imdb/')",
"_____no_output_____"
],
[
"CACHE_DIR = '~/Data/huggignface/' # put your path here",
"_____no_output_____"
]
],
[
[
"## Dataset",
"_____no_output_____"
]
],
[
[
"# download dataset\n\nraw_datasets = load_dataset('imdb', cache_dir=CACHE_DIR)",
"_____no_output_____"
],
[
"# tokenize\n\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\n\ndef tokenize_function(examples):\n return tokenizer(examples[\"text\"], padding=True, truncation=True)\n\ntokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\ntokenized_datasets.set_format(type='torch', columns=['input_ids', 'attention_mask', 'label'])\n\ntrain_dataset = tokenized_datasets[\"train\"].shuffle(seed=42)\ntrain_val_datasets = train_dataset.train_test_split(train_size=0.8)\n\ntrain_dataset = train_val_datasets['train'].rename_column('label', 'labels')\nval_dataset = train_val_datasets['test'].rename_column('label', 'labels')\ntest_dataset = tokenized_datasets[\"test\"].shuffle(seed=42).rename_column('label', 'labels')",
"_____no_output_____"
],
[
"# get number of labels\n\nnum_labels = len(set(train_dataset['labels'].tolist()))\nnum_labels",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"#### Model",
"_____no_output_____"
]
],
[
[
"model = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=num_labels)\nmodel.to(device)",
"Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight']\n- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\nSome weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
]
],
[
[
"#### Training",
"_____no_output_____"
]
],
[
[
"training_args = TrainingArguments(\n \n # output\n output_dir=RESULTS_DIR, \n \n # params\n num_train_epochs=NB_EPOCHS, # nb of epochs\n per_device_train_batch_size=BATCH_SIZE, # batch size per device during training\n per_device_eval_batch_size=BATCH_SIZE, # cf. paper Sun et al.\n learning_rate=2e-5, # cf. paper Sun et al.\n# warmup_steps=500, # number of warmup steps for learning rate scheduler\n warmup_ratio=0.1, # cf. paper Sun et al.\n weight_decay=0.01, # strength of weight decay\n \n# # eval\n evaluation_strategy=\"steps\", # cf. paper Sun et al.\n eval_steps=50, # 20 # cf. paper Sun et al.\n# evaluation_strategy='no', # no more evaluation, takes time\n \n # log\n logging_dir=RESULTS_DIR+'logs', \n logging_strategy='steps',\n logging_steps=50, # 20\n \n # save\n save_strategy='steps',\n save_total_limit=1,\n # save_steps=20, # default 500\n load_best_model_at_end=True, # cf. paper Sun et al.\n metric_for_best_model='eval_loss' \n)",
"_____no_output_____"
],
[
"def compute_metrics(p):\n \n pred, labels = p\n pred = np.argmax(pred, axis=1)\n\n accuracy = accuracy_score(y_true=labels, y_pred=pred)\n \n return {\"val_accuracy\": accuracy}",
"_____no_output_____"
],
[
"trainer = Trainer(\n model=model,\n args=training_args,\n tokenizer=tokenizer,\n train_dataset=train_dataset,\n eval_dataset=val_dataset,\n # compute_metrics=compute_metrics,\n # callbacks=[EarlyStoppingCallback(early_stopping_patience=5)]\n)",
"_____no_output_____"
],
[
"results = trainer.train()",
"_____no_output_____"
],
[
"training_time = results.metrics[\"train_runtime\"]\ntraining_time_per_epoch = training_time / training_args.num_train_epochs\ntraining_time_per_epoch",
"_____no_output_____"
],
[
"trainer.save_model(os.path.join(RESULTS_DIR, 'checkpoint_best_model'))",
"_____no_output_____"
]
],
[
[
"## Results",
"_____no_output_____"
]
],
[
[
"# load model\nmodel_file = os.path.join(RESULTS_DIR, 'checkpoint_best_model')\nfinetuned_model = BertForSequenceClassification.from_pretrained(model_file, num_labels=num_labels)\nfinetuned_model.to(device)\nfinetuned_model.eval()\n\n# compute test acc\ntest_trainer = Trainer(finetuned_model, data_collator=DataCollatorWithPadding(tokenizer))\nraw_preds, labels, _ = test_trainer.predict(test_dataset)\npreds = np.argmax(raw_preds, axis=1)\ntest_acc = accuracy_score(y_true=labels, y_pred=preds)\n\n# save results\nresults_d = {}\nresults_d['accuracy'] = test_acc\nresults_d['training_time'] = training_time",
"_____no_output_____"
],
[
"results_d",
"_____no_output_____"
],
[
"# save results\n\nwith open(RESULTS_FILE, 'wb') as fh:\n pickle.dump(results_d, fh)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecd7fae9a624797c8680dff60d00f477bd2c6c4b | 46,416 | ipynb | Jupyter Notebook | notebooks/homecdt_model/bear_LighGBM_Test2.ipynb | ss9202150/Project_1 | 349dbf8cd42b074c2a897e84ed360f061f07dc0b | [
"MIT"
] | null | null | null | notebooks/homecdt_model/bear_LighGBM_Test2.ipynb | ss9202150/Project_1 | 349dbf8cd42b074c2a897e84ed360f061f07dc0b | [
"MIT"
] | null | null | null | notebooks/homecdt_model/bear_LighGBM_Test2.ipynb | ss9202150/Project_1 | 349dbf8cd42b074c2a897e84ed360f061f07dc0b | [
"MIT"
] | null | null | null | 62.979647 | 322 | 0.625625 | [
[
[
"# Forked from excellent kernel : https://www.kaggle.com/jsaguiar/updated-0-792-lb-lightgbm-with-simple-features\n# From Kaggler : https://www.kaggle.com/jsaguiar\n# Just added a few features so I thought I had to make release it as well...\n\nimport numpy as np\nimport pandas as pd\nimport gc\nimport time\nfrom contextlib import contextmanager\nimport lightgbm as lgb\nfrom sklearn.metrics import roc_auc_score, roc_curve\nfrom sklearn.model_selection import KFold, StratifiedKFold\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)",
"_____no_output_____"
],
[
"features_with_no_imp_at_least_twice = [\n 'ACTIVE_CNT_CREDIT_PROLONG_SUM', 'ACTIVE_CREDIT_DAY_OVERDUE_MEAN', 'AMT_REQ_CREDIT_BUREAU_DAY', 'AMT_REQ_CREDIT_BUREAU_HOUR',\n 'AMT_REQ_CREDIT_BUREAU_WEEK', 'BURO_CNT_CREDIT_PROLONG_SUM', 'BURO_CREDIT_ACTIVE_Bad debt_MEAN', 'BURO_CREDIT_ACTIVE_nan_MEAN',\n 'BURO_CREDIT_CURRENCY_currency 1_MEAN', 'BURO_CREDIT_CURRENCY_currency 2_MEAN', 'BURO_CREDIT_CURRENCY_currency 3_MEAN',\n 'BURO_CREDIT_CURRENCY_currency 4_MEAN', 'BURO_CREDIT_CURRENCY_nan_MEAN', 'BURO_CREDIT_DAY_OVERDUE_MAX', 'BURO_CREDIT_DAY_OVERDUE_MEAN',\n 'BURO_CREDIT_TYPE_Cash loan (non-earmarked)_MEAN', 'BURO_CREDIT_TYPE_Interbank credit_MEAN', 'BURO_CREDIT_TYPE_Loan for business development_MEAN',\n 'BURO_CREDIT_TYPE_Loan for purchase of shares (margin lending)_MEAN', 'BURO_CREDIT_TYPE_Loan for the purchase of equipment_MEAN',\n 'BURO_CREDIT_TYPE_Loan for working capital replenishment_MEAN', 'BURO_CREDIT_TYPE_Mobile operator loan_MEAN',\n 'BURO_CREDIT_TYPE_Real estate loan_MEAN', 'BURO_CREDIT_TYPE_Unknown type of loan_MEAN', 'BURO_CREDIT_TYPE_nan_MEAN',\n 'BURO_MONTHS_BALANCE_MAX_MAX', 'BURO_STATUS_2_MEAN_MEAN', 'BURO_STATUS_3_MEAN_MEAN', 'BURO_STATUS_4_MEAN_MEAN', 'BURO_STATUS_5_MEAN_MEAN',\n 'BURO_STATUS_nan_MEAN_MEAN', 'CC_AMT_DRAWINGS_ATM_CURRENT_MIN', 'CC_AMT_DRAWINGS_CURRENT_MIN', 'CC_AMT_DRAWINGS_OTHER_CURRENT_MAX',\n 'CC_AMT_DRAWINGS_OTHER_CURRENT_MEAN', 'CC_AMT_DRAWINGS_OTHER_CURRENT_MIN', 'CC_AMT_DRAWINGS_OTHER_CURRENT_SUM',\n 'CC_AMT_DRAWINGS_OTHER_CURRENT_VAR', 'CC_AMT_INST_MIN_REGULARITY_MIN', 'CC_AMT_PAYMENT_TOTAL_CURRENT_MIN', 'CC_AMT_PAYMENT_TOTAL_CURRENT_VAR',\n 'CC_AMT_RECIVABLE_SUM', 'CC_AMT_TOTAL_RECEIVABLE_MAX', 'CC_AMT_TOTAL_RECEIVABLE_MIN', 'CC_AMT_TOTAL_RECEIVABLE_SUM', 'CC_AMT_TOTAL_RECEIVABLE_VAR',\n 'CC_CNT_DRAWINGS_ATM_CURRENT_MIN', 'CC_CNT_DRAWINGS_CURRENT_MIN', 'CC_CNT_DRAWINGS_OTHER_CURRENT_MAX', 'CC_CNT_DRAWINGS_OTHER_CURRENT_MEAN',\n 'CC_CNT_DRAWINGS_OTHER_CURRENT_MIN', 'CC_CNT_DRAWINGS_OTHER_CURRENT_SUM', 
'CC_CNT_DRAWINGS_OTHER_CURRENT_VAR', 'CC_CNT_DRAWINGS_POS_CURRENT_SUM',\n 'CC_CNT_INSTALMENT_MATURE_CUM_MAX', 'CC_CNT_INSTALMENT_MATURE_CUM_MIN', 'CC_COUNT', 'CC_MONTHS_BALANCE_MAX', 'CC_MONTHS_BALANCE_MEAN',\n 'CC_MONTHS_BALANCE_MIN', 'CC_MONTHS_BALANCE_SUM', 'CC_NAME_CONTRACT_STATUS_Active_MAX', 'CC_NAME_CONTRACT_STATUS_Active_MIN',\n 'CC_NAME_CONTRACT_STATUS_Approved_MAX', 'CC_NAME_CONTRACT_STATUS_Approved_MEAN', 'CC_NAME_CONTRACT_STATUS_Approved_MIN',\n 'CC_NAME_CONTRACT_STATUS_Approved_SUM', 'CC_NAME_CONTRACT_STATUS_Approved_VAR', 'CC_NAME_CONTRACT_STATUS_Completed_MAX',\n 'CC_NAME_CONTRACT_STATUS_Completed_MEAN', 'CC_NAME_CONTRACT_STATUS_Completed_MIN', 'CC_NAME_CONTRACT_STATUS_Completed_SUM', 'CC_NAME_CONTRACT_STATUS_Completed_VAR',\n 'CC_NAME_CONTRACT_STATUS_Demand_MAX', 'CC_NAME_CONTRACT_STATUS_Demand_MEAN', 'CC_NAME_CONTRACT_STATUS_Demand_MIN', 'CC_NAME_CONTRACT_STATUS_Demand_SUM',\n 'CC_NAME_CONTRACT_STATUS_Demand_VAR', 'CC_NAME_CONTRACT_STATUS_Refused_MAX', 'CC_NAME_CONTRACT_STATUS_Refused_MEAN', 'CC_NAME_CONTRACT_STATUS_Refused_MIN',\n 'CC_NAME_CONTRACT_STATUS_Refused_SUM', 'CC_NAME_CONTRACT_STATUS_Refused_VAR', 'CC_NAME_CONTRACT_STATUS_Sent proposal_MAX',\n 'CC_NAME_CONTRACT_STATUS_Sent proposal_MEAN', 'CC_NAME_CONTRACT_STATUS_Sent proposal_MIN', 'CC_NAME_CONTRACT_STATUS_Sent proposal_SUM',\n 'CC_NAME_CONTRACT_STATUS_Sent proposal_VAR', 'CC_NAME_CONTRACT_STATUS_Signed_MAX', 'CC_NAME_CONTRACT_STATUS_Signed_MEAN', 'CC_NAME_CONTRACT_STATUS_Signed_MIN',\n 'CC_NAME_CONTRACT_STATUS_Signed_SUM', 'CC_NAME_CONTRACT_STATUS_Signed_VAR', 'CC_NAME_CONTRACT_STATUS_nan_MAX', 'CC_NAME_CONTRACT_STATUS_nan_MEAN',\n 'CC_NAME_CONTRACT_STATUS_nan_MIN', 'CC_NAME_CONTRACT_STATUS_nan_SUM', 'CC_NAME_CONTRACT_STATUS_nan_VAR', 'CC_SK_DPD_DEF_MAX',\n 'CC_SK_DPD_DEF_MIN', 'CC_SK_DPD_DEF_SUM', 'CC_SK_DPD_DEF_VAR', 'CC_SK_DPD_MAX', 'CC_SK_DPD_MEAN', 'CC_SK_DPD_MIN', 'CC_SK_DPD_SUM',\n 'CC_SK_DPD_VAR', 'CLOSED_AMT_CREDIT_SUM_LIMIT_MEAN', 'CLOSED_AMT_CREDIT_SUM_LIMIT_SUM', 
'CLOSED_AMT_CREDIT_SUM_OVERDUE_MEAN',\n 'CLOSED_CNT_CREDIT_PROLONG_SUM', 'CLOSED_CREDIT_DAY_OVERDUE_MAX', 'CLOSED_CREDIT_DAY_OVERDUE_MEAN', 'CLOSED_MONTHS_BALANCE_MAX_MAX',\n 'CNT_CHILDREN', 'ELEVATORS_MEDI', 'ELEVATORS_MODE', 'EMERGENCYSTATE_MODE_No', 'EMERGENCYSTATE_MODE_Yes', 'ENTRANCES_MODE', 'FLAG_CONT_MOBILE',\n 'FLAG_DOCUMENT_10', 'FLAG_DOCUMENT_11', 'FLAG_DOCUMENT_12', 'FLAG_DOCUMENT_13', 'FLAG_DOCUMENT_14', 'FLAG_DOCUMENT_15', 'FLAG_DOCUMENT_16',\n 'FLAG_DOCUMENT_17', 'FLAG_DOCUMENT_19', 'FLAG_DOCUMENT_2', 'FLAG_DOCUMENT_20', 'FLAG_DOCUMENT_21', 'FLAG_DOCUMENT_4', 'FLAG_DOCUMENT_5',\n 'FLAG_DOCUMENT_6', 'FLAG_DOCUMENT_7', 'FLAG_DOCUMENT_9', 'FLAG_EMAIL', 'FLAG_EMP_PHONE', 'FLAG_MOBIL', 'FLAG_OWN_CAR', 'FLOORSMAX_MODE',\n 'FONDKAPREMONT_MODE_not specified', 'FONDKAPREMONT_MODE_org spec account', 'FONDKAPREMONT_MODE_reg oper account', 'FONDKAPREMONT_MODE_reg oper spec account',\n 'HOUSETYPE_MODE_block of flats', 'HOUSETYPE_MODE_specific housing', 'HOUSETYPE_MODE_terraced house', 'LIVE_REGION_NOT_WORK_REGION',\n 'NAME_CONTRACT_TYPE_Revolving loans', 'NAME_EDUCATION_TYPE_Academic degree', 'NAME_FAMILY_STATUS_Civil marriage', 'NAME_FAMILY_STATUS_Single / not married',\n 'NAME_FAMILY_STATUS_Unknown', 'NAME_FAMILY_STATUS_Widow', 'NAME_HOUSING_TYPE_Co-op apartment', 'NAME_HOUSING_TYPE_With parents',\n 'NAME_INCOME_TYPE_Businessman', 'NAME_INCOME_TYPE_Maternity leave', 'NAME_INCOME_TYPE_Pensioner', 'NAME_INCOME_TYPE_Student',\n 'NAME_INCOME_TYPE_Unemployed', 'NAME_TYPE_SUITE_Children', 'NAME_TYPE_SUITE_Family', 'NAME_TYPE_SUITE_Group of people',\n 'NAME_TYPE_SUITE_Other_A', 'NAME_TYPE_SUITE_Other_B', 'NAME_TYPE_SUITE_Spouse, partner', 'NAME_TYPE_SUITE_Unaccompanied',\n 'NEW_RATIO_BURO_AMT_CREDIT_SUM_DEBT_MEAN', 'NEW_RATIO_BURO_AMT_CREDIT_SUM_LIMIT_SUM', 'NEW_RATIO_BURO_AMT_CREDIT_SUM_OVERDUE_MEAN',\n 'NEW_RATIO_BURO_CNT_CREDIT_PROLONG_SUM', 'NEW_RATIO_BURO_CREDIT_DAY_OVERDUE_MAX', 'NEW_RATIO_BURO_CREDIT_DAY_OVERDUE_MEAN', 'NEW_RATIO_BURO_MONTHS_BALANCE_MAX_MAX',\n 
'NEW_RATIO_PREV_AMT_DOWN_PAYMENT_MIN', 'NEW_RATIO_PREV_RATE_DOWN_PAYMENT_MAX', 'OCCUPATION_TYPE_Cleaning staff', 'OCCUPATION_TYPE_Cooking staff',\n 'OCCUPATION_TYPE_HR staff', 'OCCUPATION_TYPE_IT staff', 'OCCUPATION_TYPE_Low-skill Laborers', 'OCCUPATION_TYPE_Managers',\n 'OCCUPATION_TYPE_Private service staff', 'OCCUPATION_TYPE_Realty agents', 'OCCUPATION_TYPE_Sales staff', 'OCCUPATION_TYPE_Secretaries',\n 'OCCUPATION_TYPE_Security staff', 'OCCUPATION_TYPE_Waiters/barmen staff', 'ORGANIZATION_TYPE_Advertising', 'ORGANIZATION_TYPE_Agriculture',\n 'ORGANIZATION_TYPE_Business Entity Type 1', 'ORGANIZATION_TYPE_Business Entity Type 2', 'ORGANIZATION_TYPE_Cleaning', 'ORGANIZATION_TYPE_Culture',\n 'ORGANIZATION_TYPE_Electricity', 'ORGANIZATION_TYPE_Emergency', 'ORGANIZATION_TYPE_Government', 'ORGANIZATION_TYPE_Hotel', 'ORGANIZATION_TYPE_Housing',\n 'ORGANIZATION_TYPE_Industry: type 1', 'ORGANIZATION_TYPE_Industry: type 10', 'ORGANIZATION_TYPE_Industry: type 11', 'ORGANIZATION_TYPE_Industry: type 12',\n 'ORGANIZATION_TYPE_Industry: type 13', 'ORGANIZATION_TYPE_Industry: type 2', 'ORGANIZATION_TYPE_Industry: type 3', 'ORGANIZATION_TYPE_Industry: type 4',\n 'ORGANIZATION_TYPE_Industry: type 5', 'ORGANIZATION_TYPE_Industry: type 6', 'ORGANIZATION_TYPE_Industry: type 7', 'ORGANIZATION_TYPE_Industry: type 8',\n 'ORGANIZATION_TYPE_Insurance', 'ORGANIZATION_TYPE_Legal Services', 'ORGANIZATION_TYPE_Mobile', 'ORGANIZATION_TYPE_Other', 'ORGANIZATION_TYPE_Postal',\n 'ORGANIZATION_TYPE_Realtor', 'ORGANIZATION_TYPE_Religion', 'ORGANIZATION_TYPE_Restaurant', 'ORGANIZATION_TYPE_Security',\n 'ORGANIZATION_TYPE_Security Ministries', 'ORGANIZATION_TYPE_Services', 'ORGANIZATION_TYPE_Telecom', 'ORGANIZATION_TYPE_Trade: type 1',\n 'ORGANIZATION_TYPE_Trade: type 2', 'ORGANIZATION_TYPE_Trade: type 3', 'ORGANIZATION_TYPE_Trade: type 4', 'ORGANIZATION_TYPE_Trade: type 5',\n 'ORGANIZATION_TYPE_Trade: type 6', 'ORGANIZATION_TYPE_Trade: type 7',\n 'ORGANIZATION_TYPE_Transport: type 1', 
'ORGANIZATION_TYPE_Transport: type 2', 'ORGANIZATION_TYPE_Transport: type 4', 'ORGANIZATION_TYPE_University',\n 'ORGANIZATION_TYPE_XNA', 'POS_NAME_CONTRACT_STATUS_Amortized debt_MEAN', 'POS_NAME_CONTRACT_STATUS_Approved_MEAN', 'POS_NAME_CONTRACT_STATUS_Canceled_MEAN',\n 'POS_NAME_CONTRACT_STATUS_Demand_MEAN', 'POS_NAME_CONTRACT_STATUS_XNA_MEAN', 'POS_NAME_CONTRACT_STATUS_nan_MEAN', 'PREV_CHANNEL_TYPE_Car dealer_MEAN',\n 'PREV_CHANNEL_TYPE_nan_MEAN', 'PREV_CODE_REJECT_REASON_CLIENT_MEAN', 'PREV_CODE_REJECT_REASON_SYSTEM_MEAN', 'PREV_CODE_REJECT_REASON_VERIF_MEAN',\n 'PREV_CODE_REJECT_REASON_XNA_MEAN', 'PREV_CODE_REJECT_REASON_nan_MEAN', 'PREV_FLAG_LAST_APPL_PER_CONTRACT_N_MEAN', 'PREV_FLAG_LAST_APPL_PER_CONTRACT_Y_MEAN',\n 'PREV_FLAG_LAST_APPL_PER_CONTRACT_nan_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Building a house or an annex_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Business development_MEAN',\n 'PREV_NAME_CASH_LOAN_PURPOSE_Buying a garage_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Buying a holiday home / land_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Buying a home_MEAN',\n 'PREV_NAME_CASH_LOAN_PURPOSE_Buying a new car_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Buying a used car_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Education_MEAN',\n 'PREV_NAME_CASH_LOAN_PURPOSE_Everyday expenses_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Furniture_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Gasification / water supply_MEAN',\n 'PREV_NAME_CASH_LOAN_PURPOSE_Hobby_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Journey_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Money for a third person_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Other_MEAN',\n 'PREV_NAME_CASH_LOAN_PURPOSE_Payments on other loans_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Purchase of electronic equipment_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_Refusal to name the goal_MEAN',\n 'PREV_NAME_CASH_LOAN_PURPOSE_Wedding / gift / holiday_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_XAP_MEAN', 'PREV_NAME_CASH_LOAN_PURPOSE_nan_MEAN', 'PREV_NAME_CLIENT_TYPE_XNA_MEAN',\n 'PREV_NAME_CLIENT_TYPE_nan_MEAN', 
'PREV_NAME_CONTRACT_STATUS_Unused offer_MEAN', 'PREV_NAME_CONTRACT_STATUS_nan_MEAN', 'PREV_NAME_CONTRACT_TYPE_XNA_MEAN',\n 'PREV_NAME_CONTRACT_TYPE_nan_MEAN', 'PREV_NAME_GOODS_CATEGORY_Additional Service_MEAN', 'PREV_NAME_GOODS_CATEGORY_Animals_MEAN',\n 'PREV_NAME_GOODS_CATEGORY_Auto Accessories_MEAN', 'PREV_NAME_GOODS_CATEGORY_Clothing and Accessories_MEAN', 'PREV_NAME_GOODS_CATEGORY_Construction Materials_MEAN',\n 'PREV_NAME_GOODS_CATEGORY_Direct Sales_MEAN', 'PREV_NAME_GOODS_CATEGORY_Education_MEAN', 'PREV_NAME_GOODS_CATEGORY_Fitness_MEAN',\n 'PREV_NAME_GOODS_CATEGORY_Gardening_MEAN', 'PREV_NAME_GOODS_CATEGORY_Homewares_MEAN', 'PREV_NAME_GOODS_CATEGORY_House Construction_MEAN',\n 'PREV_NAME_GOODS_CATEGORY_Insurance_MEAN', 'PREV_NAME_GOODS_CATEGORY_Jewelry_MEAN', 'PREV_NAME_GOODS_CATEGORY_Medical Supplies_MEAN',\n 'PREV_NAME_GOODS_CATEGORY_Medicine_MEAN', 'PREV_NAME_GOODS_CATEGORY_Office Appliances_MEAN', 'PREV_NAME_GOODS_CATEGORY_Other_MEAN', 'PREV_NAME_GOODS_CATEGORY_Tourism_MEAN',\n 'PREV_NAME_GOODS_CATEGORY_Vehicles_MEAN', 'PREV_NAME_GOODS_CATEGORY_Weapon_MEAN', 'PREV_NAME_GOODS_CATEGORY_XNA_MEAN', 'PREV_NAME_GOODS_CATEGORY_nan_MEAN',\n 'PREV_NAME_PAYMENT_TYPE_Cashless from the account of the employer_MEAN', 'PREV_NAME_PAYMENT_TYPE_Non-cash from your account_MEAN', 'PREV_NAME_PAYMENT_TYPE_nan_MEAN',\n 'PREV_NAME_PORTFOLIO_Cars_MEAN', 'PREV_NAME_PORTFOLIO_nan_MEAN', 'PREV_NAME_PRODUCT_TYPE_nan_MEAN', 'PREV_NAME_SELLER_INDUSTRY_Construction_MEAN',\n 'PREV_NAME_SELLER_INDUSTRY_Furniture_MEAN', 'PREV_NAME_SELLER_INDUSTRY_Industry_MEAN', 'PREV_NAME_SELLER_INDUSTRY_Jewelry_MEAN', 'PREV_NAME_SELLER_INDUSTRY_MLM partners_MEAN',\n 'PREV_NAME_SELLER_INDUSTRY_Tourism_MEAN', 'PREV_NAME_SELLER_INDUSTRY_nan_MEAN', 'PREV_NAME_TYPE_SUITE_Group of people_MEAN', 'PREV_NAME_YIELD_GROUP_nan_MEAN',\n 'PREV_PRODUCT_COMBINATION_POS industry without interest_MEAN', 'PREV_PRODUCT_COMBINATION_POS mobile without interest_MEAN', 'PREV_PRODUCT_COMBINATION_POS others without 
interest_MEAN',\n 'PREV_PRODUCT_COMBINATION_nan_MEAN', 'PREV_WEEKDAY_APPR_PROCESS_START_nan_MEAN', 'REFUSED_AMT_DOWN_PAYMENT_MAX', 'REFUSED_AMT_DOWN_PAYMENT_MEAN',\n 'REFUSED_RATE_DOWN_PAYMENT_MIN', 'REG_CITY_NOT_WORK_CITY', 'REG_REGION_NOT_LIVE_REGION', 'REG_REGION_NOT_WORK_REGION',\n 'WALLSMATERIAL_MODE_Block', 'WALLSMATERIAL_MODE_Mixed', 'WALLSMATERIAL_MODE_Monolithic', 'WALLSMATERIAL_MODE_Others', 'WALLSMATERIAL_MODE_Panel',\n 'WALLSMATERIAL_MODE_Wooden', 'WEEKDAY_APPR_PROCESS_START_FRIDAY', 'WEEKDAY_APPR_PROCESS_START_THURSDAY', 'WEEKDAY_APPR_PROCESS_START_TUESDAY'\n]\n\n@contextmanager\ndef timer(title):\n t0 = time.time()\n yield\n print(\"{} - done in {:.0f}s\".format(title, time.time() - t0))\n\n# One-hot encoding for categorical columns with get_dummies\n# One-hot encode the object columns and return the list of newly created columns\ndef one_hot_encoder(df, nan_as_category = True):\n original_columns = list(df.columns)\n categorical_columns = [col for col in df.columns if df[col].dtype == 'object']\n df = pd.get_dummies(df, columns= categorical_columns, dummy_na= nan_as_category)\n new_columns = [c for c in df.columns if c not in original_columns]\n return df, new_columns\n\n# Preprocess application_train.csv and application_test.csv\ndef application_train_test(num_rows = None, nan_as_category = False):\n # Read data and merge\n df = pd.read_csv('../input/application_train.csv', nrows= num_rows)\n test_df = pd.read_csv('../input/application_test.csv', nrows= num_rows)\n print(\"Train samples: {}, test samples: {}\".format(len(df), len(test_df)))\n # Vertically concatenate with the test data and reset the index\n df = df.append(test_df).reset_index()\n # Optional: Remove 4 applications with XNA CODE_GENDER (train set)\n df = df[df['CODE_GENDER'] != 'XNA']\n \n # Collect every column whose name contains 'FLAG_DOC' into docs\n docs = [_f for _f in df.columns if 'FLAG_DOC' in _f]\n # Collect every column whose name contains 'FLAG_' but not 'FLAG_DOC' or '_FLAG_' into live\n live = [_f for _f in df.columns if ('FLAG_' in _f) & ('FLAG_DOC' not in _f) & ('_FLAG_' not in _f)]\n \n # NaN values for 
DAYS_EMPLOYED: 365243 -> nan\n df['DAYS_EMPLOYED'].replace(365243, np.nan, inplace= True)\n \n # Median income by client organization type\n inc_by_org = df[['AMT_INCOME_TOTAL', 'ORGANIZATION_TYPE']].groupby('ORGANIZATION_TYPE').median()['AMT_INCOME_TOTAL']\n \n # Credit-to-annuity ratio: credit amount / loan annuity\n df['NEW_CREDIT_TO_ANNUITY_RATIO'] = df['AMT_CREDIT'] / df['AMT_ANNUITY']\n # Ratio of the credit amount to the goods price\n df['NEW_CREDIT_TO_GOODS_RATIO'] = df['AMT_CREDIT'] / df['AMT_GOODS_PRICE']\n # Mean, standard deviation and kurtosis over all FLAG_DOCUMENT_* columns\n df['NEW_DOC_IND_AVG'] = df[docs].mean(axis=1)\n df['NEW_DOC_IND_STD'] = df[docs].std(axis=1)\n df['NEW_DOC_IND_KURT'] = df[docs].kurtosis(axis=1)\n # Sum, standard deviation and kurtosis of the client-related flags (FLAG_EMP_PHONE, WORK_PHONE, etc.)\n df['NEW_LIVE_IND_SUM'] = df[live].sum(axis=1)\n df['NEW_LIVE_IND_STD'] = df[live].std(axis=1)\n df['NEW_LIVE_IND_KURT'] = df[live].kurtosis(axis=1)\n # Client income / (1 + number of children)\n df['NEW_INC_PER_CHLD'] = df['AMT_INCOME_TOTAL'] / (1 + df['CNT_CHILDREN'])\n # Median income of the client's organization type\n df['NEW_INC_BY_ORG'] = df['ORGANIZATION_TYPE'].map(inc_by_org)\n # Days employed / days since birth (how much of life has been spent working?)\n df['NEW_EMPLOY_TO_BIRTH_RATIO'] = df['DAYS_EMPLOYED'] / df['DAYS_BIRTH']\n # Annuity due per period / client income\n df['NEW_ANNUITY_TO_INCOME_RATIO'] = df['AMT_ANNUITY'] / (df['AMT_INCOME_TOTAL'])\n # Product of the three external source scores\n df['NEW_SOURCES_PROD'] = df['EXT_SOURCE_1'] * df['EXT_SOURCE_2'] * df['EXT_SOURCE_3']\n # Mean of the three external source scores\n df['NEW_EXT_SOURCES_MEAN'] = df[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3']].mean(axis=1)\n # Standard deviation of the three external source scores; fill NaNs with the overall mean\n df['NEW_SCORES_STD'] = df[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3']].std(axis=1)\n df['NEW_SCORES_STD'] = df['NEW_SCORES_STD'].fillna(df['NEW_SCORES_STD'].mean())\n # Car age / days since birth\n df['NEW_CAR_TO_BIRTH_RATIO'] = df['OWN_CAR_AGE'] / df['DAYS_BIRTH']\n # Car age / days employed\n df['NEW_CAR_TO_EMPLOY_RATIO'] = df['OWN_CAR_AGE'] / df['DAYS_EMPLOYED']\n # Days since last phone change / days since birth\n df['NEW_PHONE_TO_BIRTH_RATIO'] = df['DAYS_LAST_PHONE_CHANGE'] / df['DAYS_BIRTH']\n # Days since last phone change / days employed\n df['NEW_PHONE_TO_EMPLOY_RATIO'] = df['DAYS_LAST_PHONE_CHANGE'] / 
df['DAYS_EMPLOYED']\n # Credit amount / total client income\n df['NEW_CREDIT_TO_INCOME_RATIO'] = df['AMT_CREDIT'] / df['AMT_INCOME_TOTAL']\n \n # Categorical features with Binary encode (0 or 1; two categories)\n for bin_feature in ['CODE_GENDER', 'FLAG_OWN_CAR', 'FLAG_OWN_REALTY']:\n df[bin_feature], uniques = pd.factorize(df[bin_feature])\n # Categorical features with One-Hot encode\n df, cat_cols = one_hot_encoder(df, nan_as_category)\n \n del test_df\n gc.collect()\n return df\n\n# Preprocess bureau.csv and bureau_balance.csv\ndef bureau_and_balance(num_rows = None, nan_as_category = True):\n bureau = pd.read_csv('../input/bureau.csv', nrows = num_rows)\n bb = pd.read_csv('../input/bureau_balance.csv', nrows = num_rows)\n bb, bb_cat = one_hot_encoder(bb, nan_as_category)\n bureau, bureau_cat = one_hot_encoder(bureau, nan_as_category)\n \n # Bureau balance: Perform aggregations and merge with bureau.csv\n bb_aggregations = {'MONTHS_BALANCE': ['min', 'max', 'size']}\n for col in bb_cat:\n bb_aggregations[col] = ['mean']\n # agg_api -> https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.agg.html \n bb_agg = bb.groupby('SK_ID_BUREAU').agg(bb_aggregations)\n # Rename the aggregated columns\n bb_agg.columns = pd.Index([e[0] + \"_\" + e[1].upper() for e in bb_agg.columns.tolist()])\n # Join the bureau_balance aggregates onto bureau via SK_ID_BUREAU\n bureau = bureau.join(bb_agg, how='left', on='SK_ID_BUREAU')\n # Drop the bureau ID\n bureau.drop(['SK_ID_BUREAU'], axis=1, inplace= True)\n del bb, bb_agg\n gc.collect()\n \n # Bureau and bureau_balance numeric features\n num_aggregations = {\n 'DAYS_CREDIT': ['min', 'max', 'mean', 'var'],\n 'DAYS_CREDIT_ENDDATE': ['min', 'max', 'mean'],\n 'DAYS_CREDIT_UPDATE': ['mean'],\n 'CREDIT_DAY_OVERDUE': ['max', 'mean'],\n 'AMT_CREDIT_MAX_OVERDUE': ['mean'],\n 'AMT_CREDIT_SUM': ['max', 'mean', 'sum'],\n 'AMT_CREDIT_SUM_DEBT': ['max', 'mean', 'sum'],\n 'AMT_CREDIT_SUM_OVERDUE': ['mean'],\n 'AMT_CREDIT_SUM_LIMIT': ['mean', 'sum'],\n 'AMT_ANNUITY': ['max', 'mean'],\n 
'CNT_CREDIT_PROLONG': ['sum'],\n 'MONTHS_BALANCE_MIN': ['min'],\n 'MONTHS_BALANCE_MAX': ['max'],\n 'MONTHS_BALANCE_SIZE': ['mean', 'sum']\n }\n # Bureau and bureau_balance categorical features\n # Build a dict over the one-hot columns from bureau and bureau_balance for later use with .agg\n cat_aggregations = {}\n for cat in bureau_cat: cat_aggregations[cat] = ['mean']\n for cat in bb_cat: cat_aggregations[cat + \"_MEAN\"] = ['mean']\n # Use .agg to build a dataframe of min, max, mean, var, etc.\n bureau_agg = bureau.groupby('SK_ID_CURR').agg({**num_aggregations, **cat_aggregations})\n bureau_agg.columns = pd.Index(['BURO_' + e[0] + \"_\" + e[1].upper() for e in bureau_agg.columns.tolist()])\n # Bureau: Active credits - using only numerical aggregations\n # Select the credits whose status is Active\n active = bureau[bureau['CREDIT_ACTIVE_Active'] == 1]\n active_agg = active.groupby('SK_ID_CURR').agg(num_aggregations)\n cols = active_agg.columns.tolist()\n active_agg.columns = pd.Index(['ACTIVE_' + e[0] + \"_\" + e[1].upper() for e in active_agg.columns.tolist()])\n bureau_agg = bureau_agg.join(active_agg, how='left', on='SK_ID_CURR')\n del active, active_agg\n gc.collect()\n # Bureau: Closed credits - using only numerical aggregations\n # Select the credits whose status is Closed\n closed = bureau[bureau['CREDIT_ACTIVE_Closed'] == 1]\n closed_agg = closed.groupby('SK_ID_CURR').agg(num_aggregations)\n closed_agg.columns = pd.Index(['CLOSED_' + e[0] + \"_\" + e[1].upper() for e in closed_agg.columns.tolist()])\n bureau_agg = bureau_agg.join(closed_agg, how='left', on='SK_ID_CURR')\n # Add Active / Closed ratio features for every numeric aggregation\n for e in cols:\n bureau_agg['NEW_RATIO_BURO_' + e[0] + \"_\" + e[1].upper()] = bureau_agg['ACTIVE_' + e[0] + \"_\" + e[1].upper()] / bureau_agg['CLOSED_' + e[0] + \"_\" + e[1].upper()]\n \n del closed, closed_agg, bureau\n gc.collect()\n return bureau_agg\n\n# Preprocess previous_applications.csv\ndef previous_applications(num_rows = None, nan_as_category = True):\n prev = pd.read_csv('../input/previous_application.csv', nrows = num_rows)\n prev, cat_cols = 
one_hot_encoder(prev, nan_as_category= True)\n # Days 365243 values -> nan\n prev['DAYS_FIRST_DRAWING'].replace(365243, np.nan, inplace= True)\n prev['DAYS_FIRST_DUE'].replace(365243, np.nan, inplace= True)\n prev['DAYS_LAST_DUE_1ST_VERSION'].replace(365243, np.nan, inplace= True)\n prev['DAYS_LAST_DUE'].replace(365243, np.nan, inplace= True)\n prev['DAYS_TERMINATION'].replace(365243, np.nan, inplace= True)\n # Add feature: value ask / value received percentage\n # Credit amount the client asked for / credit amount actually granted on previous applications\n prev['APP_CREDIT_PERC'] = prev['AMT_APPLICATION'] / prev['AMT_CREDIT']\n # Previous applications numeric features\n # As before, compute basic statistics of the numeric features with .agg\n num_aggregations = {\n 'AMT_ANNUITY': ['min', 'max', 'mean'],\n 'AMT_APPLICATION': ['min', 'max', 'mean'],\n 'AMT_CREDIT': ['min', 'max', 'mean'],\n 'APP_CREDIT_PERC': ['min', 'max', 'mean', 'var'],\n 'AMT_DOWN_PAYMENT': ['min', 'max', 'mean'],\n 'AMT_GOODS_PRICE': ['min', 'max', 'mean'],\n 'HOUR_APPR_PROCESS_START': ['min', 'max', 'mean'],\n 'RATE_DOWN_PAYMENT': ['min', 'max', 'mean'],\n 'DAYS_DECISION': ['min', 'max', 'mean'],\n 'CNT_PAYMENT': ['mean', 'sum'],\n }\n # Previous applications categorical features\n cat_aggregations = {}\n for cat in cat_cols:\n cat_aggregations[cat] = ['mean']\n \n prev_agg = prev.groupby('SK_ID_CURR').agg({**num_aggregations, **cat_aggregations})\n prev_agg.columns = pd.Index(['PREV_' + e[0] + \"_\" + e[1].upper() for e in prev_agg.columns.tolist()])\n # Previous Applications: Approved Applications - only numerical features\n # Basic statistics of numeric features for previous applications with contract status Approved\n approved = prev[prev['NAME_CONTRACT_STATUS_Approved'] == 1]\n approved_agg = approved.groupby('SK_ID_CURR').agg(num_aggregations)\n cols = approved_agg.columns.tolist()\n approved_agg.columns = pd.Index(['APPROVED_' + e[0] + \"_\" + e[1].upper() for e in approved_agg.columns.tolist()])\n prev_agg = prev_agg.join(approved_agg, how='left', on='SK_ID_CURR')\n # Previous Applications: Refused Applications - only numerical features\n # 
Basic statistics of numeric features for previous applications with contract status Refused\n refused = prev[prev['NAME_CONTRACT_STATUS_Refused'] == 1]\n refused_agg = refused.groupby('SK_ID_CURR').agg(num_aggregations)\n refused_agg.columns = pd.Index(['REFUSED_' + e[0] + \"_\" + e[1].upper() for e in refused_agg.columns.tolist()])\n prev_agg = prev_agg.join(refused_agg, how='left', on='SK_ID_CURR')\n del refused, refused_agg, approved, approved_agg, prev\n # Add APPROVED / REFUSED ratio features for every numeric aggregation\n for e in cols:\n prev_agg['NEW_RATIO_PREV_' + e[0] + \"_\" + e[1].upper()] = prev_agg['APPROVED_' + e[0] + \"_\" + e[1].upper()] / prev_agg['REFUSED_' + e[0] + \"_\" + e[1].upper()]\n \n gc.collect()\n return prev_agg\n\n# Preprocess POS_CASH_balance.csv\ndef pos_cash(num_rows = None, nan_as_category = True):\n pos = pd.read_csv('../input/POS_CASH_balance.csv', nrows = num_rows)\n pos, cat_cols = one_hot_encoder(pos, nan_as_category= True)\n # Features \n # As before, compute basic statistics of the numeric features with .agg\n aggregations = {\n 'MONTHS_BALANCE': ['max', 'mean', 'size'],\n 'SK_DPD': ['max', 'mean'],\n 'SK_DPD_DEF': ['max', 'mean']\n }\n for cat in cat_cols:\n aggregations[cat] = ['mean']\n \n pos_agg = pos.groupby('SK_ID_CURR').agg(aggregations)\n pos_agg.columns = pd.Index(['POS_' + e[0] + \"_\" + e[1].upper() for e in pos_agg.columns.tolist()])\n # Count pos cash accounts\n pos_agg['POS_COUNT'] = pos.groupby('SK_ID_CURR').size()\n del pos\n gc.collect()\n return pos_agg\n \n# Preprocess installments_payments.csv\ndef installments_payments(num_rows = None, nan_as_category = True):\n ins = pd.read_csv('../input/installments_payments.csv', nrows = num_rows)\n ins, cat_cols = one_hot_encoder(ins, nan_as_category= True)\n # Percentage and difference paid in each installment (amount paid and installment value)\n ins['PAYMENT_PERC'] = ins['AMT_PAYMENT'] / ins['AMT_INSTALMENT']\n ins['PAYMENT_DIFF'] = ins['AMT_INSTALMENT'] - ins['AMT_PAYMENT']\n # Days past due and days before due (no negative values)\n # Previous installment actual payment date - 
previous installment scheduled due date\n ins['DPD'] = ins['DAYS_ENTRY_PAYMENT'] - ins['DAYS_INSTALMENT']\n # Previous installment scheduled due date - previous installment actual payment date\n ins['DBD'] = ins['DAYS_INSTALMENT'] - ins['DAYS_ENTRY_PAYMENT']\n # Clip negative values to 0\n ins['DPD'] = ins['DPD'].apply(lambda x: x if x > 0 else 0)\n ins['DBD'] = ins['DBD'].apply(lambda x: x if x > 0 else 0)\n # Features: Perform aggregations\n # As before, compute basic statistics of the numeric features with .agg\n aggregations = {\n 'NUM_INSTALMENT_VERSION': ['nunique'],\n 'DPD': ['max', 'mean', 'sum'],\n 'DBD': ['max', 'mean', 'sum'],\n 'PAYMENT_PERC': ['max', 'mean', 'sum', 'var'],\n 'PAYMENT_DIFF': ['max', 'mean', 'sum', 'var'],\n 'AMT_INSTALMENT': ['max', 'mean', 'sum'],\n 'AMT_PAYMENT': ['min', 'max', 'mean', 'sum'],\n 'DAYS_ENTRY_PAYMENT': ['max', 'mean', 'sum']\n }\n for cat in cat_cols:\n aggregations[cat] = ['mean']\n ins_agg = ins.groupby('SK_ID_CURR').agg(aggregations)\n ins_agg.columns = pd.Index(['INSTAL_' + e[0] + \"_\" + e[1].upper() for e in ins_agg.columns.tolist()])\n # Count installments accounts\n ins_agg['INSTAL_COUNT'] = ins.groupby('SK_ID_CURR').size()\n del ins\n gc.collect()\n return ins_agg\n\n# Preprocess credit_card_balance.csv\ndef credit_card_balance(num_rows = None, nan_as_category = True):\n cc = pd.read_csv('../input/credit_card_balance.csv', nrows = num_rows)\n cc, cat_cols = one_hot_encoder(cc, nan_as_category= True)\n # General aggregations\n cc.drop(['SK_ID_PREV'], axis= 1, inplace = True)\n # All remaining columns are numeric; compute basic statistics for all of them\n cc_agg = cc.groupby('SK_ID_CURR').agg(['min', 'max', 'mean', 'sum', 'var'])\n cc_agg.columns = pd.Index(['CC_' + e[0] + \"_\" + e[1].upper() for e in cc_agg.columns.tolist()])\n # Count credit card lines\n # Count rows per SK_ID_CURR\n cc_agg['CC_COUNT'] = cc.groupby('SK_ID_CURR').size()\n del cc\n gc.collect()\n return cc_agg\n\n# LightGBM GBDT with KFold or Stratified KFold\n# Parameters from Tilii kernel: https://www.kaggle.com/tilii7/olivier-lightgbm-parameters-by-bayesian-opt/code\ndef kfold_lightgbm(df, num_folds, stratified = False, debug= False):\n # Divide in 
training/validation and test data\n train_df = df[df['TARGET'].notnull()]\n test_df = df[df['TARGET'].isnull()]\n print(\"Starting LightGBM. Train shape: {}, test shape: {}\".format(train_df.shape, test_df.shape))\n del df\n gc.collect()\n # Cross validation model\n if stratified:\n folds = StratifiedKFold(n_splits= num_folds, shuffle=True, random_state=1001)\n else:\n folds = KFold(n_splits= num_folds, shuffle=True, random_state=1001)\n # Create arrays and dataframes to store results\n oof_preds = np.zeros(train_df.shape[0])\n sub_preds = np.zeros(test_df.shape[0])\n feature_importance_df = pd.DataFrame()\n feats = [f for f in train_df.columns if f not in ['TARGET','SK_ID_CURR','SK_ID_BUREAU','SK_ID_PREV','index']]\n \n for n_fold, (train_idx, valid_idx) in enumerate(folds.split(train_df[feats], train_df['TARGET'])):\n dtrain = lgb.Dataset(data=train_df[feats].iloc[train_idx], \n label=train_df['TARGET'].iloc[train_idx], \n free_raw_data=False, silent=True)\n dvalid = lgb.Dataset(data=train_df[feats].iloc[valid_idx], \n label=train_df['TARGET'].iloc[valid_idx], \n free_raw_data=False, silent=True)\n\n # LightGBM parameters found by Bayesian optimization\n params = {\n 'objective': 'binary',\n 'boosting_type': 'gbdt',\n 'nthread': 4,\n 'learning_rate': 0.02, # 02,\n 'num_leaves': 20,\n 'colsample_bytree': 0.9497036,\n 'subsample': 0.8715623,\n 'subsample_freq': 1,\n 'max_depth': 8,\n 'reg_alpha': 0.041545473,\n 'reg_lambda': 0.0735294,\n 'min_split_gain': 0.0222415,\n 'min_child_weight': 60, # 39.3259775,\n 'seed': 0,\n 'verbose': -1,\n 'metric': 'auc',\n }\n \n clf = lgb.train(\n params=params,\n train_set=dtrain,\n num_boost_round=10000,\n valid_sets=[dtrain, dvalid],\n early_stopping_rounds=200,\n verbose_eval=False\n )\n\n oof_preds[valid_idx] = clf.predict(dvalid.data)\n sub_preds += clf.predict(test_df[feats]) / folds.n_splits\n\n fold_importance_df = pd.DataFrame()\n fold_importance_df[\"feature\"] = feats\n fold_importance_df[\"importance\"] = 
clf.feature_importance(importance_type='gain')\n fold_importance_df[\"fold\"] = n_fold + 1\n feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)\n print('Fold %2d AUC : %.6f' % (n_fold + 1, roc_auc_score(dvalid.label, oof_preds[valid_idx])))\n del clf, dtrain, dvalid\n gc.collect()\n\n print('Full AUC score %.6f' % roc_auc_score(train_df['TARGET'], oof_preds))\n # Write submission file and plot feature importance\n if not debug:\n sub_df = test_df[['SK_ID_CURR']].copy()\n sub_df['TARGET'] = sub_preds\n sub_df[['SK_ID_CURR', 'TARGET']].to_csv(submission_file_name, index= False)\n display_importances(feature_importance_df)\n return feature_importance_df\n\n# Display/plot feature importance\ndef display_importances(feature_importance_df_):\n cols = feature_importance_df_[[\"feature\", \"importance\"]].groupby(\"feature\").mean().sort_values(by=\"importance\", ascending=False)[:40].index\n best_features = feature_importance_df_.loc[feature_importance_df_.feature.isin(cols)]\n plt.figure(figsize=(8, 10))\n sns.barplot(x=\"importance\", y=\"feature\", data=best_features.sort_values(by=\"importance\", ascending=False))\n plt.title('LightGBM Features (avg over folds)')\n plt.tight_layout()\n plt.savefig('lgbm_importances01.png')\n\n\ndef main(debug = False):\n num_rows = 10000 if debug else None\n df = application_train_test(num_rows)\n with timer(\"Process bureau and bureau_balance\"):\n bureau = bureau_and_balance(num_rows)\n print(\"Bureau df shape:\", bureau.shape)\n df = df.join(bureau, how='left', on='SK_ID_CURR')\n del bureau\n gc.collect()\n with timer(\"Process previous_applications\"):\n prev = previous_applications(num_rows)\n print(\"Previous applications df shape:\", prev.shape)\n df = df.join(prev, how='left', on='SK_ID_CURR')\n del prev\n gc.collect()\n with timer(\"Process POS-CASH balance\"):\n pos = pos_cash(num_rows)\n print(\"Pos-cash balance df shape:\", pos.shape)\n df = df.join(pos, how='left', on='SK_ID_CURR')\n del 
pos\n gc.collect()\n with timer(\"Process installments payments\"):\n ins = installments_payments(num_rows)\n print(\"Installments payments df shape:\", ins.shape)\n df = df.join(ins, how='left', on='SK_ID_CURR')\n del ins\n gc.collect()\n with timer(\"Process credit card balance\"):\n cc = credit_card_balance(num_rows)\n print(\"Credit card balance df shape:\", cc.shape)\n df = df.join(cc, how='left', on='SK_ID_CURR')\n del cc\n gc.collect()\n with timer(\"Run LightGBM with kfold\"):\n print(df.shape)\n df.drop(features_with_no_imp_at_least_twice, axis=1, inplace=True)\n gc.collect()\n print(df.shape)\n feat_importance = kfold_lightgbm(df, num_folds= 5, stratified= False, debug= debug)\n\n# if __name__ == \"__main__\":\n# submission_file_name = \"submission_with selected_features.csv\"\n# with timer(\"Full model run\"):\n# main()",
"_____no_output_____"
],
[
"# pip install bayesian-optimization --user",
"_____no_output_____"
],
[
"# df = pd.read_csv('../../datasets/homeCredit/final_train_test.csv')",
"_____no_output_____"
],
[
"df = pd.read_csv('../../datasets/homeCredit/BDSE12_03G_HomeCredit_V2.csv')\ndf = pd.get_dummies(df)",
"_____no_output_____"
],
[
"from lightgbm import LGBMClassifier\nfrom bayes_opt import BayesianOptimization",
"_____no_output_____"
],
[
"def lgbm_evaluate(**params):\n warnings.simplefilter('ignore')\n \n params['num_leaves'] = int(params['num_leaves'])\n params['max_depth'] = int(params['max_depth'])\n \n clf = LGBMClassifier(**params, n_estimators = 2000, nthread = 5)\n\n train_df = df[df['TARGET'].notnull()]\n test_df = df[df['TARGET'].isnull()]\n\n folds = StratifiedKFold(n_splits= 5, shuffle=True, random_state=1001)\n \n test_pred_proba = np.zeros(train_df.shape[0])\n \n feats = [f for f in train_df.columns if f not in ['TARGET','SK_ID_CURR','SK_ID_BUREAU','SK_ID_PREV','index']]\n \n for n_fold, (train_idx, valid_idx) in enumerate(folds.split(train_df[feats], train_df['TARGET'])):\n train_x, train_y = train_df[feats].iloc[train_idx], train_df['TARGET'].iloc[train_idx]\n valid_x, valid_y = train_df[feats].iloc[valid_idx], train_df['TARGET'].iloc[valid_idx]\n\n clf.fit(train_x, train_y, \n eval_set = [(train_x, train_y), (valid_x, valid_y)], eval_metric = 'auc', \n verbose = False, early_stopping_rounds = 100)\n\n test_pred_proba[valid_idx] = clf.predict_proba(valid_x, num_iteration = clf.best_iteration_)[:, 1]\n \n del train_x, train_y, valid_x, valid_y\n gc.collect()\n\n return roc_auc_score(train_df['TARGET'], test_pred_proba)",
"_____no_output_____"
],
[
"params = {'learning_rate': (.01, .03), \n 'num_leaves': (20, 35), \n 'subsample': (0.8, 1), \n 'max_depth': (6, 9), \n 'reg_alpha': (.03, .05), \n 'reg_lambda': (.06, .08), \n 'min_split_gain': (.01, .03),\n 'min_child_weight': (20, 70)}\nbo = BayesianOptimization(lgbm_evaluate, params)\nbo.maximize(init_points = 5, n_iter = 5)",
"| iter | target | learni... | max_depth | min_ch... | min_sp... | num_le... | reg_alpha | reg_la... | subsample |\n-------------------------------------------------------------------------------------------------------------------------\n| \u001b[0m 1 \u001b[0m | \u001b[0m 0.7869 \u001b[0m | \u001b[0m 0.01504 \u001b[0m | \u001b[0m 8.331 \u001b[0m | \u001b[0m 65.85 \u001b[0m | \u001b[0m 0.02335 \u001b[0m | \u001b[0m 26.31 \u001b[0m | \u001b[0m 0.04414 \u001b[0m | \u001b[0m 0.06098 \u001b[0m | \u001b[0m 0.9206 \u001b[0m |\n| \u001b[0m 2 \u001b[0m | \u001b[0m 0.7865 \u001b[0m | \u001b[0m 0.01602 \u001b[0m | \u001b[0m 6.624 \u001b[0m | \u001b[0m 57.48 \u001b[0m | \u001b[0m 0.01276 \u001b[0m | \u001b[0m 27.25 \u001b[0m | \u001b[0m 0.03046 \u001b[0m | \u001b[0m 0.0622 \u001b[0m | \u001b[0m 0.91 \u001b[0m |\n| \u001b[0m 3 \u001b[0m | \u001b[0m 0.7865 \u001b[0m | \u001b[0m 0.02044 \u001b[0m | \u001b[0m 6.753 \u001b[0m | \u001b[0m 45.31 \u001b[0m | \u001b[0m 0.01204 \u001b[0m | \u001b[0m 21.23 \u001b[0m | \u001b[0m 0.03793 \u001b[0m | \u001b[0m 0.0786 \u001b[0m | \u001b[0m 0.8288 \u001b[0m |\n| \u001b[95m 4 \u001b[0m | \u001b[95m 0.787 \u001b[0m | \u001b[95m 0.01721 \u001b[0m | \u001b[95m 6.544 \u001b[0m | \u001b[95m 69.37 \u001b[0m | \u001b[95m 0.02724 \u001b[0m | \u001b[95m 28.53 \u001b[0m | \u001b[95m 0.04883 \u001b[0m | \u001b[95m 0.07453 \u001b[0m | \u001b[95m 0.8591 \u001b[0m |\n| \u001b[0m 5 \u001b[0m | \u001b[0m 0.7867 \u001b[0m | \u001b[0m 0.0203 \u001b[0m | \u001b[0m 7.442 \u001b[0m | \u001b[0m 65.89 \u001b[0m | \u001b[0m 0.01717 \u001b[0m | \u001b[0m 28.69 \u001b[0m | \u001b[0m 0.03477 \u001b[0m | \u001b[0m 0.07738 \u001b[0m | \u001b[0m 0.9039 \u001b[0m |\n| \u001b[0m 6 \u001b[0m | \u001b[0m 0.7855 \u001b[0m | \u001b[0m 0.02617 \u001b[0m | \u001b[0m 6.621 \u001b[0m | \u001b[0m 20.03 \u001b[0m | \u001b[0m 0.01593 \u001b[0m | \u001b[0m 34.79 \u001b[0m | \u001b[0m 0.04377 \u001b[0m | \u001b[0m 0.07929 \u001b[0m | \u001b[0m 0.9321 \u001b[0m |\n| \u001b[95m 7 
\u001b[0m | \u001b[95m 0.7871 \u001b[0m | \u001b[95m 0.02027 \u001b[0m | \u001b[95m 7.681 \u001b[0m | \u001b[95m 69.99 \u001b[0m | \u001b[95m 0.02673 \u001b[0m | \u001b[95m 20.17 \u001b[0m | \u001b[95m 0.04383 \u001b[0m | \u001b[95m 0.06643 \u001b[0m | \u001b[95m 0.8474 \u001b[0m |\n| \u001b[0m 8 \u001b[0m | \u001b[0m 0.7859 \u001b[0m | \u001b[0m 0.02114 \u001b[0m | \u001b[0m 8.928 \u001b[0m | \u001b[0m 20.08 \u001b[0m | \u001b[0m 0.01919 \u001b[0m | \u001b[0m 20.09 \u001b[0m | \u001b[0m 0.0337 \u001b[0m | \u001b[0m 0.07964 \u001b[0m | \u001b[0m 0.8132 \u001b[0m |\n| \u001b[0m 9 \u001b[0m | \u001b[0m 0.7868 \u001b[0m | \u001b[0m 0.01099 \u001b[0m | \u001b[0m 7.078 \u001b[0m | \u001b[0m 69.95 \u001b[0m | \u001b[0m 0.018 \u001b[0m | \u001b[0m 34.92 \u001b[0m | \u001b[0m 0.03598 \u001b[0m | \u001b[0m 0.06957 \u001b[0m | \u001b[0m 0.81 \u001b[0m |\n| \u001b[95m 10 \u001b[0m | \u001b[95m 0.7872 \u001b[0m | \u001b[95m 0.01228 \u001b[0m | \u001b[95m 6.403 \u001b[0m | \u001b[95m 69.97 \u001b[0m | \u001b[95m 0.02047 \u001b[0m | \u001b[95m 20.12 \u001b[0m | \u001b[95m 0.04902 \u001b[0m | \u001b[95m 0.07738 \u001b[0m | \u001b[95m 0.8939 \u001b[0m |\n=========================================================================================================================\n"
],
[
"params = {'learning_rate': (.01, .03), \n 'num_leaves': (20, 35), \n 'subsample': (0.8, 1), \n 'max_depth': (6, 9), \n 'reg_alpha': (.03, .05), \n 'reg_lambda': (.06, .08), \n 'min_split_gain': (.01, .03),\n 'min_child_weight': (20, 70)}\nbo = BayesianOptimization(lgbm_evaluate, params)",
"_____no_output_____"
],
[
"bo.maximize?",
"_____no_output_____"
],
[
"bo.res[7]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd80257045084ea47da274fe2fa2174ffef44e8 | 6,251 | ipynb | Jupyter Notebook | Amazon Bestselling Novels/Amazon PDF.ipynb | GabiPoli/Exploratory-Data-Analysis | 953f310aa5c5f6f9a2ea1e013d76b6cfa4cb92b9 | [
"MIT"
] | null | null | null | Amazon Bestselling Novels/Amazon PDF.ipynb | GabiPoli/Exploratory-Data-Analysis | 953f310aa5c5f6f9a2ea1e013d76b6cfa4cb92b9 | [
"MIT"
] | null | null | null | Amazon Bestselling Novels/Amazon PDF.ipynb | GabiPoli/Exploratory-Data-Analysis | 953f310aa5c5f6f9a2ea1e013d76b6cfa4cb92b9 | [
"MIT"
] | null | null | null | 33.972826 | 232 | 0.558471 | [
[
[
"# Python libraries\nfrom fpdf import FPDF\nfrom datetime import datetime, timedelta\nimport os\nfrom PIL import Image\n\n#Below is a sample of a PDF created by python",
"_____no_output_____"
],
[
"\n\nWIDTH = 210\nHEIGHT = 297\n\n''' First Page '''\n\npdf = FPDF()\npdf.add_page()\npdf.image(r'Images Amazon\\Amazon Cover.png',0,0,WIDTH)\n\n\n''' Second Page '''\npdf.add_page()\npdf.image(r'Images Amazon\\Amazon Cover 1.png',0,0,WIDTH)\npdf.set_font('Arial', 'B', 16)\npdf.cell(50, 30, 'Amazon Bestselling Novels',ln=1)\n\n\npdf.set_font('Arial','', 10)\ncol1=\"This file contains data on top 50 bestselling novels on Amazon each year from 2009 to 2020. The data was collected from Kaggle.\\n\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\npdf.set_font('Arial', 'B', 12)\ncol1=\" I'll be answering the following questions along the way:\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\npdf.set_font('Arial','', 10)\ncol1=\"1. Is there any correlation between the variables?\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\npdf.set_font('Arial','', 10)\ncol1=\"2. What is the genre distribution?\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"3. What is the user rating distribution?\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"4. What is the user rating distribution by genre?\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"5. What is the price distribution by genre over the years?\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"6. What is the rate distribution by genre over the years?\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"7. 
What is the review distribution by year and genre?\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\n\n\n''' Second Page '''\npdf.add_page()\npdf.image(r'Images Amazon\\Amazon Cover 1.png',0,0,WIDTH)\n\npdf.image(r'Images Amazon\\Chart1.png', 35,0,0,130)\npdf.image(r'Images Amazon\\Chart2.png', 5,100,0,WIDTH/2.5) \npdf.image(r'Images Amazon\\Chart3.png',5,170,0,WIDTH/2)\n\n\n''' Third Page '''\npdf.add_page()\npdf.image(r'Images Amazon\\Amazon Cover 1.png',0,0,WIDTH)\n\npdf.image(r'Images Amazon\\Chart4.png',0,0,0,130)\npdf.image(r'Images Amazon\\Chart5.png', 0,120,0,130) \n \n\n''' fourth Page '''\npdf.add_page()\npdf.image(r'Images Amazon\\Amazon Cover 1.png',0,0,WIDTH)\n\npdf.image(r'Images Amazon\\Chart6.png',0,0,0,130)\npdf.image(r'Images Amazon\\Chart7.png', 0,120,0,WIDTH/2) \n\n\n \n''' Fifth Page '''\npdf.add_page()\npdf.image(r'Images Amazon\\Amazon Cover 1.png',0,0,WIDTH)\npdf.set_font('Arial', 'B', 16)\npdf.cell(50, 30, 'Conclusion',ln=1)\n\n\n \npdf.set_font('Arial','', 10)\ncol1=\"By the analysis of the charts I noticed that, there are no direct correlations visible between any of the variables. Which means that the User Rating is not influenced by the genre, price or year of release.\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"On the first chart, we can see that there is a larger amount of Non Fiction Genre than Fiction Genre\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"On the second chart, it is clear that the distribution User Rating is concentrated on the higher rating \\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"On the next chart, the Fiction books seem to be doing better than non-fiction. 
Non-fiction however seems to be on the rise in 2020 to possibly surpass fiction books with higher ratings.\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\n\ncol1=\"Moving to the next chart we can see that the book prices are decreasing in both genres over the years, with Non Fiction having a spike around 2014.\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"As previously shown, the lineplot rate distribution chart proves again that the rate is increasing over the years. However, on this chart you can see that both genres were on an uptrend.\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"On the review distribution chart, the uptrend is not clear until 2018. From 2019 we can see that both genres had a strong uptrend.\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\ncol1=\"On the last chart, we have the same review distribution and you can see the spikes in 2015, 2019 and 2020 more clearly.\\n\\n\";\npdf.multi_cell(180, 5, col1, 0, 1);\n\n \npdf.output('Amazon Analysis.pdf', 'F')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
ecd80a25e4b564e34954d68a9bb235242b389d16 | 8,167 | ipynb | Jupyter Notebook | examples/notebooks/99_landsat_9.ipynb | ppoon23/geemap | e1a9660336ab9a7eddd702964719118b012db697 | [
"MIT"
] | 2 | 2022-03-12T14:46:53.000Z | 2022-03-14T12:37:16.000Z | examples/notebooks/99_landsat_9.ipynb | ppoon23/geemap | e1a9660336ab9a7eddd702964719118b012db697 | [
"MIT"
] | null | null | null | examples/notebooks/99_landsat_9.ipynb | ppoon23/geemap | e1a9660336ab9a7eddd702964719118b012db697 | [
"MIT"
] | null | null | null | 22.623269 | 600 | 0.53508 | [
[
[
"<a href=\"https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/99_landsat_9.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\"/></a>\n\nUncomment the following line to install [geemap](https://geemap.org) if needed.\n\n[Landsat 9](https://landsat.gsfc.nasa.gov/satellites/landsat-9) was successfully launched on Sept. 27, 2021. USGS has been providing Landsat data to the public since Feb. 10, 2022. Landsat 9 data can be downloaded from [EarthExplorer](https://earthexplorer.usgs.gov). The Earth Engine team has been ingesting Landsat 9 into the Public Data Catalog. As of Feb. 14, 2022, although Landsat 9 data have not been publicly listed on the [Earth Engine Datasets](https://developers.google.com/earth-engine/datasets) page, you can access the data through `ee.ImageCollection('LANDSAT/LC09/C02/T1_L2')` .",
"_____no_output_____"
]
],
[
[
"# !pip install geemap",
"_____no_output_____"
]
],
[
[
"Import libraries.",
"_____no_output_____"
]
],
[
[
"import ee\nimport geemap",
"_____no_output_____"
]
],
[
[
"Create an interactive map.",
"_____no_output_____"
]
],
[
[
"Map = geemap.Map()",
"_____no_output_____"
]
],
[
[
"Find out how many Landsat images are available.",
"_____no_output_____"
]
],
[
[
"collection = ee.ImageCollection('LANDSAT/LC09/C02/T1_L2')\nprint(collection.size().getInfo())",
"_____no_output_____"
]
],
[
[
"Create a median composite.",
"_____no_output_____"
]
],
[
[
"median = collection.median()",
"_____no_output_____"
]
],
[
[
"Apply scaling factors. See https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C02_T1_L2#bands",
"_____no_output_____"
]
],
[
[
"def apply_scale_factors(image):\n opticalBands = image.select('SR_B.').multiply(0.0000275).add(-0.2)\n thermalBands = image.select('ST_B.*').multiply(0.00341802).add(149.0)\n return image.addBands(opticalBands, None, True).addBands(thermalBands, None, True)",
"_____no_output_____"
],
[
"dataset = apply_scale_factors(median)",
"_____no_output_____"
]
],
[
[
"Specify visualization parameters.",
"_____no_output_____"
]
],
[
[
"vis_natural = {\n 'bands': ['SR_B4', 'SR_B3', 'SR_B2'],\n 'min': 0.0,\n 'max': 0.3,\n}\n\nvis_nir = {\n 'bands': ['SR_B5', 'SR_B4', 'SR_B3'],\n 'min': 0.0,\n 'max': 0.3,\n}",
"_____no_output_____"
]
],
[
[
"Add data layers to the map.",
"_____no_output_____"
]
],
[
[
"Map.addLayer(dataset, vis_natural, 'True color (432)')\nMap.addLayer(dataset, vis_nir, 'Color infrared (543)')\nMap",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"Create linked maps for visualizing images with different band combinations. For more information on common band combinations of Landsat 8/9, see https://gisgeography.com/landsat-8-bands-combinations/",
"_____no_output_____"
],
[
"Specify visualization parameters.",
"_____no_output_____"
]
],
[
[
"vis_params = [\n {'bands': ['SR_B4', 'SR_B3', 'SR_B2'], 'min': 0, 'max': 0.3},\n {'bands': ['SR_B5', 'SR_B4', 'SR_B3'], 'min': 0, 'max': 0.3},\n {'bands': ['SR_B7', 'SR_B6', 'SR_B4'], 'min': 0, 'max': 0.3},\n {'bands': ['SR_B6', 'SR_B5', 'SR_B2'], 'min': 0, 'max': 0.3},\n]",
"_____no_output_____"
]
],
[
[
"Specify labels for each layer.",
"_____no_output_____"
]
],
[
[
"labels = [\n 'Natural Color (4, 3, 2)',\n 'Color Infrared (5, 4, 3)',\n 'Short-Wave Infrared (7, 6, 4)',\n 'Agriculture (6, 5, 2)',\n]",
"_____no_output_____"
]
],
[
[
"Create linked maps.",
"_____no_output_____"
]
],
[
[
"geemap.linked_maps(\n rows=2,\n cols=2,\n height=\"400px\",\n center=[40, -100],\n zoom=4,\n ee_objects=[dataset],\n vis_params=vis_params,\n labels=labels,\n label_position=\"topright\",\n)",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"Create a split-panel map for comparing Landsat 8 and 9 images.\n\nRetrieve two sample images.",
"_____no_output_____"
]
],
[
[
"landsat8 = ee.Image('LANDSAT/LC08/C02/T1_L2/LC08_015043_20130402')\nlandsat9 = ee.Image('LANDSAT/LC09/C02/T1_L2/LC09_015043_20211231')",
"_____no_output_____"
]
],
[
[
"Apply scaling factors.",
"_____no_output_____"
]
],
[
[
"landsat8 = apply_scale_factors(landsat8)\nlandsat9 = apply_scale_factors(landsat9)",
"_____no_output_____"
]
],
[
[
"Generate Earth Engine layers.",
"_____no_output_____"
]
],
[
[
"left_layer = geemap.ee_tile_layer(landsat8, vis_natural, 'Landsat 8')\nright_layer = geemap.ee_tile_layer(landsat9, vis_natural, 'Landsat 9')",
"_____no_output_____"
]
],
[
[
"Create a split-panel map.",
"_____no_output_____"
]
],
[
[
"Map = geemap.Map()\nMap.split_map(left_layer, right_layer)\nMap",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecd81cc10ba116a3cca58692c7e94f30c8a52d9b | 131,186 | ipynb | Jupyter Notebook | week6_linear_regression/linear_regression_szeged.ipynb | mgalaviz1/my_work | b8bc909d95f75e11621634751c9d3c31e96f0ffe | [
"MIT"
] | null | null | null | week6_linear_regression/linear_regression_szeged.ipynb | mgalaviz1/my_work | b8bc909d95f75e11621634751c9d3c31e96f0ffe | [
"MIT"
] | null | null | null | week6_linear_regression/linear_regression_szeged.ipynb | mgalaviz1/my_work | b8bc909d95f75e11621634751c9d3c31e96f0ffe | [
"MIT"
] | null | null | null | 124.229167 | 31,552 | 0.811237 | [
[
[
"#%reload_ext nb_black",
"_____no_output_____"
]
],
[
[
"# Szeged, Hungary Weather\nChecking assumptions in linear regression models",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import confusion_matrix, classification_report\nfrom statsmodels.stats.diagnostic import het_breuschpagan\n\nimport statsmodels.api as sm\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport plotly_express as px\n\n%matplotlib inline",
"_____no_output_____"
],
[
"from sqlalchemy import create_engine\nfrom IPython.display import display_html",
"_____no_output_____"
],
[
"postgres_user = 'dsbc_student'\npostgres_pw = '7*.8G9QH21'\npostgres_host = '142.93.121.174'\npostgres_port = '5432'\npostgres_db = 'weatherinszeged'\n\nengine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(\n postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))\ndf = pd.read_sql_query('select * from weatherinszeged',con=engine)\n\n# No need for an open connection, because only doing a single query\nengine.dispose()\n\n\ndf.head(10)",
"_____no_output_____"
],
[
"df.isna().mean()",
"_____no_output_____"
],
[
"drop_cols = [\"date\", \"summary\", \"preciptype\", \"apparenttemperature\", \"visibility\", \"loudcover\", \"dailysummary\"]\ndf = df.drop(columns=drop_cols)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"X = df[[\"humidity\", \"windspeed\", \"windbearing\", \"pressure\"]]\ny = df[\"temperature\"]",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 42)",
"_____no_output_____"
],
[
"model = LinearRegression()\nmodel.fit(X_train, y_train)\n\ntrain_score = model.score(X_train, y_train)\ntest_score = model.score(X_test, y_test)\n\nprint(f'Train Score: {train_score}')\nprint(f'Test Score: {test_score}')",
"Train Score: 0.4214854607250208\nTest Score: 0.4169121840570047\n"
],
[
"X_train_const = sm.add_constant(X_train)\nlm = sm.OLS(y_train, X_train_const).fit()\n\nlm.summary()",
"_____no_output_____"
],
[
"print(model.intercept_)\nprint(model.coef_)",
"37.9540705993263\n[-3.25079855e+01 -1.99185599e-01 3.78991600e-03 -6.78668049e-04]\n"
]
],
[
[
"Our model only captures about 42% of the variance in the data, so it's not a good model. ",
"_____no_output_____"
]
],
[
[
"predictions = model.predict(X_test)\nerrors = y_test - predictions\n\nprint(\"Mean of the errors in the model is: {}\".format(np.mean(errors)))",
"_____no_output_____"
]
],
[
[
"Checking on assumptions\n\n* Linearity of models in their coefficients\n* The error term's expected value\n* Homoscedasticity\n* Low multicollinearity\n* Uncorrelated error terms\n* Independence of the features and errors\n* Normality of the errors",
"_____no_output_____"
],
[
"The model is linear in its coefficients and includes a constant, so assumptions #1 and #2 are satisfied.",
"_____no_output_____"
]
],
[
[
"# doing Bartlett and Levene tests to check for homoscedasticity \nfrom scipy.stats import bartlett\nfrom scipy.stats import levene\n\nbart_stats = bartlett(predictions, errors)\nlev_stats = levene(predictions, errors)\n\nprint(\"Bartlett test statistic value is {0:3g} and p value is {1:.3g}\".format(bart_stats[0], bart_stats[1]))\nprint(\"Levene test statistic value is {0:3g} and p value is {1:.3g}\".format(lev_stats[0], lev_stats[1]))",
"Bartlett test statistic value is 2459.04 and p value is 0\nLevene test statistic value is 2314.22 and p value is 0\n"
],
[
"# null hypothesis: data is homoscedastic\ntrue_residuals = lm.resid\n_, p, _, _ = het_breuschpagan(true_residuals, X_train_const)\np",
"_____no_output_____"
]
],
[
[
"The Bartlett, Levene, and Breusch-Pagan tests all show that our model is heteroscedastic, so we need to look at including more variables or perhaps correcting for skew.",
"_____no_output_____"
],
[
"None of our variables are highly correlated with each other, so we satisfy the low multicollinearity assumption.",
"_____no_output_____"
]
],
[
[
"df.corr()",
"_____no_output_____"
]
],
[
[
"Visualize the correlations between the variables as a heatmap, then plot the errors to check for autocorrelation.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(12,10))\nsns.heatmap(df.corr(),vmin = -1, vmax = 1, annot=True)",
"_____no_output_____"
],
[
"plt.plot(errors)\nplt.show()",
"_____no_output_____"
],
[
"from statsmodels.tsa.stattools import acf\n\nacf_data = acf(errors)\n\nplt.plot(acf_data[1:])\nplt.show()",
"C:\\Users\\Mike\\Anaconda3\\lib\\site-packages\\statsmodels\\tsa\\stattools.py:660: FutureWarning: The default number of lags is changing from 40 tomin(int(10 * np.log10(nobs)), nobs - 1) after 0.12is released. Set the number of lags to an integer to silence this warning.\n FutureWarning,\nC:\\Users\\Mike\\Anaconda3\\lib\\site-packages\\statsmodels\\tsa\\stattools.py:669: FutureWarning: fft=True will become the default after the release of the 0.12 release of statsmodels. To suppress this warning, explicitly set fft=False.\n FutureWarning,\n"
]
],
[
[
"There appear to be correlated error terms, so we need to include more variables in the model.",
"_____no_output_____"
]
],
[
[
"rand_nums = np.random.normal(np.mean(errors), np.std(errors), len(errors))\n\nplt.figure(figsize=(12,5))\n\nplt.subplot(1,2,1)\nplt.scatter(np.sort(rand_nums), np.sort(errors)) # Sort the arrays\nplt.xlabel(\"the normally distributed random variable\")\nplt.ylabel(\"errors of the model\")\nplt.title(\"QQ plot\")\n\nplt.subplot(1,2,2)\nplt.hist(errors)\nplt.xlabel(\"errors\")\nplt.title(\"Histogram of the errors\")\n\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd821fa6d2778aea521e111d73eb917dc0ba507 | 33,993 | ipynb | Jupyter Notebook | IllinoisGRMHD/ID_converter_ILGRMHD/doc/Tutorial-ID_converter_ILGRMHD.ipynb | dinatraykova/nrpytutorial | 74d1bab0c45380727975568ba956b69c082e2293 | [
"BSD-2-Clause"
] | null | null | null | IllinoisGRMHD/ID_converter_ILGRMHD/doc/Tutorial-ID_converter_ILGRMHD.ipynb | dinatraykova/nrpytutorial | 74d1bab0c45380727975568ba956b69c082e2293 | [
"BSD-2-Clause"
] | null | null | null | IllinoisGRMHD/ID_converter_ILGRMHD/doc/Tutorial-ID_converter_ILGRMHD.ipynb | dinatraykova/nrpytutorial | 74d1bab0c45380727975568ba956b69c082e2293 | [
"BSD-2-Clause"
] | 2 | 2019-11-14T03:31:18.000Z | 2019-12-12T13:42:52.000Z | 43.413793 | 354 | 0.553408 | [
[
[
"<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-59152712-8\"></script>\n<script>\n window.dataLayer = window.dataLayer || [];\n function gtag(){dataLayer.push(arguments);}\n gtag('js', new Date());\n\n gtag('config', 'UA-59152712-8');\n</script>\n\n# Tutorial-ID_converter_ILGRMHD\n\n## Authors: Leo Werneck & Zach Etienne\n\n<font color='red'>**This module is currently under development**</font>\n\n## In this tutorial module we generate the ID_converter_ILGRMHD ETK thorn files, compatible with our latest implementation of IllinoisGRMHD\n\n### Required and recommended citations:\n\n* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).\n* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).\n* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)).",
"_____no_output_____"
],
[
"<a id='toc'></a>\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis module is organized as follows\n\n0. [Step 0](#src_dir): **Source directory creation**\n1. [Step 1](#introduction): **Introduction**\n1. [Step 2](#convert_to_hydrobase__src): **`set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables.C`**\n1. [Step 3](#convert_to_hydrobase__param): **`param.ccl`**\n1. [Step 4](#convert_to_hydrobase__interface): **`interface.ccl`**\n1. [Step 5](#convert_to_hydrobase__schedule): **`schedule.ccl`**\n1. [Step 6](#convert_to_hydrobase__make): **`make.code.defn`**\n1. [Step n-1](#code_validation): **Code validation**\n1. [Step n](#latex_pdf_output): **Output this module to $\\LaTeX$-formatted PDF file**",
"_____no_output_____"
],
[
"<a id='src_dir'></a>\n\n# Step 0: Source directory creation \\[Back to [top](#toc)\\]\n$$\\label{src_dir}$$\n\nWe will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.",
"_____no_output_____"
]
],
[
[
"# Step 0: Creation of the IllinoisGRMHD source directory\n# Step 0a: Load up cmdline_helper and create the directory\nimport os,sys\nimport cmdline_helper as cmd\nIDcIGM_dir_path = \"..\"\ncmd.mkdir(IDcIGM_dir_path)\nIDcIGM_src_dir_path = os.path.join(IDcIGM_dir_path,\"src\")\ncmd.mkdir(IDcIGM_src_dir_path)\n\n# Step 0b: Create the output file path \noutfile_path__ID_converter_ILGRMHD__source = os.path.join(IDcIGM_src_dir_path,\"set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables.C\")\noutfile_path__ID_converter_ILGRMHD__make = os.path.join(IDcIGM_src_dir_path,\"make.code.defn\")\noutfile_path__ID_converter_ILGRMHD__param = os.path.join(IDcIGM_dir_path,\"param.ccl\")\noutfile_path__ID_converter_ILGRMHD__interface = os.path.join(IDcIGM_dir_path,\"interface.ccl\")\noutfile_path__ID_converter_ILGRMHD__schedule = os.path.join(IDcIGM_dir_path,\"schedule.ccl\")",
"_____no_output_____"
]
],
[
[
"<a id='introduction'></a>\n\n# Step 1: Introduction \\[Back to [top](#toc)\\]\n$$\\label{introduction}$$",
"_____no_output_____"
],
[
"<a id='convert_to_hydrobase__src'></a>\n\n# Step 2: `set_IllinoisGRMHD_metric_GRMHD_variables _based_on_HydroBase_and_ADMBase_variables.C` \\[Back to [top](#toc)\\]\n$$\\label{convert_to_hydrobase__src}$$",
"_____no_output_____"
]
],
[
[
"%%writefile $outfile_path__ID_converter_ILGRMHD__source\n/********************************\n * CONVERT ET ID TO IllinoisGRMHD\n * \n * Written in 2014 by Zachariah B. Etienne\n *\n * Sets metric & MHD variables needed \n * by IllinoisGRMHD, converting from\n * HydroBase and ADMBase.\n ********************************/\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n#include <sys/time.h>\n#include \"cctk.h\"\n#include \"cctk_Parameters.h\"\n#include \"cctk_Arguments.h\"\n#include \"IllinoisGRMHD_headers.h\"\n\nextern \"C\" void set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables(CCTK_ARGUMENTS) {\n\n DECLARE_CCTK_ARGUMENTS;\n DECLARE_CCTK_PARAMETERS;\n\n if(rho_b_atm > 1e199) {\n CCTK_VError(VERR_DEF_PARAMS, \"You MUST set rho_b_atm to some reasonable value in your param.ccl file.\\n\");\n }\n\n // Convert ADM variables (from ADMBase) to the BSSN-based variables expected by this routine.\n IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij(cctkGH,cctk_lsh, gxx,gxy,gxz,gyy,gyz,gzz,alp,\n gtxx,gtxy,gtxz,gtyy,gtyz,gtzz,\n gtupxx,gtupxy,gtupxz,gtupyy,gtupyz,gtupzz,\n phi_bssn,psi_bssn,lapm1);\n\n /***************\n * PPEOS Patch *\n ***************/\n eos_struct eos;\n initialize_EOS_struct_from_input(eos);\n \n if(pure_hydro_run) {\n#pragma omp parallel for\n for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) {\n int index=CCTK_GFINDEX3D(cctkGH,i,j,k);\n Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,0)]=0;\n Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,1)]=0;\n Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,2)]=0;\n Aphi[index]=0;\n }\n }\n\n#pragma omp parallel for\n for(int k=0;k<cctk_lsh[2];k++) for(int j=0;j<cctk_lsh[1];j++) for(int i=0;i<cctk_lsh[0];i++) {\n int index=CCTK_GFINDEX3D(cctkGH,i,j,k);\n\n rho_b[index] = rho[index];\n P[index] = press[index];\n\n /***************\n * PPEOS Patch *\n ***************\n * We now verify that the initial data\n * provided by the user is indeed 
\"cold\",\n * i.e. it contains no Thermal part and\n * P = P_cold.\n */\n /* Compute P_cold */\n int polytropic_index = find_polytropic_K_and_Gamma_index(eos, rho_b[index]);\n double K_poly = eos.K_ppoly_tab[polytropic_index];\n double Gamma_poly = eos.Gamma_ppoly_tab[polytropic_index];\n double P_cold = K_poly*pow(rho_b[index],Gamma_poly);\n\n /* Compare P and P_cold */\n double P_rel_error = fabs(P[index] - P_cold)/P[index];\n if( rho_b[index] > rho_b_atm && P_rel_error > 1e-2 ) {\n\n /* Determine the value of Gamma_poly_local associated with P[index] */\n CCTK_VError(VERR_DEF_PARAMS,\"Expected a piecewise polytropic EOS with local Gamma_poly = %.15e, but found a point such that Gamma_poly_local = %.15e.\\nError = %e\\nrho_b = %e\\nrho_b_atm = %e\\nP = %e\\n\",\n Gamma_poly, 0.123456, P_rel_error, rho_b[index], rho_b_atm, P[index]);\n }\n\n Ax[index] = Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,0)];\n Ay[index] = Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,1)];\n Az[index] = Avec[CCTK_GFINDEX4D(cctkGH,i,j,k,2)];\n psi6phi[index] = Aphi[index];\n\t\n double ETvx = vel[CCTK_GFINDEX4D(cctkGH,i,j,k,0)];\n double ETvy = vel[CCTK_GFINDEX4D(cctkGH,i,j,k,1)];\n double ETvz = vel[CCTK_GFINDEX4D(cctkGH,i,j,k,2)];\n\n // IllinoisGRMHD defines v^i = u^i/u^0.\n\t\n // Meanwhile, the ET/HydroBase formalism, called the Valencia \n // formalism, splits the 4 velocity into a purely spatial part\n // and a part that is normal to the spatial hypersurface:\n // u^a = G (n^a + U^a), (Eq. 
14 of arXiv:1304.5544; G=W, U^a=v^a)\n // where n^a is the unit normal vector to the spatial hypersurface,\n // n_a = {-\\alpha,0,0,0}, and U^a is the purely spatial part, which\n // is defined in HydroBase as the vel[] vector gridfunction.\n // Then u^a n_a = - \\alpha u^0 = G n^a n_a = -G, and\n // of course \\alpha u^0 = 1/sqrt(1+γ^ij u_i u_j) = \\Gamma,\n // the standard Lorentz factor.\n\n // Note that n^i = - \\beta^i / \\alpha, so \n // u^a = \\Gamma (n^a + U^a) \n // -> u^i = \\Gamma ( U^i - \\beta^i / \\alpha )\n // which implies\n // v^i = u^i/u^0\n // = \\Gamma/u^0 ( U^i - \\beta^i / \\alpha ) <- \\Gamma = \\alpha u^0\n // = \\alpha ( U^i - \\beta^i / \\alpha )\n // = \\alpha U^i - \\beta^i\n\n vx[index] = alp[index]*ETvx - betax[index];\n vy[index] = alp[index]*ETvy - betay[index];\n vz[index] = alp[index]*ETvz - betaz[index];\n\n }\n\n // Neat feature for debugging: Add a roundoff-error perturbation\n // to the initial data.\n // Set random_pert variable to ~1e-14 for a random 15th digit\n // perturbation.\n srand(random_seed); // Use srand() as rand() is thread-safe.\n for(int k=0;k<cctk_lsh[2];k++)\n for(int j=0;j<cctk_lsh[1];j++)\n for(int i=0;i<cctk_lsh[0];i++) {\n int index=CCTK_GFINDEX3D(cctkGH,i,j,k);\n double pert = (random_pert*(double)rand() / RAND_MAX);\n double one_plus_pert=(1.0+pert);\n rho[index]*=one_plus_pert;\n vx[index]*=one_plus_pert;\n vy[index]*=one_plus_pert;\n vz[index]*=one_plus_pert;\n\n psi6phi[index]*=one_plus_pert;\n Ax[index]*=one_plus_pert;\n Ay[index]*=one_plus_pert;\n Az[index]*=one_plus_pert;\n }\n\n // Next compute B & B_stagger from A_i. 
Note that this routine also depends on\n // the psi_bssn[] gridfunction being set to exp(phi).\n\n double dxi = 1.0/CCTK_DELTA_SPACE(0);\n double dyi = 1.0/CCTK_DELTA_SPACE(1);\n double dzi = 1.0/CCTK_DELTA_SPACE(2); \n\n#pragma omp parallel for\n for(int k=0;k<cctk_lsh[2];k++)\n for(int j=0;j<cctk_lsh[1];j++)\n for(int i=0;i<cctk_lsh[0];i++) {\n // Look Mom, no if() statements!\n int shiftedim1 = (i-1)*(i!=0); // This way, i=0 yields shiftedim1=0 and shiftedi=1, used below for our COPY boundary condition.\n int shiftedi = shiftedim1+1;\n\n int shiftedjm1 = (j-1)*(j!=0);\n int shiftedj = shiftedjm1+1;\n\n int shiftedkm1 = (k-1)*(k!=0);\n int shiftedk = shiftedkm1+1;\n\n int index,indexim1,indexjm1,indexkm1;\n\n int actual_index = CCTK_GFINDEX3D(cctkGH,i,j,k);\n\n double Psi = psi_bssn[actual_index];\n double Psim3 = 1.0/(Psi*Psi*Psi);\n\n // For the lower boundaries, the following applies a \"copy\" \n // boundary condition on Bi_stagger where needed.\n // E.g., Bx_stagger(i,jmin,k) = Bx_stagger(i,jmin+1,k)\n // We find the copy BC works better than extrapolation.\n // For the upper boundaries, we do the following copy:\n // E.g., Psi(imax+1,j,k)=Psi(imax,j,k)\n /**************/\n /* Bx_stagger */\n /**************/\n\n index = CCTK_GFINDEX3D(cctkGH,i,shiftedj,shiftedk);\n indexjm1 = CCTK_GFINDEX3D(cctkGH,i,shiftedjm1,shiftedk);\n indexkm1 = CCTK_GFINDEX3D(cctkGH,i,shiftedj,shiftedkm1);\n // Set Bx_stagger = \\partial_y A_z - partial_z A_y\n // \"Grid\" Ax(i,j,k) is actually Ax(i,j+1/2,k+1/2)\n // \"Grid\" Ay(i,j,k) is actually Ay(i+1/2,j,k+1/2)\n // \"Grid\" Az(i,j,k) is actually Ay(i+1/2,j+1/2,k)\n // Therefore, the 2nd order derivative \\partial_z A_y at (i+1/2,j,k) is:\n // [\"Grid\" Ay(i,j,k) - \"Grid\" Ay(i,j,k-1)]/dZ\n Bx_stagger[actual_index] = (Az[index]-Az[indexjm1])*dyi - (Ay[index]-Ay[indexkm1])*dzi;\n\n // Now multiply Bx and Bx_stagger by 1/sqrt(gamma(i+1/2,j,k)]) = 1/sqrt(1/2 [gamma + gamma_ip1]) = exp(-6 x 1/2 [phi + phi_ip1] )\n int imax_minus_i = 
(cctk_lsh[0]-1)-i;\n int indexip1jk = CCTK_GFINDEX3D(cctkGH,i + ( (imax_minus_i > 0) - (0 > imax_minus_i) ),j,k);\n double Psi_ip1 = psi_bssn[indexip1jk];\n Bx_stagger[actual_index] *= Psim3/(Psi_ip1*Psi_ip1*Psi_ip1);\n\n /**************/\n /* By_stagger */\n /**************/\n\n index = CCTK_GFINDEX3D(cctkGH,shiftedi,j,shiftedk);\n indexim1 = CCTK_GFINDEX3D(cctkGH,shiftedim1,j,shiftedk);\n indexkm1 = CCTK_GFINDEX3D(cctkGH,shiftedi,j,shiftedkm1);\n // Set By_stagger = \\partial_z A_x - \\partial_x A_z\n By_stagger[actual_index] = (Ax[index]-Ax[indexkm1])*dzi - (Az[index]-Az[indexim1])*dxi;\n\n // Now multiply By and By_stagger by 1/sqrt(gamma(i,j+1/2,k)]) = 1/sqrt(1/2 [gamma + gamma_jp1]) = exp(-6 x 1/2 [phi + phi_jp1] )\n int jmax_minus_j = (cctk_lsh[1]-1)-j;\n int indexijp1k = CCTK_GFINDEX3D(cctkGH,i,j + ( (jmax_minus_j > 0) - (0 > jmax_minus_j) ),k);\n double Psi_jp1 = psi_bssn[indexijp1k];\n By_stagger[actual_index] *= Psim3/(Psi_jp1*Psi_jp1*Psi_jp1);\n\n\n /**************/\n /* Bz_stagger */\n /**************/\n\n index = CCTK_GFINDEX3D(cctkGH,shiftedi,shiftedj,k);\n indexim1 = CCTK_GFINDEX3D(cctkGH,shiftedim1,shiftedj,k);\n indexjm1 = CCTK_GFINDEX3D(cctkGH,shiftedi,shiftedjm1,k);\n // Set Bz_stagger = \\partial_x A_y - \\partial_y A_x\n Bz_stagger[actual_index] = (Ay[index]-Ay[indexim1])*dxi - (Ax[index]-Ax[indexjm1])*dyi;\n\n // Now multiply Bz_stagger by 1/sqrt(gamma(i,j,k+1/2)]) = 1/sqrt(1/2 [gamma + gamma_kp1]) = exp(-6 x 1/2 [phi + phi_kp1] )\n int kmax_minus_k = (cctk_lsh[2]-1)-k;\n int indexijkp1 = CCTK_GFINDEX3D(cctkGH,i,j,k + ( (kmax_minus_k > 0) - (0 > kmax_minus_k) ));\n double Psi_kp1 = psi_bssn[indexijkp1];\n Bz_stagger[actual_index] *= Psim3/(Psi_kp1*Psi_kp1*Psi_kp1);\n\n }\n\n#pragma omp parallel for\n for(int k=0;k<cctk_lsh[2];k++)\n for(int j=0;j<cctk_lsh[1];j++)\n for(int i=0;i<cctk_lsh[0];i++) {\n // Look Mom, no if() statements!\n int shiftedim1 = (i-1)*(i!=0); // This way, i=0 yields shiftedim1=0 and shiftedi=1, used below for our COPY 
boundary condition.\n int shiftedi = shiftedim1+1;\n\n int shiftedjm1 = (j-1)*(j!=0);\n int shiftedj = shiftedjm1+1;\n\n int shiftedkm1 = (k-1)*(k!=0);\n int shiftedk = shiftedkm1+1;\n\n int index,indexim1,indexjm1,indexkm1;\n\n int actual_index = CCTK_GFINDEX3D(cctkGH,i,j,k);\n\n // For the lower boundaries, the following applies a \"copy\" \n // boundary condition on Bi and Bi_stagger where needed.\n // E.g., Bx(imin,j,k) = Bx(imin+1,j,k)\n // We find the copy BC works better than extrapolation.\n /******/\n /* Bx */\n /******/\n index = CCTK_GFINDEX3D(cctkGH,shiftedi,j,k);\n indexim1 = CCTK_GFINDEX3D(cctkGH,shiftedim1,j,k);\n // Set Bx = 0.5 ( Bx_stagger + Bx_stagger_im1 )\n // \"Grid\" Bx_stagger(i,j,k) is actually Bx_stagger(i+1/2,j,k)\n Bx[actual_index] = 0.5 * ( Bx_stagger[index] + Bx_stagger[indexim1] );\n\n /******/\n /* By */\n /******/\n index = CCTK_GFINDEX3D(cctkGH,i,shiftedj,k);\n indexjm1 = CCTK_GFINDEX3D(cctkGH,i,shiftedjm1,k);\n // Set By = 0.5 ( By_stagger + By_stagger_im1 )\n // \"Grid\" By_stagger(i,j,k) is actually By_stagger(i,j+1/2,k)\n By[actual_index] = 0.5 * ( By_stagger[index] + By_stagger[indexjm1] );\n\n /******/\n /* Bz */\n /******/\n index = CCTK_GFINDEX3D(cctkGH,i,j,shiftedk);\n indexkm1 = CCTK_GFINDEX3D(cctkGH,i,j,shiftedkm1);\n // Set Bz = 0.5 ( Bz_stagger + Bz_stagger_im1 )\n // \"Grid\" Bz_stagger(i,j,k) is actually Bz_stagger(i,j+1/2,k)\n Bz[actual_index] = 0.5 * ( Bz_stagger[index] + Bz_stagger[indexkm1] );\n }\n\n // Finally, enforce limits on primitives & compute conservative variables.\n#pragma omp parallel for\n for(int k=0;k<cctk_lsh[2];k++)\n for(int j=0;j<cctk_lsh[1];j++)\n for(int i=0;i<cctk_lsh[0];i++) {\n static const int zero_int=0;\n int index = CCTK_GFINDEX3D(cctkGH,i,j,k);\n\n int ww;\n\n double PRIMS[MAXNUMVARS];\n ww=0;\n PRIMS[ww] = rho_b[index]; ww++;\n PRIMS[ww] = P[index]; ww++;\n PRIMS[ww] = vx[index]; ww++;\n PRIMS[ww] = vy[index]; ww++;\n PRIMS[ww] = vz[index]; ww++;\n PRIMS[ww] = Bx[index]; ww++;\n 
PRIMS[ww] = By[index]; ww++;\n PRIMS[ww] = Bz[index]; ww++;\n\n double METRIC[NUMVARS_FOR_METRIC],dummy=0;\n ww=0;\n // FIXME: NECESSARY?\n //psi_bssn[index] = exp(phi[index]);\n METRIC[ww] = phi_bssn[index];ww++;\n METRIC[ww] = dummy; ww++; // Don't need to set psi.\n METRIC[ww] = gtxx[index]; ww++;\n METRIC[ww] = gtxy[index]; ww++;\n METRIC[ww] = gtxz[index]; ww++;\n METRIC[ww] = gtyy[index]; ww++;\n METRIC[ww] = gtyz[index]; ww++;\n METRIC[ww] = gtzz[index]; ww++;\n METRIC[ww] = lapm1[index]; ww++;\n METRIC[ww] = betax[index]; ww++;\n METRIC[ww] = betay[index]; ww++;\n METRIC[ww] = betaz[index]; ww++;\n METRIC[ww] = gtupxx[index]; ww++;\n METRIC[ww] = gtupyy[index]; ww++;\n METRIC[ww] = gtupzz[index]; ww++;\n METRIC[ww] = gtupxy[index]; ww++;\n METRIC[ww] = gtupxz[index]; ww++;\n METRIC[ww] = gtupyz[index]; ww++;\n\n double CONSERVS[NUM_CONSERVS] = {0,0,0,0,0};\n double g4dn[4][4];\n double g4up[4][4];\n double TUPMUNU[10],TDNMUNU[10];\n\n struct output_stats stats; stats.failure_checker=0;\n IllinoisGRMHD_enforce_limits_on_primitives_and_recompute_conservs(zero_int,PRIMS,stats,eos,\n METRIC,g4dn,g4up,TUPMUNU,TDNMUNU,CONSERVS);\n rho_b[index] = PRIMS[RHOB];\n P[index] = PRIMS[PRESSURE];\n vx[index] = PRIMS[VX];\n vy[index] = PRIMS[VY];\n vz[index] = PRIMS[VZ];\n\n rho_star[index] = CONSERVS[RHOSTAR];\n mhd_st_x[index] = CONSERVS[STILDEX];\n mhd_st_y[index] = CONSERVS[STILDEY];\n mhd_st_z[index] = CONSERVS[STILDEZ];\n tau[index] = CONSERVS[TAUENERGY];\n\n if(update_Tmunu) {\n ww=0;\n eTtt[index] = TDNMUNU[ww]; ww++;\n eTtx[index] = TDNMUNU[ww]; ww++;\n eTty[index] = TDNMUNU[ww]; ww++;\n eTtz[index] = TDNMUNU[ww]; ww++;\n eTxx[index] = TDNMUNU[ww]; ww++;\n eTxy[index] = TDNMUNU[ww]; ww++;\n eTxz[index] = TDNMUNU[ww]; ww++;\n eTyy[index] = TDNMUNU[ww]; ww++;\n eTyz[index] = TDNMUNU[ww]; ww++;\n eTzz[index] = TDNMUNU[ww];\n }\n }\n}\n\n",
"Overwriting ../ID_converter_ILGRMHD/src/set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables.C\n"
]
],
[
[
"<a id='convert_to_hydrobase__param'></a>\n\n# Step 3: `param.ccl` \\[Back to [top](#toc)\\]\n$$\\label{convert_to_hydrobase__param}$$",
"_____no_output_____"
]
],
[
[
"%%writefile $outfile_path__ID_converter_ILGRMHD__param\n# Parameter definitions for thorn ID_converter_ILGRMHD\n\nshares: IllinoisGRMHD\nUSES KEYWORD rho_b_max\nUSES KEYWORD rho_b_atm\nUSES KEYWORD tau_atm\nUSES KEYWORD neos\nUSES KEYWORD K_ppoly_tab0\nUSES KEYWORD rho_ppoly_tab_in[10]\nUSES KEYWORD Gamma_ppoly_tab_in[10]\nUSES KEYWORD Sym_Bz\nUSES KEYWORD GAMMA_SPEED_LIMIT\nUSES KEYWORD Psi6threshold\nUSES KEYWORD update_Tmunu\n\nprivate:\n\nINT random_seed \"Random seed for random, generally roundoff-level perturbation on initial data. Seeds srand(), and rand() is used for the RNG.\"\n{\n 0:99999999 :: \"Anything unsigned goes.\"\n} 0\n\nREAL random_pert \"Random perturbation atop data\"\n{\n *:* :: \"Anything goes.\"\n} 0\n\nBOOLEAN pure_hydro_run \"Set the vector potential and corresponding EM gauge quantity to zero\"\n{\n} \"no\"\n\n",
"Overwriting ../ID_converter_ILGRMHD/param.ccl\n"
]
],
[
[
"<a id='convert_to_hydrobase__interface'></a>\n\n# Step 4: `interface.ccl` \\[Back to [top](#toc)\\]\n$$\\label{convert_to_hydrobase__interface}$$",
"_____no_output_____"
]
],
[
[
"%%writefile $outfile_path__ID_converter_ILGRMHD__interface\n# Interface definition for thorn ID_converter_ILGRMHD\n\nimplements: ID_converter_ILGRMHD\n\ninherits: ADMBase, Boundary, SpaceMask, Tmunubase, HydroBase, grid, IllinoisGRMHD\n\nuses include header: IllinoisGRMHD_headers.h\nUSES INCLUDE: Symmetry.h\n\n",
"Overwriting ../ID_converter_ILGRMHD/interface.ccl\n"
]
],
[
[
"<a id='convert_to_hydrobase__schedule'></a>\n\n# Step 5: `schedule.ccl` \\[Back to [top](#toc)\\]\n$$\\label{convert_to_hydrobase__schedule}$$",
"_____no_output_____"
]
],
[
[
"%%writefile $outfile_path__ID_converter_ILGRMHD__schedule\n# Schedule definitions for thorn ID_converter_ILGRMHD\n\nschedule group IllinoisGRMHD_ID_Converter at CCTK_INITIAL after HydroBase_Initial before Convert_to_HydroBase\n{\n} \"Translate ET-generated, HydroBase-compatible initial data and convert into variables used by IllinoisGRMHD\"\n\nschedule set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables IN IllinoisGRMHD_ID_Converter as first_initialdata before TOV_Initial_Data\n{\n LANG: C\n OPTIONS: LOCAL\n # What the heck, let's synchronize everything!\n SYNC: IllinoisGRMHD::grmhd_primitives_Bi, IllinoisGRMHD::grmhd_primitives_Bi_stagger, IllinoisGRMHD::grmhd_primitives_allbutBi, IllinoisGRMHD::em_Ax,IllinoisGRMHD::em_Ay,IllinoisGRMHD::em_Az,IllinoisGRMHD::em_psi6phi,IllinoisGRMHD::grmhd_conservatives,IllinoisGRMHD::BSSN_quantities,ADMBase::metric,ADMBase::lapse,ADMBase::shift,ADMBase::curv\n} \"Convert HydroBase initial data (ID) to ID that IllinoisGRMHD can read.\"\n\nschedule IllinoisGRMHD_InitSymBound IN IllinoisGRMHD_ID_Converter as third_initialdata after second_initialdata\n{\n SYNC: IllinoisGRMHD::grmhd_conservatives,IllinoisGRMHD::em_Ax,IllinoisGRMHD::em_Ay,IllinoisGRMHD::em_Az,IllinoisGRMHD::em_psi6phi\n LANG: C\n} \"Schedule symmetries -- Actually just a placeholder function to ensure prolongation / processor syncs are done BEFORE the primitives solver.\"\n\nschedule IllinoisGRMHD_compute_B_and_Bstagger_from_A IN IllinoisGRMHD_ID_Converter as fourth_initialdata after third_initialdata\n{\n SYNC: IllinoisGRMHD::grmhd_primitives_Bi, IllinoisGRMHD::grmhd_primitives_Bi_stagger\n LANG: C\n} \"Compute B and B_stagger from A\"\n\nschedule IllinoisGRMHD_conserv_to_prims IN IllinoisGRMHD_ID_Converter as fifth_initialdata after fourth_initialdata\n{\n LANG: C\n} \"Compute primitive variables from conservatives. This is non-trivial, requiring a Newton-Raphson root-finder.\"\n\n",
"Overwriting ../ID_converter_ILGRMHD/schedule.ccl\n"
]
],
[
[
"<a id='convert_to_hydrobase__make'></a>\n\n# Step 6: `make.code.defn` \\[Back to [top](#toc)\\]\n$$\\label{convert_to_hydrobase__make}$$",
"_____no_output_____"
]
],
[
[
"%%writefile $outfile_path__ID_converter_ILGRMHD__make\n# Main make.code.defn file for thorn ID_converter_ILGRMHD\n\n# Source files in this directory\nSRCS = set_IllinoisGRMHD_metric_GRMHD_variables_based_on_HydroBase_and_ADMBase_variables.C\n\n",
"Overwriting ../ID_converter_ILGRMHD/src/make.code.defn\n"
]
],
[
[
"<a id='code_validation'></a>\n\n# Step n-1: Code validation \\[Back to [top](#toc)\\]\n$$\\label{code_validation}$$\n\nFirst we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.",
"_____no_output_____"
]
],
[
[
"# # Verify if the code generated by this tutorial module\n# # matches the original IllinoisGRMHD source code\n\n# # First download the original IllinoisGRMHD source code\n# import urllib\n# from os import path\n\n# original_IGM_file_url = \"https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/A_i_rhs_no_gauge_terms.C\"\n# original_IGM_file_name = \"A_i_rhs_no_gauge_terms-original.C\"\n# original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)\n\n# # Then download the original IllinoisGRMHD source code\n# # We try it here in a couple of ways in an attempt to keep\n# # the code more portable\n# try:\n# original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read()\n# # Write down the file the original IllinoisGRMHD source code\n# with open(original_IGM_file_path,\"w\") as file:\n# file.write(original_IGM_file_code)\n# except:\n# try:\n# original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read()\n# # Write down the file the original IllinoisGRMHD source code\n# with open(original_IGM_file_path,\"w\") as file:\n# file.write(original_IGM_file_code)\n# except:\n# # If all else fails, hope wget does the job\n# !wget -O $original_IGM_file_path $original_IGM_file_url\n\n# # Perform validation\n# Validation__A_i_rhs_no_gauge_terms__C = !diff $original_IGM_file_path $outfile_path__A_i_rhs_no_gauge_terms__C\n\n# if Validation__A_i_rhs_no_gauge_terms__C == []:\n# # If the validation passes, we do not need to store the original IGM source code file\n# !rm $original_IGM_file_path\n# print(\"Validation test for A_i_rhs_no_gauge_terms.C: PASSED!\")\n# else:\n# # If the validation fails, we keep the original IGM source code file\n# print(\"Validation test for A_i_rhs_no_gauge_terms.C: FAILED!\")\n# # We also print out the difference between the code generated\n# # in this tutorial module and the original IGM source code\n# print(\"Diff:\")\n# for diff_line in Validation__A_i_rhs_no_gauge_terms__C:\n# print(diff_line)",
"_____no_output_____"
]
],
[
[
"<a id='latex_pdf_output'></a>\n\n# Step n: Output this module to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.pdf](Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).",
"_____no_output_____"
]
],
[
[
"latex_nrpy_style_path = os.path.join(nrpy_dir_path,\"latex_nrpy_style.tplx\")\n#!jupyter nbconvert --to latex --template $latex_nrpy_style_path Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.ipynb\n#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.tex\n#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.tex\n#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__A_i_rhs_no_gauge_terms.tex\n!rm -f Tut*.out Tut*.aux Tut*.log",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd82490540fc3ff42d4edd947d75506f96cc030 | 23,660 | ipynb | Jupyter Notebook | src/Budget_Analysis.ipynb | naseebth/Budget_Text_Analysis | cac0210b8b4b998fe798da92a9bbdd10eb1c4773 | [
"MIT"
] | null | null | null | src/Budget_Analysis.ipynb | naseebth/Budget_Text_Analysis | cac0210b8b4b998fe798da92a9bbdd10eb1c4773 | [
"MIT"
] | 13 | 2019-09-24T14:32:26.000Z | 2019-12-12T02:16:03.000Z | src/Budget_Analysis.ipynb | naseebth/Budget_Text_Analysis | cac0210b8b4b998fe798da92a9bbdd10eb1c4773 | [
"MIT"
] | 2 | 2020-01-04T07:32:56.000Z | 2020-09-16T07:20:09.000Z | 25.197018 | 168 | 0.406762 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecd82d2340f349fa02c4a6e6dc87c171c335d10e | 291,066 | ipynb | Jupyter Notebook | Gene Expression PCA.ipynb | thierrygrimm/GeneExpressionPCA | c537759d2eaf36c6d1c06e442a17ea1020e3cb96 | [
"MIT"
] | 1 | 2020-04-27T16:06:34.000Z | 2020-04-27T16:06:34.000Z | Gene Expression PCA.ipynb | thierrygrimm/GeneExpressionPCA | c537759d2eaf36c6d1c06e442a17ea1020e3cb96 | [
"MIT"
] | null | null | null | Gene Expression PCA.ipynb | thierrygrimm/GeneExpressionPCA | c537759d2eaf36c6d1c06e442a17ea1020e3cb96 | [
"MIT"
] | null | null | null | 393.864682 | 96,012 | 0.931909 | [
[
[
"# Gene Expression PCA",
"_____no_output_____"
],
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-1\"><span class=\"toc-item-num\">1  </span>Introduction</a></span></li><li><span><a href=\"#Imports\" data-toc-modified-id=\"Imports-2\"><span class=\"toc-item-num\">2  </span>Imports</a></span></li><li><span><a href=\"#Boxplots\" data-toc-modified-id=\"Boxplots-3\"><span class=\"toc-item-num\">3  </span>Boxplots</a></span><ul class=\"toc-item\"><li><span><a href=\"#Gene-Expression-Boxplots\" data-toc-modified-id=\"Gene-Expression-Boxplots-3.1\"><span class=\"toc-item-num\">3.1  </span>Gene Expression Boxplots</a></span></li></ul></li><li><span><a href=\"#PCA\" data-toc-modified-id=\"PCA-4\"><span class=\"toc-item-num\">4  </span>PCA</a></span><ul class=\"toc-item\"><li><span><a href=\"#Scree-Plot\" data-toc-modified-id=\"Scree-Plot-4.1\"><span class=\"toc-item-num\">4.1  </span>Scree Plot</a></span></li><li><span><a href=\"#Explained-Variance\" data-toc-modified-id=\"Explained-Variance-4.2\"><span class=\"toc-item-num\">4.2  </span>Explained Variance</a></span></li><li><span><a href=\"#Cumulative-Explained-Variance\" data-toc-modified-id=\"Cumulative-Explained-Variance-4.3\"><span class=\"toc-item-num\">4.3  </span>Cumulative Explained Variance</a></span></li><li><span><a href=\"#Principal-Components\" data-toc-modified-id=\"Principal-Components-4.4\"><span class=\"toc-item-num\">4.4  </span>Principal Components</a></span></li><li><span><a href=\"#PC-Boxplot\" data-toc-modified-id=\"PC-Boxplot-4.5\"><span class=\"toc-item-num\">4.5  </span>PC Boxplot</a></span></li></ul></li><li><span><a href=\"#K-means-Scatter-Plot\" data-toc-modified-id=\"K-means-Scatter-Plot-5\"><span class=\"toc-item-num\">5  </span>K-means Scatter Plot</a></span><ul class=\"toc-item\"><li><span><a href=\"#3D-Scatter-Plot-by-Condition\" data-toc-modified-id=\"3D-Scatter-Plot-by-Condition-5.1\"><span class=\"toc-item-num\">5.1  </span>3D Scatter Plot by Condition</a></span></li><li><span><a href=\"#K-Means-Clusters\" data-toc-modified-id=\"K-Means-Clusters-5.2\"><span class=\"toc-item-num\">5.2  </span>K-Means Clusters</a></span></li></ul></li></ul></div>",
"_____no_output_____"
],
[
"## Introduction\nThe signaling pathway regulating the activity of the mammalian target of rapamycin complex 1 (mTORC1) controls skeletal muscle homeostasis, which is determined by the difference in the rates of protein synthesis and degradation. In the skeletal muscle, mTORC1 activation occurs in response to a variety of signals, including growth factors, nutrients, energy state and mechanical load. To study the function of mTORC1 in the skeletal muscle, the laboratory of Dr. Markus Rüegg (Biozentrum) has developed a mouse model called TSCmKO, in which the mTORC1 inhibitor Tsc1 was selectively deleted in skeletal muscles. It was found that these mice develop precocious sarcopenia, characterized by fragmentation of the neuromuscular junction, progressive loss of muscle mass and loss of muscle force. Treatment of TSCmKO mice with rapamycin, an mTORC1 inhibitor, ameliorated the myopathy. To identify core pathways that underlie myopathy in TSCmKO mice, mRNA-seq samples from EDL muscle of TSCmKO mice and wild-type mice of the age of 3 months (young phase in both TSCmKO and wild-type mice) and 9 months (adult phase in wild-type mice and sarcopenic phase in TSCmKO mice) in combination with rapamycin treatment were generated and sequenced at the Quantitative Genomics facility of the Biozentrum.",
"_____no_output_____"
],
[
"The table called “GeneExpressionTable.tsv” contains the information about the expression of 117 genes in log2[TPM] units measured in the Musculus extensor digitorum longus of the following mice:\n\n- Condition 1: 5 replicates of 3 months old wild-type mice\n- Condition 2: 5 replicates of 3 months old wild-type mice treated with rapamycin\n- Condition 3: 5 replicates of 3 months old TSCmKO mice\n- Condition 4: 5 replicates of 3 months old TSCmKO mice treated with rapamycin\n- Condition 5: 5 replicates of 9 months old wild-type mice\n- Condition 6: 5 replicates of 9 months old wild-type mice treated with rapamycin\n- Condition 7: 5 replicates of 9 months old TSCmKO mice\n- Condition 8: 5 replicates of 9 months old TSCmKO mice treated with rapamycin",
"_____no_output_____"
],
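The replicate columns referenced by the analysis cells below follow a fixed naming pattern (`3m_rep1` … `9m_KO_RM_rep5`). As a small sketch, that mapping can be generated rather than typed out; the condition prefixes and the `_rep1`…`_rep5` suffixes are taken from the code in this notebook:

```python
# Build the condition -> replicate-column mapping used throughout this notebook.
# The eight condition prefixes and the "_rep1".."_rep5" suffix pattern are taken
# from the analysis cells below; adjust them if your table uses different names.
cond_list = ["3m", "3m_RM", "3m_KO", "3m_KO_RM",
             "9m", "9m_RM", "9m_KO", "9m_KO_RM"]
replicate_columns = {cond: ["%s_rep%d" % (cond, r) for r in range(1, 6)]
                     for cond in cond_list}
print(replicate_columns["3m_KO"])
# -> ['3m_KO_rep1', '3m_KO_rep2', '3m_KO_rep3', '3m_KO_rep4', '3m_KO_rep5']
```

With this dict in hand, a condition's five replicate columns can be selected from the DataFrame in one step instead of label slicing.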
[
"## Imports",
"_____no_output_____"
]
],
[
[
"from __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nfrom mpl_toolkits.mplot3d import axes3d, Axes3D\nimport scipy.io\nimport pandas as pd\nimport numpy as np\nimport sys\nimport math\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA\nfrom sklearn import cluster, datasets\nimport ipywidgets as widgets\nfrom IPython.display import display\nfrom collections import OrderedDict\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Boxplots",
"_____no_output_____"
],
[
"### Gene Expression Boxplots\nCreate boxplots for the expression of genes “gene_3”, “gene_45”, “gene_79” and “gene_86” in mentioned\nconditions and characterize the expression of these genes. ",
"_____no_output_____"
]
],
[
[
"#Choose Genes to plot\ngenes_to_plot = [3,45,79,86]",
"_____no_output_____"
],
[
"# Read data and set basic variables\ngene_df = pd.read_csv('GeneExpressionTable.tsv', sep='\\t')\ndata_dict = {}\ndata_mean = {}\ndata_spread = {}\ncond_list = ['3m', '3m_RM', '3m_KO', '3m_KO_RM',\n '9m', '9m_RM', '9m_KO', '9m_KO_RM']\n\nconditions = [['3m_rep1', '3m_rep5'],\n ['3m_RM_rep1', '3m_RM_rep5'],\n ['3m_KO_rep1', '3m_KO_rep5'],\n ['3m_KO_RM_rep1', '3m_KO_RM_rep5'],\n ['9m_rep1', '9m_rep5'],\n ['9m_RM_rep1', '9m_RM_rep5'],\n ['9m_KO_rep1', '9m_KO_rep5'],\n ['9m_KO_RM_rep1', '9m_KO_RM_rep5']]\n\nfor i in genes_to_plot:\n for cond in conditions:\n data_dict[\"gene_\"+str(i)+'_'+str(cond[0][:-5])\n ] = gene_df.loc[i-1][cond[0]:cond[1]]\n data_mean[\"gene_\"+str(i)+'_'+str(cond[0][:-5])\n ] = np.mean(gene_df.loc[i-1][cond[0]:cond[1]])\n data_spread[\"gene_\"+str(i)+'_'+str(cond[0][:-5])\n ] = np.std(gene_df.loc[i-1][cond[0]:cond[1]])\n\n# Initialise Figure\nfig = plt.figure(1, figsize=(10, 30))\nrows = len(genes_to_plot)\ncolumns = 1\n\n# Initialise Subplots\nfor i in range(len(genes_to_plot)):\n # 4 rows, 1 column, index\n gene_num = genes_to_plot[i] # e.g. 3 45 86\n # Subplots with e.g. 4 rows, 1 column, index\n ax = fig.add_subplot(rows, columns, i+1)\n data_to_plot = []\n\n # Fill data_to_plot array\n for cond in conditions:\n data_to_plot.append(\n # Values in data_dict['gene_'+$gene_number$ + '_' + $condition$']\n data_dict['gene_'+str(gene_num)+'_'+str(cond[0][:-5])])\n\n # Title: Gene NUMBBER\n ax.set_title('Gene ' + str(gene_num))\n\n # Display Boxplots\n bx = ax.boxplot(data_to_plot, showfliers=False, labels=cond_list)",
"_____no_output_____"
]
],
[
[
"**Question:** \n\n\n`In which condition(s) do these genes have a higher or lower expression?`\n\n\n**Answer:**\n\n\nGene 3 shows reduced expression in 9-month-old specimens compared to 3-month-olds across all conditions.\n\nGene 45 shows increased expression for both the knockout and the combined treatment.\n\nGene 79 shows lower expression for both the rapamycin and the combined treatment, but higher expression in the gene knockout.\n\nGene 86 shows a significant increase in 9-month-old KO specimens compared to the 3-month baseline.",
"_____no_output_____"
],
[
"## PCA\nQuantify principal components 1 (PC1), PC2 and PC3\nand the percentage of the variance that they explain. ",
"_____no_output_____"
],
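Before the scikit-learn cells below, it may help to see what `explained_variance_ratio_` actually computes: the squared singular values of the column-centered data matrix, normalized to sum to one. A self-contained NumPy cross-check on synthetic data (the shapes mimic the 40-sample by 117-gene table, but the values are made up and rank-5 by construction):

```python
import numpy as np

# Synthetic stand-in for the real table: 40 samples x 117 genes, rank 5 by
# construction, so only five principal components carry variance.
rng = np.random.RandomState(0)
X = rng.randn(40, 5).dot(rng.randn(5, 117))

# PCA centers each column, then the explained-variance ratios are the squared
# singular values of the centered matrix, normalized to sum to 1.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)   # singular values, descending
explained_variance_ratio = s**2 / np.sum(s**2)
```

On this rank-5 input, the first five ratios account for essentially all of the variance and the rest are numerically zero, which is exactly the pattern a scree plot is meant to reveal.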
[
"### Scree Plot",
"_____no_output_____"
]
],
[
[
"# PCA on Dataset\npca = PCA(n_components=40)\npcat = PCA(n_components=40)\npca_data = gene_df.drop(['gene_id'], axis=1)\n\npca.fit_transform(pca_data.T)\npcat.fit_transform(pca_data.T)\nplt.plot(np.cumsum(pca.explained_variance_ratio_))\nplt.xlabel('number of components')\nplt.ylabel('cumulative explained variance')\nplt.plot()\ncumsum = np.cumsum(pcat.explained_variance_ratio_)\n\n# PCA on Dataset with 3 components\npca3t = PCA(n_components=3)\npca3t.fit_transform(pca_data.T)\n\npca3 = PCA(n_components=3)\npca3.fit_transform(pca_data)\n\n\n# Cumulative Variance\ndef f(x):\n return cumsum[x-1]*100",
"_____no_output_____"
]
],
[
[
"### Explained Variance",
"_____no_output_____"
]
],
[
[
"# Display Explained Variance for PC1-PC3\npca3t.transform(pca_data.T)\nPCs = np.transpose(\n [pca3.components_[0], pca3.components_[1], pca3.components_[2]])\nk_PCs = np.transpose(\n [pca3t.components_[0], pca3t.components_[1], pca3t.components_[2]])\nprint(\"PCA by Conditions:\")\nprint(pca3.explained_variance_ratio_)\n\nprint(\"PCA by Genes:\")\nprint(pca3t.explained_variance_ratio_)",
"PCA by Conditions:\n[0.73168743 0.12404557 0.06157152]\nPCA by Genes:\n[0.46406599 0.22934225 0.11050398]\n"
]
],
[
[
"### Cumulative Explained Variance",
"_____no_output_____"
]
],
[
[
"# Display Slider with Cumulative Explained Variance\nprint('Cumulative Explained Variance:')\ninteract(f, x=(1, 40, 1), value=3);",
"Cumulative Explained Variance:\n"
]
],
[
[
"### Principal Components",
"_____no_output_____"
]
],
[
[
"# Display Principal Components\nprint(pca3t.components_)",
"[[-0.19590007 -0.20634107 -0.04732572 -0.16545443 -0.06649532 -0.05920324\n -0.06101624 -0.12033489 -0.13293112 -0.04159629 -0.04955115 -0.05956468\n -0.09875698 -0.11154952 -0.04126172 -0.10060078 -0.03848439 -0.097323\n -0.10038848 -0.04545142 -0.08250682 -0.02992593 -0.05994896 -0.08567958\n -0.0569886 0.1156746 0.12099543 0.1079085 0.09190126 0.08872589\n 0.12918427 0.10585226 0.14888075 0.13905172 0.15522587 0.14740613\n 0.09396873 0.16824016 0.11631577 0.10304167 0.147049 0.13427131\n 0.16238993 0.14912982 0.19099221 0.14394968 0.19153592 0.18115281\n 0.1509931 0.36418001 0.0802221 0.05037143 -0.018111 -0.02069415\n 0.09262291 -0.02034942 0.07537836 0.10803433 -0.03675524 0.08671116\n -0.06221867 0.00511866 -0.05985064 -0.06686155 -0.0603177 -0.05801372\n -0.08807122 -0.03967883 -0.0791727 -0.03402901 0.00631486 -0.036835\n 0.0108616 -0.04977767 -0.0260592 0.00945246 -0.09066175 -0.05316013\n 0.02105357 -0.05213832 0.01081778 0.01303672 -0.01488485 0.00178472\n -0.04012384 0.03406294 0.01930278 0.04910196 -0.03377144 -0.07559621\n 0.05844465 -0.00888646 -0.01025077 -0.0093496 0.04114001 -0.04055475\n -0.00551229 -0.01228031 0.07692864 0.03489227 0.05078327 0.02895192\n 0.0139182 0.04629045 0.0164095 0.05687109 0.05230136 -0.01631423\n -0.01198422 0.07193912 0.05700806 0.02537128 -0.01881232 -0.0311822\n 0.01125675 0.07104296 0.03414023]\n [-0.04400284 -0.10113626 0.45237159 -0.02970256 0.23514249 0.24844461\n 0.17795082 -0.01013868 -0.02319391 0.2082033 0.16117724 0.11537719\n 0.0117319 -0.02691894 0.17851318 -0.00447438 0.18720763 -0.00228552\n -0.03099992 0.08092508 0.06617072 0.16064249 0.07792404 -0.01711889\n 0.07581009 0.02234762 -0.02233282 -0.02365294 -0.08852003 -0.06322094\n -0.00785445 -0.03070551 0.0418539 0.00296733 0.08214558 0.03077836\n -0.07916077 0.06099721 -0.03637593 -0.08668071 -0.01827179 -0.03590297\n 0.04248942 -0.07456104 0.05814416 -0.09447098 0.06077495 0.00288957\n -0.09202222 0.08626741 0.11801761 0.11789162 0.2045728 0.20028188\n 0.08884778 0.17038081 0.0974657 0.09088882 0.15747197 0.08701798\n -0.03895194 -0.08271376 -0.04876722 -0.04559561 -0.05073265 -0.05352415\n -0.03104485 -0.06966917 -0.04293047 -0.07677992 -0.10498623 -0.08117703\n -0.11103841 -0.06894854 -0.0914804 -0.12844321 -0.07120832 -0.10790089\n 0.01509746 0.01973434 -0.03474275 0.01913748 0.05249494 0.01191138\n 0.03260636 -0.01238517 0.0315937 0.02922564 0.04513924 -0.01672171\n 0.06273461 -0.02382527 0.02905949 0.10614618 -0.00906604 0.03964542\n 0.0080806 0.04184432 0.01210162 0.00814435 -0.00573115 -0.02709434\n -0.04056459 0.00695036 0.00351278 -0.00670777 -0.01207415 0.0250286\n -0.03291077 -0.02028966 0.02084 0.00703782 0.05452487 -0.04806908\n -0.04175211 -0.03173247 -0.01264483]\n [ 0.00089858 -0.02850698 0.06250188 0.10930264 0.01055706 0.02008445\n 0.02870236 -0.01758037 -0.07332227 0.04527799 0.0276605 0.03023829\n -0.02216581 -0.01932461 0.00843921 -0.01832072 -0.00855162 -0.02351529\n 0.02879505 0.12423796 -0.08598041 0.05518273 0.04042827 0.0665657\n 0.0512354 -0.06058778 0.09819624 -0.02061342 -0.00048569 -0.09340098\n 0.0547199 -0.09488829 0.02374006 0.02364251 -0.04187321 0.02118376\n -0.10346388 0.06789218 -0.08071775 -0.11141893 0.00300259 -0.05291174\n -0.03915355 0.12205511 0.02367127 0.0587548 -0.04621505 -0.04296238\n -0.10239025 0.10354535 -0.02330131 -0.05610156 0.00768427 0.01047008\n -0.1253117 0.00670438 -0.13748103 -0.03559143 -0.00682464 -0.03788526\n 0.02664166 0.14094751 -0.01333152 0.01775195 -0.01399035 -0.14415254\n 0.03812004 0.00973206 -0.03808873 0.04599292 -0.00885982 0.06579233\n -0.01240313 0.05930856 -0.04556628 -0.00131565 -0.04812412 -0.10560149\n -0.22815015 -0.16258549 -0.16764296 -0.16556871 -0.15317233 -0.13221074\n -0.12746484 -0.12983006 -0.13633142 -0.16523677 -0.12103609 -0.07743421\n -0.15256886 -0.09529032 -0.11247833 -0.04821524 -0.12118808 -0.07823486\n -0.0691413 -0.07932417 -0.15226513 0.11208977 0.11188565 0.10707777\n 0.11358707 0.10256444 0.12635685 0.10000006 0.11608125 0.13068591\n 0.16134867 0.11450696 0.12317431 0.13793403 0.14066562 0.176243\n 0.15530583 0.14611148 0.19912773]]\n"
]
],
[
[
"### PC Boxplot\nFor each PC create a boxplot of coordinates split by\nconditions. ",
"_____no_output_____"
]
],
[
[
"# Initialise Plot\nfig = plt.figure(1, figsize=(10, 22))\nrows = len(pca3.components_)\ncolumns = 1\nfor i in range(len(pca3.components_)):\n # Subplot with 3 rows, 1 column, index\n ax = fig.add_subplot(rows, columns, i+1)\n pca_data_to_plot = []\n\n # Fill Data array\n for cond in range(len(conditions)):\n pca_data_to_plot.append(\n pca3.components_[i][cond*5:cond*5+5])\n\n # Set dynamic title\n ax.set_title('Principal Component ' + str(i + 1))\n\n # Display Boxplot\n bx = ax.boxplot(pca_data_to_plot, showfliers=False, labels=cond_list)",
"_____no_output_____"
]
],
[
[
"**Question:** \n\n\n`Which differences in gene expression between conditions do principal components 1, 2 and 3\nreflect?`\n\n\n**Answer:**\n\nPrincipal Components 1 and 2 primarily explain the effect size of the gene knockout and of the combined treatment.\n\nPrincipal Component 2 also reflects the attenuated increase under the combined treatment.\n\nPrincipal Component 3 accounts for most of the age difference between specimens.",
"_____no_output_____"
],
[
"## K-means Scatter Plot",
"_____no_output_____"
],
[
"### 3D Scatter Plot by Condition\nPlot PC1 vs PC2 vs PC3 on the 3-D scatter plot. Color replicates by conditions and label data points with the corresponding replicate names. ",
"_____no_output_____"
]
],
[
[
"# Prepare Plot\ncolor_map = ('#e74c3c', '#e67e22', '#f1c40f', '#95a5a6',\n '#8e44ad', '#2980b9', '#16a085', '#27ae60')\nmarker_map = ('o', 's', 'D', '^', 'o', 's', 'D', '^')\n\ncolors = []\nmarkers = []\nconds = []\n\n# Visuals\nfor col, mark, con in zip(color_map, marker_map, cond_list):\n index = 0\n while index < 5:\n colors.append(col)\n markers.append(mark)\n conds.append(con)\n index += 1\n\n# Initialise Plot\nfig = plt.figure(1, figsize=(10, 6))\nax = fig.add_subplot(111, projection='3d')\n\n# Scatter\nfor data, color, group, marker in zip(PCs, colors, conds, markers):\n x, y, z = data\n ax.scatter(x, y, z, c=color, edgecolors='none',\n marker=marker, s=120, label=group)\n\nax.set_xlabel('PC1')\nax.set_ylabel('PC2')\nax.set_zlabel('PC3')\nplt.title('PCA by Condition')\n\n# View Settings\nhandles, labels = plt.gca().get_legend_handles_labels()\nby_label = OrderedDict(zip(labels, handles))\nplt.legend(by_label.values(), by_label.keys())\nax.view_init(30, 140)\nplt.show()\na = 0",
"_____no_output_____"
]
],
[
[
"### K-Means Clusters\nCluster replicates using k-means clustering (determine the number of clusters\nfrom PCA results). Annotate data points on PCA plots with the same color if replicates belong to the same\ncluster. ",
"_____no_output_____"
]
],
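The `cluster.KMeans` call in the next cell is, at its core, Lloyd's algorithm. A minimal NumPy-only sketch on synthetic, well-separated 3-D points (purely illustrative, with a simplistic deterministic initialization; not a replacement for scikit-learn):

```python
import numpy as np

def kmeans(points, k, n_iter=20):
    """Minimal Lloyd's algorithm: alternate nearest-center assignment
    and centroid updates. Illustrative only, not scikit-learn."""
    # simplistic deterministic initialisation: spread the centers across the data
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        # distance of every point to every center, then nearest-center labels
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of the points assigned to it
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# two well-separated blobs in 3-D, standing in for PC coordinates
rng = np.random.RandomState(1)
pts = np.vstack([rng.randn(20, 3), rng.randn(20, 3) + 10.0])
labels, centers = kmeans(pts, k=2)
```

On data this cleanly separated, the sketch recovers one cluster per blob; scikit-learn adds smarter initialization (`k-means++`), multiple restarts and convergence checks on top of the same idea.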
[
[
"# Fit Clusters\nk_means = cluster.KMeans(n_clusters=6)\nk_means.fit(k_PCs)\nprint(k_means.labels_)\n \n# Prepare Plot\nk_colors_all = ('#e74c3c', '#e67e22', '#f1c40f', '#27ae60', '#2980b9', '#95a5a6')\nk_markers_all = ('o', 's', 'D', '^', 'o', 's', 'D', '^')\nk_conds_all = ('Cluster 1', 'Cluster 2', 'Cluster 3', 'Cluster 4', 'Cluster 5', 'Cluster 6')\nk_colors, k_markers, k_conds = [],[],[]\n\nfor i in k_means.labels_:\n k_colors.append(k_colors_all[i])\n k_markers.append(k_markers_all[i])\n k_conds.append(k_conds_all[i])\n",
"[0 0 3 0 3 3 3 0 0 3 3 3 0 0 3 0 3 0 0 1 5 3 3 0 3 2 1 4 4 4 2 4 2 2 2 2 4\n 2 4 4 2 4 2 1 2 4 2 2 4 2 2 5 3 3 5 3 5 2 3 2 0 1 0 0 0 5 0 0 0 0 0 0 0 0\n 0 0 0 0 5 5 5 5 5 5 5 5 5 5 5 0 5 5 5 5 5 5 5 5 5 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1]\n"
],
[
"# Initialise\nfig = plt.figure(1, figsize=(10, 6))\nax = fig.add_subplot(111, projection='3d')\n\n# Scatter\nfor data, color, group, marker in zip(k_PCs, k_colors, k_conds, k_markers):\n x, y, z = data\n ax.scatter(x, y, z, c=color, edgecolors='none',\n marker=marker, s=100, label=group)\n\nax.set_xlabel('PC1')\nax.set_ylabel('PC2')\nax.set_zlabel('PC3')\nplt.title('PCA by Replicates and Raw Data')\n\n# View Settings\nhandles, labels = plt.gca().get_legend_handles_labels()\nby_label = OrderedDict(zip(labels, handles))\nplt.legend(by_label.values(), by_label.keys())\nax.view_init(30, 110)\nplt.show()\nb = 0",
"_____no_output_____"
]
],
[
[
"**Question:** \n\n\n`Which replicates have similar gene expression?`\n\n\n**Answer:**\n\nReplicates assigned to the same k-means cluster share similar gene expression profiles; the cluster memberships are printed below.",
"_____no_output_____"
]
],
[
[
"for i in range(6):\n print('Cluster '+str(i))\n print( [c for c, x in enumerate(k_means.labels_) if x == i] )",
"Cluster 0\n[0, 1, 3, 7, 8, 12, 13, 15, 17, 18, 23, 60, 62, 63, 64, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 89]\nCluster 1\n[19, 26, 43, 61, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116]\nCluster 2\n[25, 30, 32, 33, 34, 35, 37, 40, 42, 44, 46, 47, 49, 50, 57, 59]\nCluster 3\n[2, 4, 5, 6, 9, 10, 11, 14, 16, 21, 22, 24, 52, 53, 55, 58]\nCluster 4\n[27, 28, 29, 31, 36, 38, 39, 41, 45, 48]\nCluster 5\n[20, 51, 54, 56, 65, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 97, 98]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd83e463505ff34d0514d110ac35b206cf702f1 | 270,554 | ipynb | Jupyter Notebook | binder/tutorial.ipynb | JMSchoeffmann/uproot | 7906a1130555aa979da23dd9029b5c98cdace757 | [
"BSD-3-Clause"
] | null | null | null | binder/tutorial.ipynb | JMSchoeffmann/uproot | 7906a1130555aa979da23dd9029b5c98cdace757 | [
"BSD-3-Clause"
] | null | null | null | binder/tutorial.ipynb | JMSchoeffmann/uproot | 7906a1130555aa979da23dd9029b5c98cdace757 | [
"BSD-3-Clause"
] | null | null | null | 36.233293 | 9,416 | 0.510478 | [
[
[
"# Introduction\n\nThis tutorial is designed to help you start using uproot. Unlike the [reference documentation](https://uproot.readthedocs.io/en/latest/), which defines every parameter of every function, this tutorial provides introductory examples to help you learn how to use them.\n\nThe original tutorial [has been archived](https://github.com/scikit-hep/uproot/blob/master/docs/old-tutorial.rst)—this version was written in June 2019 in response to feedback from a series of tutorials I presented early this year and common questions in the [GitHub issues](https://github.com/scikit-hep/uproot/issues). The new tutorial is [executable on Binder](https://mybinder.org/v2/gh/scikit-hep/uproot/master?urlpath=lab/tree/binder%2Ftutorial.ipynb) and may be read in any order, though it has to be executed from top to bottom because some variables are reused.",
"_____no_output_____"
],
[
"# What is uproot?\n\nUproot is a Python package; it is pip and conda-installable, and it only depends on other Python packages. Although it is similar in function to [root_numpy](https://pypi.org/project/root-numpy/) and [root_pandas](https://pypi.org/project/root_pandas/), it does not compile into ROOT and therefore avoids issues in which the version used in compilation differs from the version encountered at runtime.\n\nIn short, you should never see a segmentation fault.\n\n<center><img src=\"https://raw.githubusercontent.com/scikit-hep/uproot/master/docs/abstraction-layers.png\" width=\"75%\"></center>",
"_____no_output_____"
],
[
"Uproot is strictly concerned with file I/O only—all other functionality is handled by other libraries:\n\n * [uproot-methods](https://github.com/scikit-hep/uproot-methods): physics methods for types read from ROOT files, such as histograms and Lorentz vectors. It is intended to be largely user-contributed (and is).\n * [awkward-array](https://github.com/scikit-hep/awkward-array): array manipulation beyond [Numpy](https://docs.scipy.org/doc/numpy/reference/). Several are encountered in this tutorial, particularly lazy arrays and jagged arrays.",
"_____no_output_____"
],
[
"In the past year, uproot has become one of the most widely used Python packages made for particle physics, with users in all four LHC experiments, theory, neutrino experiments, XENON-nT (dark matter direct detection), MAGIC (gamma ray astronomy), and IceCube (neutrino astronomy).\n\n<center><img src=\"https://raw.githubusercontent.com/scikit-hep/uproot/master/docs/all_file_project.png\" width=\"75%\"></center>",
"_____no_output_____"
],
[
"# Exploring a file\n\n[uproot.open](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-open) is the entry point for reading a single file.\n\nIt takes a local filename path or a remote `http://` or `root://` URL. (HTTP requires the Python [requests](https://pypi.org/project/requests/) library and XRootD requires [pyxrootd](http://xrootd.org/), both of which have to be explicitly pip-installed if you installed uproot with pip, but are automatically installed if you installed uproot with conda.)",
"_____no_output_____"
]
],
[
[
"import uproot\n\nfile = uproot.open(\"https://scikit-hep.org/uproot/examples/nesteddirs.root\")\nfile",
"_____no_output_____"
]
],
[
[
"[uproot.open](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-open) returns a [ROOTDirectory](https://uproot.readthedocs.io/en/latest/root-io.html#uproot-rootio-rootdirectory), which behaves like a Python dict; it has `keys()`, `values()`, and key-value access with square brackets.",
"_____no_output_____"
]
],
[
[
"file.keys()",
"_____no_output_____"
],
[
"file[\"one\"]",
"_____no_output_____"
]
],
[
[
"Subdirectories also have type [ROOTDirectory](https://uproot.readthedocs.io/en/latest/root-io.html#uproot-rootio-rootdirectory), so they behave like Python dicts, too.",
"_____no_output_____"
]
],
[
[
"file[\"one\"].keys()",
"_____no_output_____"
],
[
"file[\"one\"].values()",
"_____no_output_____"
]
],
[
[
"**What's the `b` before each object name?** Python 3 distinguishes between bytestrings and encoded strings. ROOT object names have no encoding, such as Latin-1 or Unicode, so uproot presents them as raw bytestrings. However, if you enter a Python string (no `b`) and it matches an object name (interpreted as plain ASCII), it will count as a match, as `\"one\"` does above.",
"_____no_output_____"
],
[
"**What's the `;1` after each object name?** ROOT objects are versioned with a \"cycle number.\" If multiple objects are written to the ROOT file with the same name, they will have different cycle numbers, with the largest value being last. If you don't specify a cycle number, you'll get the latest one.",
"_____no_output_____"
],
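The cycle-number lookup rule described above can be pictured with a toy model (this illustrates the rule only and is not uproot's actual implementation; the object names and values are made up):

```python
# Toy model of ROOT "cycle numbers": several objects may share a name, and an
# unqualified lookup returns the one with the highest cycle.
objects = {
    (b"tree", 1): "first version written",
    (b"tree", 2): "latest version written",
}

def get(name, cycle=None):
    if cycle is not None:
        return objects[(name, cycle)]
    # no cycle requested: return the object with the highest cycle number
    latest = max(c for (n, c) in objects if n == name)
    return objects[(name, latest)]
```

So `get(b"tree")` picks cycle 2, while `get(b"tree", 1)` retrieves the earlier write, mirroring the `"tree"` vs `"tree;1"` behaviour described in the text.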
[
"This file is deeply nested, so while you could find the TTree with",
"_____no_output_____"
]
],
[
[
"file[\"one\"][\"two\"][\"tree\"]",
"_____no_output_____"
]
],
[
[
"you can also find it using a directory path, with slashes.",
"_____no_output_____"
]
],
[
[
"file[\"one/two/tree\"]",
"_____no_output_____"
]
],
[
[
"Here are a few more tricks for finding your way around a file:\n\n * the `keys()`, `values()`, and `items()` methods have `allkeys()`, `allvalues()`, `allitems()` variants that recursively search through all subdirectories;\n * all of these functions can be filtered by name or class: see [ROOTDirectory.keys](https://uproot.readthedocs.io/en/latest/root-io.html#uproot.rootio.ROOTDirectory.keys).\n\nHere's how you would search the subdirectories to find all TTrees:",
"_____no_output_____"
]
],
[
[
"file.allkeys(filterclass=lambda cls: issubclass(cls, uproot.tree.TTreeMethods))",
"_____no_output_____"
]
],
[
[
"Or get a Python dict of them:",
"_____no_output_____"
]
],
[
[
"all_ttrees = dict(file.allitems(filterclass=lambda cls: issubclass(cls, uproot.tree.TTreeMethods)))\nall_ttrees",
"_____no_output_____"
]
],
[
[
"Be careful: Python 3 is not as forgiving about matching key names. `all_ttrees` is a plain Python dict, so the key must be a bytestring and must include the cycle number.",
"_____no_output_____"
]
],
[
[
"all_ttrees[b\"one/two/tree;1\"]",
"_____no_output_____"
]
],
[
[
"## Compressed objects in ROOT files\n\nObjects in ROOT files can be uncompressed, compressed with ZLIB, compressed with LZMA, or compressed with LZ4. Uproot picks the right decompressor and gives you the objects transparently: you don't have to specify anything. However, if an object is compressed with LZ4 and you don't have the [lz4](https://pypi.org/project/lz4/) library installed, you'll get an error with installation instructions in the message. (It is automatically installed if you installed uproot with conda.) ZLIB is part of the Python Standard Library, and LZMA is part of the Python 3 Standard Library, so you won't get error messages about these except for LZMA in Python 2 (for which there is [backports.lzma](https://pypi.org/project/backports.lzma/), automatically installed if you installed uproot with conda).\n\nThe [ROOTDirectory](https://uproot.readthedocs.io/en/latest/root-io.html#uproot-rootio-rootdirectory) class has a `compression` property that tells you the compression algorithm and level associated with this file,",
"_____no_output_____"
]
],
[
[
"file.compression",
"_____no_output_____"
]
],
[
[
"but any object can be compressed with any algorithm at any level—this is only the default compression for the file. Some ROOT files are written with each TTree branch compressed using a different algorithm and level.",
"_____no_output_____"
],
[
"## Exploring a TTree\n\nTTrees are special objects in ROOT files: they contain most of the physics data. Uproot presents TTrees as subclasses of [TTreeMethods](https://uproot.readthedocs.io/en/latest/ttree-handling.html#uproot-tree-ttreemethods).\n\n(**Why subclass?** Different ROOT files can have different versions of a class, so uproot generates Python classes to fit the data, as needed. All TTrees inherit from [TTreeMethods](https://uproot.readthedocs.io/en/latest/ttree-handling.html#uproot-tree-ttreemethods) so that they get the same data-reading methods.)",
"_____no_output_____"
]
],
[
[
"events = uproot.open(\"https://scikit-hep.org/uproot/examples/Zmumu.root\")[\"events\"]\nevents",
"_____no_output_____"
]
],
[
[
"Although [TTreeMethods](https://uproot.readthedocs.io/en/latest/ttree-handling.html#uproot-tree-ttreemethods) objects behave like Python dicts of [TBranchMethods](https://uproot.readthedocs.io/en/latest/ttree-handling.html#uproot-tree-tbranchmethods) objects, the easiest way to browse a TTree is by calling its `show()` method, which prints the branches and their interpretations as arrays.",
"_____no_output_____"
]
],
[
[
"events.keys()",
"_____no_output_____"
],
[
"events.show()",
"Type (no streamer) asstring()\nRun (no streamer) asdtype('>i4')\nEvent (no streamer) asdtype('>i4')\nE1 (no streamer) asdtype('>f8')\npx1 (no streamer) asdtype('>f8')\npy1 (no streamer) asdtype('>f8')\npz1 (no streamer) asdtype('>f8')\npt1 (no streamer) asdtype('>f8')\neta1 (no streamer) asdtype('>f8')\nphi1 (no streamer) asdtype('>f8')\nQ1 (no streamer) asdtype('>i4')\nE2 (no streamer) asdtype('>f8')\npx2 (no streamer) asdtype('>f8')\npy2 (no streamer) asdtype('>f8')\npz2 (no streamer) asdtype('>f8')\npt2 (no streamer) asdtype('>f8')\neta2 (no streamer) asdtype('>f8')\nphi2 (no streamer) asdtype('>f8')\nQ2 (no streamer) asdtype('>i4')\nM (no streamer) asdtype('>f8')\n"
]
],
[
[
"Basic information about the TTree, such as its number of entries, are available as properties.",
"_____no_output_____"
]
],
[
[
"events.name, events.title, events.numentries",
"_____no_output_____"
]
],
[
[
"## Some terminology\n\nROOT files contain objects internally referred to via `TKeys` (dict-like lookup in uproot). `TTree` organizes data in `TBranches`, and uproot interprets one `TBranch` as one array, either a [Numpy array](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html) or an [awkward array](https://github.com/scikit-hep/awkward-array). `TBranch` data are stored in chunks called `TBaskets`, though uproot hides this level of granularity unless you dig into the details.\n\n<br>\n\n<center><img src=\"https://raw.githubusercontent.com/scikit-hep/uproot/master/docs/terminology.png\" width=\"75%\"></center>",
"_____no_output_____"
],
[
"# Reading arrays from a TTree\n\nThe bulk data in a TTree are not read until requested. There are many ways to do that:\n\n * select a TBranch and call [TBranchMethods.array](https://uproot.readthedocs.io/en/latest/ttree-handling.html#id11);\n * call [TTreeMethods.array](https://uproot.readthedocs.io/en/latest/ttree-handling.html#array) directly from the TTree object;\n * call [TTreeMethods.arrays](https://uproot.readthedocs.io/en/latest/ttree-handling.html#arrays) to get several arrays at a time;\n * call [TBranch.lazyarray](https://uproot.readthedocs.io/en/latest/ttree-handling.html#id13), [TTreeMethods.lazyarray](https://uproot.readthedocs.io/en/latest/ttree-handling.html#lazyarray), [TTreeMethods.lazyarrays](https://uproot.readthedocs.io/en/latest/ttree-handling.html#lazyarrays), or [uproot.lazyarrays](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-lazyarray-and-lazyarrays) to get array-like objects that read on demand;\n * call [TTreeMethods.iterate](https://uproot.readthedocs.io/en/latest/ttree-handling.html#iterate) or [uproot.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-iterate) to explicitly iterate over chunks of data (to avoid reading more than would fit into memory);\n * call [TTreeMethods.pandas](https://uproot.readthedocs.io/en/latest/ttree-handling.html#id7) or [uproot.pandas.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-pandas-iterate) to get Pandas DataFrames ([Pandas](https://pandas.pydata.org/) must be installed).\n\nLet's start with the simplest.",
"_____no_output_____"
]
],
[
[
"a = events.array(\"E1\")\na",
"_____no_output_____"
]
],
[
[
"Since `array` is singular, you specify one branch name and get one array back. This is a [Numpy array](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html) of 8-byte floating point numbers, the [Numpy dtype](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html) specified by the `\"E1\"` branch's interpretation.",
"_____no_output_____"
]
],
[
[
"events[\"E1\"].interpretation",
"_____no_output_____"
]
],
[
[
"We can use this array in Numpy calculations; see the [Numpy documentation](https://docs.scipy.org/doc/numpy/) for details.",
"_____no_output_____"
]
],
[
[
"import numpy\n\nnumpy.log(a)",
"_____no_output_____"
]
],
[
[
"Numpy arrays are also the standard container for entering data into machine learning frameworks; see this [Keras introduction](https://keras.io/), [PyTorch introduction](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html), [TensorFlow introduction](https://www.tensorflow.org/guide/low_level_intro), or [Scikit-Learn introduction](https://scikit-learn.org/stable/tutorial/basic/tutorial.html) to see how to put Numpy arrays to work in machine learning.",
"_____no_output_____"
],
[
"The [TBranchMethods.array](https://uproot.readthedocs.io/en/latest/ttree-handling.html#id11) method is the same as [TTreeMethods.array](https://uproot.readthedocs.io/en/latest/ttree-handling.html#array) except that you don't have to specify the TBranch name (naturally). Sometimes one is more convenient, sometimes the other.",
"_____no_output_____"
]
],
[
[
"events.array(\"E1\"), events[\"E1\"].array()",
"_____no_output_____"
]
],
[
[
"The plural `arrays` method is different. Whereas singular `array` could only return one array, plural `arrays` takes a list of names (possibly including wildcards) and returns them all in a Python dict.",
"_____no_output_____"
]
],
[
[
"events.arrays([\"px1\", \"py1\", \"pz1\"])",
"_____no_output_____"
],
[
"events.arrays([\"p[xyz]*\"])",
"_____no_output_____"
]
],
[
[
"As with all ROOT object names, the TBranch names are bytestrings (prepended by `b`). If you know the encoding or it doesn't matter (`\"ascii\"` and `\"utf-8\"` are generic), pass a `namedecode` to get keys that are strings.",
"_____no_output_____"
]
],
[
[
"events.arrays([\"p[xyz]*\"], namedecode=\"utf-8\")",
"_____no_output_____"
]
],
[
[
"These array-reading functions have many parameters, but most of them have the same names and meanings across all the functions. Rather than discuss all of them here, they'll be presented in context in sections on special features below.",
"_____no_output_____"
],
[
"# Caching data\n\nEvery time you ask for arrays, uproot goes to the file and re-reads them. For especially large arrays, this can take a long time.\n\nFor quicker access, uproot's array-reading functions have a **cache** parameter, which is an entry point for you to manage your own cache. The **cache** only needs to behave like a dict (many third-party Python caches do).",
"_____no_output_____"
]
],
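[
[
"Any dict-like object works here because uproot only needs mapping-style item access (duck typing). As a minimal sketch (the hit/miss counting is our own illustration, not an uproot feature, and the exact lookup pattern uproot uses is an implementation detail):\n\n```python\n# Toy cache: any mapping-like object can serve as a cache.\nclass CountingCache(dict):\n    def __init__(self):\n        super().__init__()\n        self.hits = 0\n        self.misses = 0\n    def __getitem__(self, key):\n        try:\n            value = super().__getitem__(key)\n        except KeyError:\n            self.misses += 1   # the caller fills the cache after a miss\n            raise\n        self.hits += 1\n        return value\n\ncache = CountingCache()\ncache['x'] = 42\ncache['x']\n(cache.hits, cache.misses)\n```\n\nPassing such an object as **cache** lets you observe how often arrays are re-used rather than re-read.",
"_____no_output_____"
]
],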
[
[
"mycache = {}\n\n# first time: reads from file\nevents.arrays([\"p[xyz]*\"], cache=mycache);\n\n# any other time: reads from cache\nevents.arrays([\"p[xyz]*\"], cache=mycache);",
"_____no_output_____"
]
],
[
[
"In this example, the cache is a simple Python dict. Uproot has filled it with unique ID → array pairs, and it uses the unique ID to identify an array that it has previously read. You can see that it's full by looking at those keys:",
"_____no_output_____"
]
],
[
[
"mycache",
"_____no_output_____"
]
],
[
[
"though they're not very human-readable.\n\nIf you're running out of memory, you could manually clear your cache by simply clearing the dict.",
"_____no_output_____"
]
],
[
[
"mycache.clear()\nmycache",
"_____no_output_____"
]
],
[
[
"Now the same line of code reads from the file again.",
"_____no_output_____"
]
],
[
[
"# not in cache: reads from file\nevents.arrays([\"p[xyz]*\"], cache=mycache);",
"_____no_output_____"
]
],
[
[
"## Automatically managed caches\n\nThis manual process of clearing the cache when you run out of memory is not very robust. What you want instead is a dict-like object that drops elements on its own when memory is scarce.\n\nUproot has an [ArrayCache](https://uproot.readthedocs.io/en/latest/caches.html#uproot-cache-arraycache) class for this purpose, though it's a thin wrapper around the third-party [cachetools](https://pypi.org/project/cachetools/) library. Whereas [cachetools](https://pypi.org/project/cachetools/) drops old data from cache when a maximum number of items is reached, [ArrayCache](https://uproot.readthedocs.io/en/latest/caches.html#uproot-cache-arraycache) drops old data when the data usage reaches a limit, specified in bytes.",
"_____no_output_____"
]
],
[
[
"mycache = uproot.ArrayCache(\"100 kB\")\nevents.arrays(\"*\", cache=mycache);\n\nlen(mycache), len(events.keys())",
"_____no_output_____"
]
],
[
[
"With a limit of 100 kB, only 6 of the 20 arrays fit into cache, the rest have been evicted.\n\nAll data sizes in uproot are specified as an integer in bytes (integers) or a string with the appropriate unit (interpreted as powers of 1024, not 1000).",
"_____no_output_____"
],
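[
"To make the powers-of-1024 convention concrete, here is a minimal sketch of the arithmetic. The `parse_memsize` helper is hypothetical (uproot has its own internal parser); it only illustrates the interpretation:\n\n```python\n# Hypothetical helper, not part of uproot's API: interprets memory size\n# strings using powers of 1024.\nUNITS = {'B': 1, 'KB': 1024, 'MB': 1024**2, 'GB': 1024**3, 'TB': 1024**4}\n\ndef parse_memsize(s):\n    s = s.replace(' ', '').upper()\n    for unit in sorted(UNITS, key=len, reverse=True):  # match 'KB' before 'B'\n        if s.endswith(unit):\n            return int(float(s[:-len(unit)]) * UNITS[unit])\n    return int(s)  # a plain number is already in bytes\n\nparse_memsize('100 kB'), parse_memsize('1 MB')\n```\n\nSo the \"100 kB\" limit above means 102400 bytes, not 100000.",
"_____no_output_____"
],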
[
"The fact that any dict-like object may be a cache opens many possibilities. If you're struggling with a script that takes a long time to load data, then crashes, you may want to try a process-independent cache like [memcached](https://realpython.com/python-memcache-efficient-caching/). If you have a small, fast disk, you may want to consider [diskcache](http://www.grantjenks.com/docs/diskcache/tutorial.html) to temporarily hold arrays from ROOT files on the big, slow disk.",
"_____no_output_____"
],
[
"## Caching at all levels of abstraction\n\nAll of the array-reading functions have a **cache** parameter to accept a cache object. This is the high-level cache, which caches data after it has been fully interpreted. These functions also have a **basketcache** parameter to cache data after reading and decompressing baskets, but before interpretation as high-level arrays. The main purpose of this is to avoid reading TBaskets twice when an iteration step falls in the middle of a basket (see below). There is also a **keycache** for caching ROOT's TKey objects, which use negligible memory but would be a bottleneck to re-read when TBaskets are provided by a **basketcache**.\n\nFor more on these high and mid-level caching parameters, see [reference documentation](https://uproot.readthedocs.io/en/latest/caches.html).\n\nAt the lowest level of abstraction, raw bytes are cached by the HTTP and XRootD remote file readers. You can control the memory remote file memory use with `uproot.HTTPSource.defaults[\"limitbytes\"]` and `uproot.XRootDSource.defaults[\"limitbytes\"]`, either by globally setting these parameters before opening a file, or by passing them to [uproot.open](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-open) through the **limitbytes** parameter.",
"_____no_output_____"
]
],
[
[
"# default remote file caches in MB\nuproot.HTTPSource.defaults[\"limitbytes\"] / 1024**2, uproot.XRootDSource.defaults[\"limitbytes\"] / 1024**2",
"_____no_output_____"
]
],
[
[
"If you want to limit this cache to less than the default **chunkbytes** of 1 MB, be sure to make the **chunkbytes** smaller, so that it's able to load at least one chunk!",
"_____no_output_____"
]
],
[
[
"uproot.open(\"https://scikit-hep.org/uproot/examples/Zmumu.root\", limitbytes=\"100 kB\", chunkbytes=\"10 kB\")",
"_____no_output_____"
]
],
[
[
"By default (unless **localsource** is overridden), local files are memory-mapped, so the operating system manages its byte-level cache.",
"_____no_output_____"
],
[
"# Lazy arrays\n\nIf you call [TBranchMethods.array](https://uproot.readthedocs.io/en/latest/ttree-handling.html#id11), [TTreeMethods.array](https://uproot.readthedocs.io/en/latest/ttree-handling.html#array), or [TTreeMethods.arrays](https://uproot.readthedocs.io/en/latest/ttree-handling.html#arrays), uproot reads the file or cache immediately and returns an in-memory array. For exploratory work or to control memory usage, you might want to let the data be read on demand.\n\nThe [TBranch.lazyarray](https://uproot.readthedocs.io/en/latest/ttree-handling.html#id13), [TTreeMethods.lazyarray](https://uproot.readthedocs.io/en/latest/ttree-handling.html#lazyarray), [TTreeMethods.lazyarrays](https://uproot.readthedocs.io/en/latest/ttree-handling.html#lazyarrays), and [uproot.lazyarrays](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-lazyarray-and-lazyarrays) functions take most of the same parameters but return lazy array objects, rather than Numpy arrays.",
"_____no_output_____"
]
],
[
[
"data = events.lazyarrays(\"*\")\ndata",
"_____no_output_____"
]
],
[
[
"This `ChunkedArray` represents all the data in the file in chunks specified by ROOT's internal baskets (specifically, the places where the baskets align, called \"clusters\"). Each chunk contains a `VirtualArray`, which is read when any element from it is accessed.",
"_____no_output_____"
]
],
[
[
"data = events.lazyarrays(entrysteps=500) # chunks of 500 events each\ndataE1 = data[\"E1\"]\ndataE1",
"_____no_output_____"
]
],
[
[
"Requesting `\"E1\"` through all the chunks and printing it (above) has caused the first and last chunks of the array to be read, because that's all that got written to the screen. (See the `...`?)",
"_____no_output_____"
]
],
[
[
"[chunk.ismaterialized for chunk in dataE1.chunks]",
"_____no_output_____"
]
],
[
[
"These arrays can be used with [Numpy's universal functions](https://docs.scipy.org/doc/numpy/reference/ufuncs.html) (ufuncs), which are the mathematical functions that perform elementwise mathematics.",
"_____no_output_____"
]
],
[
[
"numpy.log(dataE1)",
"_____no_output_____"
]
],
[
[
"Now all of the chunks have been read, because the values were needed to compute `log(E1)` for all `E1`.",
"_____no_output_____"
]
],
[
[
"[chunk.ismaterialized for chunk in dataE1.chunks]",
"_____no_output_____"
]
],
[
[
"(**Note:** only ufuncs recognize these lazy arrays because Numpy provides a [mechanism to override ufuncs](https://www.numpy.org/neps/nep-0013-ufunc-overrides.html) but a [similar mechanism for high-level functions](https://www.numpy.org/neps/nep-0018-array-function-protocol.html) is still in development. To turn lazy arrays into Numpy arrays, pass them to the Numpy constructor, as shown below. This causes the whole array to be loaded into memory and to be stitched together into a contiguous whole.)",
"_____no_output_____"
]
],
[
[
"numpy.array(dataE1)",
"_____no_output_____"
]
],
[
[
"## Lazy array of many files\n\nThere's a lazy version of each of the array-reading functions in [TTreeMethods](https://uproot.readthedocs.io/en/latest/ttree-handling.html#uproot-tree-ttreemethods) and [TBranchMethods](https://uproot.readthedocs.io/en/latest/ttree-handling.html#uproot-tree-tbranchmethods), but there's also module-level [uproot.lazyarray](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot.tree.lazyarray) and [uproot.lazyarrays](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot.tree.lazyarrays). These functions let you make a lazy array that spans many files.\n\nThese functions may be thought of as alternatives to ROOT's TChain: a TChain presents many files as though they were a single TTree, and a file-spanning lazy array presents many files as though they were a single array. See Iteration below as a more explicit TChain alternative.",
"_____no_output_____"
]
],
[
[
"data = uproot.lazyarray(\n # list of files; local files can have wildcards (*)\n [\"samples/sample-%s-zlib.root\" % x\n for x in [\"5.23.02\", \"5.24.00\", \"5.25.02\", \"5.26.00\", \"5.27.02\", \"5.28.00\",\n \"5.29.02\", \"5.30.00\", \"6.08.04\", \"6.10.05\", \"6.14.00\"]],\n # TTree name in each file\n \"sample\",\n # branch(s) in each file for lazyarray(s)\n \"f8\")\ndata",
"_____no_output_____"
]
],
[
[
"This `data` represents the entire set of files, and the only up-front processing that had to be done was to find out how many entries each TTree contains.\n\nIt uses the [uproot.numentries](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-numentries) shortcut method (which reads less data than normal file-opening):",
"_____no_output_____"
]
],
[
[
"dict(uproot.numentries(\n # list of files; local files can have wildcards (*)\n [\"samples/sample-%s-zlib.root\" % x\n for x in [\"5.23.02\", \"5.24.00\", \"5.25.02\", \"5.26.00\", \"5.27.02\", \"5.28.00\",\n \"5.29.02\", \"5.30.00\", \"6.08.04\", \"6.10.05\", \"6.14.00\"]],\n # TTree name in each file\n \"sample\",\n # total=True adds all values; total=False leaves them as a dict\n total=False))",
"_____no_output_____"
]
],
[
[
"## Lazy arrays with caching\n\nBy default, lazy arrays hold onto all data that have been read as long as the lazy array continues to exist. To use a lazy array as a window into a very large dataset, you'll have to limit how much it's allowed to keep in memory at a time.\n\nThis is caching, and the caching mechanism is the same as before:",
"_____no_output_____"
]
],
[
[
"mycache = uproot.cache.ArrayCache(100*1024) # 100 kB\n\ndata = events.lazyarrays(entrysteps=500, cache=mycache)\ndata",
"_____no_output_____"
]
],
[
[
"Before performing a calculation, the cache is empty.",
"_____no_output_____"
]
],
[
[
"len(mycache)",
"_____no_output_____"
],
[
"numpy.sqrt((data[\"E1\"] + data[\"E2\"])**2 - (data[\"px1\"] + data[\"px2\"])**2 -\n (data[\"py1\"] + data[\"py2\"])**2 - (data[\"pz1\"] + data[\"pz2\"])**2)",
"_____no_output_____"
]
],
[
[
"After performing the calculation, the cache contains only as many chunks as it could hold.",
"_____no_output_____"
]
],
[
[
"# chunks in cache chunks touched to compute (E1 + E2)**2 - (px1 + px2)**2 - (py1 + py2)**2 - (pz1 + pz2)**2\nlen(mycache), len(data[\"E1\"].chunks) * 8",
"_____no_output_____"
]
],
[
[
"## Lazy arrays as lightweight skims\n\nThe `ChunkedArray` and `VirtualArray` classes are defined in the [awkward-array](https://github.com/scikit-hep/awkward-array#awkward-array) library installed with uproot. These arrays can be saved to files in a way that preserves their virtualness, which allows you to save a \"diff\" with respect to the original ROOT files.\n\nBelow, we load lazy arrays from a ROOT file with **persistvirtual=True** and add a derived feature:",
"_____no_output_____"
]
],
[
[
"data = events.lazyarrays([\"E*\", \"p[xyz]*\"], persistvirtual=True)\n\ndata[\"mass\"] = numpy.sqrt((data[\"E1\"] + data[\"E2\"])**2 - (data[\"px1\"] + data[\"px2\"])**2 -\n (data[\"py1\"] + data[\"py2\"])**2 - (data[\"pz1\"] + data[\"pz2\"])**2)",
"_____no_output_____"
]
],
[
[
"and save the whole thing to an awkward-array file (`.awkd`).",
"_____no_output_____"
]
],
[
[
"import awkward\n\nawkward.save(\"derived-feature.awkd\", data, mode=\"w\")",
"_____no_output_____"
]
],
[
[
"When we read it back, the derived features come from the awkward-array file but the original features are loaded as pointers to the original ROOT files (`VirtualArrays` whose array-making function knows the original ROOT filenames—don't move them!).",
"_____no_output_____"
]
],
[
[
"data2 = awkward.load(\"derived-feature.awkd\")",
"_____no_output_____"
],
[
"# reads from derived-feature.awkd\ndata2[\"mass\"]",
"_____no_output_____"
],
[
"# reads from the original ROOT flies\ndata2[\"E1\"]",
"_____no_output_____"
]
],
[
[
"Similarly, a dataset with a cut applied saves the identities of the selected events but only pointers to the original ROOT data. This acts as a lightweight skim.",
"_____no_output_____"
]
],
[
[
"selected = data[data[\"mass\"] < 80]\nselected",
"_____no_output_____"
],
[
"awkward.save(\"selected-events.awkd\", selected, mode=\"w\")",
"_____no_output_____"
],
[
"data3 = awkward.load(\"selected-events.awkd\")\ndata3",
"_____no_output_____"
]
],
[
[
"## Lazy arrays in Dask\n\n[Dask](https://dask.org/) is a framework for delayed and distributed computation with lazy array and dataframe interfaces. To turn uproot's lazy arrays into Dask objects, use the [uproot.daskarray](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot.tree.daskarray) and [uproot.daskframe](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot.tree.daskframe) functions.",
"_____no_output_____"
]
],
[
[
"uproot.daskarray(\"https://scikit-hep.org/uproot/examples/Zmumu.root\", \"events\", \"E1\")",
"_____no_output_____"
],
[
"uproot.daskframe(\"https://scikit-hep.org/uproot/examples/Zmumu.root\", \"events\")",
"_____no_output_____"
]
],
[
[
"# Iteration\n\nLazy arrays _implicitly_ step through chunks of data to give you the impression that you have a larger array than memory can hold all at once. The next two methods _explicitly_ step through chunks of data, to give you more control over the process.\n\n[TTreeMethods.iterate](https://uproot.readthedocs.io/en/latest/ttree-handling.html#iterate) iterates over chunks of a TTree and [uproot.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-iterate) iterates through files.\n\nLike a file-spanning lazy array, a file-spanning iterator erases the difference between files and may be used as a TChain alternative. However, the iteration is over _chunks of many events_, not _single events_.",
"_____no_output_____"
]
],
[
[
"histogram = None\n\nfor data in events.iterate([\"E*\", \"p[xyz]*\"], namedecode=\"utf-8\"):\n # operate on a batch of data in the loop\n mass = numpy.sqrt((data[\"E1\"] + data[\"E2\"])**2 - (data[\"px1\"] + data[\"px2\"])**2 -\n (data[\"py1\"] + data[\"py2\"])**2 - (data[\"pz1\"] + data[\"pz2\"])**2)\n\n # accumulate results\n counts, edges = numpy.histogram(mass, bins=120, range=(0, 120))\n if histogram is None:\n histogram = counts, edges\n else:\n histogram = histogram[0] + counts, edges",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot\n\ncounts, edges = histogram\n\nmatplotlib.pyplot.step(x=edges, y=numpy.append(counts, 0), where=\"post\");\nmatplotlib.pyplot.xlim(edges[0], edges[-1]);\nmatplotlib.pyplot.ylim(0, counts.max() * 1.1);\nmatplotlib.pyplot.xlabel(\"mass\");\nmatplotlib.pyplot.ylabel(\"events per bin\");",
"_____no_output_____"
]
],
[
[
"This differs from the lazy array approach in that you need to explicitly manage the iteration, as in this histogram accumulation. However, since we aren't caching, the previous array batch is deleted as soon as `data` goes out of scope, so it is easier to control which arrays are in memory and which aren't.\n\nChoose lazy arrays or iteration according to the degree of control you need.",
"_____no_output_____"
],
[
"## Filenames and entry numbers while iterating\n\n[uproot.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot.tree.iterate) crosses file boundaries as part of its iteration, and that's information we might need in the loop. If the following are `True`, each step in iteration is a tuple containing the arrays and the additional information.\n\n * **reportpath:** the full path or URL of the (possibly remote) file;\n * **reportfile:** the [ROOTDirectory](https://uproot.readthedocs.io/en/latest/root-io.html#uproot-rootio-rootdirectory) object itself (so that you don't need to re-open it at each iteration step);\n * **reportentries:** the starting and stopping entry numbers for this chunk of data. In a multi-file iteration, these are global (always increasing, not returning to zero as we start the next file).",
"_____no_output_____"
]
],
[
[
"for path, file, start, stop, arrays in uproot.iterate(\n [\"https://scikit-hep.org/uproot/examples/sample-%s-zlib.root\" % x\n for x in [\"5.23.02\", \"5.24.00\", \"5.25.02\", \"5.26.00\", \"5.27.02\", \"5.28.00\",\n \"5.29.02\", \"5.30.00\", \"6.08.04\", \"6.10.05\", \"6.14.00\"]],\n \"sample\",\n \"f8\",\n reportpath=True, reportfile=True, reportentries=True):\n print(path, file, start, stop, len(arrays))",
"https://scikit-hep.org/uproot/examples/sample-5.23.02-zlib.root <ROOTDirectory b'sample-5.23.02-zlib.root' at 0x7ac1160de470> 0 30 1\nhttps://scikit-hep.org/uproot/examples/sample-5.24.00-zlib.root <ROOTDirectory b'sample-5.24.00-zlib.root' at 0x7ac1160c6160> 30 60 1\nhttps://scikit-hep.org/uproot/examples/sample-5.25.02-zlib.root <ROOTDirectory b'sample-5.25.02-zlib.root' at 0x7ac116029cf8> 60 90 1\nhttps://scikit-hep.org/uproot/examples/sample-5.26.00-zlib.root <ROOTDirectory b'sample-5.26.00-zlib.root' at 0x7ac116053b70> 90 120 1\nhttps://scikit-hep.org/uproot/examples/sample-5.27.02-zlib.root <ROOTDirectory b'sample-5.27.02-zlib.root' at 0x7ac115fae7b8> 120 150 1\nhttps://scikit-hep.org/uproot/examples/sample-5.28.00-zlib.root <ROOTDirectory b'sample-5.28.00-zlib.root' at 0x7ac115f9c5c0> 150 180 1\nhttps://scikit-hep.org/uproot/examples/sample-5.29.02-zlib.root <ROOTDirectory b'sample-5.29.02-zlib.root' at 0x7ac115ff88d0> 180 210 1\nhttps://scikit-hep.org/uproot/examples/sample-5.30.00-zlib.root <ROOTDirectory b'sample-5.30.00-zlib.root' at 0x7ac11604cf60> 210 240 1\nhttps://scikit-hep.org/uproot/examples/sample-6.08.04-zlib.root <ROOTDirectory b'sample-6.08.04-zlib.root' at 0x7ac1160b3ef0> 240 270 1\nhttps://scikit-hep.org/uproot/examples/sample-6.10.05-zlib.root <ROOTDirectory b'sample-6.10.05-zlib.root' at 0x7ac115f79320> 270 300 1\nhttps://scikit-hep.org/uproot/examples/sample-6.14.00-zlib.root <ROOTDirectory b'sample-6.14.00-zlib.root' at 0x7ac115f81828> 300 330 1\n"
]
],
[
[
"## Limiting the number of entries to be read\n\nAll array-reading functions have the following parameters:\n\n * **entrystart:** the first entry to read, by default `0`;\n * **entrystop:** one after the last entry to read, by default `numentries`.\n\nSetting **entrystart** and/or **entrystop** differs from slicing the resulting array in that slicing reads, then discards, but these parameters minimize the data to read.",
"_____no_output_____"
]
],
[
[
"len(events.array(\"E1\", entrystart=100, entrystop=300))",
"_____no_output_____"
]
],
[
[
"As with Python slices, the **entrystart** and **entrystop** can be negative to count from the end of the TTree.",
"_____no_output_____"
]
],
[
[
"events.array(\"E1\", entrystart=-10)",
"_____no_output_____"
]
],
[
[
"Internally, ROOT files are written in chunks and whole chunks must be read, so the best places to set **entrystart** and **entrystop** are between basket boundaries.",
"_____no_output_____"
]
],
[
[
"# This file has small TBaskets\ntree = uproot.open(\"https://scikit-hep.org/uproot/examples/foriter.root\")[\"foriter\"]\nbranch = tree[\"data\"]\n[branch.basket_numentries(i) for i in range(branch.numbaskets)]",
"_____no_output_____"
],
[
"# (entrystart, entrystop) pairs where ALL the TBranches' TBaskets align\nlist(tree.clusters())",
"_____no_output_____"
]
],
[
[
"Or simply,",
"_____no_output_____"
]
],
[
[
"branch.baskets()",
"_____no_output_____"
]
],
[
[
"## Controlling lazy chunk and iteration step sizes\n\nIn addition to **entrystart** and **entrystop**, the lazy array and iteration functions also have:\n\n * **entrysteps:** the number of entries to read in each chunk or step, `numpy.inf` for make the chunks/steps as big as possible (limited by file boundaries), a memory size string, or a list of `(entrystart, entrystop)` pairs to be explicit.",
"_____no_output_____"
]
],
[
[
"[len(chunk) for chunk in events.lazyarrays(entrysteps=500)[\"E1\"].chunks]",
"_____no_output_____"
],
[
"[len(data[b\"E1\"]) for data in events.iterate([\"E*\", \"p[xyz]*\"], entrysteps=500)]",
"_____no_output_____"
]
],
[
[
"The TTree lazy array/iteration functions ([TTreeMethods.array](https://uproot.readthedocs.io/en/latest/ttree-handling.html#array), [TTreeMethods.arrays](https://uproot.readthedocs.io/en/latest/ttree-handling.html#arrays), [TBranch.lazyarray](https://uproot.readthedocs.io/en/latest/ttree-handling.html#id13), [TTreeMethods.lazyarray](https://uproot.readthedocs.io/en/latest/ttree-handling.html#lazyarray), and [TTreeMethods.lazyarrays](https://uproot.readthedocs.io/en/latest/ttree-handling.html#lazyarrays)) use basket or cluster sizes as a default **entrysteps**, while multi-file lazy array/iteration functions ([uproot.lazyarrays](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-lazyarray-and-lazyarrays) and [uproot.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-iterate)) use the maximum per file: `numpy.inf`.",
"_____no_output_____"
]
],
[
[
"# This file has small TBaskets\ntree = uproot.open(\"https://scikit-hep.org/uproot/examples/foriter.root\")[\"foriter\"]\nbranch = tree[\"data\"]\n[len(a[\"data\"]) for a in tree.iterate(namedecode=\"utf-8\")]",
"_____no_output_____"
],
[
"# This file has small TBaskets\n[len(a[\"data\"]) for a in uproot.iterate([\"https://scikit-hep.org/uproot/examples/foriter.root\"] * 3,\n \"foriter\", namedecode=\"utf-8\")]",
"_____no_output_____"
]
],
[
[
"One particularly useful way to specify the **entrysteps** is with a memory size string. This string consists of a number followed by a memory unit: `B` for bytes, `kB` for kilobytes, `MB`, `GB`, and so on (whitespace and case insensitive).\n\nThe chunks are not guaranteed to fit the memory size perfectly or even be less than the target size. Uproot picks a fixed number of events that approximates this size on average. The result depends on the number of branches chosen because it is the total size of the set of branches that are chosen for the memory target.",
"_____no_output_____"
]
],
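[
[
"The translation from a memory target to a fixed number of events can be pictured with a back-of-the-envelope sketch (this helper is illustrative only; uproot's actual calculation is internal):\n\n```python\n# Illustrative sketch, not uproot's real algorithm: turn a byte target\n# into a fixed number of events using the average bytes per event.\ndef events_per_step(target_bytes, bytes_per_event):\n    # always read at least one event, or nothing would ever be read\n    return max(1, target_bytes // bytes_per_event)\n\n# e.g. 8 branches of 8-byte floats is 64 bytes per event\nevents_per_step(50 * 1024, 64)\n```\n\nSelecting more branches raises the bytes per event, so fewer events fit in each step.",
"_____no_output_____"
]
],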
[
[
"[len(data[b\"E1\"]) for data in events.iterate([\"E*\", \"p[xyz]*\"], entrysteps=\"50 kB\")]",
"_____no_output_____"
],
[
"[len(data[b\"E1\"]) for data in events.iterate(entrysteps=\"50 kB\")]",
"_____no_output_____"
]
],
[
[
"Since lazy arrays represent all branches but we won't necessarily be reading all branches, memory size chunking is less useful for lazy arrays, but you can do it because all function parameters are treated consistently.",
"_____no_output_____"
]
],
[
[
"[len(chunk) for chunk in events.lazyarrays(entrysteps=\"50 kB\")[\"E1\"].chunks]",
"_____no_output_____"
]
],
[
[
"## Caching and iteration\n\nSince iteration gives you more precise control over which set of events you're processing at a given time, caching with the **cache** parameter is less useful than it is with lazy arrays. For consistency's sake, the [TTreeMethods.iterate](https://uproot.readthedocs.io/en/latest/ttree-handling.html#iterate) and [uproot.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-iterate) functions provide a **cache** parameter and it works the same way that it does in other array-reading functions, but its effect would be to retain the previous step's arrays while working on a new step in the iteration. Presumably, the reason you're iterating is because only the current step fits into memory, so this is not a useful feature.\n\nHowever, the **basketcache** is very useful for iteration, more so than it is for lazy arrays. If an iteration step falls in the middle of a TBasket, the whole TBasket must be read in that step, despite the fact that only part of it is incorporated into the output array. The remainder of the TBasket will be used in the next iteration step, so caching it for exactly one iteration step is ideal: it avoids the need to reread it and decompress it again.\n\nIt is such a useful feature that it's built into [TTreeMethods.iterate](https://uproot.readthedocs.io/en/latest/ttree-handling.html#iterate) and [uproot.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-iterate) by default. If you don't set a **basketcache**, these functions will create one with no memory limit and save TBaskets in it for exactly one iteration step, eliminating that temporary cache at the end of iteration. (The same is true of the **keycache**; see [reference documentation](https://uproot.readthedocs.io/en/latest/caches.html) for detail.)\n\nThus, you probably don't want to set any explicit caches while iterating. Setting an explicit **basketcache** would introduce an upper limit on how much it can store, but it would lose the property of evicting after exactly one iteration step (because the connection between the cache object and the iterator would be lost). If you're running out of memory during iteration, try reducing the **entrysteps**.",
"_____no_output_____"
],
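The one-step eviction policy described above can be sketched with a toy cache. This is a pure-Python illustration of the policy, not uproot's actual basketcache implementation; the class name and its behavior are invented for explanation.

```python
# Toy sketch of a cache that evicts after exactly one iteration step.
# Illustrates the policy described above, not uproot's implementation.
class OneStepCache:
    def __init__(self):
        self.current = {}   # items added during this step
        self.previous = {}  # items from the last step (evicted next)

    def __setitem__(self, key, value):
        self.current[key] = value

    def __getitem__(self, key):
        # check this step's items first, then the previous step's
        return self.current.get(key, self.previous.get(key))

    def step(self):
        # advance one iteration step: anything two steps old is dropped
        self.previous, self.current = self.current, {}

cache = OneStepCache()
cache["basket-5"] = "decompressed TBasket"
cache.step()
print(cache["basket-5"])  # still available for one more step
cache.step()
print(cache["basket-5"])  # evicted: None
```

A TBasket that straddles an iteration-step boundary is read in one step and reused in the next, then dropped, which is exactly the lifetime this toy policy provides.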
[
"# Changing the output container type\n\nWhen we ask for [TTreeMethods.arrays](https://uproot.readthedocs.io/en/latest/ttree-handling.html#arrays) (plural), [TTreeMethods.iterate](https://uproot.readthedocs.io/en/latest/ttree-handling.html#iterate), or [uproot.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-iterate), we get a Python dict mapping branch names to arrays. (As a reminder, **namedecode=\"utf-8\"** makes those branch names Python strings, rather than bytestrings.) Sometimes, we want a different kind of container.\n\n * **outputtype:** the _type_ of the container to hold the output arrays.\n\nOne particularly useful container is `tuple`, which can be unpacked by a tuple-assignment.",
"_____no_output_____"
]
],
[
[
"px, py, pz = events.arrays(\"p[xyz]1\", outputtype=tuple)",
"_____no_output_____"
],
[
"px",
"_____no_output_____"
]
],
[
[
"Using `tuple` as an **outputtype** in [TTreeMethods.iterate](https://uproot.readthedocs.io/en/latest/ttree-handling.html#iterate) and [uproot.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-iterate) lets us unpack the arrays in Python's for statement.",
"_____no_output_____"
]
],
[
[
"for px, py, pz in events.iterate(\"p[xyz]1\", outputtype=tuple):\n px**2 + py**2 + pz**2",
"_____no_output_____"
]
],
[
[
"Another useful type is `collections.namedtuple`, which packs everything into a single object, but the fields are accessible by name.",
"_____no_output_____"
]
],
[
[
"import collections # from the Python standard library\n\na = events.arrays(\"p[xyz]1\", outputtype=collections.namedtuple)",
"_____no_output_____"
],
[
"a.px1",
"_____no_output_____"
]
],
[
[
"You can also use your own classes.",
"_____no_output_____"
]
],
[
[
"class Stuff:\n def __init__(self, px, py, pz):\n self.p = numpy.sqrt(px**2 + py**2 + pz**2)\n def __repr__(self):\n return \"<Stuff %r>\" % self.p\n\nevents.arrays(\"p[xyz]1\", outputtype=Stuff)",
"_____no_output_____"
]
],
[
[
"And perhaps most importantly, you can pass in [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).",
"_____no_output_____"
]
],
[
[
"import pandas\n\nevents.arrays(\"p[xyz]1\", outputtype=pandas.DataFrame, entrystop=10)",
"_____no_output_____"
]
],
[
[
"# Filling Pandas DataFrames\n\nThe previous example filled a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) by explicitly passing it as an **outputtype**. Pandas is such an important container type that there are specialized functions for it: [TTreeMethods.pandas.df](https://uproot.readthedocs.io/en/latest/ttree-handling.html#id7) and [uproot.pandas.df](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-pandas-iterate).",
"_____no_output_____"
]
],
[
[
"events.pandas.df(\"p[xyz]1\", entrystop=10)",
"_____no_output_____"
]
],
[
[
"The **entry** index in the resulting DataFrame represents the actual entry numbers in the file. For instance, counting from the end:",
"_____no_output_____"
]
],
[
[
"events.pandas.df(\"p[xyz]1\", entrystart=-10)",
"_____no_output_____"
]
],
[
[
"The [uproot.pandas.iterate](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-pandas-iterate) function doesn't have a **reportentries** parameter because the entry numbers are included in the DataFrame itself.",
"_____no_output_____"
]
],
[
[
"for df in uproot.pandas.iterate(\"https://scikit-hep.org/uproot/examples/Zmumu.root\", \"events\", \"p[xyz]1\", entrysteps=500):\n print(df[:3])",
" px1 py1 pz1\n0 -41.195288 17.433244 -68.964962\n1 35.118050 -16.570362 -48.775247\n2 35.118050 -16.570362 -48.775247\n px1 py1 pz1\n500 39.163212 -19.185280 -13.979333\n501 39.094970 -19.152964 -13.936115\n502 -7.656437 -33.431880 91.840257\n px1 py1 pz1\n1000 26.043759 -17.618814 -0.567176\n1001 26.043759 -17.618814 -0.567176\n1002 25.996204 -17.585241 -0.568920\n px1 py1 pz1\n1500 82.816840 13.262734 27.797909\n1501 -11.416911 39.815352 32.349893\n1502 -11.416911 39.815352 32.349893\n px1 py1 pz1\n2000 -43.378378 -15.235422 3.019698\n2001 -43.378378 -15.235422 3.019698\n2002 -43.244422 -15.187402 3.003985\n"
]
],
[
[
"Part of the motivation for a special function is that it's the first of potentially many external connectors (Dask is another: see above). The other part is that these functions have more Pandas-friendly default parameters, such as **flatten=True**.\n\nFlattening turns multiple values per entry (i.e. multiple particles per event) into separate DataFrame rows, maintaining the nested structure in the DataFrame index. Flattening is usually undesirable for arrays—because arrays don't have an index to record that information—but it's usually desirable for DataFrames.",
"_____no_output_____"
]
],
[
[
"events2 = uproot.open(\"https://scikit-hep.org/uproot/examples/HZZ.root\")[\"events\"] # non-flat data",
"_____no_output_____"
],
[
"events2.pandas.df([\"MET_p*\", \"Muon_P*\"], entrystop=10, flatten=False) # not the default",
"_____no_output_____"
]
],
[
[
"DataFrames like the above are slow (the cell entries are Python lists) and difficult to use in Pandas. Pandas doesn't have specialized functions for manipulating this kind of structure.\n\nHowever, if we use the default **flatten=True**:",
"_____no_output_____"
]
],
[
[
"df = events2.pandas.df([\"MET_p*\", \"Muon_P*\"], entrystop=10)\ndf",
"_____no_output_____"
]
],
[
[
"The particles-within-events structure is encoded in the [pandas.MultiIndex](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html), and we can use Pandas functions like [DataFrame.unstack](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html) to manipulate that structure.",
"_____no_output_____"
]
],
[
[
"df.unstack()",
"_____no_output_____"
]
],
[
[
"There's also a **flatten=None** that skips all non-flat TBranches, included as a convenience against overzealous branch selection.",
"_____no_output_____"
]
],
[
[
"events2.pandas.df([\"MET_p*\", \"Muon_P*\"], entrystop=10, flatten=None)",
"_____no_output_____"
]
],
[
[
"# Selecting and interpreting branches\n\nWe have already seen that TBranches can be selected as lists of strings and with wildcards. This is the same wildcard pattern that filesystems use to match file lists: `*` can be replaced with any text (or none), `?` can be replaced by one character, and `[...]` specifies a list of alternate characters.\n\nWildcard patterns are quick to write, but limited relative to regular expressions. Any branch request between slashes (`/` inside the quotation marks) will be interpreted as a regular expression instead (i.e. `.*` instead of `*`).",
"_____no_output_____"
]
],
[
[
"events.arrays(\"p[xyz]?\").keys() # using wildcards",
"_____no_output_____"
],
[
"events.arrays(\"/p[x-z].?/\").keys() # using regular expressions",
"_____no_output_____"
]
],
[
[
"If, instead of strings, you pass a function from branch objects to `True` or `False`, the branches will be selected by evaluating the function as a filter. This is a way of selecting branches based on properties other than their names.",
"_____no_output_____"
]
],
[
[
"events.arrays(lambda branch: branch.compressionratio() > 3).keys()",
"_____no_output_____"
]
],
[
[
"Note that the return values must be strictly `True` and `False`, not anything that [Python evaluates to true or false](https://itnext.io/you-shouldnt-use-truthy-tests-753b39ef8893). If the function returns anything else, it will be used as a new [Interpretation](https://uproot.readthedocs.io/en/latest/interpretation.html) for the branch.",
"_____no_output_____"
],
[
"## TBranch interpretations\n\nThe very first thing we looked at when we opened a TTree was its TBranches and their interpretations with the `show` method:",
"_____no_output_____"
]
],
[
[
"events.show()",
"Type (no streamer) asstring()\nRun (no streamer) asdtype('>i4')\nEvent (no streamer) asdtype('>i4')\nE1 (no streamer) asdtype('>f8')\npx1 (no streamer) asdtype('>f8')\npy1 (no streamer) asdtype('>f8')\npz1 (no streamer) asdtype('>f8')\npt1 (no streamer) asdtype('>f8')\neta1 (no streamer) asdtype('>f8')\nphi1 (no streamer) asdtype('>f8')\nQ1 (no streamer) asdtype('>i4')\nE2 (no streamer) asdtype('>f8')\npx2 (no streamer) asdtype('>f8')\npy2 (no streamer) asdtype('>f8')\npz2 (no streamer) asdtype('>f8')\npt2 (no streamer) asdtype('>f8')\neta2 (no streamer) asdtype('>f8')\nphi2 (no streamer) asdtype('>f8')\nQ2 (no streamer) asdtype('>i4')\nM (no streamer) asdtype('>f8')\n"
]
],
[
[
"Every branch has a default interpretation, such as",
"_____no_output_____"
]
],
[
[
"events[\"E1\"].interpretation",
"_____no_output_____"
]
],
[
[
"meaning big-endian, 8-byte floating point numbers as a [Numpy dtype](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html). We could interpret this branch with a different [Numpy dtype](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html), but it wouldn't be meaningful.",
"_____no_output_____"
]
],
[
[
"events[\"E1\"].array(uproot.asdtype(\">i8\"))",
"_____no_output_____"
]
],
[
[
"Instead of reading the values as floating point numbers, we've read them as integers. It's unlikely that you'd ever want to do that, unless the default interpretation is wrong.",
"_____no_output_____"
],
[
"## Reading data into a preexisting array\n\nOne actually useful TBranch reinterpretation is [uproot.asarray](https://uproot.readthedocs.io/en/latest/interpretation.html#uproot-interp-numerical-asarray). It differs from [uproot.asdtype](https://uproot.readthedocs.io/en/latest/interpretation.html#uproot-interp-numerical-asdtype) only in that the latter creates a new array when reading data while the former fills a user-specified array.",
"_____no_output_____"
]
],
[
[
"myarray = numpy.zeros(events.numentries, dtype=numpy.float32) # (different size)\nreinterpretation = events[\"E1\"].interpretation.toarray(myarray)\nreinterpretation",
"_____no_output_____"
]
],
[
[
"Passing the new [uproot.asarray](https://uproot.readthedocs.io/en/latest/interpretation.html#uproot-interp-numerical-asarray) interpretation to the array-reading function",
"_____no_output_____"
]
],
[
[
"events[\"E1\"].array(reinterpretation)",
"_____no_output_____"
]
],
[
[
"fills and returns that array. When you look at the `myarray` object, you can see that it is now filled, overwriting whatever might have been in it before.",
"_____no_output_____"
]
],
[
[
"myarray",
"_____no_output_____"
]
],
[
[
"This is useful for speed-critical applications or ones in which the array is managed by an external system. The array could be NUMA-allocated in a supercomputer or CPU/GPU managed by PyTorch, for instance.\n\nAs the provider of the array, it is your responsibility to ensure that it has enough elements to hold the (possibly type-converted) output. (Failure to do so only results in an exception, not a segmentation fault or anything.)",
"_____no_output_____"
],
[
"## Passing many new interpretations in one call\n\nAbove, you saw what happens when a TBranch selector is a function returning `True` or `False`, and I stressed that it must be literally `True`, not an object that Python would evaluate to `True`.",
"_____no_output_____"
]
],
[
[
"events.arrays(lambda branch: isinstance(branch.interpretation, uproot.asdtype) and\n str(branch.interpretation.fromdtype) == \">f8\").keys()",
"_____no_output_____"
]
],
[
[
"This is because a function that returns objects selects branches and sets their interpretations in one pass.",
"_____no_output_____"
]
],
[
[
"events.arrays(lambda branch: uproot.asdtype(\">f8\", \"<f4\") if branch.name.startswith(b\"px\") else None)",
"_____no_output_____"
]
],
[
[
"The above selects TBranch names that start with `\"px\"`, read-interprets them as big-endian 8-byte floats and writes them as little-endian 4-byte floats. The selector returns `None` for the TBranches to exclude and an [Interpretation](https://uproot.readthedocs.io/en/latest/interpretation.html) for the ones to reinterpret.\n\nThe same could have been said in a less functional way with a dict:",
"_____no_output_____"
]
],
[
[
"events.arrays({\"px1\": uproot.asdtype(\">f8\", \"<f4\"),\n \"px2\": uproot.asdtype(\">f8\", \"<f4\")})",
"_____no_output_____"
]
],
[
[
"## Multiple values per event: fixed size arrays\n\nSo far, you've seen a lot of examples with one value per event, but multiple values per event are very common. In the simplest case, the value in each event is a vector, matrix, or tensor with a fixed number of dimensions, such as a 3-vector or a set of parton weights from a Monte Carlo.\n\nHere's an artificial example:",
"_____no_output_____"
]
],
[
[
"tree = uproot.open(\"https://scikit-hep.org/uproot/examples/nesteddirs.root\")[\"one/two/tree\"]\narray = tree.array(\"ArrayInt64\", entrystop=20)\narray",
"_____no_output_____"
]
],
[
[
"The resulting array has a non-trivial [Numpy shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html), but otherwise, it has the same [Numpy array type](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html) as the other arrays you've seen (apart from lazy arrays—`ChunkedArray` and `VirtualArray`—which are not Numpy objects).",
"_____no_output_____"
]
],
[
[
"array.shape",
"_____no_output_____"
]
],
[
[
"All dimensions of the shape except the first (the \"length\") are known before reading the array: they form the [dtype shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.shape.html).",
"_____no_output_____"
]
],
[
[
"tree[\"ArrayInt64\"].interpretation",
"_____no_output_____"
],
[
"tree[\"ArrayInt64\"].interpretation.todtype.shape",
"_____no_output_____"
]
],
[
[
"The [dtype shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.shape.html) of a TBranch with one value per event (simple, 1-dimensional arrays) is an empty tuple.",
"_____no_output_____"
]
],
[
[
"tree[\"Int64\"].interpretation.todtype.shape",
"_____no_output_____"
]
],
[
[
"Fixed-width arrays are exploded into one column per element when viewed as a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).",
"_____no_output_____"
]
],
[
[
"tree.pandas.df(\"ArrayInt64\", entrystop=20)",
"_____no_output_____"
]
],
[
[
"## Multiple values per event: leaf-lists\n\nAnother of ROOT's fundamental TBranch types is a \"[leaf-list](https://root.cern.ch/root/htmldoc/guides/users-guide/Trees.html#adding-a-branch-to-hold-a-list-of-variables),\" or a TBranch with multiple TLeaves. (**Note:** in ROOT terminology, \"TBranch\" is a data structure that usually points to data in TBaskets and \"TLeaf\" is the _data type_ descriptor. TBranches and TLeaves have no relationship to the interior and endpoints of a tree structure in computer science.)\n\nThe Numpy analogue of a leaf-list is a [structured array](https://docs.scipy.org/doc/numpy/user/basics.rec.html), a [dtype](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html) with named fields, which is Numpy's view into a C array of structs (with or without padding).",
"_____no_output_____"
]
],
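The struct-like layout of a leaf-list can be sketched directly in Numpy before looking at the file. The field names below deliberately mirror the `"x"`, `"y"`, `"z"` fields of the example file, but the values are invented for illustration.

```python
import numpy as np

# A structured array like the one a leaf-list produces: big-endian
# 8-byte floats ("x"), big-endian 4-byte ints ("y"), and single
# characters ("z"). Values here are made up for illustration.
dtype = np.dtype([("x", ">f8"), ("y", ">i4"), ("z", "S1")])
arr = np.array([(1.5, 1, b"a"), (2.5, 2, b"b"), (3.5, 3, b"c")], dtype=dtype)

arr["x"]              # access one field across all entries
arr.dtype.itemsize    # 8 + 4 + 1 = 13 bytes, not a power of 2
```

Numpy views this as a contiguous block of 13-byte records, which is the same memory layout as a C array of packed structs.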
[
[
"tree = uproot.open(\"https://scikit-hep.org/uproot/examples/leaflist.root\")[\"tree\"]\narray = tree.array(\"leaflist\")\narray",
"_____no_output_____"
]
],
[
[
"This array is presented as an array of tuples, though it's actually a contiguous block of memory with floating point numbers (`\"x\"`), integers (`\"y\"`), and single characters (`\"z\"`) adjacent to each other.",
"_____no_output_____"
]
],
[
[
"array[0]",
"_____no_output_____"
],
[
"array[\"x\"]",
"_____no_output_____"
],
[
"array[\"y\"]",
"_____no_output_____"
],
[
"array[\"z\"]",
"_____no_output_____"
]
],
[
[
"The [dtype](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html) for this array defines the field structure. Its [item size](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.itemsize.html) is `8 + 4 + 1 = 13` bytes, which is not a power of 2, as it would be for arrays of primitive types.",
"_____no_output_____"
]
],
[
[
"array.dtype",
"_____no_output_____"
],
[
"array.dtype.itemsize",
"_____no_output_____"
]
],
[
[
"ROOT TBranches may have multiple values per event _and_ a leaf-list structure, and [Numpy arrays](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html) may have non-trivial shape _and_ [dtype fields](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html), so the translation between ROOT and Numpy is one-to-one.",
"_____no_output_____"
],
[
"Leaf-list TBranches are exploded into one column per field when viewed as a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).",
"_____no_output_____"
]
],
[
[
"tree.pandas.df(\"leaflist\")",
"_____no_output_____"
]
],
[
[
"The **flatname** parameter determines how fixed-width arrays and field names are translated into Pandas names; the default is `uproot._connect._pandas.default_flatname` (a function from **branchname** _(str)_, **fieldname** _(str)_, **index** _(int)_ to Pandas column name _(str)_).",
"_____no_output_____"
],
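A custom **flatname** might look like the following sketch, assuming the `(branchname, fieldname, index)` signature described above. The helper name and its exact formatting are hypothetical, not uproot's default behavior.

```python
# Hypothetical flatname function: builds a Pandas column name from the
# (branchname, fieldname, index) arguments described above.
def my_flatname(branchname, fieldname, index):
    def as_str(x):
        # branch names from ROOT files may be bytestrings
        return x.decode("utf-8") if isinstance(x, bytes) else x
    name = as_str(branchname)
    if fieldname is not None:
        name += "." + as_str(fieldname)   # leaf-list field
    if index is not None:
        name += "[" + str(index) + "]"    # fixed-width array element
    return name

my_flatname(b"fTracks", b"fPx", None)   # 'fTracks.fPx'
my_flatname("ArrayInt64", None, 3)      # 'ArrayInt64[3]'
```

It could then be passed as `tree.pandas.df("leaflist", flatname=my_flatname)` to control the column names.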
[
"## Multiple values per event: jagged arrays\n\nIn physics data, it is even more common to have an arbitrary number of values per event than a fixed number of values per event. Consider, for instance, particles produced in a collision, tracks in a jet, hits on a track, etc.\n\nUnlike fixed-width arrays and a fixed number of fields per element, Numpy has no analogue for this type. It is fundamentally outside of Numpy's scope because Numpy describes rectangular tables of data. As we have seen above, Pandas has some support for this so-called \"jagged\" (sometimes \"ragged\") data, but only through manipulation of its index ([pandas.MultiIndex](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html)), not the data themselves.\n\nFor this, uproot fills a new `JaggedArray` data structure (from the awkward-array library, like `ChunkedArray` and `VirtualArray`).",
"_____no_output_____"
]
],
[
[
"tree = uproot.open(\"https://scikit-hep.org/uproot/examples/nesteddirs.root\")[\"one/two/tree\"]\narray = tree.array(\"SliceInt64\", entrystop=20)\narray",
"_____no_output_____"
]
],
[
[
"These `JaggedArrays` are made of [Numpy arrays](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html) and follow the same [Numpy slicing rules](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html), including [advanced indexing](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing).\n\nAwkward-array generalizes Numpy in many ways—details can be found [in its documentation](https://github.com/scikit-hep/awkward-array).",
"_____no_output_____"
]
],
[
[
"array.counts",
"_____no_output_____"
],
[
"array.flatten()",
"_____no_output_____"
],
[
"array[:6]",
"_____no_output_____"
],
[
"array[array.counts > 1, 0]",
"_____no_output_____"
]
],
[
[
"Here is an example of `JaggedArrays` in physics data:",
"_____no_output_____"
]
],
[
[
"events2 = uproot.open(\"https://scikit-hep.org/uproot/examples/HZZ.root\")[\"events\"]",
"_____no_output_____"
],
[
"E, px, py, pz = events2.arrays([\"Muon_E\", \"Muon_P[xyz]\"], outputtype=tuple)\nE",
"_____no_output_____"
],
[
"pt = numpy.sqrt(px**2 + py**2)\np = numpy.sqrt(px**2 + py**2 + pz**2)\np",
"_____no_output_____"
],
[
"eta = numpy.log((p + pz)/(p - pz))/2\neta",
"_____no_output_____"
],
[
"phi = numpy.arctan2(py, px)\nphi",
"_____no_output_____"
],
[
"pt.counts",
"_____no_output_____"
],
[
"pt.flatten()",
"_____no_output_____"
],
[
"pt[:6]",
"_____no_output_____"
]
],
[
[
"Note that if you want to histogram the inner contents of these arrays (i.e. histogram of particles, ignoring event boundaries), functions like [numpy.histogram](https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html) require non-jagged arrays, so flatten them with a call to `.flatten()`.\n\nTo select elements of inner lists (Pandas's [DataFrame.xs](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.xs.html)), first require the list to have at least that many elements.",
"_____no_output_____"
]
],
[
[
"pt[pt.counts > 1, 0]",
"_____no_output_____"
]
],
[
[
"`JaggedArrays` of booleans select from inner lists (i.e. put a cut on particles):",
"_____no_output_____"
]
],
[
[
"pt > 50",
"_____no_output_____"
],
[
"eta[pt > 50]",
"_____no_output_____"
]
],
[
[
"And Numpy arrays of booleans select from outer lists (i.e. put a cut on events):",
"_____no_output_____"
]
],
[
[
"eta[pt.max() > 50]",
"_____no_output_____"
]
],
[
[
"Reducers like `count`, `sum`, `min`, `max`, `any` (boolean), or `all` (boolean) apply per-event, turning a `JaggedArray` into a Numpy array.",
"_____no_output_____"
]
],
[
[
"pt.max()",
"_____no_output_____"
]
],
[
[
"You can even do combinatorics, such as `a.cross(b)` to compute the Cartesian product of `a` and `b` per event, or `a.choose(n)` to compute distinct combinations of `n` elements per event.",
"_____no_output_____"
]
],
[
[
"pt.choose(2)",
"_____no_output_____"
]
],
[
[
"Some of these functions have \"arg\" versions that return integers, which can be used in indexing.",
"_____no_output_____"
]
],
[
[
"abs(eta).argmax()",
"_____no_output_____"
],
[
"pairs = pt.argchoose(2)\npairs",
"_____no_output_____"
],
[
"left = pairs.i0\nright = pairs.i1\nleft, right",
"_____no_output_____"
]
],
[
[
"Masses of unique pairs of muons, for events that have them:",
"_____no_output_____"
]
],
[
[
"masses = numpy.sqrt((E[left] + E[right])**2 - (px[left] + px[right])**2 -\n (py[left] + py[right])**2 - (pz[left] + pz[right])**2)\nmasses",
"_____no_output_____"
],
[
"counts, edges = numpy.histogram(masses.flatten(), bins=120, range=(0, 120))\n\nmatplotlib.pyplot.step(x=edges, y=numpy.append(counts, 0), where=\"post\");\nmatplotlib.pyplot.xlim(edges[0], edges[-1]);\nmatplotlib.pyplot.ylim(0, counts.max() * 1.1);\nmatplotlib.pyplot.xlabel(\"mass\");\nmatplotlib.pyplot.ylabel(\"events per bin\");",
"_____no_output_____"
]
],
[
[
"## Jagged array performance\n\n`JaggedArrays` are compact in memory and fast to read. Whereas [root_numpy](https://pypi.org/project/root-numpy/) reads data like `std::vector<float>` per event into a Numpy array of Numpy arrays (Numpy's object `\"O\"` [dtype](https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html)), which has data locality issues, `JaggedArray` consists of two contiguous arrays: one containing content (the `floats`) and the other representing structure via `offsets` (random access) or `counts`.",
"_____no_output_____"
]
],
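The content/offsets layout can be modeled with two plain Numpy arrays. This is an illustration of the memory layout only; awkward-array's real classes provide much more on top of it.

```python
import numpy as np

# Toy model of a JaggedArray's memory layout: one flat "content" array
# plus an "offsets" array marking where each event's sublist begins
# and ends. (Values invented for illustration.)
content = np.array([1.1, 2.2, 3.3, 4.4, 5.5])
offsets = np.array([0, 2, 2, 5])  # events: [1.1, 2.2], [], [3.3, 4.4, 5.5]

counts = offsets[1:] - offsets[:-1]  # per-event lengths: [2, 0, 3]
sublists = [content[offsets[i]:offsets[i + 1]] for i in range(len(counts))]
```

Because both arrays are contiguous, slicing out one event is a constant-time view, with no per-event deserialization.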
[
[
"masses.content",
"_____no_output_____"
],
[
"masses.offsets",
"_____no_output_____"
],
[
"masses.counts",
"_____no_output_____"
]
],
[
[
"Fortunately, ROOT files are themselves structured this way, with variable-width data represented by contents and offsets in a TBasket. These arrays do not need to be deserialized individually, but can be merely cast as Numpy arrays in one Python call. The lack of per-event processing is why reading in uproot and processing data with awkward-array can be fast, despite being written in Python.\n\n<br>\n\n<center><img src=\"https://raw.githubusercontent.com/scikit-hep/uproot/master/docs/logscales.png\" width=\"75%\"></center>",
"_____no_output_____"
],
[
"## Special physics objects: Lorentz vectors\n\nAlthough any C++ type can in principle be read (see below), some are important enough to be given convenience methods for analysis. These are not defined in uproot (which is strictly concerned with I/O), but in [uproot-methods](https://github.com/scikit-hep/uproot-methods). If you need certain classes to have user-friendly methods in Python, you're encouraged to contribute them to [uproot-methods](https://github.com/scikit-hep/uproot-methods).\n\nOne of these classes is `TLorentzVectorArray`, which defines an _array_ of Lorentz vectors.",
"_____no_output_____"
]
],
[
[
"events3 = uproot.open(\"https://scikit-hep.org/uproot/examples/HZZ-objects.root\")[\"events\"]",
"_____no_output_____"
],
[
"muons = events3.array(\"muonp4\")\nmuons",
"_____no_output_____"
]
],
[
[
"In the print-out, these appear to be Python objects, but they're high-performance arrays that are only turned into objects when you look at individual elements.",
"_____no_output_____"
]
],
[
[
"muon = muons[0, 0]\ntype(muon), muon",
"_____no_output_____"
]
],
[
[
"This object has all the usual kinematics methods,",
"_____no_output_____"
]
],
[
[
"muon.mass",
"_____no_output_____"
],
[
"muons[0, 0].delta_phi(muons[0, 1])",
"_____no_output_____"
]
],
[
[
"But an array of Lorentz vectors also has these methods, and they are computed in bulk (faster than creating each object and calling the method on each).",
"_____no_output_____"
]
],
[
[
"muons.mass # some mass**2 are slightly negative, hence the Numpy warning about negative square roots",
"/home/jpivarski/miniconda3/lib/python3.7/site-packages/uproot_methods-0.7.2-py3.7.egg/uproot_methods/classes/TLorentzVector.py:189: RuntimeWarning: invalid value encountered in sqrt\n return self._trymemo(\"mass\", lambda self: self.awkward.numpy.sqrt(self.mag2))\n"
]
],
[
[
"(**Note:** if you don't want to see Numpy warnings, use [numpy.seterr](https://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html).)",
"_____no_output_____"
]
],
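Suppressing the warning can be done locally with `numpy.seterr`; a minimal sketch, using a plain negative value rather than the mass arrays above:

```python
import numpy as np

# Temporarily silence invalid-value warnings (such as the square root
# of a slightly negative mass**2); numpy.seterr returns the old
# settings so they can be restored afterward.
old_settings = np.seterr(invalid="ignore")
result = np.sqrt(np.array([-1.0]))  # nan, with no RuntimeWarning
np.seterr(**old_settings)           # restore the previous behavior
```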
[
[
"pairs = muons.choose(2)\nlefts = pairs.i0\nrights = pairs.i1\nlefts.delta_r(rights)",
"_____no_output_____"
]
],
[
[
"TBranches with C++ class `TLorentzVector` are automatically converted into `TLorentzVectorArrays`. Although they're in wide use, the C++ `TLorentzVector` class is deprecated in favor of [ROOT::Math::LorentzVector](https://root.cern/doc/v612/classROOT_1_1Math_1_1LorentzVector.html). Unlike the old class, the new vectors can be represented with a variety of data types and coordinate systems, and they're split into multiple branches, so uproot sees them as four branches, each representing the components.\n\nYou can still use the `TLorentzVectorArray` Python class; you just need to use a special constructor to build the object from its branches.",
"_____no_output_____"
]
],
[
[
"# Suppose you have four component branches...\nE, px, py, pz = events2.arrays([\"Muon_E\", \"Muon_P[xyz]\"], outputtype=tuple)",
"_____no_output_____"
],
[
"import uproot_methods\n\narray = uproot_methods.TLorentzVectorArray.from_cartesian(px, py, pz, E)\narray",
"_____no_output_____"
]
],
[
[
"There are constructors for different coordinate systems. Internally, `TLorentzVectorArray` uses the coordinates you give it and only converts to other systems on demand.",
"_____no_output_____"
]
],
[
[
"[x for x in dir(uproot_methods.TLorentzVectorArray) if x.startswith(\"from_\")]",
"_____no_output_____"
]
],
[
[
"## Variable-width values: strings\n\nStrings are another fundamental type. In C++, they may be `char*`, `std::string`, or `TString`, but all string types are converted (on demand) to the same Python string type.",
"_____no_output_____"
]
],
[
[
"branch = uproot.open(\"https://scikit-hep.org/uproot/examples/sample-6.14.00-zlib.root\")[\"sample\"][\"str\"]\nbranch.array()",
"_____no_output_____"
]
],
[
[
"As with most strings from ROOT, they are unencoded bytestrings (see the `b` before each quote). Since they're not names, there's no **namedecode**, but they can be decoded as needed using the usual Python method.",
"_____no_output_____"
]
],
[
[
"[x.decode(\"utf-8\") for x in branch.array()]",
"_____no_output_____"
]
],
[
[
"## Arbitrary objects in TTrees\n\nUproot does not have a hard-coded deserialization for every C++ class type; it uses the \"streamers\" that ROOT includes in each file to learn how to deserialize the objects in that file. Even if you defined your own C++ classes, uproot should be able to read them. (**Caveat:** not all structure types have been implemented, so the coverage of C++ types is a work in progress.)\n\nIn some cases, the deserialization is simplified by the fact that ROOT has \"split\" the objects. Instead of seeing a `JaggedArray` of objects, you see a `JaggedArray` of each attribute separately, such as the components of a [ROOT::Math::LorentzVector](https://root.cern/doc/v612/classROOT_1_1Math_1_1LorentzVector.html).\n\nIn the example below, `Track` objects under `fTracks` have been split into `fTracks.fUniqueID`, `fTracks.fBits`, `fTracks.fPx`, `fTracks.fPy`, `fTracks.fPz`, etc.",
"_____no_output_____"
]
],
[
[
"!wget https://scikit-hep.org/uproot/examples/Event.root",
"--2019-12-18 07:20:26-- https://scikit-hep.org/uproot/examples/Event.root\nResolving scikit-hep.org (scikit-hep.org)... 185.199.109.153, 185.199.108.153, 185.199.111.153, ...\nConnecting to scikit-hep.org (scikit-hep.org)|185.199.109.153|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 37533466 (36M) [application/octet-stream]\nSaving to: ‘Event.root’\n\nEvent.root 100%[===================>] 35.79M 3.63MB/s in 11s \n\n2019-12-18 07:20:37 (3.35 MB/s) - ‘Event.root’ saved [37533466/37533466]\n\n"
],
[
"tree = uproot.open(\"Event.root\")[\"T\"]\ntree.show()",
"event TStreamerInfo None\nTObject TStreamerInfo None\nfUniqueID TStreamerBasicType asdtype('>u4')\nfBits TStreamerBasicType asdtype('>u4')\n\nfType[20] TStreamerBasicType asdtype(\"('i1', (20,))\")\nfEventName TStreamerBasicType asstring(4)\nfNtrack TStreamerBasicType asdtype('>i4')\nfNseg TStreamerBasicType asdtype('>i4')\nfNvertex TStreamerBasicType asdtype('>u4')\nfFlag TStreamerBasicType asdtype('>u4')\nfTemperature TStreamerBasicType asdtype('>f4', 'float64')\nfMeasures[10] TStreamerBasicType asdtype(\"('>i4', (10,))\")\nfMatrix[4][4] TStreamerBasicType asdtype(\"('>f4', (4, 4))\", \"('<f8', (4, 4))\")\nfClosestDistance TStreamerBasicPointer None\nfEvtHdr TStreamerObjectAny None\nfEvtHdr.fEvtNum TStreamerBasicType asdtype('>i4')\nfEvtHdr.fRun TStreamerBasicType asdtype('>i4')\nfEvtHdr.fDate TStreamerBasicType asdtype('>i4')\n\nfTracks TStreamerObjectPointer None\nfTracks.fUniqueID TStreamerBasicType asjagged(asdtype('>u4'))\nfTracks.fBits TStreamerBasicType asjagged(asdtype('>u4'))\nfTracks.fPx TStreamerBasicType asjagged(asdtype('>f4'))\nfTracks.fPy TStreamerBasicType asjagged(asdtype('>f4'))\nfTracks.fPz TStreamerBasicType asjagged(asdtype('>f4'))\nfTracks.fRandom TStreamerBasicType asjagged(asdtype('>f4'))\nfTracks.fMass2 TStreamerBasicType asjagged(asfloat16(0.0, 0.0, 8, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fBx TStreamerBasicType asjagged(asfloat16(0.0, 0.0, 10, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fBy TStreamerBasicType asjagged(asfloat16(0.0, 0.0, 10, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fMeanCharge TStreamerBasicType asjagged(asdtype('>f4'))\nfTracks.fXfirst TStreamerBasicType asjagged(asfloat16(0, 0, 12, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fXlast TStreamerBasicType asjagged(asfloat16(0, 0, 12, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fYfirst TStreamerBasicType asjagged(asfloat16(0, 0, 12, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fYlast TStreamerBasicType asjagged(asfloat16(0, 0, 12, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fZfirst TStreamerBasicType asjagged(asfloat16(0, 0, 12, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fZlast TStreamerBasicType asjagged(asfloat16(0, 0, 12, dtype([('exponent', 'u1'), ('mantissa', '>u2')]), dtype('float32')))\nfTracks.fCharge TStreamerBasicType asjagged(asdouble32(-1.0, 1.0, 2, dtype('>u4'), dtype('float64')))\nfTracks.fVertex[3] TStreamerBasicType asjagged(asdouble32(-30.0, 30.0, 16, dtype(('>u4', (3,))), dtype(('<f8', (3,)))))\nfTracks.fNpoint TStreamerBasicType asjagged(asdtype('>i4'))\nfTracks.fValid TStreamerBasicType asjagged(asdtype('>i2'))\nfTracks.fNsp TStreamerBasicType asjagged(asdtype('>u4'))\nfTracks.fPointValue TStreamerBasicPointer None\nfTracks.fTriggerBits.fUniqueID\n TStreamerBasicType asjagged(asdtype('>u4'))\nfTracks.fTriggerBits.fBits TStreamerBasicType asjagged(asdtype('>u4'))\nfTracks.fTriggerBits.fNbits\n TStreamerBasicType asjagged(asdtype('>u4'))\nfTracks.fTriggerBits.fNbytes\n TStreamerBasicType asjagged(asdtype('>u4'))\nfTracks.fTriggerBits.fAllBits\n TStreamerBasicPointer asjagged(asdtype('uint8'), 1)\nfTracks.fTArray[3] TStreamerBasicType asjagged(asdtype(\"('>f4', (3,))\"))\n\nfHighPt TStreamerObjectPointer asgenobj(TRefArray)\nfMuons TStreamerObjectPointer asgenobj(TRefArray)\nfLastTrack TStreamerInfo asobj(<uproot.rootio.TRef>)\nfWebHistogram TStreamerInfo asobj(<uproot.rootio.TRef>)\nfH TStreamerObjectPointer asgenobj(TH1F)\nfTriggerBits TStreamerInfo None\nfTriggerBits.TObject (no streamer) None\nfTriggerBits.fUniqueID TStreamerBasicType asdtype('>u4')\nfTriggerBits.fBits TStreamerBasicType asdtype('>u4')\n\nfTriggerBits.fNbits TStreamerBasicType asdtype('>u4')\nfTriggerBits.fNbytes TStreamerBasicType asdtype('>u4')\nfTriggerBits.fAllBits TStreamerBasicPointer asjagged(asdtype('uint8'), 1)\n\nfIsValid TStreamerBasicType asdtype('bool')\n\n"
]
],
[
[
"In this view, many of the attributes are _not_ special classes and can be read as arrays of numbers,",
"_____no_output_____"
]
],
[
[
"tree.array(\"fTemperature\", entrystop=20)",
"_____no_output_____"
]
],
[
[
"as arrays of fixed-width matrices,",
"_____no_output_____"
]
],
[
[
"tree.array(\"fMatrix[4][4]\", entrystop=6)",
"_____no_output_____"
]
],
[
[
"as jagged arrays (of ROOT's \"Float16_t\" encoding),",
"_____no_output_____"
]
],
[
[
"tree.array(\"fTracks.fMass2\", entrystop=6)",
"_____no_output_____"
]
],
[
[
"or as jagged arrays of fixed arrays (of ROOT's \"Double32_t\" encoding),",
"_____no_output_____"
]
],
[
[
"tree.array(\"fTracks.fTArray[3]\", entrystop=6)",
"_____no_output_____"
]
],
[
[
"However, some types are not fully split by ROOT and have to be deserialized individually (not vectorially). This example includes _histograms_ in the TTree, and histograms are sufficiently complex that they cannot be split.",
"_____no_output_____"
]
],
[
[
"tree.array(\"fH\", entrystop=6)",
"_____no_output_____"
]
],
[
[
"Each of those is a standard histogram object, something that would ordinarily be in a `TDirectory`, not a `TTree`. It has histogram convenience methods (see below).",
"_____no_output_____"
]
],
[
[
"for histogram in tree.array(\"fH\", entrystop=3):\n print(histogram.title)\n print(histogram.values)\nprint(\"\\n...\\n\")\nfor histogram in tree.array(\"fH\", entrystart=-3):\n print(histogram.title)\n print(histogram.values)",
"b'Event Histogram'\n[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0.]\nb'Event Histogram'\n[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0.]\nb'Event Histogram'\n[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0.]\n\n...\n\nb'Event Histogram'\n[14. 18. 14. 11. 15. 13. 12. 13. 8. 8. 9. 10. 10. 7. 7. 10. 8. 12.\n 6. 8. 7. 9. 10. 12. 10. 11. 10. 10. 10. 8. 14. 13. 9. 7. 12. 10.\n 7. 6. 9. 13. 11. 8. 10. 9. 7. 4. 7. 10. 8. 8. 9. 9. 7. 12.\n 11. 9. 10. 7. 10. 13. 13. 11. 9. 9. 8. 8. 10. 12. 7. 5. 9. 10.\n 12. 13. 10. 14. 10. 10. 8. 12. 12. 11. 16. 12. 8. 12. 7. 9. 8. 7.\n 10. 7. 11. 11. 8. 13. 9. 8. 14. 16.]\nb'Event Histogram'\n[14. 18. 14. 11. 15. 13. 12. 13. 8. 8. 9. 10. 10. 7. 8. 10. 8. 12.\n 6. 8. 7. 9. 10. 12. 10. 11. 10. 10. 10. 8. 14. 13. 9. 7. 12. 10.\n 7. 6. 9. 13. 11. 8. 10. 9. 7. 4. 7. 10. 8. 8. 9. 9. 7. 12.\n 11. 9. 10. 7. 10. 13. 13. 11. 9. 9. 8. 8. 10. 12. 7. 5. 9. 10.\n 12. 13. 10. 14. 10. 10. 8. 12. 12. 11. 16. 12. 8. 12. 7. 9. 8. 7.\n 10. 7. 11. 11. 8. 13. 9. 8. 14. 16.]\nb'Event Histogram'\n[14. 18. 14. 11. 15. 13. 12. 13. 8. 8. 9. 10. 10. 7. 8. 10. 8. 12.\n 6. 8. 7. 9. 10. 12. 10. 11. 10. 10. 10. 8. 14. 13. 9. 7. 12. 10.\n 7. 6. 9. 13. 11. 8. 10. 9. 7. 4. 7. 10. 8. 8. 9. 9. 7. 12.\n 11. 
9. 10. 7. 10. 13. 13. 11. 9. 9. 8. 8. 10. 12. 7. 5. 9. 10.\n 12. 13. 10. 14. 10. 10. 8. 12. 12. 11. 16. 12. 8. 12. 7. 9. 9. 7.\n 10. 7. 11. 11. 8. 13. 9. 8. 14. 16.]\n"
]
],
[
[
"The criterion for whether an object can be read vectorially in Numpy (fast) or individually in Python (slow) is whether it has a fixed width—all objects having the same number of bytes—or a variable width. You can see this in the TBranch's `interpretation` as the distinction between [uproot.asobj](https://uproot.readthedocs.io/en/latest/interpretation.html#uproot-interp-objects-asobj) (fixed width, vector read) and [uproot.asgenobj](https://uproot.readthedocs.io/en/latest/interpretation.html#uproot-interp-objects-asgenobj) (variable width, read into Python objects).",
"_____no_output_____"
]
],
[
[
"# TLorentzVectors all have the same number of fixed width components, so they can be read vectorially.\nevents3[\"muonp4\"].interpretation",
"_____no_output_____"
],
[
"# Histograms contain name strings and variable length lists, so they must be read as Python objects.\ntree[\"fH\"].interpretation",
"_____no_output_____"
]
],
[
[
"## Doubly nested jagged arrays (i.e. `std::vector<std::vector<T>>`)\n\nVariable length lists are an exception to the above—up to one level of depth. This is why `JaggedArrays`, representing types such as `std::vector<T>` for a fixed-width `T`, can be read vectorially. Unfortunately, the same does not apply to doubly nested jagged arrays, such as `std::vector<std::vector<T>>`.",
"_____no_output_____"
]
],
[
[
"branch = uproot.open(\"https://scikit-hep.org/uproot/examples/vectorVectorDouble.root\")[\"t\"][\"x\"]\nbranch.interpretation",
"_____no_output_____"
],
[
"branch._streamer._fTypeName",
"_____no_output_____"
],
[
"array = branch.array()\narray",
"_____no_output_____"
]
],
[
[
"Although you see something that looks like a `JaggedArray`, the type is `ObjectArray`, meaning that you only have some bytes with an auto-generated prescription for turning them into Python objects (from the \"streamers,\" self-describing the ROOT file). You can't apply the usual `JaggedArray` slicing.",
"_____no_output_____"
]
],
[
[
"try:\n array[array.counts > 0, 0]\nexcept Exception as err:\n print(type(err), err)",
"_____no_output_____"
]
],
[
[
"To get `JaggedArray` semantics, use `awkward.fromiter` to convert the arbitrary Python objects into awkward-arrays.",
"_____no_output_____"
]
],
[
[
"jagged = awkward.fromiter(array)\njagged",
"_____no_output_____"
],
[
"jagged[jagged.counts > 0, 0]",
"_____no_output_____"
]
],
[
[
"Doubly nested `JaggedArrays` are a native type in awkward-array: they can be any number of levels deep.",
"_____no_output_____"
]
],
[
[
"jagged.flatten()",
"_____no_output_____"
],
[
"jagged.flatten().flatten()",
"_____no_output_____"
],
[
"jagged.sum()",
"_____no_output_____"
],
[
"jagged.sum().sum()",
"_____no_output_____"
]
],
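[
[
"For intuition, the same flatten-and-sum behavior can be reproduced on plain Python nested lists: each flatten removes one level of nesting. This is only a toy sketch with made-up values, not awkward-array itself:

```python
nested = [[[1.0, 2.0], [3.0]], [], [[4.0]]]

# each flatten removes one level of nesting
flat1 = [inner for outer in nested for inner in outer]
flat2 = [x for inner in flat1 for x in inner]

assert flat1 == [[1.0, 2.0], [3.0], [4.0]]
assert flat2 == [1.0, 2.0, 3.0, 4.0]

# summing twice mirrors jagged.sum().sum()
total = sum(flat2)
```",
"_____no_output_____"
]
],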
[
[
"# Parallel array reading\n\nUproot supports reading, deserialization, and array-building in parallel. All of the array-reading functions have **executor** and **blocking** parameters:\n\n * **executor:** a Python 3 [Executor](https://docs.python.org/3/library/concurrent.futures.html) object, which schedules and runs tasks in parallel;\n * **blocking:** if `True` _(default)_, the array-reading function blocks (waits) until the result is ready, then returns it. If `False`, it immediately returns a zero-argument function that, when called, blocks until the result is ready. This zero-argument function is a simple type of \"future.\"",
"_____no_output_____"
]
],
[
[
"import concurrent.futures\n\n# ThreadPoolExecutor divides work among multiple threads.\n# Avoid ProcessPoolExecutor because the finalized arrays would have to be reserialized to pass between processes.\nexecutor = concurrent.futures.ThreadPoolExecutor()\n\nresult = tree.array(\"fTracks.fVertex[3]\", executor=executor, blocking=False)\nresult",
"_____no_output_____"
]
],
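[
[
"The executor/blocking pattern is plain Python, so the \"future\" idea can be sketched without uproot at all. This is a hypothetical stand-alone example, not part of the uproot API:

```python
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor()

# schedule work on a worker thread; the returned future is a handle to the pending result
future = executor.submit(sum, range(1000))

# ... the main thread is free to do other work here ...

result = future.result()  # block until the result is ready
```

Uproot's `blocking=False` return value plays the same role as `future.result`: a zero-argument callable that waits for completion.",
"_____no_output_____"
]
],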
[
[
"We can work on other things while the array is being read.",
"_____no_output_____"
]
],
[
[
"# and now get the array (waiting, if necessary, for it to complete)\nresult()",
"_____no_output_____"
]
],
[
[
"The **executor** and **blocking** parameters are often used together, but they do not have to be. You can collect data in parallel but let the array-reading function block until it is finished:",
"_____no_output_____"
]
],
[
[
"tree.array(\"fTracks.fVertex[3]\", executor=executor)",
"_____no_output_____"
]
],
[
[
"The other case, non-blocking return without parallel processing (**executor=None** and **blocking=False**) is not very useful because all the work of creating the array would be done on the main thread (meaning: you have to wait) and then you would be returned a zero-argument function to reveal it.\n\n * **executor=None**, **blocking=True**: common case\n * **executor=executor**, **blocking=True**: read in parallel, but wait for it to finish\n * **executor=executor**, **blocking=False**: read in parallel and immediately return a future\n * **executor=None**, **blocking=False**: not useful but not excluded.",
"_____no_output_____"
],
[
"Although parallel processing has been integrated into uproot's design, it only provides a performance improvement in cases that are dominated by read time in non-Python functions. Python's [Global Interpreter Lock](https://realpython.com/python-gil/) (GIL) severely limits parallel scaling of Python calls, but external functions that release the GIL (not all do) are immune.\n\nThus, if reading is slow because the ROOT file has a lot of small TBaskets, requiring uproot to step through them using Python calls, parallelizing that work in many threads has limited benefit because those threads stop and wait for each other due to Python's GIL. If reading is slow because the ROOT file is heavily compressed—for instance, with LZMA—then parallel reading is beneficial and scales well with the number of threads.\n\n<center><img src=\"https://raw.githubusercontent.com/scikit-hep/uproot/master/docs/scaling.png\" width=\"75%\"></center>",
"_____no_output_____"
],
[
"If, on the other hand, processing time is dominated by your analysis code and not file-reading, then parallelizing the file-reading won't help. Instead, you want to [parallelize your whole analysis](https://sebastianraschka.com/Articles/2014_multiprocessing.html), and a good way to do that in Python is with [multiprocessing](https://docs.python.org/3/library/multiprocessing.html) from the Python Standard Library.\n\nIf you do split your analysis into multiple processes, you _probably don't_ want to also parallelize the array-reading within each process. It's easy to make performance worse by making it too complicated. Particle physics analysis is usually embarrassingly parallel, well suited to splitting the work into independent tasks, each of which is single-threaded.\n\nAnother option, of course, is to use a batch system (Condor, Slurm, GRID, etc.). It can be advantageous to parallelize your work across machines with a batch system and across CPU cores with [multiprocessing](https://docs.python.org/3/library/multiprocessing.html).",
"_____no_output_____"
],
[
"# Histograms, TProfiles, TGraphs, and others\n\nTTrees are not the only kinds of objects to analyze in ROOT files; we are also interested in aggregated data in histograms, profiles, and graphs. Uproot uses the ROOT file's \"streamers\" to learn how to deserialize any object, but an anonymous deserialization often isn't useful:",
"_____no_output_____"
]
],
[
[
"file = uproot.open(\"Event.root\")\ndict(file.classes())",
"_____no_output_____"
],
[
"processid = file[\"ProcessID0\"]\nprocessid",
"_____no_output_____"
]
],
[
[
"What is a `TProcessID`?",
"_____no_output_____"
]
],
[
[
"processid._members()",
"_____no_output_____"
]
],
[
[
"Something with an `fName` and `fTitle`...",
"_____no_output_____"
]
],
[
[
"processid._fName, processid._fTitle # note the underscore; these are private members",
"_____no_output_____"
]
],
[
[
"Some C++ classes have Pythonic overloads to make them more useful in Python. Here's a way to find out which ones have been defined so far:",
"_____no_output_____"
]
],
[
[
"import pkgutil\n\n[modname for importer, modname, ispkg in pkgutil.walk_packages(uproot_methods.classes.__path__)]",
"_____no_output_____"
]
],
[
[
"This file contains `TH1F` objects; `TH1F` is a subclass of `TH1`, so the `TH1` methods extend it.",
"_____no_output_____"
]
],
[
[
"file[\"htime\"].edges",
"_____no_output_____"
],
[
"file[\"htime\"].values",
"_____no_output_____"
],
[
"file[\"htime\"].show()",
" 0 0.38739\n +-----------------------------------------------------------+\n[-inf, 0) 0.021839 |*** |\n[0, 1) 0.33352 |*************************************************** |\n[1, 2) 0.30403 |********************************************** |\n[2, 3) 0.32452 |************************************************* |\n[3, 4) 0.35097 |***************************************************** |\n[4, 5) 0.36894 |******************************************************** |\n[5, 6) 0.30728 |*********************************************** |\n[6, 7) 0.30681 |*********************************************** |\n[7, 8) 0.34156 |**************************************************** |\n[8, 9) 0.16151 |************************* |\n[9, 10) 0 | |\n[10, inf] 0 | |\n +-----------------------------------------------------------+\n"
]
],
[
[
"The purpose of most of these methods is to extract data, which includes conversion to common Python formats.",
"_____no_output_____"
]
],
[
[
"uproot.open(\"https://scikit-hep.org/uproot/examples/issue33.root\")[\"cutflow\"].show()",
" 0 41529\n +---------------------------------------------------+\n(underflow) 0 | |\nDijet 39551 |************************************************* |\nMET 27951 |********************************** |\nMuonVeto 27911 |********************************** |\nIsoMuonTrackVeto 27861 |********************************** |\nElectronVeto 27737 |********************************** |\nIsoElectronTrackVeto 27460 |********************************** |\nIsoPionTrackVeto 26751 |********************************* |\n(overflow) 0 | |\n +---------------------------------------------------+\n"
],
[
"file[\"htime\"].pandas()",
"_____no_output_____"
],
[
"print(file[\"htime\"].hepdata())",
"dependent_variables:\n- header:\n name: counts\n units: null\n qualifiers: []\n values:\n - errors:\n - label: stat\n symerror: 0.33352208137512207\n value: 0.33352208137512207\n - errors:\n - label: stat\n symerror: 0.3040299415588379\n value: 0.3040299415588379\n - errors:\n - label: stat\n symerror: 0.32451915740966797\n value: 0.32451915740966797\n - errors:\n - label: stat\n symerror: 0.35097289085388184\n value: 0.35097289085388184\n - errors:\n - label: stat\n symerror: 0.3689420223236084\n value: 0.3689420223236084\n - errors:\n - label: stat\n symerror: 0.3072829246520996\n value: 0.3072829246520996\n - errors:\n - label: stat\n symerror: 0.306812047958374\n value: 0.306812047958374\n - errors:\n - label: stat\n symerror: 0.34156298637390137\n value: 0.34156298637390137\n - errors:\n - label: stat\n symerror: 0.16150808334350586\n value: 0.16150808334350586\n - errors:\n - label: stat\n symerror: 0.0\n value: 0.0\nindependent_variables:\n- header:\n name: Real-Time to write versus time\n units: null\n values:\n - high: 1.0\n low: 0.0\n - high: 2.0\n low: 1.0\n - high: 3.0\n low: 2.0\n - high: 4.0\n low: 3.0\n - high: 5.0\n low: 4.0\n - high: 6.0\n low: 5.0\n - high: 7.0\n low: 6.0\n - high: 8.0\n low: 7.0\n - high: 9.0\n low: 8.0\n - high: 10.0\n low: 9.0\n\n"
]
],
[
[
"Numpy histograms, used as a common format through the scientific Python ecosystem, are just a tuple of counts/bin contents and edge positions. (There's one more edge than contents to cover left and right.)",
"_____no_output_____"
]
],
[
[
"file[\"htime\"].numpy()",
"_____no_output_____"
],
[
"uproot.open(\"samples/hepdata-example.root\")[\"hpxpy\"].numpy()",
"_____no_output_____"
]
],
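[
[
"The tuple structure described above is easy to check with Numpy alone: one array of bin contents and one array of edges, with one more edge than contents. This sketch is independent of uproot:

```python
import numpy

counts, edges = numpy.histogram(numpy.random.normal(0, 1, 10000), bins=10)

# one more edge than contents: each bin is bounded on the left and right
assert len(edges) == len(counts) + 1

# with the default range, every sample lands in some bin
assert counts.sum() == 10000
```",
"_____no_output_____"
]
],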
[
[
"# Creating and writing data to ROOT files\n\nUproot has a limited (but growing!) ability to _write_ ROOT files. Two types currently supported are `TObjString` (for debugging) and histograms.\n\nTo write to a ROOT file in uproot, the file must be opened for writing using `uproot.create`, `uproot.recreate`, or `uproot.update` (corresponding to ROOT's `\"CREATE\"`, `\"RECREATE\"`, and `\"UPDATE\"` file modes). The compression level is given by `uproot.ZLIB(n)`, `uproot.LZMA(n)`, `uproot.LZ4(n)`, or `None`.",
"_____no_output_____"
]
],
[
[
"file = uproot.recreate(\"tmp.root\", compression=uproot.ZLIB(4))",
"_____no_output_____"
]
],
[
[
"Unlike objects created by [uproot.open](https://uproot.readthedocs.io/en/latest/opening-files.html#uproot-open), you can _assign_ to this `file`. Just as reading behaves like getting an object from a Python dict, writing behaves like putting an object into a Python dict.\n\n**Note:** this is a fundamental departure from how ROOT uses names. In ROOT, a name is a part of an object that is _also_ used for lookup. With a dict-like interface, the object need not have a name; only the lookup mechanism (e.g. [ROOTDirectory](https://uproot.readthedocs.io/en/latest/root-io.html#uproot-rootio-rootdirectory)) needs to manage names.\n\nWhen you write objects to the ROOT file, they can be unnamed things like a Python string, but they get \"stamped\" with the lookup name once they go into the file.",
"_____no_output_____"
]
],
[
[
"file[\"name\"] = \"Some object, like a TObjString.\"",
"_____no_output_____"
]
],
[
[
"The object is now in the file. ROOT would be able to open this file and read the data, like this:\n\n```c++\nroot [0] auto file = TFile::Open(\"tmp.root\");\nroot [1] file->ls();\n```\n```\nTFile**\t\ttmp.root\t\n TFile*\t\ttmp.root\t\n KEY: TObjString\tname;1\tCollectable string class\n```\n```c++\nroot [2] TObjString* data;\nroot [3] file->GetObject(\"name\", data);\nroot [4] data->GetString()\n```\n```\n(const TString &) \"Some object, like a TObjString.\"[31]\n```\n\nWe can also read it back in uproot, like this:",
"_____no_output_____"
]
],
[
[
"file.keys()",
"_____no_output_____"
],
[
"dict(file.classes())",
"_____no_output_____"
],
[
"file[\"name\"]",
"_____no_output_____"
]
],
[
[
"(Notice that it lost its encoding—it is now a bytestring.)",
"_____no_output_____"
],
[
"## Writing histograms\n\nHistograms can be written to the file in the same way: by assignment (choosing a name at the time of assignment). The histograms may be taken from another file and modified,",
"_____no_output_____"
]
],
[
[
"histogram = uproot.open(\"https://scikit-hep.org/uproot/examples/histograms.root\")[\"one\"]\nhistogram.show()\nnorm = histogram.allvalues.sum()\nfor i in range(len(histogram)):\n histogram[i] /= norm\nhistogram.show()\n \nfile[\"normalized\"] = histogram",
" 0 2410.8\n +------------------------------------------------------------+\n[-inf, -3) 0 | |\n[-3, -2.4) 68 |** |\n[-2.4, -1.8) 285 |******* |\n[-1.8, -1.2) 755 |******************* |\n[-1.2, -0.6) 1580 |*************************************** |\n[-0.6, 0) 2296 |********************************************************* |\n[0, 0.6) 2286 |********************************************************* |\n[0.6, 1.2) 1570 |*************************************** |\n[1.2, 1.8) 795 |******************** |\n[1.8, 2.4) 289 |******* |\n[2.4, 3) 76 |** |\n[3, inf] 0 | |\n +------------------------------------------------------------+\n 0 0.24108\n +----------------------------------------------------------+\n[-inf, -3) 0 | |\n[-3, -2.4) 0.0068 |** |\n[-2.4, -1.8) 0.0285 |******* |\n[-1.8, -1.2) 0.0755 |****************** |\n[-1.2, -0.6) 0.158 |************************************** |\n[-0.6, 0) 0.2296 |******************************************************* |\n[0, 0.6) 0.2286 |******************************************************* |\n[0.6, 1.2) 0.157 |************************************** |\n[1.2, 1.8) 0.0795 |******************* |\n[1.8, 2.4) 0.0289 |******* |\n[2.4, 3) 0.0076 |** |\n[3, inf] 0 | |\n +----------------------------------------------------------+\n"
]
],
[
[
"or it may be created entirely in Python.",
"_____no_output_____"
]
],
[
[
"import types\nimport uproot_methods.classes.TH1\n\nclass MyTH1(uproot_methods.classes.TH1.Methods, list):\n def __init__(self, low, high, values, title=\"\"):\n self._fXaxis = types.SimpleNamespace()\n self._fXaxis._fNbins = len(values)\n self._fXaxis._fXmin = low\n self._fXaxis._fXmax = high\n for x in values:\n self.append(float(x))\n self._fTitle = title\n self._classname = \"TH1F\"\n \nhistogram = MyTH1(-5, 5, [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0])\n\nfile[\"synthetic\"] = histogram",
"_____no_output_____"
],
[
"file[\"synthetic\"].show()",
" 0 1.05\n +--------------------------------------------------------+\n[-inf, -5) 0 | |\n[-5, -4.1667) 1 |***************************************************** |\n[-4.1667, -3.3333) 1 |***************************************************** |\n[-3.3333, -2.5) 1 |***************************************************** |\n[-2.5, -1.6667) 1 |***************************************************** |\n[-1.6667, -0.83333) 1 |***************************************************** |\n[-0.83333, 0) 1 |***************************************************** |\n[0, 0.83333) 1 |***************************************************** |\n[0.83333, 1.6667) 1 |***************************************************** |\n[1.6667, 2.5) 1 |***************************************************** |\n[2.5, 3.3333) 1 |***************************************************** |\n[5, inf] 0 | |\n +--------------------------------------------------------+\n"
]
],
[
[
"But it is particularly useful that uproot recognizes [Numpy histograms](https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html), which may have come from other libraries.",
"_____no_output_____"
]
],
[
[
"file[\"from_numpy\"] = numpy.histogram(numpy.random.normal(0, 1, 10000))",
"_____no_output_____"
],
[
"file[\"from_numpy\"].show()",
" 0 3209.8\n +----------------------------------------------------+\n[-inf, -4.2249) 0 | |\n[-4.2249, -3.3867) 6 | |\n[-3.3867, -2.5484) 63 |* |\n[-2.5484, -1.7101) 376 |****** |\n[-1.7101, -0.8718) 1438 |*********************** |\n[-0.8718, -0.033514) 2975 |************************************************ |\n[-0.033514, 0.80477) 3057 |************************************************** |\n[0.80477, 1.6431) 1570 |************************* |\n[1.6431, 2.4813) 442 |******* |\n[2.4813, 3.3196) 66 |* |\n[3.3196, 4.1579) 7 | |\n[4.1579, inf] 0 | |\n +----------------------------------------------------+\n"
],
[
"file[\"from_numpy2d\"] = numpy.histogram2d(numpy.random.normal(0, 1, 10000), numpy.random.normal(0, 1, 10000))",
"_____no_output_____"
],
[
"file[\"from_numpy2d\"].numpy()",
"_____no_output_____"
]
],
[
[
"## Writing TTrees\n\nUproot can now write TTrees (documented on the [main README](https://github.com/scikit-hep/uproot#writing-ttrees)), but the interactive tutorial has not been written.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecd83f27b0dcbfd22227f25bd6aa5fe2f0bd190a | 577,172 | ipynb | Jupyter Notebook | copy_akwilson/model_analysis.ipynb | brianpm/TemperatureExtremes | 28823d8029b29e54ddd88693fece8003c63c1a13 | [
"MIT"
] | 2 | 2020-07-23T21:30:00.000Z | 2021-07-19T21:17:02.000Z | copy_akwilson/model_analysis.ipynb | brianpm/TemperatureExtremes | 28823d8029b29e54ddd88693fece8003c63c1a13 | [
"MIT"
] | null | null | null | copy_akwilson/model_analysis.ipynb | brianpm/TemperatureExtremes | 28823d8029b29e54ddd88693fece8003c63c1a13 | [
"MIT"
] | null | null | null | 356.940012 | 94,824 | 0.926566 | [
[
[
"# for numbers\nimport xarray as xr\nimport numpy as np\nimport pandas as pd\n\n# for figures\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs",
"/project/amp/akwilson/testdata/ENTER/lib/python3.7/site-packages/dask/config.py:168: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\n data = yaml.load(f.read()) or {}\n/project/amp/akwilson/testdata/ENTER/lib/python3.7/site-packages/distributed/config.py:20: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\n defaults = yaml.load(f)\n"
],
[
"# An \"anonymous function\" to print the max value of an Xarray DataArray\n# (.item() replaces the deprecated np.asscalar)\nprintMax = lambda x: print(x.max().values.item())",
"_____no_output_____"
],
[
"def quick_map(lons, lats, data, title=None, **kwargs):\n f, a = plt.subplots(subplot_kw={\"projection\":ccrs.Robinson()})\n # pass in norm as kwarg if needed\n # norm = mpl.colors.Normalize(vmin=1979, vmax=2019)\n img = a.pcolormesh(lons, lats, data, transform=ccrs.PlateCarree(), **kwargs)\n a.set_title(title)\n f.colorbar(img, shrink=0.4)\n return f, a",
"_____no_output_____"
],
[
"# Slightly fancier way:\n# from pathlib import Path\n# stem = Path(\"/project/amp/brianpm/TemperatureExtremes/Derived\")\n# fil = \"CPC_tmax_90pct_event_attributes_compressed.nc\"\n# ds = xr.open_dataset(stem/fil)\n\n# Slightly easier way:\nstem = \"/project/amp/brianpm/TemperatureExtremes/Derived/\"\nfil = \"f.e13.FAMIPC5CN.ne30_ne30.beta17.TREFMXAV.90pct_event_attributes_compressed.nc\"\nds = xr.open_dataset(stem+fil)",
"_____no_output_____"
],
[
"print(\"DataSet Information\")\nprint(f\"Dimensions {ds.dims}\")\nprint(f\"Variables: {ds.data_vars}\")",
"DataSet Information\nDimensions Frozen(SortedKeysDict({'events': 1226, 'lat': 360, 'lon': 720}))\nVariables: Data variables:\n Event_ID (events, lat, lon) float64 ...\n initial_index (events, lat, lon) float64 ...\n duration (events, lat, lon) float64 ...\n"
],
[
"ids = ds['Event_ID'] # each point has a series of events that are labeled as increasing integers\nd = ds['duration'] # NOTE: the 1st entry in duration, i.e., d[0,:,:], is the number of zeros counted, so is number of non-event days.\ninit = ds['initial_index'].astype(int) # the index value of `time` dimension from original data. One value per event gives the first day of the event.",
"_____no_output_____"
],
[
"# get dimensions out, set up lons, lats meshgrid for plots\n# nlat and nlon are used later for indexing\nlon = ds['lon']\nlat = ds['lat']\nnlat = len(ds['lat'])\nnlon = len(ds['lon'])\nlons, lats = np.meshgrid(lon, lat)",
"_____no_output_____"
],
[
"import cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nimport cartopy.io.shapereader as shapereader",
"_____no_output_____"
],
[
"fig001, ax001 = plt.subplots(subplot_kw={\"projection\":ccrs.Robinson()})\nnorm = mpl.colors.Normalize(vmin=0, vmax=1000)\nimg001 = ax001.pcolormesh(lons, lats,ids.max(dim='events'), transform=ccrs.PlateCarree(), norm=norm)\nax001.set_title(\"Total Number of Events in CESM1\")\nax001.add_feature(cfeature.COASTLINE)\nfig001.colorbar(img001, shrink=0.4)\nplt.savefig(\"/project/amp/akwilson/testdata/totalnumbermodel_figure_001.tiff\")",
"_____no_output_____"
],
[
"# Use boulder as a test example\nboulder = ds.sel(lat=40, lon=360-105, method='nearest')\n# \nboulder",
"_____no_output_____"
],
[
"boulder['duration'].max().values",
"_____no_output_____"
],
[
"printMax(boulder['duration'][1:])",
"8.0\n"
]
],
[
[
"# Duration\nMake a map of the longest event at each location.",
"_____no_output_____"
]
],
[
[
"# define the longest event at each location\n# skip the 0th element because there are always mostly \"non-events\"\nd_longest = d[1:, :, :].max(dim='events')",
"_____no_output_____"
],
[
"fig002, ax002 = plt.subplots(subplot_kw={\"projection\":ccrs.Robinson()})\nnorm = mpl.colors.Normalize(vmin=1, vmax=30)\nimg002 = ax002.pcolormesh(lons, lats, d_longest, transform=ccrs.PlateCarree(), norm=norm)\nax002.set_title(\"Longest Known Event in CESM1\")\nax002.add_feature(cfeature.COASTLINE)\nfig002.colorbar(img002, shrink=0.4)\nplt.savefig(\"/project/amp/akwilson/testdata/durationmodel_figure_001.tiff\")",
"_____no_output_____"
]
],
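The `d[1:, :, :].max(dim='events')` reduction above has a simple NumPy analogue. A minimal sketch with made-up numbers, shrinking the `(events, lat, lon)` array to shape `(3, 2, 2)` (slot 0 holds the non-event counts and is skipped, as in the notebook):

```python
import numpy as np

# Toy duration array: axis 0 = event slots, axes 1-2 = lat, lon.
# Slot 0 counts non-event days and must be excluded from the max.
d = np.array([
    [[300, 280], [310, 295]],   # slot 0: non-event day counts (ignored)
    [[3,   5],   [2,   8]],     # event 1 durations
    [[6,   1],   [4,   0]],     # event 2 durations
])

# Same reduction as d[1:, :, :].max(dim='events') in xarray:
d_longest = d[1:].max(axis=0)
print(d_longest)   # longest event at each grid point
```

The xarray version does the same thing, but keeps the `lat`/`lon` coordinates attached to the result.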
[
[
"# Going back to time\n\nWe use `initial_index` to figure out when each event happens.\n\nThe `initial_index` is the position from the original data set. This is fragile because it depends on being able to re-load the exact same data in order to get correct times.\n\n(To-Do: Brian should change the script to return the `time since <reference date>` in the file. That would make it much easier to deal with here.)\n\nFor now, we deal with it, and make a map of the year in which the longest event occurred. Of course, for many locations there will be ties, so we get more than one event. For now, we just take the first one. This choice makes sense, but would be more justified if we also were breaking things down by season or month.",
"_____no_output_____"
]
],
[
[
"init",
"_____no_output_____"
],
[
"# Go back to the original data, but we don't need the actual values here,\n# just the time coordinate\nds_orig = xr.open_mfdataset(\"/project/amp/brianpm/TemperatureExtremes/TREFMXAV/f.e13.FAMIPC5CN.ne30_ne30.beta17.t3.cam.h1.TREFMXAV.19650101-20051231.nc\")\nt = ds_orig['time']",
"_____no_output_____"
],
[
"# Use boulder as our testing example\nb_lg = np.where( boulder['duration'][1:] == boulder['duration'][1:].max(), 1, 0 )\nlongest_ndx = np.argwhere((boulder['duration'][1:] == boulder['duration'][1:].max()).values)\nboulder_longest_initial_time = boulder['initial_index'][np.asscalar(longest_ndx)+1]\nprint(boulder_longest_initial_time)\nt[int(np.asscalar(boulder_longest_initial_time.values))]",
"_____no_output_____"
],
[
"# Apply similar approach for the whole data set\nndx_long = np.zeros((nlat, nlon)) # basically allocating memory\ndv = d.values # duration data, but converted from Xarray DataArray to numpy array\niv = init.values # initial index data, as numpy array\n\n# Loop over latitude (print), loop over longitude\n# check if the longest duration event is zero (ocean in obs), skip if yes\n# otherwise use np.argwhere() to get indices where duration == longest at that location \n# IF ties (i.e., multiple events with same longest duration), take the first one (.min())\n# put the time index into ndx_long[i, j]\nfor i in range(nlat):\n print(f\"working on latitude {ds['lat'][i].values}\")\n for j in range(nlon):\n if d_longest[i, j] == 0:\n ndx_long[i, j] = np.nan\n else:\n longest_ndx = np.argwhere(dv[1:, i, j] == d_longest[i,j].values).min()\n if np.isnan(longest_ndx):\n print(f\"checking on point {(i, j)} ---> the value we get is {longest_ndx} (add one)\")\n break\n ndx_long[i,j] = iv[longest_ndx+1, i, j]\n",
"working on latitude 89.75\nworking on latitude 89.25\nworking on latitude 88.75\nworking on latitude 88.25\nworking on latitude 87.75\nworking on latitude 87.25\nworking on latitude 86.75\nworking on latitude 86.25\nworking on latitude 85.75\nworking on latitude 85.25\nworking on latitude 84.75\nworking on latitude 84.25\nworking on latitude 83.75\nworking on latitude 83.25\nworking on latitude 82.75\nworking on latitude 82.25\nworking on latitude 81.75\nworking on latitude 81.25\nworking on latitude 80.75\nworking on latitude 80.25\nworking on latitude 79.75\nworking on latitude 79.25\nworking on latitude 78.75\nworking on latitude 78.25\nworking on latitude 77.75\nworking on latitude 77.25\nworking on latitude 76.75\nworking on latitude 76.25\nworking on latitude 75.75\nworking on latitude 75.25\nworking on latitude 74.75\nworking on latitude 74.25\nworking on latitude 73.75\nworking on latitude 73.25\nworking on latitude 72.75\nworking on latitude 72.25\nworking on latitude 71.75\nworking on latitude 71.25\nworking on latitude 70.75\nworking on latitude 70.25\nworking on latitude 69.75\nworking on latitude 69.25\nworking on latitude 68.75\nworking on latitude 68.25\nworking on latitude 67.75\nworking on latitude 67.25\nworking on latitude 66.75\nworking on latitude 66.25\nworking on latitude 65.75\nworking on latitude 65.25\nworking on latitude 64.75\nworking on latitude 64.25\nworking on latitude 63.75\nworking on latitude 63.25\nworking on latitude 62.75\nworking on latitude 62.25\nworking on latitude 61.75\nworking on latitude 61.25\nworking on latitude 60.75\nworking on latitude 60.25\nworking on latitude 59.75\nworking on latitude 59.25\nworking on latitude 58.75\nworking on latitude 58.25\nworking on latitude 57.75\nworking on latitude 57.25\nworking on latitude 56.75\nworking on latitude 56.25\nworking on latitude 55.75\nworking on latitude 55.25\nworking on latitude 54.75\nworking on latitude 54.25\nworking on latitude 53.75\nworking on latitude 
53.25\nworking on latitude 52.75\nworking on latitude 52.25\nworking on latitude 51.75\nworking on latitude 51.25\nworking on latitude 50.75\nworking on latitude 50.25\nworking on latitude 49.75\nworking on latitude 49.25\nworking on latitude 48.75\nworking on latitude 48.25\nworking on latitude 47.75\nworking on latitude 47.25\nworking on latitude 46.75\nworking on latitude 46.25\nworking on latitude 45.75\nworking on latitude 45.25\nworking on latitude 44.75\nworking on latitude 44.25\nworking on latitude 43.75\nworking on latitude 43.25\nworking on latitude 42.75\nworking on latitude 42.25\nworking on latitude 41.75\nworking on latitude 41.25\nworking on latitude 40.75\nworking on latitude 40.25\nworking on latitude 39.75\nworking on latitude 39.25\nworking on latitude 38.75\nworking on latitude 38.25\nworking on latitude 37.75\nworking on latitude 37.25\nworking on latitude 36.75\nworking on latitude 36.25\nworking on latitude 35.75\nworking on latitude 35.25\nworking on latitude 34.75\nworking on latitude 34.25\nworking on latitude 33.75\nworking on latitude 33.25\nworking on latitude 32.75\nworking on latitude 32.25\nworking on latitude 31.75\nworking on latitude 31.25\nworking on latitude 30.75\nworking on latitude 30.25\nworking on latitude 29.75\nworking on latitude 29.25\nworking on latitude 28.75\nworking on latitude 28.25\nworking on latitude 27.75\nworking on latitude 27.25\nworking on latitude 26.75\nworking on latitude 26.25\nworking on latitude 25.75\nworking on latitude 25.25\nworking on latitude 24.75\nworking on latitude 24.25\nworking on latitude 23.75\nworking on latitude 23.25\nworking on latitude 22.75\nworking on latitude 22.25\nworking on latitude 21.75\nworking on latitude 21.25\nworking on latitude 20.75\nworking on latitude 20.25\nworking on latitude 19.75\nworking on latitude 19.25\nworking on latitude 18.75\nworking on latitude 18.25\nworking on latitude 17.75\nworking on latitude 17.25\nworking on latitude 16.75\nworking on latitude 
16.25\nworking on latitude 15.75\nworking on latitude 15.25\nworking on latitude 14.75\nworking on latitude 14.25\nworking on latitude 13.75\nworking on latitude 13.25\nworking on latitude 12.75\nworking on latitude 12.25\nworking on latitude 11.75\nworking on latitude 11.25\nworking on latitude 10.75\nworking on latitude 10.25\nworking on latitude 9.75\nworking on latitude 9.25\nworking on latitude 8.75\nworking on latitude 8.25\nworking on latitude 7.75\nworking on latitude 7.25\nworking on latitude 6.75\nworking on latitude 6.25\nworking on latitude 5.75\nworking on latitude 5.25\nworking on latitude 4.75\nworking on latitude 4.25\nworking on latitude 3.75\nworking on latitude 3.25\nworking on latitude 2.75\nworking on latitude 2.25\nworking on latitude 1.75\nworking on latitude 1.25\nworking on latitude 0.75\nworking on latitude 0.25\nworking on latitude -0.25\nworking on latitude -0.75\nworking on latitude -1.25\nworking on latitude -1.75\nworking on latitude -2.25\nworking on latitude -2.75\nworking on latitude -3.25\nworking on latitude -3.75\nworking on latitude -4.25\nworking on latitude -4.75\nworking on latitude -5.25\nworking on latitude -5.75\nworking on latitude -6.25\nworking on latitude -6.75\nworking on latitude -7.25\nworking on latitude -7.75\nworking on latitude -8.25\nworking on latitude -8.75\nworking on latitude -9.25\nworking on latitude -9.75\nworking on latitude -10.25\nworking on latitude -10.75\nworking on latitude -11.25\nworking on latitude -11.75\nworking on latitude -12.25\nworking on latitude -12.75\nworking on latitude -13.25\nworking on latitude -13.75\nworking on latitude -14.25\nworking on latitude -14.75\nworking on latitude -15.25\nworking on latitude -15.75\nworking on latitude -16.25\nworking on latitude -16.75\nworking on latitude -17.25\nworking on latitude -17.75\nworking on latitude -18.25\nworking on latitude -18.75\nworking on latitude -19.25\nworking on latitude -19.75\nworking on latitude -20.25\nworking on latitude 
-20.75\nworking on latitude -21.25\nworking on latitude -21.75\nworking on latitude -22.25\nworking on latitude -22.75\nworking on latitude -23.25\nworking on latitude -23.75\nworking on latitude -24.25\nworking on latitude -24.75\nworking on latitude -25.25\nworking on latitude -25.75\nworking on latitude -26.25\nworking on latitude -26.75\nworking on latitude -27.25\nworking on latitude -27.75\nworking on latitude -28.25\nworking on latitude -28.75\nworking on latitude -29.25\nworking on latitude -29.75\nworking on latitude -30.25\nworking on latitude -30.75\nworking on latitude -31.25\nworking on latitude -31.75\nworking on latitude -32.25\nworking on latitude -32.75\nworking on latitude -33.25\nworking on latitude -33.75\nworking on latitude -34.25\nworking on latitude -34.75\nworking on latitude -35.25\nworking on latitude -35.75\nworking on latitude -36.25\nworking on latitude -36.75\nworking on latitude -37.25\nworking on latitude -37.75\nworking on latitude -38.25\nworking on latitude -38.75\nworking on latitude -39.25\nworking on latitude -39.75\nworking on latitude -40.25\nworking on latitude -40.75\nworking on latitude -41.25\nworking on latitude -41.75\nworking on latitude -42.25\nworking on latitude -42.75\nworking on latitude -43.25\nworking on latitude -43.75\nworking on latitude -44.25\nworking on latitude -44.75\nworking on latitude -45.25\nworking on latitude -45.75\nworking on latitude -46.25\nworking on latitude -46.75\nworking on latitude -47.25\nworking on latitude -47.75\nworking on latitude -48.25\nworking on latitude -48.75\nworking on latitude -49.25\nworking on latitude -49.75\nworking on latitude -50.25\nworking on latitude -50.75\nworking on latitude -51.25\nworking on latitude -51.75\nworking on latitude -52.25\nworking on latitude -52.75\nworking on latitude -53.25\nworking on latitude -53.75\nworking on latitude -54.25\nworking on latitude -54.75\nworking on latitude -55.25\nworking on latitude -55.75\nworking on latitude 
-56.25\nworking on latitude -56.75\nworking on latitude -57.25\nworking on latitude -57.75\nworking on latitude -58.25\nworking on latitude -58.75\nworking on latitude -59.25\nworking on latitude -59.75\nworking on latitude -60.25\nworking on latitude -60.75\nworking on latitude -61.25\nworking on latitude -61.75\nworking on latitude -62.25\nworking on latitude -62.75\nworking on latitude -63.25\nworking on latitude -63.75\nworking on latitude -64.25\nworking on latitude -64.75\nworking on latitude -65.25\nworking on latitude -65.75\nworking on latitude -66.25\nworking on latitude -66.75\nworking on latitude -67.25\nworking on latitude -67.75\nworking on latitude -68.25\nworking on latitude -68.75\nworking on latitude -69.25\nworking on latitude -69.75\nworking on latitude -70.25\nworking on latitude -70.75\nworking on latitude -71.25\nworking on latitude -71.75\nworking on latitude -72.25\nworking on latitude -72.75\nworking on latitude -73.25\nworking on latitude -73.75\nworking on latitude -74.25\nworking on latitude -74.75\nworking on latitude -75.25\nworking on latitude -75.75\nworking on latitude -76.25\nworking on latitude -76.75\nworking on latitude -77.25\nworking on latitude -77.75\nworking on latitude -78.25\nworking on latitude -78.75\nworking on latitude -79.25\nworking on latitude -79.75\nworking on latitude -80.25\nworking on latitude -80.75\nworking on latitude -81.25\nworking on latitude -81.75\nworking on latitude -82.25\nworking on latitude -82.75\nworking on latitude -83.25\nworking on latitude -83.75\nworking on latitude -84.25\nworking on latitude -84.75\nworking on latitude -85.25\nworking on latitude -85.75\nworking on latitude -86.25\nworking on latitude -86.75\nworking on latitude -87.25\nworking on latitude -87.75\nworking on latitude -88.25\nworking on latitude -88.75\nworking on latitude -89.25\nworking on latitude -89.75\n"
],
[
"y = np.zeros((nlat,nlon)) # initial a year array\n\n# loop over lat, lon (again!?!?!)\nfor i in range(nlat):\n# print(f\"working on latitude {ds['lat'][i].values}\")\n for j in range(nlon):\n tmp = ndx_long[i,j]\n if not np.isnan(tmp):\n y[i,j] = t[int(tmp)].dt.year # go get the year of this time index, put it in y",
"_____no_output_____"
],
[
"norm = mpl.colors.Normalize(vmin=1979, vmax=2005)\nfig003a, ax003a = quick_map(lons, lats, y, title=\"Year of Longest Event in CESM1\", **{'norm':norm})\nax003a.add_feature(cfeature.COASTLINE)\nplt.savefig(\"/project/amp/akwilson/testdata/yearoflongestmodel_figure_001.tiff\")",
"_____no_output_____"
],
[
"yprime = np.where(y < 1995, np.nan, y)",
"_____no_output_____"
],
[
"lons, lats = np.meshgrid(ds['lon'], ds['lat'])\nfig, ax = plt.subplots(subplot_kw={\"projection\":ccrs.PlateCarree()})\n\n#fig003a, ax003a = quick_map(lons, lats, y, title=\"Year of Longest Event\")\n\n# img = ax.contourf(lons, lats, y, levels=[1996,1997,1998,1999,2000,2001,2002,2003,2004,2005],cmap='RdBu',vmin=1996, vmax=2005)\n\nimg = ax.contourf(lons, lats, yprime, levels=np.linspace(1995,2005,6), cmap='Reds',vmin=1996, vmax=2005, transform=ccrs.PlateCarree())\n\n# countries = cfeature.NaturalEarthFeature(\n# category='cultural',\n# name='admin_0_countries',\n# scale='50m',\n# facecolor='none')\n\nax.add_feature(cfeature.OCEAN, zorder=3)\n# ax.add_feature(cfeature.LAND)\n\n\n# ax.add_feature(cfeature.COASTLINE)\nax.coastlines()\nax.set_title(\"Heatwave Events after 1995 in CESM1\")\n\ncb = fig.colorbar(img, shrink=0.6)\ncb.set_label(\"Year\")\n\n\nplt.savefig(\"/project/amp/akwilson/testdata/yearlongestmodel1995event_figure_001.tiff\")",
"_____no_output_____"
]
],
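The tie-breaking logic in the big loop above (`np.argwhere(... == max).min()`, then `+1` to undo the slot-0 offset) can be checked on a toy 1-D duration series; the numbers here are invented for illustration:

```python
import numpy as np

# Durations for one grid point: slot 0 is the non-event count, and two
# events tie for longest (duration 8). Ties resolve to the first event.
dur = np.array([350, 3, 8, 2, 8])

longest = dur[1:].max()                                # 8
first_longest = np.argwhere(dur[1:] == longest).min()  # index within dur[1:]
print(first_longest + 1)                               # +1 restores the slot-0 offset
```

With this series, `first_longest + 1` is 2: the first of the two tied events.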
[
[
"# Frequency\n\nHow often do events occur?\n\nThe first thing to do is figure out the average number of events per year.\n",
"_____no_output_____"
]
],
[
[
"# go back to our original detection file:\nds_events = xr.open_dataset(stem+\"CPC_tmax_90pct_event_detection.nc\")\nevents = ds_events['Event_ID'] # [time, lat, lon], data is the same event id as above, but now with the values repeated identifying the whole event.",
"_____no_output_____"
],
[
"# There are a few ways we could do this.\n# One is just to count the number of days that qualify in each year. Probably not a great definition, but let's take a look.\n\nevent_binary_np = np.where(events.values > 0, 1, 0)\nevent_binary = xr.DataArray(event_binary_np, coords=events.coords, dims=events.dims)\n\nnum_per_year = event_binary.groupby('time.year').sum(dim='time') # ['year', lat, lon]\n",
"_____no_output_____"
],
[
"# look at the minimum and maximum\nnorm = mpl.colors.Normalize(vmin=0, vmax=200)\nfig004a, ax004a = quick_map(lons, lats, num_per_year.max(dim='year'), title=\"Max Events per year\", **{'norm':norm}) \nfig004b, ax004b = quick_map(lons, lats, num_per_year.min(dim='year'), title=\"Min Events per year\") \nplt.savefig(\"/project/amp/akwilson/testdata/maxandmineventsmodelupdate_figure_001.jpg\")",
"_____no_output_____"
],
[
"norm = mpl.colors.Normalize(vmin=0, vmax=15)\nfig004b, ax004b = quick_map(lons, lats, num_per_year.min(dim='year'), title=\"Min Events per year\") \nplt.savefig(\"/project/amp/akwilson/testdata/mineventsmodelupdate_figure_001.jpg\")",
"_____no_output_____"
],
[
"# probably it makes more sense to take a look at the average number of events per year\nevents_a = np.where(events.values > 0, events.values, np.nan) # exclude zeros\nevents_a = xr.DataArray(events_a, coords=events.coords, dims=events.dims)\nfirst_event_each_year = events_a.groupby('time.year').min(dim='time')\nlast_event_each_year = events_a.groupby('time.year').max(dim='time')\nevents_per_year = (last_event_each_year - first_event_each_year)+1 # ['year', lat, lon]\navg_events_per_year = events_per_year.mean(dim='year')\n",
"/project/amp/akwilson/testdata/ENTER/lib/python3.7/site-packages/xarray/core/nanops.py:159: RuntimeWarning: Mean of empty slice\n return np.nanmean(a, axis=axis, dtype=dtype)\n"
],
[
"f005, a005 = quick_map(lons, lats, avg_events_per_year, title='Avg Events per year')\nplt.savefig(\"/project/amp/akwilson/testdata/averageventsmodel_figure_001.jpg\")",
"_____no_output_____"
]
],
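The two counting strategies above — qualifying days per year versus distinct events via first/last event ID — can be mimicked on a toy 1-D series. The years and IDs below are made up to mirror the logic at a single grid point:

```python
import numpy as np

# Toy daily event-ID series (0 = no event), spanning two invented years.
years = np.array([2000] * 5 + [2001] * 5)
ids = np.array([0, 1, 1, 0, 2, 0, 0, 3, 3, 3])

days_per_year, events_per_year = {}, {}
for yr in np.unique(years):
    sel = ids[years == yr]
    days_per_year[yr] = int((sel > 0).sum())             # count qualifying days
    ev = sel[sel > 0]
    events_per_year[yr] = int(ev.max() - ev.min()) + 1   # last ID - first ID + 1
print(days_per_year, events_per_year)
```

The day count and the event count can disagree substantially (here 2000 has 3 qualifying days but 2 events), which is why the notebook moves on to the average-events-per-year metric.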
[
[
"# Doing something by month\n\nSince we probably care about Tmax extremes during summer more than winter.\n\nWe show how to do this for a single location (Boulder). \n\nIt seems a little more complicated for the full dataset (partly because of different number of events at each location). But we can make summary statistics as needed, even if it requires a little bit of acrobatics.",
"_____no_output_____"
]
],
[
[
"amonth = events_a.isel(time=(events_a['time.month'] == 8))\nyearly_first = amonth.groupby('time.year').min(dim='time')\nyearly_last = amonth.groupby('time.year').max(dim='time')\nyear_num = (yearly_last - yearly_first)+1",
"_____no_output_____"
],
[
"faug, aaug = quick_map(lons, lats, year_num.mean(dim='year'), title=\"Avg Num Aug Events\")",
"/project/amp/brianpm/miniconda3/lib/python3.7/site-packages/xarray/core/nanops.py:161: RuntimeWarning: Mean of empty slice\n return np.nanmean(a, axis=axis, dtype=dtype)\n"
],
[
"# avg duration of august events\n\n# could\n# 1. extract august events from detections (use amonth) and derive duration\n# 2. use duration from attributes, back out august events",
"_____no_output_____"
],
[
"# Try to start again with Boulder // # How long are august events in boulder\n\n# init = ds['initial_index']\n# time is t\nz = boulder['initial_index'].astype(int) # start times of the events (as integers)\nzz = t[z.max()].values # date of last event // not needed\nboulder_times = t[z[z != 0]] # makes an xarray dataarray of datetime objects\n# how many events in boulder\nprint(boulder['Event_ID'].max())\nprint(len(boulder_times))",
"<xarray.DataArray 'Event_ID' ()>\narray(863.)\nCoordinates:\n lat float64 40.25\n lon float64 255.2\n863\n"
],
[
"bd = boulder['duration'][z != 0] # use the same indices to extract the series of durations for boulder events\n# longest boulder event\n# boulder['duration'][z != 0].max()\n\n# boulder - duration - august only:\nbd_aug = bd[boulder_times['time.month'] == 8]\n# average duration of august events in boulder\nprint(f\"Average duration of august events in boulder: {bd_aug.mean().values} days\")\n\n# the actual dates of the august events\nboulder_aug_events = boulder_times[ boulder_times['time.month'] == 8 ]\n",
"Average duration of august events in boulder: 1.662162162162162 days\n"
],
[
"boulder_aug_events['time.year'] # with a series of datetimes we can do things like get the year/month/day",
"_____no_output_____"
],
[
"xt = t[z[z != 0]]\nxt['time.year']",
"_____no_output_____"
],
[
"def loc_times(xy, time):\n a = time[xy[xy != 0]]\n return np.where(xy == 0, np.nan, a)\n\n\n",
"_____no_output_____"
],
[
"# now the trick is to apply the same approach to the multi-dimensional array\n# make an array of datetime objects\n\n# worst case, just loop through all points.\n\n# say we want to get the number of events in each month.\n\n# easiest way is probably to go through the event detection data, then:\n# iyrs = set(events_a.time.dt.year.values)\n\n\n# THIS IS FINE, BUT DOES NOT SOLVE THE PROBLEM OF GETTING DURATION YET.\n# need to have pandas imported\ndates = []\nfor year in range(1979, 2019):\n for month in range(1,13):\n dstr = f\"{year}-{month:02d}\"\n dates.append(pd.to_datetime(f\"{year}-{month:02d}\"))\n # print(f\"shape for {dstr} is {events_a.sel(time=dstr).shape}\")\n tmp = events_a.sel(time=dstr)\n n = tmp.max(dim='time') - tmp.min(dim='time')\n try:\n monthly_events = xr.concat([monthly_events, n], dim='time')\n except:\n print(n)\n monthly_events = n\n \n# put in the time values \nmonthly_events['time'] = dates ",
"<xarray.DataArray (lat: 360, lon: 720)>\narray([[nan, nan, nan, ..., nan, nan, nan],\n [nan, nan, nan, ..., nan, nan, nan],\n [nan, nan, nan, ..., nan, nan, nan],\n ...,\n [nan, nan, nan, ..., nan, nan, nan],\n [nan, nan, nan, ..., nan, nan, nan],\n [nan, nan, nan, ..., nan, nan, nan]])\nCoordinates:\n * lat (lat) float32 89.75 89.25 88.75 88.25 ... -88.75 -89.25 -89.75\n * lon (lon) float32 0.25 0.75 1.25 1.75 ... 358.25 358.75 359.25 359.75\n"
],
[
"monthly_events",
"_____no_output_____"
],
[
"aug_evnts = monthly_events[monthly_events['time.month']==8, : ,:].mean(dim='time')",
"/project/amp/brianpm/miniconda3/lib/python3.7/site-packages/xarray/core/nanops.py:161: RuntimeWarning: Mean of empty slice\n return np.nanmean(a, axis=axis, dtype=dtype)\n"
],
[
"faug, aaug = quick_map(lons, lats, aug_evnts, title=\"Average August Events\")",
"_____no_output_____"
],
[
"# if you need to build a variable within a loop, here's one option\n\nfor i in range(4):\n try:\n homer += \"!\"\n except:\n homer = \"D'oh\"\n \nprint(homer)",
"D'oh!!!!!!!!!!!\n"
]
],
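The try/except accumulation trick above works, but it is fragile in a notebook: re-running the cell keeps appending (which is why the output shows many more `!`s than one pass produces). A collect-then-join sketch of the same pattern — analogous to building a list of DataArrays and calling `xr.concat` once at the end — avoids that:

```python
# Collect pieces in a list, then combine once; re-running rebuilds from scratch.
pieces = []
for i in range(4):
    pieces.append("!" if pieces else "D'oh")
homer = "".join(pieces)
print(homer)   # D'oh!!!
```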
[
[
"# Notes\n\nWe have three data sources that we can use:\n- original data set Tmax(time, lat, lon)\n- event detection Event_IDs(time, lat, lon)\n- event attributes (ID, duration, index)(event, lat, lon)\n\nOne thing we could consider to make things easier is to revise the event attributes to also include a better representation of the time of the event. In particular, we should get the time value from the original data set (with units of days since a reference date).\n\nThe hard thing to do right now is to extract event duration (and amplitude) based on time. Part of the issue is that events at different locations happen at different times, so it isn't like \"get event X based on it's time\" unless it is at a given location. In our event detection, we can do say, \"get all events in some time window\", but then we need to figure out their duration. This is probably the way to do this though. \n\nTo do that, we go use the event detection data, determine the values in the time selection, which are the IDs. Use those IDs to get the duration of those events.",
"_____no_output_____"
]
],
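The ID-to-duration retrieval described in the notes reduces to a masked lookup at each point. A hypothetical 1-D sketch (names and values invented for illustration), using `np.isin` to mask the attribute arrays:

```python
import numpy as np

# Per-event attributes at one grid point: IDs and matching durations.
# Slot 0 is the non-event counter, as elsewhere in the notebook.
event_id_attr = np.array([0, 1, 2, 3, 4])
duration_attr = np.array([350, 3, 7, 2, 5])

# Event IDs detected in some time window (say, August) at this point:
ids_in_window = np.array([2, 4])

mask = np.isin(event_id_attr, ids_in_window)
print(duration_attr[mask])   # durations of the windowed events
```

With stacked arrays, the same mask applies column-by-column over the `z` (space) dimension.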
[
[
"# OK ... here's how we do it.\n\n# 1. Start by making events a \"stacked\" array, so that it looks 2D. (z is \"space\" dimension)\nev = events_a.stack(z=(\"lat\", \"lon\")) # (time, z)\n\n\n# 2. get our event ids from ev based on whatever time sampling we want\n# For example, let's get all august events\nev_aug = ev[ev['time.month'] == 8, :] # (time*, z), time* is only the times in august\n\n# 3. get the event ids at each spatial point, put those into a new array of arrays (space x variable-length)\nev_ids_to_get = np.empty(len(ev['z']), dtype=np.ndarray)\nfor i,v in enumerate(ev_aug['z']): # v will be the tuple of (lat, lon)\n ev_ids_to_get[i] = np.array( np.unique(ev_aug[:, i]) ) # the unique values (event IDs) at each space point.\n\n# 4. now we go back to our attributes to retrieve those events at each point\n# 4.a stack the attributes\nds_stack = ds.stack(z=('lat', 'lon'))\n# 4.b create another array that will hold the durations for the selected events:\nev_dur = np.empty(len(ev['z']), dtype=np.ndarray)\n# 4.c fill ev_dur with the duration values\nfor i,v in enumerate(ev_ids_to_get):\n if len(v) > 1:\n ev_dur[i] = ds_stack['duration'][ds_stack['Event_ID'][:,i]==, i]",
"_____no_output_____"
],
[
"ds",
"_____no_output_____"
],
[
"r = np.empty(5, dtype=np.ndarray)\n\nfor i, v in enumerate(r.ravel()):\n r[i] = np.array([1]*np.random.randint(low=2, high=10)) # random length list turned into an array",
"_____no_output_____"
],
[
"r.shape",
"_____no_output_____"
],
[
"np.random.rand()",
"_____no_output_____"
],
[
"[1]*np.random.randint(low=2, high=10)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd8419c44cd81543d6a93c5a0dcd714de67546a | 198,170 | ipynb | Jupyter Notebook | Taller001-Python-Intro/T001-12-Paquetes y Módulos Python.ipynb | Fhernd/PandasTallerManipulacionDatos | 71ff198a8fdae2d89075ebb75f7b52a626aebad6 | [
"MIT"
] | 2 | 2021-03-30T05:30:49.000Z | 2021-09-11T09:14:07.000Z | Taller001-Python-Intro/T001-12-Paquetes y Módulos Python.ipynb | Fhernd/PandasTallerManipulacionDatos | 71ff198a8fdae2d89075ebb75f7b52a626aebad6 | [
"MIT"
] | null | null | null | Taller001-Python-Intro/T001-12-Paquetes y Módulos Python.ipynb | Fhernd/PandasTallerManipulacionDatos | 71ff198a8fdae2d89075ebb75f7b52a626aebad6 | [
"MIT"
] | 3 | 2021-02-03T22:19:03.000Z | 2021-06-24T19:45:57.000Z | 22.021336 | 2,008 | 0.440985 | [
[
[
"# 12. Paquetes y Módulos de Python\n\nEl lenguaje de programación cuenta con una amplia variedad de módulos y paquetes incorporados para facilitar el desarrollo de diferentes tareas:\n\n1. Manipulación de archivos\n2. Acceso a la red\n3. Base de datos SQLite3\n4. Gestión de texto\n5. Matemáticas y números\n6. Programación funcional\n7. Compresión de archivos\n8. Manipulación de archivos CSV\n9. Criptografía",
"_____no_output_____"
],
[
"## 12.1 Módulo `math`\n\nEl módulo `math` provee varias funciones matemáticas relevantes para tareas generales.",
"_____no_output_____"
]
],
[
[
"import math",
"_____no_output_____"
],
[
"help(math)",
"Help on built-in module math:\n\nNAME\n math\n\nDESCRIPTION\n This module provides access to the mathematical functions\n defined by the C standard.\n\nFUNCTIONS\n acos(x, /)\n Return the arc cosine (measured in radians) of x.\n \n acosh(x, /)\n Return the inverse hyperbolic cosine of x.\n \n asin(x, /)\n Return the arc sine (measured in radians) of x.\n \n asinh(x, /)\n Return the inverse hyperbolic sine of x.\n \n atan(x, /)\n Return the arc tangent (measured in radians) of x.\n \n atan2(y, x, /)\n Return the arc tangent (measured in radians) of y/x.\n \n Unlike atan(y/x), the signs of both x and y are considered.\n \n atanh(x, /)\n Return the inverse hyperbolic tangent of x.\n \n ceil(x, /)\n Return the ceiling of x as an Integral.\n \n This is the smallest integer >= x.\n \n comb(n, k, /)\n Number of ways to choose k items from n items without repetition and without order.\n \n Evaluates to n! / (k! * (n - k)!) when k <= n and evaluates\n to zero when k > n.\n \n Also called the binomial coefficient because it is equivalent\n to the coefficient of k-th term in polynomial expansion of the\n expression (1 + x)**n.\n \n Raises TypeError if either of the arguments are not integers.\n Raises ValueError if either of the arguments are negative.\n \n copysign(x, y, /)\n Return a float with the magnitude (absolute value) of x but the sign of y.\n \n On platforms that support signed zeros, copysign(1.0, -0.0)\n returns -1.0.\n \n cos(x, /)\n Return the cosine of x (measured in radians).\n \n cosh(x, /)\n Return the hyperbolic cosine of x.\n \n degrees(x, /)\n Convert angle x from radians to degrees.\n \n dist(p, q, /)\n Return the Euclidean distance between two points p and q.\n \n The points should be specified as sequences (or iterables) of\n coordinates. 
Both inputs must have the same dimension.\n \n Roughly equivalent to:\n sqrt(sum((px - qx) ** 2.0 for px, qx in zip(p, q)))\n \n erf(x, /)\n Error function at x.\n \n erfc(x, /)\n Complementary error function at x.\n \n exp(x, /)\n Return e raised to the power of x.\n \n expm1(x, /)\n Return exp(x)-1.\n \n This function avoids the loss of precision involved in the direct evaluation of exp(x)-1 for small x.\n \n fabs(x, /)\n Return the absolute value of the float x.\n \n factorial(x, /)\n Find x!.\n \n Raise a ValueError if x is negative or non-integral.\n \n floor(x, /)\n Return the floor of x as an Integral.\n \n This is the largest integer <= x.\n \n fmod(x, y, /)\n Return fmod(x, y), according to platform C.\n \n x % y may differ.\n \n frexp(x, /)\n Return the mantissa and exponent of x, as pair (m, e).\n \n m is a float and e is an int, such that x = m * 2.**e.\n If x is 0, m and e are both 0. Else 0.5 <= abs(m) < 1.0.\n \n fsum(seq, /)\n Return an accurate floating point sum of values in the iterable seq.\n \n Assumes IEEE-754 floating point arithmetic.\n \n gamma(x, /)\n Gamma function at x.\n \n gcd(x, y, /)\n greatest common divisor of x and y\n \n hypot(...)\n hypot(*coordinates) -> value\n \n Multidimensional Euclidean distance from the origin to a point.\n \n Roughly equivalent to:\n sqrt(sum(x**2 for x in coordinates))\n \n For a two dimensional point (x, y), gives the hypotenuse\n using the Pythagorean theorem: sqrt(x*x + y*y).\n \n For example, the hypotenuse of a 3/4/5 right triangle is:\n \n >>> hypot(3.0, 4.0)\n 5.0\n \n isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)\n Determine whether two floating point numbers are close in value.\n \n rel_tol\n maximum difference for being considered \"close\", relative to the\n magnitude of the input values\n abs_tol\n maximum difference for being considered \"close\", regardless of the\n magnitude of the input values\n \n Return True if a is close in value to b, and False otherwise.\n \n For the values to be 
considered close, the difference between them\n must be smaller than at least one of the tolerances.\n \n -inf, inf and NaN behave similarly to the IEEE 754 Standard. That\n is, NaN is not close to anything, even itself. inf and -inf are\n only close to themselves.\n \n isfinite(x, /)\n Return True if x is neither an infinity nor a NaN, and False otherwise.\n \n isinf(x, /)\n Return True if x is a positive or negative infinity, and False otherwise.\n \n isnan(x, /)\n Return True if x is a NaN (not a number), and False otherwise.\n \n isqrt(n, /)\n Return the integer part of the square root of the input.\n \n ldexp(x, i, /)\n Return x * (2**i).\n \n This is essentially the inverse of frexp().\n \n lgamma(x, /)\n Natural logarithm of absolute value of Gamma function at x.\n \n log(...)\n log(x, [base=math.e])\n Return the logarithm of x to the given base.\n \n If the base not specified, returns the natural logarithm (base e) of x.\n \n log10(x, /)\n Return the base 10 logarithm of x.\n \n log1p(x, /)\n Return the natural logarithm of 1+x (base e).\n \n The result is computed in a way which is accurate for x near zero.\n \n log2(x, /)\n Return the base 2 logarithm of x.\n \n modf(x, /)\n Return the fractional and integer parts of x.\n \n Both results carry the sign of x and are floats.\n \n perm(n, k=None, /)\n Number of ways to choose k items from n items without repetition and with order.\n \n Evaluates to n! / (n - k)! when k <= n and evaluates\n to zero when k > n.\n \n If k is not specified or is None, then k defaults to n\n and the function returns n!.\n \n Raises TypeError if either of the arguments are not integers.\n Raises ValueError if either of the arguments are negative.\n \n pow(x, y, /)\n Return x**y (x to the power of y).\n \n prod(iterable, /, *, start=1)\n Calculate the product of all the elements in the input iterable.\n \n The default start value for the product is 1.\n \n When the iterable is empty, return the start value. 
This function is\n intended specifically for use with numeric values and may reject\n non-numeric types.\n \n radians(x, /)\n Convert angle x from degrees to radians.\n \n remainder(x, y, /)\n Difference between x and the closest integer multiple of y.\n \n Return x - n*y where n*y is the closest integer multiple of y.\n In the case where x is exactly halfway between two multiples of\n y, the nearest even value of n is used. The result is always exact.\n \n sin(x, /)\n Return the sine of x (measured in radians).\n \n sinh(x, /)\n Return the hyperbolic sine of x.\n \n sqrt(x, /)\n Return the square root of x.\n \n tan(x, /)\n Return the tangent of x (measured in radians).\n \n tanh(x, /)\n Return the hyperbolic tangent of x.\n \n trunc(x, /)\n Truncates the Real x to the nearest Integral toward 0.\n \n Uses the __trunc__ magic method.\n\nDATA\n e = 2.718281828459045\n inf = inf\n nan = nan\n pi = 3.141592653589793\n tau = 6.283185307179586\n\nFILE\n (built-in)\n\n\n"
],
[
"math.pow(3, 3)  # 27.0 (math.pow always returns a float)",
"_____no_output_____"
],
[
"math.sin(math.pi / 2)",
"_____no_output_____"
],
[
"math.cos(math.pi / 2)",
"_____no_output_____"
],
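[
"# Illustrative addition (not in the original notebook): cos(pi/2) is not\n# exactly 0 because of floating-point rounding; math.isclose compares safely.\nmath.isclose(math.cos(math.pi / 2), 0.0, abs_tol=1e-9)  # True",
"_____no_output_____"
],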
[
"math.radians(90)",
"_____no_output_____"
],
[
"math.pi / 2",
"_____no_output_____"
]
],
[
[
"## 12.2 The `statistics` module\n\nProvides functions for computing basic descriptive statistics:\n\n1. Mean\n2. Median\n3. Mode\n4. Variance\n5. Standard deviation",
"_____no_output_____"
]
],
[
[
"import statistics",
"_____no_output_____"
],
[
"numeros = [3, 2, 3, 5, 5, 3, 7, 11, 2, 3, 19, 11, 11, 5, 19, 5, 3, 11]",
"_____no_output_____"
],
[
"numeros",
"_____no_output_____"
],
[
"len(numeros)",
"_____no_output_____"
],
[
"statistics.mean(numeros)",
"_____no_output_____"
],
[
"statistics.median(numeros)",
"_____no_output_____"
],
[
"sorted(numeros)",
"_____no_output_____"
],
[
"statistics.mode(numeros)",
"_____no_output_____"
],
[
"statistics.stdev(numeros)",
"_____no_output_____"
],
[
"statistics.variance(numeros)",
"_____no_output_____"
],
[
"math.sqrt(statistics.variance(numeros))",
"_____no_output_____"
],
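[
"# Illustrative addition: statistics also has population variants (pstdev,\n# pvariance), which divide by n instead of n - 1, so they come out smaller.\nstatistics.pstdev(numeros), statistics.stdev(numeros)",
"_____no_output_____"
],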
[
"dir(statistics)",
"_____no_output_____"
],
[
"help(statistics.multimode)",
"Help on function multimode in module statistics:\n\nmultimode(data)\n Return a list of the most frequently occurring values.\n \n Will return more than one result if there are multiple modes\n or an empty list if *data* is empty.\n \n >>> multimode('aabbbbbbbbcc')\n ['b']\n >>> multimode('aabbbbccddddeeffffgg')\n ['b', 'd', 'f']\n >>> multimode('')\n []\n\n"
],
[
"statistics.multimode(numeros)",
"_____no_output_____"
],
[
"numeros.append(5)",
"_____no_output_____"
],
[
"statistics.multimode(numeros)",
"_____no_output_____"
]
],
[
[
"**Important note**: The `statistics` module is NOT a substitute for third-party statistics libraries. It offers only the basic/essential statistical operations.",
"_____no_output_____"
],
[
"## 12.3 The `fractions` module\n\nThis module provides support for working with rational numbers.\n\nA fraction has two parts:\n\n1. Numerator\n2. Denominator",
"_____no_output_____"
]
],
[
[
"import fractions",
"_____no_output_____"
],
[
"help(fractions)",
"Help on module fractions:\n\nNAME\n fractions - Fraction, infinite-precision, real numbers.\n\nMODULE REFERENCE\n https://docs.python.org/3.8/library/fractions\n \n The following documentation is automatically generated from the Python\n source files. It may be incomplete, incorrect or include features that\n are considered implementation detail and may vary between Python\n implementations. When in doubt, consult the module reference at the\n location listed above.\n\nCLASSES\n numbers.Rational(numbers.Real)\n Fraction\n \n class Fraction(numbers.Rational)\n | Fraction(numerator=0, denominator=None, *, _normalize=True)\n | \n | This class implements rational numbers.\n | \n | In the two-argument form of the constructor, Fraction(8, 6) will\n | produce a rational number equivalent to 4/3. Both arguments must\n | be Rational. The numerator defaults to 0 and the denominator\n | defaults to 1 so that Fraction(3) == 3 and Fraction() == 0.\n | \n | Fractions can also be constructed from:\n | \n | - numeric strings similar to those accepted by the\n | float constructor (for example, '-2.3' or '1e10')\n | \n | - strings of the form '123/456'\n | \n | - float and Decimal instances\n | \n | - other Rational instances (including integers)\n | \n | Method resolution order:\n | Fraction\n | numbers.Rational\n | numbers.Real\n | numbers.Complex\n | numbers.Number\n | builtins.object\n | \n | Methods defined here:\n | \n | __abs__(a)\n | abs(a)\n | \n | __add__(a, b)\n | a + b\n | \n | __bool__(a)\n | a != 0\n | \n | __ceil__(a)\n | math.ceil(a)\n | \n | __copy__(self)\n | \n | __deepcopy__(self, memo)\n | \n | __divmod__(a, b)\n | (a // b, a % b)\n | \n | __eq__(a, b)\n | a == b\n | \n | __floor__(a)\n | math.floor(a)\n | \n | __floordiv__(a, b)\n | a // b\n | \n | __ge__(a, b)\n | a >= b\n | \n | __gt__(a, b)\n | a > b\n | \n | __hash__(self)\n | hash(self)\n | \n | __le__(a, b)\n | a <= b\n | \n | __lt__(a, b)\n | a < b\n | \n | __mod__(a, b)\n | a % b\n | \n | __mul__(a, 
b)\n | a * b\n | \n | __neg__(a)\n | -a\n | \n | __pos__(a)\n | +a: Coerces a subclass instance to Fraction\n | \n | __pow__(a, b)\n | a ** b\n | \n | If b is not an integer, the result will be a float or complex\n | since roots are generally irrational. If b is an integer, the\n | result will be rational.\n | \n | __radd__(b, a)\n | a + b\n | \n | __rdivmod__(b, a)\n | (a // b, a % b)\n | \n | __reduce__(self)\n | Helper for pickle.\n | \n | __repr__(self)\n | repr(self)\n | \n | __rfloordiv__(b, a)\n | a // b\n | \n | __rmod__(b, a)\n | a % b\n | \n | __rmul__(b, a)\n | a * b\n | \n | __round__(self, ndigits=None)\n | round(self, ndigits)\n | \n | Rounds half toward even.\n | \n | __rpow__(b, a)\n | a ** b\n | \n | __rsub__(b, a)\n | a - b\n | \n | __rtruediv__(b, a)\n | a / b\n | \n | __str__(self)\n | str(self)\n | \n | __sub__(a, b)\n | a - b\n | \n | __truediv__(a, b)\n | a / b\n | \n | __trunc__(a)\n | trunc(a)\n | \n | as_integer_ratio(self)\n | Return the integer ratio as a tuple.\n | \n | Return a tuple of two integers, whose ratio is equal to the\n | Fraction and with a positive denominator.\n | \n | limit_denominator(self, max_denominator=1000000)\n | Closest Fraction to self with denominator at most max_denominator.\n | \n | >>> Fraction('3.141592653589793').limit_denominator(10)\n | Fraction(22, 7)\n | >>> Fraction('3.141592653589793').limit_denominator(100)\n | Fraction(311, 99)\n | >>> Fraction(4321, 8765).limit_denominator(10000)\n | Fraction(4321, 8765)\n | \n | ----------------------------------------------------------------------\n | Class methods defined here:\n | \n | from_decimal(dec) from abc.ABCMeta\n | Converts a finite Decimal instance to a rational number, exactly.\n | \n | from_float(f) from abc.ABCMeta\n | Converts a finite float to a rational number, exactly.\n | \n | Beware that Fraction.from_float(0.3) != Fraction(3, 10).\n | \n | ----------------------------------------------------------------------\n | Static methods defined 
here:\n | \n | __new__(cls, numerator=0, denominator=None, *, _normalize=True)\n | Constructs a Rational.\n | \n | Takes a string like '3/2' or '1.5', another Rational instance, a\n | numerator/denominator pair, or a float.\n | \n | Examples\n | --------\n | \n | >>> Fraction(10, -8)\n | Fraction(-5, 4)\n | >>> Fraction(Fraction(1, 7), 5)\n | Fraction(1, 35)\n | >>> Fraction(Fraction(1, 7), Fraction(2, 3))\n | Fraction(3, 14)\n | >>> Fraction('314')\n | Fraction(314, 1)\n | >>> Fraction('-35/4')\n | Fraction(-35, 4)\n | >>> Fraction('3.1415') # conversion from numeric string\n | Fraction(6283, 2000)\n | >>> Fraction('-47e-2') # string may include a decimal exponent\n | Fraction(-47, 100)\n | >>> Fraction(1.47) # direct construction from float (exact conversion)\n | Fraction(6620291452234629, 4503599627370496)\n | >>> Fraction(2.25)\n | Fraction(9, 4)\n | >>> Fraction(Decimal('1.47'))\n | Fraction(147, 100)\n | \n | ----------------------------------------------------------------------\n | Readonly properties defined here:\n | \n | denominator\n | \n | numerator\n | \n | ----------------------------------------------------------------------\n | Data and other attributes defined here:\n | \n | __abstractmethods__ = frozenset()\n | \n | ----------------------------------------------------------------------\n | Methods inherited from numbers.Rational:\n | \n | __float__(self)\n | float(self) = self.numerator / self.denominator\n | \n | It's important that this conversion use the integer's \"true\"\n | division rather than casting one side to float before dividing\n | so that ratios of huge integers convert without overflowing.\n | \n | ----------------------------------------------------------------------\n | Methods inherited from numbers.Real:\n | \n | __complex__(self)\n | complex(self) == complex(float(self), 0)\n | \n | conjugate(self)\n | Conjugate is a no-op for Reals.\n | \n | ----------------------------------------------------------------------\n | Readonly 
properties inherited from numbers.Real:\n | \n | imag\n | Real numbers have no imaginary component.\n | \n | real\n | Real numbers are their real component.\n\nFUNCTIONS\n gcd(a, b)\n Calculate the Greatest Common Divisor of a and b.\n \n Unless b==0, the result will have the same sign as b (so that when\n b is divided by it, the result comes out positive).\n\nDATA\n __all__ = ['Fraction', 'gcd']\n\nFILE\n g:\\users\\johno\\anaconda3\\lib\\fractions.py\n\n\n"
],
[
"fraccion_1 = fractions.Fraction(1, 2)",
"_____no_output_____"
],
[
"fraccion_1",
"_____no_output_____"
],
[
"print(fraccion_1)",
"1/2\n"
],
[
"fraccion_2 = fractions.Fraction(1, 3)",
"_____no_output_____"
],
[
"print(fraccion_2)",
"1/3\n"
],
[
"type(fraccion_2)",
"_____no_output_____"
],
[
"suma_fracciones = fraccion_1 + fraccion_2",
"_____no_output_____"
],
[
"suma_fracciones",
"_____no_output_____"
],
[
"print(suma_fracciones)",
"5/6\n"
],
[
"resta_fraccion = fraccion_1 - fraccion_2",
"_____no_output_____"
],
[
"print(resta_fraccion)",
"1/6\n"
],
[
"producto_fraccion = fraccion_1 * fraccion_2",
"_____no_output_____"
],
[
"print(producto_fraccion)",
"1/6\n"
],
[
"division_fraccion = fraccion_2 / fraccion_1",
"_____no_output_____"
],
[
"print(division_fraccion)",
"2/3\n"
]
],
[
[
"Getting the values as an `int` (integer) or `float` (real) data type:",
"_____no_output_____"
]
],
[
[
"int(suma_fracciones)",
"_____no_output_____"
],
[
"float(suma_fracciones)",
"_____no_output_____"
],
[
"float(resta_fraccion)",
"_____no_output_____"
],
[
"float(division_fraccion)",
"_____no_output_____"
]
],
[
[
"It is also possible to create a fraction from a float literal (beware: a binary float like 0.3 converts to its exact, often surprising, rational value):",
"_____no_output_____"
]
],
[
[
"fraccion = fractions.Fraction.from_float(0.3)",
"_____no_output_____"
],
[
"fraccion",
"_____no_output_____"
],
[
"fraccion = fractions.Fraction.from_float(1/3)",
"_____no_output_____"
],
[
"fraccion",
"_____no_output_____"
],
[
"fraccion = fractions.Fraction(1/3).as_integer_ratio()",
"_____no_output_____"
],
[
"fraccion",
"_____no_output_____"
],
[
"fractions.Fraction(0.25)",
"_____no_output_____"
],
[
"1/4",
"_____no_output_____"
]
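,
[
"# Illustrative addition: limit_denominator finds the closest fraction whose\n# denominator does not exceed the given bound.\nimport math\nfractions.Fraction(math.pi).limit_denominator(100)  # Fraction(311, 99)",
"_____no_output_____"
]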
],
[
[
"## 12.4 The `datetime` module\n\nThis module provides a set of classes for manipulating dates and times.",
"_____no_output_____"
]
],
[
[
"import datetime",
"_____no_output_____"
],
[
"fecha_hora_actual = datetime.datetime.now()",
"_____no_output_____"
],
[
"fecha_hora_actual",
"_____no_output_____"
],
[
"type(fecha_hora_actual)",
"_____no_output_____"
],
[
"help(fecha_hora_actual.strftime)",
"Help on built-in function strftime:\n\nstrftime(...) method of datetime.datetime instance\n format -> strftime() style string.\n\n"
],
[
"fecha_hora_actual.strftime('%Y/%m/%d')",
"_____no_output_____"
],
[
"fecha_hora_actual.strftime('%Y/%m/%d %H:%M:%S')",
"_____no_output_____"
],
[
"fecha_hora_actual.strftime('%Y, %B %A (%m)')",
"_____no_output_____"
]
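,
[
"# Illustrative addition: isoformat() yields an ISO 8601 string without\n# needing a strftime pattern.\nfecha_hora_actual.isoformat()",
"_____no_output_____"
]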
],
[
[
"It is also possible to get only the time:",
"_____no_output_____"
]
],
[
[
"hora_actual = datetime.datetime.now().time()",
"_____no_output_____"
],
[
"hora_actual",
"_____no_output_____"
],
[
"type(hora_actual)",
"_____no_output_____"
]
],
[
[
"Getting only the date:",
"_____no_output_____"
]
],
[
[
"fecha_actual = datetime.date.today()",
"_____no_output_____"
],
[
"fecha_actual",
"_____no_output_____"
],
[
"type(fecha_actual)",
"_____no_output_____"
]
],
[
[
"Getting a date from a character string:",
"_____no_output_____"
]
],
[
[
"fecha_hora_cadena = '2021-01-09 21:29:37'",
"_____no_output_____"
],
[
"type(fecha_hora_cadena)",
"_____no_output_____"
],
[
"fecha_hora = datetime.datetime.strptime(fecha_hora_cadena, '%Y-%m-%d %H:%M:%S')",
"_____no_output_____"
],
[
"fecha_hora",
"_____no_output_____"
],
[
"type(fecha_hora)",
"_____no_output_____"
]
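,
[
"# Illustrative addition: strftime is the inverse of strptime for a matching\n# format string, so the round trip reproduces the original text.\nfecha_hora.strftime('%Y-%m-%d %H:%M:%S') == fecha_hora_cadena  # True",
"_____no_output_____"
]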
],
[
[
"Computing the difference between two dates:\n\n1. The current date\n2. An arbitrary date",
"_____no_output_____"
]
],
[
[
"hoy = datetime.date.today()",
"_____no_output_____"
],
[
"hoy",
"_____no_output_____"
],
[
"otra_fecha_anterior = datetime.date(1999, 12, 31)",
"_____no_output_____"
],
[
"otra_fecha_anterior",
"_____no_output_____"
],
[
"diferencia = hoy - otra_fecha_anterior",
"_____no_output_____"
],
[
"diferencia",
"_____no_output_____"
],
[
"diferencia.days",
"_____no_output_____"
],
[
"otra_fecha_posterior = datetime.date(2039, 9, 1)",
"_____no_output_____"
],
[
"otra_fecha_posterior",
"_____no_output_____"
],
[
"diferencia = otra_fecha_posterior - hoy",
"_____no_output_____"
],
[
"diferencia.days",
"_____no_output_____"
]
],
[
[
"Adding/subtracting time to/from a date.\n\nTo solve this problem we need a `timedelta` object.",
"_____no_output_____"
]
],
[
[
"hoy",
"_____no_output_____"
],
[
"ayer = hoy - datetime.timedelta(1)",
"_____no_output_____"
],
[
"print('La fecha de ayer fue:', ayer)",
"La fecha de ayer fue: 2021-05-25\n"
],
[
"mañana = hoy + datetime.timedelta(1)",
"_____no_output_____"
],
[
"print('La fecha de mañana será:', mañana)",
"La fecha de mañana será: 2021-05-27\n"
]
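,
[
"# Illustrative addition: timedelta also accepts weeks, hours, minutes, etc.,\n# and supports date arithmetic.\nen_una_semana = hoy + datetime.timedelta(weeks=1)\n(en_una_semana - hoy).days  # 7",
"_____no_output_____"
]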
],
[
[
"Defining a function to add n years to a given date:",
"_____no_output_____"
]
],
[
[
"def sumar_años(fecha, años=1):\n try:\n return fecha.replace(year = fecha.year + años)\n except ValueError:\n return fecha + (datetime.date(fecha.year + años, 1, 1) - datetime.date(fecha.year, 1, 1))",
"_____no_output_____"
],
[
"fecha_actual = datetime.date.today()\n\nfecha_actual",
"_____no_output_____"
],
[
"nueva_fecha = sumar_años(fecha_actual, 5)",
"_____no_output_____"
],
[
"nueva_fecha",
"_____no_output_____"
]
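,
[
"# Illustrative addition: the except branch handles Feb 29, which does not\n# exist in the target year; the day-count fallback lands on March 1.\nsumar_años(datetime.date(2020, 2, 29), 1)  # datetime.date(2021, 3, 1)",
"_____no_output_____"
]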
],
[
[
"Using the `TextCalendar` class (from the `calendar` module) to render a year or a specific month as text.",
"_____no_output_____"
]
],
[
[
"from calendar import TextCalendar",
"_____no_output_____"
],
[
"calendario_2021 = TextCalendar(firstweekday=0)",
"_____no_output_____"
],
[
"calendario_2021",
"_____no_output_____"
],
[
"print(calendario_2021.formatyear(2021))",
" 2021\n\n January February March\nMo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su\n 1 2 3 1 2 3 4 5 6 7 1 2 3 4 5 6 7\n 4 5 6 7 8 9 10 8 9 10 11 12 13 14 8 9 10 11 12 13 14\n11 12 13 14 15 16 17 15 16 17 18 19 20 21 15 16 17 18 19 20 21\n18 19 20 21 22 23 24 22 23 24 25 26 27 28 22 23 24 25 26 27 28\n25 26 27 28 29 30 31 29 30 31\n\n April May June\nMo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su\n 1 2 3 4 1 2 1 2 3 4 5 6\n 5 6 7 8 9 10 11 3 4 5 6 7 8 9 7 8 9 10 11 12 13\n12 13 14 15 16 17 18 10 11 12 13 14 15 16 14 15 16 17 18 19 20\n19 20 21 22 23 24 25 17 18 19 20 21 22 23 21 22 23 24 25 26 27\n26 27 28 29 30 24 25 26 27 28 29 30 28 29 30\n 31\n\n July August September\nMo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su\n 1 2 3 4 1 1 2 3 4 5\n 5 6 7 8 9 10 11 2 3 4 5 6 7 8 6 7 8 9 10 11 12\n12 13 14 15 16 17 18 9 10 11 12 13 14 15 13 14 15 16 17 18 19\n19 20 21 22 23 24 25 16 17 18 19 20 21 22 20 21 22 23 24 25 26\n26 27 28 29 30 31 23 24 25 26 27 28 29 27 28 29 30\n 30 31\n\n October November December\nMo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su\n 1 2 3 1 2 3 4 5 6 7 1 2 3 4 5\n 4 5 6 7 8 9 10 8 9 10 11 12 13 14 6 7 8 9 10 11 12\n11 12 13 14 15 16 17 15 16 17 18 19 20 21 13 14 15 16 17 18 19\n18 19 20 21 22 23 24 22 23 24 25 26 27 28 20 21 22 23 24 25 26\n25 26 27 28 29 30 31 29 30 27 28 29 30 31\n\n"
],
[
"print(calendario_2021.formatmonth(2021, 3))",
" March 2021\nMo Tu We Th Fr Sa Su\n 1 2 3 4 5 6 7\n 8 9 10 11 12 13 14\n15 16 17 18 19 20 21\n22 23 24 25 26 27 28\n29 30 31\n\n"
],
[
"dir(calendario_2021)",
"_____no_output_____"
],
[
"help(calendario_2021.formatweek)",
"Help on method formatweek in module calendar:\n\nformatweek(theweek, width) method of calendar.TextCalendar instance\n Returns a single week in a string (no newline).\n\n"
],
[
"for s in calendario_2021.monthdays2calendar(2021, 3):\n print(calendario_2021.formatweek(s, 15))",
" 1 2 3 4 5 6 7 \n 8 9 10 11 12 13 14 \n 15 16 17 18 19 20 21 \n 22 23 24 25 26 27 28 \n 29 30 31 \n"
]
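,
[
"# Illustrative addition: the calendar module also answers date questions\n# directly, without rendering text.\nimport calendar\ncalendar.isleap(2021), calendar.monthrange(2021, 3)  # (False, (0, 31))",
"_____no_output_____"
]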
],
[
[
"The `calendar` module also provides the names of the months and days:",
"_____no_output_____"
]
],
[
[
"import calendar\n\nfor d in calendar.day_name:\n print(d)",
"Monday\nTuesday\nWednesday\nThursday\nFriday\nSaturday\nSunday\n"
],
[
"for d in calendar.day_abbr:\n print(d)",
"Mon\nTue\nWed\nThu\nFri\nSat\nSun\n"
],
[
"for m in calendar.month_name:\n print(m)",
"\nJanuary\nFebruary\nMarch\nApril\nMay\nJune\nJuly\nAugust\nSeptember\nOctober\nNovember\nDecember\n"
],
[
"for m in calendar.month_abbr:\n print(m)",
"\nJan\nFeb\nMar\nApr\nMay\nJun\nJul\nAug\nSep\nOct\nNov\nDec\n"
]
],
[
[
"## 12.5 The `string` module\n\nThis module offers constants for common character strings:\n\n1. The English alphabet in lowercase and uppercase\n2. Decimal digits (0-9)\n3. Hexadecimal digits (0-F)\n4. Octal digits (0-7)\n5. Punctuation characters: !\"#()*+.-/:;\n6. Whitespace characters\n\nIts `Formatter` class provides several text-manipulation methods:\n\n1. format()\n2. vformat()\n3. parse()",
"_____no_output_____"
]
],
[
[
"import string",
"_____no_output_____"
],
[
"help(string)",
"Help on module string:\n\nNAME\n string - A collection of string constants.\n\nMODULE REFERENCE\n https://docs.python.org/3.8/library/string\n \n The following documentation is automatically generated from the Python\n source files. It may be incomplete, incorrect or include features that\n are considered implementation detail and may vary between Python\n implementations. When in doubt, consult the module reference at the\n location listed above.\n\nDESCRIPTION\n Public module variables:\n \n whitespace -- a string containing all ASCII whitespace\n ascii_lowercase -- a string containing all ASCII lowercase letters\n ascii_uppercase -- a string containing all ASCII uppercase letters\n ascii_letters -- a string containing all ASCII letters\n digits -- a string containing all ASCII decimal digits\n hexdigits -- a string containing all ASCII hexadecimal digits\n octdigits -- a string containing all ASCII octal digits\n punctuation -- a string containing all ASCII punctuation characters\n printable -- a string containing all ASCII characters considered printable\n\nCLASSES\n builtins.object\n Formatter\n Template\n \n class Formatter(builtins.object)\n | Methods defined here:\n | \n | check_unused_args(self, used_args, args, kwargs)\n | \n | convert_field(self, value, conversion)\n | \n | format(self, format_string, /, *args, **kwargs)\n | \n | format_field(self, value, format_spec)\n | \n | get_field(self, field_name, args, kwargs)\n | # given a field_name, find the object it references.\n | # field_name: the field being looked up, e.g. 
\"0.name\"\n | # or \"lookup[3]\"\n | # used_args: a set of which args have been used\n | # args, kwargs: as passed in to vformat\n | \n | get_value(self, key, args, kwargs)\n | \n | parse(self, format_string)\n | # returns an iterable that contains tuples of the form:\n | # (literal_text, field_name, format_spec, conversion)\n | # literal_text can be zero length\n | # field_name can be None, in which case there's no\n | # object to format and output\n | # if field_name is not None, it is looked up, formatted\n | # with format_spec and conversion and then used\n | \n | vformat(self, format_string, args, kwargs)\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n \n class Template(builtins.object)\n | Template(template)\n | \n | A string class for supporting $-substitutions.\n | \n | Methods defined here:\n | \n | __init__(self, template)\n | Initialize self. 
See help(type(self)) for accurate signature.\n | \n | safe_substitute(self, mapping={}, /, **kws)\n | \n | substitute(self, mapping={}, /, **kws)\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | ----------------------------------------------------------------------\n | Data and other attributes defined here:\n | \n | braceidpattern = None\n | \n | delimiter = '$'\n | \n | flags = re.IGNORECASE\n | \n | idpattern = '(?a:[_a-z][_a-z0-9]*)'\n | \n | pattern = re.compile('\\n \\\\$(?:\\n (?P<escaped>\\\\$)...ced>(?a:[...\n\nFUNCTIONS\n capwords(s, sep=None)\n capwords(s [,sep]) -> string\n \n Split the argument into words using split, capitalize each\n word using capitalize, and join the capitalized words using\n join. If the optional second argument sep is absent or None,\n runs of whitespace characters are replaced by a single space\n and leading and trailing whitespace are removed, otherwise\n sep is used to split and join the words.\n\nDATA\n __all__ = ['ascii_letters', 'ascii_lowercase', 'ascii_uppercase', 'cap...\n ascii_letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'\n ascii_lowercase = 'abcdefghijklmnopqrstuvwxyz'\n ascii_uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n digits = '0123456789'\n hexdigits = '0123456789abcdefABCDEF'\n octdigits = '01234567'\n printable = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTU...\n punctuation = '!\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~'\n whitespace = ' \\t\\n\\r\\x0b\\x0c'\n\nFILE\n g:\\users\\johno\\anaconda3\\lib\\string.py\n\n\n"
],
[
"string.ascii_lowercase",
"_____no_output_____"
],
[
"string.ascii_uppercase",
"_____no_output_____"
],
[
"len(string.ascii_uppercase)",
"_____no_output_____"
],
[
"type(string.ascii_uppercase)",
"_____no_output_____"
],
[
"for c in string.ascii_lowercase:\n print(c, end=' ')",
"a b c d e f g h i j k l m n o p q r s t u v w x y z "
],
[
"letras = list(string.ascii_uppercase)",
"_____no_output_____"
],
[
"letras",
"_____no_output_____"
],
[
"string.digits",
"_____no_output_____"
],
[
"def es_cadena_numerica(cadena):\n for c in cadena:\n if c not in string.digits:\n return False\n \n return True",
"_____no_output_____"
],
[
"es_cadena_numerica('123')",
"_____no_output_____"
],
[
"es_cadena_numerica('123A')",
"_____no_output_____"
]
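,
[
"# Illustrative addition: for plain ASCII digits, the built-in str.isdecimal\n# gives a similar check.\n'123'.isdecimal(), '123A'.isdecimal()  # (True, False)",
"_____no_output_____"
]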
],
[
[
"Defining a function to validate whether a string represents a hexadecimal number:",
"_____no_output_____"
]
],
[
[
"def es_hexadecimal(cadena):\n \"\"\"\n Valida si una cadena representa un número hexadecimal.\n \n :param cadena: Cadena a validar.\n :return: true si la cadena representa un número hexadecimal, false en caso contrario.\n \"\"\"\n return all([c in string.hexdigits for c in cadena])",
"_____no_output_____"
],
[
"string.hexdigits",
"_____no_output_____"
],
[
"es_hexadecimal('A1')",
"_____no_output_____"
],
[
"es_hexadecimal('A1eF3')",
"_____no_output_____"
],
[
"es_hexadecimal('A1G')",
"_____no_output_____"
]
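,
[
"# Illustrative addition: int(cadena, 16) parses hex and raises ValueError on\n# invalid input, so it can double as a validator.\nint('A1eF3', 16)  # 663283",
"_____no_output_____"
]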
],
[
[
"The `string` module includes the `Formatter` class.\n\nThis class is used to format string values.",
"_____no_output_____"
]
],
[
[
"help(string.Formatter)",
"Help on class Formatter in module string:\n\nclass Formatter(builtins.object)\n | Methods defined here:\n | \n | check_unused_args(self, used_args, args, kwargs)\n | \n | convert_field(self, value, conversion)\n | \n | format(self, format_string, /, *args, **kwargs)\n | \n | format_field(self, value, format_spec)\n | \n | get_field(self, field_name, args, kwargs)\n | # given a field_name, find the object it references.\n | # field_name: the field being looked up, e.g. \"0.name\"\n | # or \"lookup[3]\"\n | # used_args: a set of which args have been used\n | # args, kwargs: as passed in to vformat\n | \n | get_value(self, key, args, kwargs)\n | \n | parse(self, format_string)\n | # returns an iterable that contains tuples of the form:\n | # (literal_text, field_name, format_spec, conversion)\n | # literal_text can be zero length\n | # field_name can be None, in which case there's no\n | # object to format and output\n | # if field_name is not None, it is looked up, formatted\n | # with format_spec and conversion and then used\n | \n | vformat(self, format_string, args, kwargs)\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n\n"
],
[
"formateador = string.Formatter()",
"_____no_output_____"
],
[
"type(formateador)",
"_____no_output_____"
],
[
"nombre = 'Oliva'\napellido = 'Ordoñez'\n\nresultado = formateador.format('{nombre} {apellido}', nombre=nombre, apellido=apellido)",
"_____no_output_____"
],
[
"resultado",
"_____no_output_____"
],
[
"precio = 101.373\n\nresultado = formateador.format('${precio:.2f}', precio=precio)\n\nresultado",
"_____no_output_____"
],
[
"'${precio:.2f}'.format(precio=precio)",
"_____no_output_____"
]
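,
[
"# Illustrative addition: vformat takes the positional and keyword arguments\n# as an explicit sequence and mapping.\nformateador.vformat('{0} cuesta ${precio:.2f}', ('Libro',), {'precio': precio})",
"_____no_output_____"
]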
],
[
[
"## 12.6 The `collections` module\n\nOffers several data structures (containers) besides `dict`, `list`, `tuple`, and `set`.\n\n### 12.6.1 The `namedtuple` factory\n\n`namedtuple` creates tuple subclasses whose fields can be accessed by name.",
"_____no_output_____"
]
],
[
[
"import collections as ds",
"_____no_output_____"
],
[
"Punto = ds.namedtuple('Punto', 'x,y')",
"_____no_output_____"
],
[
"type(Punto)",
"_____no_output_____"
],
[
"punto_1 = Punto(1, 3)",
"_____no_output_____"
],
[
"punto_1",
"_____no_output_____"
],
[
"punto_1.x",
"_____no_output_____"
],
[
"punto_1.y",
"_____no_output_____"
],
[
"type(punto_1)",
"_____no_output_____"
],
[
"# punto_1.x = -5",
"_____no_output_____"
],
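[
"# Illustrative addition: namedtuples are immutable, but _replace returns a\n# modified copy without touching the original.\npunto_1._replace(x=-5)  # Punto(x=-5, y=3)",
"_____no_output_____"
],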
[
"punto_2 = Punto(-5, -7)",
"_____no_output_____"
],
[
"punto_2",
"_____no_output_____"
]
],
[
[
"### 12.6.2 The `deque` class\n\nA list-like data structure that makes it easy to manipulate data at both the left and right ends: append, pop, extend, and so on.",
"_____no_output_____"
]
],
[
[
"numeros = ds.deque()",
"_____no_output_____"
],
[
"type(numeros)",
"_____no_output_____"
],
[
"' '.join(dir(numeros))",
"_____no_output_____"
],
[
"numeros.append(5)\nnumeros.append(6)\nnumeros.append(7)\nnumeros.append(8)\nnumeros.append(9)",
"_____no_output_____"
],
[
"numeros",
"_____no_output_____"
],
[
"numeros.appendleft(4)\nnumeros.appendleft(3)\nnumeros.appendleft(2)\nnumeros.appendleft(1)",
"_____no_output_____"
],
[
"numeros",
"_____no_output_____"
],
[
"numeros.extend([10, 11, 12])",
"_____no_output_____"
],
[
"numeros",
"_____no_output_____"
],
[
"numeros.extendleft([0, -1, -2])",
"_____no_output_____"
],
[
"numeros",
"_____no_output_____"
],
[
"len(numeros)",
"_____no_output_____"
],
[
"numero = numeros.pop()",
"_____no_output_____"
],
[
"numero",
"_____no_output_____"
],
[
"numero = numeros.popleft()",
"_____no_output_____"
],
[
"numero",
"_____no_output_____"
],
[
"len(numeros)",
"_____no_output_____"
],
[
"numeros",
"_____no_output_____"
]
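,
[
"# Illustrative addition: deque also supports rotation and a bounded maxlen\n# that silently discards elements from the opposite end.\nd = ds.deque([1, 2, 3, 4], maxlen=4)\nd.rotate(1)\nd  # deque([4, 1, 2, 3], maxlen=4)",
"_____no_output_____"
]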
],
[
[
"### 12.6.3 Using the `Counter` class\n\nThis class counts the occurrences of each element in an iterable (a list, tuple, character string, etc.).",
"_____no_output_____"
]
],
[
[
"pais = 'Colombia'",
"_____no_output_____"
],
[
"conteo_ocurrencias_caracteres = ds.Counter(pais)",
"_____no_output_____"
],
[
"type(conteo_ocurrencias_caracteres)",
"_____no_output_____"
],
[
"conteo_ocurrencias_caracteres",
"_____no_output_____"
],
[
"frase = 'Python es un lenguaje de programación orientado a objetos'",
"_____no_output_____"
],
[
"conteo_ocurrencias_caracteres = ds.Counter(frase)",
"_____no_output_____"
],
[
"conteo_ocurrencias_caracteres",
"_____no_output_____"
],
[
"dir(conteo_ocurrencias_caracteres)",
"_____no_output_____"
],
[
"conteo_ocurrencias_caracteres.keys()",
"_____no_output_____"
],
[
"list(conteo_ocurrencias_caracteres.keys())",
"_____no_output_____"
],
[
"conteo_ocurrencias_caracteres.values()",
"_____no_output_____"
],
[
"help(conteo_ocurrencias_caracteres.most_common)",
"Help on method most_common in module collections:\n\nmost_common(n=None) method of collections.Counter instance\n List the n most common elements and their counts from the most\n common to the least. If n is None, then list all element counts.\n \n >>> Counter('abracadabra').most_common(3)\n [('a', 5), ('b', 2), ('r', 2)]\n\n"
],
[
"conteo_ocurrencias_caracteres.most_common(3)",
"_____no_output_____"
],
[
"conteo_ocurrencias_caracteres.most_common(5)",
"_____no_output_____"
],
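[
"# Illustrative addition: Counter objects support arithmetic such as\n# subtraction (keys whose count drops to zero or below are discarded).\nds.Counter('abracadabra') - ds.Counter('abc')  # Counter({'a': 4, 'r': 2, 'b': 1, 'd': 1})",
"_____no_output_____"
],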
[
"import random",
"_____no_output_____"
],
[
"numeros = [random.randint(1, 6) for _ in range(100)]",
"_____no_output_____"
],
[
"numeros",
"_____no_output_____"
],
[
"conteo_ocurrencias_numeros = ds.Counter(numeros)",
"_____no_output_____"
],
[
"conteo_ocurrencias_numeros",
"_____no_output_____"
],
[
"conteo_ocurrencias_numeros.most_common(3)",
"_____no_output_____"
]
],
[
[
"### 12.6.4 Using the `OrderedDict` class\n\nA dictionary class that preserves the order in which elements were added.\n\nSince Python 3.7 a standard `dict` also preserves insertion order, but `OrderedDict` keeps order-aware extras such as `move_to_end()` and order-sensitive equality.",
"_____no_output_____"
]
],
[
[
"paises_capitales = ds.OrderedDict()",
"_____no_output_____"
],
[
"paises_capitales['Colombia'] = 'Bogotá'\npaises_capitales['Perú'] = 'Lima'\npaises_capitales['Argentina'] = 'Buenos Aires'\npaises_capitales['Estados Unidos'] = 'Washington'\npaises_capitales['Rusia'] = 'Moscú'",
"_____no_output_____"
],
[
"type(paises_capitales)",
"_____no_output_____"
],
[
"paises_capitales",
"_____no_output_____"
],
[
"dir(paises_capitales)",
"_____no_output_____"
],
[
"for k, v in paises_capitales.items():\n print(k, v)",
"Colombia Bogotá\nPerú Lima\nArgentina Buenos Aires\nEstados Unidos Washington\nRusia Moscú\n"
],
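[
"# Illustrative sketch: move_to_end() is an OrderedDict-specific method.\n# It moves an existing key to the end (or to the start with last=False).\npaises_capitales.move_to_end('Colombia')\nlist(paises_capitales.keys())  # 'Colombia' is now the last key",
"_____no_output_____"
],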
[
"# {}, dict()",
"_____no_output_____"
]
],
[
[
"### 12.6.5 Using `defaultdict`\n\nRepresents a dictionary that supplies a default value for any missing key, built by calling the factory passed to its constructor (e.g. `list` or `int`).",
"_____no_output_____"
]
],
[
[
"tipos_numeros = ds.defaultdict(list)",
"_____no_output_____"
],
[
"type(tipos_numeros)",
"_____no_output_____"
],
[
"tipos_numeros['negativos'].append(-1)",
"_____no_output_____"
],
[
"tipos_numeros['negativos']",
"_____no_output_____"
],
[
"tipos_numeros['negativos'].append(-3)",
"_____no_output_____"
],
[
"tipos_numeros['negativos']",
"_____no_output_____"
],
[
"tipos_numeros['negativos'].extend((-5, -7, -9))",
"_____no_output_____"
],
[
"tipos_numeros['negativos']",
"_____no_output_____"
],
[
"type(tipos_numeros['negativos'])",
"_____no_output_____"
],
[
"tipos_numeros['primos'] = [2, 3, 5, 7]",
"_____no_output_____"
],
[
"tipos_numeros['primos']",
"_____no_output_____"
],
[
"len(tipos_numeros['primos'])",
"_____no_output_____"
],
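[
"# Hedged sketch: with int as the factory the default value is 0,\n# which makes defaultdict handy for counting occurrences.\nconteo_letras = ds.defaultdict(int)\nfor letra in 'banana':\n    conteo_letras[letra] += 1\ndict(conteo_letras)  # {'b': 1, 'a': 3, 'n': 2}",
"_____no_output_____"
],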
[
"len(tipos_numeros['negativos'])",
"_____no_output_____"
]
],
[
[
"Query the number of entries in the default dictionary (note that merely accessing a missing key creates it):",
"_____no_output_____"
]
],
[
[
"len(tipos_numeros) # 2",
"_____no_output_____"
],
[
"tipos_numeros.keys()",
"_____no_output_____"
],
[
"tipos_numeros.values()",
"_____no_output_____"
],
[
"len(tipos_numeros['positivos']) # 0",
"_____no_output_____"
]
],
[
[
"## 12.7 The `pprint` module\n\npprint -> \"pretty print\"\n\nThis module *pretty-prints* objects to standard output (the console or terminal).",
"_____no_output_____"
]
],
[
[
"import pprint",
"_____no_output_____"
],
[
"impresora = pprint.PrettyPrinter(depth=1)",
"_____no_output_____"
],
[
"print(impresora)",
"<pprint.PrettyPrinter object at 0x000001A77A67CEE0>\n"
],
[
"coordenadas = [\n {\n \"nombre\": 'Ubicación 1',\n \"gps\": (19.008966, 11.573724)\n },\n {\n 'nombre': 'Ubicación 2',\n 'gps': (40.1632626, 44.2935926)\n },\n {\n 'nombre': 'Ubicación 3',\n 'gps': (29.476705, 120.869339)\n }\n]",
"_____no_output_____"
],
[
"impresora.pprint(coordenadas)",
"[{...}, {...}, {...}]\n"
],
[
"print(coordenadas)",
"[{'nombre': 'Ubicación 1', 'gps': (19.008966, 11.573724)}, {'nombre': 'Ubicación 2', 'gps': (40.1632626, 44.2935926)}, {'nombre': 'Ubicación 3', 'gps': (29.476705, 120.869339)}]\n"
],
[
"dir(impresora)",
"_____no_output_____"
],
[
"impresora._depth",
"_____no_output_____"
],
[
"impresora._depth = 2",
"_____no_output_____"
],
[
"impresora.pprint(coordenadas)",
"[{'gps': (...), 'nombre': 'Ubicación 1'},\n {'gps': (...), 'nombre': 'Ubicación 2'},\n {'gps': (...), 'nombre': 'Ubicación 3'}]\n"
],
[
"impresora._depth = 3",
"_____no_output_____"
],
[
"impresora.pprint(coordenadas)",
"[{'gps': (19.008966, 11.573724), 'nombre': 'Ubicación 1'},\n {'gps': (40.1632626, 44.2935926), 'nombre': 'Ubicación 2'},\n {'gps': (29.476705, 120.869339), 'nombre': 'Ubicación 3'}]\n"
],
[
"from pprint import pprint",
"_____no_output_____"
],
[
"datos = [(i, { 'a':'A',\n 'b':'B',\n 'c':'C',\n 'd':'D',\n 'e':'E',\n 'f':'F',\n 'g':'G',\n 'h':'H',\n 'i': 'I',\n 'j': 'J'\n })\n for i in range(3)]",
"_____no_output_____"
],
[
"type(datos)",
"_____no_output_____"
],
[
"print(datos)",
"[(0, {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D', 'e': 'E', 'f': 'F', 'g': 'G', 'h': 'H', 'i': 'I', 'j': 'J'}), (1, {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D', 'e': 'E', 'f': 'F', 'g': 'G', 'h': 'H', 'i': 'I', 'j': 'J'}), (2, {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D', 'e': 'E', 'f': 'F', 'g': 'G', 'h': 'H', 'i': 'I', 'j': 'J'})]\n"
],
[
"len(datos)",
"_____no_output_____"
],
[
"pprint(datos)",
"[(0,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (1,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (2,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'})]\n"
],
[
"help(pprint)",
"Help on function pprint in module pprint:\n\npprint(object, stream=None, indent=1, width=80, depth=None, *, compact=False, sort_dicts=True)\n Pretty-print a Python object to a stream [default is sys.stdout].\n\n"
],
[
"anchos = [5, 20, 60, 80, 160]",
"_____no_output_____"
],
[
"for a in anchos:\n print('Ancho:', a)\n pprint(datos, width=a)\n print()",
"Ancho: 5\n[(0,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (1,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (2,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'})]\n\nAncho: 20\n[(0,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (1,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (2,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'})]\n\nAncho: 60\n[(0,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (1,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (2,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'})]\n\nAncho: 80\n[(0,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (1,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'}),\n (2,\n {'a': 'A',\n 'b': 'B',\n 'c': 'C',\n 'd': 'D',\n 'e': 'E',\n 'f': 'F',\n 'g': 'G',\n 'h': 'H',\n 'i': 'I',\n 'j': 'J'})]\n\nAncho: 160\n[(0, {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D', 'e': 'E', 'f': 'F', 'g': 'G', 'h': 'H', 'i': 'I', 'j': 'J'}),\n (1, {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D', 'e': 'E', 'f': 'F', 'g': 'G', 'h': 'H', 'i': 'I', 'j': 'J'}),\n (2, {'a': 'A', 'b': 'B', 'c': 'C', 'd': 'D', 'e': 'E', 'f': 'F', 'g': 'G', 'h': 'H', 'i': 'I', 'j': 'J'})]\n\n"
]
],
[
[
"## 12.8 The `itertools` module\n\nA set of functions for building efficient iterators over collections.",
"_____no_output_____"
]
],
[
[
"import itertools",
"_____no_output_____"
]
],
[
[
"### 12.8.1 The `count()` function\n\nReturns an iterator that counts from a starting value with a given step, indefinitely.",
"_____no_output_____"
]
],
[
[
"dir(itertools)",
"_____no_output_____"
],
[
"iterador_contador = itertools.count(100)",
"_____no_output_____"
],
[
"type(iterador_contador)",
"_____no_output_____"
],
[
"# iterador_contador() # Attempting to call the iterator raises TypeError: it is not callable",
"_____no_output_____"
],
[
"next(iterador_contador)",
"_____no_output_____"
],
[
"next(iterador_contador)",
"_____no_output_____"
],
[
"next(iterador_contador)",
"_____no_output_____"
],
[
"next(iterador_contador)",
"_____no_output_____"
],
[
"for _ in range(10):\n print(next(iterador_contador))",
"104\n105\n106\n107\n108\n109\n110\n111\n112\n113\n"
],
[
"otros_numeros = [next(iterador_contador) for _ in range(20)]",
"_____no_output_____"
],
[
"len(otros_numeros) # 20",
"_____no_output_____"
],
[
"otros_numeros",
"_____no_output_____"
],
[
"mas_numeros = [next(iterador_contador) for _ in range(20000)]",
"_____no_output_____"
],
[
"len(mas_numeros)",
"_____no_output_____"
],
[
"mas_numeros[-1]",
"_____no_output_____"
],
[
"iterador_pares = itertools.count(100000, 2)",
"_____no_output_____"
],
[
"type(iterador_pares)",
"_____no_output_____"
],
[
"next(iterador_pares)",
"_____no_output_____"
],
[
"next(iterador_pares)",
"_____no_output_____"
],
[
"for _ in range(100):\n next(iterador_pares)",
"_____no_output_____"
],
[
"next(iterador_pares)",
"_____no_output_____"
],
[
"cuenta_regresiva = itertools.count(10, step=-1)",
"_____no_output_____"
],
[
"for _ in range(10):\n print(next(cuenta_regresiva))",
"10\n9\n8\n7\n6\n5\n4\n3\n2\n1\n"
],
[
"for _ in range(10):\n print(next(cuenta_regresiva))",
"0\n-1\n-2\n-3\n-4\n-5\n-6\n-7\n-8\n-9\n"
],
[
"next(cuenta_regresiva)",
"_____no_output_____"
]
],
[
[
"### 12.8.2 The `cycle()` function\n\nIterates over a collection cyclically (repeatedly): once the last element is reached, iteration starts over from the first, indefinitely.",
"_____no_output_____"
]
],
[
[
"primos = [2, 3, 5, 7, 11]",
"_____no_output_____"
],
[
"primos",
"_____no_output_____"
],
[
"len(primos)",
"_____no_output_____"
],
[
"iterador_primos = itertools.cycle(primos)",
"_____no_output_____"
],
[
"type(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
]
],
[
[
"If `next()` is invoked again with `iterador_primos` as its argument, iteration starts over from the first element:",
"_____no_output_____"
]
],
[
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"next(iterador_primos)",
"_____no_output_____"
],
[
"for _ in range(100):\n print(next(iterador_primos), end=' ')",
"3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 3 5 7 11 2 "
]
],
[
[
"Cyclic iteration is also possible over strings:",
"_____no_output_____"
]
],
[
[
"texto = 'WXYZ'",
"_____no_output_____"
],
[
"texto",
"_____no_output_____"
],
[
"iterador_texto = itertools.cycle(texto)",
"_____no_output_____"
],
[
"type(iterador_texto)",
"_____no_output_____"
],
[
"next(iterador_texto)",
"_____no_output_____"
],
[
"next(iterador_texto)",
"_____no_output_____"
],
[
"next(iterador_texto)",
"_____no_output_____"
],
[
"next(iterador_texto)",
"_____no_output_____"
],
[
"next(iterador_texto)",
"_____no_output_____"
],
[
"next(iterador_texto)",
"_____no_output_____"
],
[
"for _ in range(1000):\n print(next(iterador_texto), end=' ')",
"Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W X Y Z W 
X "
],
[
"planetas = ('Mercurio', 'Venus', 'Tierra', 'Marte', 'Júpiter', 'Saturno', 'Urano', 'Neptuno')",
"_____no_output_____"
],
[
"type(planetas)",
"_____no_output_____"
],
[
"iterador_planetas = itertools.cycle(planetas)",
"_____no_output_____"
],
[
"conteo_planetas = len(planetas)",
"_____no_output_____"
],
[
"conteo_planetas",
"_____no_output_____"
],
[
"i = 1",
"_____no_output_____"
],
[
"while i <= conteo_planetas:\n print(next(iterador_planetas))\n \n i += 1",
"Mercurio\nVenus\nTierra\nMarte\nJúpiter\nSaturno\nUrano\nNeptuno\n"
]
],
[
[
"### 12.8.3 The `repeat()` function\n\nRepeats a given value n times (or indefinitely if no count is supplied).",
"_____no_output_____"
]
],
[
[
"lenguaje = 'Python'\nn = 5",
"_____no_output_____"
],
[
"iterador_repeticion = itertools.repeat(lenguaje, n)",
"_____no_output_____"
],
[
"type(iterador_repeticion)",
"_____no_output_____"
],
[
"next(iterador_repeticion)",
"_____no_output_____"
],
[
"next(iterador_repeticion)",
"_____no_output_____"
],
[
"next(iterador_repeticion)",
"_____no_output_____"
],
[
"next(iterador_repeticion)",
"_____no_output_____"
],
[
"next(iterador_repeticion)",
"_____no_output_____"
],
[
"# next(iterador_repeticion) # Raises StopIteration: the iterator's elements are exhausted.",
"_____no_output_____"
],
[
"iterador_repeticion = itertools.repeat(lenguaje, n)",
"_____no_output_____"
],
[
"try:\n for d in iterador_repeticion:\n print(d)\nexcept StopIteration as e:\n print('Tipo de error:', type(e))\n print('Mensaje técnico:', e)",
"Python\nPython\nPython\nPython\nPython\n"
],
[
"iterador_repeticion = itertools.repeat(lenguaje, n)",
"_____no_output_____"
],
[
"try:\n    print(next(iterador_repeticion))\n    print(next(iterador_repeticion))\n    print(next(iterador_repeticion))\n    print(next(iterador_repeticion))\n    print(next(iterador_repeticion))\n    \n    # The next call raises the StopIteration exception:\n    print(next(iterador_repeticion))\nexcept StopIteration as e:\n    print('Tipo de error:', type(e))\n    print('Mensaje técnico:', e)",
"Python\nPython\nPython\nPython\nPython\nTipo de error: <class 'StopIteration'>\nMensaje técnico: \n"
]
],
[
[
"### 12.8.4 The `accumulate()` function\n\nComputes the running (partial) sums of the elements of an iterable (list or tuple).",
"_____no_output_____"
]
],
[
[
"numeros_primos = [13, 7, 2, 19, 5]",
"_____no_output_____"
],
[
"numeros_primos",
"_____no_output_____"
],
[
"type(numeros_primos)",
"_____no_output_____"
]
],
[
[
"13 -> 20 -> 22 -> 41 -> 46",
"_____no_output_____"
]
],
[
[
"resultado = itertools.accumulate(numeros_primos)",
"_____no_output_____"
],
[
"type(resultado)",
"_____no_output_____"
],
[
"resultado",
"_____no_output_____"
],
[
"next(resultado)",
"_____no_output_____"
],
[
"next(resultado)",
"_____no_output_____"
],
[
"next(resultado)",
"_____no_output_____"
],
[
"for n in resultado:\n print(n)",
"41\n46\n"
],
[
"resultado = itertools.accumulate(numeros_primos)",
"_____no_output_____"
],
[
"for r in resultado:\n print(r)",
"13\n20\n22\n41\n46\n"
],
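[
"# Sketch under an assumption: accumulate() accepts an optional binary\n# function as its second argument; with operator.mul it yields running\n# products instead of running sums.\nimport operator\nlist(itertools.accumulate(numeros_primos, operator.mul))  # [13, 91, 182, 3458, 17290]",
"_____no_output_____"
],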
[
"suma = sum(numeros_primos)",
"_____no_output_____"
],
[
"suma # 46",
"_____no_output_____"
]
],
[
[
"### 12.8.5 The `chain()` function\n\nChains several iterables together, yielding the elements of each in turn.",
"_____no_output_____"
]
],
[
[
"letras = 'XYZ'",
"_____no_output_____"
],
[
"simbolos = ('#', '*', '/', '$')",
"_____no_output_____"
],
[
"len(letras)",
"_____no_output_____"
],
[
"len(simbolos)",
"_____no_output_____"
],
[
"letras_simbolos = itertools.chain(letras, simbolos)",
"_____no_output_____"
],
[
"type(letras_simbolos)",
"_____no_output_____"
],
[
"for e in letras_simbolos:\n print(e, end=' ')",
"X Y Z # * / $ "
],
[
"letras_simbolos = itertools.chain(simbolos, letras)",
"_____no_output_____"
],
[
"for e in letras_simbolos:\n print(e, end=' ')",
"# * / $ X Y Z "
],
[
"help(itertools.chain)",
"Help on class chain in module itertools:\n\nclass chain(builtins.object)\n | chain(*iterables) --> chain object\n | \n | Return a chain object whose .__next__() method returns elements from the\n | first iterable until it is exhausted, then elements from the next\n | iterable, until all of the iterables are exhausted.\n | \n | Methods defined here:\n | \n | __getattribute__(self, name, /)\n | Return getattr(self, name).\n | \n | __iter__(self, /)\n | Implement iter(self).\n | \n | __next__(self, /)\n | Implement next(self).\n | \n | __reduce__(...)\n | Return state information for pickling.\n | \n | __setstate__(...)\n | Set state information for unpickling.\n | \n | ----------------------------------------------------------------------\n | Class methods defined here:\n | \n | from_iterable(iterable, /) from builtins.type\n | Alternative chain() constructor taking a single iterable argument that evaluates lazily.\n | \n | ----------------------------------------------------------------------\n | Static methods defined here:\n | \n | __new__(*args, **kwargs) from builtins.type\n | Create and return a new object. See help(type) for accurate signature.\n\n"
],
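[
"# Minimal sketch: the alternative constructor chain.from_iterable()\n# (mentioned in the help text above) takes a single iterable of\n# iterables and flattens it one level.\nlist(itertools.chain.from_iterable([[1, 2], [3], [4, 5]]))  # [1, 2, 3, 4, 5]",
"_____no_output_____"
],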
[
"constantes = [3.1415, 2.7172, 1.4142]",
"_____no_output_____"
],
[
"type(constantes)",
"_____no_output_____"
],
[
"iterables_encadenados = itertools.chain(letras, constantes, simbolos)",
"_____no_output_____"
],
[
"for e in iterables_encadenados:\n print(e, end=' ')",
"X Y Z 3.1415 2.7172 1.4142 # * / $ "
]
],
[
[
"### 12.8.6 The `compress()` function\n\nSelects values from an iterable according to a selector that indicates which values to take (extract).",
"_____no_output_____"
]
],
[
[
"lenguajes = ('Go', 'C++', 'Java', 'PHP', 'Kotlin', 'JavaScript', 'C', 'Python')",
"_____no_output_____"
],
[
"selector = [0, 0, 1, 0, 0, 1, 0, 1]",
"_____no_output_____"
],
[
"lenguajes_2021 = itertools.compress(lenguajes, selector)",
"_____no_output_____"
],
[
"for l in lenguajes_2021:\n print(l)",
"Java\nJavaScript\nPython\n"
],
[
"selector = [False, False, True, False, False, True, False, True]",
"_____no_output_____"
],
[
"lenguajes_2021 = itertools.compress(lenguajes, selector)",
"_____no_output_____"
],
[
"for l in lenguajes_2021:\n print(l)",
"Java\nJavaScript\nPython\n"
]
],
[
[
"### 12.8.7 The `dropwhile()` function\n\nDrops elements from the start of an iterable while a given condition holds, then yields every remaining element.",
"_____no_output_____"
]
],
[
[
"lenguajes = ['C++', 'C', 'PHP', 'Python', 'JavaScript', 'Go', 'Java', 'Kotlin']",
"_____no_output_____"
],
[
"lenguajes",
"_____no_output_____"
],
[
"len(lenguajes)",
"_____no_output_____"
],
[
"resultado = itertools.dropwhile(lambda l: len(l) < 4, lenguajes)",
"_____no_output_____"
],
[
"# Python, JavaScript, Go, Java, Kotlin\n\nfor r in resultado:\n print(r, end=' ')",
"Python JavaScript Go Java Kotlin "
]
],
[
[
"### 12.8.8 The `filterfalse()` function\n\nKeeps only those values for which the predicate evaluates to `False` (or a falsy value).",
"_____no_output_____"
]
],
[
[
"lenguajes",
"_____no_output_____"
],
[
"filtro_lenguajes = itertools.filterfalse(lambda l: len(l) < 4, lenguajes)",
"_____no_output_____"
],
[
"for c in filtro_lenguajes:\n print(c)",
"Python\nJavaScript\nJava\nKotlin\n"
],
[
"for c in itertools.filterfalse(lambda d: d, [True, False, True, False, None, '', \"\"]):\n print(c)",
"False\nFalse\nNone\n\n\n"
]
],
[
[
"### 12.8.9 The `takewhile()` function\n\nExtracts elements from an iterable while a condition holds.\n\nThe scan runs left to right and stops at the first element that fails the condition given as the first argument.",
"_____no_output_____"
]
],
[
[
"lenguajes",
"_____no_output_____"
],
[
"resultado = itertools.takewhile(lambda l: len(l) <= 4, lenguajes)",
"_____no_output_____"
],
[
"type(resultado)",
"_____no_output_____"
],
[
"for l in resultado:\n print(l, end=' ')",
"C++ C PHP "
]
],
[
[
"## 12.9 ZIP file compression\n\nIn Python we can compress and decompress files through the `zipfile` module.",
"_____no_output_____"
]
],
[
[
"from zipfile import ZipFile ",
"_____no_output_____"
],
[
"help(ZipFile)",
"Help on class ZipFile in module zipfile:\n\nclass ZipFile(builtins.object)\n | ZipFile(file, mode='r', compression=0, allowZip64=True, compresslevel=None, *, strict_timestamps=True)\n | \n | Class with methods to open, read, write, close, list zip files.\n | \n | z = ZipFile(file, mode=\"r\", compression=ZIP_STORED, allowZip64=True,\n | compresslevel=None)\n | \n | file: Either the path to the file, or a file-like object.\n | If it is a path, the file will be opened and closed by ZipFile.\n | mode: The mode can be either read 'r', write 'w', exclusive create 'x',\n | or append 'a'.\n | compression: ZIP_STORED (no compression), ZIP_DEFLATED (requires zlib),\n | ZIP_BZIP2 (requires bz2) or ZIP_LZMA (requires lzma).\n | allowZip64: if True ZipFile will create files with ZIP64 extensions when\n | needed, otherwise it will raise an exception when this would\n | be necessary.\n | compresslevel: None (default for the given compression type) or an integer\n | specifying the level to pass to the compressor.\n | When using ZIP_STORED or ZIP_LZMA this keyword has no effect.\n | When using ZIP_DEFLATED integers 0 through 9 are accepted.\n | When using ZIP_BZIP2 integers 1 through 9 are accepted.\n | \n | Methods defined here:\n | \n | __del__(self)\n | Call the \"close()\" method in case the user forgot.\n | \n | __enter__(self)\n | \n | __exit__(self, type, value, traceback)\n | \n | __init__(self, file, mode='r', compression=0, allowZip64=True, compresslevel=None, *, strict_timestamps=True)\n | Open the ZIP file with mode read 'r', write 'w', exclusive create 'x',\n | or append 'a'.\n | \n | __repr__(self)\n | Return repr(self).\n | \n | close(self)\n | Close the file, and for mode 'w', 'x' and 'a' write the ending\n | records.\n | \n | extract(self, member, path=None, pwd=None)\n | Extract a member from the archive to the current working directory,\n | using its full name. Its file information is extracted as accurately\n | as possible. 
`member' may be a filename or a ZipInfo object. You can\n | specify a different directory using `path'.\n | \n | extractall(self, path=None, members=None, pwd=None)\n | Extract all members from the archive to the current working\n | directory. `path' specifies a different directory to extract to.\n | `members' is optional and must be a subset of the list returned\n | by namelist().\n | \n | getinfo(self, name)\n | Return the instance of ZipInfo given 'name'.\n | \n | infolist(self)\n | Return a list of class ZipInfo instances for files in the\n | archive.\n | \n | namelist(self)\n | Return a list of file names in the archive.\n | \n | open(self, name, mode='r', pwd=None, *, force_zip64=False)\n | Return file-like object for 'name'.\n | \n | name is a string for the file name within the ZIP file, or a ZipInfo\n | object.\n | \n | mode should be 'r' to read a file already in the ZIP file, or 'w' to\n | write to a file newly added to the archive.\n | \n | pwd is the password to decrypt files (only used for reading).\n | \n | When writing, if the file size is not known in advance but may exceed\n | 2 GiB, pass force_zip64 to use the ZIP64 format, which can handle large\n | files. If the size is known in advance, it is best to pass a ZipInfo\n | instance for name, with zinfo.file_size set.\n | \n | printdir(self, file=None)\n | Print a table of contents for the zip file.\n | \n | read(self, name, pwd=None)\n | Return file bytes for name.\n | \n | setpassword(self, pwd)\n | Set default password for encrypted files.\n | \n | testzip(self)\n | Read all the files and check the CRC.\n | \n | write(self, filename, arcname=None, compress_type=None, compresslevel=None)\n | Put the bytes from filename into the archive under the name\n | arcname.\n | \n | writestr(self, zinfo_or_arcname, data, compress_type=None, compresslevel=None)\n | Write a file into the archive. 
The contents is 'data', which\n | may be either a 'str' or a 'bytes' instance; if it is a 'str',\n | it is encoded as UTF-8 first.\n | 'zinfo_or_arcname' is either a ZipInfo instance or\n | the name of the file in the archive.\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | comment\n | The comment text associated with the ZIP file.\n | \n | ----------------------------------------------------------------------\n | Data and other attributes defined here:\n | \n | fp = None\n\n"
],
[
"archivo_zip = ZipFile('T001-12-archivos.zip', 'w')",
"_____no_output_____"
],
[
"archivo_zip.write('T001-09-archivos.txt')",
"_____no_output_____"
],
[
"archivo_zip.write('T001-09-debate.csv')",
"_____no_output_____"
],
[
"archivo_zip.write('T001-09-Archivos.ipynb')",
"_____no_output_____"
],
[
"archivo_zip.close()",
"_____no_output_____"
]
],
[
[
"Open a ZIP file to list its contents:",
"_____no_output_____"
]
],
[
[
"nombre_archivo_zip = 'T001-12-archivos.zip'",
"_____no_output_____"
],
[
"with ZipFile(nombre_archivo_zip, 'r') as f:\n listado_archivos = f.namelist()\n \n for a in listado_archivos:\n print(a)",
"T001-09-archivos.txt\nT001-09-debate.csv\nT001-09-Archivos.ipynb\n"
]
],
[
[
"Open a ZIP file and obtain information such as:\n\n1. File name (including its path)\n2. Size in bytes\n3. Date\n4. Compressed size",
"_____no_output_____"
]
],
[
[
"with ZipFile(nombre_archivo_zip, 'r') as f:\n    \n    listado = f.infolist()\n    \n    for a in listado:\n        print(f'Nombre: {a.filename}')\n        print(f'Tamaño original: {a.file_size}')\n        print(f'Fecha: {a.date_time}')\n        print(f'Tamaño comprimido: {a.compress_size}')\n        print()",
"Nombre: T001-09-archivos.txt\nTamaño original: 1463\nFecha: (2020, 11, 13, 14, 20, 20)\nTamaño comprimido: 1463\n\nNombre: T001-09-debate.csv\nTamaño original: 2071995\nFecha: (2020, 12, 5, 12, 10, 10)\nTamaño comprimido: 2071995\n\nNombre: T001-09-Archivos.ipynb\nTamaño original: 120332\nFecha: (2020, 12, 5, 12, 11, 10)\nTamaño comprimido: 120332\n\n"
]
],
[
[
"Extract the contents of a ZIP file:",
"_____no_output_____"
]
],
[
[
"nombre_archivo_zip",
"_____no_output_____"
],
[
"with ZipFile(nombre_archivo_zip, 'r') as f:\n f.extractall('archivos')",
"_____no_output_____"
]
],
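[
[
"# Hedged sketch: the archive above was written with the default ZIP_STORED\n# (no compression), which is why compress_size equals file_size in the\n# listing. Passing compression=ZIP_DEFLATED (requires zlib) actually\n# compresses the data. The output file name below is hypothetical.\nfrom zipfile import ZIP_DEFLATED\n\nwith ZipFile('T001-12-archivos-deflate.zip', 'w', compression=ZIP_DEFLATED) as f:\n    f.write('T001-09-archivos.txt')",
"_____no_output_____"
]
],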
[
[
"## 12.10 Concurrent execution module\n\nWe will work with the `Thread` class.\n\nThrough this class we can run tasks in the background.",
"_____no_output_____"
]
],
[
[
"from threading import Thread",
"_____no_output_____"
],
[
"help(Thread)",
"Help on class Thread in module threading:\n\nclass Thread(builtins.object)\n | Thread(group=None, target=None, name=None, args=(), kwargs=None, *, daemon=None)\n | \n | A class that represents a thread of control.\n | \n | This class can be safely subclassed in a limited fashion. There are two ways\n | to specify the activity: by passing a callable object to the constructor, or\n | by overriding the run() method in a subclass.\n | \n | Methods defined here:\n | \n | __init__(self, group=None, target=None, name=None, args=(), kwargs=None, *, daemon=None)\n | This constructor should always be called with keyword arguments. Arguments are:\n | \n | *group* should be None; reserved for future extension when a ThreadGroup\n | class is implemented.\n | \n | *target* is the callable object to be invoked by the run()\n | method. Defaults to None, meaning nothing is called.\n | \n | *name* is the thread name. By default, a unique name is constructed of\n | the form \"Thread-N\" where N is a small decimal number.\n | \n | *args* is the argument tuple for the target invocation. Defaults to ().\n | \n | *kwargs* is a dictionary of keyword arguments for the target\n | invocation. Defaults to {}.\n | \n | If a subclass overrides the constructor, it must make sure to invoke\n | the base class constructor (Thread.__init__()) before doing anything\n | else to the thread.\n | \n | __repr__(self)\n | Return repr(self).\n | \n | getName(self)\n | \n | isAlive(self)\n | Return whether the thread is alive.\n | \n | This method is deprecated, use is_alive() instead.\n | \n | isDaemon(self)\n | \n | is_alive(self)\n | Return whether the thread is alive.\n | \n | This method returns True just before the run() method starts until just\n | after the run() method terminates. 
The module function enumerate()\n | returns a list of all alive threads.\n | \n | join(self, timeout=None)\n | Wait until the thread terminates.\n | \n | This blocks the calling thread until the thread whose join() method is\n | called terminates -- either normally or through an unhandled exception\n | or until the optional timeout occurs.\n | \n | When the timeout argument is present and not None, it should be a\n | floating point number specifying a timeout for the operation in seconds\n | (or fractions thereof). As join() always returns None, you must call\n | is_alive() after join() to decide whether a timeout happened -- if the\n | thread is still alive, the join() call timed out.\n | \n | When the timeout argument is not present or None, the operation will\n | block until the thread terminates.\n | \n | A thread can be join()ed many times.\n | \n | join() raises a RuntimeError if an attempt is made to join the current\n | thread as that would cause a deadlock. It is also an error to join() a\n | thread before it has been started and attempts to do so raises the same\n | exception.\n | \n | run(self)\n | Method representing the thread's activity.\n | \n | You may override this method in a subclass. The standard run() method\n | invokes the callable object passed to the object's constructor as the\n | target argument, if any, with sequential and keyword arguments taken\n | from the args and kwargs arguments, respectively.\n | \n | setDaemon(self, daemonic)\n | \n | setName(self, name)\n | \n | start(self)\n | Start the thread's activity.\n | \n | It must be called at most once per thread object. 
It arranges for the\n | object's run() method to be invoked in a separate thread of control.\n | \n | This method will raise a RuntimeError if called more than once on the\n | same thread object.\n | \n | ----------------------------------------------------------------------\n | Readonly properties defined here:\n | \n | ident\n | Thread identifier of this thread or None if it has not been started.\n | \n | This is a nonzero integer. See the get_ident() function. Thread\n | identifiers may be recycled when a thread exits and another thread is\n | created. The identifier is available even after the thread has exited.\n | \n | native_id\n | Native integral thread ID of this thread, or None if it has not been started.\n | \n | This is a non-negative integer. See the get_native_id() function.\n | This represents the Thread ID as reported by the kernel.\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | __dict__\n | dictionary for instance variables (if defined)\n | \n | __weakref__\n | list of weak references to the object (if defined)\n | \n | daemon\n | A boolean value indicating whether this thread is a daemon thread.\n | \n | This must be set before start() is called, otherwise RuntimeError is\n | raised. Its initial value is inherited from the creating thread; the\n | main thread is not a daemon thread and therefore all threads created in\n | the main thread default to daemon = False.\n | \n | The entire Python program exits when only daemon threads are left.\n | \n | name\n | A string used for identification purposes only.\n | \n | It has no semantics. Multiple threads may be given the same name. The\n | initial name is set by the constructor.\n\n"
],
[
"from time import sleep",
"_____no_output_____"
],
[
"def mostrar_mensaje_con_retardo(mensaje, segundos=5):\n sleep(segundos)\n print(mensaje)",
"_____no_output_____"
],
[
"thread_mensaje = Thread(target=mostrar_mensaje_con_retardo, args=('¡Python es tremendo!',))",
"_____no_output_____"
],
[
"thread_mensaje.start()",
"_____no_output_____"
],
[
"# thread_mensaje.start() # Un thread se puede ejecutar sólo una vez.",
"_____no_output_____"
],
[
"print('OK')",
"OK\n¡Python es tremendo!\n"
],
[
"thread_mensaje_2 = Thread(target=mostrar_mensaje_con_retardo, args=('¡Python es lenguaje de programación!', 20))",
"_____no_output_____"
],
[
"thread_mensaje_2.start()",
"_____no_output_____"
],
[
"for i in range(10):\n print(i)",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n¡Python es lenguaje de programación!\n"
]
],
[
[
"Ejecución de múltiples threads:",
"_____no_output_____"
]
],
[
[
"threads = []",
"_____no_output_____"
],
[
"threads.append(Thread(target=mostrar_mensaje_con_retardo, args=('¡Hola, Python!', 25)))",
"_____no_output_____"
],
[
"threads.append(Thread(target=mostrar_mensaje_con_retardo, args=('¡Hola, Mundo!', 15)))",
"_____no_output_____"
],
[
"len(threads)",
"_____no_output_____"
],
[
"for t in threads:\n t.start()",
"¡Hola, Mundo!\n"
],
[
"for c in 'Python':\n print(c)",
"P\ny\nt\nh\no\nn\n¡Hola, Python!\n"
]
],
[
[
"Ejemplo:\n\nEjecución de dos threads que realizan dos tareas diferentes.",
"_____no_output_____"
]
],
[
[
"def cubo(n):\n for i in range(1, n + 1):\n sleep(1)\n print(f'{i} ^ 3 = {i**3}')",
"_____no_output_____"
],
[
"def cuadrado(n):\n for i in range(1, n + 1):\n sleep(1)\n print(f'{i} ^ 2 = {i**2}')",
"_____no_output_____"
],
[
"thread_cubo = Thread(target=cubo, args=(10,))",
"_____no_output_____"
],
[
"thread_cuadrado = Thread(target=cuadrado, args=(10,))",
"_____no_output_____"
],
[
"thread_cubo.start()\nthread_cuadrado.start()",
"1 ^ 3 = 11 ^ 2 = 1\n\n2 ^ 2 = 4\n2 ^ 3 = 8\n3 ^ 2 = 9\n3 ^ 3 = 27\n4 ^ 3 = 644 ^ 2 = 16\n\n5 ^ 3 = 1255 ^ 2 = 25\n\n6 ^ 2 = 36\n6 ^ 3 = 216\n7 ^ 3 = 3437 ^ 2 = 49\n\n8 ^ 3 = 5128 ^ 2 = 64\n\n9 ^ 2 = 819 ^ 3 = 729\n\n10 ^ 3 = 1000\n10 ^ 2 = 100\n"
]
],
[
[
"Ejemplo:\n\nSimular un reloj a través de una función que se ejecute en un thread (hilo) independiente.",
"_____no_output_____"
]
],
[
[
"import datetime\nimport sys\nimport time",
"_____no_output_____"
],
[
"def reloj():\n while True:\n tiempo_transcurrido = time.time()\n fecha = datetime.datetime.fromtimestamp(tiempo_transcurrido)\n \n hora_formateada = fecha.strftime('%H:%M:%S')\n print(hora_formateada)\n \n sys.stdout.flush()\n \n time.sleep(1)",
"_____no_output_____"
],
[
"reloj()",
"09:05:20\n09:05:21\n09:05:22\n09:05:23\n09:05:24\n09:05:25\n09:05:26\n09:05:27\n09:05:28\n09:05:29\n09:05:30\n09:05:31\n09:05:32\n09:05:33\n09:05:34\n09:05:35\n09:05:36\n09:05:37\n09:05:38\n09:05:39\n09:05:40\n09:05:41\n09:05:42\n09:05:43\n09:05:44\n09:05:45\n09:05:46\n09:05:47\n09:05:48\n09:05:49\n09:05:50\n09:05:51\n09:05:52\n09:05:53\n09:05:54\n09:05:55\n09:05:56\n09:05:57\n09:05:58\n09:05:59\n09:06:00\n09:06:01\n09:06:02\n09:06:03\n09:06:04\n09:06:05\n09:06:06\n09:06:07\n09:06:08\n09:06:09\n09:06:10\n09:06:11\n09:06:12\n"
],
[
"print('OK')",
"OK\n"
],
[
"reloj()",
"06:51:37\n06:51:38\n06:51:39\n06:51:40\n06:51:41\n06:51:42\n06:51:43\n06:51:44\n06:51:45\n06:51:46\n06:51:47\n06:51:48\n06:51:49\n06:51:50\n06:51:51\n06:51:52\n06:51:53\n06:51:54\n06:51:55\n06:51:56\n06:51:57\n06:51:58\n06:51:59\n06:52:00\n06:52:01\n06:52:02\n06:52:03\n06:52:04\n06:52:05\n06:52:06\n06:52:07\n06:52:08\n06:52:09\n06:52:10\n06:52:11\n06:52:12\n06:52:13\n06:52:14\n06:52:15\n06:52:16\n06:52:17\n06:52:18\n06:52:19\n06:52:20\n06:52:21\n"
],
[
"print('Python es tremendo')",
"_____no_output_____"
],
[
"thread_reloj = Thread(target=reloj)",
"_____no_output_____"
],
[
"thread_reloj.start()",
"06:56:14\n06:56:15\n06:56:16\n06:56:17\n06:56:18\n06:56:19\n06:56:20\n06:56:21\n06:56:22\n06:56:23\n06:56:24\n06:56:25\n06:56:26\n06:56:27\n06:56:28\n06:56:29\n06:56:30\n06:56:31\n06:56:32\n06:56:33\n06:56:34\n06:56:35\n"
],
[
"print('Python es tremendo')",
"Python es tremendo\n06:56:36\n06:56:37\n06:56:38\n06:56:39\n06:56:40\n06:56:41\n06:56:42\n06:56:43\n06:56:44\n06:56:45\n06:56:46\n06:56:47\n06:56:48\n06:56:49\n06:56:50\n06:56:51\n06:56:52\n06:56:53\n06:56:54\n06:56:55\n06:56:56\n06:56:57\n06:56:58\n06:56:59\n06:57:00\n06:57:01\n06:57:02\n06:57:03\n06:57:04\n06:57:05\n06:57:06\n06:57:07\n06:57:08\n06:57:09\n06:57:10\n"
],
[
"for i in range(100):\n print(i, end=' ')",
"0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 06:57:11\n06:57:12\n06:57:13\n06:57:14\n06:57:15\n06:57:16\n06:57:17\n06:57:18\n06:57:19\n06:57:20\n06:57:21\n06:57:22\n06:57:23\n06:57:24\n06:57:25\n06:57:26\n06:57:27\n06:57:28\n06:57:29\n06:57:30\n06:57:31\n06:57:32\n06:57:33\n06:57:34\n06:57:35\n06:57:36\n06:57:37\n06:57:38\n06:57:39\n06:57:40\n06:57:41\n06:57:42\n06:57:43\n06:57:44\n06:57:45\n06:57:46\n06:57:47\n06:57:48\n06:57:49\n06:57:50\n06:57:51\n06:57:52\n06:57:53\n06:57:54\n06:57:55\n06:57:56\n"
]
],
[
[
"Crear una jerarquía de herencia a partir de la clase padre `Thread`.\n\nEl propósito u objetivo es implementar un thread (hilo) dedicado para un reloj.",
"_____no_output_____"
]
],
[
[
"from threading import Thread\nimport sys\nimport time\nimport datetime\n\nclass Reloj(Thread):\n \n def __init__(self):\n Thread.__init__(self)\n \n def run(self):\n while True:\n tiempo_transcurrido = time.time()\n fecha = datetime.datetime.fromtimestamp(tiempo_transcurrido)\n\n hora_formateada = fecha.strftime('%H:%M:%S')\n print(hora_formateada)\n\n sys.stdout.flush()\n\n time.sleep(1)",
"_____no_output_____"
],
[
"thread_reloj = Reloj()",
"17:39:17\n17:39:18\n17:39:19\n17:39:20\n17:39:21\n"
],
[
"print('OK')",
"OK\n17:40:35\n17:40:36\n17:40:37\n17:40:38\n17:40:39\n17:40:40\n17:40:41\n17:40:42\n17:40:43\n17:40:44\n17:40:45\n17:40:46\n17:40:47\n17:40:48\n17:40:49\n17:40:50\n17:40:51\n17:40:52\n17:40:53\n17:40:54\n"
],
[
"thread_reloj = Reloj()",
"_____no_output_____"
],
[
"dir(Reloj)",
"_____no_output_____"
],
[
"thread_reloj.start()",
"17:44:01\n17:44:02\n17:44:03\n17:44:04\n17:44:05\n17:44:06\n17:44:07\n17:44:08\n17:44:09\n17:44:10\n17:44:11\n17:44:12\n17:44:13\n17:44:14\n17:44:15\n17:44:16\n17:44:17\n17:44:18\n17:44:19\n17:44:20\n17:44:21\n17:44:22\n17:44:23\n17:44:24\n17:44:25\n17:44:26\n17:44:27\n17:44:28\n17:44:29\n17:44:30\n17:44:31\n17:44:32\n17:44:33\n17:44:34\n17:44:35\n17:44:36\n17:44:37\n17:44:38\n17:44:39\n17:44:40\n17:44:41\n17:44:42\n17:44:43\n17:44:44\n17:44:45\n17:44:46\n17:44:47\n17:44:48\n17:44:49\n17:44:50\n17:44:51\n17:44:52\n17:44:53\n17:44:54\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
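The threading record above quotes the `threading` docs on `join()` — it always returns `None`, so `is_alive()` must be checked afterwards to detect a timeout — and then demonstrates `start()`. A minimal stdlib sketch of that timeout behaviour (the worker function and message value here are made-up, not from the notebook) could look like:

```python
from threading import Thread
from time import sleep

results = []

def delayed_append(value, seconds=0.5):
    # Stand-in for the tutorial's delayed-message worker (hypothetical).
    sleep(seconds)
    results.append(value)

t = Thread(target=delayed_append, args=("done",))
t.start()

# join() always returns None, so per the quoted docs we check
# is_alive() afterwards to know whether the wait timed out.
t.join(timeout=0.05)
timed_out = t.is_alive()

t.join()  # now block until the worker really finishes
print(timed_out, results)
```

As the quoted docs warn, calling `start()` a second time on `t` would raise `RuntimeError`.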
ecd84bac1ee4067e461d37e16043f8d19e1e4c53 | 46,610 | ipynb | Jupyter Notebook | module3-make-explanatory-visualizations/LS_DS_123_Make_explanatory_visualizations.ipynb | Rawab/DS-Sprint-02-Storytelling-With-Data | 8eca6b100549e5a70e12daabf72a78c5c3d7c201 | [
"MIT"
] | null | null | null | module3-make-explanatory-visualizations/LS_DS_123_Make_explanatory_visualizations.ipynb | Rawab/DS-Sprint-02-Storytelling-With-Data | 8eca6b100549e5a70e12daabf72a78c5c3d7c201 | [
"MIT"
] | null | null | null | module3-make-explanatory-visualizations/LS_DS_123_Make_explanatory_visualizations.ipynb | Rawab/DS-Sprint-02-Storytelling-With-Data | 8eca6b100549e5a70e12daabf72a78c5c3d7c201 | [
"MIT"
] | null | null | null | 274.176471 | 41,200 | 0.904548 | [
[
[
"<a href=\"https://colab.research.google.com/github/Rawab/DS-Sprint-02-Storytelling-With-Data/blob/master/module3-make-explanatory-visualizations/LS_DS_123_Make_explanatory_visualizations.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"_Lambda School Data Science_\n\n# Choose appropriate visualizations\n\n\nRecreate this [example by FiveThirtyEight:](https://fivethirtyeight.com/features/al-gores-new-movie-exposes-the-big-flaw-in-online-movie-ratings/)\n\n\n\nUsing this data:\n\nhttps://github.com/fivethirtyeight/data/tree/master/inconvenient-sequel\n\n### Stretch goals\n\nRecreate more examples from [FiveThityEight's shared data repository](https://data.fivethirtyeight.com/).\n\nFor example:\n- [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/) ([`altair`](https://altair-viz.github.io/gallery/index.html#maps))\n- [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) ([`statsmodels`](https://www.statsmodels.org/stable/index.html))",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv')",
"_____no_output_____"
],
[
"df.timestamp = pd.to_datetime(df.timestamp)\ndf.set_index('timestamp', inplace=True)",
"_____no_output_____"
],
[
"pct_respondents = df[df['category'] == 'IMDb users']['respondents'] / df[df['category'] == 'IMDb users']['respondents'].max()",
"_____no_output_____"
],
[
"import matplotlib.ticker as ticker",
"_____no_output_____"
],
[
"plt.style.use('fivethirtyeight')\nax = pct_respondents.plot(color = 'skyblue', ylim = (0,1.03), legend=False)\nvals = ax.get_yticks()\nax.set_yticklabels(['{:,.0%}'.format(x) for x in vals])\nax.set(xlabel='')\nax.set_xticklabels(['','JULY 23, 2017', 'JULY 30', 'AUG. 6', 'AUG. 13', 'AUG. 20', 'AUG. 27'])\nax.set_yticklabels(['0 ', '20 ', '40 ', '60 ', '80 ', '100%'])\nax.tick_params(axis='x',labelrotation=0)\n\nlong_title = \"'An Inconvenient Sequel' was doomed before its release\"\nlong_title += '\\nShare of tickets sold and IMDb reviews for \"An Inconvenient Sequel\"'\nax.set_title(y=1.3, label = long_title);\n# ax.text(x=0, y=0, s = 'Share of tickets sold and IMDb reviews for \"An Inconvenient Sequel\"')\n# ax.text(x=0, y=0, s = 'Posted through Aug.27, by day')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
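The FiveThirtyEight notebook above derives `pct_respondents` by dividing each day's IMDb respondent count by the maximum count. The same share-of-maximum normalization and percent tick formatting can be sketched in plain Python (the counts below are invented, not the actual ratings data):

```python
respondents = [120, 480, 960, 2400, 1800]  # hypothetical daily counts

# Normalize to a share of the busiest day, as the notebook does.
peak = max(respondents)
pct_respondents = [r / peak for r in respondents]

# Format each share the way the notebook formats its y-axis ticks.
labels = ['{:,.0%}'.format(p) for p in pct_respondents]
print(labels)
```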
ecd86164ea40257c44f8a57f8a9c96aa7b92c888 | 6,728 | ipynb | Jupyter Notebook | hw3/.ipynb_checkpoints/Untitled-checkpoint.ipynb | erfanMhi/Deep-Reinforcement-Learning-CS285-Pytroch | 6da04f367e52a451c202ae7e5477994c1d149baf | [
"MIT"
] | 91 | 2020-06-13T16:26:42.000Z | 2022-03-31T02:49:30.000Z | hw3/.ipynb_checkpoints/Untitled-checkpoint.ipynb | erfanMhi/Deep-Reinforcement-Learning-CS285-Pytroch | 6da04f367e52a451c202ae7e5477994c1d149baf | [
"MIT"
] | 6 | 2020-07-26T15:44:36.000Z | 2022-02-10T02:15:10.000Z | hw3/.ipynb_checkpoints/Untitled-checkpoint.ipynb | erfanMhi/Deep-Reinforcement-Learning-CS285-Pytroch | 6da04f367e52a451c202ae7e5477994c1d149baf | [
"MIT"
] | 16 | 2020-08-04T01:17:45.000Z | 2022-02-24T04:51:41.000Z | 23.041096 | 267 | 0.472503 | [
[
[
"import cs285.infrastructure.torch_utils as tu",
"_____no_output_____"
],
[
"import numpy as np\ntu.torch_one_hot(([2, 5, 8, 9]), 10)",
"_____no_output_____"
]
],
[
[
"# ",
"_____no_output_____"
]
],
[
[
"(np.array([5, 6])).dtype == int",
"_____no_output_____"
],
[
"import torch\ntorch.tensor(([2, 3])).dtype",
"_____no_output_____"
],
[
"torch.mm(torch.tensor([[2, 3], [1, 4]]), torch.tensor([[2, 3], [4, 6]]))",
"_____no_output_____"
],
[
"torch.tensor([[2, 3], [4, 6]]).shape",
"_____no_output_____"
],
[
"import tensorflow as tf",
"/home/oriea/anaconda3/envs/cs285_env/lib/python3.5/site-packages/google/protobuf/__init__.py:37: UserWarning: Module cs285 was already imported from None, but /home/oriea/Codes/github/Deep-Reinforcement-Learning-CS285-Pytorch/hw3 is being added to sys.path\n __import__('pkg_resources').declare_namespace(__name__)\n"
],
[
"tf.enable_eager_execution()",
"_____no_output_____"
],
[
"a = torch.rand(10, 9)\na",
"_____no_output_____"
],
[
"c = torch.stack([torch.arange(10), torch.argmax(a, dim=1)])\na[c.numpy().tolist()]",
"_____no_output_____"
],
[
"def gather_nd(params, indices):\n \"\"\"params is of \"n\" dimensions and has size [x1, x2, x3, ..., xn], indices is of 2 dimensions and has size [num_samples, m] (m <= n)\"\"\"\n assert type(indices) == torch.Tensor\n return params[indices.transpose(0,1).long().numpy().tolist()]",
"_____no_output_____"
],
[
"gather_nd(a, c)",
"_____no_output_____"
],
[
"tf.gather_nd(a.numpy(), c.numpy())",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
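The `gather_nd` helper in the record above mimics `tf.gather_nd` via advanced indexing on a transposed index tensor. The same gather semantics — each row of `indices` addresses one element (or sub-array) of `params` — can be sketched without torch or TensorFlow on nested lists; this is an illustrative stand-in, not the notebook's implementation:

```python
def gather_nd_list(params, indices):
    """Pick params[i0][i1]... for each index row in `indices`.

    `indices` has shape [num_samples, m]; each row walks m levels
    into the nested list `params`, so m < rank(params) returns slices.
    """
    out = []
    for index in indices:
        value = params
        for i in index:
            value = value[i]
        out.append(value)
    return out

matrix = [[1, 2, 3], [4, 5, 6]]
print(gather_nd_list(matrix, [[0, 2], [1, 0]]))
```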
ecd861ec2fd69ddf5671e156c2b0ba255a9e7ea1 | 4,367 | ipynb | Jupyter Notebook | tutorials/022 - Writing Partitions Concurrently.ipynb | conley/aws-data-wrangler | 781b7731aaea38d4ea027291cef2bc3e138583ee | [
"Apache-2.0"
] | 1 | 2021-09-07T23:08:20.000Z | 2021-09-07T23:08:20.000Z | tutorials/022 - Writing Partitions Concurrently.ipynb | chandrasekarvempallee/aws-data-wrangler | 1b8ffa5c81ac05a4d3c55bfeab6138b25f87245d | [
"Apache-2.0"
] | null | null | null | tutorials/022 - Writing Partitions Concurrently.ipynb | chandrasekarvempallee/aws-data-wrangler | 1b8ffa5c81ac05a4d3c55bfeab6138b25f87245d | [
"Apache-2.0"
] | null | null | null | 22.626943 | 184 | 0.502862 | [
[
[
"[](https://github.com/awslabs/aws-data-wrangler)\n\n# 22 - Writing Partitions Concurrently\n\n* `concurrent_partitioning` argument:\n\n If True will increase the parallelism level during the partitions writing. It will decrease the\n writing time and increase the memory usage.\n\n*P.S. Check the [function API doc](https://aws-data-wrangler.readthedocs.io/en/2.11.0/api.html) to see it has some argument that can be configured through Global configurations.*",
"_____no_output_____"
]
],
[
[
"%reload_ext memory_profiler\n\nimport awswrangler as wr",
"_____no_output_____"
]
],
[
[
"## Enter your bucket name:",
"_____no_output_____"
]
],
[
[
"import getpass\nbucket = getpass.getpass()\npath = f\"s3://{bucket}/data/\"",
" ············\n"
]
],
[
[
"## Reading 4 GB of CSV from NOAA's historical data and creating a year column",
"_____no_output_____"
]
],
[
[
"noaa_path = \"s3://noaa-ghcn-pds/csv/193\"\n\ncols = [\"id\", \"dt\", \"element\", \"value\", \"m_flag\", \"q_flag\", \"s_flag\", \"obs_time\"]\ndates = [\"dt\", \"obs_time\"]\ndtype = {x: \"category\" for x in [\"element\", \"m_flag\", \"q_flag\", \"s_flag\"]}\n\ndf = wr.s3.read_csv(noaa_path, names=cols, parse_dates=dates, dtype=dtype)\n\ndf[\"year\"] = df[\"dt\"].dt.year\n\nprint(f\"Number of rows: {len(df.index)}\")\nprint(f\"Number of columns: {len(df.columns)}\")",
"Number of rows: 125407761\nNumber of columns: 9\n"
]
],
[
[
"## Default Writing",
"_____no_output_____"
]
],
[
[
"%%time\n%%memit\n\nwr.s3.to_parquet(\n df=df,\n path=path,\n dataset=True,\n mode=\"overwrite\",\n partition_cols=[\"year\"],\n);",
"peak memory: 22169.04 MiB, increment: 11119.68 MiB\nCPU times: user 49 s, sys: 12.5 s, total: 1min 1s\nWall time: 1min 11s\n"
]
],
[
[
"## Concurrent Partitioning (Decreasing writing time, but increasing memory usage)",
"_____no_output_____"
]
],
[
[
"%%time\n%%memit\n\nwr.s3.to_parquet(\n df=df,\n path=path,\n dataset=True,\n mode=\"overwrite\",\n partition_cols=[\"year\"],\n concurrent_partitioning=True # <-----\n);",
"peak memory: 27819.48 MiB, increment: 15743.30 MiB\nCPU times: user 52.3 s, sys: 13.6 s, total: 1min 5s\nWall time: 41.6 s\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
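The tutorial's `concurrent_partitioning=True` flag parallelizes the per-partition writes at the cost of extra memory. The idea can be sketched with the standard library alone — one concurrent worker per partition value, writing Hive-style folders to a local temp directory instead of S3 (the column names and rows here are made-up):

```python
import csv
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

rows = [
    {"year": 1936, "value": 10},
    {"year": 1936, "value": 12},
    {"year": 1937, "value": 7},
]

def write_partition(root, year, part_rows):
    # Hive-style layout, like the dataset writer: <root>/year=<year>/part.csv
    folder = os.path.join(root, f"year={year}")
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, "part.csv")
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["value"])
        writer.writeheader()
        writer.writerows([{"value": r["value"]} for r in part_rows])
    return path

root = tempfile.mkdtemp()
partitions = {}
for row in rows:
    partitions.setdefault(row["year"], []).append(row)

# One concurrent write per partition, mirroring concurrent_partitioning.
with ThreadPoolExecutor() as pool:
    paths = list(pool.map(
        lambda item: write_partition(root, item[0], item[1]),
        partitions.items(),
    ))
print(sorted(paths))
```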
ecd879a003181a76ac7b442b7eeaa5da067ae023 | 65,977 | ipynb | Jupyter Notebook | prototyping/LotkaVolterra2d.ipynb | krystophny/GeometricIntegrators.jl | 7855e977b014c8ba119f6bb73c6ed9bf96f04b1d | [
"MIT"
] | null | null | null | prototyping/LotkaVolterra2d.ipynb | krystophny/GeometricIntegrators.jl | 7855e977b014c8ba119f6bb73c6ed9bf96f04b1d | [
"MIT"
] | null | null | null | prototyping/LotkaVolterra2d.ipynb | krystophny/GeometricIntegrators.jl | 7855e977b014c8ba119f6bb73c6ed9bf96f04b1d | [
"MIT"
] | null | null | null | 289.372807 | 36,745 | 0.935538 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecd87a38a188cc34bd4f70f534b96df834a3c80d | 8,069 | ipynb | Jupyter Notebook | rel-eng/gem_release.ipynb | gersonkevin23/apipie-rails | 132d2dca3deee8d0611a07e23f0a7425b87ddac5 | [
"Apache-2.0",
"MIT"
] | 1,852 | 2015-01-01T22:07:25.000Z | 2022-03-30T14:26:57.000Z | rel-eng/gem_release.ipynb | gersonkevin23/apipie-rails | 132d2dca3deee8d0611a07e23f0a7425b87ddac5 | [
"Apache-2.0",
"MIT"
] | 432 | 2015-01-02T01:01:22.000Z | 2022-03-31T23:09:54.000Z | rel-eng/gem_release.ipynb | gersonkevin23/apipie-rails | 132d2dca3deee8d0611a07e23f0a7425b87ddac5 | [
"Apache-2.0",
"MIT"
] | 369 | 2015-01-15T04:37:02.000Z | 2022-03-25T09:36:17.000Z | 21.986376 | 186 | 0.513075 | [
[
[
"## Release of apipie-rails gem\n\n### Requirements\n- push access to https://github.com/Apipie/apipie-rails\n- push access to rubygems.org for apipie-rails\n- sudo yum install python-slugify asciidoc\n- ensure neither `git push` nor `gem push` requires interactive auth. If you can't use an API key or SSH key to auth, skip these steps and run them from the shell manually\n\n### Release process\n- Follow the steps with `<Shift>+<Enter>` or `<Ctrl>+<Enter>,<Down>`\n- If anything fails, fix it and re-run the step if applicable\n\n### Release settings",
"_____no_output_____"
]
],
[
[
"%cd ..",
"_____no_output_____"
]
],
[
[
"### Update the following notebook settings",
"_____no_output_____"
]
],
[
[
"NEW_VERSION = '0.5.19'\nLAST_VERSION = '0.5.18'\nGIT_REMOTE_UPSTREAM = 'origin'\nWORK_BRANCH = 'master'\n",
"_____no_output_____"
]
],
[
[
"### Ensure the repo is up to date",
"_____no_output_____"
]
],
[
[
"! git checkout {WORK_BRANCH}",
"_____no_output_____"
],
[
"! git fetch {GIT_REMOTE_UPSTREAM}",
"_____no_output_____"
],
[
"! git rebase {GIT_REMOTE_UPSTREAM}/{WORK_BRANCH}",
"_____no_output_____"
]
],
[
[
"### Run tests localy",
"_____no_output_____"
]
],
[
[
"! bundle update",
"_____no_output_____"
],
[
"! bundle exec rake",
"_____no_output_____"
]
],
[
[
"### Update release related stuff",
"_____no_output_____"
]
],
[
[
"! sed -i 's/VERSION = .*/VERSION = \"{NEW_VERSION}\"/' lib/apipie/version.rb",
"_____no_output_____"
],
[
"# Parse git changelog\nfrom IPython.display import Markdown as md\nfrom subprocess import check_output\nfrom shlex import split\nimport re\n\ndef format_log_entry(entry):\n author = re.search(r'author:(.*)', entry).group(1)\n entry = re.sub(r'author:(.*)', '', entry)\n entry = re.sub(r'([fF]ixes|[rR]efs)[^-]*-\\s*(.*)', r'\\2', entry)\n entry = '* ' + entry.capitalize()\n entry = re.sub(r'\\(#([0-9]+)\\)', r'[#\\1](https://github.com/Apipie/apipie-rails/pull/\\1)', entry)\n entry = entry + f'({author})'\n return entry\n\ndef skip(entry):\n if re.match(r'Merge pull', entry) or \\\n re.match(r'^i18n', entry) or \\\n re.match(r'^Bump to version', entry):\n return True\n else:\n return False \ngit_log_cmd = 'git log --pretty=format:\"%%s author:%%an\" v%s..HEAD' % LAST_VERSION\nlog = check_output(split(git_log_cmd)).decode('utf8').split('\\n')\nchange_log = [format_log_entry(e) for e in log if not skip(e)]\nmd('\\n'.join(change_log))\n",
"_____no_output_____"
],
[
"# Write release notes\nfrom datetime import datetime\nimport fileinput\nimport sys\n\nfh = fileinput.input('CHANGELOG.md', inplace=True) \nfor line in fh: \n print(line.rstrip())\n if re.match(r'========', line):\n print('## [v%s](https://github.com/Apipie/apipie-rails/tree/v%s) (%s)' % (NEW_VERSION, NEW_VERSION, datetime.today().strftime('%Y-%m-%d')))\n print('[Full Changelog](https://github.com/Apipie/apipie-rails/compare/v%s...v%s)' % (LAST_VERSION, NEW_VERSION))\n for entry in change_log:\n print(entry)\n print('')\nfh.close() ",
"_____no_output_____"
]
],
[
[
"#### Manual step: Update deps in the gemspec if necessary",
"_____no_output_____"
],
[
"### Check what is going to be commited",
"_____no_output_____"
]
],
[
[
"! git add -u\n! git status",
"_____no_output_____"
],
[
"! git diff --cached",
"_____no_output_____"
]
],
[
[
"### Commit changes",
"_____no_output_____"
]
],
[
[
"! git commit -m \"Bump to {NEW_VERSION}\"",
"_____no_output_____"
]
],
[
[
"### Tag new version",
"_____no_output_____"
]
],
[
[
"! git tag v{NEW_VERSION}",
"_____no_output_____"
]
],
[
[
"### Build the gem",
"_____no_output_____"
]
],
[
[
"! rake build",
"_____no_output_____"
],
[
"! gem push pkg/apipie-rails-{NEW_VERSION}.gem",
"_____no_output_____"
]
],
[
[
"### Push the changes upstream if everything is correct",
"_____no_output_____"
]
],
[
[
"! git push {GIT_REMOTE_UPSTREAM} {WORK_BRANCH}",
"_____no_output_____"
],
[
"! git push --tags {GIT_REMOTE_UPSTREAM} {WORK_BRANCH}",
"_____no_output_____"
]
],
[
[
"#### Now the new release is in the upstream repo",
"_____no_output_____"
],
[
"### Some manual steps follow to improve the UX\n\n#### New release on GitHub\n\nCopy the following changelog lines into the description field of the form at the link below.\nThe release title is the new version.",
"_____no_output_____"
]
],
[
[
"print('\\n')\nprint('\\n'.join(change_log))\nprint('\\n\\nhttps://github.com/Apipie/apipie-rails/releases/new?tag=%s' % NEW_VERSION)",
"_____no_output_____"
]
],
[
[
"## Congratulations\n\nRelease is public now.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
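The release notebook's `format_log_entry` rewrites `git log --pretty` subjects into markdown changelog bullets by stripping the trailing `author:` marker and linkifying `(#NNN)` pull-request references. The core substitutions can be sketched in isolation with stdlib `re` (the commit subject below is invented for illustration):

```python
import re

def format_entry(entry):
    # Pull out the trailing "author:<name>" marker added by the
    # --pretty format, then linkify "(#123)" pull-request references.
    author = re.search(r'author:(.*)', entry).group(1)
    entry = re.sub(r'author:(.*)', '', entry).strip()
    entry = re.sub(
        r'\(#([0-9]+)\)',
        r'[#\1](https://github.com/Apipie/apipie-rails/pull/\1)',
        entry,
    )
    return f'* {entry} ({author})'

subject = 'Fix response validation (#900) author:Jane'
print(format_entry(subject))
```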
ecd884b7b223b874e324ace636f7462ee33fd9a4 | 15,957 | ipynb | Jupyter Notebook | docs/tutorials/deform_source_mesh_to_target_mesh.ipynb | ronenroi/pytorch3d | 33c8abfd6434c8dff3d75fca47f7746a20fa5a4a | [
"BSD-3-Clause"
] | 1 | 2021-11-17T01:46:24.000Z | 2021-11-17T01:46:24.000Z | docs/tutorials/deform_source_mesh_to_target_mesh.ipynb | ronenroi/pytorch3d | 33c8abfd6434c8dff3d75fca47f7746a20fa5a4a | [
"BSD-3-Clause"
] | null | null | null | docs/tutorials/deform_source_mesh_to_target_mesh.ipynb | ronenroi/pytorch3d | 33c8abfd6434c8dff3d75fca47f7746a20fa5a4a | [
"BSD-3-Clause"
] | 1 | 2022-01-05T15:03:24.000Z | 2022-01-05T15:03:24.000Z | 29.659851 | 228 | 0.576675 | [
[
[
"# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.",
"_____no_output_____"
]
],
[
[
"# Deform a source mesh to form a target mesh using 3D loss functions",
"_____no_output_____"
],
[
"In this tutorial, we learn to deform an initial generic shape (e.g. sphere) to fit a target shape.\n\nWe will cover: \n\n- How to **load a mesh** from an `.obj` file\n- How to use the PyTorch3D **Meshes** datastructure\n- How to use 4 different PyTorch3D **mesh loss functions**\n- How to set up an **optimization loop**\n\n\nStarting from a sphere mesh, we learn the offset to each vertex in the mesh such that\nthe predicted mesh is closer to the target mesh at each optimization step. To achieve this we minimize:\n\n+ `chamfer_distance`, the distance between the predicted (deformed) and target mesh, defined as the chamfer distance between the set of pointclouds resulting from **differentiably sampling points** from their surfaces. \n\nHowever, solely minimizing the chamfer distance between the predicted and the target mesh will lead to a non-smooth shape (verify this by setting `w_chamfer=1.0` and all other weights to `0.0`). \n\nWe enforce smoothness by adding **shape regularizers** to the objective. Namely, we add:\n\n+ `mesh_edge_length`, which minimizes the length of the edges in the predicted mesh.\n+ `mesh_normal_consistency`, which enforces consistency across the normals of neighboring faces.\n+ `mesh_laplacian_smoothing`, which is the laplacian regularizer.",
"_____no_output_____"
],
[
"## 0. Install and Import modules",
"_____no_output_____"
],
[
"Ensure `torch` and `torchvision` are installed. If `pytorch3d` is not installed, install it using the following cell:",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport torch\nneed_pytorch3d=False\ntry:\n import pytorch3d\nexcept ModuleNotFoundError:\n need_pytorch3d=True\nif need_pytorch3d:\n if torch.__version__.startswith(\"1.9\") and sys.platform.startswith(\"linux\"):\n # We try to install PyTorch3D via a released wheel.\n version_str=\"\".join([\n f\"py3{sys.version_info.minor}_cu\",\n torch.version.cuda.replace(\".\",\"\"),\n f\"_pyt{torch.__version__[0:5:2]}\"\n ])\n !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html\n else:\n # We try to install PyTorch3D from source.\n !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz\n !tar xzf 1.10.0.tar.gz\n os.environ[\"CUB_HOME\"] = os.getcwd() + \"/cub-1.10.0\"\n !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'",
"_____no_output_____"
],
[
"import os\nimport torch\nfrom pytorch3d.io import load_obj, save_obj\nfrom pytorch3d.structures import Meshes\nfrom pytorch3d.utils import ico_sphere\nfrom pytorch3d.ops import sample_points_from_meshes\nfrom pytorch3d.loss import (\n chamfer_distance, \n mesh_edge_loss, \n mesh_laplacian_smoothing, \n mesh_normal_consistency,\n)\nimport numpy as np\nfrom tqdm.notebook import tqdm\n%matplotlib notebook \nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['savefig.dpi'] = 80\nmpl.rcParams['figure.dpi'] = 80\n\n# Set the device\nif torch.cuda.is_available():\n device = torch.device(\"cuda:0\")\nelse:\n device = torch.device(\"cpu\")\n print(\"WARNING: CPU only, this will be slow!\")",
"_____no_output_____"
]
],
[
[
"## 1. Load an obj file and create a Meshes object",
"_____no_output_____"
],
[
"Download the target 3D model of a dolphin. It will be saved locally as a file called `dolphin.obj`.",
"_____no_output_____"
]
],
[
[
"!wget https://dl.fbaipublicfiles.com/pytorch3d/data/dolphin/dolphin.obj",
"_____no_output_____"
],
[
"# Load the dolphin mesh.\ntrg_obj = os.path.join('dolphin.obj')",
"_____no_output_____"
],
[
"# We read the target 3D model using load_obj\nverts, faces, aux = load_obj(trg_obj)\n\n# verts is a FloatTensor of shape (V, 3) where V is the number of vertices in the mesh\n# faces is an object which contains the following LongTensors: verts_idx, normals_idx and textures_idx\n# For this tutorial, normals and textures are ignored.\nfaces_idx = faces.verts_idx.to(device)\nverts = verts.to(device)\n\n# We scale normalize and center the target mesh to fit in a sphere of radius 1 centered at (0,0,0). \n# (scale, center) will be used to bring the predicted mesh to its original center and scale\n# Note that normalizing the target mesh, speeds up the optimization but is not necessary!\ncenter = verts.mean(0)\nverts = verts - center\nscale = max(verts.abs().max(0)[0])\nverts = verts / scale\n\n# We construct a Meshes structure for the target mesh\ntrg_mesh = Meshes(verts=[verts], faces=[faces_idx])",
"_____no_output_____"
],
[
"# We initialize the source shape to be a sphere of radius 1\nsrc_mesh = ico_sphere(4, device)",
"_____no_output_____"
]
],
[
[
"## 2. Visualize the source and target meshes",
"_____no_output_____"
]
],
[
[
"def plot_pointcloud(mesh, title=\"\"):\n # Sample points uniformly from the surface of the mesh.\n points = sample_points_from_meshes(mesh, 5000)\n x, y, z = points.clone().detach().cpu().squeeze().unbind(1) \n fig = plt.figure(figsize=(5, 5))\n ax = Axes3D(fig)\n ax.scatter3D(x, z, -y)\n ax.set_xlabel('x')\n ax.set_ylabel('z')\n ax.set_zlabel('y')\n ax.set_title(title)\n ax.view_init(190, 30)\n plt.show()",
"_____no_output_____"
],
[
"# %matplotlib notebook\nplot_pointcloud(trg_mesh, \"Target mesh\")\nplot_pointcloud(src_mesh, \"Source mesh\")",
"_____no_output_____"
]
],
[
[
"## 3. Optimization loop ",
"_____no_output_____"
]
],
[
[
"# We will learn to deform the source mesh by offsetting its vertices\n# The shape of the deform parameters is equal to the total number of vertices in src_mesh\ndeform_verts = torch.full(src_mesh.verts_packed().shape, 0.0, device=device, requires_grad=True)",
"_____no_output_____"
],
[
"# The optimizer\noptimizer = torch.optim.SGD([deform_verts], lr=1.0, momentum=0.9)",
"_____no_output_____"
],
[
"# Number of optimization steps\nNiter = 2000\n# Weight for the chamfer loss\nw_chamfer = 1.0 \n# Weight for mesh edge loss\nw_edge = 1.0 \n# Weight for mesh normal consistency\nw_normal = 0.01 \n# Weight for mesh laplacian smoothing\nw_laplacian = 0.1 \n# Plot period for the losses\nplot_period = 250\nloop = tqdm(range(Niter))\n\nchamfer_losses = []\nlaplacian_losses = []\nedge_losses = []\nnormal_losses = []\n\n%matplotlib inline\n\nfor i in loop:\n # Initialize optimizer\n optimizer.zero_grad()\n \n # Deform the mesh\n new_src_mesh = src_mesh.offset_verts(deform_verts)\n \n # We sample 5k points from the surface of each mesh \n sample_trg = sample_points_from_meshes(trg_mesh, 5000)\n sample_src = sample_points_from_meshes(new_src_mesh, 5000)\n \n # We compare the two sets of pointclouds by computing (a) the chamfer loss\n loss_chamfer, _ = chamfer_distance(sample_trg, sample_src)\n \n # and (b) the edge length of the predicted mesh\n loss_edge = mesh_edge_loss(new_src_mesh)\n \n # mesh normal consistency\n loss_normal = mesh_normal_consistency(new_src_mesh)\n \n # mesh laplacian smoothing\n loss_laplacian = mesh_laplacian_smoothing(new_src_mesh, method=\"uniform\")\n \n # Weighted sum of the losses\n loss = loss_chamfer * w_chamfer + loss_edge * w_edge + loss_normal * w_normal + loss_laplacian * w_laplacian\n \n # Print the losses\n loop.set_description('total_loss = %.6f' % loss)\n \n # Save the losses for plotting\n chamfer_losses.append(float(loss_chamfer.detach().cpu()))\n edge_losses.append(float(loss_edge.detach().cpu()))\n normal_losses.append(float(loss_normal.detach().cpu()))\n laplacian_losses.append(float(loss_laplacian.detach().cpu()))\n \n # Plot mesh\n if i % plot_period == 0:\n plot_pointcloud(new_src_mesh, title=\"iter: %d\" % i)\n \n # Optimization step\n loss.backward()\n optimizer.step()\n",
"_____no_output_____"
]
],
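The loop above leans on `chamfer_distance` from PyTorch3D. As a rough illustration of what that loss measures, here is a minimal pure-Python sketch (no PyTorch needed) of the symmetric squared chamfer distance between two small 2-D point sets — the function name `chamfer` and the toy points are ours, not part of the tutorial:

```python
# Minimal sketch of the symmetric (squared) chamfer distance: for each
# point in A find its nearest neighbour in B, and vice versa, then
# average the squared distances and sum the two directions.
def chamfer(a, b):
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    b_to_a = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return a_to_b + b_to_a

pts_a = [(0.0, 0.0), (1.0, 0.0)]
pts_b = [(0.0, 0.0), (1.0, 1.0)]
print(chamfer(pts_a, pts_a))  # identical sets -> 0.0
print(chamfer(pts_a, pts_b))  # -> 1.0
```

Because the loss is computed on freshly sampled surface points each iteration, the two meshes never need matching vertex counts — only their sampled point clouds are compared.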
[
[
"## 4. Visualize the loss",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(13, 5))\nax = fig.gca()\nax.plot(chamfer_losses, label=\"chamfer loss\")\nax.plot(edge_losses, label=\"edge loss\")\nax.plot(normal_losses, label=\"normal loss\")\nax.plot(laplacian_losses, label=\"laplacian loss\")\nax.legend(fontsize=\"16\")\nax.set_xlabel(\"Iteration\", fontsize=\"16\")\nax.set_ylabel(\"Loss\", fontsize=\"16\")\nax.set_title(\"Loss vs iterations\", fontsize=\"16\");",
"_____no_output_____"
]
],
[
[
"## 5. Save the predicted mesh",
"_____no_output_____"
]
],
[
[
"# Fetch the verts and faces of the final predicted mesh\nfinal_verts, final_faces = new_src_mesh.get_mesh_verts_faces(0)\n\n# Scale normalize back to the original target size\nfinal_verts = final_verts * scale + center\n\n# Store the predicted mesh using save_obj\nfinal_obj = os.path.join('./', 'final_model.obj')\nsave_obj(final_obj, final_verts, final_faces)",
"_____no_output_____"
]
],
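Section 1 centred and scaled the target vertices, and the cell above undoes that with `final_verts * scale + center`. A tiny pure-Python round trip on toy 1-D "vertices" (our own example data, not the tutorial's mesh) shows why that inverse recovers the original coordinates:

```python
# Normalize a list of coordinates to be zero-centred with max |v| == 1,
# then invert the transform with v * scale + center.
verts = [2.0, 4.0, 6.0]

center = sum(verts) / len(verts)            # 4.0
centred = [v - center for v in verts]       # [-2.0, 0.0, 2.0]
scale = max(abs(v) for v in centred)        # 2.0
normalized = [v / scale for v in centred]   # [-1.0, 0.0, 1.0]

restored = [v * scale + center for v in normalized]
print(restored)  # -> [2.0, 4.0, 6.0]
```

The same two numbers `(scale, center)` computed from the *target* mesh are reused here, which is why the saved model lands back at the dolphin's original size and position.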
[
[
"## 6. Conclusion \n\nIn this tutorial we learned how to load a mesh from an obj file, initialize a PyTorch3D data structure called **Meshes**, set up an optimization loop, and use four different PyTorch3D mesh loss functions.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecd8ac45010f218542dfaa68c2ad8270182378d3 | 30,352 | ipynb | Jupyter Notebook | Measurements/Measurements.ipynb | afgbloch/QuantumKatas | 2bd53efdaf4716ac0873a8e3919b57797cddcf95 | [
"MIT"
] | 1 | 2020-05-20T14:02:15.000Z | 2020-05-20T14:02:15.000Z | Measurements/Measurements.ipynb | afgbloch/QuantumKatas | 2bd53efdaf4716ac0873a8e3919b57797cddcf95 | [
"MIT"
] | null | null | null | Measurements/Measurements.ipynb | afgbloch/QuantumKatas | 2bd53efdaf4716ac0873a8e3919b57797cddcf95 | [
"MIT"
] | null | null | null | 37.333333 | 380 | 0.583224 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecd8ac866bda191f4bf4c9c64c3190388446613d | 218,073 | ipynb | Jupyter Notebook | code/erowid_sentiment_analysis.ipynb | timothymeehan/drug_experiences_NLP | 872730747aa9da72ccf31c3870093bdd19a0d4df | [
"MIT"
] | null | null | null | code/erowid_sentiment_analysis.ipynb | timothymeehan/drug_experiences_NLP | 872730747aa9da72ccf31c3870093bdd19a0d4df | [
"MIT"
] | null | null | null | code/erowid_sentiment_analysis.ipynb | timothymeehan/drug_experiences_NLP | 872730747aa9da72ccf31c3870093bdd19a0d4df | [
"MIT"
] | null | null | null | 338.097674 | 88,332 | 0.906018 | [
[
[
"import pandas as pd\npd.set_option('display.max_colwidth', -1)\npd.set_option('display.max_columns', 1000)\nimport numpy as np\nimport random\n\nfrom textblob import TextBlob\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(context='poster', style='white', font_scale=2)\n%matplotlib inline\n\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"def sentiment_analysis(clean_df):\n \n pol = []\n sub=[]\n for doc in clean_df['body']:\n sentp = TextBlob(doc).sentiment.polarity\n sents = TextBlob(doc).sentiment.subjectivity\n pol.append(sentp)\n sub.append(sents) \n\n #sentdf = pd.DataFrame()\n #sentdf['polarity'] = pol\n #sentdf['subjectivity'] = sub\n \n clean_df['polarity'] = pol\n clean_df['subjectivity'] = sub\n \n return clean_df",
"_____no_output_____"
],
[
"df_clean = pd.read_pickle('prepro/df_clean.pkl')",
"_____no_output_____"
],
[
"df_clean.head()",
"_____no_output_____"
],
[
"df_sentiment = sentiment_analysis(df_clean)",
"_____no_output_____"
],
[
"df_sentiment.head()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(12, 8))\nsns.boxenplot(x='polarity', y='experience', data=df_sentiment, )\nsns.despine()\nax.set_ylabel('')\nax.set_yticklabels(['Glowing', 'General', 'Mystical', 'Bad'])\nax.set_xlabel('Polarity')\nplt.savefig('figs/polarity_by_exp_type.png', bbox_inches='tight')",
"'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'.  Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n"
],
[
"ord_by_median = df_sentiment.groupby('drug').median().sort_values(by='polarity', ascending=False).index.to_list()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(12, 12))\nsns.boxenplot(x='polarity', y='drug', data=df_sentiment, palette='coolwarm', order=ord_by_median)\nsns.despine()\nax.set_ylabel('')\nax.set_xlabel('Polarity')\nplt.savefig('figs/polarity_by_drug.png', bbox_inches='tight')",
"'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'.  Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n"
],
[
"fig, ax = plt.subplots(figsize=(12, 8))\nsns.boxenplot(x='age', y='drug', data=df_sentiment, palette='cividis_r')\nsns.despine()\nax.set_ylabel('')\nax.set_xlabel('Age')",
"'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'.  Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n"
[
"df_sentiment[df_sentiment['polarity'] == df_sentiment['polarity'].max()]",
"_____no_output_____"
],
[
"df_sentiment.loc[df_sentiment['exp_id']==9967, 'body']",
"_____no_output_____"
]
]
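`TextBlob(doc).sentiment.polarity` in the notebook above averages per-word scores from a built-in lexicon into a value in [-1, 1]. As a much-simplified sketch of that idea (the three-word lexicon and the helper name `polarity` are ours, not TextBlob's internals):

```python
# Toy lexicon-based polarity: average the scores of the words that
# appear in a (hypothetical) sentiment lexicon, ignoring the rest.
LEXICON = {"good": 0.7, "great": 0.8, "terrible": -1.0}

def polarity(text):
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(polarity("a great trip"))             # -> 0.8
print(polarity("good start terrible end"))  # (0.7 - 1.0) / 2
print(polarity("nothing scored"))           # -> 0.0
```

Texts with no lexicon hits score a neutral 0.0, which is one reason long, factual trip reports tend to cluster near the middle of the polarity plots above.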
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd8c551d5f18e95f9f1949045de31499601c300 | 300,312 | ipynb | Jupyter Notebook | Untitled1.ipynb | semnan-university-ai/tensorflow-keras | db319da1d222b52b68e7002c057e2f49e45856bd | [
"MIT"
] | 3 | 2021-01-02T19:19:12.000Z | 2021-03-24T17:29:23.000Z | Untitled1.ipynb | semnan-university-ai/tensorflow-keras | db319da1d222b52b68e7002c057e2f49e45856bd | [
"MIT"
] | null | null | null | Untitled1.ipynb | semnan-university-ai/tensorflow-keras | db319da1d222b52b68e7002c057e2f49e45856bd | [
"MIT"
] | 1 | 2021-01-02T19:25:55.000Z | 2021-01-02T19:25:55.000Z | 1,072.542857 | 289,146 | 0.90427 | [
[
[
"<a href=\"https://colab.research.google.com/github/semnan-university-ai/tensorflow-keras/blob/main/Untitled1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"## **convolutional neural network**",
"_____no_output_____"
]
],
[
[
"from keras.datasets import mnist\r\n\r\n# Load data\r\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\r\n\r\n# Data attributes\r\nprint(\"train_images dimensions: \", train_images.ndim)\r\nprint(\"train_images shape: \", train_images.shape)\r\nprint(\"train_images type: \", train_images.dtype)\r\n\r\nX_train = train_images.reshape(60000, 28, 28, 1)\r\nX_test = test_images.reshape(10000, 28, 28, 1)\r\n\r\nX_train = X_train.astype('float32')\r\nX_test = X_test.astype('float32')\r\n\r\nX_train /= 255\r\nX_test /= 255\r\n\r\nfrom keras.utils import np_utils\r\nY_train = np_utils.to_categorical(train_labels)\r\nY_test = np_utils.to_categorical(test_labels)",
      "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\ntrain_images dimensions:  3\ntrain_images shape:  (60000, 28, 28)\ntrain_images type:  uint8\n"
],
[
"# Creating our model\r\nfrom keras.models import Model\r\nfrom keras import layers\r\nimport keras\r\n\r\nmyInput = layers.Input(shape=(28,28,1))\r\nconv1 = layers.Conv2D(16, 3, activation='relu', padding='same')(myInput) #filter #window size\r\npool1 = layers.MaxPool2D(pool_size=2)(conv1)\r\nconv2 = layers.Conv2D(32, 3, activation='relu', padding='same')(pool1)\r\npool2 = layers.MaxPool2D(pool_size=2)(conv2)\r\n\r\nflat = layers.Flatten()(pool2)\r\nout_layer = layers.Dense(10, activation='softmax')(flat)\r\n\r\nmyModel = Model(myInput, out_layer)\r\n\r\nmyModel.summary()\r\nmyModel.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.categorical_crossentropy, metrics=['accuracy'])",
"Model: \"model\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, 28, 28, 1)] 0 \n_________________________________________________________________\nconv2d (Conv2D) (None, 28, 28, 16) 160 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 14, 14, 16) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 14, 14, 32) 4640 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 7, 7, 32) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 1568) 0 \n_________________________________________________________________\ndense (Dense) (None, 10) 15690 \n=================================================================\nTotal params: 20,490\nTrainable params: 20,490\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"network_history = myModel.fit(X_train, Y_train, batch_size=128, epochs=5, validation_split=0.2)\r\n",
"Epoch 1/5\n375/375 [==============================] - 27s 70ms/step - loss: 0.8704 - accuracy: 0.7427 - val_loss: 0.1220 - val_accuracy: 0.9659\nEpoch 2/5\n375/375 [==============================] - 26s 69ms/step - loss: 0.1077 - accuracy: 0.9692 - val_loss: 0.0805 - val_accuracy: 0.9778\nEpoch 3/5\n375/375 [==============================] - 26s 69ms/step - loss: 0.0746 - accuracy: 0.9767 - val_loss: 0.0657 - val_accuracy: 0.9803\nEpoch 4/5\n375/375 [==============================] - 26s 69ms/step - loss: 0.0570 - accuracy: 0.9827 - val_loss: 0.0646 - val_accuracy: 0.9797\nEpoch 5/5\n375/375 [==============================] - 26s 69ms/step - loss: 0.0494 - accuracy: 0.9845 - val_loss: 0.0530 - val_accuracy: 0.9835\n"
],
[
"myInput = layers.Input(shape=(28,28,1))\r\nconv1 = layers.Conv2D(16, 3, activation='relu', padding='same', strides=2)(myInput)\r\nconv2 = layers.Conv2D(32, 3, activation='relu', padding='same', strides=2)(conv1)\r\nflat = layers.Flatten()(conv2)\r\nout_layer = layers.Dense(10, activation='softmax')(flat)\r\n\r\nmyModel = Model(myInput, out_layer)\r\n\r\nmyModel.summary()\r\nmyModel.compile(optimizer=keras.optimizers.Adam(), loss=keras.losses.categorical_crossentropy, metrics=['accuracy'])",
"Model: \"model_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_2 (InputLayer) [(None, 28, 28, 1)] 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 14, 14, 16) 160 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 7, 7, 32) 4640 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 1568) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 10) 15690 \n=================================================================\nTotal params: 20,490\nTrainable params: 20,490\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"network_history = myModel.fit(X_train, Y_train, batch_size=128, epochs=5, validation_split=0.2)\r\n",
"Epoch 1/5\n375/375 [==============================] - 8s 20ms/step - loss: 0.9441 - accuracy: 0.7540 - val_loss: 0.2160 - val_accuracy: 0.9367\nEpoch 2/5\n375/375 [==============================] - 8s 20ms/step - loss: 0.1886 - accuracy: 0.9449 - val_loss: 0.1242 - val_accuracy: 0.9649\nEpoch 3/5\n375/375 [==============================] - 8s 21ms/step - loss: 0.1100 - accuracy: 0.9674 - val_loss: 0.0913 - val_accuracy: 0.9717\nEpoch 4/5\n375/375 [==============================] - 8s 21ms/step - loss: 0.0815 - accuracy: 0.9760 - val_loss: 0.0848 - val_accuracy: 0.9757\nEpoch 5/5\n375/375 [==============================] - 8s 21ms/step - loss: 0.0687 - accuracy: 0.9798 - val_loss: 0.0757 - val_accuracy: 0.9775\n"
],
[
"",
"_____no_output_____"
]
]
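The `(28, 28) -> (14, 14) -> (7, 7)` shapes in both model summaries follow from the 'same'-padding rule: with stride `s`, the spatial size becomes `ceil(in / s)` (and stays unchanged when `s = 1`, as before each `MaxPool2D` in the first model). A short sketch of that arithmetic — the helper name `same_out` is ours:

```python
import math

# Output spatial size of a 'same'-padded conv with the given stride
# (also matches a 2x2 max-pool when the input size is even).
def same_out(in_size, stride):
    return math.ceil(in_size / stride)

# First model: stride-1 convs keep 28; each 2x2 max-pool halves it.
print(same_out(28, 1))  # -> 28
print(same_out(28, 2))  # -> 14
print(same_out(14, 2))  # -> 7

# Odd sizes round up, e.g. a stride-2 'same' conv over 7 pixels gives 4.
print(same_out(7, 2))   # -> 4
```

This is why the strided second model reaches the same `(7, 7, 32)` feature map as the pooled first model, with an identical parameter count of 20,490.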
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd8c8f38d0941b58b9beeb8a6137f435d8cdeb4 | 6,601 | ipynb | Jupyter Notebook | Gardenkiak/Programazioa/Multzoak.ipynb | mpenagar/Konputaziorako-Sarrera | 1f276cbda42e9d3d0beb716249fadbad348533d7 | [
"MIT"
] | null | null | null | Gardenkiak/Programazioa/Multzoak.ipynb | mpenagar/Konputaziorako-Sarrera | 1f276cbda42e9d3d0beb716249fadbad348533d7 | [
"MIT"
] | null | null | null | Gardenkiak/Programazioa/Multzoak.ipynb | mpenagar/Konputaziorako-Sarrera | 1f276cbda42e9d3d0beb716249fadbad348533d7 | [
"MIT"
] | null | null | null | 21.785479 | 120 | 0.474928 | [
[
[
"# Sets\n\n* Objects of type `set`\n* A mutable collection made up of non-repeated **elements**.\n * A dictionary *without values*\n* Ways to create a set:\n 1. With the object's own method → `set(it)`\n 1. As a literal value → `{e1, e2, e3, ... }`\n* The `len()` function → number of elements",
"_____no_output_____"
],
[
"### The empty set",
"_____no_output_____"
]
],
[
[
"# the empty set\nm = set()\nprint(m,len(m))",
"set() 0\n"
]
],
[
[
"### `{e1 , e2 , e3, ... }`\nCreates a set made up of the listed elements:",
"_____no_output_____"
]
],
[
[
"lagunak = {\"Jon\",\"Enara\",\"Ane\",\"Jon\"}\nprint(lagunak,len(lagunak))",
"{'Ane', 'Enara', 'Jon'} 3\n"
]
],
[
[
"### `set(iteragarria)`\nTraverses the iterable and creates a set made up of its elements:",
"_____no_output_____"
]
],
[
[
"zenbakiak = list(range(5))*3\nmultzoa = set(zenbakiak)\nprint(zenbakiak,multzoa,sep=\"\\n\")",
"[0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4]\n{0, 1, 2, 3, 4}\n"
]
],
[
[
"## Properties of sets\n\n\n* Iterable → `for i in m :`\n* Mutable\n\n### Set elements → objects with immutable values\n\n* Objects whose value cannot be changed\n* `int` → immutable value\n* `float` → immutable value\n* `bool` → immutable value\n* `str` → immutable value\n* `None` → immutable value\n* `list` → **mutable value**\n* `tuple` → can have a **mutable value**",
"_____no_output_____"
],
[
"## Iterability\n* When traversing a set, we traverse its elements.\n* When traversing a set, we should not expect any particular order.\n * Although in recent versions there is one...",
"_____no_output_____"
]
],
[
[
"m = set(\"aeiou\")\nprint(list(m))",
"['i', 'a', 'o', 'u', 'e']\n"
]
],
[
[
"## Set operators\n* `m1==m2` , `m1!=m2` → `bool` : comparison by content\n* `e in m`, `e not in m` → `bool` : whether element `e` is in set `m`\n* `a is b` → `bool` : whether object `a` is object `b`",
"_____no_output_____"
],
[
"## Set methods\n\n* These methods are invoked as: `multzoa.metodo_izena` (i.e. set.method_name)\n* Sets have [17 methods](https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset).",
"_____no_output_____"
]
],
[
[
"m = set()\nm = set(range(10))\nm = set(\"aeiou\")\nm.add(\"kaixo\")\nm.update(range(5),\"AEIOU\")\nm1 = set(range(5))\nm2 = set(range(3,10))\nm3 = m1.union(m2)\nprint(m1,m2,m3)\nm3 = m1.intersection(set(range(3,10)))\nprint(m1,m2,m3)\nm3 = m1.difference(m2)\nprint(m1,m2,m3)\nm3 = m2.difference(m1)\nprint(m1,m2,m3)\nm3 = m1.symmetric_difference(m2)\nprint(m1,m2,m3)",
"{0, 1, 2, 3, 4} {3, 4, 5, 6, 7, 8, 9} {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}\n{0, 1, 2, 3, 4} {3, 4, 5, 6, 7, 8, 9} {3, 4}\n{0, 1, 2, 3, 4} {3, 4, 5, 6, 7, 8, 9} {0, 1, 2}\n{0, 1, 2, 3, 4} {3, 4, 5, 6, 7, 8, 9} {5, 6, 7, 8, 9}\n{0, 1, 2, 3, 4} {3, 4, 5, 6, 7, 8, 9} {0, 1, 2, 5, 6, 7, 8, 9}\n"
]
],
[
[
"<table border=\"0\" width=\"100%\" style=\"margin: 0px;\">\n<tr> \n <td style=\"text-align:left\"><a href=\"Hiztegiak.ipynb\">< < Hiztegiak < <</a></td>\n <td style=\"text-align:right\"><a href=\"del%20sententzia.ipynb\">> > del sententzia > ></a></td>\n</tr>\n</table>",
"_____no_output_____"
]
]
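The hashability rule stated in this notebook — set elements must be objects with immutable values — can be checked directly: a list raises `TypeError` when added, while a tuple of immutable values works. This sketch is ours, not part of the original notebook:

```python
# Set elements must be hashable (immutable values): ints, floats, bools,
# strings, None and tuples of such values qualify; lists do not.
m = {1, 2.5, "kaixo", None, (1, 2)}

try:
    m.add([3, 4])          # a list is mutable -> unhashable -> TypeError
    list_ok = True
except TypeError:
    list_ok = False

m.add((3, 4))              # an equivalent tuple is accepted
print(list_ok)             # -> False
print((3, 4) in m)         # -> True
```

This is also why a `tuple` only qualifies when everything *inside* it is immutable: `(1, [2])` would fail for the same reason as the bare list.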
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecd8cbffb8c892859a7665fb424f221d93105e40 | 35,164 | ipynb | Jupyter Notebook | Python Absolute Beginner/Module_1.1_Practice.ipynb | Logan-Gunnin/pythonteachingcode | 48070ff5d9dd207b4f1da271396d4c98f28633b3 | [
"MIT"
] | null | null | null | Python Absolute Beginner/Module_1.1_Practice.ipynb | Logan-Gunnin/pythonteachingcode | 48070ff5d9dd207b4f1da271396d4c98f28633b3 | [
"MIT"
] | null | null | null | Python Absolute Beginner/Module_1.1_Practice.ipynb | Logan-Gunnin/pythonteachingcode | 48070ff5d9dd207b4f1da271396d4c98f28633b3 | [
"MIT"
] | null | null | null | 22.129641 | 200 | 0.484018 | [
[
[
"# Module 1 Practice 1\n## Getting started with Python in Jupyter Notebooks\n### notebooks, comments, print(), type(), addition, errors and art\n\n<font size=\"5\" color=\"#00A0B2\" face=\"verdana\"> <B>Student will be able to</B></font>\n- use Python 3 in Jupyter notebooks\n- write working code using `print()` and `#` comments \n- write working code using `type()` and variables\n- combine strings using string addition (+)\n- add numbers in code (+)\n- troubleshoot errors\n- create character art \n\n# \n>**note:** the **[ ]** indicates student has a task to complete \n \n>**reminder:** to run code and save changes: student should upload or clone a copy of notebooks \n\n#### notebook use\n- [ ] insert a **code cell** below \n- [ ] enter the following Python code, including the comment: \n```python \n# [ ] print 'Hello!' and remember to save notebook!\nprint('Hello!')\n```\nThen run the code - the output should be: \n`Hello!`",
"_____no_output_____"
]
],
[
[
"# [ ] print 'Hello!' and remember to save notebook!\nprint('Hello!')",
"Hello!\n"
]
],
[
[
"#### run the cell below \n- [ ] use **Ctrl + Enter** \n- [ ] use **Shift + Enter** ",
"_____no_output_____"
]
],
[
[
"print('watch for the cat')",
"watch for the cat\n"
]
],
[
[
"#### Logan's Notebook editing\n- [ ] Edit **this** notebook Markdown cell replacing the word \"Student's\" above with your name\n- [ ] Run the cell to display the formatted text\n- [ ] Run any 'markdown' cells that are in edit mode, so they are easier to read",
"_____no_output_____"
],
[
"#### [ ] convert \\*this\\* cell from markdown to a code cell, then run it \nprint('Run as a code cell')\n",
"_____no_output_____"
],
[
"## # comments\ncreate a code comment that identifies this notebook, containing your name and the date",
"_____no_output_____"
]
],
[
[
"# This is Logan Gunnin's Notebook 2/6/22",
"_____no_output_____"
]
],
[
[
"#### use print() to \n- [ ] print [**your_name**]\n- [ ] print **is using python!**",
"_____no_output_____"
]
],
[
[
"# [ ] print your name\nprint(\"Logan Gunnin\")\n# [ ] print \"is using Python\"\nprint(\"is using Python!\")\n",
"Logan Gunnin\nis using Python!\n"
]
],
[
[
"Output above should be: \n`Your Name \nis using Python!` ",
"_____no_output_____"
],
[
"#### use variables in print()\n- [ ] create a variable **your_name** and assign it a string containing your name\n- [ ] print **your_name**",
"_____no_output_____"
]
],
[
[
"# [ ] create a variable your_name and assign it a string containing your name\nyour_name = 'Logan Gunnin'\n# [ ] print your_name\nprint(your_name)\n",
"Logan Gunnin\n"
]
],
[
[
"#### create more string variables\n- **[ ]** create variables as directed below\n- **[ ]** print the variables",
"_____no_output_____"
]
],
[
[
"# [ ] create variables and assign values for: favorite_song, shoe_size, lucky_number\nfavorite_song = 'song 2'\nshoe_size = '11'\nlucky_number = '14'\n\n# [ ] print the value of each variable favorite_song, shoe_size, and lucky_number\nprint(favorite_song)\nprint(shoe_size)\nprint(lucky_number)\n",
"song 2\n11\n14\n"
]
],
[
[
"#### use string addition\n- **[ ]** print the above string variables (favorite_song, shoe_size, lucky_number) combined with a description by using **string addition**\n>for example favorite_song displayed as: \n`favorite song is happy birthday`",
"_____no_output_____"
]
],
[
[
"# [ ] print favorite_song with description\nprint(favorite_song + ' from Blur')\n\n# [ ] print shoe_size with description\nprint(shoe_size + ' with a wide base')\n\n# [ ] print lucky_number with description\nprint(lucky_number + \" Choosing 14 as a lucky number because why not 14?\")\n",
"song 2 from Blur\n11 with a wide base\n14 Choosing 14 as a lucky number because why not 14?\n"
]
],
[
[
"##### more string addition\n- **[ ]** make a single string (sentence) in a variable called favorite_lucky_shoe using **string addition** with favorite_song, shoe_size, lucky_number variables and other strings as needed \n- **[ ]** print the value of the favorite_lucky_shoe variable string\n> sample output: \n`For singing happy birthday 8.5 times, you will be fined $25`",
"_____no_output_____"
]
],
[
[
"# assign favorite_lucky_shoe using\nfavorite_lucky_shoe = \"my favorite song is song 2 from blur\" + \" my shoe size is 11\" + \" I picked 14 since why not\"\nprint(favorite_lucky_shoe)\n",
"my favorite song is song 2 from blur my shoe size is 11 I picked 14 since why not\n"
]
],
[
[
"### print() art",
"_____no_output_____"
],
[
"#### use `print()` and the asterisk **\\*** to create the following shapes\n- [ ] diagonal line \n- [ ] rectangle \n- [ ] smiley face",
"_____no_output_____"
]
],
[
[
"# [ ] print a diagonal using \"*\"\nprint(\"*\")\nprint(\" *\")\nprint(\" *\")\n# [ ] rectangle using \"*\"\nprint(\" ************\")\nprint(\" * *\")\nprint(\" * *\")\nprint(\" ************\")\n\n# [ ] smiley using \"*\"\nprint(\" * * \")\nprint(\" * * \")\nprint(\" * * \")\nprint(\" * * \")\nprint(\" * * \")\n",
"*\n *\n *\n ************\n * *\n * *\n ************\n * * \n * * \n * * \n * * \n * * \n"
]
],
[
[
"#### Using `type()`\n- **[ ]** calculate the *type* using `type()`",
"_____no_output_____"
]
],
[
[
"# [ ] display the type of 'your name' (use single quotes)\ntype('logan gunnin')\n\n",
"_____no_output_____"
],
[
"# [ ] display the type of \"save your notebook!\" (use double quotes)\ntype(\"Save your notebook!\")\n\n",
"_____no_output_____"
],
[
"# [ ] display the type of \"25\" (use quotes)\ntype(\"25\")\n\n",
"_____no_output_____"
],
[
"# [ ] display the type of \"save your notebook \" + 'your name'\n\ntype(\"save your notebook\" + 'Logan Gunnin')\n",
"_____no_output_____"
],
[
"# [ ] display the type of 25 (no quotes)\ntype(25)\n\n",
"_____no_output_____"
],
[
"# [ ] display the type of 25 + 10 \n\ntype(25 + 10)\n",
"_____no_output_____"
],
[
"# [ ] display the type of 1.55\n\ntype(1.55)\n",
"_____no_output_____"
],
[
"# [ ] display the type of 1.55 + 25\ntype(1.55 + 25)\n\n",
"_____no_output_____"
]
],
[
[
"#### Find the type of variables\n- **[ ]** run the cell below to make the variables available to be used in other code\n- **[ ]** display the data type as directed in the cells that follow",
"_____no_output_____"
]
],
[
[
"# assignments ***RUN THIS CELL*** before starting the section\n\nstudent_name = \"Gus\"\nstudent_age = 16\nstudent_grade = 3.5\nstudent_id = \"ABC-000-000\"\n",
"_____no_output_____"
],
[
"# [ ] display the current type of the variable student_name\ntype(student_name)\n\n",
"_____no_output_____"
],
[
"# [ ] display the type of student_age\ntype(student_age)\n\n",
"_____no_output_____"
],
[
"# [ ] display the type of student_grade\n\ntype(student_grade)\n",
"_____no_output_____"
],
[
"# [ ] display the type of student_age + student_grade\ntype(student_age + student_grade)\n\n",
"_____no_output_____"
],
[
"# [ ] display the current type of student_id\ntype(student_id)\n\n",
"_____no_output_____"
],
[
"# assign new value to student_id \nstudent_id = \"CBA-001-002\"\n\n# [ ] display the current value of student_id\nprint(student_id)\n\n",
"CBA-001-002\n"
]
],
[
[
"#### number integer addition\n\n- **[ ]** create variables (x, y, z) with integer values",
"_____no_output_____"
]
],
[
[
"# [ ] create integer variables (x, y, z) and assign them 1-3 digit integers (no decimals - no quotes)\nx = 1\ny = 11\nz = 111",
"_____no_output_____"
]
],
[
[
"- **[ ]** insert a **code cell** below\n- **[ ]** create an integer variable named **xyz_sum** equal to the sum of x, y, and z\n- **[ ]** print the value of **xyz_sum** ",
"_____no_output_____"
]
],
[
[
"xyz_sum = x + y + z\nprint(xyz_sum)\n\n\n",
"123\n"
]
],
[
[
"### Errors\n- **[ ]** troubleshoot and fix the errors below",
"_____no_output_____"
]
],
[
[
"# [ ] fix the error \n\nprint(\"Hello World!\") \n\n\n",
"Hello World!\n"
],
[
"# [ ] fix the error \nprint('strings have quotes and variables have names')\n\n",
"strings have quotes and variables have names\n"
],
[
"# [ ] fix the error \nprint( \"I have $\" + '5')\n\n",
"I have $5\n"
],
[
"# [ ] fix the error \nprint(\"always save the notebook\")\n \n",
"always save the notebook\n"
]
],
[
[
"## ASCII art\n- **[ ]** Display first name or initials as ASCII Art\n- **[ ]** Challenge: insert an additional code cell to make an ASCII picture",
"_____no_output_____"
]
],
[
[
"# [ ] ASCII ART\nprint(\"* * \")\nprint(\"* * \")\nprint(\"* * * * \")\nprint(\"* * * \")\nprint(\"******* * * * * \")\n",
"* * \n* * \n* * * * \n* * * \n******* * * * * \n"
],
[
"# [ ] ASCII ART\n\n",
"_____no_output_____"
]
],
[
[
"# Module 1 Practice 2\n## Strings: input, testing, formatting\n<font size=\"5\" color=\"#00A0B2\" face=\"verdana\"> <B>Student will be able to</B></font>\n- gather, store and use string `input()` \n- format `print()` output \n- test string characteristics \n- format string output \n- search for a string in a string ",
"_____no_output_____"
],
[
"## input()\ngetting input from users",
"_____no_output_____"
]
],
[
[
"# [ ] get user input for a variable named remind_me\nremind_me = input( \" put in a reminder\")\n\n# [ ] print the value of the variable remind_me\nprint(remind_me)\n",
" put in a reminder2:35\n2:35\n"
],
[
"# use string addition to print \"remember: \" before the remind_me input string\nprint(\"remember: \" + remind_me)\n",
"remember: 2:35\n"
]
],
[
[
"### Program: Meeting Details\n#### [ ] get user **input** for meeting subject and time\n`what is the meeting subject?: plan for graduation` \n`what is the meeting time?: 3:00 PM on Monday` \n\n#### [ ] print **output** with descriptive labels \n`Meeting Subject: plan for graduation` \n`Meeting Time: 3:00 PM on Monday`",
"_____no_output_____"
]
],
[
[
"# [ ] get user input for 2 variables: meeting_subject and meeting_time\nmeeting_subject = input(\" what is the meeting? \")\nmeeting_time = input(\" what time is the meeting? \")\n\n# [ ] use string addition to print meeting subject and time with labels\nprint(\"Meeting Subject: \" + meeting_subject + \" Meeting Time: \" + meeting_time)\n\n\n",
" what is the meeting? Graduation\n what time is the meeting? 4:53\nMeeting Subject: Graduation Meeting Time: 4:53\n"
]
],
[
[
"## print() formatting \n### combining multiple strings separated by commas in the print() function",
"_____no_output_____"
]
],
[
[
"# [ ] print the combined strings \"Wednesday is\" and \"in the middle of the week\" \nprint(\"Wednesday is \" + \"in the middle of the week\")\n",
"Wednesday is in the middle of the week\n"
],
[
"# [ ] print combined string \"Remember to\" and the string variable remind_me from input above\nprint(\"Remember to \" + remind_me)\n",
"Remember to 2:35\n"
],
[
"# [ ] Combine 3 variables from above with multiple strings\nprint(meeting_subject + \" is the event \" + meeting_time + \" is the time \" + remind_me + \" is the reminder \")\n",
"Graduation is the event 4:53 is the time 2:35 is the reminder \n"
]
],
[
[
"### print() quotation marks",
"_____no_output_____"
]
],
[
[
"# [ ] print a string sentence that will display an Apostrophe (')\nprint(\"'\")\n\n",
"'\n"
],
[
"# [ ] print a string sentence that will display a quote(\") or quotes\nprint('\"\"')\n\n",
"\"\"\n"
]
],
[
[
"## Boolean string tests",
"_____no_output_____"
],
[
"### Vehicle tests \n#### get user input for a variable named vehicle \nprint the following tests results \n- check True or False if vehicle is All alphabetical characters using .isalpha() \n- check True or False if vehicle is only All alphabetical & numeric characters \n- check True or False if vehicle is Capitalized (first letter only) \n- check True or False if vehicle is All lowercase \n- **bonus** print description for each test (e.g.- `\"All Alpha: True\"`)",
"_____no_output_____"
]
],
[
[
"# [ ] complete vehicle tests \nvehicle = input(\"what brand of car is it \")\nprint(vehicle.isalpha())\nprint(vehicle.isalnum())\nprint(vehicle.isupper())\nprint(vehicle.islower())\n\n",
"what brand of car is it toyota\nTrue\nTrue\nFalse\nTrue\n"
],
[
"# [ ] print True or False if color starts with \"b\" \ncolor = input(\"Does your favorite color start with a b?: \")\ncolor.startswith('b')",
"Does your favorite color start with a b?: blue\n"
]
],
[
[
"## String formatting",
"_____no_output_____"
]
],
[
[
"# [ ] print the string variable capitalize_this Capitalizing only the first letter\ncapitalize_this = \"the TIME is Noon.\"\ncapitalize_this.capitalize()\n",
"_____no_output_____"
],
[
"# print the string variable swap_this in swapped case\nswap_this = \"wHO writes LIKE tHIS?\"\nswap_this.swapcase()\n",
"_____no_output_____"
],
[
"# print the string variable whisper_this in all lowercase\nwhisper_this = \"Can you hear me?\"\nwhisper_this.lower()\n",
"_____no_output_____"
],
[
"# print the string variable yell_this in all UPPERCASE\nyell_this = \"Can you hear me Now!?\"\nyell_this.upper()\n",
"_____no_output_____"
],
[
"#format input using .upper(), .lower(), .swapcase, .capitalize()\nformat_input = input('enter a string to reformat: ')\nformat_input.upper(), format_input.lower(), format_input.swapcase(), format_input.capitalize()\n",
"enter a string to reformat: WOW CRAZY\n"
]
],
[
[
"### input() formatting",
"_____no_output_____"
]
],
[
[
"# [ ] get user input for a variable named color\n# [ ] modify color to be all lowercase and print\ncolor = input(\"what is your favorite color\")\nprint(color.lower())",
"what is your favorite colorRED\nred\n"
],
[
"# [ ] get user input using variable remind_me and format to all **lowercase** and print\n# [ ] test using input with mixed upper and lower cases\nremind_me = input(\" add a reminder \")\nprint(remind_me.lower())\n",
" add a reminder WODLdlweo\nwodldlweo\n"
],
[
"# [] get user input for the variable yell_this and format as a \"YELL\" to ALL CAPS\nyell_this = input(\" what do you want to yell? \")\nprint(yell_this.upper())",
" what do you want to yell? what\nWHAT\n"
]
],
[
[
"## \"in\" keyword\n### boolean: short_str in long_str",
"_____no_output_____"
]
],
[
[
"# [ ] get user input for the name of some animals in the variable animals_input\nanimals_input = input(\"name an animal\")\nCat = 'Cat'\n# [ ] print true or false if 'cat' is in the string variable animals_input\nprint(\"is the animal a cat? \", Cat.lower() in animals_input.lower())\n",
"name an animaldog\nis the animal a cat? False\n"
],
[
"# [ ] get user input for color\ncolor = input(\" name a color \").lower()\n\n# [ ] print True or False for starts with \"b\"\ncolor.startswith('b')\n\n# [ ] print color variable value exactly as input \n# test with input: \"Blue\", \"BLUE\", \"bLUE\"\n\n\n\n",
" name a color bLUE\n"
]
],
[
[
"## Program: guess what I'm reading\n### short_str in long_str\n\n1. **[ ]** get user **`input`** for a single word describing something that can be read \n save in a variable called **can_read** \n e.g. - \"website\", \"newspaper\", \"blog\", \"textbook\" \n \n2. **[ ]** get user **`input`** for 3 things can be read \n save in a variable called **can_read_things** \n \n\n3. **[ ]** print **`true`** if the **can_read** string is found \n **in** the **can_read_things** string variable\n",
"_____no_output_____"
]
],
[
[
"# project: \"guess what I'm reading\"\n\n# 1[ ] get 1 word input for can_read variable\ncan_read = input('provide a single word ')\n\n# 2[ ] get 3 things input for can_read_things variable\ncan_read_things = input('provide 3 things that can be read ')\n\n# 3[ ] print True if can_read is in can_read_things\nprint(can_read.lower() in can_read_things.lower())\n\n# [] challenge: format the output to read \"item found = True\" (or false)\n# hint: look print formatting exercises\nprint(\"item found = true\" + can_read.lower() in can_read_things.lower())\n",
"provide a single word book\nprovide 3 things that can be read book newspaper journal\nTrue\nFalse\n"
]
],
[
[
"## Program: Allergy Check\n\n1. **[ ]** get user **`input`** for categories of food eaten in the last 24 hours \n save in a variable called **input_test** \n\n2. **[ ]** print **`True`** if \"dairy\" is in the **input_test** string \n3. **[ ]** Test the code so far \n4. **[ ]** repeat the process checking the input for \"nuts\", **challenge** add \"Seafood\" and \"chocolate\" \n5. **[ ]** Test your code \n \n6. **[ ] challenge:** make your code work for input regardless of case, e.g. - print **`True`** for \"Nuts\", \"NuTs\", \"NUTS\" or \"nuts\" \n",
"_____no_output_____"
]
],
[
[
"# Allergy check \n\n# 1[ ] get input for test\ninput_test = input(\" Enter a food you have eaten \").lower()\n\n# 2/3[ ] print True if \"dairy\" is in the input or False if not\nprint('dairy' in input_test)\n\n# 4[ ] Check if \"nuts\" are in the input\nprint('nuts' in input_test)\n# 4+[ ] Challenge: Check if \"seafood\" is in the input\nprint('seafood' in input_test)\n# 4+[ ] Challenge: Check if \"chocolate\" is in the input\nprint('chocolate' in input_test)\n",
" Enter a food you have eaten NuTs\nFalse\nTrue\nFalse\nFalse\n"
]
],
[
[
"[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecd8d6dc1839e42c8c73cc5adbd5e469e3c14e66 | 13,993 | ipynb | Jupyter Notebook | exercise_10/VAE_CVAE.ipynb | machism0/deep-learn-2018 | c60874f700352dbab40563366a28fd69691b481a | [
"MIT"
] | null | null | null | exercise_10/VAE_CVAE.ipynb | machism0/deep-learn-2018 | c60874f700352dbab40563366a28fd69691b481a | [
"MIT"
] | null | null | null | exercise_10/VAE_CVAE.ipynb | machism0/deep-learn-2018 | c60874f700352dbab40563366a28fd69691b481a | [
"MIT"
] | null | null | null | 44.849359 | 130 | 0.554634 | [
[
[
"%matplotlib inline\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf",
"_____no_output_____"
],
[
"class VAE:\n def __init__(self, batch_size=100, latent_dim=2):\n self.latent_dim = latent_dim\n self.batch_size = tf.cast(tf.placeholder_with_default(batch_size, shape=()), dtype=tf.int64)\n self.convd_size = 22\n self.dense_size = int(np.sqrt(self.convd_size * self.convd_size * 16))\n\n self.is_training = tf.placeholder_with_default(True, shape=())\n self.image_input = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28, 1])\n self.image_batch, self.iterator, _ = self._make_dataset_iterator()\n self.z_mean, self.z_log_var = self._encoder()\n self.z = self._sampler()\n self.decoded = self._decoder()\n\n self.loss, self.optimization, self.reconstruction_loss, self.latent_loss = self._make_loss_opt()\n\n def _make_dataset_iterator(self):\n dataset = tf.data.Dataset.from_tensor_slices(self.image_input)\n dataset = dataset.shuffle(buffer_size=20000)\n dataset = dataset.batch(batch_size=self.batch_size)\n\n iterator = dataset.make_initializable_iterator()\n image_batch = iterator.get_next()\n return image_batch, iterator, dataset\n\n def _encoder(self):\n conv_kwargs = {'kernel_size': 3, 'filters': 16, 'padding': 'valid', 'strides': 1, 'activation': tf.nn.leaky_relu}\n x = tf.layers.conv2d(self.image_batch, **conv_kwargs)\n x = tf.layers.batch_normalization(x, training=self.is_training)\n x = tf.layers.conv2d(x, **conv_kwargs)\n x = tf.layers.conv2d(x, **conv_kwargs)\n x = tf.layers.flatten(x)\n x = tf.layers.dense(x, units=self.dense_size, activation=tf.nn.leaky_relu)\n z_mean = tf.layers.dense(x, units=self.latent_dim)\n z_log_var = tf.layers.dense(x, units=self.latent_dim)\n return z_mean, z_log_var\n\n def _sampler(self):\n self.samples = tf.random_normal(shape=[self.batch_size, self.latent_dim],\n mean=0.,\n stddev=1.,\n dtype=tf.float32)\n z = self.z_mean + tf.sqrt(tf.exp(self.z_log_var)) * self.samples\n return z\n\n def _decoder(self):\n conv_kwargs = {'padding': 'valid', 'strides': 1}\n x = tf.layers.dense(self.z, units=self.dense_size, 
activation=tf.nn.leaky_relu)\n x = tf.layers.dense(x, units=self.dense_size ** 2, activation=tf.nn.leaky_relu)\n x = tf.reshape(x, shape=[-1, self.convd_size, self.convd_size, 16])\n x = tf.layers.conv2d_transpose(x, kernel_size=5, filters=16, activation=tf.nn.leaky_relu, **conv_kwargs)\n x = tf.layers.conv2d_transpose(x, kernel_size=5, filters=16, activation=tf.nn.leaky_relu, **conv_kwargs)\n x = tf.layers.conv2d(x, kernel_size=3, filters=8, activation=tf.nn.leaky_relu, **conv_kwargs)\n decoded = tf.layers.conv2d(x, kernel_size=3, filters=1, padding='same', activation=tf.nn.sigmoid)\n return decoded\n\n def _make_loss_opt(self):\n reconstruction_loss = tf.reduce_sum(self.image_batch * tf.log(1e-10 + self.decoded) +\n (1 - self.image_batch) * tf.log(1e-10 + 1 - self.decoded),\n axis=[1, 2, 3])\n reconstruction_loss = tf.reduce_mean(reconstruction_loss)\n latent_loss = 0.5 * tf.reduce_sum(1 + self.z_log_var - self.z_mean ** 2 - tf.exp(self.z_log_var), axis=1)\n latent_loss = tf.reduce_mean(latent_loss)\n loss = -(reconstruction_loss + latent_loss)\n\n opt = tf.train.AdamOptimizer(learning_rate=1e-5).minimize(loss)\n return loss, opt, reconstruction_loss, latent_loss\n\n def train(self, session, images):\n session.run(self.iterator.initializer, feed_dict={self.image_input: images})\n\n while True:\n try:\n _, loss, reconstruction_loss, latent_loss, decoded = session.run(\n [self.optimization, self.loss, self.reconstruction_loss, self.latent_loss, self.decoded],\n feed_dict={self.is_training: True}\n )\n except tf.errors.OutOfRangeError:\n break\n\n return loss, reconstruction_loss, latent_loss, decoded",
"_____no_output_____"
],
[
"with np.load('vae-cvae-challenge.npz') as fh:\n images, labels = fh['data_x'], fh['data_y']\n images = np.reshape(images, newshape=[-1, 28, 28, 1])\nprint(f'image shape: {images.shape}, labels shape: {labels.shape}')",
"_____no_output_____"
],
[
"def visualize_digits(tensor_to_visualize):\n    plt.axis('off')\n    plt.imshow(np.squeeze(tensor_to_visualize), cmap='gray')\n    plt.show()\n\ncount_epochs = 25\nvae = VAE()\n\n# Check sampling with tf.Print (already in there, but logging doesn't work in jupyter.)\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n\n    for epoch in range(count_epochs):\n        loss, reconstruction_loss, latent_loss, decoded = vae.train(sess, images)\n        visualize_digits(decoded[0])",
"_____no_output_____"
],
[
"x = tf.constant(np.random.rand(10,20,5))\nx = tf.reshape(x, shape=tf.constant((-1, 2)))",
"_____no_output_____"
],
[
"item = tf.constant(np.random.rand(10,5,3,2))\nok = tf.reduce_sum(np.random.rand(10,5,3,2), axis=item.rank)\nwith tf.Session() as sess:\n print(sess.run(ok.shape))",
"_____no_output_____"
],
[
"len(item.shape)",
"_____no_output_____"
],
[
"def iterate(predicate, images, iterator, session):\n session.run(iterator.initializer, feed_dict={real_images: images})\n while True:\n try:\n result = session.run(predicate)\n except tf.errors.OutOfRangeError:\n break\n return result",
"_____no_output_____"
],
[
"class CVAE:\n def __init__(self, batch_size=100, latent_dim=1):\n self.latent_dim = latent_dim\n self.batch_size = tf.cast(tf.placeholder_with_default(batch_size, shape=()), dtype=tf.int64)\n self.convd_size = 22\n self.dense_size = int(np.sqrt(self.convd_size * self.convd_size * 16))\n\n self.is_training = tf.placeholder_with_default(True, shape=())\n self.image_input = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28, 1])\n self.label_input = tf.placeholder(dtype=tf.int64, shape=[None])\n self.image_batch, self.image_iterator, self.label_batch, self.label_iterator = self._make_dataset_iterator()\n self.z_mean, self.z_log_var = self._encoder()\n self.z = self._sampler()\n self.decoded = self._decoder()\n\n self.loss, self.optimization, self.reconstruction_loss, self.latent_loss = self._make_loss_opt()\n\n def _make_dataset_iterator(self):\n label_dataset = tf.data.Dataset.from_tensor_slices(self.label_input)\n label_dataset = label_dataset.shuffle(buffer_size=20000, seed=42)\n label_dataset = label_dataset.batch(batch_size=self.batch_size)\n\n label_iterator = label_dataset.make_initializable_iterator()\n label_batch = label_iterator.get_next()\n label_batch = tf.one_hot(label_batch, 10)\n\n image_dataset = tf.data.Dataset.from_tensor_slices(self.image_input)\n image_dataset = image_dataset.shuffle(buffer_size=20000, seed=42)\n image_dataset = image_dataset.batch(batch_size=self.batch_size)\n\n image_iterator = image_dataset.make_initializable_iterator()\n image_batch = image_iterator.get_next()\n return image_batch, image_iterator, label_batch, label_iterator\n\n def _encoder(self):\n conv_kwargs = {'kernel_size': 3, 'filters': 16, 'padding': 'valid', 'strides': 1,\n 'activation': tf.nn.leaky_relu}\n x = tf.layers.conv2d(self.image_batch, **conv_kwargs)\n x = tf.layers.batch_normalization(x, training=self.is_training)\n x = tf.layers.conv2d(x, **conv_kwargs)\n x = tf.layers.conv2d(x, **conv_kwargs)\n x = tf.layers.flatten(x)\n x = tf.concat([x, 
self.label_batch], axis=1)\n x = tf.layers.dense(x, units=self.dense_size, activation=tf.nn.leaky_relu)\n z_mean = tf.layers.dense(x, units=self.latent_dim)\n z_log_var = tf.layers.dense(x, units=self.latent_dim)\n return z_mean, z_log_var\n\n def _sampler(self):\n self.samples = tf.random_normal(shape=[self.batch_size, self.latent_dim],\n mean=0.,\n stddev=1.,\n dtype=tf.float32)\n z = self.z_mean + tf.sqrt(tf.exp(self.z_log_var)) * self.samples\n return z\n\n def _decoder(self):\n conv_kwargs = {'padding': 'valid', 'strides': 1}\n x = tf.concat([self.z, self.label_batch], axis=1)\n x = tf.layers.dense(x, units=self.dense_size, activation=tf.nn.leaky_relu)\n x = tf.layers.dense(x, units=self.dense_size ** 2, activation=tf.nn.leaky_relu)\n x = tf.reshape(x, shape=[-1, self.convd_size, self.convd_size, 16])\n x = tf.layers.conv2d_transpose(x, kernel_size=5, filters=16, activation=tf.nn.leaky_relu, **conv_kwargs)\n x = tf.layers.conv2d_transpose(x, kernel_size=5, filters=16, activation=tf.nn.leaky_relu, **conv_kwargs)\n x = tf.layers.conv2d(x, kernel_size=3, filters=8, activation=tf.nn.leaky_relu, **conv_kwargs)\n decoded = tf.layers.conv2d(x, kernel_size=3, filters=1, padding='same', activation=tf.nn.sigmoid)\n return decoded\n\n def _make_loss_opt(self):\n reconstruction_loss = tf.reduce_sum(self.image_batch * tf.log(1e-10 + self.decoded) +\n (1 - self.image_batch) * tf.log(1e-10 + 1 - self.decoded),\n axis=[1, 2, 3])\n reconstruction_loss = tf.reduce_mean(reconstruction_loss)\n latent_loss = 0.5 * tf.reduce_sum(1 + self.z_log_var - self.z_mean ** 2 - tf.exp(self.z_log_var), axis=1)\n latent_loss = tf.reduce_mean(latent_loss)\n loss = -(reconstruction_loss + latent_loss)\n\n opt = tf.train.AdamOptimizer(learning_rate=1e-5).minimize(loss)\n return loss, opt, reconstruction_loss, latent_loss\n\n def train(self, session, images, labels):\n session.run([self.image_iterator.initializer, self.label_iterator.initializer],\n feed_dict={self.image_input: images, 
self.label_input: labels})\n\n while True:\n try:\n _, loss, reconstruction_loss, latent_loss, decoded = session.run(\n [self.optimization, self.loss, self.reconstruction_loss, self.latent_loss, self.decoded],\n feed_dict={self.is_training: True}\n )\n except tf.errors.OutOfRangeError:\n break\n\n return loss, reconstruction_loss, latent_loss, decoded",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd8f37cee2f07cd0d4b769631f163b5e08d0bf0 | 13,132 | ipynb | Jupyter Notebook | notebooks/mapd_to_pygdf_to_matrix_1.ipynb | raydouglass/pygdf | d1d67646b930f7e70e4df833eb17a7ee58978d0f | [
"Apache-2.0"
] | 5 | 2019-01-15T12:31:49.000Z | 2021-03-05T21:17:13.000Z | notebooks/mapd_to_pygdf_to_matrix_1.ipynb | raydouglass/pygdf | d1d67646b930f7e70e4df833eb17a7ee58978d0f | [
"Apache-2.0"
] | 19 | 2018-07-18T07:15:44.000Z | 2021-02-22T17:00:18.000Z | notebooks/mapd_to_pygdf_to_matrix_1.ipynb | raydouglass/pygdf | d1d67646b930f7e70e4df833eb17a7ee58978d0f | [
"Apache-2.0"
] | 2 | 2020-05-01T09:54:34.000Z | 2021-04-17T10:57:07.000Z | 22.838261 | 1,552 | 0.517286 | [
[
[
"Test MapD->PyGDF->matrix",
"_____no_output_____"
]
],
[
[
"PWD = !pwd",
"_____no_output_____"
],
[
"import sys\nimport os.path",
"_____no_output_____"
]
],
[
[
"Add import path to MapD Thrift binding",
"_____no_output_____"
]
],
[
[
"mapd_thrift_path = os.path.join(PWD[0], 'gen-py')\nsys.path.append(mapd_thrift_path)",
"_____no_output_____"
]
],
[
[
"Add import path to Arrow Schema",
"_____no_output_____"
]
],
[
[
"arrow_schema_path = os.path.join(PWD[0], 'arrow_schema')\nsys.path.append(arrow_schema_path)",
"_____no_output_____"
],
[
"from thrift.protocol import TBinaryProtocol\nfrom thrift.protocol import TJSONProtocol\nfrom thrift.transport import TSocket\nfrom thrift.transport import THttpClient\nfrom thrift.transport import TTransport",
"_____no_output_____"
],
[
"from mapd import MapD\nfrom mapd import ttypes",
"_____no_output_____"
]
],
[
[
"MapD connection",
"_____no_output_____"
]
],
[
[
"def get_client(host_or_uri, port, http):\n if http:\n transport = THttpClient.THttpClient(host_or_uri)\n protocol = TJSONProtocol.TJSONProtocol(transport)\n else:\n socket = TSocket.TSocket(host_or_uri, port)\n transport = TTransport.TBufferedTransport(socket)\n protocol = TBinaryProtocol.TBinaryProtocol(transport)\n\n client = MapD.Client(protocol)\n transport.open()\n return client",
"_____no_output_____"
],
[
"db_name = 'mapd'\nuser_name = 'mapd'\npasswd = 'HyperInteractive'\nhostname = 'localhost'\nportno = 9091\n\nclient = get_client(hostname, portno, False)\nsession = client.connect(user_name, passwd, db_name)\nprint('Connection complete')",
"Connection complete\n"
]
],
[
[
"The Query",
"_____no_output_____"
]
],
[
[
"query = 'select dest_lat, dest_lon from flights_2008_7M limit 23;'\nprint('Query is : ' + query)\n\n# always use True for is columnar\nresults = client.sql_execute_cudf(session, query, device_id=0, first_n=-1)",
"Query is : select dest_lat, dest_lon from flights_2008_7M limit 23;\n"
],
[
"results",
"_____no_output_____"
]
],
[
[
"Use Numba to access the IPC memory handle\n\nNote: this requires numba 0.32.0 + PR #2023\n\n```bash\ngit clone https://github.com/numba/numba\ncd numba\ngit fetch origin pull/2023/merge:pr/2023\ngit checkout pr/2023\n```",
"_____no_output_____"
]
],
[
[
"from numba import cuda\nfrom numba.cuda.cudadrv import drvapi",
"_____no_output_____"
],
[
"ipc_handle = drvapi.cu_ipc_mem_handle(*results.df_handle)",
"_____no_output_____"
],
[
"ipch = cuda.driver.IpcHandle(None, ipc_handle, size=results.df_size)",
"_____no_output_____"
],
[
"ctx = cuda.current_context()",
"_____no_output_____"
],
[
"dptr = ipch.open(ctx)",
"_____no_output_____"
],
[
"dptr",
"_____no_output_____"
]
],
[
[
"`dptr` is GPU memory containing the query result",
"_____no_output_____"
],
[
"Convert `dptr` into a GPU device ndarray (numpy array like object on GPU)",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"dtype = np.dtype(np.byte)\ndarr = cuda.devicearray.DeviceNDArray(shape=dptr.size, strides=dtype.itemsize, dtype=dtype, gpu_data=dptr)",
"_____no_output_____"
]
],
[
[
"Use PyGDF to read the arrow metadata from the query",
"_____no_output_____"
]
],
[
[
"from pygdf.gpuarrow import GpuArrowReader",
"_____no_output_____"
],
[
"reader = GpuArrowReader(darr)",
"_____no_output_____"
],
[
"reader.to_dict()",
"_____no_output_____"
]
],
[
[
"Wrap result in a Python CUDA DataFrame",
"_____no_output_____"
]
],
[
[
"from pygdf.dataframe import DataFrame",
"_____no_output_____"
],
[
"df = DataFrame()\nfor k, v in reader.to_dict().items():\n df[k] = v",
"_____no_output_____"
],
[
"df.columns, len(df)",
"_____no_output_____"
]
],
[
[
"Turn the dataframe into a matrix",
"_____no_output_____"
]
],
[
[
"gpu_matrix = df.as_gpu_matrix()",
"_____no_output_____"
]
],
[
[
"The ctypes pointer to the gpu matrix",
"_____no_output_____"
]
],
[
[
"ctypes_ptr = gpu_matrix.device_ctypes_pointer",
"_____no_output_____"
],
[
"print('address value as integer', hex(ctypes_ptr.value))",
"address value as integer 0x1020a800000\n"
]
],
[
[
"Get numpy array for the matrix",
"_____no_output_____"
]
],
[
[
"gpu_matrix.copy_to_host()",
"_____no_output_____"
]
],
[
[
"Cleanup the IPC handle",
"_____no_output_____"
]
],
[
[
"ipch.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd8fef61ee0c144bbda60f3004931d4e23dc1de | 207,359 | ipynb | Jupyter Notebook | Examples/Examples.ipynb | AdmiralWen/ds_tools | 5112c35d2007c1b654e939e46def8d159b5cf900 | [
"MIT"
] | null | null | null | Examples/Examples.ipynb | AdmiralWen/ds_tools | 5112c35d2007c1b654e939e46def8d159b5cf900 | [
"MIT"
] | null | null | null | Examples/Examples.ipynb | AdmiralWen/ds_tools | 5112c35d2007c1b654e939e46def8d159b5cf900 | [
"MIT"
] | null | null | null | 129.680425 | 61,258 | 0.815373 | [
[
[
"## Demonstration of the functions in the ds_tools library, using Kaggle's Titanic dataset",
"_____no_output_____"
],
[
"### **Installation - do this if installing for the first time:**\n`!pip install git+http://github.com/AdmiralWen/ds_tools.git`",
"_____no_output_____"
],
[
"### **Import main libraries**",
"_____no_output_____"
]
],
[
[
"# Import main libraries:\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n# Importing exploration library as de:\nfrom ds_tools import exploration as de",
"_____no_output_____"
]
],
[
[
"### **Part I: Exploration**",
"_____no_output_____"
]
],
[
[
"# Import Titanic dataset:\ntitanic = pd.read_csv('Titanic.csv')\ntitanic.head()",
"_____no_output_____"
]
],
[
[
"### data_info()\nDisplays basic information about a dataframe. Useful for identifying the number of unique observations, null observations, etc.",
"_____no_output_____"
]
],
[
[
"# data_info(dataframe):\nde.data_info(titanic)",
"_____no_output_____"
]
],
[
[
"### extreme_obs()\nDisplays the n (default n = 10) largest and smallest observations for a variable in a dataframe. If the boxplot argument is True, will also plot a boxplot for this variable. Use the whis argument to control the length of the whiskers (default is 1.5 IQR).",
"_____no_output_____"
]
],
[
[
"# extreme_obs(dataframe, variable, n = 10, boxplot = True, whis = 1.5):\nde.extreme_obs(titanic, 'Age')",
"_____no_output_____"
]
],
[
[
"### check_unique_by() and non_unique_items()\nChecks if a dataframe is unique by a given list of columns. The variables argument can be either a single column name (string) or a list.",
"_____no_output_____"
]
],
[
[
"# check_unique_by(dataframe, variables):\nde.check_unique_by(titanic, ['PassengerId'])",
"_____no_output_____"
],
[
"# non_unique_items(dataframe, variables):\nde.non_unique_items(titanic, ['Ticket']).head()",
"_____no_output_____"
]
],
[
[
"### freq_tab()\nReturns the frequency tabulation of the input variable as a Pandas dataframe. Specify drop_na = True to drop NaNs from the tabulation (default is False), and specify sort_by_count = False to sort the result alphabetically instead of by the frequency counts (default is True). Use the plot argument to specify the output type: frequency table, graph by count, or graph by percent (None, 'count', and 'percent' respectively).",
"_____no_output_____"
]
],
[
[
"# freq_tab(dataframe, variable, drop_na = False, sort_by_count = True, plot = None, fig_size = (16, 8)):\nde.freq_tab(titanic, 'Pclass')",
"_____no_output_____"
],
[
"# Specify whether to sort by count or by index:\nde.freq_tab(titanic, 'Pclass', sort_by_count = False)",
"_____no_output_____"
],
[
"# freq_tab_plot:\nfig, ax = plt.subplots(figsize = (6, 5))\nde.freq_tab_plot(ax, titanic, 'Pclass', drop_na = False, sort_by_count = False, plot = 'count')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### summary_tab()\nReturns the summary tabulation of the input variable as a Pandas dataframe. Be sure to enter the groupby_var and sum_var as strings; function can only support one group_by and sum variable. To sort by the grouping variable instead of the summary variable, specify sort_by_sum = False.",
"_____no_output_____"
]
],
[
[
"# summary_tab(dataframe, groupby_var, sum_var, sort_by_sum = True):\nde.summary_tab(titanic, 'Pclass', 'Fare')",
"_____no_output_____"
]
],
[
[
"### describe_by()\nAdds \"Non-NaN Count\" and \"Sum\" to df.groupby().describe().",
"_____no_output_____"
]
],
[
[
"# describe_by(dataframe, groupby_var, numeric_var):\nde.describe_by(titanic, 'Pclass', 'Fare')",
"_____no_output_____"
]
],
[
[
"### na_per_column()\nFunctions for viewing the NaN values of a dataset by column.",
"_____no_output_____"
]
],
[
[
"# na_per_column(dataframe):\nde.na_per_column(titanic)",
"_____no_output_____"
]
],
[
[
"### split_column()\nSplits a variable of a dataframe into multiple columns. You can specify the delimiter (which must be a string) using the delimiter argument, and the exp_cols_prefix argument (also a string) is used to prefix the split column names. The merge_orig and drop_orig arguments are used to control whether to merge back to the original dataframe and whether to drop the original column in the output.",
"_____no_output_____"
]
],
[
[
"# split_column(dataframe, variable, delimiter, exp_cols_prefix, merge_orig = True, drop_orig = False):\ntitanic2 = de.split_column(titanic, variable = 'Name', delimiter = ',', exp_cols_prefix = 'Name_')\n\n# (In this example it'd be more meaningful to rename the results afterwards):\ntitanic2.rename(columns = {'Name_0':'Last_Name', 'Name_1':'First_Name'}, inplace = True)\ntitanic2.head()",
"_____no_output_____"
]
],
[
[
"### correlation_heatmap()\nCreates a heatmap of the correlation matrix using the input dataframe and the variables list. The var_list argument should be a list of variable names (strings) that you wish to compute correlations for. Use sns_font_scale and fig_size to set the font and graph sizes.",
"_____no_output_____"
]
],
[
[
"# correlation_heatmap:\ncorr_vars = ['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare']\n\nfig, ax = plt.subplots(figsize = (8, 8))\nde.correlation_heatmap(ax, titanic, corr_vars, method = 'pearson', plot_title = 'Pearson Matrix Example', data_label_size = 16)\nplt.show()\n\n#de.correlation_heatmap(titanic, corr_vars, sns_font_scale = 1.5, fig_size = (10, 10))",
"_____no_output_____"
]
],
[
[
"### custom_boxplot()\nCreates a custom boxplot based on pre-calculated percentile metrics (built-in boxplot functions are often designed to generate from underlying data; they don't allow you to set the actual box & whisker positions). This function requires the percentile metrics to be in a dataframe. Each row of the dataframe should contain the name of the category we have metrics for, as well as 5 additional columns of percentile metrics in increasing order.",
"_____no_output_____"
]
],
[
[
"# Some sample data:\ndata = pd.DataFrame([{'category':'Dept A', 'p10':1, 'p25':4, 'p50':7, 'p75':8, 'p90':10},\n {'category':'Dept B', 'p10':0.2, 'p25':2.5, 'p50':5.75, 'p75':7.3, 'p90':9.1},\n {'category':'Dept C', 'p10':-2, 'p25':2, 'p50':4, 'p75':8, 'p90':12}])\ndata",
"_____no_output_____"
],
[
"# custom_boxplot:\nfig, ax = plt.subplots(figsize = (6, 4))\nde.custom_boxplot(ax, data, 'category', ['p10', 'p25', 'p50', 'p75', 'p90'], plot_title = 'Example')\nplt.show()",
"_____no_output_____"
]
],
[
[
"### **Part II: Evaluation**",
"_____no_output_____"
]
],
[
[
"from ds_tools import evaluation as ev",
"_____no_output_____"
]
],
[
[
"### plot_confusion_matrix()\nPlots a confusion matrix given actual and predicted labels (can handle more than 2 classes).",
"_____no_output_____"
]
],
[
[
"# plot_confusion_matrix:\nimport random\nt = [random.randrange(0, 2, 1) for i in range(50)]\np = [random.randrange(0, 2, 1) for i in range(50)]\n\nfig, ax = plt.subplots(1, 2, figsize = (10, 8))\nev.plot_confusion_matrix(ax[0], t, p, normalize = False, title = 'Test Plot 1', cmap = plt.cm.Blues)\nev.plot_confusion_matrix(ax[1], t, p, normalize = True, title = 'Test Plot 2', cmap = plt.cm.Blues)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### gini() and gini_plot()\nCalculates gini coefficient (raw or normalized) given the actual, predicted, and optionally weight vectors. Use gini_plot() to plot the lorenz curves.",
"_____no_output_____"
]
],
[
[
"# gini:\nt = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 4, 5, 6])\np = np.array([0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.2, 0.31, 0.4, 0.73, 0.31, 0.4, 0.6, 0.2, 0.32, 0.53, 0.74, 0.1, 0.34,\n 0.9, 0.2, 0.11, 0.71, 0.3, 0.51, 0.61, 0.72, 0.52, 0.29, 0.8])\nw = np.array([8, 3, 4, 9, 6, 2, 13, 8, 11, 8, 7, 9, 8, 5, 13, 2, 7, 10, 16, 6, 8, 10, 1, 11, 15, 14, 7, 10, 12, 11])\nev.gini(actual = t, predicted = p, weight = w, normalize = True)",
"_____no_output_____"
],
[
"# gini_plot:\nimport random\nt = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 4, 5, 6])\np = np.array([0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.2, 0.31, 0.4, 0.73, 0.31, 0.4, 0.6, 0.2, 0.32, 0.53, 0.74, 0.1, 0.34,\n 0.9, 0.2, 0.11, 0.71, 0.3, 0.51, 0.61, 0.72, 0.52, 0.29, 0.8])\nw = np.array([8, 3, 4, 9, 6, 2, 13, 8, 11, 8, 7, 9, 8, 5, 13, 2, 7, 10, 16, 6, 8, 10, 1, 11, 15, 14, 7, 10, 12, 11])\nr = np.array([round(random.random(), 1) for i in range(30)])\n\nfig, ax = plt.subplots(1, 2, figsize = (12, 6))\nev.gini_plot(ax[0], actual = t, predicted = r, weight = w, normalize = True, title = 'Example Plot - Random Predictions')\nev.gini_plot(ax[1], actual = t, predicted = p, weight = w, normalize = True, title = 'Example Plot - Simulated Predictions')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecd905269e590269ceaea8ef28d4d60499681192 | 12,642 | ipynb | Jupyter Notebook | Lazypredict Regressor.ipynb | rcgopi100/LazypredictRegressor | 2d55d3afe1bea555568ef6c6d157b8ebfc2174e9 | [
"MIT"
] | null | null | null | Lazypredict Regressor.ipynb | rcgopi100/LazypredictRegressor | 2d55d3afe1bea555568ef6c6d157b8ebfc2174e9 | [
"MIT"
] | null | null | null | Lazypredict Regressor.ipynb | rcgopi100/LazypredictRegressor | 2d55d3afe1bea555568ef6c6d157b8ebfc2174e9 | [
"MIT"
] | null | null | null | 37.737313 | 129 | 0.42873 | [
[
[
"# Install lazypredict (the ! prefix runs pip as a shell command inside the notebook)\n!pip install lazypredict",
"_____no_output_____"
],
[
"#Import Lazypredict and import all libraries\nimport lazypredict\nfrom lazypredict.Supervised import LazyRegressor\nfrom sklearn.model_selection import train_test_split\nimport os\nimport pandas as pd\npd.set_option('float_format', '{:f}'.format)\nimport numpy as np",
"_____no_output_____"
],
[
"#Function to download and read delivery days CSV file\nDATA_FILES_PATH = r\"C:\\\\Users\\\\gchandr4\\\\Documents\\\\Blogs\\\\Customer Price Prediction\"\ndef load_customer_data(data_path=DATA_FILES_PATH):\n csv_path = os.path.join(data_path, \"customer_data.csv\")\n return pd.read_csv(csv_path)",
"_____no_output_____"
],
[
"#load the data\ndata = load_customer_data()",
"_____no_output_____"
],
[
"#Print Data\ndata.head()",
"_____no_output_____"
],
[
"#Split data into X and y target\nX = data[['Sold To', 'Ship To', 'Material', 'Price/Qty (USD)', 'Qty']]\ny = data[['Total Price (USD)']]",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=.5,random_state =123)",
"_____no_output_____"
],
[
"clf = LazyRegressor(verbose=0,ignore_warnings=True, custom_metric=None)\nmodels, predictions = clf.fit(X_train, X_test, y_train, y_test)",
"100%|██████████████████████████████████████████████████████████████████████████████████| 42/42 [00:02<00:00, 20.57it/s]\n"
],
[
"print(models)",
" Adjusted R-Squared R-Squared RMSE \\\nModel \nDecisionTreeRegressor 1.000000 1.000000 0.000000 \nExtraTreesRegressor 0.999996 0.999996 65.667529 \nXGBRegressor 0.999989 0.999989 105.431512 \nExtraTreeRegressor 0.999974 0.999974 159.799293 \nRandomForestRegressor 0.999819 0.999821 420.760396 \nBaggingRegressor 0.999743 0.999746 500.915888 \nHistGradientBoostingRegressor 0.999418 0.999424 754.389006 \nLGBMRegressor 0.999399 0.999405 766.450977 \nGradientBoostingRegressor 0.999021 0.999031 978.470164 \nGaussianProcessRegressor 0.991668 0.991752 2854.725904 \nAdaBoostRegressor 0.973038 0.973308 5135.432221 \nKNeighborsRegressor 0.946168 0.946708 7256.405860 \nPoissonRegressor 0.920593 0.921388 8813.175012 \nLarsCV 0.874240 0.875500 11091.085843 \nLassoLarsCV 0.874240 0.875500 11091.085843 \nLassoCV 0.874239 0.875499 11091.095428 \nLassoLarsIC 0.874225 0.875485 11091.726442 \nRidge 0.874163 0.875424 11094.465950 \nRidgeCV 0.874163 0.875424 11094.465950 \nBayesianRidge 0.874148 0.875409 11095.122622 \nLassoLars 0.874133 0.875394 11095.774402 \nLasso 0.874098 0.875359 11097.335123 \nLars 0.874096 0.875358 11097.412234 \nLinearRegression 0.874096 0.875358 11097.412234 \nTransformedTargetRegressor 0.874096 0.875358 11097.412234 \nSGDRegressor 0.874010 0.875273 11101.184242 \nOrthogonalMatchingPursuitCV 0.873945 0.875208 11104.079787 \nHuberRegressor 0.867924 0.869247 11366.175046 \nPassiveAggressiveRegressor 0.857111 0.858543 11822.295856 \nElasticNet 0.776357 0.778598 14790.389572 \nRANSACRegressor 0.735965 0.738611 16070.619670 \nTweedieRegressor 0.649481 0.652993 18516.465197 \nGeneralizedLinearRegressor 0.649481 0.652993 18516.465197 \nGammaRegressor 0.632317 0.636001 18964.396366 \nOrthogonalMatchingPursuit 0.534440 0.539105 21339.778094 \nElasticNetCV 0.059688 0.069110 30327.602403 \nNuSVR -0.009266 0.000847 31419.905512 \nDummyRegressor -0.013401 -0.003247 31484.209408 \nSVR -0.027627 -0.017330 31704.423676 \nKernelRidge -0.875217 -0.856427 42828.000191 \nLinearSVR 
-1.491864 -1.466895 49370.126782 \nMLPRegressor -1.521979 -1.496708 49667.553278 \n\n Time Taken \nModel \nDecisionTreeRegressor 0.011968 \nExtraTreesRegressor 0.103405 \nXGBRegressor 0.045250 \nExtraTreeRegressor 0.027434 \nRandomForestRegressor 0.179598 \nBaggingRegressor 0.031242 \nHistGradientBoostingRegressor 0.347581 \nLGBMRegressor 0.050692 \nGradientBoostingRegressor 0.064550 \nGaussianProcessRegressor 0.031249 \nAdaBoostRegressor 0.076498 \nKNeighborsRegressor 0.027400 \nPoissonRegressor 0.000000 \nLarsCV 0.015622 \nLassoLarsCV 0.019946 \nLassoCV 0.067971 \nLassoLarsIC 0.011970 \nRidge 0.005543 \nRidgeCV 0.016876 \nBayesianRidge 0.028096 \nLassoLars 0.018568 \nLasso 0.015621 \nLars 0.015621 \nLinearRegression 0.011966 \nTransformedTargetRegressor 0.025389 \nSGDRegressor 0.000000 \nOrthogonalMatchingPursuitCV 0.012012 \nHuberRegressor 0.031243 \nPassiveAggressiveRegressor 0.015621 \nElasticNet 0.012965 \nRANSACRegressor 0.015622 \nTweedieRegressor 0.010971 \nGeneralizedLinearRegressor 0.019741 \nGammaRegressor 0.012505 \nOrthogonalMatchingPursuit 0.010971 \nElasticNetCV 0.039258 \nNuSVR 0.035672 \nDummyRegressor 0.008976 \nSVR 0.015625 \nKernelRidge 0.020321 \nLinearSVR 0.005038 \nMLPRegressor 0.524264 \n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd90b33010e58d9f38f206a625c253e198076f7 | 181,655 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Web Top 100 Songs Billboard Scrape con Python-checkpoint.ipynb | juanspinelli/all | 53897e926b098c11653c1ca9035b54db7d63c5d4 | [
"MIT"
] | 3 | 2019-12-24T03:06:10.000Z | 2019-12-31T08:57:41.000Z | .ipynb_checkpoints/Web Top 100 Songs Billboard Scrape con Python-checkpoint.ipynb | juanspinelli/all | 53897e926b098c11653c1ca9035b54db7d63c5d4 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/Web Top 100 Songs Billboard Scrape con Python-checkpoint.ipynb | juanspinelli/all | 53897e926b098c11653c1ca9035b54db7d63c5d4 | [
"MIT"
] | 2 | 2019-12-27T11:25:16.000Z | 2020-07-08T10:21:57.000Z | 91.744949 | 14,264 | 0.768132 | [
[
[
"import requests\nimport urllib.request\nimport pandas as pd\nimport re\nimport json\nfrom time import time\nfrom datetime import datetime, timedelta\nfrom bs4 import BeautifulSoup\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\npalette = [\"#3a9679\", \"#a7d129\",\"#ff3d00\", \"#00e676\", \"#ad1457\", \"#f09c67\", \"#257aa6\", \n \"#ffab00\", \"#e16262\", \"#263238\"]\n\nsns.palplot(sns.color_palette(palette))",
"_____no_output_____"
],
[
"timeFinish = 0\nstart_time = time()",
"_____no_output_____"
],
[
"url = 'https://www.billboard.com/charts/hot-100'\npath_file_dest = r'billboard.json'\nresponse = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Platform; Security; OS-or-CPU; Localization; rv:1.4) Gecko/20030624 Netscape/7.1 (ax)'})",
"_____no_output_____"
],
[
"def write_doc_json(content, mode):\n own_file = open(path_file_dest, mode, encoding='utf-8')\n own_file.write(content.decode('utf-8'))\n own_file.close()\n print('\\n• El archivo json fue sobre-escrito con exito!')",
"_____no_output_____"
],
[
"if response.status_code == 200:\n \n info = []\n soup = BeautifulSoup(response.text, \"html.parser\")\n data = soup.findAll('div', attrs={'class': 'chart-list-item__first-row chart-list-item__cursor-pointer'})\n \n for x in range (0, len(data)):\n puesto = data[x].find('div', {'class': 'chart-list-item__rank'}).get_text()\n tema = data[x].find('span', {'class': 'chart-list-item__title-text'}).get_text()\n cantante = data[x].find('div', {'chart-list-item__artist'}).get_text()\n \n individuo = {'Puesto' : puesto.replace(\"\\n\",\"\"), \n 'Tema' : tema.replace(\"\\n\",\"\"), \n 'cantante' : cantante.replace(\"\\n\",\"\")}\n \n info.append(individuo)\n \n content = json.dumps(info, indent=4, sort_keys=True, ensure_ascii=False).encode('utf-8')\n write_doc_json(content ,'w+')\n data = pd.read_json(path_file_dest, encoding = 'UTF-8')\n print('• Se leyo el json y se creo el dataframe data')\nelse:\n print('No se pudo conectar')",
"\n• El archivo json fue sobre-escrito con exito!\n• Se leyo el json y se creo el dataframe data\n"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.cantante = data.cantante.apply(lambda x: x.replace('Featuring', '&'))\ndata['cancion'] = data.Tema + ' (' + data.cantante + ')'\ncanciones = data.cancion\ndata.head()",
"_____no_output_____"
],
[
"print('''\\nObteniendo duracion, ranking Deezer y si el contenido de la letra \ny la tapa del cd son explicitos en cada tema ... \\n''')\n\nfor x in range (0, len(canciones)):\n url = 'https://api.deezer.com/search?q=' + str(canciones[x])\n \n response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Platform; Security; OS-or-CPU; Localization; rv:1.4) Gecko/20030624 Netscape/7.1 (ax)'})\n if response.status_code == 200:\n soup = BeautifulSoup(response.text, \"html.parser\")\n newDictionary=json.loads(str(soup))\n if len(newDictionary['data']) != 0:\n data.loc[x, 'time'] = newDictionary['data'][0]['duration']\n data.loc[x, 'deezer_rank'] = newDictionary['data'][0]['rank']\n data.loc[x, 'explicit_lyrics'] = newDictionary['data'][0]['explicit_lyrics']\n data.loc[x, 'explicit_content_lyrics'] = newDictionary['data'][0]['explicit_content_lyrics']\n data.loc[x, 'explicit_content_cover'] = newDictionary['data'][0]['explicit_content_cover']\n else:\n data.loc[x, 'time'] = 0\n data.loc[x, 'deezer_rank'] = -1\n data.loc[x, 'explicit_lyrics'] = False\n data.loc[x, 'explicit_content_lyrics'] = -1\n data.loc[x, 'explicit_content_cover'] = -1\n \n if x != 0:\n if x%10 == 0:\n print(' • ' + str(x) + ' temas obtenidos')\n \n if x == 99:\n print('\\nSe obtuvieron las duraciones y los rankings de los 100 temas')\n \n else:\n print('Error al conectar a la siguiente Url: ' + str(url))\n",
"\nObteniendo duracion, ranking Deezer y si el contenido de la letra \ny la tapa del cd son explicitos en cada tema ... \n\n • 10 temas obtenidos\n • 20 temas obtenidos\n • 30 temas obtenidos\n • 40 temas obtenidos\n • 50 temas obtenidos\n • 60 temas obtenidos\n • 70 temas obtenidos\n • 80 temas obtenidos\n • 90 temas obtenidos\n\nSe obtuvieron las duraciones y los rankings de los 100 temas\n"
],
[
"print(data.time[data.time == -1].sum())\nprint(data.deezer_rank[data.deezer_rank == -1].sum())\nprint(data.explicit_lyrics[data.explicit_lyrics == -1].sum())\nprint(data.explicit_content_lyrics[data.explicit_content_lyrics == -1].sum())\nprint(data.explicit_content_cover[data.explicit_content_cover == -1].sum())",
"0.0\n-1.0\n0\n-1.0\n-1.0\n"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.explicit_lyrics = data.explicit_lyrics.apply(lambda x: 1 if x == True else 0)",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.dtypes",
"_____no_output_____"
],
[
"import numpy as np\ndata['time'] = data['time'].apply(np.int64)\ndata['deezer_rank'] = data['deezer_rank'].apply(np.int64)\ndata['explicit_content_lyrics'] = data['explicit_content_lyrics'].apply(np.int64)\ndata['explicit_content_cover'] = data['explicit_content_cover'].apply(np.int64)",
"_____no_output_____"
],
[
"data.dtypes",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"artistas = data.cantante",
"_____no_output_____"
],
[
"for x in range (0, len(artistas)):\n url = 'https://www.google.es/search?ei=kZzMXKWUJpy75OUPs6OvqAc&q=' + artistas[x].split(' &')[0] + '+edad&oq=' + artistas[x].split(' &')[0] + '+edad&gs_l=psy-ab.3..0i70i251.7054.7621..7845...0.0..0.109.421.4j1......0....1..gws-wiz.......0i67j0j0i22i30.__AALxE8064'\n response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Platform; Security; OS-or-CPU; Localization; rv:1.4) Gecko/20030624 Netscape/7.1 (ax)'})\n if response.status_code == 200:\n soup = BeautifulSoup(response.text, \"html.parser\")\n \n nacimiento = soup.find('span', {'class': 'A1t5ne'})\n \n if nacimiento is None:\n fechaNacimiento = 'sd' # assignment, not comparison\n else:\n fechaNacimiento = soup.find('span', {'class': 'A1t5ne'}).get_text()\n \n data.loc[x, 'nacimiento'] = fechaNacimiento",
"_____no_output_____"
],
[
"data.head(3)",
"_____no_output_____"
],
[
"nuevaData = data.nacimiento",
"_____no_output_____"
],
[
"def mesANumero(string):\n m = {\n 'enero': \"1\",\n 'febrero': \"2\",\n 'marzo': \"3\",\n 'abril': \"4\",\n 'mayo': \"5\",\n 'junio': \"6\",\n 'julio': \"7\",\n 'agosto': \"8\",\n 'septiembre': \"9\",\n 'octubre': \"10\",\n 'noviembre': \"11\",\n 'diciembre': \"12\"\n }\n\n fecha = string.split(\" \")\n dia = fecha[0]\n mes = fecha[2]\n anio = fecha[4]\n\n try:\n \n out = str(m[mes.lower()])\n return dia + \"-\" + out + \"-\" + anio\n except:\n \n raise ValueError('No es un mes')\n \nfor x in range (0, len(nuevaData)):\n \n elementos = len(nuevaData[x].split(','))\n separar = nuevaData[x].split(',')\n\n if elementos == 5:\n \n ciudad = separar[2]\n provincia = separar[3]\n pais = separar[4]\n nacimiento = separar[0].split('(')\n fecha_final = mesANumero(nacimiento[0])\n \n elif elementos == 4:\n \n verificacion = [k for k in separar[0] if k.isdigit()]\n \n ciudad = separar[1]\n provincia = separar[2]\n pais = separar[3]\n \n if len(verificacion) <= 4:\n \n fecha_final = ''\n \n else:\n \n nacimiento = separar[0].split('(')\n fecha_final = mesANumero(nacimiento[0])\n \n elif elementos == 3:\n \n verificacion = [k for k in separar[0] if k.isdigit()]\n \n if len(verificacion) == 0:\n \n nacimiento = ''\n ciudad = separar[0]\n provincia = separar[1]\n pais = separar[2]\n \n if len(verificacion) != 0:\n \n ciudad = ''\n provincia = separar[1]\n pais = separar[2]\n nacimiento = separar[0].split('(')\n fecha_final = mesANumero(nacimiento[0])\n \n else :\n \n ciudad = 'sd'\n provincia = 'sd'\n pais = 'sd'\n fecha_final = 'sd'\n \n data.loc[x, 'fecha_nacimiento'] = fecha_final\n data.loc[x, 'ciudad'] = ciudad\n data.loc[x, 'provincia'] = provincia\n data.loc[x, 'pais'] = pais",
"_____no_output_____"
],
[
"data.head(2)",
"_____no_output_____"
],
[
"print(len(data[data.fecha_nacimiento == '']), len(data[data.ciudad == '']), \n len(data[data.provincia == '']), len(data[data.pais == '']))",
"3 12 0 0\n"
],
[
"for x in range (0, len(data)):\n if data.ciudad[x] == '':\n data.loc[x, 'ciudad'] = data.provincia[x]\n\nfor x in range (0, len(data)):\n if data.fecha_nacimiento[x] == '':\n data.loc[x, 'fecha_nacimiento'] = 'sd'",
"_____no_output_____"
],
[
"print(len(data[data.fecha_nacimiento == '']), len(data[data.ciudad == '']), \n len(data[data.provincia == '']), len(data[data.pais == '']))",
"0 0 0 0\n"
],
[
"data.ciudad = data.ciudad.apply(lambda x : \" \".join(x.split()).lower())\ndata.provincia = data.provincia.apply(lambda x : \" \".join(x.split()).lower())\ndata.pais = data.pais.apply(lambda x : \" \".join(x.split()).lower())\ndata.ciudad = data.ciudad.apply(lambda x : x.replace('lenox hill hospital', 'sd'))\ndata.ciudad = data.ciudad.apply(lambda x : x.replace('municipio de cheltenham', 'sd'))\ndata.ciudad = data.ciudad.apply(lambda x : x.replace('halsey', 'sd'))\ndata.ciudad = data.ciudad.apply(lambda x : x.replace('andrew taggart', 'sd'))\ndata.provincia = data.provincia.apply(lambda x : x.replace('fráncfort del meno', 'sd'))\ndata.provincia = data.provincia.apply(lambda x : x.replace('khalid', 'sd'))\ndata.provincia = data.provincia.apply(lambda x : x.replace('alex pall', 'sd'))\ndata.pais = data.pais.apply(lambda x : x.replace('benny blanco', 'sd'))\ndata.pais = data.pais.apply(lambda x : x.replace('rhett bixler', 'sd'))\ndata.pais = data.pais.apply(lambda x : x.replace('ee. uu.', 'estados unidos'))",
"_____no_output_____"
],
[
"data.ciudad.unique()",
"_____no_output_____"
],
[
"data.provincia.unique()",
"_____no_output_____"
],
[
"data.pais.unique()",
"_____no_output_____"
],
[
"print(len(data.ciudad.unique()), len(data.provincia.unique()), len(data.pais.unique()))",
"47 24 8\n"
],
[
"data.head(1)",
"_____no_output_____"
],
[
"artistas.head()",
"_____no_output_____"
],
[
"for x in range (0, len(artistas)):\n url = 'https://www.google.com/search?rlz=1C1GCEU_esAR821AR821&ei=BTbQXLS5C6ik5OUP76uX2A4&q='+artistas[x].split(' &')[0]+'+genero&oq='+artistas[x].split(' &')[0]+'+genero&gs_l=psy-ab.3..0j0i22i30l5.33988.35043..35208...0.0..0.144.765.4j3......0....1..gws-wiz.......0i67j0i131i67j0i131j0i22i10i30.g3gr1OFMcB8'\n responses = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Platform; Security; OS-or-CPU; Localization; rv:1.4) Gecko/20030624 Netscape/7.1 (ax)'})\n if responses.status_code == 200:\n soup = BeautifulSoup(responses.text, \"html.parser\")\n soup_string = str(soup)\n if 'Hip hop' in soup_string:\n data.loc[x, 'genero'] = 'Hip Hop'\n elif 'Rock' in soup_string:\n data.loc[x, 'genero'] = 'Rock'\n elif 'Pop' in soup_string:\n data.loc[x, 'genero'] = 'Pop'\n elif 'Country' in soup_string:\n data.loc[x, 'genero'] = 'Country'\n elif 'Rap' in soup_string:\n data.loc[x, 'genero'] = 'Rap'\n elif 'Future bass' in soup_string:\n data.loc[x, 'genero'] = 'Future bass' \n else:\n data.loc[x, 'genero'] = 'No Data'",
"_____no_output_____"
],
[
"data.head(2)",
"_____no_output_____"
],
[
"for x in range(0, len(data)):\n if data.fecha_nacimiento[x] != 'sd':\n data.loc[x, 'anio'] = data.fecha_nacimiento[x][-4:]\n else:\n data.loc[x, 'anio'] = 'sd'",
"_____no_output_____"
],
[
"data.shape",
"_____no_output_____"
],
[
"timeFinish += (time() - start_time)\nprint('Ending - time: ' + str(timedelta(seconds=timeFinish)))",
"Ending - time: 0:03:39.378000\n"
],
[
"data.to_excel('billBoardCompleto.xlsx')",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,5))\nax = sns.countplot('genero', data = data,\n palette= palette)\nplt.title('Total por genero musical', size=20)\nplt.ylabel('Total')\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,5))\nax = sns.countplot('pais', data = data,\n palette= palette)\nplt.title('Total artistas por pais', size=20)\nplt.ylabel('Total')\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,5))\nax = sns.countplot('anio', data = data,\n palette= palette,\n order = data['anio'].value_counts().index)\nplt.title('Fechas nacimiento agrupadas', size=20)\nplt.ylabel('Total')\nplt.show()",
"_____no_output_____"
],
[
"paises = data.pais.unique().tolist()\n\nfor x in range(0, len(paises)):\n plt.figure(figsize=(12,5))\n ax = sns.countplot('anio', data = data[data.pais == paises[x]],\n palette= palette,\n order = data['anio'][data.pais == paises[x]].value_counts().index)\n plt.title(paises[x], size=20)\n plt.ylabel('Total')\n plt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd92621a5f4cd4f0f32d08c27937f49ea8eb7fc | 549,947 | ipynb | Jupyter Notebook | Clase23agosto.ipynb | GiselaCS/Mujeres_Digitales | 1946448fa4915f99ba4a5a06908b47bf4ff06133 | [
"MIT"
] | 1 | 2021-08-13T19:11:50.000Z | 2021-08-13T19:11:50.000Z | Clase23agosto.ipynb | GiselaCS/Mujeres_Digitales | 1946448fa4915f99ba4a5a06908b47bf4ff06133 | [
"MIT"
] | null | null | null | Clase23agosto.ipynb | GiselaCS/Mujeres_Digitales | 1946448fa4915f99ba4a5a06908b47bf4ff06133 | [
"MIT"
] | null | null | null | 321.418469 | 65,026 | 0.918863 | [
[
[
"<a href=\"https://colab.research.google.com/github/GiselaCS/Mujeres_Digitales/blob/main/Clase23agosto.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"# Linear Algebra \n\n\n---\n\nFocuses on the study of matrices and vectors, which are used to represent linear equations.\n",
"_____no_output_____"
]
],
[
[
"# Matrices and vectors in Python\nimport numpy as np # library for numerical work\n# 3 x 3 matrix\nM = np.array([[1, 2, 3],[4, 5, 6],[7, 8, 9]])\n# this is a column vector\nv = np.array([[1],[2],[3]])\nprint(M)\nprint(v)",
"[[1 2 3]\n [4 5 6]\n [7 8 9]]\n[[1]\n [2]\n [3]]\n"
],
[
"print (M.shape) # returns the number of rows and columns\nprint (v.shape)\nv_single_dim = np.array([1, 2, 3]) # created with a single dimension\nprint (v_single_dim.shape) ",
"(3, 3)\n(3, 1)\n(3,)\n"
],
[
"# to add vectors, they must have the same dimensions\nprint(v+v)\nprint(3*v) ",
"[[2]\n [4]\n [6]]\n[[3]\n [6]\n [9]]\n"
],
[
"# Another way to create matrices\n# arrays can be stacked vertically, as seen in M\nv1 = np.array([1, 2, 3])\nv2 = np.array([4, 5, 6])\nv3 = np.array([7, 8, 9])\nM = np.vstack([v1, v2, v3])\nprint(M)",
"[[1 2 3]\n [4 5 6]\n [7 8 9]]\n"
],
[
"# Indexing matrices\nprint (M[:2, 1:3])\n",
"[[2 3]\n [5 6]]\n"
],
[
"# adding two NumPy vectors performs the usual element-wise addition\nv + v\n# adding two Python lists instead concatenates them, i.e. it repeats the values",
"_____no_output_____"
]
],
[
[
"**Dot product**\n\nOne of the operations that can be performed between vectors. \n\n(It can be used to obtain the angle formed between two vectors.) ",
"_____no_output_____"
],
[
"**Cross product**\n\nFinds a vector that is perpendicular to two other vectors.\n\n* When multiplying two matrices, the number of columns of the first must equal the number of rows of the second for them to be compatible; each entry of the product is then a sum of products. \n",
"_____no_output_____"
]
],
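The dot-product note above mentions recovering the angle between two vectors; a minimal NumPy sketch of that idea (the vectors here are illustrative):

```python
import numpy as np

# Two illustrative vectors; cos(theta) = (a . b) / (|a| |b|)
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
cos_theta = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
angle_deg = np.degrees(np.arccos(cos_theta))
print(angle_deg)  # 90.0 for perpendicular vectors
```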
[
[
"# dot product with the vector\nprint (M.dot(v)) \nprint (v.T.dot(v)) # .T transposes the vector so the dimensions line up\nv1=np.array([3,-3,1])\nv2=np.array([4,9,2])\n# cross product\nprint (np.cross(v1, v2, axisa=0, axisb=0).T) \nprint (np.multiply(M, v))\nprint (np.multiply(v, v))",
"[[14]\n [32]\n [50]]\n[[14]]\n[-15 -2 39]\n[[ 1 2 3]\n [ 8 10 12]\n [21 24 27]]\n[[1]\n [4]\n [9]]\n"
]
],
[
[
"**Transpose**\n\nSwaps the rows with the columns.",
"_____no_output_____"
]
],
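Two standard properties of the transpose can be checked directly in NumPy (the matrices here are illustrative): transposing twice returns the original matrix, and the transpose of a product reverses the factors.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# transposing twice gives back the original matrix
print(np.array_equal(A.T.T, A))                  # True
# (A B)^T equals B^T A^T
print(np.array_equal(A.dot(B).T, B.T.dot(A.T)))  # True
```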
[
[
"print(M.T)\nprint(v.T)",
"[[1 4 7]\n [2 5 8]\n [3 6 9]]\n[[1 2 3]]\n"
]
],
[
[
"**Determinant**\n\nA way of producing a single value from a matrix. \nIt multiplies elements along the diagonals.\n",
"_____no_output_____"
],
[
"**Inverse matrix**\n\nA matrix that, when multiplied by the original matrix, gives the identity matrix.\nThat is, the diagonal entries are 1 and all other entries are zero.",
"_____no_output_____"
]
],
[
[
"# example: create a matrix and compute its inverse and determinant\nv1 = np.array([3, 0, 2])\nv2 = np.array([2, 0, -2])\nv3 = np.array([0, 1, 1])\nM = np.vstack([v1, v2, v3])\nprint (np.linalg.inv(M))\nprint (np.linalg.det(M))",
"[[ 0.2 0.2 0. ]\n [-0.2 0.3 1. ]\n [ 0.2 -0.3 -0. ]]\n10.000000000000002\n"
]
],
[
[
"# Eigenvalues and eigenvectors\nAn eigenvalue λ and an eigenvector u satisfy\n\n**Au = λu**\n\nwhere A is a square matrix.\n\nRearranging the equation above gives the system\n\nAu - λu = (A - λI)u = 0\n\nwhich has a nontrivial solution if and only if det(A - λI) = 0.\n\n1. The eigenvalues are the roots of the characteristic polynomial given by that determinant.\n2. Substituting each eigenvalue back into\n\nAu = λu\n\nand solving yields the associated eigenvector.",
"_____no_output_____"
]
],
[
[
"v1 = np.array([0, 1])\nv2 = np.array([-2, -3])\nM = np.vstack([v1, v2])\neigvals, eigvecs= np.linalg.eig(M)\n# eigenvalues and eigenvectors are characteristic properties of a matrix\nprint(eigvals)\nprint(eigvecs)",
"[-1. -2.]\n[[ 0.70710678 -0.4472136 ]\n [-0.70710678 0.89442719]]\n"
]
],
[
[
"\n\n---\n\n\n# **Matplotlib**\n\n\n---\n\n\n",
"_____no_output_____"
],
[
"Matplotlib is a library for producing the common chart types (lines, bars, pies, bubbles, etc.).",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n# the inline backend keeps plots rendered inside the notebook\n# (a trailing comment on the magic line itself would raise a UsageError)\n%matplotlib inline",
"_____no_output_____"
],
[
"pk = pd.read_csv('pokemon_data.csv')\n",
"_____no_output_____"
],
[
"pk.head(3)",
"_____no_output_____"
],
[
"# random data with NumPy:\n# randint(0, 50) sets the value range, size sets the shape\nplt.plot(np.random.randint(0,50,size=(10,1)))",
"_____no_output_____"
],
[
"# assign the values for x, and likewise for y\nx = np.array([1,2,3,4]) \ny = np.array([1,2,3,4]) \nplt.plot(x,y)",
"_____no_output_____"
],
[
"x = np.array([1,2,3,4,5,6,7,8,9,10,11,12]) # these are the months\ny = np.array([1,3,2,4,3,2,5,9,6,10,11,13]) # this is the price\n# .plot draws the line chart\nplt.plot(x,y, label='Dollar vs Time') # label names the series\nplt.ylabel('Dollar to COP')\nplt.xlabel('Time')\nplt.legend(loc='lower right') # place the legend in the lower-right corner",
"_____no_output_____"
],
[
"# an exponential-type function:\n# define f(x) as the exponential of -x squared\ndef f(x):\n return np.exp(-x ** 2)\n\nx = np.linspace(-5,5, num=1000)\n\nplt.plot(x,f(x), label=\"f(x) Function\")\nplt.ylabel('f(x)')\nplt.xlabel('x')\nplt.legend()",
"_____no_output_____"
],
[
"# change the style so the line is black and dashed\nplt.plot(x,f(x), 'k--', label=\"f(x) Function\")\nplt.ylabel('f(x)')\nplt.xlabel('x')\nplt.legend()",
"_____no_output_____"
],
[
"# for example, in blue\nplt.plot(x,f(x), 'b--', label=\"f(x) Function\")\nplt.ylabel('f(x)')\nplt.xlabel('x')\nplt.legend()",
"_____no_output_____"
],
[
"x = np.linspace(-10,10, num=10)\nplt.plot(x,f(x), linestyle='--', marker='o', label=\"f(x) Function\")\nplt.ylabel('f(x)')\nplt.xlabel('x')\nplt.legend()",
"_____no_output_____"
],
[
"print(plt.style.available)",
"['Solarize_Light2', '_classic_test_patch', 'bmh', 'classic', 'dark_background', 'fast', 'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn', 'seaborn-bright', 'seaborn-colorblind', 'seaborn-dark', 'seaborn-dark-palette', 'seaborn-darkgrid', 'seaborn-deep', 'seaborn-muted', 'seaborn-notebook', 'seaborn-paper', 'seaborn-pastel', 'seaborn-poster', 'seaborn-talk', 'seaborn-ticks', 'seaborn-white', 'seaborn-whitegrid', 'tableau-colorblind10']\n"
],
[
"# dark-background style\nplt.style.use('dark_background')\n\nx = np.linspace(-10,10, num=10)\nplt.plot(x,f(x), linestyle='--', marker='.', label=\"f(x) Function\")\nplt.ylabel('f(x)')\nplt.xlabel('x')\nplt.legend()",
"_____no_output_____"
],
[
"# another available style\nplt.style.use('seaborn')\nx = np.array([0,1,2,4,5])\ny = np.array([1,2,7,4,3])\nplt.plot(x,y)",
"_____no_output_____"
],
[
"# random values each time this cell runs\nx = np.random.randint(0,10, size=(1,10))\ny = np.random.randint(0,10, size=(1,10))\nplt.plot(x[0],y[0], label='Random', linewidth = 3, color='blue')\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.legend()",
"_____no_output_____"
],
[
"# two lines on the same plot\nx = np.arange(0,10)\ny = np.random.randint(0,10, size=(1,10))\ny2 = np.random.randint(0,10, size=(1,10))\nplt.plot(x,y[0],label='Random', linewidth = 1, color='blue')\nplt.plot(x,y2[0], label='Random 2',linewidth= 1, color='grey')\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.legend()",
"_____no_output_____"
],
[
"# same idea, with thicker lines and a grid\nx = np.arange(0,10)\ny = np.random.randint(0,10, size=(1,10))\ny2 = np.random.randint(0,10, size=(1,10))\nplt.plot(x,y[0],label='Random', linewidth = 3, color='blue')\nplt.plot(x,y2[0], label='Random 2',linewidth= 3, color='grey')\nplt.ylabel(\"Y\")\nplt.xlabel(\"X\")\nplt.legend()\nplt.grid()",
"_____no_output_____"
],
[
"# bar chart\nx = np.linspace(1,30,10)\ny = np.random.randint(0,50, size=(1,10))\nplt.bar(x,y[0], width=3, label=\"RD\", color=\"lightblue\")\nplt.legend()",
"_____no_output_____"
],
[
"# bar chart of Pokemon attack stats\nplt.bar(pk['Name'][:3], pk['Attack'][:3], label='PKMN Attack', width=0.5, color='orange')\nplt.title(\"PKMN\")\nplt.legend()\nplt.xlabel(\"Names\")\nplt.ylabel(\"Attack\")",
"_____no_output_____"
],
[
"''' take the first 10 rows and plot the attack\nstat of the first 10 Pokemon '''\n\n",
"_____no_output_____"
],
[
"plt.bar(pk['Name'][:10], pk['Attack'][:10], label='PK Attack', width=0.2, color='green')\nplt.legend()\nplt.xlabel(\"Names\")\nplt.ylabel(\"Attack\")",
"_____no_output_____"
],
[
"max_at= pk['Attack'].max()\nmin_at= pk['Attack'].min()",
"_____no_output_____"
],
[
"plt.plot(pk.loc[:10, 'Attack'])\nplt.ylim(max_at, min_at)",
"_____no_output_____"
],
[
"plt.bar(pk['Name'][:10], pk['Defense'][:10], label='PK Defense', width=0.2, color='green')\nplt.legend()\nplt.xlabel(\"Names\")\nplt.ylabel(\"Defense\")",
"_____no_output_____"
],
[
"plt.barh(pk['Name'][:13], pk['Attack'][:13], label='PKMN Attack',color='orange')\nplt.title(\"PKMN\")\nplt.legend()\nplt.xlabel(\"Names\")\nplt.ylabel(\"Attack\")",
"_____no_output_____"
],
[
"width = 0.25\nplt.bar(pk['Name'][:13], pk['Attack'][:13], label='PKMN Atk', width=width, color='orange')\nplt.bar(pk['Name'][:13], pk['HP'][:13], label='PKMN HP', width=width, color='red')\nplt.title(\"PKMN\")\nplt.legend()\nplt.xlabel(\"Names\")\nplt.ylabel(\"Stats\")\nplt.xticks(rotation=90)",
"_____no_output_____"
],
[
"data = np.random.randint(0,50, size=(1,20))[0]\nbins = np.arange(0,101,10)\nplt.hist(data,bins, histtype='bar', rwidth=0.8, color=\"green\")\nplt.title('Histogram')",
"_____no_output_____"
],
[
"x1 = np.random.randint(0,100, size=(1,10))[0]\ny1 = np.random.randint(0,100, size=(1,10))[0]\nx2 = np.random.randint(0,100, size=(1,10))[0]\ny2 = np.random.randint(0,100, size=(1,10))[0]\n\nplt.scatter(x1,y1,label=\"Data 1\", color=\"green\")\nplt.scatter(x2,y2,label=\"Data 2\", color=\"purple\")\nplt.title(\"Scatter Plot\")\nplt.legend()",
"_____no_output_____"
],
[
"plt.scatter(pk['Name'][:13],pk['Defense'][:13],label=\"PKMN Def\", color=\"green\")\nplt.title(\"Scatter Plot\")\nplt.legend()\nplt.xticks(rotation=90)",
"_____no_output_____"
],
[
"#https://htmlcolorcodes.com/es/\npk['count'] = 1\nperc = pk.groupby(['Type 1']).count()['count']\n#colors= ['#4C212A','#01172F','#00635D','#08A4BD','#446DF6']\nplt.pie(perc.values, labels=perc.index, startangle=90, autopct='%1.1f%%', shadow=True, radius=1.5)\nplt.title(\"Pie Chart\")\nplt.savefig('pie_types.png')",
"_____no_output_____"
],
[
"data = pk.loc[0:9,'Speed']\nbins = 10\nplt.hist(data,bins, histtype='bar', rwidth=0.8, color=\"green\")\nplt.title('Histogram')",
"_____no_output_____"
],
[
"data = pk.loc[0:9,'Generation']\nbins = 10\nplt.hist(data,bins, histtype='bar', rwidth=0.8, color=\"green\")\nplt.title('Histogram')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
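The NumPy linear-algebra walkthrough in the notebook above (dot product, cross product, transpose, determinant, inverse, eigendecomposition) can be collected into one self-contained sketch that also checks the defining identities. The matrices reuse the notebook's own examples:

```python
import numpy as np

# Matrix from the notebook's inverse/determinant example
M = np.vstack([[3, 0, 2], [2, 0, -2], [0, 1, 1]])

det = np.linalg.det(M)    # determinant (10.0 for this matrix, up to round-off)
M_inv = np.linalg.inv(M)  # inverse: M @ M_inv equals the identity matrix

# Cross product of two 3-vectors is perpendicular to both inputs
v1 = np.array([3, -3, 1])
v2 = np.array([4, 9, 2])
c = np.cross(v1, v2)      # dot(c, v1) == dot(c, v2) == 0

# Eigendecomposition: A @ u == lam * u for each eigenpair (lam, u)
A = np.array([[0, 1], [-2, -3]])
eigvals, eigvecs = np.linalg.eig(A)
```

Checking these identities with `np.allclose` rather than exact equality guards against floating-point round-off (the notebook's own determinant prints as `10.000000000000002` rather than exactly `10`).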
ecd92928329ced67e3cba1a5ced126cc516d6fcf | 16,260 | ipynb | Jupyter Notebook | 01_Databases/Online_database_in_class.ipynb | bvoight/GCB535 | 9f688a39dd60687fc15fb75b9610e2fa406b4fb7 | [
"BSD-3-Clause"
] | 1 | 2021-01-11T15:09:56.000Z | 2021-01-11T15:09:56.000Z | 01_Databases/Online_database_in_class.ipynb | bvoight/GCB535 | 9f688a39dd60687fc15fb75b9610e2fa406b4fb7 | [
"BSD-3-Clause"
] | null | null | null | 01_Databases/Online_database_in_class.ipynb | bvoight/GCB535 | 9f688a39dd60687fc15fb75b9610e2fa406b4fb7 | [
"BSD-3-Clause"
] | null | null | null | 41.164557 | 875 | 0.668266 | [
[
[
"# Genomic Databases - In Class Exercises",
"_____no_output_____"
],
[
"Imagine that you are a grad student who has just begun work in a mouse lab. Your advisor, Dr. Stoker, works on a novel mouse phenotype, which has been dubbed Vampiric. Mice with this phenotype have the physiological feature of exceptionally sharp teeth and the behavioral feature of biting other mice. Dr. Stoker has developed a genetic mutagenesis screen for this phenotype using sunlight as a negative selector. In one strain of Vampiric mice, the mutation has been narrowed to an approximately 250-kilobase (kb) region of chromosome 12. Your first task is to investigate what is known about candidate genes in this region.",
"_____no_output_____"
],
[
"** Part I.\tExplore the basic functionality of the UCSC Genome Browser **",
"_____no_output_____"
],
[
"1. Go to http://genome.ucsc.edu in your favorite browser. Take a moment to read “Our story” and browse the “News” updates on the main page.\n2. Click the link at the top left that reads “Genomes” to access the genomes.\nYour queries should go in the text box under “position/search term”. Read through the “Sample Position Queries” further down the page for an explanation of how to search for a particular region.\n3. Choose a different species from the “genome” drop-down list.\nNote that the entire page changes.",
"_____no_output_____"
],
[
"** Part II.\tInvestigate the genes in Dr. Stoker’s candidate region.**",
"_____no_output_____"
],
[
"1.\tSearch for the region on chromosome 12 between bases 56,532,042 and 56,785,902 in the most recent assembly of the Mouse genome: choose \"Mouse GRCm38/mm10\" from the “genome” drop-down list. Type or copy-paste \"chr12:56,532,042-56,785,902\" in the \"search term\" cell. Click \"go\".\n2. You should be taken to a page with an image displaying tracks of annotation information for this region. Take a moment to explore this page. Controls for shifting the display region or zooming in/out are above the track image. Some tracks can be expanded or compressed by clicking on them. In other cases, individual annotations on tracks (such as genes) can be clicked for more details. Below the track image is a list of the available tracks for the genome assembly that you are viewing. You can control which tracks are shown by changing the drop-down boxes next to each track name and then clicking “refresh” (the button on the page, not on the browser toolbar). You can also guarantee that you are looking at the normal set of tracks by clicking the “default tracks” button. Base your answers to the following questions on information in the available tracks.\n",
"_____no_output_____"
],
[
"To answer the following 3 questions, scroll down to the “Genes and Gene Prediction Tracks” and set RefSeq Genes to “pack”; and GenScan Genes to “dense”; also scroll down to “mRNA and EST Tracks” and set Mouse mRNAs to “pack”; You may want to hide other tracks to simplify the view.",
"_____no_output_____"
],
[
"Q1.\tWhat RefSeq Mouse Genes are in this region?",
"_____no_output_____"
],
[
"RefSeq genes are a good place to start your search for candidate genes, however it is possible that the mutation for Vampiric is in an unknown gene. Note that the Mouse mRNA track has many more annotations than the RefSeq track. RefSeq is a manually curated non-redundant gene database, while GenBank is a larger database with potentially redundant experimental data. For each gene in RefSeq there is typically one or more corresponding mRNA in GenBank that aligns to the same position in the genome. However, some mRNAs in GenBank may be unconfirmed as real genes, and therefore do not have a corresponding RefSeq entry. Such mRNAs would be further candidates for your mutation.",
"_____no_output_____"
],
[
"Q2.\tName an example GenBank mRNA in this region that does not correspond to a RefSeq gene. (Hint: Zoom in to see which mRNAs overlap with the smaller genes.)",
"_____no_output_____"
],
[
"You should also consider the possibility that your mutation is in a gene that has never been characterized experimentally. GenScan is a computational tool for predicting genes, which we will discuss in more detail later in this course. For now, just take a look at the annotation track for genes predicted by GenScan. You should always take these predictions with a grain of salt, but they may be useful if you don’t get any interesting results from known genes or mRNAs.",
"_____no_output_____"
],
[
"Q3.\tHow many GenScan predicted genes are in this region?",
"_____no_output_____"
],
[
"** Part III.\tGet detailed information about one of the candidate genes, Pax9.**",
"_____no_output_____"
],
[
"Now it’s time to look at what is known about our candidate genes. The RefSeq genes will have the most useful information as they are often based on multiple experiments and have been validated in some way. Click on Pax9 in the RefSeq track on the browser. ",
"_____no_output_____"
],
[
"Q4.\tWhat is the RefSeq Accession Number for Pax9?",
"_____no_output_____"
],
[
"Q5.\tWhat is the transcript size (the number of base pairs in the entire transcribed mRNA, including introns and untranslated regions) of Pax9?",
"_____no_output_____"
],
[
"Q6.\t Is Pax9 on the forward or reverse strand of chromosome 12?",
"_____no_output_____"
],
[
"Under 'mRNA/Genomic Alignments', click the 'browser' link. \nThis will take you back to the track view, but will now be zoomed in on this specific gene. The intron/exon structure of this gene is clearly shown in the RefSeq track. The thickest lines represent exons, the medium lines represent untranslated regions, and the thin lines represent introns, with arrows indicating the direction of transcription.",
"_____no_output_____"
],
[
"Q7.\tHow many exons are in Pax9?",
"_____no_output_____"
],
[
"Return to the Pax9 information page, and click on the link embedded in the gene id under “Entrez Gene”.\nThis brings you to an NCBI page with more detailed information on this gene. The NCBI Entrez Gene database is linked to GenBank, PubMed, and many other useful databases. Scroll down to the section marked “Genomic regions, transcripts, and products”. You should see an image composed with a green colored line for genomic information, purple for mRNA information and red for protein information. If you only see green lines (genes), click on them and the green line will expand to purple line and red line. Reconfirm the transcript length and the number of exons in Pax9 by mousing over the purple line.",
"_____no_output_____"
],
[
"Q8.\tHow many nucleotides are there in the mature Pax9 mRNA (without introns)? If there are multiple isoforms, choose the one corresponding to the RefSeq transcript we looked at in the UCSC Browser. How many amino acids are there in the Pax9 protein? Do they follow the 3:1 ratio (as 3 mRNA nucleotides code for 1 amino acid)? If not, do you know which biological process causes the discrepancy?",
"_____no_output_____"
],
[
"If you hover over the purple line, you will see many options to display the genomic, mRNA, and protein information of Pax9 in different formats. If you select 'FASTA record,' you'll see the gene sequence corresponding to Pax9. ",
"_____no_output_____"
],
[
"Continue scrolling the NCBI gene page and investigating Pax9.\nNote that the GeneRIF, Gene Ontology, and Interactions, and Summary sections provide useful information on what is known about the function of this gene. Use the GeneRIF, Gene Ontology, and Interactions sections to answer the following questions about Pax9:",
"_____no_output_____"
],
[
"Q9.\tWhat evidence (if any) supports Pax9 as a likely mutant related to the Vampiric phenotype?",
"_____no_output_____"
],
[
"Q10.\tWhat other basic biological function (if any) does Pax9 have?",
"_____no_output_____"
],
[
"Q11.\tWhat genes or proteins (if any) does Pax9 interact with? (Hint: look under heading Interactions)",
"_____no_output_____"
],
[
"** Part IV.\tGet information on Pax9 in other (non-mouse) species. **",
"_____no_output_____"
],
[
"Imagine that you are able to confirm that Pax9 is in fact responsible for the Vampiric phenotype in Mouse. Now Dr. Stoker wants to try inducing this phenotype in other organisms. He asks you to find out which species have Pax9 genes in GenBank.\n1.\tGo to: http://www.ncbi.nlm.nih.gov/Genbank/\nRead the introductory information for GenBank. One common way to search GenBank is to submit a sequence as a BLAST search. However, we will cover BLAST and other sequence homology tools later on in the class. In the meantime, we will search the Gene database using keywords.\n2.\tGo to: http://www.ncbi.nlm.nih.gov/gene\nRead the “Help” section for extra hints on ways you can search.\n3.\tSearch for “Pax9” in the search bar at the top of the page.\nYou will notice that you get a list of many genes but not all of them are called Pax9. This is because you have searched all fields in the database and are also seeing genes related to Pax9. Narrow your search by clicking “Advanced” search and modifying the Builder to “Gene Name”. You can also accomplish the same thing by changing your search string to: Pax9[sym], 'sym’ stands for the gene symbol. Species are typically listed on the first line of the item summary in brackets (e.g., [Gallus gallus])",
"_____no_output_____"
],
[
"Dr. Stoker suggests trying to induce the Vampiric phenotype in human subjects but you point out that that would be against ethical guidelines. He agrees and suggests instead that you use the species Gallus gallus (Chicken).",
"_____no_output_____"
],
[
"Q12.\tWhat chromosome is Pax9 on in Chicken? ",
"_____no_output_____"
],
[
"Q13.\tIf you wanted to go back and view this region of the chicken genome in UCSC Genome Browser, what search string would you use?",
"_____no_output_____"
],
[
"** List of Genomic Databases **\n\nNCBI Entrez - http://www.ncbi.nlm.nih.gov/sites/gquery - huge database that encompasses other databases, including:\n- PubMed for Journal Articles - http://www.ncbi.nlm.nih.gov/pubmed/\n- GenBank for Raw Sequence - http://www.ncbi.nlm.nih.gov/genbank/\n- RefSeq for Non-Redundant Sequence - http://www.ncbi.nlm.nih.gov/RefSeq/\n- OMIM for Genetic Diseases - http://www.ncbi.nlm.nih.gov/omim?db=omim\n- dbSNP for Polymorphisms - http://www.ncbi.nlm.nih.gov/snp?db=snp\n- GEO for Gene Expression Data - http://www.ncbi.nlm.nih.gov/geo/\n\nExPASy - http://expasy.org/ - Another large database encompassing other databases:\n- Uniprot for Protein Sequence/Annotation - http://www.uniprot.org/\n- PROSITE for Protein Sequence Patterns - http://prosite.expasy.org/\n\nENSEMBL - http://useast.ensembl.org/index.html - An alternative to RefSeq and UniProt\nGenome Browser - http://genome.ucsc.edu/ - Track-based portal to databases of genomic sequence and annotations\nGeneCards - http://www.genecards.org/ - Gene-centered portal to information from many other databases\nENCODE - http://www.genome.gov/10005107 - Encyclopedia of DNA Elements\nHapMap - http://hapmap.ncbi.nlm.nih.gov/ - Database of human variation across populations\nGene Ontology (GO) - http://www.geneontology.org/ - Hierarchy of gene annotations\nMGED - http://www.mged.org/ - Database of gene expression/microarray results \n\nThis list is by no means complete, for more databases see the most recent Database Summary Paper Alpha List: http://www.oxfordjournals.org/nar/database/a/",
"_____no_output_____"
],
[
"# Homework exercise (**10 Points**)",
"_____no_output_____"
],
[
"Your colleague has just finished an extensive karyotyping study across samples from many different types of human cancers. She specifically looked for regions of the genome that have a statistically significant rate of chromosomal aberrations (including inversions, deletions, and translocations). She has asked you to help her analyze her results, starting with a region she identified on chromosome 6, ranging from base pairs 108,510,000 to 109,500,000 using NCBI build 36 (hg18). Use UCSC Genome Browser and/or other public databases to view information about known genes in this region.",
"_____no_output_____"
],
[
"Q1. What are the genes in this region? (3 points) ",
"_____no_output_____"
],
[
"Q2. Hypothesize which gene you think is the most likely candidate to be related to human cancers, and provide evidence from at least 3 different public databases. Be sure to include the URL to each database entry on which you base your answer. (7 points) \n[Hint: There is Phenotype and Disease Associations category on UCSC genome browser.]\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
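The UCSC position queries used throughout the notebook above (e.g. `chr12:56,532,042-56,785,902` or `chr6:108,510,000-109,500,000`) follow a simple `chrN:start-end` pattern. A minimal parser for that format — `parse_position` is a hypothetical helper written for illustration, not part of any UCSC or NCBI API — can be sketched as:

```python
import re

def parse_position(query: str):
    """Parse a UCSC-style position query like 'chr12:56,532,042-56,785,902'
    into a (chromosome, start, end) tuple with integer coordinates."""
    m = re.match(r'(chr\w+):([\d,]+)-([\d,]+)', query.strip())
    if not m:
        raise ValueError(f'not a position query: {query!r}')
    chrom = m.group(1)
    start = int(m.group(2).replace(',', ''))  # drop thousands separators
    end = int(m.group(3).replace(',', ''))
    return chrom, start, end

# e.g. parse_position('chr12:56,532,042-56,785,902')
# -> ('chr12', 56532042, 56785902)
```

A helper like this is handy when moving coordinates between the browser's search box and scripted lookups against the same assembly.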
ecd93a62b4f6c3ce715079676cf8bb2a133d4eea | 249,574 | ipynb | Jupyter Notebook | notebook/word2vec_examples.ipynb | ForsetiSss/thai2fit | f3560b89051ba468b1ccd486d0cc6db5e16af6ca | [
"MIT"
] | null | null | null | notebook/word2vec_examples.ipynb | ForsetiSss/thai2fit | f3560b89051ba468b1ccd486d0cc6db5e16af6ca | [
"MIT"
] | null | null | null | notebook/word2vec_examples.ipynb | ForsetiSss/thai2fit | f3560b89051ba468b1ccd486d0cc6db5e16af6ca | [
"MIT"
] | null | null | null | 182.972141 | 118,640 | 0.875905 | [
[
[
"# Thai2Vec Embeddings Examples\nThe `thai2vec.vec` contains 60,002 word embeddings of 400 dimensions, in descending order by their frequencies (See `thai2vec.vocab`). The files are in word2vec format readable by `gensim`. Most common applications include word vector visualization, word arithmetic, word grouping, cosine similarity and sentence or document vectors.",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nfrom pythainlp.tokenize import word_tokenize\nfrom gensim.models import KeyedVectors\nimport numpy as np\n\nfrom sklearn.manifold import TSNE\nimport matplotlib.pyplot as plt\nimport matplotlib.font_manager as fm\n\nimport dill as pickle\nimport pandas as pd\n\nDATA_PATH='../lm_data/'\nMODEL_PATH = f'{DATA_PATH}models/'\nMISC_PATH = f'{DATA_PATH}misc/'",
"_____no_output_____"
],
[
"#load into gensim\nmodel = KeyedVectors.load_word2vec_format(f'{MODEL_PATH}thai2vec.bin',binary=True)\n#create dataframe\nthai2dict = {}\nfor word in model.index2word:\n thai2dict[word] = model[word]\nthai2vec = pd.DataFrame.from_dict(thai2dict,orient='index')\nthai2vec.head(10)",
"_____no_output_____"
]
],
[
[
"Using t-SNE, we can compress the 400 dimensions of each word into a 2D plane and plot their relationships.",
"_____no_output_____"
]
],
[
[
"labels = model.index2word\n\n# #tnse\n# tsne = TSNE(n_components=2, init='pca', n_iter=1000)\n# thai2plot = tsne.fit_transform(thai2vec)\n# pickle.dump(thai2plot,open(f'{MODEL_PATH}thai2plot.pkl','wb'))\n\nthai2plot = pickle.load(open(f'{MODEL_PATH}thai2plot.pkl','rb'))",
"_____no_output_____"
],
[
"labels[:10]",
"_____no_output_____"
],
[
"#stolen from https://blog.manash.me/how-to-use-pre-trained-word-vectors-from-facebooks-fasttext-a71e6d55f27\ndef plot_with_labels(low_dim_embs, labels, filename, figsize=(10,10),\n axis_lims = None):\n assert low_dim_embs.shape[0] >= len(labels), \"More labels than embeddings\"\n plt.figure(figsize=figsize) # in inches\n for i, label in enumerate(labels):\n x, y = low_dim_embs[i, :]\n plt.scatter(x, y)\n prop = fm.FontProperties(fname=f'{MISC_PATH}THSarabunNew.ttf',size=20)\n plt.annotate(label,\n fontproperties=prop,\n xy=(x, y),\n xytext=(5, 2),\n textcoords='offset points',\n ha='right',\n va='bottom')\n if axis_lims is not None: plt.axis(axis_lims)\n plt.savefig(filename)\n \nplot_with_labels(thai2plot[200:500],labels[200:500],f'{MISC_PATH}random.png',axis_lims = [0,30,0,30])",
"_____no_output_____"
]
],
[
[
"## Word Arithmetic",
"_____no_output_____"
],
[
"You can do simple \"arithmetic\" with words based on the word vectors such as:\n* ผู้หญิง + พระราชา - ผู้ชาย = พระราชินี\n* นายกรัฐมนตรี - อำนาจ = ประธานาธิบดี\n* กิ้งก่า + โบราณ = ไดโนเสาร์",
"_____no_output_____"
]
],
[
[
"#word arithmetic\nmodel.most_similar_cosmul(positive=['พระราชา','ผู้หญิง'], negative=['ผู้ชาย'])",
"_____no_output_____"
],
[
"sample_words = ['ผู้หญิง','พระราชา','ผู้ชาย','พระราชินี']\nsample_idx = []\nfor word in sample_words:\n sample_idx.append(labels.index(word))\nsample_plot = thai2plot[sample_idx]\nplot_with_labels(sample_plot,sample_words,f'{MISC_PATH}word_arithematic1.png')",
"_____no_output_____"
],
[
"model.most_similar_cosmul(positive=['นายกรัฐมนตรี'],negative=['อำนาจ'])",
"_____no_output_____"
],
[
"sample_words = ['นายกรัฐมนตรี','อำนาจ','ประธานาธิบดี']\nsample_idx = []\nfor word in sample_words:\n sample_idx.append(labels.index(word))\nsample_plot = thai2plot[sample_idx]\nplot_with_labels(sample_plot,sample_words,f'{MISC_PATH}word_arithematic2.png')",
"_____no_output_____"
],
[
"#word arithmetic\nmodel.most_similar_cosmul(positive=['สัตว์','พืช'], negative=[])",
"_____no_output_____"
],
[
"sample_words = ['สัตว์','พืช','สิ่งมีชีวิต']\nsample_idx = []\nfor word in sample_words:\n sample_idx.append(labels.index(word))\nsample_plot = thai2plot[sample_idx]\nplot_with_labels(sample_plot,sample_words,f'{MISC_PATH}word_arithematic_baseball.png')",
"_____no_output_____"
]
],
[
[
"## Doesn't Match",
"_____no_output_____"
],
[
"It can also be used to do word groupings. For instance:\n* อาหารเช้า อาหารสัตว์ อาหารเย็น อาหารกลางวัน - อาหารสัตว์ is a type of food whereas the others are meals of the day\n* ลาก ดึง ดูด ดัน - ดัน is pushing while the rest are pulling.\n* กด กัด กิน เคี้ยว - กด is not a verb in the eating process\nNote that these groupings could rely on a different \"take\" than you would expect: the model may pick up on a different distinguishing feature than the one you have in mind.",
"_____no_output_____"
]
],
[
[
"model.doesnt_match(\"อาหารเช้า อาหารสัตว์ อาหารเย็น อาหารกลางวัน\".split())",
"_____no_output_____"
],
[
"sample_words = \"อาหารเช้า อาหารสัตว์ อาหารเย็น อาหารกลางวัน\".split()\nsample_idx = []\nfor word in sample_words:\n sample_idx.append(labels.index(word))\nsample_plot = thai2plot[sample_idx]\nplot_with_labels(sample_plot,sample_words,f'{MISC_PATH}doesnt_match1.png')",
"_____no_output_____"
],
[
"model.doesnt_match(\"ลาก ดึง ดูด ดัน\".split())",
"_____no_output_____"
],
[
"sample_words = \"ลาก ดึง ดูด ดัน\".split()\nsample_idx = []\nfor word in sample_words:\n sample_idx.append(labels.index(word))\nsample_plot = thai2plot[sample_idx]\nplot_with_labels(sample_plot,sample_words,f'{MISC_PATH}doesnt_match2.png')",
"_____no_output_____"
],
[
"model.doesnt_match(\"แมว หมา หมู หมอ\".split())",
"_____no_output_____"
],
[
"sample_words = \"แมว หมา หมู หมอ\".split()\nsample_idx = []\nfor word in sample_words:\n sample_idx.append(labels.index(word))\nsample_plot = thai2plot[sample_idx]\nplot_with_labels(sample_plot,sample_words,f'{MISC_PATH}doesnt_match3.png')",
"_____no_output_____"
]
],
[
[
"## Cosine Similarity",
"_____no_output_____"
]
],
[
[
"print('Country + its capital:', model.similarity('ปักกิ่ง', 'จีน'))\nprint('Country + its capital:', model.similarity('กรุง','อิตาลี'))\nprint('One capital and another:', model.similarity('โรม', 'ปักกิ่ง'))",
"Country + its capital: 0.9901805597638698\nCountry + its capital: 0.9903342338260549\nOne capital and another: 0.9924441170511737\n"
],
[
"sample_words = \"ปักกิ่ง จีน โรม อิตาลี โตเกียว ญี่ปุ่น\".split()\nsample_idx = []\nfor word in sample_words:\n sample_idx.append(labels.index(word))\nsample_plot = thai2plot[sample_idx]\nplot_with_labels(sample_plot,sample_words,f'{MISC_PATH}cosine_sim.png')",
"_____no_output_____"
]
],
[
[
"## Spellchecking",
"_____no_output_____"
],
[
"Originally contributed by [Sakares ATV](https://github.com/sakares), adapted from [Kaggle Spell Checker using Word2vec by CPMP](https://www.kaggle.com/cpmpml/spell-checker-using-word2vec).",
"_____no_output_____"
]
],
[
[
"words = model.index2word\n\nw_rank = {}\nfor i,word in enumerate(words):\n w_rank[word] = i\n\nWORDS = w_rank",
"_____no_output_____"
],
[
"thai_letters = 'กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤฤๅลฦฦๅวศษสหฬอฮะัาำิีึืุูเแโใไ็่้๊๋์'\n\ndef words(text): return re.findall(r'\\w+', text.lower())\n\ndef P(word): \n \"Probability of `word`.\"\n # use inverse of rank as proxy\n # returns 0 if the word isn't in the dictionary\n return - WORDS.get(word, 0)\n\ndef correction(word): \n \"Most probable spelling correction for word.\"\n return max(candidates(word), key=P)\n\ndef candidates(word): \n \"Generate possible spelling corrections for word.\"\n return (known([word]) or known(edits1(word)) or known(edits2(word)) or [word])\n\ndef known(words): \n \"The subset of `words` that appear in the dictionary of WORDS.\"\n return set(w for w in words if w in WORDS)\n\ndef edits1(word):\n \"All edits that are one edit away from `word`.\"\n letters = thai_letters\n splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]\n deletes = [L + R[1:] for L, R in splits if R]\n transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]\n replaces = [L + c + R[1:] for L, R in splits if R for c in letters]\n inserts = [L + c + R for L, R in splits for c in letters]\n return set(deletes + transposes + replaces + inserts)\n\ndef edits2(word): \n \"All edits that are two edits away from `word`.\"\n return (e2 for e1 in edits1(word) for e2 in edits1(e1))",
"_____no_output_____"
],
[
"correction('พัดนา')",
"_____no_output_____"
],
[
"correction('ขริง')",
"_____no_output_____"
],
[
"correction('จย้า')",
"_____no_output_____"
],
[
"correction('นะค่ะ')",
"_____no_output_____"
]
],
[
[
"## Sentence Vector",
"_____no_output_____"
],
[
"One of the most immediate use cases for thai2vec is using it to estimate a sentence vector for text classification.",
"_____no_output_____"
]
],
[
[
"def sentence_vectorizer(ss,model,dim=400,use_mean=True):\n s = word_tokenize(ss,engine='ulmfit')\n vec = np.zeros((1,dim))\n for word in s:\n if word == ' ': word = 'xxspace'\n if word == '\\n': word = 'xxeol' # fixed: assignment, not comparison\n if word in model.index2word:\n vec+= model.word_vec(word)\n else: pass\n if use_mean: vec /= len(s)\n return(vec)",
"_____no_output_____"
],
[
"ss = 'วันนี้ วันดีปีใหม่'\nsentence_vectorizer(ss,model,use_mean=True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
ecd94356109c64bd753383b8f6baa09eab87b352 | 67,028 | ipynb | Jupyter Notebook | module4-classification-metrics/LS_DS_224_assignment.ipynb | danoand/DS-Unit-2-Kaggle-Challenge | 5a746196b47af30a5242df218fa37bea8173a2b6 | [
"MIT"
] | null | null | null | module4-classification-metrics/LS_DS_224_assignment.ipynb | danoand/DS-Unit-2-Kaggle-Challenge | 5a746196b47af30a5242df218fa37bea8173a2b6 | [
"MIT"
] | null | null | null | module4-classification-metrics/LS_DS_224_assignment.ipynb | danoand/DS-Unit-2-Kaggle-Challenge | 5a746196b47af30a5242df218fa37bea8173a2b6 | [
"MIT"
] | null | null | null | 123.667897 | 23,446 | 0.821761 | [
[
[
"Lambda School Data Science\n\n*Unit 2, Sprint 2, Module 4*\n\n---",
"_____no_output_____"
],
[
"# Classification Metrics\n\n## Assignment\n- [x] If you haven't yet, [review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.\n- [x] Plot a confusion matrix for your Tanzania Waterpumps model.\n- [x] Continue to participate in our Kaggle challenge. Every student should have made at least one submission that scores at least 70% accuracy (well above the majority class baseline).\n- [x] Submit your final predictions to our Kaggle competition. Optionally, go to **My Submissions**, and _\"you may select up to 1 submission to be used to count towards your final leaderboard score.\"_\n- [x] Commit your notebook to your fork of the GitHub repo.\n- [x] Read [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), by Lambda DS3 student Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.\n\n\n## Stretch Goals\n\n### Reading\n\n- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _\"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. 
As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score.\"_\n- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)\n- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)\n\n\n### Doing\n- [ ] Share visualizations in our Slack channel!\n- [ ] RandomizedSearchCV / GridSearchCV, for model selection. (See module 3 assignment notebook)\n- [ ] Stacking Ensemble. (See module 3 assignment notebook)\n- [ ] More Categorical Encoding. (See module 2 assignment notebook)",
"_____no_output_____"
]
],
[
[
"%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'",
"_____no_output_____"
],
[
"import pandas as pd\n\n# Merge train_features.csv & train_labels.csv\ntrain = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), \n pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))\n\n# Read test_features.csv & sample_submission.csv\ntest = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')\nsample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')",
"_____no_output_____"
],
[
"import numpy as np\n\n# Wrangle the modeling data\n\n# indicate_missing is a function that returns a boolean value if the inbound data = 'MISSING'\n# - helps in creating a \"missing\" column \ndef indicate_missing(val):\n if val == 'MISSING':\n return True\n\n return False\n\n# boolean_missing is a function converting the permit boolean column to categorical data \ndef boolean_missing(val):\n if val == True:\n return 'TRUE'\n\n if val == False:\n return 'FALSE'\n\n return 'MISSING'\n\ndef wrangle(DF):\n X = DF.copy()\n\n # Replace near zero latitude values with zero\n X['latitude'] = X['latitude'].replace(-2e-08, 0)\n\n # Replace zero values with nan so we can impute values downstream\n cols_with_zeroes = ['longitude',\n 'latitude',\n 'construction_year',\n 'gps_height',\n 'population']\n for col in cols_with_zeroes:\n X[col] = X[col].replace(0, np.nan) # replace zeros with nans\n\n # Create columns for month and year recorded data\n X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)\n X['year_recorded'] = X['date_recorded'].dt.year\n X['month_recorded'] = X['date_recorded'].dt.month\n X['day_recorded'] = X['date_recorded'].dt.day\n\n # Create a column reflecting the number of years from construction to year recorded\n X['years'] = X['year_recorded'] - X['construction_year']\n X['years'] = X['years'].replace(0, np.nan) # replace zeros with nans\n\n # Replace missing boolean data with categorical data reflecting that missing data\n cols_boolean_missing = ['public_meeting', 'permit']\n for col in cols_boolean_missing:\n X[col+'_CATEGORICAL'] = X[col].apply(boolean_missing)\n\n # Replace missing categorical data with 'MISSING'\n cols_categorical_missing = ['funder', 'installer', 'scheme_name', 'scheme_management', 'subvillage']\n for col in cols_categorical_missing:\n X[col] = X[col].replace(np.nan, 'MISSING')\n\n # List columns to be dropped\n cols_drop = ['date_recorded', # date_recorded - using year_recorded and month_recorded 
instead\n 'quantity_group', # duplicate column\n 'payment_type', # duplicate column\n 'recorded_by', # data collection process column (not predictive)\n 'id', # data collection process column (not predictive)\n 'public_meeting', # replaced by categorical column: public_meeting_CATEGORICAL\n 'permit', # replaced by categorical column: permit_CATEGORICAL\n 'num_private', # 98% zeroes, unclear the purpose of this dat\n 'construction_year', # use 'years' as a proxy\n 'amount_tsh'] # highly skewed data\n # Also drop the columns we processed due to missing values\n cols_drop.extend(cols_boolean_missing)\n\n # Drop undesired columns\n X = X.drop(columns=cols_drop)\n\n return X",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\n\n# Split train into train & val. Make val the same size as test.\ntarget = 'status_group'\ntrain, val = train_test_split(train, test_size=len(test), \n stratify=train[target], random_state=42)",
"_____no_output_____"
],
[
"# Wrangle train, validate, and test sets in the same way\ndf_train = wrangle(train)\ndf_val = wrangle(val)\ndf_test = wrangle(test)\nprint(f'Training: {df_train.shape}, Validation: {df_val.shape}, Test: {df_test.shape}')",
"Training: (45042, 37), Validation: (14358, 37), Test: (14358, 36)\n"
],
[
"# Construct the X features matrix and y target vector\nX_train = df_train.drop(columns=target)\ny_train = df_train[target]\nX_val = df_val.drop(columns=target)\ny_val = df_val[target]\nX_test = df_test",
"_____no_output_____"
],
[
"import category_encoders as ce\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.impute import SimpleImputer\n\n# Configure a modeling pipelin\npipeline = make_pipeline(\n ce.OrdinalEncoder(),\n SimpleImputer(strategy='mean'),\n RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)\n)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\n\n# Fit the pipeline model on the training dataset\npipeline.fit(X_train, y_train)\n\n# Generate model predictions using the validation dataset\ny_pred = pipeline.predict(X_val)\nprint(f'Validation Accuracy: {round(accuracy_score(y_val, y_pred), 5)}')",
"Validation Accuracy: 0.81021\n"
],
[
"# Generate test dataset predictions\ny_pred_test = pipeline.predict(X_test)",
"_____no_output_____"
],
[
"# Construct dataframe housing the Kaggle submission dataset\ntmp_dict = {'id': list(test['id']), 'status_group': list(y_pred_test)}\ndf_submission = pd.DataFrame(tmp_dict)",
"_____no_output_____"
],
[
"# Create submission csv (download and submit to Kaggle)\ndf_submission.to_csv(\"submission_dfa.csv\", index=False)",
"_____no_output_____"
],
[
"!pip install scikit-plot",
"Collecting scikit-plot\n Downloading https://files.pythonhosted.org/packages/7c/47/32520e259340c140a4ad27c1b97050dd3254fdc517b1d59974d47037510e/scikit_plot-0.3.7-py3-none-any.whl\nRequirement already satisfied: scipy>=0.9 in /usr/local/lib/python3.6/dist-packages (from scikit-plot) (1.4.1)\nRequirement already satisfied: joblib>=0.10 in /usr/local/lib/python3.6/dist-packages (from scikit-plot) (0.14.1)\nRequirement already satisfied: scikit-learn>=0.18 in /usr/local/lib/python3.6/dist-packages (from scikit-plot) (0.22.1)\nRequirement already satisfied: matplotlib>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-plot) (3.1.2)\nRequirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from scipy>=0.9->scikit-plot) (1.17.5)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4.0->scikit-plot) (0.10.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4.0->scikit-plot) (2.6.1)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4.0->scikit-plot) (1.1.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=1.4.0->scikit-plot) (2.4.6)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib>=1.4.0->scikit-plot) (1.12.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib>=1.4.0->scikit-plot) (45.1.0)\nInstalling collected packages: scikit-plot\nSuccessfully installed scikit-plot-0.3.7\n"
],
[
"# Print out the confusion matrix\nfrom scikitplot.metrics import plot_confusion_matrix\n\nplot_confusion_matrix(y_val, y_pred,\n figsize=(8, 6),\n title=f'Confustion Matrix: N Obs={len(y_val)}',\n normalize=False)",
"_____no_output_____"
],
[
"plot_confusion_matrix(y_val, y_pred,\n figsize=(8, 6),\n title=f'Confustion Matrix: N Obs={len(y_val)}',\n normalize=True)",
"_____no_output_____"
],
[
"# Print out the classification report\nfrom sklearn.metrics import classification_report\n\nprint(classification_report(y_val, y_pred))",
" precision recall f1-score support\n\n functional 0.80 0.90 0.85 7798\nfunctional needs repair 0.56 0.31 0.40 1043\n non functional 0.85 0.78 0.81 5517\n\n accuracy 0.81 14358\n macro avg 0.74 0.66 0.69 14358\n weighted avg 0.80 0.81 0.80 14358\n\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd94aa4a44ee8513d0f2ffe64d1d75eb58ea9d7 | 15,824 | ipynb | Jupyter Notebook | site/en/tutorials/customization/basics.ipynb | khimraj/docs | 886e2ca605d9e1acc6522ae3ca7cfb5294563923 | [
"Apache-2.0"
] | 2 | 2020-03-18T10:08:38.000Z | 2020-03-18T10:08:40.000Z | site/en/tutorials/customization/basics.ipynb | khimraj/docs | 886e2ca605d9e1acc6522ae3ca7cfb5294563923 | [
"Apache-2.0"
] | 2 | 2020-03-21T20:21:57.000Z | 2020-03-21T20:22:11.000Z | site/en/tutorials/customization/basics.ipynb | khimraj/docs | 886e2ca605d9e1acc6522ae3ca7cfb5294563923 | [
"Apache-2.0"
] | 1 | 2020-03-21T02:54:17.000Z | 2020-03-21T02:54:17.000Z | 34.177106 | 627 | 0.537285 | [
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Customization basics: tensors and operations",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/customization/basics\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/customization/basics.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/customization/basics.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/customization/basics.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"This is an introductory TensorFlow tutorial that shows how to:\n\n* Import the required package\n* Create and use tensors\n* Use GPU acceleration\n* Demonstrate `tf.data.Dataset`",
"_____no_output_____"
]
],
[
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\ntry:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass\n",
"_____no_output_____"
]
],
[
[
"## Import TensorFlow\n\nTo get started, import the `tensorflow` module. As of TensorFlow 2, eager execution is turned on by default. This enables a more interactive frontend to TensorFlow, the details of which we will discuss much later.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
]
],
[
[
"## Tensors\n\nA Tensor is a multi-dimensional array. Similar to NumPy `ndarray` objects, `tf.Tensor` objects have a data type and a shape. Additionally, `tf.Tensor`s can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations ([tf.add](https://www.tensorflow.org/api_docs/python/tf/add), [tf.matmul](https://www.tensorflow.org/api_docs/python/tf/matmul), [tf.linalg.inv](https://www.tensorflow.org/api_docs/python/tf/linalg/inv) etc.) that consume and produce `tf.Tensor`s. These operations automatically convert native Python types, for example:\n",
"_____no_output_____"
]
],
[
[
"print(tf.add(1, 2))\nprint(tf.add([1, 2], [3, 4]))\nprint(tf.square(5))\nprint(tf.reduce_sum([1, 2, 3]))\n\n# Operator overloading is also supported\nprint(tf.square(2) + tf.square(3))",
"_____no_output_____"
]
],
[
[
"Each `tf.Tensor` has a shape and a datatype:",
"_____no_output_____"
]
],
[
[
"x = tf.matmul([[1]], [[2, 3]])\nprint(x)\nprint(x.shape)\nprint(x.dtype)",
"_____no_output_____"
]
],
[
[
"The most obvious differences between NumPy arrays and `tf.Tensor`s are:\n\n1. Tensors can be backed by accelerator memory (like GPU, TPU).\n2. Tensors are immutable.",
"_____no_output_____"
],
[
"### NumPy Compatibility\n\nConverting between a TensorFlow `tf.Tensor`s and a NumPy `ndarray` is easy:\n\n* TensorFlow operations automatically convert NumPy ndarrays to Tensors.\n* NumPy operations automatically convert Tensors to NumPy ndarrays.\n\nTensors are explicitly converted to NumPy ndarrays using their `.numpy()` method. These conversions are typically cheap since the array and `tf.Tensor` share the underlying memory representation, if possible. However, sharing the underlying representation isn't always possible since the `tf.Tensor` may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion involves a copy from GPU to host memory.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nndarray = np.ones([3, 3])\n\nprint(\"TensorFlow operations convert numpy arrays to Tensors automatically\")\ntensor = tf.multiply(ndarray, 42)\nprint(tensor)\n\n\nprint(\"And NumPy operations convert Tensors to numpy arrays automatically\")\nprint(np.add(tensor, 1))\n\nprint(\"The .numpy() method explicitly converts a Tensor to a numpy array\")\nprint(tensor.numpy())",
"_____no_output_____"
]
],
[
[
"## GPU acceleration\n\nMany TensorFlow operations are accelerated using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation—copying the tensor between CPU and GPU memory, if necessary. Tensors produced by an operation are typically backed by the memory of the device on which the operation executed, for example:",
"_____no_output_____"
]
],
[
[
"x = tf.random.uniform([3, 3])\n\nprint(\"Is there a GPU available: \"),\nprint(tf.config.experimental.list_physical_devices(\"GPU\"))\n\nprint(\"Is the Tensor on GPU #0: \"),\nprint(x.device.endswith('GPU:0'))",
"_____no_output_____"
]
],
[
[
"### Device Names\n\nThe `Tensor.device` property provides a fully qualified string name of the device hosting the contents of the tensor. This name encodes many details, such as an identifier of the network address of the host on which this program is executing and the device within that host. This is required for distributed execution of a TensorFlow program. The string ends with `GPU:<N>` if the tensor is placed on the `N`-th GPU on the host.",
"_____no_output_____"
],
[
"\n\n### Explicit Device Placement\n\nIn TensorFlow, *placement* refers to how individual operations are assigned (placed on) a device for execution. As mentioned, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation and copies tensors to that device, if needed. However, TensorFlow operations can be explicitly placed on specific devices using the `tf.device` context manager, for example:",
"_____no_output_____"
]
],
[
[
"import time\n\ndef time_matmul(x):\n start = time.time()\n for loop in range(10):\n tf.matmul(x, x)\n\n result = time.time()-start\n\n print(\"10 loops: {:0.2f}ms\".format(1000*result))\n\n# Force execution on CPU\nprint(\"On CPU:\")\nwith tf.device(\"CPU:0\"):\n x = tf.random.uniform([1000, 1000])\n assert x.device.endswith(\"CPU:0\")\n time_matmul(x)\n\n# Force execution on GPU #0 if available\nif tf.config.experimental.list_physical_devices(\"GPU\"):\n print(\"On GPU:\")\n with tf.device(\"GPU:0\"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.\n x = tf.random.uniform([1000, 1000])\n assert x.device.endswith(\"GPU:0\")\n time_matmul(x)",
"_____no_output_____"
]
],
[
[
"## Datasets\n\nThis section uses the [`tf.data.Dataset` API](https://www.tensorflow.org/guide/datasets) to build a pipeline for feeding data to your model. The `tf.data.Dataset` API is used to build performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.",
"_____no_output_____"
],
[
"### Create a source `Dataset`\n\nCreate a *source* dataset using one of the factory functions like [`Dataset.from_tensors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensors), [`Dataset.from_tensor_slices`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices), or using objects that read from files like [`TextLineDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) or [`TFRecordDataset`](https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset). See the [TensorFlow Dataset guide](https://www.tensorflow.org/guide/datasets#reading_input_data) for more information.",
"_____no_output_____"
]
],
[
[
"ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])\n\n# Create a CSV file\nimport tempfile\n_, filename = tempfile.mkstemp()\n\nwith open(filename, 'w') as f:\n f.write(\"\"\"Line 1\nLine 2\nLine 3\n \"\"\")\n\nds_file = tf.data.TextLineDataset(filename)",
"_____no_output_____"
]
],
[
[
"### Apply transformations\n\nUse the transformations functions like [`map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map), [`batch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch), and [`shuffle`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) to apply transformations to dataset records.",
"_____no_output_____"
]
],
[
[
"ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)\n\nds_file = ds_file.batch(2)",
"_____no_output_____"
]
],
[
[
"### Iterate\n\n`tf.data.Dataset` objects support iteration to loop over records:",
"_____no_output_____"
]
],
[
[
"print('Elements of ds_tensors:')\nfor x in ds_tensors:\n print(x)\n\nprint('\\nElements in ds_file:')\nfor x in ds_file:\n print(x)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd94cbddb9ce03c7b558bede8e9dd4101f954dc | 104,086 | ipynb | Jupyter Notebook | Python and Numpy Tutorial/knn classifier.ipynb | aishikchakraborty/cs60010 | 266e2de6cfd3725063d85210348e223da1386112 | [
"MIT"
] | 3 | 2018-01-04T12:08:14.000Z | 2020-02-13T19:10:55.000Z | Python and Numpy Tutorial/knn classifier.ipynb | aishikchakraborty/cs60010 | 266e2de6cfd3725063d85210348e223da1386112 | [
"MIT"
] | null | null | null | Python and Numpy Tutorial/knn classifier.ipynb | aishikchakraborty/cs60010 | 266e2de6cfd3725063d85210348e223da1386112 | [
"MIT"
] | null | null | null | 531.05102 | 51,268 | 0.941279 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nfrom sklearn.model_selection import train_test_split\n%matplotlib inline\n\n# Fixing random state for reproducibility\nnp.random.seed(100)",
"_____no_output_____"
],
[
"N = 100 # number of points per class\nD = 2 # dimensionality\nK = 3 # number of classes\nX = np.zeros((N*K,D)) # data matrix (each row = single example)\ny = np.zeros(N*K, dtype='uint8') # class labels\nfor j in range(K):\n ix = range(N*j,N*(j+1))\n X[ix] = np.random.normal(0.0 + 2*j, 1.0, N*2).reshape(N, 2) #Sample 200 points and then reshape (100, 2)\n y[ix] = j\n# lets visualize the data:\n\ncmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])\ncmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])\n\nplt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=cmap_bold, edgecolor='k')\nplt.show()\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=100)\n",
"_____no_output_____"
],
[
"class kNN(object):\n def __init__(self):\n pass #no weights to initialize\n def train(self, X, y):\n \"\"\" X is N x D where each row is an example. Y is 1-dimension of size N \"\"\"\n # the nearest neighbor classifier simply remembers all the training data\n self.Xtr = X\n self.ytr = y\n def predict(self, X):\n \"\"\" X is N x D where each row is an example we wish to predict label for \"\"\"\n num_test = X.shape[0]\n # lets make sure that the output type matches the input type\n Ypred = np.zeros(num_test, dtype = self.ytr.dtype)\n \n # loop over all test rows\n for i in range(num_test):\n # find the nearest training image to the i'th test image\n # using the L1 distance (sum of absolute value differences)\n distances = np.sum(np.abs(self.Xtr - X[i,:]), axis = 1)\n min_index = np.argmin(distances) # get the index with smallest distance\n Ypred[i] = self.ytr[min_index] # predict the label of the nearest example\n\n return Ypred",
"_____no_output_____"
],
[
"nn = kNN()\nnn.train(X_train, y_train)\ny_pred = nn.predict(X_test)\nacc = np.mean(y_pred == y_test)\nprint('Accuracy :', acc*100.)\n\n# Plot the decision boundary. For that, we will assign a color to each\n# point in the mesh [x_min, x_max]x[y_min, y_max].\ncmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])\ncmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])\n\nh = .02 # step size in the mesh\n\nx_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1\ny_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1\n\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n\n# print(np.arange(x_min, x_max, h))\nprint(xx)\n# Put the result into a color plot\n# print(xx.ravel())\nZ = nn.predict(np.c_[xx.ravel(), yy.ravel()])\nZ = Z.reshape(xx.shape)\nplt.figure()\nplt.pcolormesh(xx, yy, Z, cmap=cmap_light)\n\n# Plot also the training points\nplt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,\n edgecolor='k', s=40)\nplt.xlim(xx.min(), xx.max())\nplt.ylim(yy.min(), yy.max())\nplt.show()",
"Accuracy : 84.84848484848484\n[[-3.97331547 -3.95331547 -3.93331547 ... 8.80668453 8.82668453\n 8.84668453]\n [-3.97331547 -3.95331547 -3.93331547 ... 8.80668453 8.82668453\n 8.84668453]\n [-3.97331547 -3.95331547 -3.93331547 ... 8.80668453 8.82668453\n 8.84668453]\n ...\n [-3.97331547 -3.95331547 -3.93331547 ... 8.80668453 8.82668453\n 8.84668453]\n [-3.97331547 -3.95331547 -3.93331547 ... 8.80668453 8.82668453\n 8.84668453]\n [-3.97331547 -3.95331547 -3.93331547 ... 8.80668453 8.82668453\n 8.84668453]]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
ecd951e605cc83d8abab367321aabee53dfe6661 | 7,220 | ipynb | Jupyter Notebook | LinearRegression_MultipleVariable/Linear Regression Multiple Variables.ipynb | Aniket762/ML-Algos | 8a3c054278d8f0d99ad0a04c93c52e1701ff458f | [
"MIT"
] | null | null | null | LinearRegression_MultipleVariable/Linear Regression Multiple Variables.ipynb | Aniket762/ML-Algos | 8a3c054278d8f0d99ad0a04c93c52e1701ff458f | [
"MIT"
] | null | null | null | LinearRegression_MultipleVariable/Linear Regression Multiple Variables.ipynb | Aniket762/ML-Algos | 8a3c054278d8f0d99ad0a04c93c52e1701ff458f | [
"MIT"
] | null | null | null | 23.672131 | 58 | 0.355402 | [
[
[
"import pandas as pd\nimport numpy as np\nfrom sklearn import linear_model",
"_____no_output_____"
],
[
"df = pd.read_csv(\"homeprices.csv\")\ndf",
"_____no_output_____"
],
[
"import math\nmedian_bedroom = math.floor(df.bedrooms.median())\nmedian_bedroom",
"_____no_output_____"
],
[
"df.bedrooms = df.bedrooms.fillna(median_bedroom)\ndf",
"_____no_output_____"
],
[
"reg = linear_model.LinearRegression()\nreg.fit(df[['area','bedrooms','age']],df.price)",
"_____no_output_____"
],
[
"reg.predict([[2500,4,5]])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd95901af150826cd1f0dce787617a8d875fd34 | 3,660 | ipynb | Jupyter Notebook | src/Part2/Quiz15.ipynb | Drogon1573/PyChallenge-Tips | bc6e6ffd45ed9098af50c5258458452b0098efe1 | [
"MIT"
] | 2 | 2020-01-12T10:32:03.000Z | 2021-10-21T08:25:36.000Z | src/Part2/Quiz15.ipynb | Drogon1573/PyChallenge-Tips | bc6e6ffd45ed9098af50c5258458452b0098efe1 | [
"MIT"
] | null | null | null | src/Part2/Quiz15.ipynb | Drogon1573/PyChallenge-Tips | bc6e6ffd45ed9098af50c5258458452b0098efe1 | [
"MIT"
] | null | null | null | 22.875 | 211 | 0.509836 | [
[
[
"# Whom?\n\n[](https://github.com/Dragon1573/PyChallenge-Tips/blob/master/LICENSE)\n[](http://www.pythonchallenge.com/pc/return/uzi.html)\n\n<center><img src=\"../../resources/imgs/Quiz15-1.png\" /></center>",
"_____no_output_____"
],
[
"  通过关卡图片,我们可以获得以下重要信息:\n\n- 当年是1xx6年\n- 当年1月26日为周一\n- 放大图片右下角,二月有29天,因此当年是闰年\n\n  获取关卡源代码,进一步分析线索。",
"_____no_output_____"
]
],
[
[
"from requests import post\nfrom bs4 import BeautifulSoup as Soup",
"_____no_output_____"
],
[
"response = post(\n 'http://www.pythonchallenge.com/pc/return/uzi.html',\n headers={'Authorization': 'Basic aHVnZTpmaWxl'}\n)\nresponse = Soup(response.text, features='html.parser')\nprint(response.prettify())",
"<html>\n <head>\n <title>\n whom?\n </title>\n <link href=\"../style.css\" rel=\"stylesheet\" type=\"text/css\"/>\n </head>\n <body>\n <br/>\n <center>\n <!-- he ain't the youngest, he is the second -->\n <img src=\"screen15.jpg\"/>\n <br/>\n </center>\n </body>\n</html>\n<!-- todo: buy flowers for tomorrow -->\n\n"
]
],
[
[
"  `<center />`标签中有两句注释:\n\n1. 他不是年龄最小的,他是第2小的\n2. 待办事项:为明天买花\n\n  它们也是关键提示。因为符合以上3个条件的年份可能不止1个,还需要通过这个提示进一步筛选。",
"_____no_output_____"
]
],
[
[
"import calendar\nimport datetime",
"_____no_output_____"
],
[
"years = []\nfor year in range(1006, 1997, 10):\n # 获取日期\n date = datetime.datetime(year, 1, 27)\n # 闰年和星期判断\n if calendar.isleap(year) and date.weekday() == 1:\n years.append(year)\n# 逆序排列\nyears.sort(reverse=True)\n# 获得第2大(年龄第2小)的值\nprint(years[1])",
"1756\n"
]
],
[
[
"  公元1756年1月27日,世界著名音乐大师沃尔夫冈·阿玛多伊斯·**莫扎特**出生,因此下一关的链接为<http://www.pythonchallenge.com/pc/return/mozart.html>。",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecd968f883189af10bc3fb6b42c4e5b08f1d3b7b | 11,494 | ipynb | Jupyter Notebook | notebook_22mars.ipynb | clement-plancq/outils-corpus | 3979ea5db6693485a73fc027e45c87af52678416 | [
"MIT"
] | 3 | 2019-03-18T12:18:52.000Z | 2021-11-15T08:08:18.000Z | notebook_22mars.ipynb | clement-plancq/outils-corpus | 3979ea5db6693485a73fc027e45c87af52678416 | [
"MIT"
] | null | null | null | notebook_22mars.ipynb | clement-plancq/outils-corpus | 3979ea5db6693485a73fc027e45c87af52678416 | [
"MIT"
] | 2 | 2020-02-26T07:27:00.000Z | 2020-04-07T20:53:13.000Z | 30.010444 | 2,140 | 0.472159 | [
[
[
"import requests, json\nurl = \"http://apps.lattice.cnrs.fr/readab/json/\"\nparam = dict(url='https://www.nytimes.com/2021/03/21/obituaries/nawal-el-saadawi-dead.html')\nresp = requests.get(url=url, params=param)\ndata = json.loads(resp.content)\ntext = data[\"text\"]",
"_____no_output_____"
],
[
"text",
"_____no_output_____"
],
[
"data[\"title\"]",
"_____no_output_____"
],
[
"import spacy\n\nnlp = spacy.load(\"en_core_web_sm\")",
"_____no_output_____"
],
[
"from collections import Counter\n\ncounter = Counter()\ndoc = nlp(text)\nfor tok in doc:\n if tok.pos_ in ['PUNCT', 'ADP', 'AUX', 'DET', 'SPACE', 'PRON', 'CCONJ', 'PART']:\n continue\n counter[tok.text] += 1\n\ncounter.most_common()",
"_____no_output_____"
],
[
"for ent in doc.ents:\n print(ent.text, ent.label_)",
"Saadawi PERSON\n1,500 CARDINAL\nSadat PERSON\nOctober 1981 DATE\nthree months later DATE\nArabic LANGUAGE\nMemoirs From the Women’s Prison WORK_OF_ART\n1983 DATE\nfirst ORDINAL\nSaadawi PERSON\nEnglish LANGUAGE\nThe Hidden Face of Eve: Women in the Arab World WORK_OF_ART\nthe United States GPE\n1982 DATE\nBeacon Press ORG\nVivian Gornick PERSON\nThe New York Times Book Review ORG\nAmerican NORP\nMarxist NORP\nThe Hidden Face of Eve WORK_OF_ART\nfirst ORDINAL\nSaadawi PERSON\nEnglish LANGUAGE\nFour years later DATE\nSaadawi PERSON\nGod Dies WORK_OF_ART\nNile LOC\nIndian NORP\nAmerican NORP\nBharati Mukherjee PERSON\nAmerican NORP\nMubarak PERSON\nSadat’s ORG\nSaadawi PERSON\nIslamist NORP\nSaudi Arabia GPE\nDuke University ORG\n1993 to 1996 DATE\nSaadawi PERSON\ntwo CARDINAL\nEgypt GPE\nMubarak PERSON\n2004 DATE\n80s DATE\nGuardian ORG\n2015 DATE\nSaadawi PERSON\n"
],
[
"tokens = [tok.text for tok in doc]\nbigrams = [b for b in zip(tokens[:-1], tokens[1:])]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd97356297124f50cac5e41c639a69ce10f9038 | 26,308 | ipynb | Jupyter Notebook | Appendix-D-HInfinity-Filters.ipynb | yangzongsheng/kalman | 0e43c1b9aef4242bf794fd5086319b1db496d057 | [
"CC-BY-4.0"
] | 1 | 2021-08-16T02:07:09.000Z | 2021-08-16T02:07:09.000Z | Appendix-D-HInfinity-Filters.ipynb | zhpfu/Kalman-and-Bayesian-Filters-in-Python | 0e43c1b9aef4242bf794fd5086319b1db496d057 | [
"CC-BY-4.0"
] | null | null | null | Appendix-D-HInfinity-Filters.ipynb | zhpfu/Kalman-and-Bayesian-Filters-in-Python | 0e43c1b9aef4242bf794fd5086319b1db496d057 | [
"CC-BY-4.0"
] | 2 | 2021-02-10T18:36:32.000Z | 2022-01-02T02:17:23.000Z | 71.295393 | 14,776 | 0.719439 | [
[
[
"[Table of Contents](./table_of_contents.ipynb)",
"_____no_output_____"
],
[
"# H Infinity filter",
"_____no_output_____"
]
],
[
[
"#format the book\n%matplotlib inline\nfrom __future__ import division, print_function\nfrom book_format import load_style\nload_style()",
"_____no_output_____"
]
],
[
[
"I am still mulling over how to write this chapter. In the meantime, Professor Dan Simon at Cleveland State University has an accessible introduction here:\n\nhttp://academic.csuohio.edu/simond/courses/eec641/hinfinity.pdf\n\nIn one sentence the $H_\\infty$ (H infinity) filter is like a Kalman filter, but it is robust in the face of non-Gaussian, non-predictable inputs.\n\n\nMy FilterPy library contains an H-Infinity filter. I've pasted some test code below which implements the filter designed by Simon in the article above. Hope it helps.",
"_____no_output_____"
]
],
[
[
"from __future__ import (absolute_import, division, print_function,\n unicode_literals)\n\nfrom numpy import array\nimport matplotlib.pyplot as plt\n\nfrom filterpy.hinfinity import HInfinityFilter\n\ndt = 0.1\nf = HInfinityFilter(2, 1, dim_u=1, gamma=.01)\n\nf.F = array([[1., dt],\n [0., 1.]])\n\nf.H = array([[0., 1.]])\nf.G = array([[dt**2 / 2, dt]]).T\n\nf.P = 0.01\nf.W = array([[0.0003, 0.005],\n [0.0050, 0.100]])/ 1000 #process noise\n\nf.V = 0.01\nf.Q = 0.01\nu = 1. #acceleration of 1 f/sec**2\n\nxs = []\nvs = []\n\nfor i in range(1,40):\n f.update (5)\n #print(f.x.T)\n xs.append(f.x[0,0])\n vs.append(f.x[1,0])\n f.predict(u=u)\n\nplt.subplot(211)\nplt.plot(xs)\nplt.title('position')\nplt.subplot(212)\nplt.plot(vs) \nplt.title('velocity');",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecd978253c545532ec51a145273a07d2fa513410 | 6,480 | ipynb | Jupyter Notebook | SustainabilityOnMars/AmalTrack/ChallengeTemplate/challenge-template.ipynb | BryceHaley/hackathon | 47de43b626b429dff9983add201a6bbdc6c974b2 | [
"CC-BY-4.0"
] | 3 | 2019-12-23T14:27:17.000Z | 2020-10-16T23:00:06.000Z | SustainabilityOnMars/AmalTrack/ChallengeTemplate/challenge-template.ipynb | BryceHaley/hackathon | 47de43b626b429dff9983add201a6bbdc6c974b2 | [
"CC-BY-4.0"
] | 22 | 2019-12-11T16:58:11.000Z | 2021-02-25T05:42:07.000Z | SustainabilityOnMars/AmalTrack/ChallengeTemplate/challenge-template.ipynb | BryceHaley/hackathon | 47de43b626b429dff9983add201a6bbdc6c974b2 | [
"CC-BY-4.0"
] | 6 | 2019-11-07T22:14:41.000Z | 2021-03-16T04:26:14.000Z | 33.230769 | 437 | 0.608642 | [
[
[
"### \n\n<a href=\"https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fhackathon&branch=master&subPath=SustainabilityOnMars/AmalTrack/ChallengeTemplate/challenge-template.ipynb&depth=1\" target=\"_parent\"><img src=\"https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true\" width=\"123\" height=\"24\" alt=\"Open in Callysto\"/></a>",
"_____no_output_____"
],
[
"# *Sustaining Life on Mars: Data Science Challenge*.\n\nYou’re a data scientist on a team of newly-arrived humans. While you were on Earth, you figured out how you could make the planet habitable. From growing food to clothing needs, you need to start building the framework for sustaining life on the red planet. \n\nUse data to answer questions such as:\n\n1. What food do we need to bring?\n    e.g. trees, seeds, genetically-modified foods\n    \n2. How do we feed people there?\n    Consider: supply, manage, distribute, connect\n\n3. What are essential key resources? \n    e.g. Electricity, oxygen, water, fuel, brick, plastics, steel, food. \n\n4. How do we decide who will go?\n    e.g. population proportions, demographics, health, qualifications, genetic diversity\n\n5. What forms of entertainment would people need? \n    e.g. music, books, pets, lego\n\n6. What machines do we need? \n    e.g. cars, ships, fighter jets, rockets, computers, mobile phones. \n    \n#### Choose one or more of these questions to answer, or come up with your own. Check out the example notebooks, and complete the sections in this notebook to answer your chosen question or questions.",
"_____no_output_____"
],
[
"### Section I: About Me\n\nDouble-click this cell and tell us:\n\n✏️\n\nFor example\n\n 1. My name: Not-my Name\n 2. My email address: [email protected]\n 3. Why I picked this challenge: \n 4. The questions I picked: ",
"_____no_output_____"
],
[
"### Section II: The data I used\n\nPlease provide the following information: (Double-click to edit this cell)\n\n✏️ \n1. Name of dataset\n2. Link to dataset\n3. Why I picked the dataset\n\nIf you picked multiple datasets, separate them using commas \",\"",
"_____no_output_____"
]
],
[
[
"# Use this cell to import libraries\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport cufflinks as cf",
"_____no_output_____"
],
[
"# Use this cell to read the data - use the tutorials if you are not sure how to do this\n# Double-click to edit this cell. \n\n\n",
"_____no_output_____"
]
],
[
[
"### Section III: Data Analysis and Visualization\n\nUse as many code cells as you need - remember to add a title, as well as appropriate x and y labels to your visualizations. \n\nMake sure you write down what things you notice in the data such as trends, patterns, and basic statistics.\n\nUse the code cell below to start adding your code for data analysis and visualization",
"_____no_output_____"
],
[
"### 👨🏽💻 Provide your code to explore and analyse your data",
"_____no_output_____"
]
],
[
[
"# Double-click this cell and provide your code here. Use as many code cells as you need to analyze and visualize your data. \n# Remember to add a title, as well as appropriate x and y labels to your visualizations\n\n",
"_____no_output_____"
]
],
[
[
"### Observations\nDouble-click this cell and write down at least 2 - 3 things or more you observed in your data through your analysis and visualizations.\n\n✏️ ",
"_____no_output_____"
],
[
"### Section IV: Conclusion\n\nIt is crucial that you connect what you learned via the dataset to the main question(s) you are asking. \n\nUse this space to propose a solution to the question you picked. Make sure it is clear in your answer what area of development you chose to focus on and your proposed solution based on the dataset(s) you worked on. \n\nSee our example notebooks for some inspiration on questions and solutions that you can develop using data.",
"_____no_output_____"
],
[
"Provide your analysis-driven result(s) here. Write down 2 - 3 things you learned from the data and how they help answer your main question(s). Also write down 2 - 3 things you learned from participating in this hackathon. \nDouble-click to edit this cell. \n\n✏️ ",
"_____no_output_____"
],
[
"### [](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
ecd9789ce9dfaead36c98df31fd59fbacf6dc185 | 32,505 | ipynb | Jupyter Notebook | notebooks/figures/sample.ipynb | nksaunders/NexTeX-sample | cbaf62cb8752ea08e97b20cef70d111a45e12615 | [
"MIT"
] | null | null | null | notebooks/figures/sample.ipynb | nksaunders/NexTeX-sample | cbaf62cb8752ea08e97b20cef70d111a45e12615 | [
"MIT"
] | null | null | null | notebooks/figures/sample.ipynb | nksaunders/NexTeX-sample | cbaf62cb8752ea08e97b20cef70d111a45e12615 | [
"MIT"
] | null | null | null | 485.149254 | 31,328 | 0.947485 | [
[
[
"import lightkurve as lk\nimport matplotlib.pyplot as plt\nimport os",
"_____no_output_____"
],
[
"lc = lk.search_lightcurvefile('trappist-1')[0].download().PDCSAP_FLUX",
"_____no_output_____"
],
[
"lc.scatter()\nplt.savefig('sample.pdf')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
ecd97b9e43560d8fcf6297e27613b1aa9c966fa6 | 4,907 | ipynb | Jupyter Notebook | archived/programming/python/Python_Basics_Data_Structure.ipynb | yennanliu/Python_basics | 6a597442d39468295946cefbfb11d08f61424dc3 | [
"Unlicense"
] | 18 | 2019-08-01T07:45:02.000Z | 2022-03-31T18:05:44.000Z | archived/programming/python/Python_Basics_Data_Structure.ipynb | yennanliu/Python_basics | 6a597442d39468295946cefbfb11d08f61424dc3 | [
"Unlicense"
] | null | null | null | archived/programming/python/Python_Basics_Data_Structure.ipynb | yennanliu/Python_basics | 6a597442d39468295946cefbfb11d08f61424dc3 | [
"Unlicense"
] | 15 | 2019-12-29T08:46:20.000Z | 2022-03-08T14:14:05.000Z | 20.445833 | 79 | 0.425107 | [
[
[
"### References\n\n- http://allenchien.logdown.com/posts/419722-python-study-notes-4-list\n- https://docs.python.org.tw/3/tutorial/datastructures.html",
"_____no_output_____"
],
[
"## 1) Queue",
"_____no_output_____"
]
],
[
[
"from collections import deque\ndef list_deque():\n queue = deque([\"Eric\", \"John\", \"Michael\"])\n queue.append(\"Terry\")\n queue.append(\"Graham\")\n print(queue)\n print(queue.popleft())\n print(queue.popleft())\n print(queue)\n",
"_____no_output_____"
],
[
"list_deque()",
"deque(['Eric', 'John', 'Michael', 'Terry', 'Graham'])\nEric\nJohn\ndeque(['Michael', 'Terry', 'Graham'])\n"
]
],
[
[
"## 2) del",
"_____no_output_____"
]
],
[
[
"def list_del():\n a = [-1, 1, 66.25, 333, 333, 1234.5]\n del a[0]\n print(a)\n del a[2:4]\n print(a)\n del a[:]\n print(a)\n del a # referencing a after this (e.g. print(a)) raises NameError\n",
"_____no_output_____"
],
[
"list_del()",
"[1, 66.25, 333, 333, 1234.5]\n[1, 66.25, 1234.5]\n[]\n"
]
],
[
[
"## 3) Set",
"_____no_output_____"
]
],
[
[
"def set_test():\n basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}\n print ('type of basket', type(basket))\n print ('')\n print(basket)\n print('orange' in basket)\n print('crabgrass' in basket)\n a = set('abracadabra')\n b = set('alacazam')\n print(a)\n c = a - b # in a but not in b \n\n print(c)\n c = a | b # in a or b \n\n print(c)\n c = a & b # both in a and b \n\n print(c)\n c = a ^ b # in either a or b, but not both in a and b \n\n print(c)",
"_____no_output_____"
],
[
"set_test()",
"type of basket <class 'set'>\n\n{'pear', 'banana', 'orange', 'apple'}\nTrue\nFalse\n{'r', 'd', 'c', 'a', 'b'}\n{'r', 'd', 'b'}\n{'r', 'z', 'd', 'c', 'm', 'l', 'a', 'b'}\n{'c', 'a'}\n{'r', 'd', 'm', 'l', 'b', 'z'}\n"
]
],
[
[
"## 4) Dict",
"_____no_output_____"
]
],
[
[
"def dict_test():\n tel = {'jack': 4098, 'sape': 4139}\n tel['guido'] = 4127\n print(tel)\n print(tel['jack'])\n del tel['sape']\n tel['irv'] = 4127\n print(tel)\n print(tel.keys())\n tel_sort = sorted(tel.keys())\n print(tel_sort)\n\n",
"_____no_output_____"
],
[
"dict_test()",
"{'jack': 4098, 'sape': 4139, 'guido': 4127}\n4098\n{'jack': 4098, 'irv': 4127, 'guido': 4127}\ndict_keys(['jack', 'irv', 'guido'])\n['guido', 'irv', 'jack']\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecd986b49e426cc473f9b14d9949d0ad02cb579b | 203,632 | ipynb | Jupyter Notebook | notebooks/MinimumJerkHypothesis.ipynb | regifukuchi/BMC | 9983c94ba0aa8e3660f08ab06fb98e38d7b22f0a | [
"CC-BY-4.0"
] | 293 | 2015-01-17T12:36:30.000Z | 2022-02-13T13:13:12.000Z | notebooks/MinimumJerkHypothesis.ipynb | regifukuchi/BMC | 9983c94ba0aa8e3660f08ab06fb98e38d7b22f0a | [
"CC-BY-4.0"
] | 11 | 2018-06-21T21:40:40.000Z | 2018-08-09T19:55:26.000Z | notebooks/MinimumJerkHypothesis.ipynb | regifukuchi/BMC | 9983c94ba0aa8e3660f08ab06fb98e38d7b22f0a | [
"CC-BY-4.0"
] | 162 | 2015-01-16T22:54:31.000Z | 2022-02-14T21:14:43.000Z | 327.382637 | 68,648 | 0.926073 | [
[
[
"# The minimum jerk hypothesis\n\n> Marcos Duarte \n> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/)) \n> Federal University of ABC, Brazil",
"_____no_output_____"
],
[
"Hogan and Flash (1984, 1985), based on observations of voluntary movements in primates, suggested that movements are performed (organized) with the smoothest trajectory possible. In this organizing principle, the endpoint trajectory is such that the mean squared jerk across time of this movement is minimum. \n\nJerk is the derivative of acceleration, and the observation of the minimum-jerk trajectory is for the endpoint in extracorporeal coordinates (not for joint angles). According to Flash and Hogan (1985), the minimum-jerk trajectory of a planar movement is the one that minimizes the following objective function:\n\n$$ C=\\frac{1}{2} \\int\\limits_{t_{i}}^{t_{f}}\\;\\left[\\left(\\frac{d^{3}x}{dt^{3}}\\right)^2+\\left(\\frac{d^{3}y}{dt^{3}}\\right)^2\\right]\\:\\mathrm{d}t $$\n\nHogan (1984) found that the solution for this objective function is a fifth-order polynomial trajectory (see Shadmehr and Wise (2004) for a simpler proof): \n\n$$ \\begin{array}{l l}\nx(t) = a_0+a_1t+a_2t^2+a_3t^3+a_4t^4+a_5t^5 \\\\\\\\\ny(t) = b_0+b_1t+b_2t^2+b_3t^3+b_4t^4+b_5t^5\n\\end{array} $$\n\nWith the following boundary conditions for $ x(t) $ and $ y(t) $: initial and final positions are $ (x_i,y_i) $ and $ (x_f,y_f) $ and initial and final velocities and accelerations are zero.\n\nLet's employ [Sympy](http://sympy.org/en/index.html) to find the solution for the minimum jerk trajectory using symbolic algebra.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.display import display, Math, Latex\nfrom sympy import symbols, Matrix, latex, Eq, collect, solve, diff, simplify\nfrom sympy.utilities.lambdify import lambdify",
"_____no_output_____"
]
],
[
[
"Using Sympy, the equation for the minimum jerk trajectory for x is:",
"_____no_output_____"
]
],
[
[
"# declare the symbolic variables\nx, xi, xf, y, yi, yf, d, t = symbols('x, x_i, x_f, y, y_i, y_f, d, t')\na0, a1, a2, a3, a4, a5 = symbols('a_0:6')\nx = a0 + a1*t + a2*t**2 + a3*t**3 + a4*t**4 + a5*t**5\ndisplay(Math(latex('x(t)=') + latex(x)))",
"_____no_output_____"
]
],
[
[
"Without loss of generality, consider $ t_i=0 $ and let's use $ d $ for movement duration ($ d=t_f $). The system of equations with the boundary conditions for $ x $ is:",
"_____no_output_____"
]
],
[
[
"# define the system of equations\ns = Matrix([Eq(x.subs(t,0) , xi),\n Eq(diff(x,t,1).subs(t,0), 0),\n Eq(diff(x,t,2).subs(t,0), 0),\n Eq(x.subs(t,d) , xf),\n Eq(diff(x,t,1).subs(t,d), 0),\n Eq(diff(x,t,2).subs(t,d), 0)])\ndisplay(Math(latex(s, mat_str='matrix', mat_delim='[')))",
"_____no_output_____"
]
],
[
[
"Which gives the following solution:",
"_____no_output_____"
]
],
[
[
"# algebraically solve the system of equations\nsol = solve(s, [a0, a1, a2, a3, a4, a5])\ndisplay(Math(latex(sol)))",
"_____no_output_____"
]
],
[
[
"Substituting this solution in the fifth order polynomial trajectory equation, we have the actual displacement trajectories:",
"_____no_output_____"
]
],
[
[
"# substitute the equation parameters by the solution\nx2 = x.subs(sol)\nx2 = collect(simplify(x2, ratio=1), xf-xi)\ndisplay(Math(latex('x(t)=') + latex(x2)))\ny2 = x2.subs([(xi, yi), (xf, yf)])\ndisplay(Math(latex('y(t)=') + latex(y2)))",
"_____no_output_____"
]
],
[
[
"And for the velocity, acceleration, and jerk trajectories in x:",
"_____no_output_____"
]
],
[
[
"# symbolic differentiation\nvx = x2.diff(t, 1)\ndisplay(Math(latex('v_x(t)=') + latex(vx)))\nax = x2.diff(t, 2)\ndisplay(Math(latex('a_x(t)=') + latex(ax)))\njx = x2.diff(t, 3)\ndisplay(Math(latex('j_x(t)=') + latex(jx)))",
"_____no_output_____"
]
],
[
[
"Let's plot the minimum jerk trajectory for x and its velocity, acceleration, and jerk considering $x_i=0,x_f=1,d=1$:",
"_____no_output_____"
]
],
[
[
"# substitute by the numerical values\nx3 = x2.subs([(xi, 0), (xf, 1), (d, 1)])\n#create functions for calculation of numerical values\nxfu = lambdify(t, diff(x3, t, 0), 'numpy')\nvfu = lambdify(t, diff(x3, t, 1), 'numpy')\nafu = lambdify(t, diff(x3, t, 2), 'numpy')\njfu = lambdify(t, diff(x3, t, 3), 'numpy')\n#plots using matplotlib\nts = np.arange(0, 1.01, .01)\nfig, axs = plt.subplots(1, 4, figsize=(12, 5), sharex=True, squeeze=True)\naxs[0].plot(ts, xfu(ts), linewidth=3)\naxs[0].set_title('Displacement [$\\mathrm{m}$]')\naxs[1].plot(ts, vfu(ts), linewidth=3)\naxs[1].set_title('Velocity [$\\mathrm{m/s}$]')\naxs[2].plot(ts, afu(ts), linewidth=3)\naxs[2].set_title('Acceleration [$\\mathrm{m/s^2}$]')\naxs[3].plot(ts, jfu(ts), linewidth=3)\naxs[3].set_title('Jerk [$\\mathrm{m/s^3}$]')\n\nfor axi in axs:\n axi.set_xlabel('Time [s]', fontsize=14)\n axi.grid(True)\n\nfig.suptitle('Minimum jerk trajectory kinematics', fontsize=20, y=1.03)\nfig.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Note that for the minimum jerk trajectory, initial and final values of both velocity and acceleration are zero, but not for the jerk. \n\nRead more about the minimum jerk trajectory hypothesis on [Shadmehr and Wise's book companion site](http://www.shadmehrlab.org/book/minimum_jerk/minimumjerk.htm) and on [Paul Gribble's website](http://www.gribblelab.org/compneuro/4_Computational_Motor_Control_Kinematics.html#sec-5-1).",
"_____no_output_____"
],
[
"### The angular trajectory of a minimum jerk trajectory \n\nLet's calculate the resulting angular trajectory given a minimum jerk linear trajectory, supposing it is from a circular motion of an elbow flexion. The length of the forearm is 0.5 m, the movement duration is 1 s, the elbow starts flexed at 90$^o$ and then flexes to 180$^o$.\n\nFirst, the linear trajectories for this circular motion:",
"_____no_output_____"
]
],
[
[
"# substitute by the numerical values\nx3 = x2.subs([(xi, 0.5), (xf, 0), (d, 1)])\ny3 = x2.subs([(xi, 0), (xf, 0.5), (d, 1)])\n# note: labels now match the variables (x3 is x(t), y3 is y(t))\ndisplay(Math(latex('x(t)=') + latex(x3)))\ndisplay(Math(latex('y(t)=') + latex(y3)))\n#create functions for calculation of numerical values\nxfux = lambdify(t, diff(x3, t, 0), 'numpy')\nvfux = lambdify(t, diff(x3, t, 1), 'numpy')\nafux = lambdify(t, diff(x3, t, 2), 'numpy')\njfux = lambdify(t, diff(x3, t, 3), 'numpy')\nxfuy = lambdify(t, diff(y3, t, 0), 'numpy')\nvfuy = lambdify(t, diff(y3, t, 1), 'numpy')\nafuy = lambdify(t, diff(y3, t, 2), 'numpy')\njfuy = lambdify(t, diff(y3, t, 3), 'numpy')",
"_____no_output_____"
],
[
"#plots using matplotlib\nts = np.arange(0, 1.01, .01)\nfig, axs = plt.subplots(1, 4, figsize=(12, 5), sharex=True, squeeze=True)\naxs[0].plot(ts, xfux(ts), 'b', linewidth=3)\naxs[0].plot(ts, xfuy(ts), 'r', linewidth=3)\naxs[0].set_title('Displacement [$\\mathrm{m}$]')\naxs[1].plot(ts, vfux(ts), 'b', linewidth=3)\naxs[1].plot(ts, vfuy(ts), 'r', linewidth=3)\naxs[1].set_title('Velocity [$\\mathrm{m/s}$]')\naxs[2].plot(ts, afux(ts), 'b', linewidth=3)\naxs[2].plot(ts, afuy(ts), 'r', linewidth=3)\naxs[2].set_title('Acceleration [$\\mathrm{m/s^2}$]')\naxs[3].plot(ts, jfux(ts), 'b', linewidth=3)\naxs[3].plot(ts, jfuy(ts), 'r', linewidth=3)\naxs[3].set_title('Jerk [$\\mathrm{m/s^3}$]')\n\nfor axi in axs:\n axi.set_xlabel('Time [s]', fontsize=14)\n axi.grid(True)\n\nfig.suptitle('Minimum jerk trajectory kinematics', fontsize=20, y=1.03)\nfig.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Now, the angular trajectories for this circular motion:",
"_____no_output_____"
]
],
[
[
"from sympy import atan2\nang = atan2(y3, x3)*180/np.pi\ndisplay(Math(latex('angle(t)=') + latex(ang)))\nxang = lambdify(t, diff(ang, t, 0), 'numpy')\nvang = lambdify(t, diff(ang, t, 1), 'numpy')\naang = lambdify(t, diff(ang, t, 2), 'numpy')\njang = lambdify(t, diff(ang, t, 3), 'numpy')",
"_____no_output_____"
],
[
"ts = np.arange(0, 1.01, .01)\nfig, axs = plt.subplots(1, 4, figsize=(12, 5), sharex=True, squeeze=True)\naxs[0].plot(ts, xang(ts), linewidth=3)\naxs[0].set_title('Angle [$\\mathrm{deg}$]')\naxs[1].plot(ts, vang(ts), linewidth=3)\naxs[1].set_title('Angular velocity [$\\mathrm{deg/s}$]')\naxs[2].plot(ts, aang(ts), linewidth=3)\naxs[2].set_title('Angular acceleration [$\\mathrm{deg/s^2}$]')\naxs[3].plot(ts, jang(ts), linewidth=3)\naxs[3].set_title('Angular jerk [$\\mathrm{deg/s^3}$]')\n\nfor axi in axs:\n    axi.set_xlabel('Time [s]', fontsize=14)\n    axi.grid(True)\n\nfig.suptitle('Minimum jerk trajectory angular kinematics', fontsize=20, y=1.03)\nfig.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Problems\n\n1. What is your opinion on the minimum jerk hypothesis? Do you think humans control movement based on this principle? (Think about what biomechanical and neurophysiological properties are not considered in this hypothesis.)\n2. Calculate and plot the position, velocity, acceleration, and jerk trajectories for different movement speeds (for example, always consider a displacement of 1 m and movement durations of 0.5, 1, and 2 s). \n3. For the data in the previous item, calculate the ratio of peak speed to average speed. Shadmehr and Wise (2004) argue that psychophysical experiments show that reaching movements with the hand have this ratio equal to 1.75. Compare with the calculated values. \n4. Can you propose alternative hypotheses for the control of movement? ",
"_____no_output_____"
],
[
"## References\n\n- Flash T, Hogan N (1985) [The coordination of arm movements: an experimentally confirmed mathematical model](http://www.jneurosci.org/cgi/reprint/5/7/1688.pdf). Journal of Neuroscience, 5, 1688-1703. \n- Hogan N (1984) [An organizing principle for a class of voluntary movements](http://www.jneurosci.org/content/4/11/2745.full.pdf). Journal of Neuroscience, 4, 2745-2754.\n- Shadmehr R, Wise S (2004) [The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning](http://www.shadmehrlab.org/book/). A Bradford Book. [Companion site](http://www.shadmehrlab.org/book/).\n- Zatsiorsky VM (1998) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
ecd98b46f9422415b4881fe1e0f96337df4cee1f | 510,985 | ipynb | Jupyter Notebook | Zillow_Housing.ipynb | josh-grasso/NYC_Residential_Real_Estate | e1626a1dcb54b0eb377d6e728085cf4ddef05db3 | [
"MIT"
] | 4 | 2021-04-06T17:01:13.000Z | 2021-12-18T13:18:59.000Z | Zillow_Housing.ipynb | josh-grasso/NYC_Residential_Real_Estate | e1626a1dcb54b0eb377d6e728085cf4ddef05db3 | [
"MIT"
] | null | null | null | Zillow_Housing.ipynb | josh-grasso/NYC_Residential_Real_Estate | e1626a1dcb54b0eb377d6e728085cf4ddef05db3 | [
"MIT"
] | 1 | 2021-04-15T15:29:27.000Z | 2021-04-15T15:29:27.000Z | 177.548645 | 228,780 | 0.851187 | [
[
[
"## Zillow Single Family Home Values for NYC Neighborhoods:",
"_____no_output_____"
],
[
"# Buying a Home in NYC: What Neighborhoods are the Best Value?\n### Applying Data Science Tools to Understand NYC's Residential Real Estate Fundamentals\n\n Josh Grasso | [email protected]\n\nThis project seeks to understand the fundamental factors that explain differences in residential real estate prices across NYC. ",
"_____no_output_____"
],
[
"### Neighborhood-level Zillow Home Value Index (ZHVI) for Single-Family Homes (SFH)\nZillow has graciously made several datasets available, one of which is the Zillow Home Value Index (ZHVI) for Single-Family Homes (SFH), which is used in this project. The dataset is a monthly time series, going back as far as 1996 in some cases, with detail down to the “neighborhood” level. \n\nThe neighborhood definitions used throughout this project are those defined by the NYC Department of City Planning, in which there are 306 neighborhoods in NYC’s 5 boroughs. The Zillow SFH data has a data series for 235 neighborhoods, which were all mapped to the neighborhood definitions used in this analysis. Further, the Zillow data has 424 total neighborhoods within the “New York-Newark-Jersey City” metro area – providing an additional 189 neighborhoods in surrounding areas across the Hudson River in New Jersey, on Long Island, and north into Westchester County - for possible further future analysis. Throughout the entire US, Zillow provides a data series for over 16k neighborhoods – an impressive level of granularity that speaks to the new capabilities of big data. \n\nMy initial inspiration for the project came from exploring NYC real estate on the Zillow app – so it’s great to have access to their huge dataset, even if it’s not at the hyper-granular level of each individual listing. The NYC Department of Finance single-family residence data provides a good complement to the Zillow data – since the NYC DoF dataset is at the transaction level. \n\nZillow describes this dataset as being built on top of estimates for over 100mm homes in the US, including new construction homes and/or homes that have not traded on the open market in many years. The data is an index, with the most recent, present-day value of the time series being defined as the “typical home value” for the property universe, and the value of the index going back in time being engineered to reflect “the market’s total appreciation. In other words, the ZHVI appreciation can now be viewed as the theoretical financial return that could be gained from buying all homes in a given subset (by geography and/or home type) in one period and selling them in the next period.”\n\nTo exactly match the NYC DoF data and analysis, I restrict my initial focus on the Zillow data to the time period from 2005 to 2019. However, a deeper historic analysis is possible with 178 of the neighborhoods having data going all the way back to January 1996. Further, the average price for each neighborhood during the full 2005 to 2019 period is used as the average price in the analysis/regression. This convention was used in the NYC DoF dataset, given the sparsity of transactions in some neighborhoods across certain years and, in some cases, across all years. Thus, the convention was carried over to the Zillow data, for uniformity. Finally, the average annual growth in prices was calculated by fitting a linear regression to the full, monthly dataset for each individual neighborhood; and using that monthly increase (slope: best-fit, $/month for 2005 to 2019) to calculate an annual percentage increase vs. the average sales price for the neighborhood during the period. This will be used to build a \"momentum\" metric for each neighborhood - to be used alongside a measure of \"value\" in determining which neighborhoods look most compelling from an investment perspective. In the analysis in the main notebook, we will compare the Zillow data to the NYC DoF data. ",
"_____no_output_____"
],
[
"### Resources: \n* https://www.zillow.com/research/data/\n* Zillow API: https://documenter.getpostman.com/view/9197254/SzRuZCCj?version=latest\n* Single Family & Condo/Co-op: https://www.zillow.com/new-york-ny/home-values/",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport requests\nimport json\n\nimport plotly.express as px\nfrom IPython.display import Image\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n#plt.style.use('seaborn') \nsns.set()\n\nfrom datetime import datetime\ntoday = datetime.now()\nmonth,day,year = today.month,today.day,today.year",
"_____no_output_____"
],
[
"from pathlib import Path\nhome_path = Path.home() / 'Jupyter' / 'Real_Estate' # / 'Zillow'",
"_____no_output_____"
],
[
"# Zillow Home Value Index (ZHVI)\n# Source: https://www.zillow.com/research/data/\n\n# Transition to Zillow API: \n# Source: https://documenter.getpostman.com/view/9197254/SzRuZCCj?version=latest\n\n# \"ZHVI All Homes (SFR, Condo/Co-op) Time Series, Smoothed, Seasonally Adjusted($)\"\n# \"ZHVI All Homes (SFR, Condo/Co-op) Time Series, Raw, Mid-Tier ($)\"\n# \"ZHVI All Homes- Top Tier Time Series ($)\"\n# \"ZHVI All Homes- Bottom Tier Time Series ($)\"\n# \"ZHVI Single-Family Homes Time Series ($)\"\n\n# \"ZHVI Condo/Co-op Time Series ($)\"\n# \"ZHVI 1-Bedroom Time Series ($)\"\n# \"ZHVI 2-Bedroom Time Series ($)\"\n# \"ZHVI 3-Bedroom Time Series ($)\"\n# \"ZHVI 4-Bedroom Time Series ($)\"\n# \"ZHVI 5+ Bedroom Time Series ($)\"\n\nZHVI_SFR_Smoothed_Neighborhood_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Neighborhood_zhvi_uc_sfrcondo_tier_0.33_0.67_sm_sa_mon.csv'\nZHVI_SFR_Raw_Metro_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Metro_zhvi_uc_sfrcondo_tier_0.33_0.67_raw_mon.csv' \nZHVI_SFR_Top_City_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/City_zhvi_uc_sfrcondo_tier_0.67_1.0_sm_sa_mon.csv'\nZHVI_SFR_Bottom_City_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/City_zhvi_uc_sfrcondo_tier_0.0_0.33_sm_sa_mon.csv'\nZHVI_SFR_Neighborhood_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Neighborhood_zhvi_uc_sfr_sm_sa_mon.csv'\n\nZHVI_Condo_Coop_Neighborhood_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Neighborhood_zhvi_uc_condo_tier_0.33_0.67_sm_sa_mon.csv'\nZHVI_1Br_Neighborhood_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Neighborhood_zhvi_bdrmcnt_1_uc_sfrcondo_tier_0.33_0.67_sm_sa_mon.csv'\nZHVI_2Br_Neighborhood_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Neighborhood_zhvi_bdrmcnt_2_uc_sfrcondo_tier_0.33_0.67_sm_sa_mon.csv'\nZHVI_3Br_Neighborhood_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Neighborhood_zhvi_bdrmcnt_3_uc_sfrcondo_tier_0.33_0.67_sm_sa_mon.csv'\nZHVI_4Br_Neighborhood_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Neighborhood_zhvi_bdrmcnt_4_uc_sfrcondo_tier_0.33_0.67_sm_sa_mon.csv'\nZHVI_5Br_Neighborhood_url = 'https://files.zillowstatic.com/research/public_v2/zhvi/Neighborhood_zhvi_bdrmcnt_5_uc_sfrcondo_tier_0.33_0.67_sm_sa_mon.csv'\n",
"_____no_output_____"
],
[
"# Load: ZHVI Single-Family Homes Time Series ($)\nresp = requests.get(ZHVI_SFR_Neighborhood_url) \nlocal_path = home_path / 'ZHVI_SFR_Neighborhood.csv'\nwith open(local_path, 'wb') as output:\n output.write(resp.content)\nzhvi_sfr_neighborhood_df = pd.read_csv(local_path)",
"_____no_output_____"
],
[
"zhvi_sfr_neighborhood_df",
"_____no_output_____"
],
[
"# Brooklyn Heights is incorrectly labeled as New York County(Manhattan, should be Kings County(Brooklyn):\n\n# zhvi_sfr_neighborhood_df[zhvi_sfr_neighborhood_df['RegionName'] == 'Brooklyn Heights'] # 551\nzhvi_sfr_neighborhood_df.loc[551, 'CountyName'] = 'Kings County'\nzhvi_sfr_neighborhood_df[zhvi_sfr_neighborhood_df['RegionName'] == 'Brooklyn Heights']",
"_____no_output_____"
],
[
"# Explore Dataset: \n\n# zhvi_sfr_neighborhood_df.columns \n# ['RegionID', 'SizeRank', 'RegionName', 'RegionType', 'StateName', 'State', 'City', 'Metro', 'CountyName'\n# '1996-01-31' - '2021-02-28']\n\n# len(zhvi_sfr_neighborhood_df['Metro'].unique()) # 265\n# zhvi_sfr_neighborhood_df[zhvi_sfr_neighborhood_df['Metro'] == 'New York-Newark-Jersey City'] # 424 rows",
"_____no_output_____"
],
[
"# Filter to NYC Neighborhoods: \n\nny_df = zhvi_sfr_neighborhood_df[zhvi_sfr_neighborhood_df['State'] == 'NY'].reset_index(drop=True)\nnyc_df = ny_df[ny_df['City'] == 'New York']\nnyc_df = nyc_df.reset_index(drop=True)",
"_____no_output_____"
],
[
"# Map County Names to Borough Names:\n\n# nyc_df['CountyName'].unique() \n# ['New York County', 'Kings County', 'Queens County', 'Bronx County', 'Richmond County']\nborough_list = ['Brooklyn', 'Queens', 'Bronx', 'Manhattan', 'Staten_Island']\nmap_county_borough_dict = {'New York County': 'Manhattan', \n 'Kings County': 'Brooklyn', \n 'Queens County': 'Queens', \n 'Bronx County': 'Bronx',\n 'Richmond County': 'Staten_Island'}\nnyc_df['Borough'] = nyc_df['CountyName'].replace(map_county_borough_dict)\n",
"_____no_output_____"
],
[
"# Map Zillow Neighborhood Names to NYC Neighborhood Names:\n\n# Neighborhoods Names\n# https://www1.nyc.gov/site/planning/data-maps/open-data.page\n\nneighborhood_url = 'https://services5.arcgis.com/GfwWNkhOj9bNBqoJ/arcgis/rest/services/Neighborhood_Names/FeatureServer/0/query?where=1=1&outFields=*&outSR=4326&f=pgeojson'\nresp = requests.get(neighborhood_url)\nneighborhood_json = resp.json()\n\nneighborhood_ids_list = []\nneighborhood_details_list = []\n\nfor neighborhood_dict in neighborhood_json['features']:\n neighborhood_ids_list.append(neighborhood_dict['id']) \n \n d = {}\n d['ID'] = neighborhood_dict['id']\n # Neighborhood instead of name? \n d['Name'] = neighborhood_dict['properties']['Name']\n d['Borough'] = neighborhood_dict['properties']['Borough']\n d['Lat'] = neighborhood_dict['geometry']['coordinates'][1]\n d['Long'] = neighborhood_dict['geometry']['coordinates'][0]\n \n neighborhood_details_list.append(d)\n\nneighborhood_df = pd.DataFrame.from_dict(neighborhood_details_list)\nneighborhood_df['Borough'] = neighborhood_df['Borough'].replace({'Staten Island': 'Staten_Island'})\n",
"_____no_output_____"
],
[
"neighborhood_df",
"_____no_output_____"
],
[
"# Matching Neighborhoods\n# set(nyc_df['RegionName'].to_list()).intersection(neighborhood_df['Name'].to_list())\n#set(nyc_df.set_index(['Borough', 'RegionName']).index.to_list()).intersection(\n# neighborhood_df.set_index(['Borough', 'Name']).index.to_list())",
"_____no_output_____"
],
[
"# In Zillow, but not in NYC\n# set(nyc_df['RegionName'].to_list()).difference(neighborhood_df['Name'].to_list())\n#set(nyc_df.set_index(['Borough', 'RegionName']).index.to_list()).difference(\n# neighborhood_df.set_index(['Borough', 'Name']).index.to_list())",
"_____no_output_____"
],
[
"# In NYC but not in Zillow\n# set(neighborhood_df['Name'].to_list()).difference(nyc_df['RegionName'].to_list())\n#set(neighborhood_df.set_index(['Borough', 'Name']).index.to_list()).difference(\n# nyc_df.set_index(['Borough', 'RegionName']).index.to_list())",
"_____no_output_____"
],
[
"map_zillow_neighborhoods_dict = {'Battery Park': 'Battery Park City',\n 'Bronx Park': np.nan,\n 'Chelsea-Travis': 'Travis',\n 'Clove Lake': np.nan,\n 'Columbia Street Waterfront District': np.nan, # 'Cobble Hill',\n 'DUMBO': 'Dumbo',\n 'Douglaston-Little Neck': ['Douglaston', 'Little Neck'],\n 'Flatiron District': 'Flatiron',\n 'Floral park': 'Floral Park',\n 'Flushing Meadows Corona Park': np.nan,\n 'Fort Wadsworth': np.nan,\n 'Garment District': 'Midtown South',\n 'Grasmere - Concord': ['Grasmere', 'Concord'],\n 'Greenwood': np.nan, # 'Sunset Park',\n 'Harlem': 'Central Harlem',\n 'Highbridge': 'High Bridge',\n 'Jamaica': ['Jamaica Center', 'South Jamaica'],\n 'John F. Kennedy International Airport': np.nan,\n 'Meiers Corners': np.nan, # 'Castleton Corners',\n 'Navy Yard': 'Vinegar Hill',\n 'New Utrecht': np.nan, # 'Bensonhurst',\n 'NoHo': 'Noho',\n 'Pelham Bay Park': 'Pelham Parkway', \n 'SoHo': 'Soho',\n 'South Bronx': np.nan, # 'Melrose',\n 'Throggs Neck': 'Throgs Neck',\n 'Tremont': np.nan, # 'East Tremont',\n 'Westchester Heights': 'Westchester Square',\n 'Hunters Point': 'Long Island City'}\n",
"_____no_output_____"
],
[
"update_nyc_df = nyc_df.copy()\nupdate_nyc_df['Neighborhood'] = update_nyc_df['RegionName']\nupdate_nyc_df['Neighborhood'] = [map_zillow_neighborhoods_dict.get(key,key) for key in update_nyc_df['Neighborhood']]\nupdate_nyc_df = update_nyc_df.explode('Neighborhood')\n\nupdate_nyc_df = update_nyc_df[update_nyc_df['Neighborhood'].notna()]\nupdate_nyc_df = update_nyc_df.set_index(['Borough', 'Neighborhood'])\nupdate_nyc_df = update_nyc_df.drop(columns= ['RegionID', 'SizeRank', 'RegionName', 'RegionType', \n 'StateName', 'State', 'City', 'Metro', 'CountyName'])",
"_____no_output_____"
],
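The map-then-`explode` step above (one Zillow region fanning out to several NYC neighborhoods) can be sketched on a toy frame; the rows and the mini-mapping here are illustrative, not the real data:

```python
import pandas as pd

# Hypothetical mini-version of map_zillow_neighborhoods_dict
mapping = {"Douglaston-Little Neck": ["Douglaston", "Little Neck"]}

df = pd.DataFrame({"RegionName": ["Douglaston-Little Neck", "Astoria"]})
df["Neighborhood"] = [mapping.get(k, k) for k in df["RegionName"]]  # unmapped names pass through unchanged
df = df.explode("Neighborhood")  # list entries become one row per neighborhood
print(df["Neighborhood"].tolist())  # ['Douglaston', 'Little Neck', 'Astoria']
```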
[
"update_nyc_df",
"_____no_output_____"
],
[
"print(len(update_nyc_df.columns))",
"302\n"
],
[
"# How far back do the neighborhoods go? \n\n(update_nyc_df.notna().sum(axis=1)).apply(lambda x: round(x,0)).value_counts().sort_index(ascending=True).tail()\n",
"_____no_output_____"
],
[
"# NYC Neighborhood Sales Summary is for 2005 through 2019\n\nupdate_nyc_df.columns = pd.to_datetime(update_nyc_df.columns)\n\nzillow_2005_2019_df = update_nyc_df.T[update_nyc_df.T.index.year.isin(np.arange(2005,2019+1))]\n\n# Average:\navg_zillow_2005_2019_df = (zillow_2005_2019_df.mean().to_frame(name='Avg_Price_2005_2019')\n .sort_values('Avg_Price_2005_2019', ascending=False))\n",
"_____no_output_____"
],
[
"# Slope:\n\ndef zillow_price_trajectory(series):\n _regression_values = series.dropna().values\n if len(_regression_values) >= 2:\n return np.polyfit(np.arange(len(_regression_values)), _regression_values, 1)[0]\n else:\n return 0\n\ngrowth_zillow_2005_2019_df = (zillow_2005_2019_df.apply(lambda x: zillow_price_trajectory(x) * 12)\n .to_frame(name='Annual_Growth_2005_2019'))\n",
"_____no_output_____"
],
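The slope helper above fits a line with `np.polyfit` and annualizes by multiplying by 12; a tiny synthetic check (hypothetical prices) makes the units concrete:

```python
import numpy as np

# Hypothetical series: price rises by 500 per month for 24 months
prices = 300_000 + 500 * np.arange(24)
monthly_slope = np.polyfit(np.arange(len(prices)), prices, 1)[0]  # slope per month
annual_growth = monthly_slope * 12                                # slope per year
print(round(monthly_slope), round(annual_growth))  # 500 6000
```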
[
"growth_zillow_2005_2019_df",
"_____no_output_____"
],
[
"# avg_zillow_2005_2019_df.reset_index()[avg_zillow_2005_2019_df.reset_index()['Neighborhood'] == 'Long Island City']\n# avg_zillow_2005_2019_df.reset_index()[avg_zillow_2005_2019_df.reset_index()['Neighborhood'] == 'Hunters Point']\n",
"_____no_output_____"
],
[
"growth_zillow_2005_2019_df = avg_zillow_2005_2019_df.join(growth_zillow_2005_2019_df)\ngrowth_zillow_2005_2019_df['Growth_%_2005_2019'] = (growth_zillow_2005_2019_df['Annual_Growth_2005_2019']\n / growth_zillow_2005_2019_df['Avg_Price_2005_2019'])\ngrowth_zillow_2005_2019_df.sort_values(by='Growth_%_2005_2019', inplace=True, ascending=False)\n",
"_____no_output_____"
],
[
"growth_zillow_2005_2019_df",
"_____no_output_____"
],
[
"# Save to CSV: \ngrowth_zillow_2005_2019_df.to_csv(path_or_buf= home_path / 'Zillow_NYC_SFR_2005_2019.csv')\n",
"_____no_output_____"
],
[
"# Best and Worst Performing Residential Real Estate Neighborhoods in NYC: (from 2004)\n# Zillow was only founded in 2004, ignore data prior (back to 1996 somehow)\n\nneighborhood_growth_df = update_nyc_df.T.copy()\nneighborhood_growth_df.index = pd.to_datetime(neighborhood_growth_df.index, infer_datetime_format=True)\nneighborhood_growth_df = neighborhood_growth_df[neighborhood_growth_df.index > pd.to_datetime('2004')]\n\nyearly_avg_series = neighborhood_growth_df.pct_change().add(1).T.mean()\nyearly_avg_series.iloc[0] = 1\n\nneighborhood_growth_df = neighborhood_growth_df.pct_change().add(1)\nneighborhood_growth_df = neighborhood_growth_df.T.fillna(yearly_avg_series, axis=0).T # .to_dict()\nneighborhood_growth_df = neighborhood_growth_df.cumprod().T.sort_values(neighborhood_growth_df.index[-1]).T\nyearly_avg_series = yearly_avg_series.cumprod().rename('Average')\n\nrank_series = neighborhood_growth_df.iloc[-1].rank(pct=True).apply(lambda x: round(x,2))\n",
"_____no_output_____"
],
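The `pct_change().add(1)` then `cumprod()` chain above turns a monthly price series into a cumulative growth factor; a minimal illustration with hypothetical prices:

```python
import pandas as pd

prices = pd.Series([100.0, 110.0, 121.0])
growth = prices.pct_change().add(1)  # NaN, 1.1, 1.1
growth.iloc[0] = 1                   # anchor the first period at 1x, as in the cell above
cumulative = growth.cumprod()        # 1.0, 1.1, ~1.21
print(cumulative.iloc[-1])           # total growth factor, ~1.21
```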
[
"# Plot By Borough:\n\n# fig, ax = plt.subplots(figsize=(15,8))\n# sns.lineplot(data=neighborhood_growth_df.T.reset_index(level=1, drop=True).T, \n# ci=95, linewidth=3, ax=ax); # palette='viridis', \n",
"_____no_output_____"
],
[
"# Top and Bottom 5%, with Average:\n\nfig, ax = plt.subplots(figsize=(15,8)) # sharex=True, sharey=True\n\nsns.lineplot(data=neighborhood_growth_df.T[rank_series > 0.95].reset_index(level=0, drop=True).T, \n palette='Blues', dashes=False, linewidth=2, ax=ax);\nsns.lineplot(data=neighborhood_growth_df.T[rank_series < 0.05].reset_index(level=0, drop=True).T, \n palette='Purples', dashes=False, linewidth=2, ax=ax);\nsns.lineplot(data=yearly_avg_series, color=u'k', linewidth=5, linestyle='--', legend=\"brief\", ax=ax);\n# '-', '--', '-.', ':', 'None', ' ', '', 'solid', 'dashed', 'dashdot', 'dotted'\nplt.title(\"Top and Bottom 5% of Neighborhoods, and Average (Zillow)\", fontsize=20);",
"_____no_output_____"
],
[
"# Build Mapbox: \n# Map Neighborhoods to NTA's - to leverage geojson for actually mapping\n# Resource: https://towardsdatascience.com/new-to-data-visualization-start-with-new-york-city-107785f836ab",
"_____no_output_____"
],
[
"# NTA:\n# Source: https://www1.nyc.gov/site/planning/data-maps/open-data/dwn-nynta.page\n\nnta_json_url = 'https://services5.arcgis.com/GfwWNkhOj9bNBqoJ/ArcGIS/rest/services/NYC_Neighborhood_Tabulation_Areas/FeatureServer/0/query?where=1=1&outFields=*&outSR=4326&f=pgeojson'\nresp = requests.get(nta_json_url)\nnta_json = resp.json()\n\nnta_property_list = []\nfor nta_property in nta_json['features']:\n nta_dict = dict.fromkeys(['BoroName', 'NTACode', 'NTAName'])\n for key in nta_dict:\n nta_dict[key] = nta_property['properties'][key]\n nta_property_list.append(nta_dict)\n\nnta_df = pd.DataFrame(nta_property_list)\nnta_df['BoroName'] = nta_df['BoroName'].replace({'Staten Island': 'Staten_Island'})\nnta_df = nta_df.rename(columns={'BoroName': 'Borough'}) # 'NTAName': 'Neighborhood'\n",
"_____no_output_____"
],
[
"nta_df",
"_____no_output_____"
],
[
"nta_to_neighborhood_staten_island_dict = {\n \"Annadale-Huguenot-Prince's Bay-Eltingville\": ['Annadale', 'Eltingville', 'Huguenot', \n \"Prince's Bay\", 'Greenridge'],\n 'Charleston-Richmond Valley-Tottenville': ['Tottenville', 'Charleston', 'Richmond Valley', 'Butler Manor',\n 'Sandy Ground', 'Pleasant Plains'],\n 'Grasmere-Arrochar-Ft. Wadsworth': ['Arrochar', 'Grasmere', 'Concord'],\n 'Grymes Hill-Clifton-Fox Hills': ['Grymes Hill', 'Silver Lake', 'Randall Manor', 'Sunnyside', \n 'Fox Hills', 'Park Hill'],\n \"Mariner's Harbor-Arlington-Port Ivory-Graniteville\": ['Elm Park', \"Mariner's Harbor\", 'Arlington', \n 'Port Ivory', 'Howland Hook', 'Graniteville'],\n 'New Brighton-Silver Lake': 'West Brighton', \n 'New Dorp-Midland Beach': ['New Dorp', 'New Dorp Beach', 'Grant City', 'Midland Beach'],\n 'New Springville-Bloomfield-Travis': ['Bloomfield', 'Chelsea', 'Travis', 'Bulls Head', 'Willowbrook', \n 'Manor Heights', 'New Springville'],\n 'Oakwood-Oakwood Beach': ['Oakwood','Richmond Town'],\n 'Old Town-Dongan Hills-South Beach': ['Dongan Hills', 'South Beach', 'Old Town'],\n 'Rossville-Woodrow': ['Rossville', 'Woodrow'],\n 'Stapleton-Rosebank': ['Tompkinsville', 'Stapleton', 'Clifton', 'Rosebank', 'Shore Acres'],\n 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill': ['Heartland Village', 'Lighthouse Hill',\n 'Egbertville', 'Todt Hill', 'Emerson Hill'],\n 'West New Brighton-New Brighton-St. George': ['St. George', 'New Brighton'],\n 'park-cemetery-etc-Staten Island': np.nan,\n # These have the key/name displayed as well as the other neighborhood\n 'Great Kills': ['Great Kills', 'Bay Terrace'],\n 'Westerleigh': ['Westerleigh', 'Castleton Corners']\n}\n",
"_____no_output_____"
],
[
"nta_to_neighborhood_queens_dict = {\n 'Airport': np.nan,\n 'Baisley Park': 'Rochdale',\n 'Bayside-Bayside Hills': 'Bayside',\n 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel': ['Breezy Point', 'Roxbury', 'Neponsit', \n 'Belle Harbor', 'Rockaway Park', \n 'Rockaway Beach', 'Broad Channel'],\n 'Briarwood-Jamaica Hills': ['Briarwood', 'Jamaica Hills'],\n 'Douglas Manor-Douglaston-Little Neck': ['Little Neck', 'Douglaston'],\n 'East Flushing': 'Murray Hill',\n 'Elmhurst-Maspeth': 'Woodside',\n 'Far Rockaway-Bayswater': ['Far Rockaway', 'Bayswater'],\n 'Fresh Meadows-Utopia': 'Fresh Meadows',\n 'Ft. Totten-Bay Terrace-Clearview': 'Bay Terrace',\n 'Glen Oaks-Floral Park-New Hyde Park': ['Glen Oaks', 'Floral Park'],\n 'Hammels-Arverne-Edgemere': ['Hammels', 'Arverne', 'Somerville', 'Edgemere'],\n 'Hunters Point-Sunnyside-West Maspeth': ['Long Island City', 'Sunnyside Gardens', 'Sunnyside', 'Blissville'],\n 'Jamaica': 'Jamaica Center',\n 'Jamaica Estates-Holliswood': ['Jamaica Estates', 'Holliswood'],\n 'Lindenwood-Howard Beach': ['Lindenwood', 'Howard Beach'],\n 'Old Astoria': 'Astoria',\n 'Pomonok-Flushing Heights-Hillcrest': ['Pomonok', 'Utopia', 'Hillcrest'],\n 'Queensbridge-Ravenswood-Long Island City': ['Queensbridge', 'Ravenswood', 'Long Island City'],\n 'Springfield Gardens North': 'Rochdale',\n 'Springfield Gardens South-Brookville': ['Springfield Gardens', 'Brookville'],\n 'park-cemetery-etc-Queens': np.nan,\n # These have the key/name displayed as well as the other neighborhood\n 'Steinway': ['Steinway', 'Astoria Heights'],\n 'Elmhurst': ['Elmhurst', 'Lefrak City'],\n 'Forest Hills': ['Forest Hills', 'Forest Hills Gardens'],\n 'Whitestone': ['Whitestone', 'Malba', 'Beechhurst'],\n 'Bellerose': ['Bellerose', 'Bellaire'],\n}\n",
"_____no_output_____"
],
[
"nta_to_neighborhood_bronx_dict = {\n 'Allerton-Pelham Gardens': ['Allerton', 'Pelham Gardens'],\n 'Bedford Park-Fordham North': 'Bedford Park',\n 'Claremont-Bathgate': 'Claremont Village',\n 'Crotona Park East': 'Claremont Village',\n 'East Concourse-Concourse Village': ['Concourse', 'Concourse Village'],\n 'Eastchester-Edenwald-Baychester': ['Eastchester', 'Edenwald', 'Baychester'],\n 'Fordham South': 'Fordham',\n 'Highbridge': 'High Bridge',\n 'Melrose South-Mott Haven North': 'Melrose',\n 'Morrisania-Melrose': ['Claremont Village', 'Morrisania'],\n 'Mott Haven-Port Morris': ['Mott Haven', 'Port Morris'],\n 'North Riverdale-Fieldston-Riverdale': ['North Riverdale', 'Fieldston', 'Riverdale'],\n 'Pelham Bay-Country Club-City Island': ['Pelham Bay', 'Country Club', 'City Island'],\n 'Rikers Island': np.nan,\n 'Schuylerville-Throgs Neck-Edgewater Park': ['Schuylerville', 'Throgs Neck', 'Edgewater Park'],\n 'Soundview-Bruckner': 'Soundview',\n 'Soundview-Castle Hill-Clason Point-Harding Park': ['Castle Hill', 'Clason Point', 'Soundview'],\n 'Spuyten Duyvil-Kingsbridge': ['Spuyten Duyvil', 'Kingsbridge'],\n 'University Heights-Morris Heights': ['Morris Heights', 'University Heights'],\n 'Van Cortlandt Village': 'Kingsbridge',\n 'Van Nest-Morris Park-Westchester Square': ['Van Nest', 'Morris Park'],\n 'West Concourse': 'Mount Eden',\n 'West Farms-Bronx River': 'West Farms',\n 'Westchester-Unionport': ['Westchester Square', 'Unionport'],\n 'Williamsbridge-Olinville': ['Williamsbridge', 'Olinville'],\n 'Woodlawn-Wakefield': ['Woodlawn', 'Wakefield'],\n 'park-cemetery-etc-Bronx': np.nan\n}\n",
"_____no_output_____"
],
[
"nta_to_neighborhood_brooklyn_dict = {\n 'Bedford': 'Bedford Stuyvesant',\n 'Bensonhurst East': 'Bensonhurst',\n 'Bensonhurst West': 'Bensonhurst',\n 'Brooklyn Heights-Cobble Hill': ['Fulton Ferry', 'Brooklyn Heights', 'Cobble Hill'],\n 'Bushwick North': 'Bushwick',\n 'Bushwick South': 'Bushwick',\n 'Carroll Gardens-Columbia Street-Red Hook': ['Carroll Gardens', 'Red Hook'], # (Possibly Cobble Hill)\n 'Crown Heights North': ['Crown Heights', 'Weeksville'],\n 'Crown Heights South': 'Prospect Lefferts Gardens',\n 'Cypress Hills-City Line': ['City Line', 'Cypress Hills', 'Highland Park'],\n 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill': ['Boerum Hill', 'Dumbo', 'Vinegar Hill', 'Downtown'],\n 'East Flatbush-Farragut': 'East Flatbush',\n 'East New York (Pennsylvania Ave)': 'New Lots',\n 'Georgetown-Marine Park-Bergen Beach-Mill Basin': ['Bergen Beach', 'Georgetown', 'Mill Basin', \n 'Mill Island', 'Marine Park'],\n 'Kensington-Ocean Parkway': 'Kensington', \n 'North Side-South Side': 'Williamsburg', # ['North Side', 'South Side'],\n 'Ocean Parkway South': 'Ocean Parkway',\n 'Park Slope-Gowanus': ['Gowanus', 'Park Slope'],\n 'Prospect Lefferts Gardens-Wingate': ['Prospect Lefferts Gardens', 'Wingate'],\n 'Rugby-Remsen Village': ['Remsen Village', 'Rugby'],\n 'Seagate-Coney Island': ['Coney Island', 'Sea Gate'],\n 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach': ['Gerritsen Beach', 'Sheepshead Bay', 'Manhattan Beach'],\n 'Stuyvesant Heights': 'Bedford Stuyvesant',\n 'Sunset Park East': 'Sunset Park',\n 'Sunset Park West': 'Sunset Park',\n 'West Brighton': 'Brighton Beach',\n 'park-cemetery-etc-Brooklyn': np.nan,\n # These have the key/name displayed as well as the other neighborhood\n 'Ocean Hill': ['Ocean Hill', 'Broadway Junction'],\n 'Flatbush': ['Flatbush', 'Ditmas Park', 'Prospect Park South'],\n 'Bay Ridge': ['Bay Ridge', 'Fort Hamilton'],\n 'Midwood': ['Midwood', 'Manhattan Terrace'],\n 'Canarsie': ['Canarsie', 'Paerdegat Basin']\n}\n",
"_____no_output_____"
],
[
"nta_to_neighborhood_manhattan_dict = {\n 'Battery Park City-Lower Manhattan': ['Battery Park City', 'Financial District'],\n 'Central Harlem North-Polo Grounds': 'Central Harlem',\n 'Central Harlem South': 'Central Harlem',\n 'East Harlem North': 'East Harlem',\n 'East Harlem South': 'East Harlem',\n 'Hudson Yards-Chelsea-Flatiron-Union Square': ['Hudson Yards', 'Chelsea', 'Flatiron'],\n 'Lenox Hill-Roosevelt Island': ['Lenox Hill', 'Roosevelt Island'],\n 'Marble Hill-Inwood': ['Marble Hill', 'Inwood'],\n 'Murray Hill-Kips Bay': 'Murray Hill', \n 'SoHo-TriBeCa-Civic Center-Little Italy': ['Soho', 'Tribeca', 'Civic Center', 'Little Italy'],\n 'Turtle Bay-East Midtown': ['Turtle Bay', 'Sutton Place', 'Tudor City'],\n 'Upper East Side-Carnegie Hill': ['Upper East Side', 'Carnegie Hill'],\n 'Washington Heights North': 'Washington Heights',\n 'Washington Heights South': 'Washington Heights',\n 'Midtown-Midtown South': ['Midtown','Midtown South'],\n 'Stuyvesant Town-Cooper Village': 'Stuyvesant Town',\n 'park-cemetery-etc-Manhattan': np.nan,\n # These have the key/name displayed as well as the other neighborhood\n 'West Village': ['West Village', 'Greenwich Village'],\n 'Upper West Side': ['Upper West Side', 'Manhattan Valley'],\n 'Chinatown': ['Chinatown', 'Noho']\n}\n",
"_____no_output_____"
],
[
"nta_to_neighborhood_dict = {**nta_to_neighborhood_brooklyn_dict, **nta_to_neighborhood_queens_dict, \n **nta_to_neighborhood_bronx_dict, **nta_to_neighborhood_manhattan_dict, \n **nta_to_neighborhood_staten_island_dict}\n\nimport json\n\njson.dump(nta_to_neighborhood_dict, open(home_path / \"NTA_to_Neighborhood_mapping_dict.json\", 'w' ) )\n",
"_____no_output_____"
],
[
"nta_zillow_df = nta_df.copy()\nnta_zillow_df['Neighborhood'] = [nta_to_neighborhood_dict.get(key,key) # only changing names in dict\n for key in nta_zillow_df['NTAName']]\nnta_zillow_df = nta_zillow_df.explode('Neighborhood')\nnta_zillow_df['Price_Growth_Percentile'] = (nta_zillow_df.set_index(['Borough','Neighborhood']).index\n .map(rank_series.to_dict()))\nnta_zillow_df['Price_Growth'] = (nta_zillow_df.set_index(['Borough','Neighborhood']).index\n .map(neighborhood_growth_df.iloc[-1].apply(lambda x: round(x,2)).to_dict()))\nnta_zillow_df = nta_zillow_df.groupby(['Borough','NTAName']).mean().fillna(0) \nnta_zillow_df = nta_df.set_index(['Borough','NTAName']).join(nta_zillow_df)\n",
"_____no_output_____"
],
[
"#nta_percentiles_df[nta_percentiles_df['Price_Growth_Percentile'].isna()] # None\n# nta_percentiles_df[nta_percentiles_df['Price_Growth'].isna()] # None\n# nta_percentiles_df[nta_percentiles_df['NTACode'].isna()] # None\nnta_zillow_df.sort_values(by='Price_Growth', ascending=False)",
"_____no_output_____"
],
[
"# Plotly Express: Mapbox Choropleth Map\n\nimport json\n# ! pip install plotly\nimport plotly.express as px\n\nnta_geojson_url = 'https://data.cityofnewyork.us/api/geospatial/cpf4-rkhq?method=export&format=GeoJSON'\nresp = requests.get(nta_geojson_url)\nnycmap = json.loads(resp.text)\n\n#local_path = home_path / 'NYC_NTA.geojson'\n#with open(local_path, 'wb') as output:\n# output.write(resp.content)\n#nycmap = json.loads(resp.content)\n\n# MapBox Token:\nMapBox_Token_path = Path.home() / 'Jupyter' / 'MapBox_Token.txt'\nif MapBox_Token_path.is_file():\n px.set_mapbox_access_token(open(MapBox_Token_path, 'rt').read())\nelse:\n print('Error: File not found')",
"_____no_output_____"
],
[
"len(nycmap[\"features\"]) #[0] #[\"properties\"][\"ntacode\"]",
"_____no_output_____"
],
[
"# Center Figure on 'Williamsburg'\n# williamsburg_geo_dict = {'lat': 40.7, 'lon': -74.0}\n# Lat 40.707153\n# Long -73.958117",
"_____no_output_____"
],
[
"#neighborhood_growth_df.iloc[-1].max() # 4.094288817693072\n#neighborhood_growth_df.iloc[-1].min() # 0.8445564702907127\n# nta_zillow_df['Price_Growth'][nta_zillow_df['Price_Growth'] > 0.0].median() # 1.89\n# nta_zillow_df['Price_Growth'][nta_zillow_df['Price_Growth'] > 0.0].mean() # 2.028\n# (4.1-0.85)/2 + 0.85 # 2.4749",
"_____no_output_____"
],
[
"# Plotly Express: basic Choropleth Map\nfig = px.choropleth(data_frame=nta_zillow_df.reset_index(drop=False),\n geojson=nycmap, # nycmap, nta_geojson_url\n locations='NTACode',\n featureidkey='properties.ntacode',\n color='Price_Growth', # Price_Growth_Percentile, Price_Growth\n color_continuous_scale='Portland',\n color_continuous_midpoint=2.25,\n # range_color=(0.0, 4.1),\n center={\"lat\": 40.7, \"lon\": -73.9},\n hover_name='NTAName', # 'NTAName'\n labels={'NTAName': 'Neighborhood'},\n title=\"NYC Residential Real Estate Price Change HeatMap (Zillow 2004-2019)\",\n scope='usa'\n )\nfig.update_geos(fitbounds=\"locations\")\n# fig.show(); # not rendering in GitHub, loading image instead:\nImage(filename= home_path / 'Figures' / \"Figure_Zillow.png\")\n",
"_____no_output_____"
],
[
"# Save to CSV: \n\nneighborhood_growth_df.to_csv(path_or_buf= home_path / 'Zillow_Historical_Analysis.csv')\nyearly_avg_series.to_csv(home_path / 'Zillow_Historical_Analysis_yearly_avg.csv')\nnta_zillow_df.to_csv(path_or_buf= home_path / 'Zillow_Historical_Heatmap_data.csv')\n",
"_____no_output_____"
],
[
"# Plotly Express: Mapbox Choropleth Map\n\"\"\"fig = px.choropleth_mapbox(data_frame=nta_zillow_df.reset_index(drop=False),\n geojson=nycmap, # nycmap, nta_geojson_url\n locations='NTACode',\n featureidkey='properties.ntacode',\n color='Price_Growth', # Price_Growth_Percentile, Price_Growth\n color_continuous_scale='Portland', # RdBu, viridis, Portland, balance\n color_continuous_midpoint=2.25,\n #range_color=(0.0, 1.0), # (0.0, 1.0), (0.0, 4.1)\n #range_color=(0.0, 4.1), # (0.0, 1.0), (0.0, 4.1)\n mapbox_style=\"carto-positron\",\n zoom=9, \n center={\"lat\": 40.7, \"lon\": -74.0},\n opacity=0.7,\n hover_name='NTAName', # 'NTAName'\n labels={'NTAName': 'Neighborhood'},\n title=\"NYC Residential Real Estate Price Change HeatMap (2004-2019)\"\n )\n\n# Note the geojson attribute can also be the URL to a GeoJSON file, \n# which can speed up map rendering in certain cases.\n\n# The GeoJSON data is passed to the geojson argument, and the data is passed into the color argument \n# of px.choropleth_mapbox, in the same order as the IDs are passed into the location argument.\nfig.show()\n\"\"\"",
"_____no_output_____"
],
[
"# Other Analysis:",
"_____no_output_____"
],
[
"# Calculate Growth over Full Timeline Available: \n\n#growth_zillow_all_years_df = (update_nyc_df.T.apply(lambda x: zillow_price_trajectory(x) * 12)\n# .to_frame(name='Annual_Growth_All_Years'))\n#growth_zillow_all_years_df.sort_values(by='Annual_Growth_All_Years', inplace=True, ascending=False)\n",
"_____no_output_____"
],
[
"# Compare SFR and Condo/Co-op?\n# Are these overlapping datasets? \n\n# Condos/Co-ops Are Different than Single Family \n# See: https://www.zillow.com/new-york-ny/home-values/\n",
"_____no_output_____"
],
[
"# NYC Metro-Area (Surrounding Neighborhoods: Newark, Jersey City, etc. )\n\n#(zhvi_sfr_neighborhood_df[zhvi_sfr_neighborhood_df['Metro'] == 'New York-Newark-Jersey City']\n# .reset_index(drop=True))\n\n# ny_df['City'].unique()\n# Commuting Areas: ['Croton-on-Hudson', 'Pelham', 'New Rochelle', 'Scarsdale', 'Mount Vernon', 'Yonkers'\n# 'Town of Mamaroneck', 'Manhasset', 'Massapequa', 'Great Neck', ]\n# Far, but Interesting: ['Town Of Cornwall', 'Hyde Park']\n\n# New Jersey\n# nj_df = zhvi_sfr_neighborhood_df[zhvi_sfr_neighborhood_df['State'] == 'NJ'].reset_index(drop=True)\n# nj_df['City'].unique()\n# ['Jersey City', 'Newark', ",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd99ea8cf1b5b391ad38ef078783d3e832b5e16 | 7,555 | ipynb | Jupyter Notebook | jupyter_notebooks/ETL/papermill/papermill_example.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | 2 | 2021-02-13T05:52:05.000Z | 2022-02-08T09:52:35.000Z | jupyter_notebooks/ETL/papermill/papermill_example.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | null | null | null | jupyter_notebooks/ETL/papermill/papermill_example.ipynb | manual123/Nacho-Jupyter-Notebooks | e75523434b1a90313a6b44e32b056f63de8a7135 | [
"MIT"
] | null | null | null | 22.155425 | 222 | 0.375116 | [
[
[
"# PURPOSE:",
"_____no_output_____"
],
[
"#### This papermill notebook will persist (\"save\" or if using ```scrapbook``` library's terminology, \"glue\") the path to the dataframe that this notebook generated, which will then be consumed by another notebook",
"_____no_output_____"
]
],
[
[
"group = 'a'",
"_____no_output_____"
],
[
"from pathlib import Path\nimport pandas as pd\nimport scrapbook as sb\ndata = pd.DataFrame({'group': ['a', 'a', 'a', 'b','b', 'b', 'c', 'c','c'],\n 'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})\ndata",
"_____no_output_____"
],
[
"save_path = Path('/home/pybokeh/temp/output.csv')",
"_____no_output_____"
],
[
"data_filtered = data.query(\"group == @group\")",
"_____no_output_____"
],
[
"data_filtered",
"_____no_output_____"
],
[
"data_filtered.to_csv(save_path, index=False)",
"_____no_output_____"
],
[
"str(save_path)",
"_____no_output_____"
]
],
[
[
"#### Using scrapbook library's ```glue``` class, we can now persist the string path by \"glueing\" it to this notebook.",
"_____no_output_____"
]
],
[
[
"sb.glue(\"path_to_df\", str(save_path))",
"_____no_output_____"
],
[
"sb.glue(\"just_some_list\", ['one', 'two', 'three'])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecd9af1c467b6a97031979d48100575a3fbb7ee7 | 3,257 | ipynb | Jupyter Notebook | notebook_utils/preprocess_abstract.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | 1 | 2021-03-26T08:40:15.000Z | 2021-03-26T08:40:15.000Z | notebook_utils/preprocess_abstract.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | null | null | null | notebook_utils/preprocess_abstract.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | null | null | null | 25.248062 | 114 | 0.532392 | [
[
[
"**This notebook preprocesses the abstracts. <br>\nThe abstracts are stored as inverted indexes, a format that is not suited for models (doc2vec, BERT, etc.), so the text is reconstructed and cleaned here.**",
"_____no_output_____"
]
],
[
[
"from tqdm import tqdm_notebook as tqdm\nimport ast\nimport re\nimport gzip\nimport pickle",
"_____no_output_____"
],
[
"def save(object, filename, protocol = 0):\n \"\"\"Saves a compressed object to disk\n \"\"\"\n file = gzip.GzipFile(filename, 'wb')\n file.write(pickle.dumps(object, protocol))\n file.close()",
"_____no_output_____"
],
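The `save` helper above writes a gzip-compressed pickle; a matching `load` counterpart (a sketch mirroring the same round trip, not part of the original notebook) would look like:

```python
import gzip
import pickle

def save(obj, filename, protocol=0):
    """Saves a compressed object to disk"""
    with gzip.GzipFile(filename, 'wb') as f:
        f.write(pickle.dumps(obj, protocol))

def load(filename):
    """Loads a gzip-compressed pickled object back from disk"""
    with gzip.GzipFile(filename, 'rb') as f:
        return pickle.loads(f.read())
```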
[
"## Reformatting and Cleaning\nf = open(\"/content/drive/MyDrive/altegrad_datachallenge/data/abstracts.txt\", \"r\")\npattern = re.compile(r'(,){2,}')\ndic = {}\nfor l in tqdm(f):\n if l == \"\\n\":\n continue\n paper_id = l.split(\"----\")[0]\n inv = \"\".join(l.split(\"----\")[1:])\n res = ast.literal_eval(inv)\n abstract = [\"\" for i in range(res[\"IndexLength\"])]\n inv_indx = res[\"InvertedIndex\"]\n for word in inv_indx:\n for pos in inv_indx[word]:\n abstract[pos] = word.lower()\n abstract = re.sub(pattern, ',', \",\".join(abstract))\n dic[paper_id] = abstract\nfor key in tqdm(dic.keys()):\n dic[key] = dic[key].replace(',', ' ')",
"_____no_output_____"
],
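The reconstruction loop above can be illustrated on a toy record using the same `IndexLength`/`InvertedIndex` schema (the words here are made up):

```python
# Hypothetical record in the abstracts-file schema
res = {"IndexLength": 4, "InvertedIndex": {"Graphs": [0], "are": [1], "everywhere": [2, 3]}}

abstract = ["" for _ in range(res["IndexLength"])]
for word, positions in res["InvertedIndex"].items():
    for pos in positions:  # place each word at its recorded positions
        abstract[pos] = word.lower()

print(" ".join(abstract))  # graphs are everywhere everywhere
```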
[
"# Saving\nsave(dic, \"/content/drive/MyDrive/altegrad_datachallenge/preprocess_abstracts.txt\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecd9b62a5fb2d73d20ee55768cd5503765d69232 | 576,527 | ipynb | Jupyter Notebook | opencv-python-free-course-code/11_Object_Tracking/11_objectTracking.ipynb | seanbei/opencv-python-free-course | dbcd36517c4b15993de47369983bfe5f01839fd4 | [
"MIT"
] | null | null | null | opencv-python-free-course-code/11_Object_Tracking/11_objectTracking.ipynb | seanbei/opencv-python-free-course | dbcd36517c4b15993de47369983bfe5f01839fd4 | [
"MIT"
] | null | null | null | opencv-python-free-course-code/11_Object_Tracking/11_objectTracking.ipynb | seanbei/opencv-python-free-course | dbcd36517c4b15993de47369983bfe5f01839fd4 | [
"MIT"
] | null | null | null | 1,208.651992 | 565,548 | 0.959039 | [
[
[
"# Object Tracking\n**Satya Mallick, LearnOpenCV.com**\n\n- What is tracking?\n- Tracking in computer vision.\n- Motion model and appearance model.\n- OpenCV API Tracker Class.",
"_____no_output_____"
],
[
"# Goal \n\n Given the initial location of an object, track its location in subsequent frames \n\n",
"_____no_output_____"
],
[
"# Tracker Class in OpenCV\n\n1. BOOSTING\n2. MIL \n3. KCF \n4. CSRT\n5. TLD \n * Tends to recover from occlusions\n6. MEDIANFLOW \n * Good for predictable slow motion\n7. GOTURN\n * Deep Learning based\n * Most Accurate \n8. MOSSE\n * Fastest",
"_____no_output_____"
]
],
[
[
"from IPython.display import HTML\nHTML(\"\"\"\n<video width=1024 controls>\n <source src=\"race_car_preview.mp4\" type=\"video/mp4\">\n</video>\n\"\"\")",
"_____no_output_____"
],
[
"# Import modules\nimport cv2\nimport sys\nimport os\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\nfrom IPython.display import HTML\nimport urllib\n\nvideo_input_file_name = \"race_car.mp4\"\n\ndef drawRectangle(frame, bbox):\n p1 = (int(bbox[0]), int(bbox[1]))\n p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))\n cv2.rectangle(frame, p1, p2, (255,0,0), 2, 1)\n\ndef displayRectangle(frame, bbox):\n plt.figure(figsize=(20,10))\n frameCopy = frame.copy()\n drawRectangle(frameCopy, bbox)\n # Convert BGR (OpenCV) to RGB for matplotlib display\n frameCopy = cv2.cvtColor(frameCopy, cv2.COLOR_BGR2RGB)\n plt.imshow(frameCopy); plt.axis('off') \n\ndef drawText(frame, txt, location, color = (50,170,50)):\n cv2.putText(frame, txt, location, cv2.FONT_HERSHEY_SIMPLEX, 1, color, 3)\n",
"_____no_output_____"
]
],
[
[
"# Download tracking model (for GOTURN only)",
"_____no_output_____"
]
],
[
[
"if not os.path.isfile('goturn.prototxt') or not os.path.isfile('goturn.caffemodel'):\n print(\"Downloading GOTURN model zip file\")\n urllib.request.urlretrieve('https://www.dropbox.com/sh/77frbrkmf9ojfm6/AACgY7-wSfj-LIyYcOgUSZ0Ua?dl=1', 'GOTURN.zip')\n \n # Uncompress the file\n !tar -xvf GOTURN.zip\n\n # Delete the zip file\n os.remove('GOTURN.zip')",
"_____no_output_____"
]
],
[
[
"# GOTURN Tracker\n\n",
"_____no_output_____"
],
[
"# Create the Tracker instance",
"_____no_output_____"
]
],
[
[
"# Set up tracker\ntracker_types = ['BOOSTING', 'MIL','KCF', 'CSRT', 'TLD', 'MEDIANFLOW', 'GOTURN','MOSSE']\n\n# Change the index to change the tracker type\ntracker_type = tracker_types[2]\n\nif tracker_type == 'BOOSTING':\n tracker = cv2.legacy_TrackerBoosting.create()\nelif tracker_type == 'MIL':\n tracker = cv2.TrackerMIL_create()\nelif tracker_type == 'KCF':\n tracker = cv2.TrackerKCF_create()\nelif tracker_type == 'CSRT':\n tracker = cv2.legacy_TrackerCSRT.create()\nelif tracker_type == 'TLD':\n tracker = cv2.legacy_TrackerTLD.create()\nelif tracker_type == 'MEDIANFLOW':\n tracker = cv2.legacy_TrackerMedianFlow.create()\nelif tracker_type == 'GOTURN':\n tracker = cv2.TrackerGOTURN_create() \nelse:\n tracker = cv2.legacy_TrackerMOSSE.create()",
"_____no_output_____"
]
],
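The if/elif chain above can also be expressed as a lookup table. This is a sketch, not the notebook's code: the dotted attribute paths are assumptions matching an OpenCV >= 4.5 contrib build, and the stub module below only demonstrates the dispatch logic so it runs without OpenCV installed.

```python
from types import SimpleNamespace

# Maps tracker names to the dotted factory path on the cv2 module.
# Paths are assumptions for an OpenCV >= 4.5 contrib build; adjust them
# if your cv2 exposes the trackers differently.
TRACKER_FACTORIES = {
    "BOOSTING":   "legacy.TrackerBoosting_create",
    "MIL":        "TrackerMIL_create",
    "KCF":        "TrackerKCF_create",
    "CSRT":       "legacy.TrackerCSRT_create",
    "TLD":        "legacy.TrackerTLD_create",
    "MEDIANFLOW": "legacy.TrackerMedianFlow_create",
    "GOTURN":     "TrackerGOTURN_create",
    "MOSSE":      "legacy.TrackerMOSSE_create",
}

def create_tracker(cv2_module, name):
    """Resolve the dotted factory path on the given module and call it."""
    obj = cv2_module
    for part in TRACKER_FACTORIES[name].split("."):
        obj = getattr(obj, part)
    return obj()

# Stub standing in for cv2, so the dispatch can be checked anywhere.
stub_cv2 = SimpleNamespace(
    TrackerKCF_create=lambda: "kcf-tracker",
    legacy=SimpleNamespace(TrackerCSRT_create=lambda: "csrt-tracker"),
)
print(create_tracker(stub_cv2, "KCF"))  # kcf-tracker
```

With a real `import cv2`, `create_tracker(cv2, tracker_type)` would replace the whole chain in one call.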
[
[
"# Read input video & Setup output Video",
"_____no_output_____"
]
],
[
[
"# Read video\nvideo = cv2.VideoCapture(video_input_file_name)\nok, frame = video.read()\n\n# Exit if video not opened\nif not video.isOpened():\n print(\"Could not open video\")\n sys.exit()\nelse : \n width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))\n height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))\n \nvideo_output_file_name = 'race_car-' + tracker_type + '.mp4'\nvideo_out = cv2.VideoWriter(video_output_file_name,cv2.VideoWriter_fourcc(*'avc1'), 10, (width, height))\n",
"_____no_output_____"
]
],
[
[
"# Define Bounding Box",
"_____no_output_____"
]
],
[
[
"# Define a bounding box\nbbox = (1300, 405, 160, 120)\n#bbox = cv2.selectROI(frame, False)\n#print(bbox)\ndisplayRectangle(frame,bbox)",
"_____no_output_____"
]
],
[
[
"# Initialize Tracker \n\n1. One frame\n2. A bounding box \n",
"_____no_output_____"
]
],
[
[
"# Initialize tracker with first frame and bounding box\n\nok = tracker.init(frame, bbox)",
"_____no_output_____"
]
],
[
[
"# Read frame and Track Object",
"_____no_output_____"
]
],
[
[
"while True:\n ok, frame = video.read()\n if not ok:\n break \n \n # Start timer\n timer = cv2.getTickCount()\n\n # Update tracker\n ok, bbox = tracker.update(frame)\n\n # Calculate Frames per second (FPS)\n fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer);\n\n # Draw bounding box\n if ok:\n drawRectangle(frame, bbox)\n else :\n drawText(frame, \"Tracking failure detected\", (80,140), (0, 0, 255))\n\n # Display Info\n drawText(frame, tracker_type + \" Tracker\", (80,60))\n drawText(frame, \"FPS : \" + str(int(fps)), (80,100))\n \n # Write frame to video\n video_out.write(frame)\n \nvideo.release()\nvideo_out.release()",
"_____no_output_____"
],
[
"# Tracker: KCF\nHTML(\"\"\"\n<video width=1024 controls>\n <source src=\"race_car-KCF.mp4\" type=\"video/mp4\">\n</video>\n\"\"\")",
"_____no_output_____"
],
[
"# Tracker: CSRT\nHTML(\"\"\"\n<video width=1024 controls>\n <source src=\"race_car-CSRT.mp4\" type=\"video/mp4\">\n</video>\n\"\"\")",
"_____no_output_____"
],
[
"# Tracker: GOTURN\nHTML(\"\"\"\n<video width=1024 controls>\n <source src=\"race_car-GOTURN.mp4\" type=\"video/mp4\">\n</video>\n\"\"\")",
"_____no_output_____"
]
],
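The FPS bookkeeping in the tracking loop can be isolated into a small helper. This sketch uses `time.perf_counter` from the standard library instead of `cv2.getTickCount`/`cv2.getTickFrequency`, so it runs without OpenCV; the frame and the workload are placeholders standing in for `tracker.update(frame)`.

```python
import time

def measure_fps(process_frame, frame):
    """Time one frame-processing call and return frames per second,
    mirroring the getTickCount()/getTickFrequency() pattern above."""
    start = time.perf_counter()
    process_frame(frame)
    elapsed = time.perf_counter() - start
    return 1.0 / elapsed if elapsed > 0 else float("inf")

# Placeholder workload standing in for a real tracker update.
fps = measure_fps(lambda f: sum(f), list(range(100_000)))
print(f"FPS: {fps:.0f}")
```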
[
[
"# Thank You!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecd9be56dc0ffa700aca03534a81488579b682c4 | 825,511 | ipynb | Jupyter Notebook | examples/jupyter/sleep_patterns.ipynb | summerlabs/krangl | ba5bbcbccfc740f0022fc4bb891e7a1e61a86839 | [
"MIT"
] | 535 | 2016-06-17T14:54:52.000Z | 2022-03-29T14:58:47.000Z | examples/jupyter/sleep_patterns.ipynb | summerlabs/krangl | ba5bbcbccfc740f0022fc4bb891e7a1e61a86839 | [
"MIT"
] | 139 | 2016-06-23T09:43:38.000Z | 2022-03-23T19:52:55.000Z | examples/jupyter/sleep_patterns.ipynb | summerlabs/krangl | ba5bbcbccfc740f0022fc4bb891e7a1e61a86839 | [
"MIT"
] | 64 | 2017-04-29T04:35:24.000Z | 2022-02-23T11:30:52.000Z | 474.431609 | 239,446 | 0.900812 | [
[
[
"# Mammalian Sleep Patterns",
"_____no_output_____"
],
[
"This tutorial is intended to give you a brief overview about how to analyze data with `krangl` and `lets-plot` package. For further information see\n\n* https://github.com/holgerbrandl/krangl\n* https://github.com/JetBrains/lets-plot-kotlin\n\nWe will learn how to quickly create a variety of different plots and how to adapt plots to our specific needs.",
"_____no_output_____"
],
[
"Let's get started by pulling `krangl` into this environment.",
"_____no_output_____"
]
],
[
[
"// @file:Repository(\"*mavenLocal\")\n@file:DependsOn(\"com.github.holgerbrandl:krangl:0.17\")",
"_____no_output_____"
]
],
[
[
"Next, we also add `lets-plot` with the [`%use`](https://github.com/Kotlin/kotlin-jupyter#line-magics) command. No version is required here, but keep in mind that this reduces long-term stability of the notebook. A version could be provided in round brackets, e.g. `%use krangl(0.16.2)`, to keep this concise way of loading libraries while maintaining long-term reproducibility.",
"_____no_output_____"
]
],
[
[
"%use lets-plot",
"_____no_output_____"
]
],
[
[
"Note: In contrast to R and python, we can use versioned dependencies here, which keeps our analysis reproducible even if the underlying libraries evolve.",
"_____no_output_____"
],
[
"## The lets-plot ggplot syntax\n\n`lets-plot` adopts the API of https://ggplot2.tidyverse.org/.\nThe ggplot syntax is used to build a plot layer by layer. Usually, the following steps are involved\n\n* Defining the data to be used in the plot with ggplot(«data.frame»)\n* Specifying the visual representation of the data with geoms, i.e., `geomPoint()` or `geomLine()`\n* Specifying the features or aesthetics to represent the values in the plot with aes()\n* Optionally modifying scales, labels or adding additional layers\n\nNote: The underlying data is by default the same for all layers.\n\n**Important note: In this tutorial data is always considered to be shaped as data-frame. However, `lets-plot` also allows visualizing data shaped differently.**\n\n",
"_____no_output_____"
],
[
"# Data: Mammals Sleep",
"_____no_output_____"
],
[
"The famous `sleepData` dataset contains the sleep times and weights for a set of mammalian species. The dataset contains 83 rows and 11 variables.\n\nThe dataset ships with `krangl` as an example dataset, so there is no need to download it from elsewhere.\n\nSee [here](https://ggplot2.tidyverse.org/reference/msleep.html) for further reference.",
"_____no_output_____"
]
],
[
[
"sleepData",
"_____no_output_____"
]
],
[
[
"Let's also get an idea about the types of the individual attributes/columns",
"_____no_output_____"
]
],
[
[
"sleepData.schema()",
"_____no_output_____"
]
],
[
[
"We find: In total, 83 species are listed with various categorical and numeric attributes describing their physique and their sleeping behavior.",
"_____no_output_____"
],
[
"# Building a first plot\n\nTo get started with, we calculate a new attribute that puts total sleep and rem-sleep into proportion. In other words, we calculate the proportion in which animals are _dreaming_.",
"_____no_output_____"
]
],
[
[
"var sleepDataExt = sleepData.addColumn(\"rem_proportion\"){it[\"sleep_rem\"]/it[\"sleep_total\"]}",
"_____no_output_____"
],
[
"sleepDataExt.select{startsWith(\"sleep\") AND listOf(\"rem_proportion\")}",
"_____no_output_____"
]
],
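In plain Python, the `addColumn` step above corresponds to a per-row division in which missing values propagate. A minimal sketch with two sample rows taken from the msleep data:

```python
# rem_proportion = sleep_rem / sleep_total; None stands in for krangl's NA
# and propagates, just as krangl computes null for rows with missing values.
records = [
    {"name": "Cheetah",    "sleep_total": 12.1, "sleep_rem": None},
    {"name": "Owl monkey", "sleep_total": 17.0, "sleep_rem": 1.8},
]

for row in records:
    rem = row["sleep_rem"]
    row["rem_proportion"] = None if rem is None else rem / row["sleep_total"]

print(records[1]["rem_proportion"])  # ~0.1059
```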
[
[
"We find: As usual, some records contain missing values, so `rem_proportion` is computed as `NA` (or `null` in _kotlinspeak_).",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { }",
"_____no_output_____"
]
],
[
[
"As expected nothing happens, because we have not mapped variables to aesthetics yet. Let's do so now:",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot {x=\"sleep_total\"; y=\"rem_proportion\" } + geomPoint()",
"_____no_output_____"
]
],
[
[
"Plots can be assigned to variables for composition and visual exploration:",
"_____no_output_____"
]
],
[
[
"val myPlot = sleepDataExt.letsPlot {x=\"sleep_total\"; y=\"rem_proportion\" } + geomPoint()\nmyPlot",
"_____no_output_____"
]
],
[
[
"Naturally, we can also export plots, e.g. as SVG:",
"_____no_output_____"
]
],
[
[
"ggsave(myPlot, \"testplot.svg\")",
"_____no_output_____"
]
],
[
[
"# Mapping data attributes to aesthetics\n\nWe did so already when plotting x against y. The concept extends to other visual attributes such as size, shape, symbols or labels as well.\n\nConstant aesthetics are specified outside of mapping. Here, we specify fixed values for size and transparency.",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot {x=\"sleep_total\"; y=\"rem_proportion\" } + \n    geomPoint(size=4, alpha=0.3)",
"_____no_output_____"
]
],
[
[
"Instead of mapping to fixed values, we can also map columns (variables) in our dataset to visual attributes such as the color.",
"_____no_output_____"
]
],
[
[
"sleepDataExt\n .letsPlot {x=\"sleep_total\"; y=\"rem_proportion\"; color=\"vore\" } + \n geomPoint(size=4, alpha=.7)",
"_____no_output_____"
]
],
[
[
"We find: At first glance there is no striking correlation pattern wrt food preference.\n\nThe `alpha` was set to accommodate possible over-plotting, i.e. overlapping points.",
"_____no_output_____"
],
[
"Still we don't see much of pattern here, so let's bring in another attribute: We map the average brain size of each species to the point size ",
"_____no_output_____"
]
],
[
[
"sleepDataExt\n .letsPlot {x=\"sleep_total\"; y=\"rem_proportion\"; color=\"vore\"; size=\"brainwt\" } + \n geomPoint(size=4, alpha=.7) \n",
"Line_25.jupyter-kts (2:67 - 71) Unresolved reference: size"
]
],
[
[
"This does not work; see the chapter about `lets-plot` limitations below.",
"_____no_output_____"
],
[
"# Multiple layers\n\nTo perform a distribution analysis, a histogram comes to mind. Luckily we can use the same paradigm - mapping of data attribute to aesthetics - here.",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { x=\"sleep_total\"} + geomHistogram(binWidth = 2)",
"_____no_output_____"
]
],
[
[
"This looks good, but we still can't easily compare groups while keeping an eye on the raw data. So let's try out a [boxplot](https://en.wikipedia.org/wiki/Box_plot) instead. ",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { x=\"vore\"; y=\"sleep_total\"} + geomBoxplot()",
"_____no_output_____"
]
],
[
[
"It's in the nature of a boxplot to display only quantiles of a distribution. So we lose track of outliers and of any multi-modal shape the distributions may have.\n\nLuckily, we can superimpose a second layer here to show the raw data as well.",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { x=\"vore\"; y=\"sleep_total\"} +\n geomBoxplot() + \n geomPoint()",
"_____no_output_____"
]
],
[
[
"This looks good, but the points all being aligned horizontally lacks beauty, so we randomize their position along x.",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { x=\"vore\"; y=\"sleep_total\"} +\n geomBoxplot() + \n geomPoint(position=positionJitter(0.3))",
"_____no_output_____"
]
],
[
[
"We find: Overlaying the boxplot with the raw data paid off right away. First, the groups have different sizes; second, the insectivores show a bi-modal distribution in total sleep time.",
"_____no_output_____"
],
[
"# Conservation Status",
"_____no_output_____"
],
[
"To better understand our dataset, we want to see how conservation status and food preference relate to each other.\n\nConservation status (source: Wikipedia):\n* cd = Conservation Dependent (now part of NT): stable and sizable populations depend on sustained conservation activity.\n* domesticated\n* en = Endangered (EN): faces a high risk of extinction in the near future.\n* lc = Least Concern (LC): species that have been evaluated and found to be so common that no conservation concern is projected in the foreseeable future.\n* nt = Near Threatened (NT): close to qualifying for listing as Vulnerable but not fully meeting those criteria; slowly declining or fairly small populations, but probably no danger of going extinct even without conservation activity in the foreseeable future, or threats suspected to affect the taxon in the near future but still avoidable.\n* vu = Vulnerable (VU): faces a considerable risk of extinction in the medium term",
"_____no_output_____"
],
[
"Let check the overall distribution in our data set",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { x=\"conservation\"} + geomBar()",
"_____no_output_____"
]
],
[
[
"We find: The majority of the species in our dataset is not endangered.\n\nLet's spice this up by adding the food preference.",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { x=\"conservation\"; fill=\"vore\"} +\n geomBar()",
"_____no_output_____"
]
],
[
[
"... or plot with both attributes being flipped",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { x=\"vore\"; fill=\"conservation\"} +\n geomBar()",
"_____no_output_____"
]
],
[
[
"We find: Endangered species seem enriched among carnivores and herbivores. However, there is a large proportion of missing values here, so our finding feels inconclusive.\n\nTo display the same data also in proportional terms, we can simply adjust the style:",
"_____no_output_____"
]
],
[
[
"sleepDataExt.letsPlot { x=\"vore\"; fill=\"conservation\"} + \n geomBar(position=Pos.fill)",
"_____no_output_____"
]
],
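`geomBar(position=Pos.fill)` normalizes counts within each x group so every stacked bar sums to 1. Computed by hand on toy counts (illustrative values, not the real dataset):

```python
from collections import Counter

# (vore, conservation) -> count; toy numbers for illustration only.
counts = {("carni", "lc"): 2, ("carni", "en"): 2, ("herbi", "lc"): 6}

# Total count per vore group.
totals = Counter()
for (vore, _), n in counts.items():
    totals[vore] += n

# Within-group proportions, i.e. what the filled bars display.
proportions = {key: n / totals[key[0]] for key, n in counts.items()}
print(proportions[("carni", "lc")])  # 0.5
```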
[
[
"# Scales\n\nScales are required to give the plot reader a sense of reference and thus encompass the ideas of both axes and legends on plots.\n\n`ggplot2` will usually automatically choose appropriate scales and display legends if necessary.\n\nIt is, however, very easy to override or modify the default values.\n\nThe scale syntax is scale«Attribute»«OptionalSubspecification»(), e.g. `scaleXContinuous()`, `scaleXDiscrete()` or `scaleXLog10()`.\n",
"_____no_output_____"
],
[
"Although being just 83 records, this dataset is very rich. Let's analyze the data for possible correlation between physical dimensions and sleep properties.",
"_____no_output_____"
]
],
[
[
"val corPlot = sleepData.letsPlot { x=\"bodywt\"; y=\"sleep_total\"} + \n geomPoint()",
"_____no_output_____"
],
[
"corPlot + xlab(\"Body Weight\")",
"_____no_output_____"
]
],
[
[
"We find: It's hard to analyze the plot because of some outliers. We could filter them away first:",
"_____no_output_____"
]
],
[
[
"sleepData\n .filter{it[\"bodywt\"] lt 2000}\n .letsPlot { x=\"bodywt\"; y=\"sleep_total\"} + \n geomPoint()",
"_____no_output_____"
]
],
[
[
"But then we would just study some subset. So as alternative, we can also modulate the scale on the x.",
"_____no_output_____"
]
],
[
[
"corPlot + xlab(\"Body Weight\") + scaleXLog10()",
"_____no_output_____"
]
],
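Why the log10 scale helps: body weights in this dataset span roughly six orders of magnitude, so a linear axis squashes almost all species into a sliver next to the elephants. A quick check with three species (weights in kg, taken from the msleep data):

```python
import math

weights = {
    "Lesser short-tailed shrew": 0.005,
    "Human": 62.0,
    "African elephant": 6654.0,
}

# log10 spreads the six-orders-of-magnitude range onto a compact axis.
log_weights = {name: math.log10(w) for name, w in weights.items()}
for name, lw in log_weights.items():
    print(f"{name}: {lw:.2f}")
```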
[
[
"It's hard to see a correlation, so let's overlay the display with a regression model.",
"_____no_output_____"
]
],
[
[
"corPlot + \n xlab(\"Body Weight\") + \n scaleXLog10() +\n geomSmooth() + \n ggtitle(\"Correlation between sleep time and body weight\")",
"_____no_output_____"
]
],
[
[
"Technically, we can always fit a linear trend model here, but the fitted line alone does not imply a meaningful relationship.\n\nWe find: There is no strong correlation between sleep time and body weight. But we never know without asking the data. A negative result can be as informative as a positive finding.",
"_____no_output_____"
],
[
"# Faceting\n\nThe ggplot2 package provides two interesting functionalities to look at subgroups in your data. Also `lets-plot` as well `kravis` support this great function.\n\nFaceting is useful to split the data into different groups that are displayed next to each other.",
"_____no_output_____"
]
],
[
[
"sleepData.letsPlot { x=\"bodywt\"; y=\"sleep_total\"} +\n geomPoint() +\n// scaleXLog10(\"Body Weight\") + \n// scaleYLog10(\"Sleep Time\") + \n facetWrap(\"vore\") + // no scales argument?\n ggtitle(\"Correlation Total Sleep Time and Body Weight per Species\")",
"_____no_output_____"
]
],
[
[
"We find: Since the panels are sharing the same axes ranges, we still face the outlier problem.\n\nDecoupling the axes does not seem possible with `lets-plot` at the moment.\n\nThe good news: There is an emerging data-science ecosystem in Kotlin, so we could use a different library such as [kravis](https://github.com/holgerbrandl/kravis). `kravis` is a wrapper around R/ggplot and has more complex setup requirements.\n\nFirst, we load the kravis library into this environment:",
"_____no_output_____"
]
],
[
[
"@file:DependsOn(\"com.github.holgerbrandl:kravis:0.8.1\")",
"_____no_output_____"
]
],
[
[
"Second, we rebuild the plot using the highly similar `kravis` API (like `lets-plot`, its foundation is `ggplot2` from R).",
"_____no_output_____"
]
],
[
[
"sleepData.plot( x=\"bodywt\", y=\"sleep_total\")\n .geomPoint()\n .facetWrap(\"vore\", scales=FacetScales.free)\n .title(\"Correlation Total Sleep Time and Body Weight per Species\")",
"_____no_output_____"
]
],
[
[
"We find: There is no obvious correlation when analyzing the data separately per food preference.",
"_____no_output_____"
],
[
"# Limitations",
"_____no_output_____"
],
[
"One motivation of the author of this tutorial was to assess the current capabilities of `lets-plot`. It has undergone a great and amazing evolution over the last 2 years and provides a great API experience and a wide range of plotting capabilities.\n\nAs always, there are a few shortcomings that are to some extent personal taste, or maybe just areas where the library is still under development.",
"_____no_output_____"
],
[
"## No theming",
"_____no_output_____"
],
[
"Note: Currently there does not seem to be a way to modify the style of the plot, also known as _theming_. See also https://github.com/JetBrains/lets-plot/issues/221. \n\n",
"_____no_output_____"
],
[
"So let's rebuild the plot with kravis. Note the calls of `theme*` to adjust the overall appearance of the plot.",
"_____no_output_____"
]
],
[
[
"sleepDataExt\n .plot(x=\"sleep_total\", y=\"rem_proportion\", color=\"vore\")\n .geomPoint(size=4.0, alpha=.7)\n .themeMinimal()\n .theme(axisTitle = ElementText(size = 20, color = RColor.red))\n .show()",
"_____no_output_____"
]
],
[
[
"## Lack of more versatile aesthetics mapping\n",
"_____no_output_____"
],
[
"As reported in https://github.com/JetBrains/lets-plot-kotlin/issues/82 data attributes can be mapped to only a few aesthetics at the moment. E.g., as shown above there is no way to map a data attribute to `size`.\n\n**Update** After discussion with its developers, we've learnt that additional mappings are already available via the `geoms`. Example `p + geomPoint {size=\"brainwt\"}`\n\nSo let's try to do so with `kravis` instead:",
"_____no_output_____"
]
],
[
[
"sleepDataExt\n .plot(x=\"sleep_total\", y=\"rem_proportion\", color=\"vore\", size=\"brainwt\")\n .geomPoint(alpha=.7)\n .show()",
"_____no_output_____"
]
],
[
[
"We find: With this enhanced plot, we can see that _grass-eaters_ with a large brain tend to sleep the least. \n",
"_____no_output_____"
],
[
"## API Design\n\nIt's still unclear - to the author - why `lets-plot` is not adopting kotlin conventions to chain plot composition. By using `.` instead of the `+` used in `lets-plot`, we would have a more seamless platform experience and could break lines in a more _kotlinesque_ style.\n\nOne motivation for doing so might be to provide an easier migration from R, but the resulting API (mix of `.` and `+`) feels inconsistent.",
"_____no_output_____"
],
[
"## Just linear trend in `geomSmooth`\n\nA nice feature of `ggplot2` are the flexible smoothing backends such as 'loess'. Currently, it seems only possible to do linear fits with `lets-plot/geomSmooth`.\n\nAlso here `kravis` could step in if needed:\n",
"_____no_output_____"
]
],
[
[
"sleepData\n .plot( x=\"bodywt\", y=\"sleep_total\") \n .xLabel(\"Body Weight\")\n .scaleXLog10()\n .geomPoint()\n .geomSmooth()",
"_____no_output_____"
]
],
[
[
"# Summary\n\n\nWe've learnt something interesting about biology today. :-)\n\n\nAs we've seen in this tutorial, we can analyze complex data with ease using tools such as `krangl` for data manipulation and `lets-plot` for visualization. Where needed, we can complement an analysis with additional libraries such as `kravis` for more advanced types of visualization.\n\nSupported by the functionality of the [kotlin-kernel](https://github.com/Kotlin/kotlin-jupyter), data science becomes more and more fluent and fun in Kotlin.\n\nFor questions and comments, feel welcome to get in touch via [kotlin slack](https://app.slack.com/client/T09229ZC6/C4W52CFEZ) or directly via [twitter](https://twitter.com/holgerbrandl).\n\nIf you have enjoyed this tutorial, don't forget to check out http://holgerbrandl.github.io/krangl/ to learn more about data-science with kotlin.\n\nThanks to the kotlin community and in particular the authors of `lets-plot` for making this notebook possible.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecd9c91bfaf2f6459a45ca3f270fc89c39622f1f | 21,131 | ipynb | Jupyter Notebook | sagemaker-python-sdk/tensorflow_iris_dnn_classifier_using_estimators/tensorflow_iris_dnn_classifier_using_estimators_elastic_inference_local.ipynb | Intellagent/amazon-sagemaker-examples | 80cb4e7e43dc560f3e8febe3dab778a32b1ed0cb | [
"Apache-2.0"
] | 3 | 2019-03-26T14:50:17.000Z | 2019-12-07T13:51:38.000Z | sagemaker-python-sdk/tensorflow_iris_dnn_classifier_using_estimators/tensorflow_iris_dnn_classifier_using_estimators_elastic_inference_local.ipynb | Intellagent/amazon-sagemaker-examples | 80cb4e7e43dc560f3e8febe3dab778a32b1ed0cb | [
"Apache-2.0"
] | null | null | null | sagemaker-python-sdk/tensorflow_iris_dnn_classifier_using_estimators/tensorflow_iris_dnn_classifier_using_estimators_elastic_inference_local.ipynb | Intellagent/amazon-sagemaker-examples | 80cb4e7e43dc560f3e8febe3dab778a32b1ed0cb | [
"Apache-2.0"
] | 2 | 2019-07-09T18:32:20.000Z | 2020-09-11T19:07:55.000Z | 43.931393 | 557 | 0.632625 | [
[
[
"# Using Amazon Elastic Inference with TensorFlow on an Amazon SageMaker Notebook Instance\n\nThis notebook demonstrates how to enable and utilize Amazon Elastic Inference with our predefined SageMaker TensorFlow containers on your SageMaker notebook instance. This notebook will train locally on your notebook instance and make inferences to the EI accelerator attached to your notebook instance.\n\nAmazon Elastic Inference (EI) is a resource you can attach to your Amazon EC2 instances to accelerate your deep learning (DL) inference workloads. EI allows you to add inference acceleration to an Amazon SageMaker hosted endpoint or Jupyter notebook for a fraction of the cost of using a full GPU instance. Since EI is only meant for inferences, no training logic changes are needed. For more information please visit: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html\n\nThis notebook is an adaption of the [SageMaker TensorFlow Iris DNN classifier](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_iris_dnn_classifier_using_estimators/tensorflow_iris_dnn_classifier_using_estimators.ipynb), with changes showing the easy changes needed to enable and use EI with TensorFlow on SageMaker.\n\n1. [The Iris dataset](#The-Iris-dataset)\n1. [Setup](#Setup)\n1. [tf.estimator](#tf.estimator)\n1. [Construct a deep neural network classifier](#Construct-a-deep-neural-network-classifier)\n 1. [Complete neural network source code](#Complete-neural-network-source-code)\n 1. [Using a tf.estimator in SageMaker](#Using-a-tf.estimator-in-SageMaker)\n 1. [Describe the training input pipeline](#Describe-the-training-input-pipeline)\n 1. [Describe the serving input pipeline](#Describe-the-serving-input-pipeline)\n1. [Train a Model on Amazon SageMaker using TensorFlow custom code](#Train-a-Model-on-Amazon-SageMaker-using-TensorFlow-custom-code)\n1. 
[Deploy the trained Model to an Endpoint with an attached EI accelerator](#Deploy-the-trained-Model-to-an-Endpoint-with-an-attached-EI-accelerator)\n 1. [Using EI with a SageMaker notebook instance](#Using-EI-with-a-SageMaker-notebook-instance)\n 1. [Invoke the Endpoint to get inferences locally](#Invoke-the-Endpoint-to-get-inferences-locally)\n 1. [Delete the Endpoint](#Delete-the-Endpoint)\n\nIf you are familiar with SageMaker and already have a trained model, skip ahead to the [Creating-an-inference-endpoint section](#Creating-an-inference-endpoint-with-EI)\n\nFor this example, we will be utilizing the SageMaker Python SDK, which helps deploy your models for training and hosting in SageMaker.\n\nIn this tutorial, you'll use tf.estimator to construct a\n[neural network](https://en.wikipedia.org/wiki/Artificial_neural_network)\nclassifier and train it on the\n[Iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) to\npredict flower species based on sepal/petal geometry. You'll write code to\nperform the following five steps:\n\n1. Deploy a TensorFlow container in SageMaker\n2. Load CSVs containing Iris training/test data from a S3 bucket into a TensorFlow `Dataset`\n3. Construct a `tf.estimator.DNNClassifier` neural network classifier\n4. Train the model using the training data\n5. Host the model in an endpoint with EI\n6. Classify new samples invoking the endpoint\n\nThis tutorial is a simplified version of TensorFlow's [get_started/estimator](https://www.tensorflow.org/get_started/estimator#fit_the_dnnclassifier_to_the_iris_training_data) tutorial but using SageMaker and the SageMaker Python SDK to simplify training and hosting.",
"_____no_output_____"
],
[
"## The Iris dataset\n\nThe [Iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) contains\n150 rows of data, comprising 50 samples from each of three related Iris species:\n*Iris setosa*, *Iris virginica*, and *Iris versicolor*.\n\n **From left to right,\n[*Iris setosa*](https://commons.wikimedia.org/w/index.php?curid=170298) (by\n[Radomil](https://commons.wikimedia.org/wiki/User:Radomil), CC BY-SA 3.0),\n[*Iris versicolor*](https://commons.wikimedia.org/w/index.php?curid=248095) (by\n[Dlanglois](https://commons.wikimedia.org/wiki/User:Dlanglois), CC BY-SA 3.0),\nand [*Iris virginica*](https://www.flickr.com/photos/33397993@N05/3352169862)\n(by [Frank Mayfield](https://www.flickr.com/photos/33397993@N05), CC BY-SA\n2.0).**\n\nEach row contains the following data for each flower sample:\n[sepal](https://en.wikipedia.org/wiki/Sepal) length, sepal width,\n[petal](https://en.wikipedia.org/wiki/Petal) length, petal width, and flower\nspecies. Flower species are represented as integers, with 0 denoting *Iris\nsetosa*, 1 denoting *Iris versicolor*, and 2 denoting *Iris virginica*.\n\nSepal Length | Sepal Width | Petal Length | Petal Width | Species\n:----------- | :---------- | :----------- | :---------- | :-------\n5.1 | 3.5 | 1.4 | 0.2 | 0\n4.9 | 3.0 | 1.4 | 0.2 | 0\n4.7 | 3.2 | 1.3 | 0.2 | 0\n… | … | … | … | …\n7.0 | 3.2 | 4.7 | 1.4 | 1\n6.4 | 3.2 | 4.5 | 1.5 | 1\n6.9 | 3.1 | 4.9 | 1.5 | 1\n… | … | … | … | …\n6.5 | 3.0 | 5.2 | 2.0 | 2\n6.2 | 3.4 | 5.4 | 2.3 | 2\n5.9 | 3.0 | 5.1 | 1.8 | 2\n\nFor this tutorial, the Iris data has been randomized and split into two separate\nCSVs:\n\n* A training set of 120 samples\n iris_training.csv\n* A test set of 30 samples\n iris_test.csv\n\nThese files are provided in the SageMaker sample data bucket:\n**s3://sagemaker-sample-data-{region}/tensorflow/iris**. Copies of the bucket exist in each SageMaker region. When we access the data, we'll replace {region} with the AWS region the notebook is running in.",
"_____no_output_____"
],
[
"## Setup\n\nLet's start by creating a SageMaker session and specifying the IAM role arn used to give training and hosting access to your data. See the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace `sagemaker.get_execution_role()` with the appropriate full IAM role arn string(s).",
"_____no_output_____"
]
],
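The Iris CSVs start with a header row (encoding the row count, feature count and class names) followed by four numeric features and an integer species label per line. A minimal stdlib-only parse of two sample rows from the table above:

```python
import csv
import io

# Two sample rows in the iris_training.csv layout; the header values are
# an assumption about the file format described above.
sample = """120,4,setosa,versicolor,virginica
5.1,3.5,1.4,0.2,0
7.0,3.2,4.7,1.4,1
"""

reader = csv.reader(io.StringIO(sample))
next(reader)  # skip the header line
rows = [([float(v) for v in line[:4]], int(line[4])) for line in reader]
print(rows[0])  # ([5.1, 3.5, 1.4, 0.2], 0)
```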
[
[
"import sagemaker\n\nsagemaker_session = sagemaker.Session()\n\nbucket = sagemaker_session.default_bucket()\nprefix = 'sagemaker/DEMO-tensorflow-iris'\n\nrole = sagemaker.get_execution_role()",
"_____no_output_____"
]
],
[
[
"This notebook shows how to use the SageMaker Python SDK to run your code in a local container before deploying to SageMaker's managed training or hosting environments. Just change your estimator's train_instance_type to local or local_gpu. For more information, see [local mode](https://github.com/aws/sagemaker-python-sdk#local-mode).\n\nTo use Amazon Elastic Inference locally change your `accelerator_type` to `local_sagemaker_notebook` when calling `deploy()`.\n\n***`local_sagemaker_notebook` will only work if you created your notebook instance with an EI accelerator attached to it.***\n\nIn order to use this feature you'll need to install docker-compose (and nvidia-docker if training with a GPU). Running following script will install docker-compose or nvidia-docker-compose and configure the notebook environment for you.\n\nNote, you can only run a single local notebook at a time.",
"_____no_output_____"
]
],
[
[
"!/bin/bash ./setup.sh",
"_____no_output_____"
]
],
[
[
"# tf.estimator",
"_____no_output_____"
],
[
"The tf.estimator framework makes it easy to construct and train machine learning models via its high-level Estimator API. Estimator offers classes you can instantiate to quickly configure common model types such as regressors and classifiers:\n\n\n* **```tf.estimator.LinearClassifier```**:\n  Constructs a linear classification model.\n* **```tf.estimator.LinearRegressor```**:\n  Constructs a linear regression model.\n* **```tf.estimator.DNNClassifier```**:\n  Constructs a neural network classification model.\n* **```tf.estimator.DNNRegressor```**:\n  Constructs a neural network regression model.\n* **```tf.estimator.DNNLinearCombinedClassifier```**:\n  Constructs a combined neural network and linear classification model.\n* **```tf.estimator.DNNLinearCombinedRegressor```**:\n  Constructs a combined neural network and linear regression model.\n \nMore information about estimators can be found [here](https://www.tensorflow.org/extend/estimators)",
"_____no_output_____"
],
[
"# Construct a deep neural network classifier",
"_____no_output_____"
],
[
"## Complete neural network source code \n\nHere is the full code for the neural network classifier:",
"_____no_output_____"
]
],
[
[
"!cat \"iris_dnn_classifier.py\"",
"_____no_output_____"
]
],
[
[
"With a few lines of code, using SageMaker and TensorFlow, you can create a deep neural network model, ready for training and hosting. Let's take a deeper look at the code.",
"_____no_output_____"
],
[
"### Using a tf.estimator in SageMaker\nUsing a TensorFlow estimator in SageMaker is very easy, you can create one with few lines of code:",
"_____no_output_____"
]
],
[
[
"def estimator(model_path, hyperparameters):\n feature_columns = [tf.feature_column.numeric_column(INPUT_TENSOR_NAME, shape=[4])]\n return tf.estimator.DNNClassifier(feature_columns=feature_columns,\n hidden_units=[10, 20, 10],\n n_classes=3,\n model_dir=model_path)",
"_____no_output_____"
]
],
[
[
"The code above first defines the model's feature columns, which specify the data\ntype for the features in the data set. All the feature data is continuous, so\n`tf.feature_column.numeric_column` is the appropriate function to use to\nconstruct the feature columns. There are four features in the data set (sepal\nwidth, sepal height, petal width, and petal height), so accordingly `shape`\nmust be set to `[4]` to hold all the data.\n\nThen, the code creates a `DNNClassifier` model using the following arguments:\n\n* `feature_columns=feature_columns`. The set of feature columns defined above.\n* `hidden_units=[10, 20, 10]`. Three\n [hidden layers](http://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw),\n containing 10, 20, and 10 neurons, respectively.\n* `n_classes=3`. Three target classes, representing the three Iris species.\n* `model_dir=model_path`. The directory in which TensorFlow will save\n checkpoint data during model training. ",
"_____no_output_____"
],
[
"### Describe the training input pipeline\n\nThe `tf.estimator` API uses input functions, which create the TensorFlow\noperations that generate data for the model.\nWe can use `tf.estimator.inputs.numpy_input_fn` to produce the input pipeline:",
"_____no_output_____"
]
],
[
[
"def train_input_fn(training_dir, hyperparameters):\n training_set = tf.contrib.learn.datasets.base.load_csv_with_header(\n filename=os.path.join(training_dir, 'iris_training.csv'),\n target_dtype=np.int,\n features_dtype=np.float32)\n\n return tf.estimator.inputs.numpy_input_fn(\n x={INPUT_TENSOR_NAME: np.array(training_set.data)},\n y=np.array(training_set.target),\n num_epochs=None,\n shuffle=True)()",
"_____no_output_____"
]
],
[
[
"### Describe the serving input pipeline\n\nAfter training your model, SageMaker will host it in TensorFlow Serving. You need to describe a serving input function:",
"_____no_output_____"
]
],
[
[
"def serving_input_fn(hyperparameters):\n feature_spec = {INPUT_TENSOR_NAME: tf.FixedLenFeature(dtype=tf.float32, shape=[4])}\n return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()",
"_____no_output_____"
]
],
[
[
"Now we are ready to submit the script for training.",
"_____no_output_____"
],
[
"# Train a Model on Amazon SageMaker using TensorFlow custom code\n\nWe can use the SDK to run our training script locally.\n\n1. Pass the path to the `iris_dnn_classifier.py` file, which contains the functions for defining your estimator, to the `sagemaker.tensorflow.TensorFlow` constructor.\n2. Pass the S3 location that we uploaded our data to previously to the `fit()` method.\n3. Pass `local` as the `train_instance_type`. By passing `local`, training will be done inside a Docker container on this notebook instance.",
"_____no_output_____"
]
],
[
[
"from sagemaker.tensorflow import TensorFlow\n\niris_estimator = TensorFlow(entry_point='iris_dnn_classifier.py',\n role=role,\n framework_version='1.11',\n train_instance_count=1,\n train_instance_type='local',\n training_steps=1000,\n evaluation_steps=100)",
"_____no_output_____"
],
[
"%%time\nimport boto3\n\n# use the region-specific sample data bucket\nregion = boto3.Session().region_name\ntrain_data_location = 's3://sagemaker-sample-data-{}/tensorflow/iris'.format(region)\n\niris_estimator.fit(train_data_location)",
"_____no_output_____"
]
],
[
[
"# Deploy the trained Model to an Endpoint with an attached EI accelerator\n\nThe `deploy()` method creates an endpoint which serves prediction requests in real-time.\n\nThe only change required for utilizing EI with our SageMaker TensorFlow containers is providing an `accelerator_type` parameter, which determines which type of EI accelerator to attach to your endpoint. The supported types of accelerators can be found here: https://aws.amazon.com/sagemaker/pricing/instance-types/\n\nNo code changes are necessary for your model, as our predefined TensorFlow containers utilize TensorFlow Serving, which has been modified to utilize EI for inference, as long as an EI accelerator is attached to the endpoint.\n\n## Using EI with a SageMaker notebook instance\n\nHere we're going to utilize the EI accelerator attached to our local SageMaker notebook instance. This can be done by using `local_sagemaker_notebook` as the value for `accelerator_type`. This will make an inference request against the TensorFlow Serving endpoint running on this Notebook Instance with an attached EI.\n\nAn EI accelerator must be attached in order to make inferences using EI.\n\nAs of now, an EI accelerator attached to a notebook will initialize for the first deep learning framework used for inference against EI. If you wish to use EI with another deep learning framework, please either restart or create a new notebook instance with the new EI.\n\n***`local_sagemaker_notebook` will only work if you created your notebook instance with an EI accelerator attached to it.***\n\n***Please restart or create a new notebook instance if you wish to use EI with a different framework than the first framework used on this notebook instance, as specified when calling `deploy()` with `local_sagemaker_notebook` for `accelerator_type`.***",
"_____no_output_____"
]
],
[
[
"%%time\niris_predictor = iris_estimator.deploy(initial_instance_count=1,\n instance_type='local',\n accelerator_type='local_sagemaker_notebook')",
"_____no_output_____"
]
],
[
[
"# Invoke the Endpoint to get inferences locally\n\nInvoking prediction:",
"_____no_output_____"
]
],
[
[
"%%time\niris_predictor.predict([6.4, 3.2, 4.5, 1.5]) #expected label to be 1",
"_____no_output_____"
]
],
[
[
"# Delete the Endpoint\n\nAfter you have finished with this example, remember to delete the prediction endpoint.",
"_____no_output_____"
]
],
[
[
"print(iris_predictor.endpoint)",
"_____no_output_____"
],
[
"import sagemaker\n\niris_predictor.delete_endpoint()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
ecd9c979e8a70fc3bb202646db1b24f3905e9270 | 260,438 | ipynb | Jupyter Notebook | Modelagem/transacoes/analise_qtd_transacoes.ipynb | lsawakuchi/x1 | 564a135b4fdaa687a4ef6d470ddaa4730932d429 | [
"MIT"
] | null | null | null | Modelagem/transacoes/analise_qtd_transacoes.ipynb | lsawakuchi/x1 | 564a135b4fdaa687a4ef6d470ddaa4730932d429 | [
"MIT"
] | null | null | null | Modelagem/transacoes/analise_qtd_transacoes.ipynb | lsawakuchi/x1 | 564a135b4fdaa687a4ef6d470ddaa4730932d429 | [
"MIT"
] | null | null | null | 43.240578 | 39,908 | 0.57924 | [
[
[
"import pandas as pd\nimport numpy as np\nfrom datetime import datetime\nfrom sqlalchemy import create_engine\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nfrom plotly import plotly\nfrom plotly.offline import init_notebook_mode, iplot\nfrom plotly import graph_objs as go\nimport matplotlib.pyplot as plt\nimport seaborn as sns\ninit_notebook_mode(connected=True)",
"_____no_output_____"
],
[
"engine = create_engine(\"mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/varejo\")\ncon = engine.connect()\ndfmoip = pd.read_sql(\"select cpf_cnpj as cnpj, data, pgto_liquido as valor, numero_transacoes from fluxo_moip\", con)\ncon.close()",
"_____no_output_____"
],
[
"engine = create_engine(\"mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/varejo\")\ncon = engine.connect()\ndfmoip_flag = pd.read_sql(\"select distinct cpf_cnpj as cnpj, flag_aprovacao from fluxo_moip\", con)\ncon.close()",
"_____no_output_____"
],
[
"dfmoip_flag.shape",
"_____no_output_____"
],
[
"dfmoip_flag = dfmoip_flag[dfmoip_flag['cnpj']!='00.000.000/0001-91']",
"_____no_output_____"
],
[
"dfmoip_flag = dfmoip_flag.iloc[1:, :]",
"_____no_output_____"
],
[
"dfmoip_flag.shape",
"_____no_output_____"
],
[
"dfmoip_flag['flag_cnpj'] = dfmoip_flag.apply(lambda x : int(len(x['cnpj'])==18), axis=1)",
"_____no_output_____"
],
[
"dfmoip_flag = dfmoip_flag[dfmoip_flag['flag_cnpj']==1]",
"_____no_output_____"
],
[
"dfmoip_flag.shape",
"_____no_output_____"
],
[
"dfmoip['cnpj'].unique().tolist().__len__()",
"_____no_output_____"
],
[
"dfmoip = dfmoip[dfmoip['cnpj']!='00.000.000/0001-91']\ndfmoip = dfmoip.iloc[1:, :]",
"_____no_output_____"
],
[
"dfmoip[\"flag_cnpj\"] = dfmoip.apply(lambda x : int(len(x[\"cnpj\"])==18), axis=1)",
"_____no_output_____"
],
[
"dfmoip = dfmoip[dfmoip['flag_cnpj']==1]",
"_____no_output_____"
],
[
"dfmoip[\"cnpj\"].unique().tolist().__len__()",
"_____no_output_____"
],
[
"dfmoip.drop(columns='flag_cnpj', axis=1, inplace=True)\ndfmoip[\"produto\"] = 'moip'",
"_____no_output_____"
],
[
"dfmoip.head()",
"_____no_output_____"
],
[
"engine = create_engine(\"mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/varejo\")\ncon = engine.connect()\ndfpv = pd.read_sql(\"select cpf_cnpj as cnpj, data, valor, numero_transacoes from fluxo_pv\", con)\ncon.close()",
"_____no_output_____"
],
[
"dfpv[\"flag_cnpj\"] = dfpv.apply(lambda x : int(len(x[\"cnpj\"])==18), axis=1)\n\ndfpv = dfpv[dfpv['cnpj']!='00.000.000/0001-91']\n\ndfpv = dfpv[dfpv['flag_cnpj']==1]\n\ndfpv.drop(columns=['flag_cnpj'], axis=1, inplace=True)\n\ndfpv['produto'] = 'pv'",
"_____no_output_____"
],
[
"dfpv[\"cnpj\"].unique().tolist().__len__()",
"_____no_output_____"
],
[
"dfpv.head()",
"_____no_output_____"
],
[
"engine = create_engine(\"mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/varejo\")\ncon = engine.connect()\ndfpv_flag = pd.read_sql(\"select distinct cpf_cnpj as cnpj, flag_aprovacao from fluxo_pv\", con)\ncon.close()",
"_____no_output_____"
],
[
"dfpv_flag['flag_cnpj'] = dfpv_flag.apply(lambda x : int(len(x['cnpj'])==18), axis=1)",
"_____no_output_____"
],
[
"dfpv_flag = dfpv_flag[dfpv_flag['cnpj']!='00.000.000/0001-91']",
"_____no_output_____"
],
[
"dfpv_flag.shape\n",
"_____no_output_____"
],
[
"dfpv_flag = dfpv_flag[dfpv_flag['flag_cnpj']==1]",
"_____no_output_____"
],
[
"dfpv_flag.head()",
"_____no_output_____"
],
[
"df = pd.concat([dfmoip, dfpv])",
"_____no_output_____"
],
[
"df['cnpj'].unique().tolist().__len__()",
"_____no_output_____"
],
[
"resp = []\nfor el in df['cnpj'].unique().tolist():\n dt = df[df['cnpj']==el]\n dt = dt.dropna(subset=['valor'])\n if not dt.dropna(subset=['numero_transacoes']).empty:\n dt = dt.fillna(int(dt['numero_transacoes'].mean()))\n dt.index=pd.to_datetime(dt.data)\n dt = dt.resample('MS').sum().reset_index()\n dt[\"cnpj\"] = el\n resp.append(dt)",
"_____no_output_____"
],
[
"df = pd.concat(resp)",
"_____no_output_____"
],
[
"df['cnpj'].unique().tolist().__len__()",
"_____no_output_____"
],
[
"_df = pd.concat([dfmoip, dfpv])",
"_____no_output_____"
],
[
"dfproduto = _df[['cnpj', 'produto']].drop_duplicates()",
"_____no_output_____"
],
[
"dfproduto.head()",
"_____no_output_____"
],
[
"# average number of transactions per product",
"_____no_output_____"
],
[
"df_media = df.groupby('cnpj').mean().reset_index()[['cnpj', 'numero_transacoes']]",
"_____no_output_____"
],
[
"df_media = df_media.merge(dfproduto, left_on='cnpj', right_on='cnpj', how='left')",
"_____no_output_____"
],
[
"# dt = df_media[df_media['produto']=='pv']\n# trace1 = go.Histogram(\n# x = dt['numero_transacoes'],\n# marker = dict(color=\"rgb(240,134,134)\"),\n# name = \"pv\",\n# )\n# dt = df_media[df_media['produto']=='moip']\n# trace2 = go.Histogram(\n# x = dt['numero_transacoes'],\n# marker = dict(color=\"rgb(88,206,149)\"),\n# name = \"moip\",\n# # xbins = dict(\n# # start=_df[\"ops_semanal\"].min(),\n# # end=_df[\"ops_semanal\"].max(),\n# # size = 1\n# # )\n \n# )\n# data = [trace2, trace1]\n# layout = go.Layout(title=\"Numero médio de transações mensais\", barmode='overlay')\n# fig = go.Figure(data=data, layout=layout)\n# iplot(fig)",
"_____no_output_____"
],
[
"df_media.sort_values('numero_transacoes', ascending=False).head()",
"_____no_output_____"
],
[
"dfm = df_media[df_media['produto']=='moip']",
"_____no_output_____"
],
[
"dfp = df_media[df_media['produto']=='pv']",
"_____no_output_____"
],
[
"dfm.sort_values('numero_transacoes', ascending=False).head()",
"_____no_output_____"
],
[
"d1 = dfm[dfm['numero_transacoes']<100]\ntrace1 = go.Histogram(\n x = d1['numero_transacoes'],\n marker = dict(color=\"rgb(240,134,134)\"),\n name = \"moip < 100\",\n opacity = 0.75,\n xbins = dict(start=0, end=100, size=3)\n)\nd1 = dfp[dfp['numero_transacoes']<100]\ntrace2 = go.Histogram(\n x = d1['numero_transacoes'],\n marker = dict(color=\"rgb(88,206,149)\"),\n name = \"pv < 100\",\n opacity=0.6\n)\nlayout = go.Layout(title=\"Numero médio de transações mensais\", barmode='overlay')\nfig = go.Figure(data=[trace2, trace1], layout=layout)\niplot(fig)",
"_____no_output_____"
],
[
"d1 = dfm[(dfm['numero_transacoes']>=100) & (dfm['numero_transacoes']<1000)]\ntrace1 = go.Histogram(\n x = d1['numero_transacoes'],\n marker = dict(color=\"rgb(240,134,134)\"),\n name = \"moip < 1000\",\n opacity = 0.75\n)\nd1 = dfp[(dfp['numero_transacoes']>=100) & (dfp['numero_transacoes']<1000)]\ntrace2 = go.Histogram(\n x = d1['numero_transacoes'],\n marker = dict(color=\"rgb(88,206,149)\"),\n name = \"pv < 1000\",\n opacity=0.6\n)\nlayout = go.Layout(title=\"Numero médio de transações mensais\", barmode='overlay')\nfig = go.Figure(data=[trace1, trace2], layout=layout)\niplot(fig)",
"_____no_output_____"
],
[
"d1 = dfm[(dfm['numero_transacoes']>=1000) & (dfm['numero_transacoes']<200000)]\ntrace1 = go.Histogram(\n x = d1['numero_transacoes'],\n marker = dict(color=\"rgb(240,134,134)\"),\n name = \"moip < 1000\",\n opacity = 0.75,\n# xbins = dict(\n# start = d1['numero_transacoes'].min(),\n# end = d1['numero_transacoes'].max(),\n \n# )\n)\nd1 = dfp[(dfp['numero_transacoes']>=1000) & (dfp['numero_transacoes']<20000)]\ntrace2 = go.Histogram(\n x = d1['numero_transacoes'],\n marker = dict(color=\"rgb(88,206,149)\"),\n name = \"pv < 1000\",\n opacity=0.6\n)\nlayout = go.Layout(title=\"Numero médio de transações mensais\", barmode='overlay')\nfig = go.Figure(data=[trace1, trace2], layout=layout)\niplot(fig)",
"_____no_output_____"
],
[
"dfp.sort_values('numero_transacoes', ascending=False).head()",
"_____no_output_____"
],
[
"dfm['numero_transacoes'].max()",
"_____no_output_____"
],
[
"# moip: merchants with numero_transacoes < 100",
"_____no_output_____"
],
[
"dfm100 = dfm[dfm['numero_transacoes']<100]",
"_____no_output_____"
],
[
"dfm100.head(2)",
"_____no_output_____"
],
[
"_dfm = df[df['cnpj'].isin(dfm100['cnpj'].unique().tolist())]",
"_____no_output_____"
],
[
"_dfm['ticket'] = _dfm['valor']/_dfm['numero_transacoes']",
"_____no_output_____"
],
[
"_dfm.groupby('cnpj').mean().reset_index()[['cnpj', 'numero_transacoes', 'ticket']].sort_values('ticket', ascending=False).head()",
"_____no_output_____"
],
[
"# average ticket over the last 6 months vs. historical average ticket",
"_____no_output_____"
],
[
"resp = []\nfor el in df['cnpj'].unique().tolist():\n dt = df[df['cnpj']==el]\n\n dt['ticket'] = dt['valor']/dt['numero_transacoes']\n\n t6 = dt.sort_values('data', ascending=False).iloc[:6, :]['ticket'].mean()\n\n t = dt['ticket'].mean()\n _df = pd.DataFrame({'cnpj' : [el], 'ticket_6m' : [t6], 'ticket_hist' : [t]})\n resp.append(_df)",
"_____no_output_____"
],
[
"dfticket = pd.concat(resp)",
"_____no_output_____"
],
[
"dfticket.head()",
"_____no_output_____"
],
[
"# dt = df[df['cnpj'] == '00.429.649/0001-22']",
"_____no_output_____"
],
[
"# dt['ticket'] = dt['valor']/dt['numero_transacoes']",
"_____no_output_____"
],
[
"# el = '00.429.649/0001-2'",
"_____no_output_____"
],
[
"# df[\"flag_transacoes\"] = df.apply(lambda x : int(x['numero_transacoes']>=12), axis=1)",
"_____no_output_____"
],
[
"# df[df['flag_transacoes']==0].head()",
"_____no_output_____"
],
[
"# test the eligibility condition",
"_____no_output_____"
],
[
"# average number of transactions >= 12, over the full history and over the last 6 months",
"_____no_output_____"
],
[
"resp = []\nfor el in df['cnpj'].unique().tolist():\n dt = df[df['cnpj']==el]\n dt = dt.sort_values(\"data\", ascending=False).iloc[:12, :]\n\n flag_total = int(dt[\"numero_transacoes\"].mean() >= 20)\n\n dt6 = dt.iloc[:6, :]\n\n dt6['flag_transacoes'] = dt6.apply(lambda x : int(x['numero_transacoes']>=20), axis=1)\n\n flag_6meses = int(0 not in dt6['flag_transacoes'].tolist())\n\n _df = pd.DataFrame({'cnpj' : [el], 'flag_total' : [flag_total], 'flag6' : [flag_6meses]})\n resp.append(_df)",
"_____no_output_____"
],
[
"df_flags = pd.concat(resp)",
"_____no_output_____"
],
[
"df_flags[df_flags['flag_total']==0]['flag6'].unique().tolist()",
"_____no_output_____"
],
[
"dfmoip_flag.head()",
"_____no_output_____"
],
[
"dfmoip_flag.drop(columns=['flag_cnpj'], axis=1, inplace=True)",
"_____no_output_____"
],
[
"dfpv_flag.drop(columns=['flag_cnpj'], axis=1, inplace=True)",
"_____no_output_____"
],
[
"dfpv_flag.head()",
"_____no_output_____"
],
[
"df_flag_fat = pd.concat([dfmoip_flag, dfpv_flag])",
"_____no_output_____"
],
[
"df_flag_fat.shape",
"_____no_output_____"
],
[
"df_flags.shape",
"_____no_output_____"
],
[
"final = df_flags.merge(df_flag_fat, left_on='cnpj', right_on='cnpj', how='left')",
"_____no_output_____"
],
[
"final[final['cnpj']=='00.354.700/0001-84']",
"_____no_output_____"
],
[
"final.groupby('cnpj').count().sort_values('flag_aprovacao', ascending=False).head()",
"_____no_output_____"
],
[
"final.rename(columns={'flag_aprovacao' : 'flag_faturamento'}, inplace=True)",
"_____no_output_____"
],
[
"final['flag_transacoes'] = final['flag_total']*final['flag6']",
"_____no_output_____"
],
[
"final.head()",
"_____no_output_____"
],
[
"dfaprov = final[final[\"flag_faturamento\"]==1]",
"_____no_output_____"
],
[
"dfaprov.head()",
"_____no_output_____"
],
[
"dfaprov.shape",
"_____no_output_____"
],
[
"dfaprov.groupby('flag_transacoes').count()",
"_____no_output_____"
],
[
"457/1208",
"_____no_output_____"
]
],
[
[
"- min=12 => 25%\n- min=16 = 31%\n- min=20 => 38%",
"_____no_output_____"
]
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecd9d53b2a86f96ea793d675980fca18f2bb715d | 30,402 | ipynb | Jupyter Notebook | Poster/2021-03-09_viz_for_poster_AP.ipynb | ph1001/Data_Visualisation_Project_Group_I | 6ae4ba6ede9a5b827c9a6c0617c206cc352c512b | [
"MIT"
] | null | null | null | Poster/2021-03-09_viz_for_poster_AP.ipynb | ph1001/Data_Visualisation_Project_Group_I | 6ae4ba6ede9a5b827c9a6c0617c206cc352c512b | [
"MIT"
] | null | null | null | Poster/2021-03-09_viz_for_poster_AP.ipynb | ph1001/Data_Visualisation_Project_Group_I | 6ae4ba6ede9a5b827c9a6c0617c206cc352c512b | [
"MIT"
] | null | null | null | 40.374502 | 1,446 | 0.587724 | [
[
[
"import pandas as pd\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\nimport datetime\nfrom sklearn.preprocessing import MinMaxScaler",
"_____no_output_____"
],
[
"covidNumbers = pd.read_csv('Data/owid-covid-data.csv')\ncovidNumbers.head()",
"_____no_output_____"
],
[
"print(covidNumbers['date'].dtypes)\ncovidNumbers['date'] = pd.to_datetime(covidNumbers['date'], format = '%Y-%m-%d')",
"_____no_output_____"
],
[
"covidUsa = covidNumbers[covidNumbers['iso_code'] == 'USA']\ncovidUsa.head()",
"_____no_output_____"
],
[
"stringencyData = pd.read_csv('Data/covid-stringency-index.csv')\nstringencyData.head()",
"_____no_output_____"
],
[
"stringencyData['Date'] = pd.to_datetime(stringencyData['Date'], format = '%Y-%m-%d')",
"_____no_output_____"
],
[
"stringencyUsa = stringencyData[stringencyData['Code'] == 'USA']\nstringencyUsa.tail()",
"_____no_output_____"
],
[
"stringencyUsa.rename(columns={\"Date\": \"date\"}, inplace=True)\ncasesStringency = pd.merge(covidUsa[['date', 'new_cases_smoothed']].copy(), \n stringencyUsa[['date', 'stringency_index']].copy(), on='date')\ncasesStringency.fillna(0, inplace=True)",
"_____no_output_____"
],
[
"# Create figure with secondary y-axis\nfig = make_subplots(specs=[[{\"secondary_y\": True}]])\n\n# Add traces\nfig.add_trace(\n go.Scatter(x=casesStringency['date'], y=casesStringency['new_cases_smoothed'], name=\"# New Cases\"),\n secondary_y=False,\n)\n\nfig.add_trace(\n go.Scatter(x=casesStringency['date'], y=casesStringency['stringency_index'], name=\"Stringency Index\"),\n secondary_y=True,\n)\n\n# Add figure title\nfig.update_layout(\n title_text=\"Comparison between cases and stringency\",\n paper_bgcolor='rgba(0,0,0,0)',\n plot_bgcolor='rgba(0,0,0,0)',\n yaxis=dict(\n title=\"# New Cases\",\n titlefont=dict(\n color=\"#636efa\"\n ),\n tickfont=dict(\n color=\"#1f77b4\"\n )\n ),\n yaxis2=dict(\n title=\"Stringency Index\",\n titlefont=dict(\n color=\"#ef553c\"\n ),\n tickfont=dict(\n color=\"#d62728\"\n ),\n anchor=\"x\",\n overlaying=\"y\",\n side=\"right\"\n ),\n)\n\n# Set x-axis title\nfig.update_xaxes(title_text=\"Date\")\n\n# Set y-axes titles\nfig.update_yaxes(title_text=\"# New Cases\", secondary_y=False)\nfig.update_yaxes(title_text=\"Stringency index\", secondary_y=True)\n\nfig.show()",
"_____no_output_____"
],
[
"industry = pd.read_csv('Data/Stocks/sm_dj_industry.csv')\npharma = pd.read_csv('Data/Stocks/sm_dj_pharma.csv')\ntourism = pd.read_csv('Data/Stocks/sm_dj_tourism.csv')\nsp500 = pd.read_csv('Data/Stocks/sm_sp500.csv')\nhero = pd.read_csv('Data/Stocks/sm_hero.csv')\nnasdaq100 = pd.read_csv('Data/Stocks/sm_nasdaq.csv')\nretail = pd.read_csv('Data/Stocks/sm_dj_retail.csv')\nvix = pd.read_csv('Data/vix.csv', index_col = 0)",
"_____no_output_____"
],
[
"industry.dtypes",
"_____no_output_____"
],
[
"vix.reset_index(inplace=True)\nvix['Data'] = pd.to_datetime(vix['Data'], format = '%d.%m.%Y')",
"_____no_output_____"
],
[
"industry.head()",
"_____no_output_____"
],
[
"# stocksConcatList= [pharma, tourism, sp500, hero, nasdaq100, retail, vix]\n\ncasesStringency = industry[['Data', 'Último']].copy()\ncasesStringency.rename(columns={\"Último\": \"industry\"}, inplace=True)\ncasesStringency['Data'] = pd.to_datetime(casesStringency['Data'], format = '%d.%m.%Y')\n\npharma.rename(columns={\"Último\": \"pharma\"}, inplace=True)\npharma['Data'] = pd.to_datetime(pharma['Data'], format = '%d.%m.%Y')\ncasesStringency = pd.merge(casesStringency, pharma[['Data', 'pharma']].copy(), on='Data')\n\ntourism.rename(columns={\"Último\": \"tourism\"}, inplace=True)\ntourism['Data'] = pd.to_datetime(tourism['Data'], format = '%d.%m.%Y')\ncasesStringency = pd.merge(casesStringency, tourism[['Data', 'tourism']].copy(), on='Data')\n\nsp500.rename(columns={\"Último\": \"sp500\"}, inplace=True)\nsp500['Data'] = pd.to_datetime(sp500['Data'], format = '%d.%m.%Y')\ncasesStringency = pd.merge(casesStringency, sp500[['Data', 'sp500']].copy(), on='Data')\n\nhero.rename(columns={\"Último\": \"hero\"}, inplace=True)\nhero['Data'] = pd.to_datetime(hero['Data'], format = '%d.%m.%Y')\ncasesStringency = pd.merge(casesStringency, hero[['Data', 'hero']].copy(), how='outer', on='Data')\n\nnasdaq100.rename(columns={\"Último\": \"nasdaq100\"}, inplace=True)\nnasdaq100['Data'] = pd.to_datetime(nasdaq100['Data'], format = '%d.%m.%Y')\ncasesStringency = pd.merge(casesStringency, nasdaq100[['Data', 'nasdaq100']].copy(), on='Data')\n\nretail.rename(columns={\"Último\": \"retail\"}, inplace=True)\nretail['Data'] = pd.to_datetime(retail['Data'], format = '%d.%m.%Y')\ncasesStringency = pd.merge(casesStringency, retail[['Data', 'retail']].copy(), on='Data')\n\nvix.rename(columns={\"Último\": \"vix\"}, inplace=True)\nvix['Data'] = pd.to_datetime(vix['Data'], format = '%d.%m.%Y')\ncasesStringency = pd.merge(casesStringency, vix[['Data', 'vix']].copy(), on='Data')",
"_____no_output_____"
],
[
"casesStringency.set_index('Data', inplace=True)\ncasesStringency.tail()",
"_____no_output_____"
],
[
"sp500 = sp500[['Data', 'sp500']]\nsp500 = sp500[(sp500['Data'] > '2020-01-22') & (sp500['Data'] < '2021-02-22')]\nsp500['sp500'] = (sp500['sp500'].replace('\\.','', regex=True)\n .replace(',','.', regex=True)\n .astype(float))\nsp500.head()",
"_____no_output_____"
],
[
"data_sectors = [dict(type='scatter',\n x=sp500.Data,\n y=sp500['sp500'])]\n\nlayout_sectors = dict(title=dict(text='SP500 stock price'),\n xaxis=dict(title='Date'),\n yaxis=dict(title='Price'),\n paper_bgcolor='rgba(0,0,0,0)',\n plot_bgcolor='rgba(0,0,0,0)',)\n\nfig_sectors = go.Figure(data=data_sectors, layout=layout_sectors)\n\nfig_sectors.show()",
"_____no_output_____"
],
[
"sectors_list = casesStringency.columns\nscaler = MinMaxScaler()\nfor s in sectors_list:\n casesStringency[s] = (casesStringency[s].replace('\\.','', regex=True)\n .replace(',','.', regex=True)\n .astype(float))\n casesStringency[s] = scaler.fit_transform(casesStringency[s].values.reshape(-1, 1))",
"_____no_output_____"
],
[
"sectors_list = casesStringency.columns\n\ndata_sectors = [dict(type='scatter',\n x=casesStringency.index,\n y=casesStringency[sector],\n name=sector)\n for sector in sectors_list]\n\nlayout_sectors = dict(title=dict(text='Sectors stock price'),\n xaxis=dict(title='Date'),\n yaxis=dict(title='Price'),\n paper_bgcolor='rgba(0,0,0,0)',\n plot_bgcolor='rgba(0,0,0,0)',)\n\nfig_sectors = go.Figure(data=data_sectors, layout=layout_sectors)\n\nfig_sectors.show()",
"_____no_output_____"
],
[
"casesStringency",
"_____no_output_____"
],
[
"stockVsCovid = pd.read_csv('Data/StockVsCovid.csv')\ngeoCovid = pd.read_csv('Data/geoMap-Covid.csv')\ngeoStock = pd.read_csv('Data/geoMap-Stock.csv')",
"_____no_output_____"
],
[
"stockVsCovid.reset_index(level=1, inplace=True)\nstockVsCovid.rename(columns={\"level_1\": \"Stock\", 'Categoria: Todas as categorias': 'Covid'}, inplace=True)\nstockVsCovid = stockVsCovid.iloc[3:]",
"_____no_output_____"
],
[
"stockVsCovid = stockVsCovid.astype('int32').copy()",
"_____no_output_____"
],
[
"stockVsCovid_list = stockVsCovid.columns\n\ndata_stockVsCovid = [dict(type='scatter',\n x=stockVsCovid.index,\n y=stockVsCovid[sc],\n name=sc)\n for sc in stockVsCovid_list]\n\nlayout_stockVsCovid = dict(title=dict(text='Trends in interests'),\n xaxis=dict(title='Date'),\n yaxis=dict(title='Interest'),\n paper_bgcolor='rgba(0,0,0,0)',\n plot_bgcolor='rgba(0,0,0,0)',)\n\nfig_sectors = go.Figure(data=data_stockVsCovid, layout=layout_stockVsCovid)\n\nfig_sectors.show()",
"_____no_output_____"
],
[
"geoCovid.head()",
"_____no_output_____"
],
[
"geoCovid.reset_index(level=0, inplace=True)\ngeoCovid.rename(columns={\"Category: All categories\": \"Interest\", \"index\": \"state\"}, inplace=True)\ngeoCovid = geoCovid.iloc[1:]",
"_____no_output_____"
],
[
"df_locations = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2011_us_ag_exports.csv')\ndf_locations.head()",
"_____no_output_____"
],
[
"df_us_covid = pd.merge(geoCovid, df_locations[['code', 'state']].copy(), on='state')\ndf_us_covid.shape",
"_____no_output_____"
],
[
"df_us_covid.head()",
"_____no_output_____"
],
[
"fig = go.Figure(data=go.Choropleth(\n locations=df_us_covid['code'],\n z = df_us_covid['Interest'].astype(float),\n locationmode = 'USA-states',\n colorbar_title = \"Interest\",\n colorscale = 'Reds'\n))\n\nfig.update_layout(\n title_text = \"Google trends interest on 'Covid'\",\n geo_scope='usa',\n paper_bgcolor='rgba(0,0,0,0)',\n plot_bgcolor='rgba(0,0,0,0)'\n)\n\nfig.show()",
"_____no_output_____"
],
[
"geoStock.reset_index(level=0, inplace=True)\ngeoStock.rename(columns={\"Category: All categories\": \"Interest\", \"index\": \"state\"}, inplace=True)\ngeoStock = geoStock.iloc[1:]\ngeoStock.head()",
"_____no_output_____"
],
[
"df_us_stock = pd.merge(geoStock, df_locations[['code', 'state']].copy(), on='state')\ndf_us_stock.shape",
"_____no_output_____"
],
[
"fig = go.Figure(data=go.Choropleth(\n locations=df_us_stock['code'],\n z = df_us_stock['Interest'].astype(float),\n locationmode = 'USA-states',\n colorbar_title = \"Interest\",\n colorscale = 'Blues'\n))\n\nfig.update_layout(\n title_text = \"Google trends interest on 'Stocks'\",\n geo_scope='usa',\n paper_bgcolor='rgba(0,0,0,0)',\n plot_bgcolor='rgba(0,0,0,0)'\n)\n\nfig.show()",
"_____no_output_____"
],
[
"trendCovid19_20 = pd.read_csv('Data/multiTimeline-stock19_20.csv')",
"_____no_output_____"
],
[
"trendCovid19_20.head()",
"_____no_output_____"
],
[
"trendCovid19_20.reset_index(inplace=True)\ntrendCovid19_20.rename(columns={\"index\": \"date\", \"Category: All categories\": \"trendRating\"}, inplace=True)\ntrendCovid19_20 = trendCovid19_20.iloc[1:]\n\ntrendCovid19_20['trendRating'] = trendCovid19_20['trendRating'].astype('int32').copy()\ntrendCovid19_20['date'] = pd.to_datetime(trendCovid19_20['date'], format = '%Y-%m-%d')",
"_____no_output_____"
],
[
"mean19 = trendCovid19_20[trendCovid19_20['date'].dt.year == 2019]['trendRating'].mean()\nmean20 = trendCovid19_20[trendCovid19_20['date'].dt.year == 2020]['trendRating'].mean()\nprint(mean19)\nprint(mean20)\nratio = mean19/mean20\nprint(ratio)\ntrendCovid19_20.head()",
"_____no_output_____"
],
[
"geoStock19 = pd.read_csv('Data/geoMap-Stock19.csv')\ngeoStock20 = pd.read_csv('Data/geoMap-Stock20.csv')",
"_____no_output_____"
],
[
"geoStock19.head()",
"_____no_output_____"
],
[
"geoStock19.reset_index(level=0, inplace=True)\ngeoStock19.rename(columns={\"Category: All categories\": \"Interest\", \"index\": \"state\"}, inplace=True)\ngeoStock19 = geoStock19.iloc[1:]\ngeoStock19['Interest'] = geoStock19['Interest'].astype('int32').copy()\n\ngeoStock19 = pd.merge(geoStock19, df_locations[['code', 'state']].copy(), on='state')\ngeoStock19.head()",
"_____no_output_____"
],
[
"geoStock19['Interest'] = geoStock19['Interest']*ratio\ngeoStock19.head()",
"_____no_output_____"
],
[
"fig = go.Figure(data=go.Choropleth(\n locations=geoStock19['code'],\n z = geoStock19['Interest'].astype(float),\n locationmode = 'USA-states',\n colorbar_title = \"Interest\",\n colorscale = 'YlGnBu',\n zmin=0, zmax=100,\n))\n\nfig.update_layout(\n title_text = \"Google trends interest on 'Stocks' in 2019\",\n geo_scope='usa',\n paper_bgcolor='rgba(0,0,0,0)',\n plot_bgcolor='rgba(0,0,0,0)'\n)\n\nfig.show()",
"_____no_output_____"
],
[
"geoStock20.reset_index(level=0, inplace=True)\ngeoStock20.rename(columns={\"Category: All categories\": \"Interest\", \"index\": \"state\"}, inplace=True)\ngeoStock20 = geoStock20.iloc[1:]\ngeoStock20['Interest'] = geoStock20['Interest'].astype('int32').copy()\n\ngeoStock20 = pd.merge(geoStock20, df_locations[['code', 'state']].copy(), on='state')\ngeoStock20.head()",
"_____no_output_____"
],
[
"fig = go.Figure(data=go.Choropleth(\n locations=geoStock20['code'],\n z = geoStock20['Interest'].astype(float),\n locationmode = 'USA-states',\n colorbar_title = \"Interest\",\n colorscale = 'YlGnBu',\n zmin=0, zmax=100\n))\n\nfig.update_layout(\n title_text = \"Google trends interest on 'Stocks' in 2020\",\n geo_scope='usa',\n paper_bgcolor='rgba(0,0,0,0)',\n plot_bgcolor='rgba(0,0,0,0)'\n)\n\nfig.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd9df98672a88addc066b7807e499f1293b3f4d | 305,250 | ipynb | Jupyter Notebook | proc_17_108_analysis_jeffykao_EDA.ipynb | j2kao/fcc_nn_research | 9b8f8880778a9d99ca8fb10e86cfc68d867de4ce | [
"MIT"
] | 148 | 2017-11-26T08:10:17.000Z | 2022-02-20T05:59:45.000Z | proc_17_108_analysis_jeffykao_EDA.ipynb | j2kao/fcc_nn_research | 9b8f8880778a9d99ca8fb10e86cfc68d867de4ce | [
"MIT"
] | 1 | 2018-01-16T00:09:05.000Z | 2018-01-16T00:35:25.000Z | proc_17_108_analysis_jeffykao_EDA.ipynb | j2kao/fcc_nn_research | 9b8f8880778a9d99ca8fb10e86cfc68d867de4ce | [
"MIT"
] | 25 | 2017-11-27T14:38:25.000Z | 2019-03-29T20:12:40.000Z | 135.425909 | 162,772 | 0.748898 | [
[
[
"import pandas as pd\nimport numpy as np\nfrom sklearn.cluster import DBSCAN\nfrom sklearn.cluster import AgglomerativeClustering\nfrom sklearn.preprocessing import normalize\nfrom hdbscan import HDBSCAN\nimport spacy\npd.options.display.max_colwidth = 500",
"_____no_output_____"
],
[
"%%time\ndata = pd.read_csv('proc_17_108_unique_comments_text_dupe_count.csv')\nprint(f\"Loaded in {len(data)} entries aggregating {data['dupe_count'].sum()} total comments.\")",
"Loaded in 2955186 entries aggregating 22078910.0 total comments.\nCPU times: user 22.7 s, sys: 4.79 s, total: 27.5 s\nWall time: 27.9 s\n"
],
[
"nlp = None\ntry:\n nlp = spacy.load(\"en\", tagger=False, entity=False, matcher=False, parser=False)\nexcept:\n import en_core_web_md\n nlp = en_core_web_md.load(tagger=False, entity=False, matcher=False, parser=False)",
"_____no_output_____"
],
[
"#take a sample to vectorize and cluster with dbscan\nsample_size = 10000 # can play with this - starts to get unmanageable on my desktop @ sample_size > 100000\ndata_sample = data.sample(sample_size, random_state=42)",
"_____no_output_____"
],
[
"def encode_doc_vecs(docs):\n #encode mean word vector per document - might take a while\n #it took about 10s per 10,000 documents on my desktop\n #would want to parallelize this for a large enough job\n doc_vecs = []\n for doc in docs:\n #encode the doc if it is a string\n if type(doc) == str:\n doc_spaced = nlp(doc)\n doc_vec = np.mean([word.vector for word in doc_spaced], axis=0)\n #in case the doc vector could not be encoded, e.g., empty string, no words found\n if type(doc_vec) != np.ndarray or doc_vec.shape[0] != 300:\n print(f\"Not Vectorizable: {doc}\")\n doc_vec = np.zeros(300)\n else:\n print(f\"Not a String: {doc}\")\n doc_vec = np.zeros(300)\n doc_vecs.append(doc_vec)\n return doc_vecs",
"_____no_output_____"
],
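The mean-of-word-vectors encoding performed by `encode_doc_vecs` can be illustrated without spaCy or 300-dimensional vectors; the tiny two-dimensional `toy_vectors` table below is invented purely for the sketch:

```python
# Toy mean-of-word-vectors encoding (stand-in for the spaCy version above):
# each known word maps to a small vector; a document vector is the
# element-wise mean of its word vectors.
toy_vectors = {
    "net": [1.0, 0.0],
    "neutrality": [0.0, 1.0],
    "internet": [1.0, 1.0],
}

def doc_vector(doc):
    words = [w for w in doc.lower().split() if w in toy_vectors]
    if not words:  # unencodable doc -> zero vector, as in encode_doc_vecs
        return [0.0, 0.0]
    dims = zip(*(toy_vectors[w] for w in words))
    return [sum(d) / len(words) for d in dims]

print(doc_vector("net neutrality"))  # [0.5, 0.5]
print(doc_vector("xyzzy"))           # [0.0, 0.0]
```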
[
"%%time\ndoc_vecs = encode_doc_vecs(data_sample['text_data'])",
"CPU times: user 7min 39s, sys: 56.1 s, total: 8min 35s\nWall time: 5min 18s\n"
],
[
"#use the euclidean distance of the l2-normalized vectors (angular distance), which is proportional to the cosine distance\nnorm_doc_vecs = normalize(doc_vecs, norm='l2')",
"_____no_output_____"
],
[
"# play with eps and metric\n# higher eps will take *much* more time & resources\n# clusterer = DBSCAN(eps=0.001, min_samples=5, n_jobs=-1, metric='cosine')",
"_____no_output_____"
],
[
"# can also try HAC - play with number of clusters, ward linkage seemed to give the best results generally\n# clusterer = AgglomerativeClustering(n_clusters=20, linkage='average', affinity='cosine')\n#clusterer = AgglomerativeClustering(n_clusters=60, linkage='ward')",
"_____no_output_____"
],
[
"# can also try HDBSCAN\nclusterer = HDBSCAN(min_cluster_size=5)",
"_____no_output_____"
],
[
"%%time\nlabels = clusterer.fit_predict(norm_doc_vecs)",
"CPU times: user 43.7 s, sys: 108 ms, total: 43.8 s\nWall time: 43.9 s\n"
],
[
"X_clustered = pd.concat([pd.Series(data=labels, name='cluster'),data_sample.reset_index(drop=True),pd.DataFrame(doc_vecs)], axis=1)\n# take a quick peek\nX_clustered.sample(100)",
"_____no_output_____"
],
[
"# visualize silhouette scores (without outliers)\n# our goal is to find the largest, densest clusters from which to manually pick out signature strings\nfrom sklearn.metrics import silhouette_samples, silhouette_score",
"_____no_output_____"
],
[
"# we can focus on certain clusters using a mask, but it's not useful to plot the -1 outliers\n# w/ caveat that changing the mask will artifically inflate your sil score\nnum_clusters_skipped = 0 # speed up by skipping the first few (huge) clusters; \nmask = list(range(num_clusters_skipped, X_clustered['cluster'].max() + 1)) #since count = max + 1; ignore outliers (-1)",
"_____no_output_____"
],
[
"X_masked = X_clustered[X_clustered['cluster'].isin(mask)]\nfeatures = X_masked.iloc[:,-300:]\ncluster_labels = X_masked['cluster']",
"_____no_output_____"
],
[
"# Calculate silhouette scores - takes a while for if you don't mask out first few clusters\nn_clusters = len(mask)\n\n# Summary stat\nsilhouette_avg = silhouette_score(features, cluster_labels)\nprint(f\"For the clusters examined, the average silhouette_score is {silhouette_avg}\")\n\n# Compute the silhouette scores for each sample\nsample_silhouette_values = silhouette_samples(features, cluster_labels)",
"For the clusters examined, the average silhouette_score is 0.08326191993090572\n"
],
[
"# Plot to get a sense of density and separation of the formed clusters\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n%matplotlib inline\n\n# Save the scores\nX_masked['silhouette_score'] = sample_silhouette_values\n\nfig, ax1 = plt.subplots(1, 1)\nfig.set_size_inches(18, 54)\nax1.set_autoscaley_on(True)\n\n# xlim -> silhouette coefficient range (unlikely negative in our case)\nax1.set_xlim([-1.0, 1.0])\nax1.set_ylim([0, len(X_masked) + (n_clusters + 1) * 10])\n\ny_lower = 10\nfor i in mask:\n # Aggregate and sort to get a profile for cluster i\n ith_cluster_silhouette_values = sample_silhouette_values[cluster_labels == i]\n ith_cluster_silhouette_values.sort()\n size_cluster_i = ith_cluster_silhouette_values.shape[0]\n y_upper = y_lower + size_cluster_i\n\n # Draw\n color = cm.nipy_spectral(float(i) / n_clusters)\n ax1.fill_betweenx(np.arange(y_lower, y_upper),\n 0, ith_cluster_silhouette_values,\n facecolor=color, edgecolor=color, alpha=0.7)\n\n # Label\n ax1.text(-1.05, y_lower + 0.5 * size_cluster_i, str(i))\n\n # A dab at the average silhouette score for cluster i\n ax1.scatter(x=ith_cluster_silhouette_values.mean(), y=y_lower + 0.5 * size_cluster_i, color='k')\n \n # Compute the new y_lower for next cluster\n y_lower = y_upper + 10\n \nax1.set_title(\"Silhouette plot and scores for the various clusters\")\nax1.set_xlabel(\"Silhouette coefficient value\")\nax1.set_ylabel(\"Cluster label\")\nax1.set_yticks([]) # clear\nax1.set_xticks([-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1])\n\nplt.suptitle(f\"Silhouette analysis for resulting in n_clusters = {n_clusters} (masked)\",\n fontsize=14, fontweight='bold')\n\nplt.show()",
"/Users/jeffkao/Documents/ai-structured-prediction/.venv/lib/python3.6/site-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n import sys\n"
],
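What `silhouette_samples` computes per point can be hand-rolled for intuition; the two clusters below are invented toy data, not drawn from the comment corpus:

```python
import math

# Hand-rolled silhouette coefficient for one point (sketch of what
# silhouette_samples computes): a = mean distance to own cluster,
# b = smallest mean distance to any other cluster, s = (b - a) / max(a, b).
def silhouette(point, own, others):
    a = sum(math.dist(point, p) for p in own) / len(own)
    b = min(sum(math.dist(point, p) for p in grp) / len(grp) for grp in others)
    return (b - a) / max(a, b)

own_cluster = [(0.0, 0.1), (0.1, 0.0)]
other_cluster = [(5.0, 5.0), (5.1, 4.9)]
s = silhouette((0.0, 0.0), own_cluster, [other_cluster])
print(0.9 < s < 1.0)  # True: tight own cluster, distant neighbor -> near 1
```

Dense, well-separated mad-lib clusters score near 1 on this measure, which is why the plot above is a good visual filter for picking out signature strings.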
[
"# poke around to make sure it looks reasonable\ndef examine_cluster(cluster_num, num_rows_listed=5):\n cluster_num = cluster_num #pick your cluster - the triangular ones are the mad-libs\n cluster_to_check = X_masked[X_masked['cluster'] == cluster_num][['silhouette_score','text_data']].sort_values('silhouette_score')\n print(f'== TOP {num_rows_listed} SILHOUETTE SCORES ==')\n print(cluster_to_check.tail(5))\n print(f'\\n== BOTTOM {num_rows_listed} SILHOUETTE SCORES ==')\n print(cluster_to_check.head(5))",
"_____no_output_____"
],
[
"# example clusters: 2, 3, 5, 6, 10, 11\nexamine_cluster(10)",
"== TOP 5 SILHOUETTE SCORES ==\n silhouette_score \\\n7510 0.176435 \n200 0.179572 \n2437 0.180443 \n2364 0.184913 \n1705 0.192486 \n\n text_data \n7510 FCC commissioners, I am a voter worried about network neutrality regulations. I urge the commission to overturn Barack Obama's scheme to regulate Internet access. People like me, rather than the FCC, ought to select whatever applications we desire. Barack Obama's scheme to regulate Internet access is a perversion of net neutrality. It broke a hands-off policy that functioned very, very successfully for a long time with broad bipartisan support. \n200 Chairman Pai: I'm very worried about Internet Freedom. I want to implore the FCC to rescind Barack Obama's scheme to regulate the Internet. Americans, as opposed to Washington bureaucrats, should be empowered to select whatever services they choose. Barack Obama's scheme to regulate the Internet is a perversion of the open Internet. It undid a market-based framework that performed supremely smoothly for a long time with bipartisan consensus. \n2437 Dear FCC, I am concerned about the FCC's Open Internet order. I strongly suggest you to reverse Barack Obama's decision to regulate the web. Individual citizens, rather than the FCC, should be empowered to select whatever products they want. Barack Obama's decision to regulate the web is a corruption of net neutrality. It reversed a pro-consumer policy that performed remarkably successfully for decades with nearly universal consensus. \n2364 Chairman Pai: With respect to net neutrality. I strongly recommend the commission to rescind Barack Obama's plan to control the web. Individual Americans, as opposed to the FCC Enforcement Bureau, should purchase which services they desire. Barack Obama's plan to control the web is a corruption of the open Internet. It broke a hands-off framework that performed supremely successfully for two decades with nearly universal approval. 
\n1705 To the Federal Communications Commission: I'm concerned about the FCC's so-called Open Internet order. I suggest Chairman Pai to undo Barack Obama's decision to control the Internet. Americans, not the FCC Enforcement Bureau, should be empowered to select the applications we want. Barack Obama's decision to control the Internet is a distortion of net neutrality. It undid a market-based policy that functioned very smoothly for two decades with Republican and Democrat backing. \n\n== BOTTOM 5 SILHOUETTE SCORES ==\n silhouette_score \\\n8221 -0.280551 \n2903 -0.269408 \n3661 -0.263232 \n5169 -0.249002 \n3121 -0.246076 \n\n text_data \n8221 To the FCC: I'd like to share my thoughts on internet regulations. I want to request Ajit Pai to rescind The Obama/Wheeler power grab to take over Internet access. Individuals, not so-called experts, should be free to use the products they choose. The Obama/Wheeler power grab to take over Internet access is a perversion of the open Internet. It reversed a hands-off system that performed very, very successfully for two decades with Republican and Democrat consensus. \n2903 Dear Chairman Pai, I am a voter worried about the Open Internet order. I would like to demand Chairman Pai to rescind Obama's power grab to take over the web. Individual citizens, not big government, should enjoy the applications we choose. Obama's power grab to take over the web is a betrayal of net neutrality. It disrupted a free-market approach that performed very, very well for two decades with broad bipartisan backing. \n3661 Chairman Pai: Hi, I'd like to comment on net neutrality. I would like to advocate you to overturn Obama's order to take over Internet access. Internet users, not so-called experts, ought to enjoy whatever products we desire. Obama's order to take over Internet access is a corruption of the open Internet. It undid a free-market policy that functioned very, very well for a long time with nearly universal consensus. 
\n5169 FCC commissioners, I would like to comment on an open Internet. I want to suggest you to rescind Tom Wheeler's order to take over broadband. People like me, not the FCC, should purchase which products they desire. Tom Wheeler's order to take over broadband is a perversion of net neutrality. It broke a market-based system that functioned exceptionally well for two decades with both parties' support. \n3121 Dear Commissioners: My comments re: the Open Internet order. I want to recommend the FCC to overturn Tom Wheeler's order to take over Internet access. Individual Americans, not Washington, should be able to select the services we prefer. Tom Wheeler's order to take over Internet access is a distortion of net neutrality. It stopped a hands-off policy that functioned exceptionally well for two decades with Republican and Democrat support. \n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecd9e17a55b35cdb342db129a1a4b3e4711c5253 | 193,770 | ipynb | Jupyter Notebook | dog_app.ipynb | Michael-Hagmans/DogBreeds | 4b7a91eebc034a84957bf35831b189d7efc9f2c2 | [
"MIT"
] | null | null | null | dog_app.ipynb | Michael-Hagmans/DogBreeds | 4b7a91eebc034a84957bf35831b189d7efc9f2c2 | [
"MIT"
] | null | null | null | dog_app.ipynb | Michael-Hagmans/DogBreeds | 4b7a91eebc034a84957bf35831b189d7efc9f2c2 | [
"MIT"
] | null | null | null | 101.132568 | 79,856 | 0.765449 | [
[
[
"# Data Scientist Nanodegree\n\n## Convolutional Neural Networks\n\n## Project: Write an Algorithm for a Dog Identification App \n\n\nThis notebook walks you through one of the most popular Udacity projects across machine learning and artificial intellegence nanodegree programs. The goal is to classify images of dogs according to their breed. \n\nIf you are looking for a more guided capstone project related to deep learning and convolutional neural networks, this might be just it. Notice that even if you follow the notebook to creating your classifier, you must still create a blog post or deploy an application to fulfill the requirements of the capstone project.\n\nAlso notice, you may be able to use only parts of this notebook (for example certain coding portions or the data) without completing all parts and still meet all requirements of the capstone project.\n\n---\n\nIn this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! \n\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. 
Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.\n\n>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.\n\nThe rubric contains _optional_ \"Stand Out Suggestions\" for enhancing the project beyond the minimum requirements. If you decide to pursue the \"Stand Out Suggestions\", you should include the code in this IPython notebook.\n\n\n\n---\n### Why We're Here \n\nIn this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!). \n\n\n\nIn this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!\n\n### The Road Ahead\n\nWe break the notebook into separate steps. 
Feel free to use the links below to navigate the notebook.\n\n* [Step 0](#step0): Import Datasets\n* [Step 1](#step1): Detect Humans\n* [Step 2](#step2): Detect Dogs\n* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)\n* [Step 4](#step4): Use a CNN to Classify Dog Breeds (using Transfer Learning)\n* [Step 5](#step5): Create a CNN to Classify Dog Breeds (using Transfer Learning)\n* [Step 6](#step6): Write your Algorithm\n* [Step 7](#step7): Test Your Algorithm\n\n---\n<a id='step0'></a>\n## Step 0: Import Datasets\n\n### Import Dog Dataset\n\nIn the code cell below, we import a dataset of dog images. We populate a few variables through the use of the `load_files` function from the scikit-learn library:\n- `train_files`, `valid_files`, `test_files` - numpy arrays containing file paths to images\n- `train_targets`, `valid_targets`, `test_targets` - numpy arrays containing one-hot encoded classification labels \n- `dog_names` - list of string-valued dog breed names for translating labels",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_files \nfrom keras.utils import np_utils\nimport numpy as np\nfrom glob import glob\n\n# define function to load train, test, and validation datasets\ndef load_dataset(path):\n data = load_files(path)\n dog_files = np.array(data['filenames'])\n dog_targets = np_utils.to_categorical(np.array(data['target']), 133)\n return dog_files, dog_targets\n\n# load train, test, and validation datasets\ntrain_files, train_targets = load_dataset('../../../data/dog_images/train')\nvalid_files, valid_targets = load_dataset('../../../data/dog_images/valid')\ntest_files, test_targets = load_dataset('../../../data/dog_images/test')\n\n# load list of dog names\ndog_names = [item[20:-1] for item in sorted(glob(\"../../../data/dog_images/train/*/\"))]\n\n# print statistics about the dataset\nprint('There are %d total dog categories.' % len(dog_names))\nprint('There are %s total dog images.\\n' % len(np.hstack([train_files, valid_files, test_files])))\nprint('There are %d training dog images.' % len(train_files))\nprint('There are %d validation dog images.' % len(valid_files))\nprint('There are %d test dog images.'% len(test_files))",
"Using TensorFlow backend.\n"
],
[
"train_files[0]",
"_____no_output_____"
]
],
[
[
"### Import Human Dataset\n\nIn the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array `human_files`.",
"_____no_output_____"
]
],
[
[
"import random\nrandom.seed(8675309)\n\n# load filenames in shuffled human dataset\nhuman_files = np.array(glob(\"../../../data/lfw/*/*\"))\nrandom.shuffle(human_files)\n\n# print statistics about the dataset\nprint('There are %d total human images.' % len(human_files))",
"There are 13233 total human images.\n"
]
],
[
[
"---\n<a id='step1'></a>\n## Step 1: Detect Humans\n\nWe use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory.\n\nIn the next code cell, we demonstrate how to use this detector to find human faces in a sample image.",
"_____no_output_____"
]
],
[
[
"import cv2 \nimport matplotlib.pyplot as plt \n%matplotlib inline \n\n# extract pre-trained face detector\nface_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')\n\n# load color (BGR) image\nimg = cv2.imread(human_files[210])\n# convert BGR image to grayscale\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\n# find faces in image\nfaces = face_cascade.detectMultiScale(gray)\n\n# print number of faces detected in the image\nprint('Number of faces detected:', len(faces))\n\n# get bounding box for each detected face\nfor (x,y,w,h) in faces:\n # add bounding box to color image\n cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)\n \n# convert BGR image to RGB for plotting\ncv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n# display the image, along with bounding box\nplt.imshow(cv_rgb)\nplt.show()",
"Number of faces detected: 1\n"
]
],
[
[
"Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter. \n\nIn the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box.\n\n### Write a Human Face Detector\n\nWe can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below.",
"_____no_output_____"
]
],
[
[
"# returns \"True\" if face is detected in image stored at img_path\ndef face_detector(img_path):\n img = cv2.imread(img_path)\n gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n faces = face_cascade.detectMultiScale(gray)\n return len(faces) > 0",
"_____no_output_____"
]
],
[
[
"### (IMPLEMENTATION) Assess the Human Face Detector\n\n__Question 1:__ Use the code cell below to test the performance of the `face_detector` function. \n- What percentage of the first 100 images in `human_files` have a detected human face? \n- What percentage of the first 100 images in `dog_files` have a detected human face? \n\nIdeally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`.\n\n__Answer:__ In 100% of human images human faces were detected. In 11% of the dog images human faces were detected.",
"_____no_output_____"
]
],
[
[
"human_files_short = human_files[:100]\ndog_files_short = train_files[:100]\n# Do NOT modify the code above this line.\n## TODO: Test the performance of the face_detector algorithm \n## on the images in human_files_short and dog_files_short.\n\n# Initialize two arrays to save if face was detected:\nlist_human_faces = np.zeros(len(human_files_short))\nlist_dog_faces = np.zeros(len(dog_files_short))\n\n# Run a loop on all images in the two lists and save result in corresponding array:\nfor i in range(len(human_files_short)):\n list_human_faces[i] = face_detector(human_files_short[i])\n list_dog_faces[i] = face_detector(dog_files_short[i])\n\nprint(int(np.sum(list_human_faces)), 'faces found in human images.')\nprint(int(np.sum(list_dog_faces)), 'human faces found in dog images.')",
"100 faces found in human images.\n11 human faces found in dog images.\n"
]
],
[
[
"__Question 2:__ This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unneccessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?\n\n__Answer:__ First of all this is a reasonable expectation to me. Users might understand that it is harder to see for an algorithm if there is a face in a image if it does not provide a clear view. If you would like to process those images of bad quality as well you should consider using a deep neural network using convolutional layers to detect those faces.\n\nWe suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on each of the datasets.",
"_____no_output_____"
]
],
[
[
"## (Optional) TODO: Report the performance of another \n## face detection algorithm on the LFW dataset\n### Feel free to use as many code cells as needed.",
"_____no_output_____"
]
],
[
[
"---\n<a id='step2'></a>\n## Step 2: Detect Dogs\n\nIn this section, we use a pre-trained [ResNet-50](http://ethereon.github.io/netscope/#/gist/db945b393d40bfa26006) model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a). Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.",
"_____no_output_____"
]
],
[
[
"from keras.applications.resnet50 import ResNet50\n\n# define ResNet50 model\nResNet50_model = ResNet50(weights='imagenet')",
"Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5\n102858752/102853048 [==============================] - 2s 0us/step\n"
]
],
[
[
"### Pre-process the Data\n\nWhen using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape\n\n$$\n(\\text{nb_samples}, \\text{rows}, \\text{columns}, \\text{channels}),\n$$\n\nwhere `nb_samples` corresponds to the total number of images (or samples), and `rows`, `columns`, and `channels` correspond to the number of rows, columns, and channels for each image, respectively. \n\nThe `path_to_tensor` function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \\times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape\n\n$$\n(1, 224, 224, 3).\n$$\n\nThe `paths_to_tensor` function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape \n\n$$\n(\\text{nb_samples}, 224, 224, 3).\n$$\n\nHere, `nb_samples` is the number of samples, or number of images, in the supplied array of image paths. It is best to think of `nb_samples` as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!",
"_____no_output_____"
]
],
[
[
"from keras.preprocessing import image \nfrom tqdm import tqdm\n\ndef path_to_tensor(img_path):\n # loads RGB image as PIL.Image.Image type\n img = image.load_img(img_path, target_size=(224, 224))\n # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)\n x = image.img_to_array(img)\n # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor\n return np.expand_dims(x, axis=0)\n\ndef paths_to_tensor(img_paths):\n list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]\n return np.vstack(list_of_tensors)",
"_____no_output_____"
]
],
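The shape bookkeeping described above (expand each image to a 4D tensor, then stack into a batch) does not depend on Keras; a sketch with random arrays standing in for loaded images (the `fake_path_to_tensor` helper is hypothetical, mirroring `path_to_tensor` and `paths_to_tensor`):

```python
import numpy as np

# Toy stand-in for path_to_tensor: each "image" is a (224, 224, 3) array,
# expanded to (1, 224, 224, 3) so that vstack can build the batch axis.
def fake_path_to_tensor(seed):
    rng = np.random.default_rng(seed)
    img = rng.random((224, 224, 3), dtype=np.float32)
    return np.expand_dims(img, axis=0)

# Stack 4 "images" into a (nb_samples, 224, 224, 3) batch, as paths_to_tensor does
tensors = [fake_path_to_tensor(s) for s in range(4)]
batch = np.vstack(tensors)
print(batch.shape)  # (4, 224, 224, 3)
```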
[
[
"### Making Predictions with ResNet-50\n\nGetting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in RGB as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function `preprocess_input`. If you're curious, you can check the code for `preprocess_input` [here](https://github.com/fchollet/keras/blob/master/keras/applications/imagenet_utils.py).\n\nNow that we have a way to format our image for supplying to ResNet-50, we are now ready to use the model to extract the predictions. This is accomplished with the `predict` method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the `ResNet50_predict_labels` function below.\n\nBy taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a). ",
"_____no_output_____"
]
],
[
[
"from keras.applications.resnet50 import preprocess_input, decode_predictions\n\ndef ResNet50_predict_labels(img_path):\n # returns prediction vector for image located at img_path\n img = preprocess_input(path_to_tensor(img_path))\n return np.argmax(ResNet50_model.predict(img))",
"_____no_output_____"
]
],
[
[
"### Write a Dog Detector\n\nWhile looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the `ResNet50_predict_labels` function above returns a value between 151 and 268 (inclusive).\n\nWe use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).",
"_____no_output_____"
]
],
[
[
"### returns \"True\" if a dog is detected in the image stored at img_path\ndef dog_detector(img_path):\n prediction = ResNet50_predict_labels(img_path)\n return ((prediction <= 268) & (prediction >= 151)) ",
"_____no_output_____"
]
],
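The range check inside `dog_detector` is just an inclusive interval test on the predicted ImageNet label, isolated here for clarity:

```python
# ImageNet classes 151-268 (inclusive) are all dog breeds, so a predicted
# label in that range is treated as "dog detected".
def is_dog_label(label):
    return 151 <= label <= 268

print(is_dog_label(151), is_dog_label(268))  # True True  (boundary breeds)
print(is_dog_label(150), is_dog_label(269))  # False False (just outside)
```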
[
[
"### (IMPLEMENTATION) Assess the Dog Detector\n\n__Question 3:__ Use the code cell below to test the performance of your `dog_detector` function. \n- What percentage of the images in `human_files_short` have a detected dog? \n- What percentage of the images in `dog_files_short` have a detected dog?\n\n__Answer:__ In 0% of the human images dogs were found. In 100% of the dog images dogs were found.",
"_____no_output_____"
]
],
[
[
"### TODO: Test the performance of the dog_detector function\n### on the images in human_files_short and dog_files_short.\n\n# Initialize two arrays to save if a dog was detected:\nlist_human_dogs = np.zeros(len(human_files_short))\nlist_dog_dogs = np.zeros(len(dog_files_short))\n\n# Run a loop on all images in the two lists and save result in corresponding array:\n\nfor i in range(len(dog_files_short)):\n list_human_dogs[i] = dog_detector(human_files_short[i])\n list_dog_dogs[i] = dog_detector(dog_files_short[i])\n\nprint(int(np.sum(list_human_dogs)), 'dogs found in human images.')\nprint(int(np.sum(list_dog_dogs)), 'dogs found in dog images.')",
"0 dogs found in human images.\n100 dogs found in dog images.\n"
]
],
[
[
"---\n<a id='step3'></a>\n## Step 3: Create a CNN to Classify Dog Breeds (from Scratch)\n\nNow that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.\n\nBe careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train. \n\nWe mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel. \n\nBrittany | Welsh Springer Spaniel\n- | - \n<img src=\"images/Brittany_02625.jpg\" width=\"100\"> | <img src=\"images/Welsh_springer_spaniel_08203.jpg\" width=\"200\">\n\nIt is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels). \n\nCurly-Coated Retriever | American Water Spaniel\n- | -\n<img src=\"images/Curly-coated_retriever_03896.jpg\" width=\"200\"> | <img src=\"images/American_water_spaniel_00648.jpg\" width=\"200\">\n\n\nLikewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed. 
\n\nYellow Labrador | Chocolate Labrador | Black Labrador\n- | - | -\n<img src=\"images/Labrador_retriever_06457.jpg\" width=\"150\"> | <img src=\"images/Labrador_retriever_06455.jpg\" width=\"240\"> | <img src=\"images/Labrador_retriever_06449.jpg\" width=\"220\">\n\nWe also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%. \n\nRemember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun! \n\n### Pre-process the Data\n\nWe rescale the images by dividing every pixel in every image by 255.",
"_____no_output_____"
]
],
[
[
"from PIL import ImageFile \nImageFile.LOAD_TRUNCATED_IMAGES = True \n\n# pre-process the data for Keras\ntrain_tensors = paths_to_tensor(train_files).astype('float32')/255\nvalid_tensors = paths_to_tensor(valid_files).astype('float32')/255\ntest_tensors = paths_to_tensor(test_files).astype('float32')/255",
"100%|██████████| 6680/6680 [01:10<00:00, 95.41it/s] \n100%|██████████| 835/835 [00:07<00:00, 106.92it/s]\n100%|██████████| 836/836 [00:07<00:00, 108.75it/s]\n"
],
[
"len(train_tensors[0][0])",
"_____no_output_____"
]
],
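The division by 255 maps 8-bit pixel intensities into the interval [0, 1]. A minimal NumPy sketch of the same rescaling applied to a dummy batch:

```python
import numpy as np

# Dummy batch of two 4x4 RGB "images" with 8-bit intensities.
batch = np.random.randint(0, 256, size=(2, 4, 4, 3), dtype=np.uint8)

# Same rescaling as applied to the train/valid/test tensors above.
scaled = batch.astype('float32') / 255

assert scaled.dtype == np.float32
assert scaled.min() >= 0.0 and scaled.max() <= 1.0
```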
[
[
"### (IMPLEMENTATION) Model Architecture\n\nCreate a CNN to classify dog breed.  At the end of your code cell block, summarize the layers of your model by executing the line:\n \n    model.summary()\n\nWe have imported some Python modules to get you started, but feel free to import as many modules as you need.  If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs:\n\n__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step.  If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.\n\n__Answer:__ I chose to use the suggested architecture. I suppose that this architecture works fine, as it combines three convolutional layers with many filters each. That makes it possible for the CNN to detect various shapes in the images, which is necessary because dogs are highly variable in shape, color, and texture, both across and within breeds.",
"_____no_output_____"
]
],
[
[
"from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D\nfrom keras.layers import Dropout, Flatten, Dense\nfrom keras.models import Sequential\n\nmodel = Sequential()\n\n### TODO: Define your architecture.\n# Three conv/pool stages with increasing filter counts, then global pooling.\n# Only the first layer needs input_shape; later layers infer their input.\nmodel.add(Conv2D(filters = 16, kernel_size = 2, strides = 1, activation = 'relu', input_shape = (224,224,3)))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Conv2D(filters = 32, kernel_size = 2, strides = 1, activation = 'relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Conv2D(filters = 64, kernel_size = 2, strides = 1, activation = 'relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(GlobalAveragePooling2D())\n# softmax so categorical_crossentropy receives class probabilities\nmodel.add(Dense(len(dog_names), activation = 'softmax'))\n\nmodel.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 223, 223, 16) 208 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 111, 111, 16) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 110, 110, 32) 2080 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 55, 55, 32) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 54, 54, 64) 8256 \n_________________________________________________________________\nmax_pooling2d_4 (MaxPooling2 (None, 27, 27, 64) 0 \n_________________________________________________________________\nglobal_average_pooling2d_1 ( (None, 64) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 133) 8645 \n=================================================================\nTotal params: 19,189\nTrainable params: 19,189\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
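The spatial sizes in the summary follow from simple arithmetic: a 'valid' convolution with kernel size k and stride 1 shrinks a side from n to n - k + 1, and non-overlapping 2×2 max pooling floors it to n // 2. A small sketch reproducing the 224 → 27 progression printed above:

```python
def conv_valid(n, k=2, stride=1):
    # Output side length of a 'valid' (no padding) convolution.
    return (n - k) // stride + 1

def pool(n, size=2):
    # Output side length of non-overlapping max pooling.
    return n // size

side = 224
sides = []
for _ in range(3):            # three Conv2D + MaxPooling2D stages
    side = conv_valid(side)   # kernel_size=2, strides=1
    sides.append(side)
    side = pool(side)
    sides.append(side)

print(sides)  # [223, 111, 110, 55, 54, 27], matching the model summary
```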
[
[
"### Compile the Model",
"_____no_output_____"
]
],
[
[
"model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"### (IMPLEMENTATION) Train the Model\n\nTrain your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.\n\nYou are welcome to [augment the training data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), but this is not a requirement. ",
"_____no_output_____"
]
],
[
[
"from keras.callbacks import ModelCheckpoint \n\n### TODO: specify the number of epochs that you would like to use to train the model.\n\nepochs = 5\n\n### Do NOT modify the code below this line.\n\ncheckpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5', \n verbose=1, save_best_only=True)\n\nmodel.fit(train_tensors, train_targets, \n validation_data=(valid_tensors, valid_targets),\n epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)",
"Train on 6680 samples, validate on 835 samples\nEpoch 1/5\n6660/6680 [============================>.] - ETA: 0s - loss: 8.5331 - acc: 0.0071Epoch 00001: val_loss improved from inf to 8.72501, saving model to saved_models/weights.best.from_scratch.hdf5\n6680/6680 [==============================] - 21s 3ms/step - loss: 8.5293 - acc: 0.0070 - val_loss: 8.7250 - val_acc: 0.0072\nEpoch 2/5\n6660/6680 [============================>.] - ETA: 0s - loss: 8.2769 - acc: 0.0066Epoch 00002: val_loss did not improve\n6680/6680 [==============================] - 20s 3ms/step - loss: 8.2690 - acc: 0.0066 - val_loss: 8.7250 - val_acc: 0.0072\nEpoch 3/5\n6660/6680 [============================>.] - ETA: 0s - loss: 8.2648 - acc: 0.0066Epoch 00003: val_loss did not improve\n6680/6680 [==============================] - 20s 3ms/step - loss: 8.2690 - acc: 0.0066 - val_loss: 8.7250 - val_acc: 0.0072\nEpoch 4/5\n6660/6680 [============================>.] - ETA: 0s - loss: 8.2696 - acc: 0.0066Epoch 00004: val_loss did not improve\n6680/6680 [==============================] - 20s 3ms/step - loss: 8.2690 - acc: 0.0066 - val_loss: 8.7250 - val_acc: 0.0072\nEpoch 5/5\n6660/6680 [============================>.] - ETA: 0s - loss: 8.2575 - acc: 0.0066Epoch 00005: val_loss did not improve\n6680/6680 [==============================] - 20s 3ms/step - loss: 8.2690 - acc: 0.0066 - val_loss: 8.7250 - val_acc: 0.0072\n"
]
],
[
[
"### Load the Model with the Best Validation Loss",
"_____no_output_____"
]
],
[
[
"model.load_weights('saved_models/weights.best.from_scratch.hdf5')",
"_____no_output_____"
]
],
[
[
"### Test the Model\n\nTry out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.",
"_____no_output_____"
]
],
[
[
"# get index of predicted dog breed for each image in test set\ndog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]\n\n# report test accuracy\ntest_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)",
"Test accuracy: 0.5981%\n"
]
],
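The accuracy formula used above — compare each predicted index with the argmax of the one-hot target, then average — works on any pair of arrays. A self-contained sketch with toy data (the arrays here are invented for illustration):

```python
import numpy as np

# Toy one-hot targets for 4 samples over 3 classes, plus predicted indices.
targets = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
predicted_indices = np.array([0, 1, 1, 0])  # third sample is wrong

# Same computation as the test-accuracy cell above.
accuracy = 100 * np.sum(predicted_indices == np.argmax(targets, axis=1)) / len(predicted_indices)
print('Test accuracy: %.4f%%' % accuracy)  # 75.0000%
```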
[
[
"---\n<a id='step4'></a>\n## Step 4: Use a CNN to Classify Dog Breeds\n\nTo reduce training time without sacrificing accuracy, we show you how to train a CNN using transfer learning. In the following step, you will get a chance to use transfer learning to train your own CNN.\n\n### Obtain Bottleneck Features",
"_____no_output_____"
]
],
[
[
"bottleneck_features = np.load('bottleneck_features/DogVGG16Data.npz')\ntrain_VGG16 = bottleneck_features['train']\nvalid_VGG16 = bottleneck_features['valid']\ntest_VGG16 = bottleneck_features['test']",
"_____no_output_____"
]
],
[
[
"### Model Architecture\n\nThe model uses the the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.",
"_____no_output_____"
]
],
[
[
"VGG16_model = Sequential()\nVGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))\nVGG16_model.add(Dense(133, activation='softmax'))\n\nVGG16_model.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nglobal_average_pooling2d_2 ( (None, 512) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 133) 68229 \n=================================================================\nTotal params: 68,229\nTrainable params: 68,229\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
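The 68,229 parameters reported above come entirely from the dense layer: each of the 512 pooled VGG-16 channels connects to each of the 133 breed nodes, plus one bias per node. A quick check of that arithmetic:

```python
pooled_features = 512  # channels output by VGG-16's last convolutional block
num_breeds = 133

# weights + biases of the single Dense layer
dense_params = pooled_features * num_breeds + num_breeds
print(dense_params)  # 68229, matching the summary
```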
[
[
"### Compile the Model",
"_____no_output_____"
]
],
[
[
"VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"### Train the Model",
"_____no_output_____"
]
],
[
[
"checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5', \n verbose=1, save_best_only=True)\n\nVGG16_model.fit(train_VGG16, train_targets, \n validation_data=(valid_VGG16, valid_targets),\n epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)",
"Train on 6680 samples, validate on 835 samples\nEpoch 1/20\n6620/6680 [============================>.] - ETA: 0s - loss: 12.5338 - acc: 0.1178Epoch 00001: val_loss improved from inf to 11.42511, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 281us/step - loss: 12.5138 - acc: 0.1189 - val_loss: 11.4251 - val_acc: 0.1880\nEpoch 2/20\n6520/6680 [============================>.] - ETA: 0s - loss: 10.6843 - acc: 0.2543Epoch 00002: val_loss improved from 11.42511 to 10.63622, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 245us/step - loss: 10.6971 - acc: 0.2537 - val_loss: 10.6362 - val_acc: 0.2527\nEpoch 3/20\n6620/6680 [============================>.] - ETA: 0s - loss: 10.2328 - acc: 0.3066Epoch 00003: val_loss improved from 10.63622 to 10.38797, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 263us/step - loss: 10.2415 - acc: 0.3063 - val_loss: 10.3880 - val_acc: 0.2778\nEpoch 4/20\n6440/6680 [===========================>..] - ETA: 0s - loss: 9.9874 - acc: 0.3373 Epoch 00004: val_loss improved from 10.38797 to 10.05475, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 248us/step - loss: 9.9658 - acc: 0.3389 - val_loss: 10.0548 - val_acc: 0.3138\nEpoch 5/20\n6640/6680 [============================>.] - ETA: 0s - loss: 9.7272 - acc: 0.3639Epoch 00005: val_loss improved from 10.05475 to 9.90755, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 249us/step - loss: 9.7422 - acc: 0.3630 - val_loss: 9.9076 - val_acc: 0.3281\nEpoch 6/20\n6660/6680 [============================>.] 
- ETA: 0s - loss: 9.5834 - acc: 0.3791Epoch 00006: val_loss improved from 9.90755 to 9.78257, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 259us/step - loss: 9.5861 - acc: 0.3790 - val_loss: 9.7826 - val_acc: 0.3317\nEpoch 7/20\n6500/6680 [============================>.] - ETA: 0s - loss: 9.3900 - acc: 0.3911Epoch 00007: val_loss improved from 9.78257 to 9.64712, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 247us/step - loss: 9.3816 - acc: 0.3915 - val_loss: 9.6471 - val_acc: 0.3305\nEpoch 8/20\n6620/6680 [============================>.] - ETA: 0s - loss: 9.2130 - acc: 0.4089Epoch 00008: val_loss improved from 9.64712 to 9.54295, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 244us/step - loss: 9.1938 - acc: 0.4100 - val_loss: 9.5429 - val_acc: 0.3437\nEpoch 9/20\n6600/6680 [============================>.] - ETA: 0s - loss: 9.1149 - acc: 0.4214Epoch 00009: val_loss improved from 9.54295 to 9.48443, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 244us/step - loss: 9.1226 - acc: 0.4204 - val_loss: 9.4844 - val_acc: 0.3617\nEpoch 10/20\n6600/6680 [============================>.] - ETA: 0s - loss: 8.9957 - acc: 0.4273Epoch 00010: val_loss improved from 9.48443 to 9.47693, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 254us/step - loss: 8.9966 - acc: 0.4271 - val_loss: 9.4769 - val_acc: 0.3449\nEpoch 11/20\n6540/6680 [============================>.] - ETA: 0s - loss: 8.9119 - acc: 0.4353Epoch 00011: val_loss improved from 9.47693 to 9.38110, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 263us/step - loss: 8.9152 - acc: 0.4349 - val_loss: 9.3811 - val_acc: 0.3641\nEpoch 12/20\n6540/6680 [============================>.] 
- ETA: 0s - loss: 8.8470 - acc: 0.4396Epoch 00012: val_loss improved from 9.38110 to 9.34253, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 253us/step - loss: 8.8464 - acc: 0.4394 - val_loss: 9.3425 - val_acc: 0.3545\nEpoch 13/20\n6560/6680 [============================>.] - ETA: 0s - loss: 8.6777 - acc: 0.4474Epoch 00013: val_loss improved from 9.34253 to 9.22299, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 262us/step - loss: 8.6960 - acc: 0.4463 - val_loss: 9.2230 - val_acc: 0.3557\nEpoch 14/20\n6640/6680 [============================>.] - ETA: 0s - loss: 8.5157 - acc: 0.4580Epoch 00014: val_loss improved from 9.22299 to 9.10299, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 263us/step - loss: 8.5165 - acc: 0.4578 - val_loss: 9.1030 - val_acc: 0.3725\nEpoch 15/20\n6440/6680 [===========================>..] - ETA: 0s - loss: 8.3987 - acc: 0.4691Epoch 00015: val_loss improved from 9.10299 to 8.98148, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 258us/step - loss: 8.4204 - acc: 0.4678 - val_loss: 8.9815 - val_acc: 0.3760\nEpoch 16/20\n6640/6680 [============================>.] - ETA: 0s - loss: 8.3980 - acc: 0.4732Epoch 00016: val_loss did not improve\n6680/6680 [==============================] - 2s 241us/step - loss: 8.3996 - acc: 0.4729 - val_loss: 9.0012 - val_acc: 0.3868\nEpoch 17/20\n6660/6680 [============================>.] - ETA: 0s - loss: 8.3679 - acc: 0.4733Epoch 00017: val_loss improved from 8.98148 to 8.84088, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 256us/step - loss: 8.3670 - acc: 0.4734 - val_loss: 8.8409 - val_acc: 0.3892\nEpoch 18/20\n6640/6680 [============================>.] 
- ETA: 0s - loss: 8.2308 - acc: 0.4830Epoch 00018: val_loss improved from 8.84088 to 8.83098, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 263us/step - loss: 8.2328 - acc: 0.4828 - val_loss: 8.8310 - val_acc: 0.3916\nEpoch 19/20\n6460/6680 [============================>.] - ETA: 0s - loss: 8.2263 - acc: 0.4819Epoch 00019: val_loss did not improve\n6680/6680 [==============================] - 2s 261us/step - loss: 8.1902 - acc: 0.4841 - val_loss: 8.8345 - val_acc: 0.3988\nEpoch 20/20\n6460/6680 [============================>.] - ETA: 0s - loss: 8.0861 - acc: 0.4904Epoch 00020: val_loss improved from 8.83098 to 8.76129, saving model to saved_models/weights.best.VGG16.hdf5\n6680/6680 [==============================] - 2s 265us/step - loss: 8.0837 - acc: 0.4906 - val_loss: 8.7613 - val_acc: 0.3868\n"
]
],
[
[
"### Load the Model with the Best Validation Loss",
"_____no_output_____"
]
],
[
[
"VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')",
"_____no_output_____"
]
],
[
[
"### Test the Model\n\nNow, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.",
"_____no_output_____"
]
],
[
[
"# get index of predicted dog breed for each image in test set\nVGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]\n\n# report test accuracy\ntest_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)",
"Test accuracy: 40.6699%\n"
]
],
[
[
"### Predict Dog Breed with the Model",
"_____no_output_____"
]
],
[
[
"from extract_bottleneck_features import *\n\ndef VGG16_predict_breed(img_path):\n # extract bottleneck features\n bottleneck_feature = extract_VGG16(path_to_tensor(img_path))\n # obtain predicted vector\n predicted_vector = VGG16_model.predict(bottleneck_feature)\n # return dog breed that is predicted by the model\n return dog_names[np.argmax(predicted_vector)]",
"_____no_output_____"
]
],
[
[
"---\n<a id='step5'></a>\n## Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)\n\nYou will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.\n\nIn Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras:\n- [VGG-19](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogVGG19Data.npz) bottleneck features\n- [ResNet-50](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogResnet50Data.npz) bottleneck features\n- [Inception](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogInceptionV3Data.npz) bottleneck features\n- [Xception](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogXceptionData.npz) bottleneck features\n\nThe files are encoded as such:\n\n Dog{network}Data.npz\n \nwhere `{network}`, in the above filename, can be one of `VGG19`, `Resnet50`, `InceptionV3`, or `Xception`. Pick one of the above architectures, download the corresponding bottleneck features, and store the downloaded file in the `bottleneck_features/` folder in the repository.\n\n### (IMPLEMENTATION) Obtain Bottleneck Features\n\nIn the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following:\n\n bottleneck_features = np.load('bottleneck_features/Dog{network}Data.npz')\n train_{network} = bottleneck_features['train']\n valid_{network} = bottleneck_features['valid']\n test_{network} = bottleneck_features['test']",
"_____no_output_____"
]
],
[
[
"### TODO: Obtain bottleneck features from another pre-trained CNN.\nbottleneck_features = np.load('bottleneck_features/DogResnet50Data.npz')\ntrain_Resnet50 = bottleneck_features['train']\nvalid_Resnet50 = bottleneck_features['valid']\ntest_Resnet50 = bottleneck_features['test']",
"_____no_output_____"
]
],
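The bottleneck files are ordinary `.npz` archives: named arrays that `np.load` exposes like a dictionary. A minimal round-trip sketch of the same pattern, using dummy data and a temporary file in place of the real `DogResnet50Data.npz`:

```python
import os
import tempfile
import numpy as np

# Save dummy 'train'/'valid'/'test' splits the way the Dog{network}Data.npz files do.
dummy = {name: np.zeros((2, 1, 1, 4), dtype='float32') for name in ('train', 'valid', 'test')}
path = os.path.join(tempfile.mkdtemp(), 'DummyData.npz')
np.savez(path, **dummy)

# Load them back by key, as in the cell above.
bottleneck_features = np.load(path)
train = bottleneck_features['train']
assert train.shape == (2, 1, 1, 4)
```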
[
[
"### (IMPLEMENTATION) Model Architecture\n\nCreate a CNN to classify dog breed.  At the end of your code cell block, summarize the layers of your model by executing the line:\n    \n        <your model's name>.summary()\n   \n__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step.  Describe why you think the architecture is suitable for the current problem.\n\n__Answer:__ Because the data (dog images) is similar to the data used in the pretrained model (dogs were part of the images used to pretrain) and the data set of dog images is relatively small, I decided to only adjust the final part of the network. I added a global average pooling layer to bring the data into the right shape and then a fully connected layer with 133 nodes, one for each dog breed in the data set. After training, the model reached an accuracy above 80%, so I saw no need to redesign the architecture.\n\n",
"_____no_output_____"
]
],
[
[
"### TODO: Define your architecture.\nResnet50_model = Sequential()\nResnet50_model.add(GlobalAveragePooling2D(input_shape=train_Resnet50.shape[1:]))\nResnet50_model.add(Dense(133, activation='softmax'))\n\nResnet50_model.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nglobal_average_pooling2d_3 ( (None, 2048) 0 \n_________________________________________________________________\ndense_3 (Dense) (None, 133) 272517 \n=================================================================\nTotal params: 272,517\nTrainable params: 272,517\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"### (IMPLEMENTATION) Compile the Model",
"_____no_output_____"
]
],
[
[
"### TODO: Compile the model.\nResnet50_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])",
"_____no_output_____"
]
],
[
[
"### (IMPLEMENTATION) Train the Model\n\nTrain your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss. \n\nYou are welcome to [augment the training data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), but this is not a requirement. ",
"_____no_output_____"
]
],
[
[
"### TODO: Train the model.\ncheckpointer = ModelCheckpoint(filepath='saved_models/weights.best.Resnet50.hdf5', \n verbose=1, save_best_only=True)\n\nResnet50_model.fit(train_Resnet50, train_targets, \n validation_data=(valid_Resnet50, valid_targets),\n epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)",
"Train on 6680 samples, validate on 835 samples\nEpoch 1/20\n6600/6680 [============================>.] - ETA: 0s - loss: 1.6509 - acc: 0.5895Epoch 00001: val_loss improved from inf to 0.91785, saving model to saved_models/weights.best.Resnet50.hdf5\n6680/6680 [==============================] - 2s 266us/step - loss: 1.6401 - acc: 0.5919 - val_loss: 0.9178 - val_acc: 0.7341\nEpoch 2/20\n6540/6680 [============================>.] - ETA: 0s - loss: 0.4409 - acc: 0.8624Epoch 00002: val_loss improved from 0.91785 to 0.73692, saving model to saved_models/weights.best.Resnet50.hdf5\n6680/6680 [==============================] - 2s 232us/step - loss: 0.4398 - acc: 0.8624 - val_loss: 0.7369 - val_acc: 0.7844\nEpoch 3/20\n6440/6680 [===========================>..] - ETA: 0s - loss: 0.2714 - acc: 0.9138Epoch 00003: val_loss did not improve\n6680/6680 [==============================] - 1s 218us/step - loss: 0.2722 - acc: 0.9141 - val_loss: 0.8294 - val_acc: 0.7677\nEpoch 4/20\n6600/6680 [============================>.] - ETA: 0s - loss: 0.1823 - acc: 0.9414Epoch 00004: val_loss improved from 0.73692 to 0.70799, saving model to saved_models/weights.best.Resnet50.hdf5\n6680/6680 [==============================] - 1s 222us/step - loss: 0.1834 - acc: 0.9409 - val_loss: 0.7080 - val_acc: 0.8036\nEpoch 5/20\n6640/6680 [============================>.] - ETA: 0s - loss: 0.1254 - acc: 0.9586Epoch 00005: val_loss improved from 0.70799 to 0.64554, saving model to saved_models/weights.best.Resnet50.hdf5\n6680/6680 [==============================] - 2s 232us/step - loss: 0.1250 - acc: 0.9587 - val_loss: 0.6455 - val_acc: 0.8132\nEpoch 6/20\n6540/6680 [============================>.] - ETA: 0s - loss: 0.0910 - acc: 0.9699Epoch 00006: val_loss did not improve\n6680/6680 [==============================] - 1s 223us/step - loss: 0.0907 - acc: 0.9698 - val_loss: 0.7112 - val_acc: 0.8108\nEpoch 7/20\n6540/6680 [============================>.] 
- ETA: 0s - loss: 0.0640 - acc: 0.9801Epoch 00007: val_loss did not improve\n6680/6680 [==============================] - 2s 231us/step - loss: 0.0641 - acc: 0.9802 - val_loss: 0.6774 - val_acc: 0.8144\nEpoch 8/20\n6580/6680 [============================>.] - ETA: 0s - loss: 0.0469 - acc: 0.9865Epoch 00008: val_loss did not improve\n6680/6680 [==============================] - 1s 222us/step - loss: 0.0483 - acc: 0.9865 - val_loss: 0.7190 - val_acc: 0.7940\nEpoch 9/20\n6520/6680 [============================>.] - ETA: 0s - loss: 0.0363 - acc: 0.9880Epoch 00009: val_loss did not improve\n6680/6680 [==============================] - 2s 229us/step - loss: 0.0367 - acc: 0.9880 - val_loss: 0.7793 - val_acc: 0.8144\nEpoch 10/20\n6640/6680 [============================>.] - ETA: 0s - loss: 0.0267 - acc: 0.9925Epoch 00010: val_loss did not improve\n6680/6680 [==============================] - 2s 227us/step - loss: 0.0277 - acc: 0.9922 - val_loss: 0.7627 - val_acc: 0.8287\nEpoch 11/20\n6460/6680 [============================>.] - ETA: 0s - loss: 0.0239 - acc: 0.9937Epoch 00011: val_loss did not improve\n6680/6680 [==============================] - 2s 234us/step - loss: 0.0245 - acc: 0.9934 - val_loss: 0.7545 - val_acc: 0.8240\nEpoch 12/20\n6580/6680 [============================>.] - ETA: 0s - loss: 0.0191 - acc: 0.9947Epoch 00012: val_loss did not improve\n6680/6680 [==============================] - 1s 222us/step - loss: 0.0191 - acc: 0.9946 - val_loss: 0.7680 - val_acc: 0.8335\nEpoch 13/20\n6440/6680 [===========================>..] - ETA: 0s - loss: 0.0135 - acc: 0.9969Epoch 00013: val_loss did not improve\n6680/6680 [==============================] - 1s 218us/step - loss: 0.0137 - acc: 0.9969 - val_loss: 0.8177 - val_acc: 0.8204\nEpoch 14/20\n6580/6680 [============================>.] 
- ETA: 0s - loss: 0.0124 - acc: 0.9977Epoch 00014: val_loss did not improve\n6680/6680 [==============================] - 2s 235us/step - loss: 0.0127 - acc: 0.9976 - val_loss: 0.7938 - val_acc: 0.8251\nEpoch 15/20\n6560/6680 [============================>.] - ETA: 0s - loss: 0.0095 - acc: 0.9982Epoch 00015: val_loss did not improve\n6680/6680 [==============================] - 1s 222us/step - loss: 0.0099 - acc: 0.9978 - val_loss: 0.8483 - val_acc: 0.8192\nEpoch 16/20\n6640/6680 [============================>.] - ETA: 0s - loss: 0.0087 - acc: 0.9970Epoch 00016: val_loss did not improve\n6680/6680 [==============================] - 2s 239us/step - loss: 0.0086 - acc: 0.9970 - val_loss: 0.8864 - val_acc: 0.8240\nEpoch 17/20\n6560/6680 [============================>.] - ETA: 0s - loss: 0.0074 - acc: 0.9980Epoch 00017: val_loss did not improve\n6680/6680 [==============================] - 1s 221us/step - loss: 0.0076 - acc: 0.9978 - val_loss: 0.8711 - val_acc: 0.8251\nEpoch 18/20\n6640/6680 [============================>.] - ETA: 0s - loss: 0.0064 - acc: 0.9988Epoch 00018: val_loss did not improve\n6680/6680 [==============================] - 1s 219us/step - loss: 0.0064 - acc: 0.9988 - val_loss: 0.9093 - val_acc: 0.8240\nEpoch 19/20\n6580/6680 [============================>.] - ETA: 0s - loss: 0.0060 - acc: 0.9983Epoch 00019: val_loss did not improve\n6680/6680 [==============================] - 2s 226us/step - loss: 0.0060 - acc: 0.9984 - val_loss: 0.9206 - val_acc: 0.8204\nEpoch 20/20\n6440/6680 [===========================>..] - ETA: 0s - loss: 0.0057 - acc: 0.9989Epoch 00020: val_loss did not improve\n6680/6680 [==============================] - 2s 226us/step - loss: 0.0062 - acc: 0.9988 - val_loss: 0.9193 - val_acc: 0.8275\n"
]
],
[
[
"### (IMPLEMENTATION) Load the Model with the Best Validation Loss",
"_____no_output_____"
]
],
[
[
"### TODO: Load the model weights with the best validation loss.\nResnet50_model.load_weights('saved_models/weights.best.Resnet50.hdf5')",
"_____no_output_____"
]
],
[
[
"### (IMPLEMENTATION) Test the Model\n\nTry out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.",
"_____no_output_____"
]
],
[
[
"### TODO: Calculate classification accuracy on the test dataset.\nResnet50_predictions = [np.argmax(Resnet50_model.predict(np.expand_dims(feature, axis=0))) for feature in test_Resnet50]\n\n# report test accuracy\ntest_accuracy = 100*np.sum(np.array(Resnet50_predictions)==np.argmax(test_targets, axis=1))/len(Resnet50_predictions)\nprint('Test accuracy: %.4f%%' % test_accuracy)",
"Test accuracy: 80.3828%\n"
]
],
[
[
"### (IMPLEMENTATION) Predict Dog Breed with the Model\n\nWrite a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan_hound`, etc) that is predicted by your model.  \n\nSimilar to the analogous function in Step 4, your function should have three steps:\n1. Extract the bottleneck features corresponding to the chosen CNN model.\n2. Supply the bottleneck features as input to the model to return the predicted vector.  Note that the argmax of this prediction vector gives the index of the predicted dog breed.\n3. Use the `dog_names` array defined in Step 0 of this notebook to return the corresponding breed.\n\nThe functions to extract the bottleneck features can be found in `extract_bottleneck_features.py`, and they have been imported in an earlier code cell.  To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function\n\n    extract_{network}\n    \nwhere `{network}`, in the above function name, should be one of `VGG19`, `Resnet50`, `InceptionV3`, or `Xception`.",
"_____no_output_____"
]
],
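The three steps above (extract bottleneck features, predict, map the argmax to a breed name) can be sketched with the extractor and model passed in as callables. This is a hypothetical, dependency-free version of the structure — `predict_breed`, the stub extractor, and `StubModel` are mine — where in the real notebook `extract_Resnet50` and `Resnet50_model` would supply the callables:

```python
import numpy as np

def predict_breed(img_tensor, extract_fn, model, dog_names):
    """Generic version of the three-step breed-prediction pipeline."""
    bottleneck_feature = extract_fn(img_tensor)           # step 1: bottleneck features
    predicted_vector = model.predict(bottleneck_feature)  # step 2: class probabilities
    return dog_names[np.argmax(predicted_vector)]         # step 3: index -> breed name

# Stubs standing in for extract_Resnet50 / Resnet50_model, for illustration only.
class StubModel:
    def predict(self, features):
        return np.array([[0.1, 0.7, 0.2]])  # pretends class 1 is most likely

breed = predict_breed(np.zeros((1, 224, 224, 3)), lambda t: t, StubModel(),
                      ['Affenpinscher', 'Afghan_hound', 'Akita'])
print(breed)  # Afghan_hound
```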
[
[
"# Explore pretrained model:\nResNet50(weights = 'imagenet').summary()",
"__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_2 (InputLayer) (None, 224, 224, 3) 0 \n__________________________________________________________________________________________________\nconv1 (Conv2D) (None, 112, 112, 64) 9472 input_2[0][0] \n__________________________________________________________________________________________________\nbn_conv1 (BatchNormalization) (None, 112, 112, 64) 256 conv1[0][0] \n__________________________________________________________________________________________________\nactivation_50 (Activation) (None, 112, 112, 64) 0 bn_conv1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_5 (MaxPooling2D) (None, 55, 55, 64) 0 activation_50[0][0] \n__________________________________________________________________________________________________\nres2a_branch2a (Conv2D) (None, 55, 55, 64) 4160 max_pooling2d_5[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2a (BatchNormalizati (None, 55, 55, 64) 256 res2a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_51 (Activation) (None, 55, 55, 64) 0 bn2a_branch2a[0][0] \n__________________________________________________________________________________________________\nres2a_branch2b (Conv2D) (None, 55, 55, 64) 36928 activation_51[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2b (BatchNormalizati (None, 55, 55, 64) 256 res2a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_52 (Activation) (None, 55, 55, 64) 0 bn2a_branch2b[0][0] 
\n__________________________________________________________________________________________________\nres2a_branch2c (Conv2D) (None, 55, 55, 256) 16640 activation_52[0][0] \n__________________________________________________________________________________________________\nres2a_branch1 (Conv2D) (None, 55, 55, 256) 16640 max_pooling2d_5[0][0] \n__________________________________________________________________________________________________\nbn2a_branch2c (BatchNormalizati (None, 55, 55, 256) 1024 res2a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn2a_branch1 (BatchNormalizatio (None, 55, 55, 256) 1024 res2a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_17 (Add) (None, 55, 55, 256) 0 bn2a_branch2c[0][0] \n bn2a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_53 (Activation) (None, 55, 55, 256) 0 add_17[0][0] \n__________________________________________________________________________________________________\nres2b_branch2a (Conv2D) (None, 55, 55, 64) 16448 activation_53[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2a (BatchNormalizati (None, 55, 55, 64) 256 res2b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_54 (Activation) (None, 55, 55, 64) 0 bn2b_branch2a[0][0] \n__________________________________________________________________________________________________\nres2b_branch2b (Conv2D) (None, 55, 55, 64) 36928 activation_54[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2b (BatchNormalizati (None, 55, 55, 64) 256 res2b_branch2b[0][0] 
\n__________________________________________________________________________________________________\nactivation_55 (Activation) (None, 55, 55, 64) 0 bn2b_branch2b[0][0] \n__________________________________________________________________________________________________\nres2b_branch2c (Conv2D) (None, 55, 55, 256) 16640 activation_55[0][0] \n__________________________________________________________________________________________________\nbn2b_branch2c (BatchNormalizati (None, 55, 55, 256) 1024 res2b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_18 (Add) (None, 55, 55, 256) 0 bn2b_branch2c[0][0] \n activation_53[0][0] \n__________________________________________________________________________________________________\nactivation_56 (Activation) (None, 55, 55, 256) 0 add_18[0][0] \n__________________________________________________________________________________________________\nres2c_branch2a (Conv2D) (None, 55, 55, 64) 16448 activation_56[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2a (BatchNormalizati (None, 55, 55, 64) 256 res2c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_57 (Activation) (None, 55, 55, 64) 0 bn2c_branch2a[0][0] \n__________________________________________________________________________________________________\nres2c_branch2b (Conv2D) (None, 55, 55, 64) 36928 activation_57[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2b (BatchNormalizati (None, 55, 55, 64) 256 res2c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_58 (Activation) (None, 55, 55, 64) 0 bn2c_branch2b[0][0] 
\n__________________________________________________________________________________________________\nres2c_branch2c (Conv2D) (None, 55, 55, 256) 16640 activation_58[0][0] \n__________________________________________________________________________________________________\nbn2c_branch2c (BatchNormalizati (None, 55, 55, 256) 1024 res2c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_19 (Add) (None, 55, 55, 256) 0 bn2c_branch2c[0][0] \n activation_56[0][0] \n__________________________________________________________________________________________________\nactivation_59 (Activation) (None, 55, 55, 256) 0 add_19[0][0] \n__________________________________________________________________________________________________\nres3a_branch2a (Conv2D) (None, 28, 28, 128) 32896 activation_59[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_60 (Activation) (None, 28, 28, 128) 0 bn3a_branch2a[0][0] \n__________________________________________________________________________________________________\nres3a_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_60[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_61 (Activation) (None, 28, 28, 128) 0 bn3a_branch2b[0][0] \n__________________________________________________________________________________________________\nres3a_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_61[0][0] 
\n__________________________________________________________________________________________________\nres3a_branch1 (Conv2D) (None, 28, 28, 512) 131584 activation_59[0][0] \n__________________________________________________________________________________________________\nbn3a_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn3a_branch1 (BatchNormalizatio (None, 28, 28, 512) 2048 res3a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_20 (Add) (None, 28, 28, 512) 0 bn3a_branch2c[0][0] \n bn3a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_62 (Activation) (None, 28, 28, 512) 0 add_20[0][0] \n__________________________________________________________________________________________________\nres3b_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_62[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_63 (Activation) (None, 28, 28, 128) 0 bn3b_branch2a[0][0] \n__________________________________________________________________________________________________\nres3b_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_63[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_64 (Activation) (None, 28, 28, 128) 0 bn3b_branch2b[0][0] 
\n__________________________________________________________________________________________________\nres3b_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_64[0][0] \n__________________________________________________________________________________________________\nbn3b_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_21 (Add) (None, 28, 28, 512) 0 bn3b_branch2c[0][0] \n activation_62[0][0] \n__________________________________________________________________________________________________\nactivation_65 (Activation) (None, 28, 28, 512) 0 add_21[0][0] \n__________________________________________________________________________________________________\nres3c_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_65[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_66 (Activation) (None, 28, 28, 128) 0 bn3c_branch2a[0][0] \n__________________________________________________________________________________________________\nres3c_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_66[0][0] \n__________________________________________________________________________________________________\nbn3c_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_67 (Activation) (None, 28, 28, 128) 0 bn3c_branch2b[0][0] \n__________________________________________________________________________________________________\nres3c_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_67[0][0] 
\n__________________________________________________________________________________________________\nbn3c_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_22 (Add) (None, 28, 28, 512) 0 bn3c_branch2c[0][0] \n activation_65[0][0] \n__________________________________________________________________________________________________\nactivation_68 (Activation) (None, 28, 28, 512) 0 add_22[0][0] \n__________________________________________________________________________________________________\nres3d_branch2a (Conv2D) (None, 28, 28, 128) 65664 activation_68[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2a (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_69 (Activation) (None, 28, 28, 128) 0 bn3d_branch2a[0][0] \n__________________________________________________________________________________________________\nres3d_branch2b (Conv2D) (None, 28, 28, 128) 147584 activation_69[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2b (BatchNormalizati (None, 28, 28, 128) 512 res3d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_70 (Activation) (None, 28, 28, 128) 0 bn3d_branch2b[0][0] \n__________________________________________________________________________________________________\nres3d_branch2c (Conv2D) (None, 28, 28, 512) 66048 activation_70[0][0] \n__________________________________________________________________________________________________\nbn3d_branch2c (BatchNormalizati (None, 28, 28, 512) 2048 res3d_branch2c[0][0] 
\n__________________________________________________________________________________________________\nadd_23 (Add) (None, 28, 28, 512) 0 bn3d_branch2c[0][0] \n activation_68[0][0] \n__________________________________________________________________________________________________\nactivation_71 (Activation) (None, 28, 28, 512) 0 add_23[0][0] \n__________________________________________________________________________________________________\nres4a_branch2a (Conv2D) (None, 14, 14, 256) 131328 activation_71[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_72 (Activation) (None, 14, 14, 256) 0 bn4a_branch2a[0][0] \n__________________________________________________________________________________________________\nres4a_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_72[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_73 (Activation) (None, 14, 14, 256) 0 bn4a_branch2b[0][0] \n__________________________________________________________________________________________________\nres4a_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_73[0][0] \n__________________________________________________________________________________________________\nres4a_branch1 (Conv2D) (None, 14, 14, 1024) 525312 activation_71[0][0] \n__________________________________________________________________________________________________\nbn4a_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4a_branch2c[0][0] 
\n__________________________________________________________________________________________________\nbn4a_branch1 (BatchNormalizatio (None, 14, 14, 1024) 4096 res4a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_24 (Add) (None, 14, 14, 1024) 0 bn4a_branch2c[0][0] \n bn4a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_74 (Activation) (None, 14, 14, 1024) 0 add_24[0][0] \n__________________________________________________________________________________________________\nres4b_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_74[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_75 (Activation) (None, 14, 14, 256) 0 bn4b_branch2a[0][0] \n__________________________________________________________________________________________________\nres4b_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_75[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_76 (Activation) (None, 14, 14, 256) 0 bn4b_branch2b[0][0] \n__________________________________________________________________________________________________\nres4b_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_76[0][0] \n__________________________________________________________________________________________________\nbn4b_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4b_branch2c[0][0] 
\n__________________________________________________________________________________________________\nadd_25 (Add) (None, 14, 14, 1024) 0 bn4b_branch2c[0][0] \n activation_74[0][0] \n__________________________________________________________________________________________________\nactivation_77 (Activation) (None, 14, 14, 1024) 0 add_25[0][0] \n__________________________________________________________________________________________________\nres4c_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_77[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_78 (Activation) (None, 14, 14, 256) 0 bn4c_branch2a[0][0] \n__________________________________________________________________________________________________\nres4c_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_78[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_79 (Activation) (None, 14, 14, 256) 0 bn4c_branch2b[0][0] \n__________________________________________________________________________________________________\nres4c_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_79[0][0] \n__________________________________________________________________________________________________\nbn4c_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_26 (Add) (None, 14, 14, 1024) 0 bn4c_branch2c[0][0] \n activation_77[0][0] 
\n__________________________________________________________________________________________________\nactivation_80 (Activation) (None, 14, 14, 1024) 0 add_26[0][0] \n__________________________________________________________________________________________________\nres4d_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_80[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_81 (Activation) (None, 14, 14, 256) 0 bn4d_branch2a[0][0] \n__________________________________________________________________________________________________\nres4d_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_81[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4d_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_82 (Activation) (None, 14, 14, 256) 0 bn4d_branch2b[0][0] \n__________________________________________________________________________________________________\nres4d_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_82[0][0] \n__________________________________________________________________________________________________\nbn4d_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4d_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_27 (Add) (None, 14, 14, 1024) 0 bn4d_branch2c[0][0] \n activation_80[0][0] \n__________________________________________________________________________________________________\nactivation_83 (Activation) (None, 14, 14, 1024) 0 add_27[0][0] 
\n__________________________________________________________________________________________________\nres4e_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_83[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_84 (Activation) (None, 14, 14, 256) 0 bn4e_branch2a[0][0] \n__________________________________________________________________________________________________\nres4e_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_84[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4e_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_85 (Activation) (None, 14, 14, 256) 0 bn4e_branch2b[0][0] \n__________________________________________________________________________________________________\nres4e_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_85[0][0] \n__________________________________________________________________________________________________\nbn4e_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4e_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_28 (Add) (None, 14, 14, 1024) 0 bn4e_branch2c[0][0] \n activation_83[0][0] \n__________________________________________________________________________________________________\nactivation_86 (Activation) (None, 14, 14, 1024) 0 add_28[0][0] \n__________________________________________________________________________________________________\nres4f_branch2a (Conv2D) (None, 14, 14, 256) 262400 activation_86[0][0] 
\n__________________________________________________________________________________________________\nbn4f_branch2a (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_87 (Activation) (None, 14, 14, 256) 0 bn4f_branch2a[0][0] \n__________________________________________________________________________________________________\nres4f_branch2b (Conv2D) (None, 14, 14, 256) 590080 activation_87[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2b (BatchNormalizati (None, 14, 14, 256) 1024 res4f_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_88 (Activation) (None, 14, 14, 256) 0 bn4f_branch2b[0][0] \n__________________________________________________________________________________________________\nres4f_branch2c (Conv2D) (None, 14, 14, 1024) 263168 activation_88[0][0] \n__________________________________________________________________________________________________\nbn4f_branch2c (BatchNormalizati (None, 14, 14, 1024) 4096 res4f_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_29 (Add) (None, 14, 14, 1024) 0 bn4f_branch2c[0][0] \n activation_86[0][0] \n__________________________________________________________________________________________________\nactivation_89 (Activation) (None, 14, 14, 1024) 0 add_29[0][0] \n__________________________________________________________________________________________________\nres5a_branch2a (Conv2D) (None, 7, 7, 512) 524800 activation_89[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2a[0][0] 
\n__________________________________________________________________________________________________\nactivation_90 (Activation) (None, 7, 7, 512) 0 bn5a_branch2a[0][0] \n__________________________________________________________________________________________________\nres5a_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_90[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5a_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_91 (Activation) (None, 7, 7, 512) 0 bn5a_branch2b[0][0] \n__________________________________________________________________________________________________\nres5a_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_91[0][0] \n__________________________________________________________________________________________________\nres5a_branch1 (Conv2D) (None, 7, 7, 2048) 2099200 activation_89[0][0] \n__________________________________________________________________________________________________\nbn5a_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5a_branch2c[0][0] \n__________________________________________________________________________________________________\nbn5a_branch1 (BatchNormalizatio (None, 7, 7, 2048) 8192 res5a_branch1[0][0] \n__________________________________________________________________________________________________\nadd_30 (Add) (None, 7, 7, 2048) 0 bn5a_branch2c[0][0] \n bn5a_branch1[0][0] \n__________________________________________________________________________________________________\nactivation_92 (Activation) (None, 7, 7, 2048) 0 add_30[0][0] \n__________________________________________________________________________________________________\nres5b_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_92[0][0] 
\n__________________________________________________________________________________________________\nbn5b_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2a[0][0] \n__________________________________________________________________________________________________\nactivation_93 (Activation) (None, 7, 7, 512) 0 bn5b_branch2a[0][0] \n__________________________________________________________________________________________________\nres5b_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_93[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5b_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_94 (Activation) (None, 7, 7, 512) 0 bn5b_branch2b[0][0] \n__________________________________________________________________________________________________\nres5b_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_94[0][0] \n__________________________________________________________________________________________________\nbn5b_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5b_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_31 (Add) (None, 7, 7, 2048) 0 bn5b_branch2c[0][0] \n activation_92[0][0] \n__________________________________________________________________________________________________\nactivation_95 (Activation) (None, 7, 7, 2048) 0 add_31[0][0] \n__________________________________________________________________________________________________\nres5c_branch2a (Conv2D) (None, 7, 7, 512) 1049088 activation_95[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2a (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2a[0][0] 
\n__________________________________________________________________________________________________\nactivation_96 (Activation) (None, 7, 7, 512) 0 bn5c_branch2a[0][0] \n__________________________________________________________________________________________________\nres5c_branch2b (Conv2D) (None, 7, 7, 512) 2359808 activation_96[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2b (BatchNormalizati (None, 7, 7, 512) 2048 res5c_branch2b[0][0] \n__________________________________________________________________________________________________\nactivation_97 (Activation) (None, 7, 7, 512) 0 bn5c_branch2b[0][0] \n__________________________________________________________________________________________________\nres5c_branch2c (Conv2D) (None, 7, 7, 2048) 1050624 activation_97[0][0] \n__________________________________________________________________________________________________\nbn5c_branch2c (BatchNormalizati (None, 7, 7, 2048) 8192 res5c_branch2c[0][0] \n__________________________________________________________________________________________________\nadd_32 (Add) (None, 7, 7, 2048) 0 bn5c_branch2c[0][0] \n activation_95[0][0] \n__________________________________________________________________________________________________\nactivation_98 (Activation) (None, 7, 7, 2048) 0 add_32[0][0] \n__________________________________________________________________________________________________\navg_pool (AveragePooling2D) (None, 1, 1, 2048) 0 activation_98[0][0] \n__________________________________________________________________________________________________\nflatten_2 (Flatten) (None, 2048) 0 avg_pool[0][0] \n__________________________________________________________________________________________________\nfc1000 (Dense) (None, 1000) 2049000 flatten_2[0][0] \n==================================================================================================\nTotal params: 25,636,712\nTrainable 
params: 25,583,592\nNon-trainable params: 53,120\n__________________________________________________________________________________________________\n"
],
[
"### TODO: Write a function that takes a path to an image as input\n### and returns the dog breed that is predicted by the model.\n\nfrom keras.applications.resnet50 import ResNet50, preprocess_input\nfrom keras.preprocessing import image\n\ndef predict_dog_breed(image_path):\n    '''\n    Input: path to an image.\n    \n    The function first loads the image and passes it through the pretrained\n    ResNet50 network to create bottleneck features. It then feeds those\n    features into the trained Resnet50_model to predict a dog breed id,\n    which is mapped to the corresponding breed name.\n    \n    Output: the breed predicted for the dog in the image.\n    '''\n    model_ResNet50_pretrained = ResNet50(weights='imagenet', include_top=False)\n    \n    # 1 Extract bottleneck features\n    image_array = image.load_img(image_path, target_size=(224, 224))\n    image_array = image.img_to_array(image_array)\n    image_array = np.expand_dims(image_array, axis=0)\n    image_array = preprocess_input(image_array)\n    \n    bottleneck_features = model_ResNet50_pretrained.predict(image_array)\n    \n    # 2 Feed bottleneck features into the model\n    dog_breed_id = np.argmax(Resnet50_model.predict(np.expand_dims(bottleneck_features[0], axis=0)))\n    \n    # 3 Find and return the corresponding dog breed\n    return dog_names[dog_breed_id].split('.')[1]\n    \n# I read the hint on how to extract bottleneck features too late, which is why I provide a different solution.\n# This link showed me how to load the pretrained ResNet50 model to obtain bottleneck features:\n# https://gist.github.com/Thimira/6dc1da782b0dca43485958dbee12a757\n\n# Here I found how to set include_top = False:\n# https://keras.io/applications/#resnet",
"_____no_output_____"
],
[
"# Test the model on a sample image:\nid1 = 21 \nimage_path = train_files[id1]\nprint(\"The dog's name is\", dog_names[np.argmax(train_targets[id1])].split('.')[1] + '.')\nprint('My guess is', predict_dog_breed(image_path) + '.')",
"The dog's name is Dalmatian.\nDownloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5\n94658560/94653016 [==============================] - 99s 1us/step\nMy guess is Dalmatian.\n"
]
],
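[
[
"*Note (added sketch):* `predict_dog_breed` above reloads the pretrained ResNet50 weights on every call, which makes repeated predictions slow. The cell below is a minimal, hypothetical refactor that caches the feature extractor once; it reuses the notebook's own names (`Resnet50_model`, `dog_names`, `np`) and is an illustration, not part of the graded solution.",
"_____no_output_____"
]
],
[
[
"### Added sketch: cache the ResNet50 feature extractor so repeated calls are fast.\n### Assumes Resnet50_model, dog_names and np are defined earlier in this notebook.\nfrom keras.applications.resnet50 import ResNet50, preprocess_input\nfrom keras.preprocessing import image\n\nresnet50_extractor = ResNet50(weights='imagenet', include_top=False)\n\ndef predict_dog_breed_cached(image_path):\n    '''Same steps as predict_dog_breed, but reuses the cached extractor.'''\n    img = image.load_img(image_path, target_size=(224, 224))\n    x = np.expand_dims(image.img_to_array(img), axis=0)\n    bottleneck_features = resnet50_extractor.predict(preprocess_input(x))\n    dog_breed_id = np.argmax(Resnet50_model.predict(bottleneck_features))\n    return dog_names[dog_breed_id].split('.')[1]",
"_____no_output_____"
]
],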
[
[
"---\n<a id='step6'></a>\n## Step 6: Write your Algorithm\n\nWrite an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,\n- if a __dog__ is detected in the image, return the predicted breed.\n- if a __human__ is detected in the image, return the resembling dog breed.\n- if __neither__ is detected in the image, provide output that indicates an error.\n\nYou are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You are __required__ to use your CNN from Step 5 to predict dog breed. \n\nA sample image and output for our algorithm is provided below, but feel free to design your own user experience!\n\n\n\nThis photo looks like an Afghan Hound.\n### (IMPLEMENTATION) Write your Algorithm",
"_____no_output_____"
]
],
[
[
"### TODO: Write your algorithm.\n### Feel free to use as many code cells as needed.\nid1 = 201 \nimage_path = train_files[id1]\n\nif dog_detector(image_path):\n print('This dog must be a', predict_dog_breed(image_path))\nelif face_detector(image_path):\n print('Wow, this person looks similar to a', predict_dog_breed(image_path))\nelse:\n print('Sorry, your image does not contain a dog or even a face. Try another one!')",
"This dog must be a Greyhound\n"
]
],
[
[
"---\n<a id='step7'></a>\n## Step 7: Test Your Algorithm\n\nIn this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that __you__ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?\n\n### (IMPLEMENTATION) Test Your Algorithm on Sample Images!\n\nTest your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images. \n\n__Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.\n\n__Answer:__ The output of my algorithm is better than I would have expected. I picked three random dog images from the web and tested the results, which looked fantastic! But nevertheless there is always room for improvement. My first starting point would be sample size. We have about 8000 images which doesn't sound too bad, but if you recall that we have 133 different dog breeds in the data set, 8000 is not too much. My second idea is similar. So far we did not incorporate the fact that different breeds might occur with different frequencies in the data set, which also has an impact on prediction accuracy. Finally I would consider playing around with some tuning parameters (additional layers, stride, padding, ...) to improve the model's accuracy.",
"_____no_output_____"
]
],
[
[
"## TODO: Execute your algorithm from Step 6 on\n## at least 6 images on your computer.\n## Feel free to use as many code cells as needed.\n\nimage_path = '/home/workspace/dog-project/images/Untitled Folder/Dog1.jpeg'\n\nif dog_detector(image_path):\n print('This dog must be a', predict_dog_breed(image_path))\nelif face_detector(image_path):\n print('Wow, this person looks similar to a', predict_dog_breed(image_path))\nelse:\n print('Sorry, your image does not contain a dog or even a face. Try another one!')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecda02380230b1b37d7cb08675b088a39c3ee401 | 8,013 | ipynb | Jupyter Notebook | notebooks/road_following/train_model.ipynb | WhoseAI/jetbot-cn | ce5ed5574bc5c98fa56d3725fe96d28bcd5a4de6 | [
"MIT"
] | 4 | 2020-03-03T09:03:44.000Z | 2021-07-19T07:20:13.000Z | notebooks/road_following/train_model.ipynb | WhoseAI/jetbot-cn | ce5ed5574bc5c98fa56d3725fe96d28bcd5a4de6 | [
"MIT"
] | null | null | null | notebooks/road_following/train_model.ipynb | WhoseAI/jetbot-cn | ce5ed5574bc5c98fa56d3725fe96d28bcd5a4de6 | [
"MIT"
] | 4 | 2020-08-01T08:06:09.000Z | 2021-12-14T13:11:34.000Z | 27.919861 | 173 | 0.548234 | [
[
[
"# Road Follower - Train Model\nIn this notebook we will train a neural network to take an input image and output a set of x, y values corresponding to a target.\nWe will use the PyTorch deep learning framework to train a ResNet18 neural network architecture model for the road follower application.",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.optim as optim\nimport torch.nn.functional as F\nimport torchvision\nimport torchvision.datasets as datasets\nimport torchvision.models as models\nimport torchvision.transforms as transforms\nimport glob\nimport PIL.Image\nimport os\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"### Download and extract data\nBefore you start, you should upload the ``road_following.zip`` file that you created in the ``data_collection.ipynb`` notebook on the robot.\n> If you are training on the JetBot where the data was collected, you can skip this!\n\nYou should then extract this dataset by calling the command below:",
"_____no_output_____"
]
],
[
[
"!unzip -q road_following.zip",
"_____no_output_____"
]
],
[
[
"You should see a folder named ``dataset_all`` appear in the file browser.",
"_____no_output_____"
],
[
"### Create dataset instance\nHere we create a custom ``torch.utils.data.Dataset`` implementation, which implements the ``__len__`` and ``__getitem__`` functions. This class is responsible for loading images from the image filenames and parsing the x, y values. Because we implement the ``torch.utils.data.Dataset`` class, we can use all of the torch data utilities :)\n\nWe hard-coded some transformations (like color jitter) into our dataset. We made random horizontal flips optional (in case you want to follow a non-symmetric road, for example a road where the robot needs to 'stay right'). If it doesn't matter whether the robot follows some convention, you could enable flips to augment the dataset.",
"_____no_output_____"
]
],
[
[
"def get_x(path):\n \"\"\"Gets the x value from the image filename\"\"\"\n return (float(int(path[3:6])) - 50.0) / 50.0\n\ndef get_y(path):\n \"\"\"Gets the y value from the image filename\"\"\"\n return (float(int(path[7:10])) - 50.0) / 50.0\n\nclass XYDataset(torch.utils.data.Dataset):\n \n def __init__(self, directory, random_hflips=False):\n self.directory = directory\n self.random_hflips = random_hflips\n self.image_paths = glob.glob(os.path.join(self.directory, '*.jpg'))\n self.color_jitter = transforms.ColorJitter(0.3, 0.3, 0.3, 0.3)\n \n def __len__(self):\n return len(self.image_paths)\n \n def __getitem__(self, idx):\n image_path = self.image_paths[idx]\n \n image = PIL.Image.open(image_path)\n x = float(get_x(os.path.basename(image_path)))\n y = float(get_y(os.path.basename(image_path)))\n \n if float(np.random.rand(1)) > 0.5:\n image = transforms.functional.hflip(image)\n x = -x\n \n image = self.color_jitter(image)\n image = transforms.functional.resize(image, (224, 224))\n image = transforms.functional.to_tensor(image)\n image = image.numpy()[::-1].copy()\n image = torch.from_numpy(image)\n image = transforms.functional.normalize(image, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n \n return image, torch.tensor([x, y]).float()\n \ndataset = XYDataset('dataset_xy', random_hflips=False)",
"_____no_output_____"
]
],
[
[
"### Split dataset into train and test sets\nOnce we have read the dataset, we will split it into train and test sets. In this example we use a 90%-10% train-test split; the test set will be used to verify the accuracy of the model we train.",
"_____no_output_____"
]
],
[
[
"test_percent = 0.1\nnum_test = int(test_percent * len(dataset))\ntrain_dataset, test_dataset = torch.utils.data.random_split(dataset, [len(dataset) - num_test, num_test])",
"_____no_output_____"
]
],
[
[
"### Create data loaders to load data in batches\n\nWe use the ``DataLoader`` class to load data in batches, shuffle the data, and allow the use of multiple subprocesses. In this example we use a batch size of 16. The batch size will depend on the memory available to your GPU, and it can affect the accuracy of the model.",
"_____no_output_____"
]
],
[
[
"train_loader = torch.utils.data.DataLoader(\n train_dataset,\n batch_size=16,\n shuffle=True,\n num_workers=4\n)\n\ntest_loader = torch.utils.data.DataLoader(\n test_dataset,\n batch_size=16,\n shuffle=True,\n num_workers=4\n)",
"_____no_output_____"
]
],
[
[
"### Define the neural network model\nWe use the ResNet-18 model from PyTorch TorchVision. In a process called transfer learning, we can repurpose a pre-trained model (trained on millions of images) for a new task that may have much less data available.\nMore details on ResNet-18: https://github.com/pytorch/vision/blob/master/torchvision/models/ResNet.py\nMore details on transfer learning: https://www.youtube.com/watch?v=yofjFQddwHE",
"_____no_output_____"
]
],
[
[
"model = models.resnet18(pretrained=True)",
"_____no_output_____"
]
],
[
[
"The ResNet model has a fully connected (fc) final layer with 512 ``in_features``, and we train for regression with ``out_features`` set to 2 (the x and y target values).\n\nFinally, we transfer our model to the GPU for execution.",
"_____no_output_____"
]
],
[
[
"model.fc = torch.nn.Linear(512, 2)\ndevice = torch.device('cuda')\nmodel = model.to(device)",
"_____no_output_____"
]
],
[
[
"### Train the regression model:\nWe train for 70 epochs and save the best model whenever the test loss is reduced.",
"_____no_output_____"
]
],
[
[
"NUM_EPOCHS = 70\nBEST_MODEL_PATH = 'best_steering_model_xy.pth'\nbest_loss = 1e9\n\noptimizer = optim.Adam(model.parameters())\n\nfor epoch in range(NUM_EPOCHS):\n \n model.train()\n train_loss = 0.0\n for images, labels in iter(train_loader):\n images = images.to(device)\n labels = labels.to(device)\n optimizer.zero_grad()\n outputs = model(images)\n loss = F.mse_loss(outputs, labels)\n train_loss += loss\n loss.backward()\n optimizer.step()\n train_loss /= len(train_loader)\n \n model.eval()\n test_loss = 0.0\n for images, labels in iter(test_loader):\n images = images.to(device)\n labels = labels.to(device)\n outputs = model(images)\n loss = F.mse_loss(outputs, labels)\n test_loss += loss\n test_loss /= len(test_loader)\n \n print('%f, %f' % (train_loss, test_loss))\n if test_loss < best_loss:\n torch.save(model.state_dict(), BEST_MODEL_PATH)\n best_loss = test_loss",
"_____no_output_____"
]
],
[
[
"Once the model is trained, it will generate the ``best_steering_model_xy.pth`` file, which you can use in the live demo notebook for inference.\n\nIf you trained on a machine other than the JetBot, you will need to upload this file to the JetBot's ``road_following`` example folder.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecda028a13fe15a0d496409f2dfddc63d4801020 | 112,234 | ipynb | Jupyter Notebook | FeatureEngineering.ipynb | rambasnet/Python-Machine-Learning | 4b91bb78ec2f86545e5f7a974e5d185dae5fdc8e | [
"MIT"
] | 1 | 2021-02-17T19:58:27.000Z | 2021-02-17T19:58:27.000Z | FeatureEngineering.ipynb | rambasnet/Python-Machine-Learning | 4b91bb78ec2f86545e5f7a974e5d185dae5fdc8e | [
"MIT"
] | null | null | null | FeatureEngineering.ipynb | rambasnet/Python-Machine-Learning | 4b91bb78ec2f86545e5f7a974e5d185dae5fdc8e | [
"MIT"
] | 1 | 2021-06-06T15:48:31.000Z | 2021-06-06T15:48:31.000Z | 41.066228 | 17,264 | 0.564722 | [
[
[
"# Feature Engineering\n- sample features are the keys to machine learning as they determine how well a ML algorithm can learn\n- it is absolutely important that we examine and preprocess a dataset before we feed it to a ML algorithm\n- feature engineering involves everything from feature preprocessing to dealing with missing values to properly encoding features and selecting the best features\n- the goal of feature engineering is simply to make your data better suited to the problem at hand, plus:\n - improve a model's predictive performance\n - reduce computational or data needs\n - improve interpretability of the results\n\n### Dealing with missing data\n- it's not uncommon to miss certain feature values for many reasons\n - error in data collection process\n - certain measurements may not be applicable\n - particular fields could have been simply left blank in a survey\n- missing values usually appear as blanks, NaN, or NULL\n- a ML algorithm can produce unpredictable results if we simply ignore missing values\n\n#### Identify missing values\n- first, identify missing values and deal with them",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom io import StringIO\nimport numpy as np",
"_____no_output_____"
],
[
"csv_data = '''A,B,C,D\n1.0,2.0,3.0,4.0\n5.0,6.0,,8.0\n10.0,11.0,12.0,'''\n\ndf = pd.read_csv(StringIO(csv_data))\n# StringIO function let's us read csv_data as if it's a file",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"# find the # of null values per column\ndf.isnull().sum()",
"_____no_output_____"
]
],
[
[
"### Eliminating training examples or features with missing values\n- one of the easiest way to deal with the missing data is simply to remove the feature (columns) or training examples (rows) from the dataset entirely\n- this is usually done when there's plenty of examples and features",
"_____no_output_____"
]
],
[
[
"# removing examples; return's new DataFrame objects after dropping all the rows in NaN\ndf.dropna(axis=0)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.dropna(axis=1)",
"_____no_output_____"
],
[
"# drop rows where all columns are NaN\ndf.dropna(how='all')",
"_____no_output_____"
],
[
"# drop rows that have fewer than 4 real values\ndf.dropna(thresh=4)",
"_____no_output_____"
],
[
"# drop rows where NaN appear in specific columns\ndf.dropna(subset=['C'])",
"_____no_output_____"
]
],
[
[
"## Imputing missing values\n- often dropping an entire feature column is not practical\n - we may lose too much valuable information\n- we can use interpolation techniques to estimate the missing values from other training examples\n\n### mean imputation\n- simply replace the missing value with the mean value of the entire feature column\n- use `SimpleImputer` class from scikit-learn - https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html\n- different strategies to fill missing values:\n - mean, most_frequent, median, constant",
"_____no_output_____"
]
],
[
[
"from sklearn.impute import SimpleImputer",
"_____no_output_____"
],
[
"# our original DataFrame\ndf",
"_____no_output_____"
],
[
"# impute missing values via the column mean\nsi = SimpleImputer(missing_values=np.nan, strategy='mean')\nsi = si.fit(df.values)\nimputed_data = si.transform(df.values)",
"_____no_output_____"
],
[
"imputed_data",
"_____no_output_____"
],
[
"# another approach; returns a new DataFrame\ndf.fillna(df.mean())",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"## Using transformed data with estimators\n- the whole dataset can be transformed first and then split into train and test sets\n- new data must be transformed using the same technique if the model is deployed\n",
"_____no_output_____"
],
[
"## Handling categorical data\n- there are two types of categorical data\n- **ordinal**\n - categorical values that can be sorted or ordered\n - e.g., T-shirt size: XS < S < M < L < XL < XXL\n- **nominal**\n - categorical values that don't imply any order\n - e.g., color values: blue, green, etc.\n - gender: male or female",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame([['green', 'M', 10.1, 'class2'],\n ['red', 'L', 13.5, 'class1'],\n ['blue', 'XL', 15.3, 'class2']])\n\ndf.columns = ['color', 'size', 'price', 'classlabel']",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"### Mapping ordinal features\n- no convenient function/API to derive the order of ordinal features\n- just define the mapping manually and use the mapping",
"_____no_output_____"
]
],
[
[
"size_mapping = {'M':1, 'L':2, 'XL':3}\ndf['size'] = df['size'].map(size_mapping)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"# get the original string representation\ninv_size_mapping = {v: k for k, v in size_mapping.items()}\ndf['size'].map(inv_size_mapping)",
"_____no_output_____"
]
],
[
[
"## Encoding class labels\n- scikit-learn classifiers convert class labels to integers internally\n- best practice to encode class labels explictly as integers",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import LabelEncoder\n\n# Label encoding with sklearn's LabelEncoder\nclass_le = LabelEncoder()\ny = class_le.fit_transform(df['classlabel'].values)",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
]
],
[
[
"### one-hot encoding on nominal features\n- if nominal features are encoded the same way as ordinal features (using numeric order), ML classifiers may assume an order in the data, which may lead to suboptimal results\n - e.g. {'green': 1, 'red': 2, 'blue': 3}\n- the workaround is one-hot encoding\n- create a new dummy feature for each unique value in the nominal feature column\n - use binary values for each feature; 1 represents the feature and 0 doesn't\n- use `OneHotEncoder` function https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import OneHotEncoder",
"_____no_output_____"
],
[
"X = df[['color', 'size', 'price']].values",
"_____no_output_____"
],
[
"X",
"_____no_output_____"
],
[
"color_ohe = OneHotEncoder()",
"_____no_output_____"
],
[
"color_ohe.fit_transform(X[:, 0].reshape(-1, 1)).toarray()",
"_____no_output_____"
],
[
"# use ColumnTransformer to transorm the whole dataset with multiple columns\nfrom sklearn.compose import ColumnTransformer",
"_____no_output_____"
],
[
"c_transf = ColumnTransformer([\n ('onehot', OneHotEncoder(), [0]),\n ('nothing', 'passthrough', [1, 2])\n ])",
"_____no_output_____"
],
[
"X",
"_____no_output_____"
],
[
"c_transf.fit_transform(X).astype(float)",
"_____no_output_____"
],
[
"# more convenient way to create dummy features via one-hot encoding is us get_dummies method in pandas\npd.get_dummies(df[['price', 'color', 'size']])",
"_____no_output_____"
]
],
[
[
"## Wine dataset\n- let's apply preprocessing techniques to the Wine dataset found in the UCI repository\n- https://archive.ics.uci.edu/ml/datasets/Wine\n- 178 wine samples with 13 features describing their different chemical properties",
"_____no_output_____"
]
],
[
[
"url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data'\ndf_wine = pd.read_csv(url, header=None)",
"_____no_output_____"
],
[
"df_wine",
"_____no_output_____"
],
[
"df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',\n 'Alcalinity of ash', 'Magnesium', 'Total phenols',\n 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',\n 'Color intensity', 'Hue', 'OD280/OD315 of diluted wines',\n 'Proline']",
"_____no_output_____"
],
[
"print('Unique Class labels', np.unique(df_wine['Class label']))",
"Unique Class labels [1 2 3]\n"
],
[
"df_wine",
"_____no_output_____"
]
],
[
[
"## Bringing features onto the same scale\n- two common approaches to bringing different features onto the same scale:\n 1. **normalization**\n - rescaling the features to a range of [0, 1] (**min-max scaling**)\n 2. **standardization**\n - we've already used `StandardScaler`\n - `RobustScaler` is robust to outliers and can be a good choice if the dataset is prone to overfitting\n- to normalize the features we can simply apply the min-max scaling to each feature column\n- new value, $x^{i}_{norm}$ of an example $x^i$ can be calculated as follows: \n $x^{i}_{norm} = \\frac {x^i - x_{min}}{x_{max} - x_{min}}$\n- use `MinMaxScaler` implemented in scikit-learn - https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html\n- let's normalize and scale the Wine dataset",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import RobustScaler",
"_____no_output_____"
],
[
"X = df_wine.iloc[:, 1:].values",
"_____no_output_____"
],
[
"X",
"_____no_output_____"
],
[
"y = df_wine['Class label'].values",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
],
[
"mms = MinMaxScaler()\nX_norm = mms.fit_transform(X)",
"_____no_output_____"
],
[
"X_norm",
"_____no_output_____"
],
[
"rs = RobustScaler()\nX_robust = rs.fit_transform(X)",
"_____no_output_____"
]
],
[
[
"## Selecting meaningful features\n- overfitting occurs when a model performs much better on a training dataset than the test dataset\n - the model has high variance\n- common solutions to reduce the generalization error are:\n 1. collect more training data\n 2. introduce a penalty for complexity via regularization\n 3. choose a simpler model with fewer parameters\n 4. reduce the dimensionality of the data\n- for regularized models in scikit-learn that support L1 regularization, we can simply set the `penalty` parameter to `'l1'` to obtain a sparse solution\n- the `LogisticRegression` classifier is a regularized model\n- https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression",
"_____no_output_____"
],
[
"# let's split the normalized dataset\nX_train_norm, X_test_norm, y_train_norm, y_test_norm = train_test_split(X_norm, y, \n test_size=0.3, \n random_state=0, \n stratify=y)",
"_____no_output_____"
],
[
"# let's traing and test normalized dataset with LR\nlr = LogisticRegression(penalty='l1', C=1.0, solver='liblinear', multi_class='ovr')\n# Note that C=1.0 is the default. You can increase\n# or decrease it to make the regulariztion effect\n# stronger or weaker, respectively.\nlr.fit(X_train_norm, y_train_norm)\nprint('Training accuracy:', lr.score(X_train_norm, y_train_norm))\nprint('Test accuracy:', lr.score(X_test_norm, y_test_norm))",
"Training accuracy: 0.967741935483871\nTest accuracy: 0.9629629629629629\n"
],
[
"# let's split the robust scaled dataset\nX_train_robust, X_test_robust, y_train_robust, y_test_robust = train_test_split(X_robust, y, \n test_size=0.3, \n random_state=0, \n stratify=y)",
"_____no_output_____"
],
[
"# let's traing and test robust dataset with LR\nlr = LogisticRegression(penalty='l1', C=1.0, solver='liblinear', multi_class='ovr')\n# Note that C=1.0 is the default. You can increase\n# or decrease it to make the regulariztion effect\n# stronger or weaker, respectively.\nlr.fit(X_train_robust, y_train_robust)\nprint('Training accuracy:', lr.score(X_train_robust, y_train_robust))\nprint('Test accuracy:', lr.score(X_test_robust, y_test_robust))",
"Training accuracy: 1.0\nTest accuracy: 1.0\n"
]
],
[
[
"## Sequential feature selection algorithms\n- select a subset of the original features based on criteria such as accuracy\n- **dimensionality reduction** via feature selection is especially useful for unregularized models\n- dimensionality reduction can have many advantages in real-world applications\n - cheaper to collect features\n - faster computation\n - avoid overfitting\n - reduce the generalization error\n- sequential feature selection algorithms are a family of greedy search algorithms\n- a classic selection algorithm is **sequential backward selection**\n- two types of search algorithms can be employed\n 1. **greedy algorithms** make locally optimal choices at each stage of a combinatorial search problem\n - generally yield a suboptimal solution\n 2. **exhaustive search algorithms** evaluate all possible combinations and are guaranteed to find the optimal solution\n - not feasible in practice due to computational complexity\n\n### Sequential Backward Selection (SBS) algorithm\n- can be called backward elimination\n- sequentially remove features from the full feature set until the new feature subspace contains the desired number of features\n- in order to determine which feature is to be removed at each stage, we define a criterion function, such as the error rate, that we want to minimize\n\n### Sequential Forward Selection (SFS) algorithm\n- sequentially add features until the new feature subspace contains the desired number of features\n- in order to determine which feature to add at each stage, we define a criterion function such as accuracy that we want to maximize or error rate that we want to minimize\n\n### SBS implementation\n- scikit-learn doesn't provide a sequential feature selection algorithm\n- we can implement one as shown below",
"_____no_output_____"
]
],
[
[
"from sklearn.base import clone\nfrom itertools import combinations\nimport numpy as np\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import train_test_split\n\n\nclass SBS():\n def __init__(self, estimator, k_features, scoring=accuracy_score,\n test_size=0.25, random_state=1):\n \"\"\"\n estimator = model\n k_features = minimum features\n \"\"\"\n self.scoring = scoring\n self.estimator = clone(estimator)\n self.k_features = k_features\n self.test_size = test_size\n self.random_state = random_state\n self.scores_ = []\n\n def fit(self, X, y):\n \n X_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=self.test_size,\n random_state=self.random_state)\n\n dim = X_train.shape[1]\n self.indices_ = tuple(range(dim))\n self.subsets_ = [self.indices_]\n score = self._calc_score(X_train, y_train, \n X_test, y_test, self.indices_)\n self.scores_ = [score]\n\n while dim > self.k_features:\n scores = []\n subsets = []\n\n for p in combinations(self.indices_, r=dim - 1):\n score = self._calc_score(X_train, y_train, \n X_test, y_test, p)\n scores.append(score)\n subsets.append(p)\n\n best = np.argmax(scores)\n self.indices_ = subsets[best]\n self.subsets_.append(self.indices_)\n dim -= 1\n\n self.scores_.append(scores[best])\n self.k_score_ = self.scores_[-1]\n\n return self\n\n def transform(self, X):\n return X[:, self.indices_]\n\n def _calc_score(self, X_train, y_train, X_test, y_test, indices):\n self.estimator.fit(X_train[:, indices], y_train)\n y_pred = self.estimator.predict(X_test[:, indices])\n score = self.scoring(y_test, y_pred)\n return score",
"_____no_output_____"
],
[
"# let's test SBS implemenation using the KNN classifier\nimport matplotlib.pyplot as plt\nfrom sklearn.neighbors import KNeighborsClassifier\n\nknn = KNeighborsClassifier(n_neighbors=5)\n\n# selecting features\nsbs = SBS(knn, k_features=1)\nsbs.fit(X_train_robust, y_train_robust)\n\n# plotting performance of feature subsets\nk_feat = [len(k) for k in sbs.subsets_]\n\nplt.plot(k_feat, sbs.scores_, marker='o')\nplt.ylim([0.7, 1.02])\nplt.ylabel('Accuracy')\nplt.xlabel('Number of features')\nplt.grid()\nplt.tight_layout()\n# plt.savefig('images/04_08.png', dpi=300)\nplt.show()",
"_____no_output_____"
],
[
"# what is the smallest feature subset which yielded the 100% accuracy?\nlist(sbs.subsets_)",
"_____no_output_____"
],
[
"# subset index 4 has 9 feature subset\nk9 = list(sbs.subsets_[4])",
"_____no_output_____"
],
[
"k9",
"_____no_output_____"
],
[
"print(df_wine.columns[1:][[k9]])",
"Index(['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium',\n 'Total phenols', 'Flavanoids', 'Hue', 'OD280/OD315 of diluted wines'],\n dtype='object')\n"
],
[
"# let's evaluate the performance of the KNN classifier on the original test dataset\nknn.fit(X_train_robust, y_train_robust)\nprint('Training accuracy: %.4f'%knn.score(X_train_robust, y_train_robust))",
"Training accuracy: 0.9677\n"
],
[
"# let's use the selected best feature subset to see if the accuracy is improved...\nknn.fit(X_train_robust[:, k9], y_train_robust)\nprint('Training accuracy:', knn.score(X_train_robust[:, k9], y_train_robust))\nprint('Test accuracy:', knn.score(X_test_robust[:, k9], y_test_robust))",
"Training accuracy: 0.9435483870967742\nTest accuracy: 0.9444444444444444\n"
]
],
[
[
"## Feature ranking\n- if the features are ranked based on their respective importances then the top features can be selected\n\n### Tree-based feature ranking and selection\n- there are several techniques for feature selection - https://scikit-learn.org/stable/modules/feature_selection.html\n- tree-based estimators and ensemble-based classifiers such as random forest can be used to compute impurity-based feature importances\n- Random Forest can be used to measure the importance of features as the averaged impurity decrease computed from all decision trees in the forest\n - doesn't make any assumption on whether the dataset is linearly separable\n- the RF implementation in scikit-learn provides the `feature_importances_` attribute after fitting `RandomForestClassifier`\n- the code below trains an RF of 500 trees on the Wine dataset and ranks the 13 features by their respective importance measures",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\n\nX_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=0.2,\n random_state=1)\n \nfeat_labels = df_wine.columns[1:]\n\nforest = RandomForestClassifier(n_estimators=500,\n random_state=1)\n\nforest.fit(X_train, y_train)\nimportances = forest.feature_importances_\n\nindices = np.argsort(importances)[::-1]\n\n# print all the features and their importances in highest to lowest importance\nfor f in range(X_train.shape[1]):\n print(\"%2d) %-*s %f\" % (f + 1, 30, \n feat_labels[indices[f]], \n importances[indices[f]]))\n\n# plot the histogram bar chart\nplt.title('Feature Importance')\nplt.bar(range(X_train.shape[1]), \n importances[indices],\n align='center')\n\nplt.xticks(range(X_train.shape[1]), \n feat_labels[indices], rotation=90)\nplt.xlim([-1, X_train.shape[1]])\nplt.tight_layout()\n#plt.savefig('images/04_09.png', dpi=300)\nplt.show()",
" 1) Proline 0.187187\n 2) Flavanoids 0.157839\n 3) Color intensity 0.137384\n 4) Alcohol 0.112509\n 5) OD280/OD315 of diluted wines 0.109811\n 6) Hue 0.089735\n 7) Total phenols 0.064850\n 8) Malic acid 0.040078\n 9) Magnesium 0.031063\n10) Alcalinity of ash 0.025245\n11) Proanthocyanins 0.017486\n12) Ash 0.015996\n13) Nonflavanoid phenols 0.010816\n"
],
[
"# comparing with SBS best features\nprint(df_wine.columns[1:][[k9]])",
"Index(['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium',\n 'Total phenols', 'Flavanoids', 'Hue', 'OD280/OD315 of diluted wines'],\n dtype='object')\n"
]
],
[
[
"### RF feature ranking Gotcha\n- if two or more features are highly correlated, one feature may be ranked very highly while the information on the other feature(s) may not be fully captured\n- on the other hand, we don't need to be concerned about this problem if we are merely interested in the predictive performance of a model rather than the interpretation of feature importance values\n\n\n### SelectFromModel\n- the scikit-learn framework also provides the `SelectFromModel` class that selects features based on a user-specified threshold after model fitting\n- one caveat is that you need to know a suitable threshold\n- e.g. we could set the threshold to `0.1` and keep features whose importance is greater than or equal to it\n - RF would reduce the feature set to the five most important features for the Wine dataset",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_selection import SelectFromModel\n\nsfm = SelectFromModel(forest, threshold=0.1, prefit=True)\nX_selected = sfm.transform(X_train)",
"_____no_output_____"
],
[
"X_selected.shape",
"_____no_output_____"
],
[
"print('Number of features that meet this threshold criterion:', \n X_selected.shape[1])",
"Number of features that meet this threshold criterion: 5\n"
],
[
"# print the top features meeting the threshold criterion\nfor f in range(X_selected.shape[1]):\n print(\"%2d) %-*s %f\" % (f + 1, 30, \n feat_labels[indices[f]], \n importances[indices[f]]))",
" 1) Proline 0.187187\n 2) Flavanoids 0.157839\n 3) Color intensity 0.137384\n 4) Alcohol 0.112509\n 5) OD280/OD315 of diluted wines 0.109811\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecda08f3b0daa9523e3b96046750cf55d56246c6 | 298,006 | ipynb | Jupyter Notebook | comparison__aggregate_spiral_tightness.ipynb | tingard/gzbuilder_results | bbcd95033db333ddd9e7213cbf19032747ff8710 | [
"MIT"
] | null | null | null | comparison__aggregate_spiral_tightness.ipynb | tingard/gzbuilder_results | bbcd95033db333ddd9e7213cbf19032747ff8710 | [
"MIT"
] | null | null | null | comparison__aggregate_spiral_tightness.ipynb | tingard/gzbuilder_results | bbcd95033db333ddd9e7213cbf19032747ff8710 | [
"MIT"
] | null | null | null | 406.002725 | 109,140 | 0.934914 | [
[
[
"# Compare results from Galaxy Builder to Hart (2017)\n\nRoss Hart fit a relationship between GZ2 debiased morphological votes and the length-weighted, dominant-chirality-only pitch angle reported by [SpArcFiRe](http://sparcfire.ics.uci.edu/). In this notebook we compare the length-weighted Galaxy Builder pitch angles to that relationship.",
"_____no_output_____"
]
],
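The "length-weighted" pitch angle mentioned above is just a weighted mean in which each arm's contribution is proportional to its length. A minimal pure-Python sketch of that aggregation — the arm lengths and pitch angles here are illustrative, not real measurements:

```python
# Length-weighted mean pitch angle: longer arms contribute more.
# Illustrative toy arms, not real Galaxy Builder or SpArcFiRe output.
arms = [
    {"length": 12.0, "pitch": 18.0},
    {"length": 6.0,  "pitch": 24.0},
    {"length": 2.0,  "pitch": 30.0},
]

total_length = sum(a["length"] for a in arms)
weighted_pa = sum(a["length"] * a["pitch"] for a in arms) / total_length
print(round(weighted_pa, 2))  # 21.0
```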
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import os\nimport numpy as np\nimport pandas as pd\nfrom matplotlib.patches import Circle\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom astropy.io import fits\nimport scipy.stats as st\nfrom tqdm import tqdm\nfrom IPython.display import display\nimport lib.galaxy_utilities as gu",
"_____no_output_____"
]
],
[
[
"Load in a list of available subject ids and the [GZ2 debiased vote counts](https://data.galaxyzoo.org/).",
"_____no_output_____"
]
],
[
[
"from gzbuilder_analysis import load_aggregation_results\nagg_results = load_aggregation_results('output_files/aggregation_results')",
" \r"
],
[
"df_gz2 = pd.read_csv(\n '../source_files/gz2_hart16.csv'\n).set_index('dr7objid')",
"_____no_output_____"
]
],
[
[
"Define some helper functions for feature extraction",
"_____no_output_____"
]
],
[
[
"def get_nsaid(sid):\n return np.int64(gu.metadata['NSA id'].loc[sid])\n\ndef get_dr7objid(sid):\n return np.int64(gu.metadata['SDSS dr7 id'].loc[sid])\n\ndef hart_wavg(gal):\n return (np.hstack((\n gal['t10_arms_winding_a28_tight_debiased'],\n gal['t10_arms_winding_a29_medium_debiased'],\n gal['t10_arms_winding_a30_loose_debiased'],\n )) * (np.arange(3) + 1)).sum()\n\ndef hart_mavg(gal, t='debiased'):\n p = np.hstack((\n gal[f't11_arms_number_a31_1_{t}'],\n gal[f't11_arms_number_a32_2_{t}'],\n gal[f't11_arms_number_a33_3_{t}'],\n gal[f't11_arms_number_a34_4_{t}'],\n gal[f't11_arms_number_a36_more_than_4_{t}'],\n ))\n return (p / p.sum() * (np.arange(5) + 1)).sum()\n\ndef hart_mmax(gal, t='debiased'):\n return np.argmax((\n gal[f't11_arms_number_a37_cant_tell_{t}'],\n gal[f't11_arms_number_a31_1_{t}'],\n gal[f't11_arms_number_a32_2_{t}'],\n gal[f't11_arms_number_a33_3_{t}'],\n gal[f't11_arms_number_a34_4_{t}'],\n gal[f't11_arms_number_a36_more_than_4_{t}'],\n ))\n\ndef hart_pa(wavg, mavg):\n return 6.37 * wavg + 1.30 * mavg + 4.34\n\ndef get_hart_params(row):\n gal = df_gz2.loc[row['DR7OBJID']]\n wavg, mavg, mmax = hart_wavg(gal), hart_mavg(gal), hart_mmax(gal, 'debiased')\n if wavg == 0.0 or mavg == 0.0:\n pa = np.nan\n else:\n pa = hart_pa(wavg, mavg)\n return pd.Series(dict(hart_pa=pa, wavg=wavg, mavg=mavg, mmax=mmax))\n\n\nSPIRAL_COUNT_COLS = [i for i in df_gz2.columns if 'arms_number' in i and 'count' in i]\ndef get_count_columns(row):\n return df_gz2[SPIRAL_COUNT_COLS].loc[row['DR7OBJID']]",
"_____no_output_____"
]
],
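The `hart_pa` helper above encodes the Hart (2017) linear relation $\psi = 6.37\,w_\mathrm{avg} + 1.30\,m_\mathrm{avg} + 4.34$. A quick standalone check of the arithmetic — the `wavg` and `mavg` values are illustrative (tightness votes map to 1–3, arm-number votes to 1–5):

```python
# Hart (2017) relation between GZ2 vote averages and pitch angle (degrees).
def hart_pa(wavg, mavg):
    return 6.37 * wavg + 1.30 * mavg + 4.34

# Illustrative inputs: wavg = 1.5 (fairly tight), mavg = 2.0 (~two arms).
wavg, mavg = 1.5, 2.0
print(hart_pa(wavg, mavg))  # 16.495
```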
[
[
"Generate a DataFrame containing GZB pitch angles, NSA IDs, SDSS DR7 IDs, and use it to calculate Hart (2017) pitch angles from the GZ2 data export. Then add a column with the pitch angle difference and an error approximation.",
"_____no_output_____"
]
],
[
[
"hart_df = pd.concat((\n agg_results.index.to_series().apply(get_nsaid).rename('NSAID'),\n agg_results.index.to_series().apply(get_dr7objid).rename('DR7OBJID')\n), axis=1).apply(get_hart_params, axis=1)",
"_____no_output_____"
],
[
"gzb_pa_df = agg_results.apply(\n lambda a: pd.Series(\n a.spiral_pipeline.get_pitch_angle(a.spiral_arms),\n index=('pa', 'sigma_pa'),\n )\n)\npa_spiral_df = pd.concat((hart_df.hart_pa, gzb_pa_df), axis=1)",
"_____no_output_____"
]
],
[
[
"What do the distributions of pitch angles look like? Can the difference between the two be put down to error?",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(6, 5), dpi=120)\nplt.title('Pitch angle difference between Galaxy Builder and Hart (2017)')\npa_diff = (pa_spiral_df.pa - pa_spiral_df.hart_pa).dropna()\nplt.hist(pa_diff, density=True, alpha=0.2, zorder=10, color='C3')\nsns.kdeplot(pa_diff, lw=3, zorder=10, color='C3')\nylims = plt.gca().get_ylim()\nplt.fill_betweenx(np.linspace(0, ylims[1]), -7, 7, color='k', alpha=0.15, zorder=1)\nplt.fill_betweenx(np.linspace(0, ylims[1]), -14, 14, color='k', alpha=0.15, zorder=1)\nplt.vlines(0, *ylims, linestyles='dashed')\nplt.ylim(*ylims)\nplt.xlabel('Pitch angle (˚)')\nplt.savefig(\n 'method-paper-plots/gzb-hart-comparison.pdf',\n bbox_inches='tight'\n);",
"_____no_output_____"
]
],
[
[
"## Comparison with number of spirals:",
"_____no_output_____"
],
[
"Using models drawn by volunteers:",
"_____no_output_____"
]
],
[
[
"n_spirals_in_cls = agg_results.apply(\n lambda a: a.input_models.apply(\n lambda b: len(b['spiral']),\n ).reset_index(drop=True)\n)\nspiral_clf_df = pd.concat((\n n_spirals_in_cls.replace(0, np.nan).T.describe().T,\n hart_df[['mmax', 'mavg']],\n), axis=1)\nf, ax = plt.subplots(ncols=2, sharey=True, figsize=(14, 6), dpi=100)\nplt.sca(ax[0])\nsns.stripplot('mmax', 'mean', data=spiral_clf_df, color='C1')\ncorr = st.pearsonr(*spiral_clf_df[['mmax', 'mean']].dropna().values.T)\nplt.xticks(np.arange(20), ['Can\\'t tell', *np.arange(1, 20)])\nplt.title('Pearson correlation coefficient {:.3f}, p={:.2e}'.format(*corr));\nplt.xlabel('GZ2 vote with highest fraction')\nplt.ylabel('Mean spirals drawn for galaxy (after cleaning)')\n\nplt.sca(ax[1])\n# spiral_clf_df.plot.scatter('mmax', 'mean', ax=plt.gca(), color='C1')\nsns.regplot('mavg', 'mean', spiral_clf_df, color='C2', ax=ax[1])\n# spiral_clf_df.plot.scatter('mavg', 'mean', ax=plt.gca(), color='C2')\ncorr = st.pearsonr(*spiral_clf_df[['mavg', 'mean']].dropna().values.T)\nplt.title('Pearson correlation coefficient {:.3f}, p={:.2e}'.format(*corr));\nplt.xlabel('Hart (2017) mavg')\nfor a in ax:\n plt.sca(a)\n for i in (0, 1, 2, 3, 4, 5): \n plt.axvline(i, c='k', alpha=0.2)\n plt.axhline(i, c='k', alpha=0.2)\n plt.ylim(-0.5)\n plt.xlim(-0.5, 5.5)\n plt.gca().add_line(plt.Line2D(*[(-1E2, 1E2)]*2, c='k', alpha=0.2))",
"_____no_output_____"
]
],
[
[
"Using aggregated models:",
"_____no_output_____"
]
],
[
[
"n_agg_spirals = agg_results.apply(\n lambda a: len(a.spiral_arms)\n).rename('m_gzb')",
"_____no_output_____"
],
[
"n_spiral_df = pd.concat((hart_df.mmax, hart_df.mavg, n_agg_spirals), axis=1)\ndisplay(n_spiral_df.corr('spearman'))\nst.spearmanr(*n_spiral_df[['mmax', 'm_gzb']].dropna().values.T)",
"_____no_output_____"
],
[
"plt.figure(figsize=(8, 8), dpi=100)\nsns.boxplot(x='mavg', y='m_gzb', data=n_spiral_df, orient='h')\nfor i in (0, 1, 2, 3, 4, 5):\n plt.axvline(i, c='k', alpha=0.2, lw=0.5, zorder=0)\n plt.axhline(i, c='k', alpha=0.2, lw=0.5, zorder=0)\nplt.gca().add_line(plt.Line2D(*[(-1E2, 1E2)]*2, c='k', alpha=0.2, zorder=0))\nplt.xlabel('GZ2 mavg (Hart, 2017)')\nplt.ylabel('Number of aggregated arms')\nplt.ylim(-0.5, 4.5)",
"_____no_output_____"
]
],
[
[
"We can also make direct use of the vote counts to properly model the uncertainty present in each GZ2 classification, in a more rigorous manner than the *mavg* proposed by Hart (2017).\n\n>*For each classification of a galaxy with $n_\\mathrm{gzb}$ aggregate arms, add $1/N_\\mathrm{cls}$ to the area of the circle at $(C, n_\\mathrm{gzb})$, where $N_\\mathrm{cls}$ is the total number of times that the \"number of arms\" question was answered for that galaxy, and C is the value of the classification.*\n\ni.e. (maybe)\n$$\nA_{i, j} = \\sum_{k}^{N_g}\\frac{1}{M_k}\\sum_{m}^{M_k}\n\\begin{cases}\n 1,&\\ \\mathrm{if}\\ n_k = i\\ \\mathrm{and}\\ C_{k, m} = j\\\\\n 0,&\\ \\mathrm{otherwise}\n\\end{cases}\n$$\nWhere $n_k$ is the number of aggregate arms for galaxy $k$ (out of $N_g$ galaxies), and $C_{k, m}$ is the $m$-th answer for galaxy $k$, out of $M_k$ answers. The piecewise expression plays the role of a Kronecker delta, selecting only the answers that match cell $(i, j)$.",
"_____no_output_____"
]
],
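The $A_{i,j}$ weighting above can be sketched directly in pure Python: each galaxy's $M_k$ answers each contribute $1/M_k$ to the cell indexed by (its aggregate arm count, that answer), so every galaxy carries total weight 1. The galaxies below are toy data, not the GZB/GZ2 sample:

```python
# Each galaxy k has n_k aggregate arms and a list of GZ2 "number of arms"
# answers; each of its M_k answers adds 1/M_k to cell (n_k, answer).
galaxies = [
    {"n_arms": 2, "answers": [2, 2, 3]},  # toy data
    {"n_arms": 2, "answers": [2]},
    {"n_arms": 3, "answers": [3, 4]},
]

A = {}
for gal in galaxies:
    m = len(gal["answers"])
    for ans in gal["answers"]:
        key = (gal["n_arms"], ans)
        A[key] = A.get(key, 0.0) + 1.0 / m

print(A)  # e.g. A[(2, 2)] = 2/3 + 1 = 5/3
```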
[
[
"gz2_count_df = pd.concat((\n agg_results.index.to_series().apply(get_nsaid).rename('NSAID'),\n agg_results.index.to_series().apply(get_dr7objid).rename('DR7OBJID')\n), axis=1).apply(get_count_columns, axis=1).assign(m_gzb=n_agg_spirals)",
"_____no_output_____"
],
[
"areas = pd.DataFrame(\n index=np.unique(gz2_count_df.m_gzb),\n columns=gz2_count_df.drop(columns='m_gzb').columns\n)\nfor j in areas.index:\n counts = gz2_count_df.query('m_gzb == @j').drop(columns='m_gzb')\n As = counts.sum(axis=0)\n areas.loc[j] = As / np.sum(As)",
"_____no_output_____"
],
[
"rename_dict = dict(\n t11_arms_number_a31_1_count=1,\n t11_arms_number_a32_2_count=2,\n t11_arms_number_a33_3_count=3,\n t11_arms_number_a34_4_count=4,\n t11_arms_number_a36_more_than_4_count=5,\n t11_arms_number_a37_cant_tell_count=0\n)\n\nareas_renamed = areas.rename(columns=rename_dict)",
"_____no_output_____"
],
[
"circ_norm = 1 / areas.max().max()\ncirc_scaling = circ_norm * 0.4\n\nplt.figure(figsize=(6, 5.1), dpi=120)\nfor col in areas_renamed.columns:\n for row in areas_renamed.index:\n A = areas_renamed[col][row]\n r = np.sqrt(A / np.pi) * circ_scaling\n if r > 0:\n plt.gca().add_patch(\n Circle(\n (col, row), r,\n zorder=1, alpha=1,\n fc='none', ec='C3'\n )\n )\n if A == areas_renamed[col].max():\n plt.gca().add_patch(\n Circle((col, row), r, zorder=1, alpha=0.4, fc='C3')\n )\n \nfor i in (0, 1, 2, 3, 4, 5):\n plt.axvline(i, c='k', alpha=0.2, lw=0.5, zorder=0)\n plt.axhline(i, c='k', alpha=0.2, lw=0.5, zorder=0)\nplt.gca().add_line(plt.Line2D(*[(-1E2, 1E2)]*2, c='k', alpha=0.2, zorder=0))\nplt.xticks(np.arange(6), ['Can\\'t tell', *np.arange(1, 5), '4+'])\nplt.xlim(-0.5, 5.5)\nplt.ylim(-0.5, 4.5)\nplt.xlabel('Galaxy Zoo 2 number of spiral arms')\nplt.ylabel('Number of aggregated spiral arms')\nplt.tight_layout()\nplt.savefig('method-paper-plots/spiral-number-vs-gz2.pdf', bbox_inches='tight')",
"_____no_output_____"
]
],
[
[
"We see a similar relationship if we use the most likely vote too (note this plot has been column-normalized):",
"_____no_output_____"
]
],
[
[
"mx_areas = np.zeros((6, 6))\nfor i in range(mx_areas.shape[0]):\n for j in range(mx_areas.shape[1]):\n mx_areas[i, j] = (\n np.sum((n_agg_spirals == i)&(hart_df.mmax == j))\n / (hart_df.mmax == j).sum()\n )\n\nplt.figure(figsize=(6, 6), dpi=120)\nplt.imshow(mx_areas, origin='lower')\nplt.xticks(range(6), ['Can\\'t tell', *np.arange(1, 5), '4+']);",
"_____no_output_____"
]
],
[
[
"Cases where people said 2 spirals and we got more:\n- 20902009, merging failed\n- 20902070, very blurry two or three armed galaxy (we got 3)\n- 21096867, one of the spirals has been split into two sections (I see three arms)\n- 21096878, as above",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecda1705e80edbac27e16e8e8a7047208fac762f | 27,173 | ipynb | Jupyter Notebook | notebooks/phasor-signal-summation.ipynb | senderle/semantic-phasor-demo | fba8601829b38bbdb0313186b8d41f334b1ce20e | [
"MIT"
] | 1 | 2019-11-07T13:59:06.000Z | 2019-11-07T13:59:06.000Z | notebooks/phasor-signal-summation.ipynb | senderle/semantic-phasor-demo | fba8601829b38bbdb0313186b8d41f334b1ce20e | [
"MIT"
] | null | null | null | notebooks/phasor-signal-summation.ipynb | senderle/semantic-phasor-demo | fba8601829b38bbdb0313186b8d41f334b1ce20e | [
"MIT"
] | null | null | null | 59.459519 | 5,850 | 0.513009 | [
[
[
"from bokeh.plotting import figure, show, output_notebook\nfrom bokeh.models import HoverTool, TapTool, PointDrawTool, OpenURL, ColumnDataSource\nfrom bokeh.palettes import magma\n\noutput_notebook()",
"_____no_output_____"
],
[
"\nsource = ColumnDataSource(data=dict(\n x=[1, 2, 3, 4, 5],\n y=[2, 5, 8, 2, 7],\n desc=['A', 'b', 'C', 'd', 'E'],\n))\n\nTOOLTIPS = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"($x, $y)\"),\n (\"desc\", \"@desc\"),\n]\n\np = figure(plot_width=400, plot_height=400,\n title=\"Mouse over the dots\",\n # tooltips=TOOLTIPS,\n tools='box_zoom, reset', \n )\n\n\nc1 = p.circle('x', 'y', size=7, source=source)\npoint_draw = PointDrawTool(renderers=[c1])\np.add_tools(point_draw)\np.toolbar.active_drag = point_draw\n\nshow(p)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
ecda197f4e43653be55e77234a6a7077b05bd63f | 268,254 | ipynb | Jupyter Notebook | Data Analysis/FreeCodeCamp-Pandas-Real-Life-Example/Exercises_1.ipynb | usama-hossain/Python | a75abfb01737bb5f9a38c1c1b5481873e895933e | [
"MIT"
] | null | null | null | Data Analysis/FreeCodeCamp-Pandas-Real-Life-Example/Exercises_1.ipynb | usama-hossain/Python | a75abfb01737bb5f9a38c1c1b5481873e895933e | [
"MIT"
] | null | null | null | Data Analysis/FreeCodeCamp-Pandas-Real-Life-Example/Exercises_1.ipynb | usama-hossain/Python | a75abfb01737bb5f9a38c1c1b5481873e895933e | [
"MIT"
] | null | null | null | 111.170327 | 42,896 | 0.854403 | [
[
[
"\n<hr style=\"margin-bottom: 40px;\">\n\n<img src=\"https://user-images.githubusercontent.com/7065401/58563302-42466a80-8201-11e9-9948-b3e9f88a5662.jpg\"\n style=\"width:400px; float: right; margin: 0 40px 40px 40px;\"></img>\n\n# Exercises\n## Bike store sales",
"_____no_output_____"
],
[
"\n\n## Hands on! ",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"_____no_output_____"
],
[
"sales = pd.read_csv(\n 'data/sales_data.csv',\n parse_dates=['Date'])",
"_____no_output_____"
],
[
"sales.head()",
"_____no_output_____"
]
],
[
[
"\n\n### What's the mean of `Customer_Age`?",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Customer_Age'].mean()",
"_____no_output_____"
]
],
[
[
"Why don't you try with `.mean()`",
"_____no_output_____"
]
],
[
[
"sales['Customer_Age'].mean()",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>density (KDE)</b> and a <b>box plot</b> with the `Customer_Age` data:",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Customer_Age'].plot(kind='kde', figsize=(14,6))",
"_____no_output_____"
],
[
"sales['Customer_Age'].plot(kind='kde', figsize=(14,6))",
"_____no_output_____"
],
[
"sales['Customer_Age'].plot(kind='box', vert=False, figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### What's the mean of `Order_Quantity`?",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Order_Quantity'].mean()",
"_____no_output_____"
],
[
"sales['Order_Quantity'].mean()",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>histogram</b> and a <b>box plot</b> with the `Order_Quantity` data:",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Order_Quantity'].plot(kind='hist', bins=30, figsize=(14,6))",
"_____no_output_____"
],
[
"sales['Order_Quantity'].plot(kind='box', vert=False, figsize=(14,6))",
"_____no_output_____"
],
[
"sales['Order_Quantity'].plot(kind='hist', bins=50, figsize=(14,6))",
"_____no_output_____"
],
[
"sales['Order_Quantity'].plot(kind='box', vert=False, figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### How many sales per year do we have?",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Year'].value_counts()",
"_____no_output_____"
],
[
"sales['Year'].value_counts()",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>pie plot</b> with the previous data:",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Year'].value_counts().plot(kind='pie', figsize=(6,6))",
"_____no_output_____"
],
[
"sales['Year'].value_counts().plot(kind='pie', figsize=(6,6))",
"_____no_output_____"
]
],
[
[
"\n\n### How many sales per month do we have?",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Month'].value_counts()",
"_____no_output_____"
],
[
"sales['Month'].value_counts()",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>bar plot</b> with the previous data:",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"sales['Month'].value_counts().plot(kind='bar', figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Which country has the most sales `quantity of sales`?",
"_____no_output_____"
]
],
[
[
"sales['Country'].value_counts().head(1)",
"_____no_output_____"
],
[
"sales['Country'].value_counts().head(1)",
"_____no_output_____"
],
[
"sales['Country'].value_counts()",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>bar plot</b> of the sales per country:",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"sales['Country'].value_counts().plot(kind='bar', figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Create a list of every product sold",
"_____no_output_____"
]
],
[
[
"sales['Product'].unique()",
"_____no_output_____"
],
[
"#sales.loc[:, 'Product'].unique()\n\nsales['Product'].unique()",
"_____no_output_____"
]
],
[
[
"Create a **bar plot** showing the 10 most sold products (best sellers):",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Product'].value_counts().head(10).plot(kind='bar', figsize=(14,6))",
"_____no_output_____"
],
[
"sales['Product'].value_counts().head(10).plot(kind='bar', figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Can you see any relationship between `Unit_Cost` and `Unit_Price`?\n\nShow a <b>scatter plot</b> between both columns.",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales.plot(kind='scatter', x='Unit_Cost', y='Unit_Price', figsize=(6,6))",
"_____no_output_____"
],
[
"sales.plot(kind='scatter', x='Unit_Cost', y='Unit_Price', figsize=(6,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Can you see any relationship between `Order_Quantity` and `Profit`?\n\nShow a <b>scatter plot</b> between both columns.",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales.plot(kind='scatter', x='Order_Quantity', y='Profit', figsize=(6,6))",
"_____no_output_____"
],
[
"sales.plot(kind='scatter', x='Order_Quantity', y='Profit', figsize=(6,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Can you see any relationship between `Profit` and `Country`?\n\nShow a grouped <b>box plot</b> per country with the profit values.",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"sales[['Profit', 'Country']].boxplot(by='Country', figsize=(10,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Can you see any relationship between `Customer_Age` and `Country`?\n\nShow a grouped <b>box plot</b> per country with the customer age values.",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales[['Customer_Age', 'Country']].boxplot(by='Country', figsize=(10, 6))",
"_____no_output_____"
],
[
"sales[['Customer_Age', 'Country']].boxplot(by='Country', figsize=(10,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Add and calculate a new `Calculated_Date` column\n\nUse `Day`, `Month`, `Year` to create a `Date` column (`YYYY-MM-DD`).",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Calculated_Date'] = sales[['Year', 'Month', 'Day']].apply(lambda x: '{}-{}-{}'.format(x[0], x[1], x[2]), axis=1)\n\nsales['Calculated_Date'].head()",
"_____no_output_____"
],
[
"sales['Practice'] = sales[['Unit_Cost', 'Order_Quantity']].apply(lambda x: '{}'.format(np.dot(x[0], x[1])), axis=1)",
"_____no_output_____"
],
[
"sales[['Unit_Cost', 'Order_Quantity', 'Practice']].head(10)",
"_____no_output_____"
],
[
"sales['Calculated_Date'] = sales[['Year', 'Month', 'Day']].apply(lambda x: '{}-{}-{}'.format(x[0], x[1], x[2]), axis=1)\n\nsales['Calculated_Date'].head()",
"_____no_output_____"
]
],
[
[
"\n\n### Parse your `Calculated_Date` column into a datetime object",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Calculated_Date'] = pd.to_datetime(sales['Calculated_Date'])\nsales['Calculated_Date'].head()",
"_____no_output_____"
],
[
"sales['Calculated_Date'] = pd.to_datetime(sales['Calculated_Date'])\n\nsales['Calculated_Date'].head()",
"_____no_output_____"
]
],
[
[
"\n\n### How did sales evolve through the years?\n\nShow a <b>line plot</b> using `Calculated_Date` column as the x-axis and the count of sales as the y-axis.",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Calculated_Date'].value_counts().plot(kind='line', figsize=(14,6))",
"_____no_output_____"
],
[
"sales['Calculated_Date'].value_counts().plot(kind='line', figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Increase the revenue of every sale by 50 U$S",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"#sales['Revenue'] = sales['Revenue'] + 50\n\nsales['Revenue'] += 50",
"_____no_output_____"
]
],
[
[
"\n\n### How many orders were made in `Canada` or `France`?",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales.loc[(sales['Country'] == 'Canada') | (sales['Country'] == 'France')].shape[0]",
"_____no_output_____"
]
],
[
[
"\n\n### How many `Bike Racks` orders were made from Canada?",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales.loc[(sales['Country'] == 'Canada') & (sales['Sub_Category'] == 'Bike Racks')].shape[0]",
"_____no_output_____"
],
[
"sales.loc[(sales['Country'] == 'Canada') & (sales['Sub_Category'] == 'Bike Racks')].shape[0]",
"_____no_output_____"
]
],
[
[
"\n\n### How many orders were made in each region (state) of France?",
"_____no_output_____"
]
],
[
[
"# your code goes here\nfrance_states = sales.loc[sales['Country'] == 'France', 'State'].value_counts()\n\nfrance_states",
"_____no_output_____"
],
[
"france_states = sales.loc[sales['Country'] == 'France', 'State'].value_counts()\n\nfrance_states",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>bar plot</b> with the results:",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"france_states.plot(kind='bar', figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### How many sales were made per category?",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"sales['Product_Category'].value_counts()",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>pie plot</b> with the results:",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"sales['Product_Category'].value_counts().plot(kind='pie', figsize=(6,6))",
"_____no_output_____"
]
],
[
[
"\n\n### How many orders were made per accessory sub-category?",
"_____no_output_____"
]
],
[
[
"# your code goes here\norders = sales.loc[(sales['Product_Category'] == 'Accessories'), 'Sub_Category'].value_counts()\norders",
"_____no_output_____"
],
[
"sales.loc[:,'Sub_Category']",
"_____no_output_____"
],
[
"accessories = sales.loc[sales['Product_Category'] == 'Accessories', 'Sub_Category'].value_counts()\n\naccessories",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>bar plot</b> with the results:",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"accessories.plot(kind='bar', figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### How many orders were made per bike sub-category?",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"bikes = sales.loc[sales['Product_Category'] == 'Bikes', 'Sub_Category'].value_counts()\n\nbikes",
"_____no_output_____"
]
],
[
[
"Go ahead and show a <b>pie plot</b> with the results:",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"bikes.plot(kind='pie', figsize=(6,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Which gender has the most amount of sales?",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"sales['Customer_Gender'].value_counts()",
"_____no_output_____"
],
[
"sales['Customer_Gender'].value_counts().plot(kind='bar')",
"_____no_output_____"
]
],
[
[
"\n\n### How many sales with more than 500 in `Revenue` were made by men?",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"sales.loc[(sales['Customer_Gender'] == 'M') & (sales['Revenue'] > 500)].shape[0]",
"_____no_output_____"
]
],
[
[
"\n\n### Get the top-5 sales with the highest revenue",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales.sort_values(['Revenue'], ascending=False).head(5)",
"_____no_output_____"
],
[
"sales.sort_values(['Revenue'], ascending=False).head(5)",
"_____no_output_____"
]
],
[
[
"\n\n### Get the sale with the highest revenue",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales['Revenue'].max()",
"_____no_output_____"
],
[
"#sales.sort_values(['Revenue'], ascending=False).head(1)\n\ncond = sales['Revenue'] == sales['Revenue'].max()\n\nsales.loc[cond]",
"_____no_output_____"
]
],
[
[
"\n\n### What is the mean `Order_Quantity` of orders with more than 10K in revenue?",
"_____no_output_____"
]
],
[
[
"# your code goes here\nsales.loc[(sales['Revenue'] > 10000), 'Order_Quantity'].mean()",
"_____no_output_____"
],
[
"cond = sales['Revenue'] > 10_000\n\nsales.loc[cond, 'Order_Quantity'].mean()",
"_____no_output_____"
]
],
[
[
"\n\n### What is the mean `Order_Quantity` of orders with less than 10K in revenue?",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"cond = sales['Revenue'] < 10_000\n\nsales.loc[cond, 'Order_Quantity'].mean()",
"_____no_output_____"
]
],
[
[
"\n\n### How many orders were made in May of 2016?",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"cond = (sales['Year'] == 2016) & (sales['Month'] == 'May')\n\nsales.loc[cond].shape[0]",
"_____no_output_____"
]
],
[
[
"\n\n### How many orders were made between May and July of 2016?",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"cond = (sales['Year'] == 2016) & (sales['Month'].isin(['May', 'June', 'July']))\n\nsales.loc[cond].shape[0]",
"_____no_output_____"
]
],
[
[
"Show a grouped <b>box plot</b> per month with the profit values.",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"profit_2016 = sales.loc[sales['Year'] == 2016, ['Profit', 'Month']]\n\nprofit_2016.boxplot(by='Month', figsize=(14,6))",
"_____no_output_____"
]
],
[
[
"\n\n### Add 7.2% tax to every sale's `Unit_Price` within the United States",
"_____no_output_____"
]
],
[
[
"# your code goes here\n",
"_____no_output_____"
],
[
"#sales.loc[sales['Country'] == 'United States', 'Unit_Price'] = sales.loc[sales['Country'] == 'United States', 'Unit_Price'] * 1.072\n\nsales.loc[sales['Country'] == 'United States', 'Unit_Price'] *= 1.072",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
ecda353f4eed7a13728928a63a5f034d3dec5e18 | 11,672 | ipynb | Jupyter Notebook | Complete-Python-3-Bootcamp-master/11-Python Generators/01-Iterators and Generators.ipynb | davidMartinVergues/PYTHON | dd39d3aabfc43b3cb09aadb2919e51d03364117d | [
"DOC"
] | 8 | 2020-09-02T03:59:02.000Z | 2022-01-08T23:36:19.000Z | Complete-Python-3-Bootcamp-master/11-Python Generators/01-Iterators and Generators.ipynb | davidMartinVergues/PYTHON | dd39d3aabfc43b3cb09aadb2919e51d03364117d | [
"DOC"
] | null | null | null | Complete-Python-3-Bootcamp-master/11-Python Generators/01-Iterators and Generators.ipynb | davidMartinVergues/PYTHON | dd39d3aabfc43b3cb09aadb2919e51d03364117d | [
"DOC"
] | 3 | 2020-11-18T12:13:05.000Z | 2021-02-24T19:31:50.000Z | 25.995546 | 682 | 0.544722 | [
[
[
"___\n\n<a href='https://www.udemy.com/user/joseportilla/'><img src='../Pierian_Data_Logo.png'/></a>\n___\n<center><em>Content Copyright by Pierian Data</em></center>",
"_____no_output_____"
],
[
"# Iterators and Generators",
"_____no_output_____"
],
[
"In this section of the course we will be learning the difference between iteration and generation in Python and how to construct our own Generators with the *yield* statement. Generators allow us to generate as we go along, instead of holding everything in memory. \n\nWe've touched on this topic in the past when discussing certain built-in Python functions like **range()**, **map()** and **filter()**.\n\nLet's explore a little deeper. We've learned how to create functions with <code>def</code> and the <code>return</code> statement. Generator functions allow us to write a function that can send back a value and then later resume to pick up where it left off. This type of function is a generator in Python, allowing us to generate a sequence of values over time. The main difference in syntax will be the use of a <code>yield</code> statement.\n\nIn most aspects, a generator function will appear very similar to a normal function. The main difference is when a generator function is compiled they become an object that supports an iteration protocol. That means when they are called in your code they don't actually return a value and then exit. Instead, generator functions will automatically suspend and resume their execution and state around the last point of value generation. The main advantage here is that instead of having to compute an entire series of values up front, the generator computes one value and then suspends its activity awaiting the next instruction. This feature is known as *state suspension*.\n\n\nTo start getting a better understanding of generators, let's go ahead and see how we can create some.",
"_____no_output_____"
]
],
[
[
"# Generator function for the cube of numbers (power of 3)\ndef gencubes(n):\n for num in range(n):\n yield num**3",
"_____no_output_____"
],
[
"for x in gencubes(10):\n print(x)",
"0\n1\n8\n27\n64\n125\n216\n343\n512\n729\n"
]
],
[
[
"Great! Now since we have a generator function we don't have to keep track of every single cube we created.\n\nGenerators are best for calculating large sets of results (particularly in calculations that involve loops themselves) in cases where we don’t want to allocate the memory for all of the results at the same time. \n\nLet's create another example generator which calculates [fibonacci](https://en.wikipedia.org/wiki/Fibonacci_number) numbers:",
"_____no_output_____"
]
],
[
[
"def genfibon(n):\n \"\"\"\n Generate a Fibonacci sequence up to n\n \"\"\"\n a = 1\n b = 1\n for i in range(n):\n yield a\n a,b = b,a+b",
"_____no_output_____"
],
[
"for num in genfibon(10):\n print(num)",
"1\n1\n2\n3\n5\n8\n13\n21\n34\n55\n"
]
],
[
[
"What if this was a normal function, what would it look like?",
"_____no_output_____"
]
],
[
[
"def fibon(n):\n a = 1\n b = 1\n output = []\n \n for i in range(n):\n output.append(a)\n a,b = b,a+b\n \n return output",
"_____no_output_____"
],
[
"fibon(10)",
"_____no_output_____"
]
],
[
[
"Notice that if we call some huge value of n (like 100000) the second function will have to keep track of every single result, when in our case we actually only care about the previous result to generate the next one!\n\n## next() and iter() built-in functions\nA key to fully understanding generators is the next() function and the iter() function.\n\nThe next() function allows us to access the next element in a sequence. Let's check it out:",
"_____no_output_____"
]
],
[
[
"def simple_gen():\n for x in range(3):\n yield x",
"_____no_output_____"
],
[
"# Assign simple_gen \ng = simple_gen()",
"_____no_output_____"
],
[
"print(next(g))",
"0\n"
],
[
"print(next(g))",
"1\n"
],
[
"print(next(g))",
"2\n"
],
[
"print(next(g))",
"_____no_output_____"
]
],
[
[
"After yielding all the values next() caused a StopIteration error. What this error informs us of is that all the values have been yielded. \n\nYou might be wondering why we don’t get this error while using a for loop. A for loop automatically catches this error and stops calling next(). \n\nLet's go ahead and check out how to use iter(). You remember that strings are iterables:",
"_____no_output_____"
]
],
[
[
"s = 'hello'\n\n#Iterate over string\nfor let in s:\n print(let)",
"h\ne\nl\nl\no\n"
]
],
[
[
"But that doesn't mean the string itself is an *iterator*! We can check this with the next() function:",
"_____no_output_____"
]
],
[
[
"next(s)",
"_____no_output_____"
]
],
[
[
"Interesting, this means that a string object supports iteration, but we cannot directly iterate over it as we could with a generator function. The iter() function allows us to do just that!",
"_____no_output_____"
]
],
[
[
"s_iter = iter(s)",
"_____no_output_____"
],
[
"next(s_iter)",
"_____no_output_____"
],
[
"next(s_iter)",
"_____no_output_____"
]
],
[
[
"Great! Now you know how to convert objects that are iterable into iterators themselves!\n\nThe main takeaway from this lecture is that using the yield keyword in a function will cause the function to become a generator. This change can save you a lot of memory for large use cases. For more information on generators check out:\n\n[Stack Overflow Answer](http://stackoverflow.com/questions/1756096/understanding-generators-in-python)\n\n[Another StackOverflow Answer](http://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do-in-python)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
ecda3765b4c2d65666220a77fda97d7a87eba44e | 31,174 | ipynb | Jupyter Notebook | 05a-tools-predicition-titanic_add_10min-skl_intro_afo/titanic.ipynb | niemaelbouri/data-x | a93fe07902dfd91325d4acb4f98052b6baa00338 | [
"Apache-2.0"
] | 1 | 2019-09-02T06:08:55.000Z | 2019-09-02T06:08:55.000Z | 05a-tools-predicition-titanic_add_10min-skl_intro_afo/titanic.ipynb | niemaelbouri/data-x | a93fe07902dfd91325d4acb4f98052b6baa00338 | [
"Apache-2.0"
] | null | null | null | 05a-tools-predicition-titanic_add_10min-skl_intro_afo/titanic.ipynb | niemaelbouri/data-x | a93fe07902dfd91325d4acb4f98052b6baa00338 | [
"Apache-2.0"
] | null | null | null | 26.065217 | 218 | 0.539584 | [
[
[
"\n\n## Data-X: Titanic Survival Analysis",
"_____no_output_____"
],
[
"**Authors:** Several public Kaggle Kernels, edits by Alexander Fred Ojala & Kevin Li\n\n<img src=\"data/Titanic_Variable.png\">",
"_____no_output_____"
],
[
"# Note\n\nInstall the xgboost package in your Python environment:\n\ntry:\n```\n$ conda install py-xgboost\n```\n",
"_____no_output_____"
]
],
[
[
"'''\n# You can also install the package by running the line below\n# directly in your notebook\n''';\n\n#!conda install py-xgboost --y",
"_____no_output_____"
]
],
[
[
"## Import packages",
"_____no_output_____"
]
],
[
[
"# No warnings\nimport warnings\nwarnings.filterwarnings('ignore') # Filter out warnings\n\n# data analysis and wrangling\nimport pandas as pd\nimport numpy as np\nimport random as rnd\n\npd.set_option('display.max_columns', 100) # Print 100 Pandas columns\n\n# visualization\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# machine learning\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.naive_bayes import GaussianNB # Gaussian Naive Bayes\nfrom sklearn.linear_model import Perceptron\nfrom sklearn.linear_model import SGDClassifier #stochastic gradient descent\nfrom sklearn.tree import DecisionTreeClassifier\n\nimport xgboost as xgb\n\n# Plot styling\nsns.set(style='white', context='notebook', palette='deep')\nplt.rcParams[ 'figure.figsize' ] = 10 , 6",
"_____no_output_____"
]
],
[
[
"### Define fancy plot to look at distributions",
"_____no_output_____"
]
],
[
[
"# Special distribution plot (will be used later)\ndef plot_distribution( df , var , target , **kwargs ):\n row = kwargs.get( 'row' , None )\n col = kwargs.get( 'col' , None )\n facet = sns.FacetGrid( df , hue=target , aspect=4 , row = row , col = col )\n facet.map( sns.kdeplot , var , shade= True )\n facet.set( xlim=( 0 , df[ var ].max() ) )\n facet.add_legend()\n plt.tight_layout()",
"_____no_output_____"
]
],
[
[
"## References to material we won't cover in detail:\n\n* **Gradient Boosting:** http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/\n\n* **Naive Bayes:** http://scikit-learn.org/stable/modules/naive_bayes.html\n\n* **Perceptron:** http://aass.oru.se/~lilien/ml/seminars/2007_02_01b-Janecek-Perceptron.pdf",
"_____no_output_____"
],
[
"## Input Data",
"_____no_output_____"
]
],
[
[
"train_df = pd.read_csv('data/train.csv')\ntest_df = pd.read_csv('data/test.csv')\ncombine = [train_df, test_df]\n\n# NOTE! When we change train_df or test_df the objects in combine \n# will also change\n# (combine is only a pointer to the objects)\n\n\n# combine is used to ensure whatever preprocessing is done\n# on training data is also done on test data",
"_____no_output_____"
]
],
[
[
"# Exploratory Data Analysis (EDA)\nWe will analyze the data to see how we can work with it and what makes sense.",
"_____no_output_____"
]
],
[
[
"print(train_df.columns.values) ",
"_____no_output_____"
],
[
"# preview the data\ntrain_df.head(5)",
"_____no_output_____"
],
[
"# General data statistics\ntrain_df.describe()",
"_____no_output_____"
],
[
"# Data Frame information (null, data type etc)\ntrain_df.info()",
"_____no_output_____"
],
[
"train_df.hist(figsize=(13,10))\nplt.show()",
"_____no_output_____"
],
[
"# Balanced data set?\ntrain_df['Survived'].value_counts()",
"_____no_output_____"
],
[
"_",
"_____no_output_____"
],
[
"_[0]/(sum(_)) #base line for prediction accuracy",
"_____no_output_____"
],
[
"pd.plotting.scatter_matrix(train_df, figsize=(13,10));",
"_____no_output_____"
]
],
[
[
"## Comment on the Data\n\n> `PassengerId` is just an incrementing index and thus does not contain any valuable information. \n>\n>`Survived, Passenger Class, Age, Siblings Spouses, Parents Children` and `Fare` are numerical values -- so we don't need to transform them, but we might want to group them (i.e. create categorical variables). \n>\n>`Sex, Embarked` are categorical features that we need to map to integer values. `Name, Ticket` and `Cabin` might also contain valuable information.",
"_____no_output_____"
],
[
"# Preprocessing Data",
"_____no_output_____"
]
],
[
[
"# check dimensions of the train and test datasets\nprint(\"Shapes Before: (train) (test) = \", \\\n train_df.shape, test_df.shape)",
"_____no_output_____"
],
[
"# Drop columns 'Ticket', 'Cabin', need to do it for both test\n# and training\n\ntrain_df = train_df.drop(['Ticket', 'Cabin'], axis=1)\ntest_df = test_df.drop(['Ticket', 'Cabin'], axis=1)\ncombine = [train_df, test_df]\n\nprint(\"Shapes After: (train) (test) =\", train_df.shape, test_df.shape)",
"_____no_output_____"
],
[
"# Check if there are null values in the datasets\n\nprint(train_df.isnull().sum())\nprint()\nprint(test_df.isnull().sum())\n",
"_____no_output_____"
]
],
[
[
"## Hypotheses\n\n## 1: The Title of the person is a feature that can predict survival",
"_____no_output_____"
]
],
[
[
"# List example titles in Name column\ntrain_df.Name[:5]",
"_____no_output_____"
],
[
"# from the Name column we will extract title of each passenger\n# and save that in a column in the dataset called 'Title'\n# if you want to match Titles or names with any other expression\n# refer to this tutorial on regex in python:\n# https://www.tutorialspoint.com/python/python_reg_expressions.htm\n\n# Create new column called title\n\nfor dataset in combine:\n dataset['Title'] = dataset['Name'].str.extract(' ([A-Za-z]+)\\.',\\\n expand=False)",
"_____no_output_____"
],
[
"# Double check that our titles make sense (by comparing to sex)\n\npd.crosstab(train_df['Title'], train_df['Sex'])",
"_____no_output_____"
],
[
"# same for test set\npd.crosstab(test_df['Title'], test_df['Sex'])",
"_____no_output_____"
],
[
"# We see common titles like Miss, Mrs, Mr, Master are dominant, we will\n# correct some Titles to standard forms and replace the rarest titles \n# with single name 'Rare'\n\nfor dataset in combine:\n dataset['Title'] = dataset['Title'].\\\n replace(['Lady', 'Countess','Capt', 'Col', 'Don', 'Dr',\\\n 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')\n\n dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss') #Mademoiselle\n dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')\n dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs') #Madame",
"_____no_output_____"
],
[
"# Now that we have more logical titles, and a few groups\n# we can plot the survival chance for each title\n\ntrain_df[['Title', 'Survived']].groupby(['Title']).mean()",
"_____no_output_____"
],
[
"# We can also plot it\nsns.countplot(x='Survived', hue=\"Title\", data=train_df, order=[1,0])\nplt.xticks(range(2),['Made it','Deceased']);",
"_____no_output_____"
],
[
"# Title dummy mapping\n# Map titles to binary dummy columns\nfor dataset in combine:\n binary_encoded = pd.get_dummies(dataset.Title)\n newcols = binary_encoded.columns\n dataset[newcols] = binary_encoded\n\ntrain_df.head()",
"_____no_output_____"
],
[
"train_df = train_df.drop(['PassengerId','Name', 'Title'], axis=1)\ntest_df = test_df.drop(['Name', 'Title'], axis=1)\ncombine = [train_df, test_df]",
"_____no_output_____"
],
[
"train_df.head()",
"_____no_output_____"
]
],
[
[
"### Map Sex column to binary (male = 0, female = 1) categories",
"_____no_output_____"
]
],
[
[
"\nfor dataset in combine:\n dataset['Sex'] = dataset['Sex']. \\\n map( {'female': 1, 'male': 0} ).astype(int)\n\ntrain_df.head()",
"_____no_output_____"
]
],
[
[
"## Handle missing values for age\nWe will now guess values of age based on sex (male / female) \nand socioeconomic class (1st,2nd,3rd) of the passenger.\n\nThe row indicates the sex: male = 0, female = 1.\n\nThis gives a more refined estimate than taking only the overall median / mean.",
"_____no_output_____"
]
],
[
[
"guess_ages = np.zeros((2,3),dtype=int) #initialize\nguess_ages",
"_____no_output_____"
],
[
"# Fill the NA's for the Age columns\n# with \"qualified guesses\"\n\nfor idx,dataset in enumerate(combine):\n if idx==0:\n print('Working on Training Data set\\n')\n else:\n print('-'*35)\n print('Working on Test Data set\\n')\n \n print('Guess values of age based on sex and pclass of the passenger...')\n for i in range(0, 2):\n for j in range(0,3):\n guess_df = dataset[(dataset['Sex'] == i) \\\n &(dataset['Pclass'] == j+1)]['Age'].dropna()\n\n # Extract the median age for this group\n # (less sensitive) to outliers\n age_guess = guess_df.median()\n \n # Convert random age float to int\n guess_ages[i,j] = int(age_guess)\n \n \n print('Guess_Age table:\\n',guess_ages)\n print ('\\nAssigning age values to NAN age values in the dataset...')\n \n for i in range(0, 2):\n for j in range(0, 3):\n dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) \\\n & (dataset.Pclass == j+1),'Age'] = guess_ages[i,j]\n \n\n dataset['Age'] = dataset['Age'].astype(int)\n print()\nprint('Done!')\ntrain_df.head()",
"_____no_output_____"
]
],
[
[
"#### Split age into bands / categorical ranges and look at survival rates",
"_____no_output_____"
]
],
[
[
"# Age bands\ntrain_df['AgeBand'] = pd.cut(train_df['Age'], 5)\ntrain_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False)\\\n .mean().sort_values(by='AgeBand', ascending=True)",
"_____no_output_____"
],
[
"# Plot distributions of Age of passangers who survived \n# or did not survive\n\nplot_distribution( train_df , var = 'Age' , target = 'Survived' ,\\\n row = 'Sex' )",
"_____no_output_____"
],
[
"# Change Age column to\n# map Age ranges (AgeBands) to integer values of categorical type \nfor dataset in combine: \n dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0\n dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1\n dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2\n dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3\n dataset.loc[ dataset['Age'] > 64, 'Age']=4\ntrain_df.head()\n\n# Note we could just run \n# dataset['Age'] = pd.cut(dataset['Age'], 5,labels=[0,1,2,3,4])",
"_____no_output_____"
],
[
"# remove AgeBand from before\ntrain_df = train_df.drop(['AgeBand'], axis=1)\ncombine = [train_df, test_df]\ntrain_df.head()",
"_____no_output_____"
]
],
[
[
"# Create variable for Family Size\n\nHow did the number of people the person traveled with impact the chance of survival?",
"_____no_output_____"
]
],
[
[
"# SibSp = Number of Siblings / Spouses\n# Parch = Parents / Children\n\nfor dataset in combine:\n dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1\n\n \n# Survival chance against FamilySize\n\ntrain_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=True).mean().sort_values(by='Survived', ascending=False)",
"_____no_output_____"
],
[
"# Plot it, 1 is survived\nsns.countplot(x='Survived', hue=\"FamilySize\", data=train_df, order=[1,0]);",
"_____no_output_____"
],
[
"# Create binary variable if the person was alone or not\n\nfor dataset in combine:\n dataset['IsAlone'] = 0\n dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1\n\ntrain_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=True).mean()",
"_____no_output_____"
],
[
"# We will only use the binary IsAlone feature for further analysis\n\nfor df in combine:\n df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1, inplace=True)\n\n\ntrain_df.head()",
"_____no_output_____"
],
[
"# We can also create new features based on intuitive combinations\n# Here is an example where we say that age times socioeconomic class is a determining factor\nfor dataset in combine:\n dataset['Age*Class'] = dataset.Age * dataset.Pclass\n\ntrain_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(8)",
"_____no_output_____"
],
[
"train_df[['Age*Class', 'Survived']].groupby(['Age*Class'], as_index=True).mean()",
"_____no_output_____"
]
],
[
[
"# Port the person embarked from\nLet's see how that influences chance of survival",
"_____no_output_____"
]
],
[
[
"# To replace the NaN values in 'Embarked', we will use the mode\n# of 'Embarked'. This will give us the most frequent port \n# the passengers embarked from\n\nfreq_port = train_df['Embarked'].dropna().mode()[0]\nprint('Most frequent port of Embarkation:',freq_port)\n",
"_____no_output_____"
],
[
"# Fill NaN 'Embarked' Values in the datasets\nfor dataset in combine:\n dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)\n \ntrain_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=True).mean().sort_values(by='Survived', ascending=False)",
"_____no_output_____"
],
[
"# Let's plot it\n\nsns.countplot(x='Survived', hue=\"Embarked\", data=train_df, order=[1,0])\nplt.xticks(range(2),['Made it!', 'Deceased']);",
"_____no_output_____"
],
[
"# Create categorical dummy variables for Embarked values\nfor dataset in combine:\n binary_encoded = pd.get_dummies(dataset.Embarked)\n newcols = binary_encoded.columns\n dataset[newcols] = binary_encoded\n\n \ntrain_df.head()",
"_____no_output_____"
],
[
"# Drop Embarked\nfor dataset in combine:\n dataset.drop('Embarked', axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"## Handle continuous values in the Fare column",
"_____no_output_____"
]
],
[
[
"# Fill the NA values in the Fares column with the median\ntest_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)\ntest_df.head()",
"_____no_output_____"
],
[
"# q cut will find ranges equal to the quartile of the data\ntrain_df['FareBand'] = pd.qcut(train_df['Fare'], 4)\ntrain_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)",
"_____no_output_____"
],
[
"for dataset in combine:\n dataset['Fare']=pd.qcut(train_df['Fare'],4,labels=np.arange(4))\n dataset['Fare'] = dataset['Fare'].astype(int)\n\ntrain_df[['Fare','FareBand']].head(8)",
"_____no_output_____"
],
[
"# Drop FareBand\ntrain_df = train_df.drop(['FareBand'], axis=1) \ncombine = [train_df, test_df]",
"_____no_output_____"
]
],
[
[
"## Finished",
"_____no_output_____"
]
],
[
[
"train_df.head(7)\n# All features are approximately on the same scale\n# no need for feature engineering / normalization",
"_____no_output_____"
],
[
"test_df.head(7)",
"_____no_output_____"
],
[
"# Check correlation between features \n# (uncorrelated features are generally more powerful predictors)\ncolormap = plt.cm.viridis\nplt.figure(figsize=(12,12))\nplt.title('Pearson Correlation of Features', y=1.05, size=15)\nsns.heatmap(train_df.corr().round(2)\\\n ,linewidths=0.1,vmax=1.0, square=True, cmap=colormap, \\\n linecolor='white', annot=True);",
"_____no_output_____"
]
],
[
[
"# Next Up: Machine Learning!\nNow we will model, predict, and choose an algorithm for the classification task.\nTry using different classifiers to model and predict. Choose the best model from:\n* Logistic Regression\n* KNN \n* SVM\n* Naive Bayes\n* Decision Tree\n* Random Forest\n* Perceptron\n* XGBoost",
"_____no_output_____"
],
[
"## Setup Train and Validation Set",
"_____no_output_____"
]
],
[
[
"X = train_df.drop(\"Survived\", axis=1) # Training & Validation data\nY = train_df[\"Survived\"] # Response / Target Variable\n\n# Since we don't have labels for the test data\n# this won't be used. It's only for Kaggle Submissions\nX_submission = test_df.drop(\"PassengerId\", axis=1).copy() \n\nprint(X.shape, Y.shape)\n\n",
"_____no_output_____"
],
[
"# Split training and test set so that we test on 20% of the data\n# Note that our algorithms will never have seen the validation \n# data during training. This is to evaluate how good our estimators are.\n\nnp.random.seed(1337) # set random seed for reproducibility\n\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2)\n\nprint(X_train.shape, Y_train.shape)\nprint(X_val.shape, Y_val.shape)",
"_____no_output_____"
]
],
[
[
"## Scikit-Learn general ML workflow\n1. Instantiate model object\n2. Fit model to training data\n3. Let the model predict output for unseen data\n4. Compare predictions with actual output to form accuracy measure",
"_____no_output_____"
],
[
"# Logistic Regression",
"_____no_output_____"
]
],
[
[
"logreg = LogisticRegression() # instantiate\nlogreg.fit(X_train, Y_train) # fit\nY_pred = logreg.predict(X_val) # predict\nacc_log = sum(Y_pred == Y_val)/len(Y_val)*100\nprint('Logistic Regression accuracy:', str(round(acc_log,2)),'%')",
"_____no_output_____"
],
[
"# we could also use scikit learn's method score\n# that predicts and then compares to validation set labels\nacc_log = logreg.score(X_val, Y_val) # evaluate\nacc_log",
"_____no_output_____"
],
[
"# Support Vector Machines Classifier (non-linear kernel)\n\nsvc = SVC()\nsvc.fit(X_train, Y_train)\nacc_svc = svc.score(X_val, Y_val)\nacc_svc",
"_____no_output_____"
],
[
"knn = KNeighborsClassifier(n_neighbors = 3)\nknn.fit(X_train, Y_train)\nacc_knn = knn.score(X_val, Y_val)\nacc_knn",
"_____no_output_____"
],
[
"# Perceptron\nperceptron = Perceptron()\nperceptron.fit(X_train, Y_train)\nacc_perceptron = perceptron.score(X_val, Y_val)\nacc_perceptron",
"_____no_output_____"
],
[
"# XGBoost, same API as scikit-learn\ngradboost = xgb.XGBClassifier(n_estimators=1000)\ngradboost.fit(X_train, Y_train)\nacc_xgb = gradboost.score(X_val, Y_val)\nacc_xgb",
"_____no_output_____"
],
[
"# Random Forest\nrandom_forest = RandomForestClassifier(n_estimators=1000)\nrandom_forest.fit(X_train, Y_train)\nacc_random_forest = random_forest.score(X_val, Y_val)\nacc_random_forest",
"_____no_output_____"
]
],
[
[
"# Importance scores in the random forest model",
"_____no_output_____"
]
],
[
[
"# Look at importance of features for random forest\n\ndef plot_model_var_imp( model , X , y ):\n imp = pd.DataFrame( \n model.feature_importances_ , \n columns = [ 'Importance' ] , \n index = X.columns \n )\n imp = imp.sort_values( [ 'Importance' ] , ascending = True )\n imp[ : 10 ].plot( kind = 'barh' )\n print ('Training accuracy Random Forest:',model.score( X , y ))\n\nplot_model_var_imp(random_forest, X_train, Y_train)",
"_____no_output_____"
]
],
[
[
"# Compete on Kaggle!",
"_____no_output_____"
]
],
[
[
"# How to create a Kaggle submission with a Random Forest Classifier\nY_submission = random_forest.predict(X_submission)\nsubmission = pd.DataFrame({\n \"PassengerId\": test_df[\"PassengerId\"],\n \"Survived\": Y_submission\n })\nsubmission.to_csv('titanic.csv', index=False)",
"_____no_output_____"
]
],
[
[
"# Legacy code (not used anymore)",
"_____no_output_____"
],
[
"```python\n# Map title string values to numbers so that we can make predictions\n\ntitle_mapping = {\"Mr\": 1, \"Miss\": 2, \"Mrs\": 3, \"Master\": 4, \"Rare\": 5}\nfor dataset in combine:\n dataset['Title'] = dataset['Title'].map(title_mapping)\n dataset['Title'] = dataset['Title'].fillna(0) \n # Handle missing values\n\ntrain_df.head()\n```\n\n```python\n# Drop the unnecessary Name column (we have the titles now)\n\ntrain_df = train_df.drop(['Name', 'PassengerId'], axis=1)\ntest_df = test_df.drop(['Name'], axis=1)\ncombine = [train_df, test_df]\ntrain_df.shape, test_df.shape\n```\n\n```python\n# Create categorical dummy variables for Embarked values\nfor dataset in combine:\n dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)\n\ntrain_df.head()\n```",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecda3a9583cb19663355d17d469f2687fc7f0b85 | 6,790 | ipynb | Jupyter Notebook | examples/ex2_1.ipynb | RishiKumarRay/kglab | f5377a7937c9b7f2198e237f558ddbd463e33cba | [
"MIT"
] | null | null | null | examples/ex2_1.ipynb | RishiKumarRay/kglab | f5377a7937c9b7f2198e237f558ddbd463e33cba | [
"MIT"
] | null | null | null | examples/ex2_1.ipynb | RishiKumarRay/kglab | f5377a7937c9b7f2198e237f558ddbd463e33cba | [
"MIT"
] | null | null | null | 30.863636 | 415 | 0.558616 | [
[
[
"# for use in tutorial and development; do not include this `sys.path` change in production:\nimport sys ; sys.path.insert(0, \"../\")",
"_____no_output_____"
]
],
[
[
"# Load data via Morph-KGC\n\n> [`morph-kgc`](https://github.com/oeg-upm/morph-kgc) is an engine that constructs RDF knowledge graphs from heterogeneous data sources with [R2RML](https://www.w3.org/2001/sw/rdb2rdf/r2rml/) and [RML](https://rml.io/specs/rml/) mapping languages. Morph-KGC is built on top of pandas and it leverages mapping partitions to significantly reduce execution times and memory consumption for large data sources.\n\nFor documentation see <https://github.com/oeg-upm/Morph-KGC/wiki/Usage>",
"_____no_output_____"
],
[
"This example uses a simple SQLite database as input, transforming it into an RDF knowledge graph based on an R2RML mapping for relations between \"students\" and \"sports\".\n\nFirst, let's visualize the sample database:",
"_____no_output_____"
]
],
[
[
"CREATE TABLE \"Student\" (\n \"ID\" integer PRIMARY KEY,\n \"FirstName\" varchar(50),\n \"LastName\" varchar(50)\n);\n\nCREATE TABLE \"Sport\" (\n \"ID\" integer PRIMARY KEY,\n \"Description\" varchar(50)\n);\n\nCREATE TABLE \"Student_Sport\" (\n \"ID_Student\" integer,\n \"ID_Sport\" integer,\n PRIMARY KEY (\"ID_Student\",\"ID_Sport\"),\n FOREIGN KEY (\"ID_Student\") REFERENCES \"Student\"(\"ID\"),\n FOREIGN KEY (\"ID_Sport\") REFERENCES \"Sport\"(\"ID\")\n);\n\nINSERT INTO \"Student\" (\"ID\",\"FirstName\",\"LastName\") VALUES (10,'Venus', 'Williams');\nINSERT INTO \"Student\" (\"ID\",\"FirstName\",\"LastName\") VALUES (11,'Fernando', 'Alonso');\nINSERT INTO \"Student\" (\"ID\",\"FirstName\",\"LastName\") VALUES (12,'David', 'Villa');\n\nINSERT INTO \"Sport\" (\"ID\", \"Description\") VALUES (110,'Tennis');\nINSERT INTO \"Sport\" (\"ID\", \"Description\") VALUES (111,'Football');\nINSERT INTO \"Sport\" (\"ID\", \"Description\") VALUES (112,'Formula1');\n\nINSERT INTO \"Student_Sport\" (\"ID_Student\", \"ID_Sport\") VALUES (10,110);\nINSERT INTO \"Student_Sport\" (\"ID_Student\", \"ID_Sport\") VALUES (11,111);\nINSERT INTO \"Student_Sport\" (\"ID_Student\", \"ID_Sport\") VALUES (11,112);\nINSERT INTO \"Student_Sport\" (\"ID_Student\", \"ID_Sport\") VALUES (12,111);",
"_____no_output_____"
]
],
[
[
"This has three tables plus the data to populate them.\n\n`Morph-KGC` needs a configuration to describe the mapping, so let's create a basic one for our example:",
"_____no_output_____"
]
],
[
[
"import os\n\nconfig = f\"\"\"\n[StudentSportDB]\nmappings={os.path.dirname(os.getcwd())}/dat/student_sport.r2rml.ttl\ndb_url=sqlite:///{os.path.dirname(os.getcwd())}/dat/student_sport.db\n \"\"\"",
"_____no_output_____"
]
],
[
[
"You can see how to create this config file in the [docs](https://github.com/oeg-upm/Morph-KGC/wiki/Configuration).\n\nAlternatively, you can provide a path to a config file, for example:\n```\nconfig = \"path/to/config.ini\"\n```",
"_____no_output_____"
],
[
"Next we'll use `morph-kgc` to load the RDF data from the SQLite based on an R2RML mapping:",
"_____no_output_____"
]
],
[
[
"from icecream import ic\nimport kglab\n\nnamespaces = {\n \"ex\": \"http://example.com/\",\n }\n\nkg = kglab.KnowledgeGraph(\n name = \"A KG example with students and sports\",\n namespaces = namespaces,\n )\n\nkg.materialize(config);",
"INFO | 2022-02-27 12:15:21,403 | 7 mapping rules retrieved.\nINFO | 2022-02-27 12:15:21,418 | Mapping partition with 1 groups generated.\nINFO | 2022-02-27 12:15:21,419 | Maximum number of rules within mapping group: 7.\nINFO | 2022-02-27 12:15:21,420 | Mappings processed in 1.739 seconds.\nINFO | 2022-02-27 12:15:21,523 | Number of triples generated in total: 22.\n"
]
],
[
[
"Data can be loaded from multiple text formats, e.g. CSV, JSON, XML, Parquet, and also through different relational DBMS such as PostgresSQL, MySQL, Oracle, Microsoft SQL Server, MariaDB, and so on.",
"_____no_output_____"
],
[
"Now let's try to query!",
"_____no_output_____"
]
],
[
[
"sparql = \"\"\"\nPREFIX ex: <http://example.com/>\n\nSELECT ?student_name ?sport_desc\nWHERE {\n ?student rdf:type ex:Student .\n ?student ex:firstName ?student_name .\n ?student ex:plays ?sport .\n ?sport ex:description ?sport_desc\n}\n \"\"\"\n\nfor row in kg._g.query(sparql):\n student_name = kg.n3fy(row.student_name)\n sport_desc = kg.n3fy(row.sport_desc)\n ic(student_name, sport_desc)",
"ic| student_name: 'Venus', sport_desc: 'Tennis'\nic| student_name: 'David', sport_desc: 'Football'\nic| student_name: 'Fernando', sport_desc: 'Football'\nic| student_name: 'Fernando', sport_desc: 'Formula1'\n"
]
]
] | [
"code",
"markdown",
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"raw"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
ecda3cc5c6abceb67122475eb812ef455ebaf4e4 | 240,250 | ipynb | Jupyter Notebook | scripts/notebooks/Exploring Cross Correlation in Dream4.ipynb | jiawu/Roller | a70e350905a59c2254dcefda7ab23c6417cf8f7d | [
"MIT"
] | null | null | null | scripts/notebooks/Exploring Cross Correlation in Dream4.ipynb | jiawu/Roller | a70e350905a59c2254dcefda7ab23c6417cf8f7d | [
"MIT"
] | 2 | 2015-07-13T18:51:22.000Z | 2015-07-16T15:35:24.000Z | scripts/notebooks/Exploring Cross Correlation in Dream4.ipynb | jiawu/Roller | a70e350905a59c2254dcefda7ab23c6417cf8f7d | [
"MIT"
] | null | null | null | 69.376263 | 172 | 0.763992 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"# load packages\nimport pandas as pd\nimport statsmodels.tsa.stattools as stats\nimport statsmodels.graphics.tsaplots as sg\nimport matplotlib.pyplot as plt\nimport matplotlib\n%matplotlib inline\nimport sys\nfrom datetime import datetime\nimport numpy as np\n\nimport networkx as nx\nfrom nxpd import draw\nfrom nxpd import nxpdParams\nnxpdParams['show'] = 'ipynb'\n\nsys.path.append(\"../pipelines\")\nimport Pipelines as tdw\ndata_folder = \"/projects/p20519/roller_output/optimizing_window_size/RandomForest/insilico_size10_1/\"\n\noutput_path = \"/home/jjw036/Roller/insilico_size10_1\"\n\ncurrent_time = datetime.now().strftime('%Y-%m-%d_%H:%M:%S')\n\ndata_folder = \"../output/insilico_size10_1\"\nfile_path = \"../data/dream4/insilico_size10_1_timeseries.tsv\"\nrun_params = {'data_folder': data_folder,\n 'file_path':file_path,\n 'td_window':10,\n 'min_lag':1,\n 'max_lag':3,\n 'n_trees':10,\n 'permutation_n':10,\n 'lag_method':'mean_mean',\n 'calc_mse':False,\n 'bootstrap_n':1000,\n 'n_trials':1,\n 'run_time':current_time,\n 'sort_by':'rank',\n 'iterating_param':'td_window',\n }\n \n\nroc,pr, tdr = tdw.get_td_stats(**run_params)",
"['Time', 'G1', 'G2', 'G3', 'G4', 'G5', 'G6', 'G7', 'G8', 'G9', 'G10']\nRunning permutation on window 3...\nRunning permutation on window 4...\nRunning permutation on window 5...\nRunning permutation on window 6...\nRunning permutation on window 7...\nRunning permutation on window 8...\nRunning permutation on window 9...\nRunning permutation on window 10...\nRunning permutation on window 11...\nCompiling all model edges...\n[DONE]\nLumping edges...\n[DONE]"
],
[
"## Loading baseline SWING results (uniform windowing)\nedges = pd.read_csv(\"../data/dream4/insilico_size10_1_goldstandard.tsv\",sep=\"\\t\",header=None)\nedges = edges[edges[2] > 0]\nedges=edges[edges.columns[0:2]]\nedges = [tuple(x) for x in edges.values]\n\ntdr.full_edge_list\n#tdr.edge_dict\nfinal_edge_list = tdr.make_sort_df(tdr.edge_dict, sort_by=run_params['sort_by'])\nfinal_edge_list['Correct'] = final_edge_list['regulator-target'].isin(edges)\npd.set_option('display.height', 500)\nfinal_edge_list",
"Calculating rank edge importance...\n[DONE]\nheight has been deprecated.\n\n"
],
[
"## Identifying edges that are poorly detected\n#Edge G2->G8, and G7->G4 \n\ndef get_experiment_list(filename):\n # load files\n timecourse = pd.read_csv(filename, sep=\"\\t\")\n # divide into list of dataframes\n experiments = []\n for i in range(0,85,21):\n experiments.append(timecourse.ix[i:i+20])\n \n #reformat\n for idx,exp in enumerate(experiments):\n exp = exp.set_index('Time')\n experiments[idx]=exp\n return(experiments)\n\nexperiments=get_experiment_list(\"../data/dream4/insilico_size10_1_timeseries.tsv\")\n# formatting matplotlib\nfont = {'family' : 'normal',\n 'weight' : 'bold',\n 'size' : 12}\n\nmatplotlib.rc('font', **font)\n",
"_____no_output_____"
],
[
"\n# plot time series for an interaction\nfig = plt.figure(figsize=(8,15))\n\nG1 = 'G9'\nG2 = 'G10'\n\nfor index,experiment in enumerate(experiments):\n ax1 = experiments[index][[G1, G2]].plot(linewidth=2, colors=['blue','red'], style=['.--','.-'])\n ax1.set_ylabel('Normalized Intensity', fontweight='bold')\n ax1.set_xlabel('Time',fontweight='bold')\n ax1.legend(loc='center left', bbox_to_anchor=(1, 0.5))\n",
"/Users/jjw036/anaconda/lib/python3.4/site-packages/pandas/tools/plotting.py:929: UserWarning: 'colors' is being deprecated. Please use 'color'instead of 'colors'\n warnings.warn((\"'colors' is being deprecated. Please use 'color'\"\n"
],
[
"def plot_ccf(exp,experiments, var1, var2):\n ccf = stats.ccf(experiments[exp][var1], experiments[exp][var2])\n return(ccf)\n\nccf_list = [plot_ccf(i, experiments, 'G9','G10') for i in range(0,5)]\n\nccf_array = np.array(ccf_list)\n\nmean_array = ccf_array.mean(axis = 0)\n\n#plt.plot(mean_array,'.',linewidth=2, label=i)\n\n# for i in range(0,5): \n# ccf=plot_ccf(i,experiments, 'G9','G10')\n# plt.plot(ccf,'.',linewidth=2, label=i)\n# plt.legend(loc='best')\nccf=plot_ccf(4,experiments, 'G9','G10')\nplt.plot(ccf,'.',linewidth=2, label=i)\n# len(ccf)\nplt.figure()\nitem1 = experiments[4]['G9']\nitem2 = experiments[4]['G10']\nitem2_shifted = experiments[4]['G10'].shift(1)\n\nplt.plot(item1,item2, '.', label = 'not-shifted')\nplt.plot(item1,item2_shifted , '.', label = 'shifted')\nplt.legend(loc = 'best')\n\nshifted_list=[item2.shift(i) for i in range(0, len(item2)-1)]\n#print(\"shifted list\", shifted_list)\n\npearson_cc = []\npvalue_cc = []\nfor i in range(0,len(shifted_list)-1):\n print(i)\n shifted_item =shifted_list[i][~pd.isnull(shifted_list[i])]\n censored_item = item1[:len(shifted_item),]\n\n rr, p_value = scipy.stats.pearsonr( shifted_item, censored_item)\n pearson_cc.append(rr)\n pvalue_cc.append(p_value)\n\nplt.figure()\nplt.plot(pearson_cc, '.')\n\n# 1)preprocessing steps before inference:\n# identify inactive genes in each experiment\n# make new array of inactive genes (using a matrix)\n# when regressing explanatory variables, remove genes/timepoints that are labeled as inactive\n\n# 2) CCF as a lag selection method\n# pick order, create cost function to select lag order\n\n\n# combining ccf for separate experiments?\n\n\n",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n"
],
[
"## Dissecting window results\nfor target_window in tdr.window_list:\n\n current_df = target_window.make_edge_table(calc_mse=False)\n current_df['adj_imp'] = np.abs(current_df['Importance']*(1-current_df['p_value']))\n current_df.sort(['adj_imp'], ascending=False,inplace=True)\n print(current_df[(current_df['Parent']=='G7') & (current_df['Child']=='G2')])",
" Parent Child Importance P_window C_window p_value adj_imp\n46 G7 G2 0.12973 1 3 0.00125 0.12956\n56 G7 G2 0.12513 2 3 0.00012 0.12511\n36 G7 G2 0.00537 0 3 0.29334 0.00379\n Parent Child Importance P_window C_window p_value adj_imp\n56 G7 G2 0.15941 3 4 0.00000 0.15941\n46 G7 G2 0.02320 2 4 0.80804 0.00445\n36 G7 G2 0.00480 1 4 0.14533 0.00411\n Parent Child Importance P_window C_window p_value adj_imp\n36 G7 G2 0.11332 2 5 0.01785 0.11130\n56 G7 G2 0.09284 4 5 0.13519 0.08029\n46 G7 G2 0.01322 3 5 0.60985 0.00516\n Parent Child Importance P_window C_window p_value adj_imp\n56 G7 G2 0.11475 5 6 0.00035 0.11471\n46 G7 G2 0.10962 4 6 0.01254 0.10825\n36 G7 G2 0.00924 3 6 0.45041 0.00508\n Parent Child Importance P_window C_window p_value adj_imp\n56 G7 G2 0.16068 6 7 0.00000 0.16068\n46 G7 G2 0.07597 5 7 0.01940 0.07450\n36 G7 G2 0.00670 4 7 0.50425 0.00332\n Parent Child Importance P_window C_window p_value adj_imp\n46 G7 G2 0.21620 6 8 0.00000 0.21620\n56 G7 G2 0.06600 7 8 0.11125 0.05866\n36 G7 G2 0.04196 5 8 0.81467 0.00778\n Parent Child Importance P_window C_window p_value adj_imp\n36 G7 G2 0.01204 6 9 0.32211 0.00816\n56 G7 G2 0.02238 8 9 0.79666 0.00455\n46 G7 G2 0.01246 7 9 0.69934 0.00375\n Parent Child Importance P_window C_window p_value adj_imp\n56 G7 G2 0.11581 9 10 0.00000 0.11581\n36 G7 G2 0.01161 7 10 0.34394 0.00762\n46 G7 G2 0.00749 8 10 0.31096 0.00516\n Parent Child Importance P_window C_window p_value adj_imp\n56 G7 G2 0.08967 10 11 0.00000 0.08967\n46 G7 G2 0.01589 9 11 0.24046 0.01207\n36 G7 G2 0.00718 8 11 0.25953 0.00532\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecda537f6eeb04fd8a616245df9c5efb7ed32678 | 77,075 | ipynb | Jupyter Notebook | faiss/faiss_Clustering_test.ipynb | cateto/python4NLP | 1d2d5086f907bf75be01762bf0b384c76d8f704e | [
"MIT"
] | 2 | 2021-12-16T22:38:27.000Z | 2021-12-17T13:09:49.000Z | faiss/faiss_Clustering_test.ipynb | cateto/python4NLP | 1d2d5086f907bf75be01762bf0b384c76d8f704e | [
"MIT"
] | null | null | null | faiss/faiss_Clustering_test.ipynb | cateto/python4NLP | 1d2d5086f907bf75be01762bf0b384c76d8f704e | [
"MIT"
] | null | null | null | 58.523159 | 2,478 | 0.58216 | [
[
[
"<a href=\"https://colab.research.google.com/github/cateto/python4NLP/blob/main/faiss/faiss_Clustering_test.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"!git clone https://github.com/kstathou/vector_engine",
"Cloning into 'vector_engine'...\nremote: Enumerating objects: 74, done.\u001b[K\nremote: Counting objects: 100% (74/74), done.\u001b[K\nremote: Compressing objects: 100% (53/53), done.\u001b[K\nremote: Total 74 (delta 32), reused 59 (delta 18), pack-reused 0\u001b[K\nUnpacking objects: 100% (74/74), done.\n"
],
[
"cd vector_engine",
"/content/vector_engine\n"
],
[
"pip install -r requirements.txt",
"Obtaining file:///content/vector_engine (from -r requirements.txt (line 9))\nCollecting torch==1.8.1\n Downloading torch-1.8.1-cp37-cp37m-manylinux1_x86_64.whl (804.1 MB)\n\u001b[K |████████████████████████████████| 804.1 MB 2.9 kB/s \n\u001b[?25hCollecting transformers==3.3.1\n Downloading transformers-3.3.1-py3-none-any.whl (1.1 MB)\n\u001b[K |████████████████████████████████| 1.1 MB 45.7 MB/s \n\u001b[?25hCollecting sentence-transformers==0.3.8\n Downloading sentence-transformers-0.3.8.tar.gz (66 kB)\n\u001b[K |████████████████████████████████| 66 kB 5.3 MB/s \n\u001b[?25hCollecting pandas==1.1.2\n Downloading pandas-1.1.2-cp37-cp37m-manylinux1_x86_64.whl (10.5 MB)\n\u001b[K |████████████████████████████████| 10.5 MB 48.1 MB/s \n\u001b[?25hCollecting faiss-cpu==1.6.1\n Downloading faiss_cpu-1.6.1-cp37-cp37m-manylinux2010_x86_64.whl (7.1 MB)\n\u001b[K |████████████████████████████████| 7.1 MB 31.1 MB/s \n\u001b[?25hCollecting numpy==1.19.2\n Downloading numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl (14.5 MB)\n\u001b[K |████████████████████████████████| 14.5 MB 50.1 MB/s \n\u001b[?25hCollecting folium==0.2.1\n Downloading folium-0.2.1.tar.gz (69 kB)\n\u001b[K |████████████████████████████████| 69 kB 7.6 MB/s \n\u001b[?25hCollecting streamlit==0.62.0\n Downloading streamlit-0.62.0-py2.py3-none-any.whl (7.1 MB)\n\u001b[K |████████████████████████████████| 7.1 MB 28.0 MB/s \n\u001b[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.8.1->-r requirements.txt (line 1)) (3.10.0.2)\nCollecting sentencepiece!=0.1.92\n Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)\n\u001b[K |████████████████████████████████| 1.2 MB 42.5 MB/s \n\u001b[?25hCollecting tokenizers==0.8.1.rc2\n Downloading tokenizers-0.8.1rc2-cp37-cp37m-manylinux1_x86_64.whl (3.0 MB)\n\u001b[K |████████████████████████████████| 3.0 MB 42.2 MB/s \n\u001b[?25hRequirement already satisfied: packaging 
in /usr/local/lib/python3.7/dist-packages (from transformers==3.3.1->-r requirements.txt (line 2)) (21.3)\nRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers==3.3.1->-r requirements.txt (line 2)) (2.23.0)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers==3.3.1->-r requirements.txt (line 2)) (4.62.3)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers==3.3.1->-r requirements.txt (line 2)) (2019.12.20)\nCollecting sacremoses\n Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)\n\u001b[K |████████████████████████████████| 895 kB 40.6 MB/s \n\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers==3.3.1->-r requirements.txt (line 2)) (3.4.0)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sentence-transformers==0.3.8->-r requirements.txt (line 3)) (1.0.1)\nRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from sentence-transformers==0.3.8->-r requirements.txt (line 3)) (1.4.1)\nRequirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (from sentence-transformers==0.3.8->-r requirements.txt (line 3)) (3.2.5)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas==1.1.2->-r requirements.txt (line 4)) (2.8.2)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas==1.1.2->-r requirements.txt (line 4)) (2018.9)\nRequirement already satisfied: Jinja2 in /usr/local/lib/python3.7/dist-packages (from folium==0.2.1->-r requirements.txt (line 7)) (2.11.3)\nCollecting boto3\n Downloading boto3-1.20.26-py3-none-any.whl (131 kB)\n\u001b[K |████████████████████████████████| 131 kB 49.7 MB/s \n\u001b[?25hRequirement already satisfied: toml in 
/usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (0.10.2)\nRequirement already satisfied: click>=7.0 in /usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (7.1.2)\nCollecting watchdog\n Downloading watchdog-2.1.6-py3-none-manylinux2014_x86_64.whl (76 kB)\n\u001b[K |████████████████████████████████| 76 kB 5.4 MB/s \n\u001b[?25hCollecting blinker\n Downloading blinker-1.4.tar.gz (111 kB)\n\u001b[K |████████████████████████████████| 111 kB 49.9 MB/s \n\u001b[?25hRequirement already satisfied: tornado>=5.0 in /usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (5.1.1)\nCollecting botocore>=1.13.44\n Downloading botocore-1.23.26-py3-none-any.whl (8.5 MB)\n\u001b[K |████████████████████████████████| 8.5 MB 46.2 MB/s \n\u001b[?25hCollecting validators\n Downloading validators-0.18.2-py3-none-any.whl (19 kB)\nCollecting base58\n Downloading base58-2.1.1-py3-none-any.whl (5.6 kB)\nRequirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (3.17.3)\nRequirement already satisfied: altair>=3.2.0 in /usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (4.1.0)\nRequirement already satisfied: tzlocal in /usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (1.5.1)\nRequirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (7.1.2)\nRequirement already satisfied: astor in /usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (0.8.1)\nCollecting pydeck>=0.1.dev5\n Downloading pydeck-0.7.1-py2.py3-none-any.whl (4.3 MB)\n\u001b[K |████████████████████████████████| 4.3 MB 43.1 MB/s \n\u001b[?25hCollecting enum-compat\n Downloading enum_compat-0.0.3-py3-none-any.whl (1.3 
kB)\nRequirement already satisfied: cachetools>=4.0 in /usr/local/lib/python3.7/dist-packages (from streamlit==0.62.0->-r requirements.txt (line 8)) (4.2.4)\nRequirement already satisfied: entrypoints in /usr/local/lib/python3.7/dist-packages (from altair>=3.2.0->streamlit==0.62.0->-r requirements.txt (line 8)) (0.3)\nRequirement already satisfied: toolz in /usr/local/lib/python3.7/dist-packages (from altair>=3.2.0->streamlit==0.62.0->-r requirements.txt (line 8)) (0.11.2)\nRequirement already satisfied: jsonschema in /usr/local/lib/python3.7/dist-packages (from altair>=3.2.0->streamlit==0.62.0->-r requirements.txt (line 8)) (2.6.0)\nCollecting jmespath<1.0.0,>=0.7.1\n Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)\nCollecting urllib3<1.27,>=1.25.4\n Downloading urllib3-1.26.7-py2.py3-none-any.whl (138 kB)\n\u001b[K |████████████████████████████████| 138 kB 47.0 MB/s \n\u001b[?25hRequirement already satisfied: six>=1.9 in /usr/local/lib/python3.7/dist-packages (from protobuf>=3.6.0->streamlit==0.62.0->-r requirements.txt (line 8)) (1.15.0)\nRequirement already satisfied: traitlets>=4.3.2 in /usr/local/lib/python3.7/dist-packages (from pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (5.1.1)\nCollecting ipykernel>=5.1.2\n Downloading ipykernel-6.6.0-py3-none-any.whl (126 kB)\n\u001b[K |████████████████████████████████| 126 kB 50.3 MB/s \n\u001b[?25hRequirement already satisfied: ipywidgets>=7.0.0 in /usr/local/lib/python3.7/dist-packages (from pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (7.6.5)\nCollecting ipython>=7.23.1\n Downloading ipython-7.30.1-py3-none-any.whl (791 kB)\n\u001b[K |████████████████████████████████| 791 kB 44.7 MB/s \n\u001b[?25hRequirement already satisfied: debugpy<2.0,>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (1.0.0)\nRequirement already satisfied: argcomplete>=1.12.3 in 
/usr/local/lib/python3.7/dist-packages (from ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (1.12.3)\nRequirement already satisfied: matplotlib-inline<0.2.0,>=0.1.0 in /usr/local/lib/python3.7/dist-packages (from ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.1.3)\nRequirement already satisfied: jupyter-client<8.0 in /usr/local/lib/python3.7/dist-packages (from ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (5.3.5)\nRequirement already satisfied: importlib-metadata<5 in /usr/local/lib/python3.7/dist-packages (from ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (4.8.2)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata<5->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (3.6.0)\nCollecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0\n Downloading prompt_toolkit-3.0.24-py3-none-any.whl (374 kB)\n\u001b[K |████████████████████████████████| 374 kB 48.3 MB/s \n\u001b[?25hRequirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (4.4.2)\nRequirement already satisfied: pexpect>4.3 in /usr/local/lib/python3.7/dist-packages (from ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (4.8.0)\nRequirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.7/dist-packages (from ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (57.4.0)\nRequirement already satisfied: backcall in /usr/local/lib/python3.7/dist-packages (from ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.2.0)\nRequirement already satisfied: pygments in 
/usr/local/lib/python3.7/dist-packages (from ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (2.6.1)\nRequirement already satisfied: jedi>=0.16 in /usr/local/lib/python3.7/dist-packages (from ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.18.1)\nRequirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.7.5)\nRequirement already satisfied: nbformat>=4.2.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (5.1.3)\nRequirement already satisfied: ipython-genutils~=0.2.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.2.0)\nRequirement already satisfied: jupyterlab-widgets>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (1.0.2)\nRequirement already satisfied: widgetsnbextension~=3.5.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (3.5.2)\nRequirement already satisfied: parso<0.9.0,>=0.8.0 in /usr/local/lib/python3.7/dist-packages (from jedi>=0.16->ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.8.3)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from Jinja2->folium==0.2.1->-r requirements.txt (line 7)) (2.0.1)\nRequirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.7/dist-packages (from jupyter-client<8.0->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (22.3.0)\nRequirement already satisfied: jupyter-core>=4.6.0 in 
/usr/local/lib/python3.7/dist-packages (from jupyter-client<8.0->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (4.9.1)\nRequirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect>4.3->ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.7.0)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython>=7.23.1->ipykernel>=5.1.2->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.2.5)\nRequirement already satisfied: notebook>=4.4.1 in /usr/local/lib/python3.7/dist-packages (from widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (5.3.1)\nRequirement already satisfied: nbconvert in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (5.6.1)\nRequirement already satisfied: Send2Trash in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (1.8.0)\nRequirement already satisfied: terminado>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.12.1)\nCollecting s3transfer<0.6.0,>=0.5.0\n Downloading s3transfer-0.5.0-py3-none-any.whl (79 kB)\n\u001b[K |████████████████████████████████| 79 kB 7.9 MB/s \n\u001b[?25hRequirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (1.5.0)\nRequirement already satisfied: mistune<2,>=0.8.1 in 
/usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.8.4)\nRequirement already satisfied: testpath in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.5.0)\nRequirement already satisfied: defusedxml in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.7.1)\nRequirement already satisfied: bleach in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (4.1.0)\nRequirement already satisfied: webencodings in /usr/local/lib/python3.7/dist-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit==0.62.0->-r requirements.txt (line 8)) (0.5.1)\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers==3.3.1->-r requirements.txt (line 2)) (3.0.6)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==3.3.1->-r requirements.txt (line 2)) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==3.3.1->-r requirements.txt (line 2)) (2.10)\nCollecting urllib3<1.27,>=1.25.4\n Downloading urllib3-1.25.11-py2.py3-none-any.whl (127 kB)\n\u001b[K |████████████████████████████████| 127 kB 50.3 MB/s \n\u001b[?25hRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==3.3.1->-r requirements.txt (line 2)) 
(2021.10.8)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers==3.3.1->-r requirements.txt (line 2)) (1.1.0)\nRequirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->sentence-transformers==0.3.8->-r requirements.txt (line 3)) (3.0.0)\nBuilding wheels for collected packages: sentence-transformers, folium, blinker\n Building wheel for sentence-transformers (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for sentence-transformers: filename=sentence_transformers-0.3.8-py3-none-any.whl size=101995 sha256=6959e7233b56f79558b25d56324f5bd2b84496c1d54a1ec2f1daae11942fd8ed\n Stored in directory: /root/.cache/pip/wheels/1c/43/65/fe0f3ea9327623e749a79eb5dfad85a809c84064b1cc4682c1\n Building wheel for folium (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for folium: filename=folium-0.2.1-py3-none-any.whl size=79809 sha256=c7c1d32fc0f33467bf0c300f682637df67bc9db6a9430aa841fd3d8154c9e725\n Stored in directory: /root/.cache/pip/wheels/9a/f0/3a/3f79a6914ff5affaf50cabad60c9f4d565283283c97f0bdccf\n Building wheel for blinker (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for blinker: filename=blinker-1.4-py3-none-any.whl size=13478 sha256=30bbdc55a9441473c309de84d93d3e560bbded56dab895eccbe349bda9b0b5cc\n Stored in directory: /root/.cache/pip/wheels/22/f5/18/df711b66eb25b21325c132757d4314db9ac5e8dabeaf196eab\nSuccessfully built sentence-transformers folium blinker\nInstalling collected packages: prompt-toolkit, ipython, ipykernel, urllib3, jmespath, numpy, botocore, tokenizers, sentencepiece, sacremoses, s3transfer, pandas, watchdog, validators, transformers, torch, pydeck, enum-compat, boto3, blinker, base58, vector-engine, streamlit, sentence-transformers, folium, faiss-cpu\n Attempting uninstall: prompt-toolkit\n Found existing installation: prompt-toolkit 1.0.18\n Uninstalling prompt-toolkit-1.0.18:\n Successfully uninstalled prompt-toolkit-1.0.18\n Attempting uninstall: ipython\n Found existing installation: ipython 5.5.0\n Uninstalling ipython-5.5.0:\n Successfully uninstalled ipython-5.5.0\n Attempting uninstall: ipykernel\n Found existing installation: ipykernel 4.10.1\n Uninstalling ipykernel-4.10.1:\n Successfully uninstalled ipykernel-4.10.1\n Attempting uninstall: urllib3\n Found existing installation: urllib3 1.24.3\n Uninstalling urllib3-1.24.3:\n Successfully uninstalled urllib3-1.24.3\n Attempting uninstall: numpy\n Found existing installation: numpy 1.19.5\n Uninstalling numpy-1.19.5:\n Successfully uninstalled numpy-1.19.5\n Attempting uninstall: pandas\n Found existing installation: pandas 1.1.5\n Uninstalling pandas-1.1.5:\n Successfully uninstalled pandas-1.1.5\n Attempting uninstall: torch\n Found existing installation: torch 1.10.0+cu111\n Uninstalling torch-1.10.0+cu111:\n Successfully uninstalled torch-1.10.0+cu111\n Running setup.py develop for vector-engine\n Attempting uninstall: folium\n Found existing installation: folium 0.8.3\n Uninstalling folium-0.8.3:\n Successfully uninstalled folium-0.8.3\n\u001b[31mERROR: pip's dependency resolver does not currently 
take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\ntorchvision 0.11.1+cu111 requires torch==1.10.0, but you have torch 1.8.1 which is incompatible.\ntorchtext 0.11.0 requires torch==1.10.0, but you have torch 1.8.1 which is incompatible.\ntorchaudio 0.10.0+cu111 requires torch==1.10.0, but you have torch 1.8.1 which is incompatible.\njupyter-console 5.2.0 requires prompt-toolkit<2.0.0,>=1.0.0, but you have prompt-toolkit 3.0.24 which is incompatible.\ngoogle-colab 1.0.0 requires ipykernel~=4.10, but you have ipykernel 6.6.0 which is incompatible.\ngoogle-colab 1.0.0 requires ipython~=5.5.0, but you have ipython 7.30.1 which is incompatible.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\nSuccessfully installed base58-2.1.1 blinker-1.4 boto3-1.20.26 botocore-1.23.26 enum-compat-0.0.3 faiss-cpu-1.6.1 folium-0.2.1 ipykernel-6.6.0 ipython-7.30.1 jmespath-0.10.0 numpy-1.19.2 pandas-1.1.2 prompt-toolkit-3.0.24 pydeck-0.7.1 s3transfer-0.5.0 sacremoses-0.0.46 sentence-transformers-0.3.8 sentencepiece-0.1.96 streamlit-0.62.0 tokenizers-0.8.1rc2 torch-1.8.1 transformers-3.3.1 urllib3-1.25.11 validators-0.18.2 vector-engine-0.1.0 watchdog-2.1.6\n"
],
[
"%load_ext autoreload",
"_____no_output_____"
],
[
"%autoreload 2\nimport pandas as pd\n\nimport torch\nfrom sentence_transformers import SentenceTransformer\n\nimport faiss\nimport numpy as np\nimport pickle\nfrom pathlib import Path\n\nfrom vector_engine.utils import vector_search, id2details",
"_____no_output_____"
],
[
"df = pd.read_csv('data/misinformation_papers.csv')",
"_____no_output_____"
],
[
"df.head(3)",
"_____no_output_____"
],
[
"print(f\"Misinformation, disinformation and fake news papers: {df.id.unique().shape[0]}\")",
"Misinformation, disinformation and fake news papers: 8430\n"
]
],
[
[
"Summoning distilbert-base-nli-stsb-mean-tokens! \nSlightly lower performance than BERT, but a much smaller size!",
"_____no_output_____"
]
],
[
[
"model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens')",
"100%|██████████| 245M/245M [00:12<00:00, 20.2MB/s]\n"
],
[
"if torch.cuda.is_available():\n model = model.to(torch.device(\"cuda\"))\nprint(model.device)",
"cuda:0\n"
],
[
"embeddings = model.encode(df.abstract.to_list(), show_progress_bar=True)",
"_____no_output_____"
],
[
"print('shape of vectorized abstract', {embeddings[0].shape})",
"shape of vectorized abstract {(768,)}\n"
],
[
"# 1. Convert the data type\nembeddings = np.array([embedding for embedding in embeddings]).astype(\"float32\")\n\n# 2. Initialize the index\nindex = faiss.IndexFlatL2(embeddings.shape[1])\n\n# 3. Wrap the index in an IndexIDMap\nindex = faiss.IndexIDMap(index)\n\n# 4. Add the vectors with their unique ids\nindex.add_with_ids(embeddings, df.id.values)\n\nprint(f'faiss index에 있는 벡터의 수: {index.ntotal}')",
"faiss index에 있는 벡터의 수: 8430\n"
],
[
"df.iloc[5415,1]",
"_____no_output_____"
],
[
"# Retrieve the 10 nearest neighbors\nD, I = index.search(np.array([embeddings[5415]]), k=10)\nprint(f'L2 거리 {D.flatten().tolist()} \\n\\nMAG paper IDs: {I.flatten().tolist()}')",
"L2 거리 [0.0, 1.2672882080078125, 62.72166442871094, 63.670326232910156, 64.58393859863281, 67.47343444824219, 67.96401977539062, 69.47561645507812, 72.56331634521484, 74.62234497070312] \n\nMAG paper IDs: [3092618151, 3011345566, 3012936764, 3055557295, 3011186656, 3044429417, 3092128270, 3024620668, 3047284882, 3048848247]\n"
],
[
"# Retrieve titles based on the index ids\nid2details(df, I, 'original_title')",
"_____no_output_____"
],
[
"# Retrieve abstracts based on the index ids\nid2details(df, I, 'abstract')",
"_____no_output_____"
],
[
"user_query = \"\"\"\nWhatsApp was alleged to have been widely used to spread misinformation and propaganda \nduring the 2018 elections in Brazil and the 2019 elections in India. Due to the \nprivate encrypted nature of the messages on WhatsApp, it is hard to track the dissemination \nof misinformation at scale. In this work, using public WhatsApp data from Brazil and India, we \nobserve that misinformation has been largely shared on WhatsApp public groups even after they \nwere already fact-checked by popular fact-checking agencies. This represents a significant portion \nof misinformation spread in both Brazil and India in the groups analyzed. We posit that such \nmisinformation content could be prevented if WhatsApp had a means to flag already fact-checked \ncontent. To this end, we propose an architecture that could be implemented by WhatsApp to counter \nsuch misinformation. Our proposal respects the current end-to-end encryption architecture on WhatsApp, \nthus protecting users’ privacy while providing an approach to detect the misinformation that benefits \nfrom fact-checking efforts.\n\"\"\"",
"_____no_output_____"
],
[
"# Everything is wrapped in the vector search function.\nD, I = vector_search([user_query], model, index, num_results=10)\nprint(f'L2 distance: {D.flatten().tolist()}\\n\\nMAG paper IDs: {I.flatten().tolist()}')",
"L2 distance: [7.38446044921875, 57.32252502441406, 57.32252502441406, 71.48451232910156, 72.06806945800781, 79.134765625, 86.0127944946289, 89.91023254394531, 90.76019287109375, 90.76422119140625]\n\nMAG paper IDs: [3047438096, 3021927925, 3037966274, 2889959140, 2791045616, 2943077655, 3014380170, 2967434249, 3028584171, 2990343632]\n"
],
[
"# Retrieve titles based on the index ids\nid2details(df, I, 'original_title')",
"_____no_output_____"
],
[
"# When running on Google Colab set the index to 0; locally use 1\nproject_dir = Path('notebooks').resolve().parents[0]\nprint(project_dir)\n\n# Serialise index and store it as a pickle\nwith open(f\"{project_dir}/models/faiss_index.pickle\", \"wb\") as h:\n    pickle.dump(faiss.serialize_index(index), h)",
"/content/vector_engine\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecda5b51921ebb61d33ce86983926107cf095ae3 | 33,192 | ipynb | Jupyter Notebook | utils.ipynb | robertjhs/NLP_contract_analytics | d2cd9496ebb88c4af53dc43ce07ab9b2d7eafd70 | [
"CC-BY-4.0"
] | 1 | 2022-02-22T19:39:55.000Z | 2022-02-22T19:39:55.000Z | utils.ipynb | robertjhs/NLP_contract_analytics | d2cd9496ebb88c4af53dc43ce07ab9b2d7eafd70 | [
"CC-BY-4.0"
] | null | null | null | utils.ipynb | robertjhs/NLP_contract_analytics | d2cd9496ebb88c4af53dc43ce07ab9b2d7eafd70 | [
"CC-BY-4.0"
] | null | null | null | 40.776413 | 143 | 0.540612 | [
[
[
"Copyright 2020 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nVery heavily inspired by the official evaluation script for SQuAD version 2.0 which was modified by XLNet authors to\nupdate `find_best_threshold` scripts for SQuAD V2.0\n\nIn addition to basic functionality, we also compute additional statistics and plot precision-recall curves if an\nadditional na_prob.json file is provided. This file is expected to map question ID's to the model's predicted\nprobability that a question is unanswerable.\n\nModified version of \"squad_metrics.py\" adapated for CUAD.",
"_____no_output_____"
]
],
[
[
"import collections\nimport json\nimport math\nimport re\nimport string\nimport json\n\nfrom transformers.models.bert import BasicTokenizer\nfrom transformers.utils import logging ",
"_____no_output_____"
],
[
"logger = logging.get_logger(__name__)",
"_____no_output_____"
],
[
"def reformat_predicted_string(remaining_contract, predicted_string):\n tokens = predicted_string.split()\n assert len(tokens) > 0\n end_idx = 0\n for i, token in enumerate(tokens):\n found = remaining_contract[end_idx:].find(token)\n assert found != -1\n end_idx += found\n if i == 0:\n start_idx = end_idx\n end_idx += len(tokens[-1])\n return remaining_contract[start_idx:end_idx]",
"_____no_output_____"
],
[
"def find_char_start_idx(contract, preceeding_tokens, predicted_string):\n contract = \" \".join(contract.split())\n assert predicted_string in contract\n if contract.count(predicted_string) == 1:\n return contract.find(predicted_string)\n\n start_idx = 0\n for token in preceeding_tokens:\n found = contract[start_idx:].find(token)\n assert found != -1\n start_idx += found\n start_idx += len(preceeding_tokens[-1])\n remaining_str = contract[start_idx:]\n\n remaining_idx = remaining_str.find(predicted_string)\n assert remaining_idx != -1\n\n return start_idx + remaining_idx",
"_____no_output_____"
],
[
"def normalize_answer(s):\n \"\"\"Lower text and remove punctuation, articles and extra whitespace.\"\"\"\n\n def remove_articles(text):\n regex = re.compile(r\"\\b(a|an|the)\\b\", re.UNICODE)\n return re.sub(regex, \" \", text)\n\n def white_space_fix(text):\n return \" \".join(text.split())\n\n def remove_punc(text):\n exclude = set(string.punctuation)\n return \"\".join(ch for ch in text if ch not in exclude)\n\n def lower(text):\n return text.lower()\n\n return white_space_fix(remove_articles(remove_punc(lower(s))))",
"_____no_output_____"
],
[
"def get_tokens(s):\n if not s:\n return []\n return normalize_answer(s).split()",
"_____no_output_____"
],
[
"def compute_exact(a_gold, a_pred):\n return int(normalize_answer(a_gold) == normalize_answer(a_pred))",
"_____no_output_____"
],
[
"def compute_f1(a_gold, a_pred):\n gold_toks = get_tokens(a_gold)\n pred_toks = get_tokens(a_pred)\n common = collections.Counter(gold_toks) & collections.Counter(pred_toks)\n num_same = sum(common.values())\n if len(gold_toks) == 0 or len(pred_toks) == 0:\n # If either is no-answer, then F1 is 1 if they agree, 0 otherwise\n return int(gold_toks == pred_toks)\n if num_same == 0:\n return 0\n precision = 1.0 * num_same / len(pred_toks)\n recall = 1.0 * num_same / len(gold_toks)\n f1 = (2 * precision * recall) / (precision + recall)\n return f1",
"_____no_output_____"
],
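To make the token-overlap F1 above concrete, here is a self-contained sketch of the same computation (omitting the article/punctuation stripping that `normalize_answer` performs), applied to a hypothetical gold/prediction pair:

```python
import collections

def token_f1(gold, pred):
    """Token-overlap F1, the core of compute_f1 (normalization omitted)."""
    gold_toks, pred_toks = gold.split(), pred.split()
    common = collections.Counter(gold_toks) & collections.Counter(pred_toks)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# 2 shared tokens: precision 2/3, recall 2/2, so F1 = 0.8
print(token_f1("cat sat", "cat sat down"))
```

The multiset intersection of `Counter` objects is what lets repeated tokens count at most as often as they appear in both strings.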
[
"def get_raw_scores(examples, preds):\n \"\"\"\n Computes the exact and f1 scores from the examples and the model predictions\n \"\"\"\n exact_scores = {}\n f1_scores = {}\n\n for example in examples:\n qas_id = example.qas_id\n gold_answers = [answer[\"text\"] for answer in example.answers if normalize_answer(answer[\"text\"])]\n\n if not gold_answers:\n # For unanswerable questions, only correct answer is empty string\n gold_answers = [\"\"]\n\n if qas_id not in preds:\n print(\"Missing prediction for %s\" % qas_id)\n continue\n\n prediction = preds[qas_id]\n exact_scores[qas_id] = max(compute_exact(a, prediction) for a in gold_answers)\n f1_scores[qas_id] = max(compute_f1(a, prediction) for a in gold_answers)\n\n return exact_scores, f1_scores",
"_____no_output_____"
],
[
"def apply_no_ans_threshold(scores, na_probs, qid_to_has_ans, na_prob_thresh):\n new_scores = {}\n for qid, s in scores.items():\n pred_na = na_probs[qid] > na_prob_thresh\n if pred_na:\n new_scores[qid] = float(not qid_to_has_ans[qid])\n else:\n new_scores[qid] = s\n return new_scores",
"_____no_output_____"
],
[
"def make_eval_dict(exact_scores, f1_scores, qid_list=None):\n if not qid_list:\n total = len(exact_scores)\n return collections.OrderedDict(\n [\n (\"exact\", 100.0 * sum(exact_scores.values()) / total),\n (\"f1\", 100.0 * sum(f1_scores.values()) / total),\n (\"total\", total),\n ]\n )\n else:\n total = len(qid_list)\n return collections.OrderedDict(\n [\n (\"exact\", 100.0 * sum(exact_scores[k] for k in qid_list) / total),\n (\"f1\", 100.0 * sum(f1_scores[k] for k in qid_list) / total),\n (\"total\", total),\n ]\n )",
"_____no_output_____"
],
[
"def merge_eval(main_eval, new_eval, prefix):\n for k in new_eval:\n main_eval[\"%s_%s\" % (prefix, k)] = new_eval[k]",
"_____no_output_____"
],
[
"def find_best_thresh_v2(preds, scores, na_probs, qid_to_has_ans):\n num_no_ans = sum(1 for k in qid_to_has_ans if not qid_to_has_ans[k])\n cur_score = num_no_ans\n best_score = cur_score\n best_thresh = 0.0\n qid_list = sorted(na_probs, key=lambda k: na_probs[k])\n for i, qid in enumerate(qid_list):\n if qid not in scores:\n continue\n if qid_to_has_ans[qid]:\n diff = scores[qid]\n else:\n if preds[qid]:\n diff = -1\n else:\n diff = 0\n cur_score += diff\n if cur_score > best_score:\n best_score = cur_score\n best_thresh = na_probs[qid]\n\n has_ans_score, has_ans_cnt = 0, 0\n for qid in qid_list:\n if not qid_to_has_ans[qid]:\n continue\n has_ans_cnt += 1\n\n if qid not in scores:\n continue\n has_ans_score += scores[qid]\n\n return 100.0 * best_score / len(scores), best_thresh, 1.0 * has_ans_score / has_ans_cnt",
"_____no_output_____"
],
[
"def find_all_best_thresh_v2(main_eval, preds, exact_raw, f1_raw, na_probs, qid_to_has_ans):\n best_exact, exact_thresh, has_ans_exact = find_best_thresh_v2(preds, exact_raw, na_probs, qid_to_has_ans)\n best_f1, f1_thresh, has_ans_f1 = find_best_thresh_v2(preds, f1_raw, na_probs, qid_to_has_ans)\n # NOTE: For CUAD, which is about finding needles in haystacks and for which different answers should be treated\n # differently, these metrics don't make complete sense. We ignore them, but don't remove them for simplicity.\n main_eval[\"best_exact\"] = best_exact\n main_eval[\"best_exact_thresh\"] = exact_thresh\n main_eval[\"best_f1\"] = best_f1\n main_eval[\"best_f1_thresh\"] = f1_thresh\n main_eval[\"has_ans_exact\"] = has_ans_exact\n main_eval[\"has_ans_f1\"] = has_ans_f1",
"_____no_output_____"
],
[
"def find_best_thresh(preds, scores, na_probs, qid_to_has_ans):\n num_no_ans = sum(1 for k in qid_to_has_ans if not qid_to_has_ans[k])\n cur_score = num_no_ans\n best_score = cur_score\n best_thresh = 0.0\n qid_list = sorted(na_probs, key=lambda k: na_probs[k])\n for _, qid in enumerate(qid_list):\n if qid not in scores:\n continue\n if qid_to_has_ans[qid]:\n diff = scores[qid]\n else:\n if preds[qid]:\n diff = -1\n else:\n diff = 0\n cur_score += diff\n if cur_score > best_score:\n best_score = cur_score\n best_thresh = na_probs[qid]\n return 100.0 * best_score / len(scores), best_thresh",
"_____no_output_____"
],
[
"def find_all_best_thresh(main_eval, preds, exact_raw, f1_raw, na_probs, qid_to_has_ans):\n best_exact, exact_thresh = find_best_thresh(preds, exact_raw, na_probs, qid_to_has_ans)\n best_f1, f1_thresh = find_best_thresh(preds, f1_raw, na_probs, qid_to_has_ans)\n\n main_eval[\"best_exact\"] = best_exact\n main_eval[\"best_exact_thresh\"] = exact_thresh\n main_eval[\"best_f1\"] = best_f1\n main_eval[\"best_f1_thresh\"] = f1_thresh",
"_____no_output_____"
],
[
"def squad_evaluate(examples, preds, no_answer_probs=None, no_answer_probability_threshold=1.0):\n qas_id_to_has_answer = {example.qas_id: bool(example.answers) for example in examples}\n has_answer_qids = [qas_id for qas_id, has_answer in qas_id_to_has_answer.items() if has_answer]\n no_answer_qids = [qas_id for qas_id, has_answer in qas_id_to_has_answer.items() if not has_answer]\n\n if no_answer_probs is None:\n no_answer_probs = {k: 0.0 for k in preds}\n\n exact, f1 = get_raw_scores(examples, preds)\n\n exact_threshold = apply_no_ans_threshold(\n exact, no_answer_probs, qas_id_to_has_answer, no_answer_probability_threshold\n )\n f1_threshold = apply_no_ans_threshold(f1, no_answer_probs, qas_id_to_has_answer, no_answer_probability_threshold)\n\n evaluation = make_eval_dict(exact_threshold, f1_threshold)\n\n if has_answer_qids:\n has_ans_eval = make_eval_dict(exact_threshold, f1_threshold, qid_list=has_answer_qids)\n merge_eval(evaluation, has_ans_eval, \"HasAns\")\n\n if no_answer_qids:\n no_ans_eval = make_eval_dict(exact_threshold, f1_threshold, qid_list=no_answer_qids)\n merge_eval(evaluation, no_ans_eval, \"NoAns\")\n\n if no_answer_probs:\n find_all_best_thresh(evaluation, preds, exact, f1, no_answer_probs, qas_id_to_has_answer)\n\n return evaluation",
"_____no_output_____"
],
[
"def get_final_text(pred_text, orig_text, do_lower_case, verbose_logging=False):\n \"\"\"Project the tokenized prediction back to the original text.\"\"\"\n\n # When we created the data, we kept track of the alignment between original\n # (whitespace tokenized) tokens and our WordPiece tokenized tokens. So\n # now `orig_text` contains the span of our original text corresponding to the\n # span that we predicted.\n #\n # However, `orig_text` may contain extra characters that we don't want in\n # our prediction.\n #\n # For example, let's say:\n # pred_text = steve smith\n # orig_text = Steve Smith's\n #\n # We don't want to return `orig_text` because it contains the extra \"'s\".\n #\n # We don't want to return `pred_text` because it's already been normalized\n # (the SQuAD eval script also does punctuation stripping/lower casing but\n # our tokenizer does additional normalization like stripping accent\n # characters).\n #\n # What we really want to return is \"Steve Smith\".\n #\n # Therefore, we have to apply a semi-complicated alignment heuristic between\n # `pred_text` and `orig_text` to get a character-to-character alignment. This\n # can fail in certain cases in which case we just return `orig_text`.\n\n def _strip_spaces(text):\n ns_chars = []\n ns_to_s_map = collections.OrderedDict()\n for (i, c) in enumerate(text):\n if c == \" \":\n continue\n ns_to_s_map[len(ns_chars)] = i\n ns_chars.append(c)\n ns_text = \"\".join(ns_chars)\n return (ns_text, ns_to_s_map)\n\n # We first tokenize `orig_text`, strip whitespace from the result\n # and `pred_text`, and check if they are the same length. If they are\n # NOT the same length, the heuristic has failed. If they are the same\n # length, we assume the characters are one-to-one aligned.\n tokenizer = BasicTokenizer(do_lower_case=do_lower_case)\n\n tok_text = \" \".join(tokenizer.tokenize(orig_text))\n\n start_position = tok_text.find(pred_text)\n if start_position == -1:\n if verbose_logging:\n logger.info(\"Unable to find text: '%s' in '%s'\" % (pred_text, orig_text))\n return orig_text\n end_position = start_position + len(pred_text) - 1\n\n (orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)\n (tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)\n\n if len(orig_ns_text) != len(tok_ns_text):\n if verbose_logging:\n logger.info(\"Length not equal after stripping spaces: '%s' vs '%s'\", orig_ns_text, tok_ns_text)\n return orig_text\n\n # We then project the characters in `pred_text` back to `orig_text` using\n # the character-to-character alignment.\n tok_s_to_ns_map = {}\n for (i, tok_index) in tok_ns_to_s_map.items():\n tok_s_to_ns_map[tok_index] = i\n\n orig_start_position = None\n if start_position in tok_s_to_ns_map:\n ns_start_position = tok_s_to_ns_map[start_position]\n if ns_start_position in orig_ns_to_s_map:\n orig_start_position = orig_ns_to_s_map[ns_start_position]\n\n if orig_start_position is None:\n if verbose_logging:\n logger.info(\"Couldn't map start position\")\n return orig_text\n\n orig_end_position = None\n if end_position in tok_s_to_ns_map:\n ns_end_position = tok_s_to_ns_map[end_position]\n if ns_end_position in orig_ns_to_s_map:\n orig_end_position = orig_ns_to_s_map[ns_end_position]\n\n if orig_end_position is None:\n if verbose_logging:\n logger.info(\"Couldn't map end position\")\n return orig_text\n\n output_text = orig_text[orig_start_position : (orig_end_position + 1)]\n return output_text",
"_____no_output_____"
],
[
"def _get_best_indexes(logits, n_best_size):\n \"\"\"Get the n-best logits from a list.\"\"\"\n index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)\n\n best_indexes = []\n for i in range(len(index_and_score)):\n if i >= n_best_size:\n break\n best_indexes.append(index_and_score[i][0])\n return best_indexes",
"_____no_output_____"
],
[
"def _compute_softmax(scores):\n \"\"\"Compute softmax probability over raw logits.\"\"\"\n if not scores:\n return []\n\n max_score = None\n for score in scores:\n if max_score is None or score > max_score:\n max_score = score\n\n exp_scores = []\n total_sum = 0.0\n for score in scores:\n x = math.exp(score - max_score)\n exp_scores.append(x)\n total_sum += x\n\n probs = []\n for score in exp_scores:\n probs.append(score / total_sum)\n return probs",
"_____no_output_____"
],
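The max-subtraction in `_compute_softmax` above is the standard trick for numerical stability: the largest shifted score becomes `exp(0) = 1`, so `exp` never overflows. A quick check with large logits:

```python
import math

def stable_softmax(scores):
    """Softmax with max-subtraction, mirroring _compute_softmax above."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # largest exponent is exp(0) = 1
    total = sum(exps)
    return [e / total for e in exps]

# math.exp(1000) alone would overflow, but the shifted version is safe
probs = stable_softmax([1000.0, 1000.0, 999.0])
print(probs, sum(probs))
```

Subtracting a constant from every score leaves the ratios of the exponentials, and therefore the probabilities, unchanged.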
[
"def compute_predictions_logits(\n json_input_dict,\n all_examples,\n all_features,\n all_results,\n n_best_size,\n max_answer_length,\n do_lower_case,\n output_prediction_file,\n output_nbest_file,\n output_null_log_odds_file,\n verbose_logging,\n version_2_with_negative,\n null_score_diff_threshold,\n tokenizer,\n):\n \"\"\"Write final predictions to the json file and log-odds of null if needed.\"\"\"\n if output_prediction_file:\n logger.info(f\"Writing predictions to: {output_prediction_file}\")\n if output_nbest_file:\n logger.info(f\"Writing nbest to: {output_nbest_file}\")\n if output_null_log_odds_file and version_2_with_negative:\n logger.info(f\"Writing null_log_odds to: {output_null_log_odds_file}\")\n\n example_index_to_features = collections.defaultdict(list)\n for feature in all_features:\n example_index_to_features[feature.example_index].append(feature)\n\n unique_id_to_result = {}\n for result in all_results:\n unique_id_to_result[result.unique_id] = result\n\n _PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name\n \"PrelimPrediction\", [\"feature_index\", \"start_index\", \"end_index\", \"start_logit\", \"end_logit\"]\n )\n\n all_predictions = collections.OrderedDict()\n all_nbest_json = collections.OrderedDict()\n scores_diff_json = collections.OrderedDict()\n\n contract_name_to_idx = {}\n for idx in range(len(json_input_dict[\"data\"])):\n contract_name_to_idx[json_input_dict[\"data\"][idx][\"title\"]] = idx\n\n for (example_index, example) in enumerate(all_examples):\n features = example_index_to_features[example_index]\n\n contract_name = example.title\n contract_index = contract_name_to_idx[contract_name]\n paragraphs = json_input_dict[\"data\"][contract_index][\"paragraphs\"]\n assert len(paragraphs) == 1\n\n prelim_predictions = []\n # keep track of the minimum score of null start+end of position 0\n score_null = 1000000 # large and positive\n min_null_feature_index = 0 # the paragraph slice with min null score\n null_start_logit = 0 # the start logit at the slice with min null score\n null_end_logit = 0 # the end logit at the slice with min null score\n for (feature_index, feature) in enumerate(features):\n result = unique_id_to_result[feature.unique_id]\n start_indexes = _get_best_indexes(result.start_logits, n_best_size)\n end_indexes = _get_best_indexes(result.end_logits, n_best_size)\n # if we could have irrelevant answers, get the min score of irrelevant\n if version_2_with_negative:\n feature_null_score = result.start_logits[0] + result.end_logits[0]\n if feature_null_score < score_null:\n score_null = feature_null_score\n min_null_feature_index = feature_index\n null_start_logit = result.start_logits[0]\n null_end_logit = result.end_logits[0]\n for start_index in start_indexes:\n for end_index in end_indexes:\n # We could hypothetically create invalid predictions, e.g., predict\n # that the start of the span is in the question. We throw out all\n # invalid predictions.\n if start_index >= len(feature.tokens):\n continue\n if end_index >= len(feature.tokens):\n continue\n if start_index not in feature.token_to_orig_map:\n continue\n if end_index not in feature.token_to_orig_map:\n continue\n if not feature.token_is_max_context.get(start_index, False):\n continue\n if end_index < start_index:\n continue\n length = end_index - start_index + 1\n if length > max_answer_length:\n continue\n prelim_predictions.append(\n _PrelimPrediction(\n feature_index=feature_index,\n start_index=start_index,\n end_index=end_index,\n start_logit=result.start_logits[start_index],\n end_logit=result.end_logits[end_index],\n )\n )\n if version_2_with_negative:\n prelim_predictions.append(\n _PrelimPrediction(\n feature_index=min_null_feature_index,\n start_index=0,\n end_index=0,\n start_logit=null_start_logit,\n end_logit=null_end_logit,\n )\n )\n prelim_predictions = sorted(prelim_predictions, key=lambda x: (x.start_logit + x.end_logit), reverse=True)\n\n _NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name\n \"NbestPrediction\", [\"text\", \"start_logit\", \"end_logit\"]\n )\n\n seen_predictions = {}\n nbest = []\n start_indexes = []\n end_indexes = []\n for pred in prelim_predictions:\n if len(nbest) >= n_best_size:\n break\n feature = features[pred.feature_index]\n if pred.start_index > 0: # this is a non-null prediction\n tok_tokens = feature.tokens[pred.start_index : (pred.end_index + 1)]\n orig_doc_start = feature.token_to_orig_map[pred.start_index]\n orig_doc_end = feature.token_to_orig_map[pred.end_index]\n orig_tokens = example.doc_tokens[orig_doc_start : (orig_doc_end + 1)]\n\n tok_text = tokenizer.convert_tokens_to_string(tok_tokens)\n\n # Clean whitespace\n tok_text = tok_text.strip()\n tok_text = \" \".join(tok_text.split())\n orig_text = \" \".join(orig_tokens)\n\n final_text = get_final_text(tok_text, orig_text, do_lower_case, verbose_logging)\n\n if final_text in seen_predictions:\n continue\n\n seen_predictions[final_text] = True\n\n start_indexes.append(orig_doc_start)\n end_indexes.append(orig_doc_end)\n else:\n final_text = \"\"\n seen_predictions[final_text] = True\n\n start_indexes.append(-1)\n end_indexes.append(-1)\n\n nbest.append(_NbestPrediction(text=final_text, start_logit=pred.start_logit, end_logit=pred.end_logit))\n\n # if we didn't include the empty option in the n-best, include it\n if version_2_with_negative:\n if \"\" not in seen_predictions:\n nbest.append(_NbestPrediction(text=\"\", start_logit=null_start_logit, end_logit=null_end_logit))\n start_indexes.append(-1)\n end_indexes.append(-1)\n\n # In very rare edge cases we could only have single null prediction.\n # So we just create a nonce prediction in this case to avoid failure.\n if len(nbest) == 1:\n nbest.insert(0, _NbestPrediction(text=\"empty\", start_logit=0.0, end_logit=0.0))\n start_indexes.append(-1)\n end_indexes.append(-1)\n\n # In very rare edge cases we could have no valid predictions. So we\n # just create a nonce prediction in this case to avoid failure.\n if not nbest:\n nbest.append(_NbestPrediction(text=\"empty\", start_logit=0.0, end_logit=0.0))\n start_indexes.append(-1)\n end_indexes.append(-1)\n\n assert len(nbest) >= 1, \"No valid predictions\"\n assert len(nbest) == len(start_indexes), \"nbest length: {}, start_indexes length: {}\".format(len(nbest), len(start_indexes))\n\n total_scores = []\n best_non_null_entry = None\n for entry in nbest:\n total_scores.append(entry.start_logit + entry.end_logit)\n if not best_non_null_entry:\n if entry.text:\n best_non_null_entry = entry\n\n probs = _compute_softmax(total_scores)\n\n nbest_json = []\n for (i, entry) in enumerate(nbest):\n output = collections.OrderedDict()\n output[\"text\"] = entry.text\n output[\"probability\"] = probs[i]\n output[\"start_logit\"] = entry.start_logit\n output[\"end_logit\"] = entry.end_logit\n output[\"token_doc_start\"] = start_indexes[i]\n output[\"token_doc_end\"] = end_indexes[i]\n nbest_json.append(output)\n\n assert len(nbest_json) >= 1, \"No valid predictions\"\n\n if not version_2_with_negative:\n all_predictions[example.qas_id] = nbest_json[0][\"text\"]\n else:\n # predict \"\" iff the null score - the score of best non-null > threshold\n score_diff = score_null - best_non_null_entry.start_logit - (best_non_null_entry.end_logit)\n scores_diff_json[example.qas_id] = score_diff\n if score_diff > null_score_diff_threshold:\n all_predictions[example.qas_id] = \"\"\n else:\n all_predictions[example.qas_id] = best_non_null_entry.text\n all_nbest_json[example.qas_id] = nbest_json\n\n if output_prediction_file:\n with open(output_prediction_file, \"w\") as writer:\n writer.write(json.dumps(all_predictions, indent=4) + \"\\n\")\n\n if output_nbest_file:\n with open(output_nbest_file, \"w\") as writer:\n writer.write(json.dumps(all_nbest_json, indent=4) + \"\\n\")\n\n if output_null_log_odds_file and version_2_with_negative:\n with open(output_null_log_odds_file, \"w\") as writer:\n writer.write(json.dumps(scores_diff_json, indent=4) + \"\\n\")\n\n return all_predictions",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecda5cf3db5898c617c8a1fc4e8bf81175a470b0 | 928,017 | ipynb | Jupyter Notebook | _build/html/_sources/ipynb/18-analise-redes.ipynb | gcpeixoto/ICD | bae7d02cd467240649c89b0ba4440966fba18cc7 | [
"CC0-1.0"
] | 2 | 2021-09-09T01:56:40.000Z | 2021-11-10T01:56:56.000Z | _build/html/_sources/ipynb/18-analise-redes.ipynb | gcpeixoto/ICD | bae7d02cd467240649c89b0ba4440966fba18cc7 | [
"CC0-1.0"
] | null | null | null | _build/html/_sources/ipynb/18-analise-redes.ipynb | gcpeixoto/ICD | bae7d02cd467240649c89b0ba4440966fba18cc7 | [
"CC0-1.0"
] | 1 | 2021-11-23T14:24:03.000Z | 2021-11-23T14:24:03.000Z | 820.527851 | 200,984 | 0.953585 | [
[
[
"# Complex network analysis\n\nA _network_ is a way of organizing and representing discrete data. Networks differ from the tabular form, in which rows and columns are the fundamental structures, and are built on two concepts:\n\n1. _entities_, also called _actors_ or _nodes_, and\n2. _relationships_, also called _links_, _arcs_, or _connections_.\n\nInformally, the concept of a _network_ blends with the mathematical concept of a _graph_, in which the entities are called _vertices_ and the relationships _edges_. The notation $G(V,E)$ denotes a generic graph $G$ with a set $V$ of vertices and a set $E$ of edges. Fig. {numref}`random-graph` sketches a generic graph.\n```{figure} ../figs/17/random-graph.png\n---\nwidth: 500px\nname: random-graph\n---\nGeneric graph containing 6 vertices and 13 edges.\n```",
"_____no_output_____"
],
[
"## Applications\n\nWith the falling cost of computing resources at the end of the 20th century, _complex network analysis_ (CNA) evolved into an independent research area. Since then, it has become possible to map huge databases and extract knowledge from a complex tangle of interconnections.\n\nIn the 21st century, we have seen an explosive interest in CNA. Some modern applications include, but are not limited to:\n\n- transportation, for planning railway networks, highways, and connections between cities;\n- sociology, for understanding people, their behavior, their interaction in social networks, schools of thought, and preferences;\n- energy, for laying out electric power transmission lines;\n- biology, for modeling transmission networks of infectious diseases;\n- science, for finding the world's most influential research hubs in a given field of knowledge.\n\n\n## The `networkx` module\n\nIn this chapter, we introduce some concepts related to CNA, such as connected components, centrality measures, and graph visualization, using the Python module `networkx`. This module became popular for its versatility. Some of its strengths are:\n\n- easy installation;\n- extensive documentation on the [official site](https://networkx.org);\n- a broad set of functions and algorithms;\n- the ability to handle networks of up to 100,000 nodes.\n\n```{note}\nSome tools with potential similar to `networkx` are [`igraph`](https://igraph.org) and [`graph-tool`](https://graph-tool.skewed.de). Specifically for visualization, you may be interested in [`Graphviz`](https://www.graphviz.org) or [`Gephi`](https://gephi.org).\n```\n\nLet's practice a bit with this module to understand fundamental concepts. Then we will work through an application. Assuming you have already installed `networkx`, import it:",
"_____no_output_____"
]
],
[
[
"import networkx as nx",
"_____no_output_____"
]
],
[
[
"### Creating undirected graphs"
"_____no_output_____"
],
[
"Next, we create an _undirected_ graph $G$. This means that the direction of an edge is irrelevant. It is worth noting, however, that there are situations in which the direction of an edge matters; in that case, the graph is said to be _directed_.",
"_____no_output_____"
]
],
[
[
"# create an undirected graph with 4 vertices\n\n# initialize\nG = nx.Graph() \n\n# add edges explicitly\nG.add_edge(1,2) \nG.add_edge(1,3)\nG.add_edge(2,3)\nG.add_edge(3,4)",
"_____no_output_____"
]
],
[
[
"Next, we visualize the graph with `draw_networkx`.",
"_____no_output_____"
]
],
[
[
"nx.draw_networkx(G)",
"_____no_output_____"
]
],
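Under the hood, an undirected graph like the one just drawn can be represented as an adjacency mapping. A minimal pure-Python sketch of the idea (this is only an illustration, not how `networkx` stores graphs internally):

```python
# Adjacency-set representation of the same 4-vertex graph
graph = {}

def add_edge(g, u, v):
    """Insert an undirected edge by registering each endpoint on the other."""
    g.setdefault(u, set()).add(v)
    g.setdefault(v, set()).add(u)

for u, v in [(1, 2), (1, 3), (2, 3), (3, 4)]:
    add_edge(graph, u, v)

print(graph[3])  # vertex 3 is connected to 1, 2 and 4
```

Because each edge is recorded on both endpoints, the direction of the pair `(u, v)` is irrelevant, which is exactly the "undirected" property described above.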
[
[
"### Adding and deleting nodes and edges",
"_____no_output_____"
],
[
"We can add nodes individually or through a list, and we can also use _strings_ as names.",
"_____no_output_____"
]
],
[
[
"G.add_node('A')\nG.add_nodes_from(['p',99,'Qq'])\nG.add_node('Mn') # node added by mistake\nnx.draw_networkx(G)",
"_____no_output_____"
]
],
[
[
"We can do the same with edges, over existing or not-yet-existing nodes.",
"_____no_output_____"
]
],
[
[
"G.add_edge('A','p') # single edge\nG.add_edges_from([(1,99),(4,'A')]) # edges from a list (source, target)\nG.add_edge('Mn','no') # node 'no' does not exist yet\nnx.draw_networkx(G)",
"_____no_output_____"
]
],
[
[
"Nodes and edges can be removed in a similar fashion.",
"_____no_output_____"
]
],
[
[
"G.remove_node('no')\nG.remove_nodes_from(['Qq',99,'p'])\nnx.draw_networkx(G)",
"_____no_output_____"
],
[
"G.remove_edge(1,2)\nG.remove_edges_from([('A',4),(1,3)])\nnx.draw_networkx(G)",
"_____no_output_____"
]
],
[
[
"To remove all of the graph's nodes and edges while keeping it instantiated, we use `clear`.",
"_____no_output_____"
]
],
[
[
"G.clear()",
"_____no_output_____"
]
],
[
[
"We verify that there are no nodes or edges:",
"_____no_output_____"
]
],
[
[
"len(G.nodes()), len(G.edges)",
"_____no_output_____"
]
],
[
[
"To delete it completely, we can do:",
"_____no_output_____"
]
],
[
[
"del G",
"_____no_output_____"
]
],
[
[
"### Creating random graphs"
"_____no_output_____"
],
[
"We can create a random graph in several ways. With `random_geometric_graph`, a graph of _n_ uniformly random nodes is restricted to the unit \"cube\" of dimension `dim`, and any two nodes _u_ and _v_ whose distance is at most `raio` (the radius) are connected.",
"_____no_output_____"
]
],
[
[
"# 30 nodes with connection radius 0.2\nn = 30\nraio = 0.2\nG = nx.random_geometric_graph(n,raio,dim=2)\nnx.draw_networkx(G)",
"_____no_output_____"
],
[
"# 30 nodes with connection radius 5\nn = 30\nraio = 5\nG = nx.random_geometric_graph(n,raio,dim=2)\nnx.draw_networkx(G)",
"_____no_output_____"
],
[
"# 12 nodes with connection radius 1.15\nn = 12\nraio = 1.15\nG = nx.random_geometric_graph(n,raio,dim=2)\nnx.draw_networkx(G)",
"_____no_output_____"
],
[
"# 12 nodes with connection radius 0.4\nn = 12\nraio = 0.4\nG = nx.random_geometric_graph(n,raio,dim=2)\nnx.draw_networkx(G)",
"_____no_output_____"
]
],
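What `random_geometric_graph` computes can be sketched directly: draw _n_ uniform points in the unit square and connect every pair within the radius. A simplified O(n²) version (the library uses spatial indexing for large _n_):

```python
import math
import random

def random_geometric_edges(n, radius, seed=42):
    """Uniform points in [0,1]^2; connect pairs at distance <= radius."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    pos = {i: (rng.random(), rng.random()) for i in range(n)}
    edges = [(u, v)
             for u in range(n) for v in range(u + 1, n)
             if math.dist(pos[u], pos[v]) <= radius]
    return pos, edges

pos, edges = random_geometric_edges(12, 0.4)
print(len(edges))
```

As the plots above suggest, any radius larger than the square's diagonal (√2) connects every pair of nodes, while a very small radius leaves the graph nearly edgeless.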
[
[
"### Printing lists of nodes and edges\n\nWe can access the list of nodes or of edges with:",
"_____no_output_____"
]
],
[
[
"G.nodes()",
"_____no_output_____"
],
[
"G.edges()",
"_____no_output_____"
]
],
[
[
"Note that edges are described by tuples (_source_, _target_).",
"_____no_output_____"
],
[
"If we specify `data=True`, additional attributes are printed. For nodes, we see `pos`, the spatial position.",
"_____no_output_____"
]
],
[
[
"print(G.nodes(data=True))",
"[(0, {'pos': [0.14016708535353195, 0.6030612677990583]}), (1, {'pos': [0.7551166126779706, 0.5753984722722648]}), (2, {'pos': [0.587815560861345, 0.9262306002179045]}), (3, {'pos': [0.6481883004357897, 0.23944971393177616]}), (4, {'pos': [0.5407112650886972, 0.5928863158800595]}), (5, {'pos': [0.36356802456758663, 0.5063407090207792]}), (6, {'pos': [0.13333289318770103, 0.10093555742924487]}), (7, {'pos': [0.5880577194412443, 0.4328559322005747]}), (8, {'pos': [0.4877270706931097, 0.16480065213848483]}), (9, {'pos': [0.4864756110584624, 0.06055043228282264]}), (10, {'pos': [0.06553095561423039, 0.20127799578298156]}), (11, {'pos': [0.2821978870896149, 0.3551844724031613]})]\n"
]
],
[
[
"In the case of edges, no attribute exists for this graph. In more complex graphs, however, it is common to have _capacity_ and _weight_ as attributes. Both are relevant in _flow_ studies, in which each edge is assigned a transport \"capacity\" and a relevance \"weight\".",
"_____no_output_____"
]
],
[
[
"print(G.edges(data=True))",
"[(0, 5, {}), (0, 11, {}), (1, 3, {}), (1, 2, {}), (1, 5, {}), (1, 4, {}), (1, 7, {}), (2, 4, {}), (3, 4, {}), (3, 7, {}), (3, 9, {}), (3, 5, {}), (3, 11, {}), (3, 8, {}), (4, 5, {}), (4, 11, {}), (4, 7, {}), (5, 7, {}), (5, 11, {}), (5, 8, {}), (6, 11, {}), (6, 8, {}), (6, 10, {}), (6, 9, {}), (7, 9, {}), (7, 11, {}), (7, 8, {}), (8, 9, {}), (8, 11, {}), (9, 11, {}), (10, 11, {})]\n"
]
],
[
[
"### Creating networks from files\n\nA convenient way to create networks is to read a file containing connectivity information directly. The _dataset_ we will use from this point on corresponds to a network representing friendships among real Facebook users. Each user is represented by a vertex and each friendship tie by an edge. The data are anonymized.\n\nWe load the _.txt_ file with `networkx.read_edgelist`.",
"_____no_output_____"
]
],
[
[
"fb = nx.read_edgelist('../database/fb_data.txt')\nlen(fb.nodes), len(fb.edges)",
"_____no_output_____"
]
],
[
[
"Vemos que esta rede possui 4039 usuários e 88234 vínculos de amizade. Você pode plotar o grafo para visualizá-lo, porém pode demorar um pouco...",
"_____no_output_____"
],
[
"## Propriedades relevantes\n\nVejamos algumas propriedades de interesse de redes e grafos.",
"_____no_output_____"
],
[
"### Grau\n\nO _grau_ de um nó é o número de arestas conectadas a ele. Assim, o grau médio da rede do Facebook acima pode ser calculado por:",
"_____no_output_____"
]
],
[
[
"fb.number_of_edges()/fb.number_of_nodes()",
"_____no_output_____"
]
],
[
[
"ou",
"_____no_output_____"
]
],
[
[
"fb.size()/fb.order()",
"_____no_output_____"
]
],
[
[
"Ambos os resultados mostram que cada usuário nesta rede tem pelo menos 21 amizades.",
"_____no_output_____"
],
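A small self-contained sketch (on a toy graph rather than the Facebook data) contrasting the edge-to-node ratio with the true average degree — each edge is counted once in the ratio but contributes to two endpoints' degrees:

```python
import networkx as nx

# Toy stand-in graph: a path with 4 nodes and 3 edges.
G = nx.path_graph(4)

ratio = G.number_of_edges() / G.number_of_nodes()                # E/N
avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()

print(ratio)       # 0.75
print(avg_degree)  # 1.5, i.e. 2*E/N: each edge counts for two endpoints
```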
[
"### Caminho\n\n_Caminho_ é uma sequencia de nós conectados por arestas contiguamente. O _caminho mais curto_ em uma rede é o menor número de arestas a serem visitadas partindo de um nó de origem _u_ até um nó de destino _v_.\n\nA seguir, plotamos um caminho formado por 20 nós.",
"_____no_output_____"
]
],
[
[
"Gpath = nx.path_graph(20)\nnx.draw_networkx(Gpath)",
"_____no_output_____"
]
],
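On this path graph the shortest path between the endpoints is unambiguous, which makes it a convenient sanity check for `shortest_path` (a minimal sketch, not part of the original notebook):

```python
import networkx as nx

Gpath = nx.path_graph(20)

# The only route from node 0 to node 19 walks the whole chain.
sp = nx.shortest_path(Gpath, source=0, target=19)
dist = nx.shortest_path_length(Gpath, source=0, target=19)

print(len(sp))  # 20 nodes visited
print(dist)     # 19 edges traversed
```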
[
[
"### Componente\n\nUm grafo é _conexo_ se para todo par de nós, existe um caminho entre eles. Uma _componente conexa_, ou simplesmente _componente_ de um grafo é um subconjunto de seus nós tal que cada nó no subconjunto tem um caminho para todos os outros.\n\nPodemos encontrar todas as componentes da rede do Facebook usando `connected_componentes`. Entretanto, o resultado final é um objeto _generator_. Para acessarmos as componentes, devemos usar um iterador.",
"_____no_output_____"
]
],
[
[
"cc = nx.connected_components(fb)\n\n# varre componentes e imprime os primeiros 5 nós\nfor c in cc:\n print(list(c)[0:5])",
"['572', '3423', '3794', '1409', '1318']\n"
]
],
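Because the Facebook graph happens to be a single component, it can help to see `connected_components` on a deliberately disconnected toy graph (a hedged sketch, not part of the original notebook):

```python
import networkx as nx

# Two disjoint triangles -> two connected components.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0),
                  (10, 11), (11, 12), (12, 10)])

comps = [set(c) for c in nx.connected_components(G)]
print(len(comps))                         # 2
print(nx.number_connected_components(G))  # 2
```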
[
[
"Uma vez que há apenas uma lista impressa, temos que a rede do Facebook, na verdade, é uma componente única. De outra forma,",
"_____no_output_____"
]
],
[
[
"# há apenas 1 componente conexa, a própria rede\nnx.number_connected_components(fb)",
"_____no_output_____"
]
],
[
[
"### Subgrafo\n\n_Subgrafo_ é um subconjunto dos nós de um grafo e todas as arestas que os conectam. Para selecionarmos um _subgrafo_ da rede Facebook, usamos `subgraph`. Os argumentos necessários são: o grafo original e uma lista dos nós de interesse. Abaixo, geramos uma lista aleatória de `ng` nós.",
"_____no_output_____"
]
],
[
[
"from numpy.random import randint\n\n# número de nós do subgrafo\nng = 40\n\n# identifica nós (nomes são strings)\nnodes_to_get = randint(1,fb.number_of_nodes(),ng).astype(str)\n\n# extrai subgrafo\nfb_sub = nx.subgraph(fb,nodes_to_get)\n\n# plota\nnx.draw_networkx(fb_sub)",
"_____no_output_____"
]
],
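The induced-subgraph semantics — keep the chosen nodes plus every edge between them — can be checked on a small deterministic graph (a sketch, independent of the random Facebook sample above):

```python
import networkx as nx

G = nx.cycle_graph(6)        # ring on nodes 0..5

# Induced subgraph on three consecutive nodes.
H = nx.subgraph(G, [0, 1, 2])

print(sorted(H.nodes()))  # [0, 1, 2]
print(sorted(H.edges()))  # [(0, 1), (1, 2)] -- edges leaving the subset are dropped
```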
[
[
"Se fizermos alguma alteração no grafo original, pode ser que o número de componentes se altere. Vejamos:",
"_____no_output_____"
]
],
[
[
"# copia grafo\nfb_less = fb.copy()\n\n# remove o nó '0'\nfb_less.remove_node('0')\n\n# novas componentes\nnx.number_connected_components(fb_less)",
"_____no_output_____"
]
],
[
[
"Neste exemplo, a retirada de apenas um nó do grafo original resultou em 19 componentes, com número variável de elementos.",
"_____no_output_____"
]
],
[
[
"ncs = []\nfor c in nx.connected_components(fb_less):\n ncs.append(len(c))",
"_____no_output_____"
],
[
"# número de componentes em ordem\nsorted(ncs,reverse=True)",
"_____no_output_____"
]
],
[
[
"## Métricas de centralidade\n\nA _centralidade_ de um nó mede a sua importância relativa no grafo. Em outras palavras, nós mais \"centrais\" tendem a ser considerados os mais influentes, privilegiados ou comunicativos.\n\nEm uma rede social, por exemplo, um usuário com alta centralidade pode ser um _influencer_, um político, uma celebridade, ou até mesmo um malfeitor. Há diversas _métricas de centralidade_ disponíveis. Aqui veremos as 4 mais corriqueiras:\n\n- _centralidade de grau_ (_degree centrality_): definida pelo número de arestas de um nó;\n- _centralidade de intermediação_(_betweeness centrality_): definida pelo número de vezes em que o nó é visitado ao tomarmos o caminho mais curto entre um par de nós distintos deste. Esta centralidade pode ser imaginada como uma \"ponte\" ou \"pedágio\".\n- _centralidade de proximidade_ (_closeness centrality_): definida pelo inverso da soma das distâncias do nó de interesse a todos os outros do grafo. Ela quão \"próximo\" o nó é de todos os demais. Um nó com alta centralidade é aquele que, grosso modo, \"dista por igual\" dos demais.\n- _centralidade de autovetor_ (_eigenvector centrality_): definida pelo escore relativo para um nó tomando por base suas conexões. Conexões com nós de alta centralidade aumentam seu escore, ao passo que conexões com nós de baixa centralidade reduzem seu escore. De certa forma, ela mede como um nó está conectado a nós influentes.\n\nEm particular, um nó com alta centralidade de proximidade e alta centralidade de intermediação é chamado de _hub_.\n\nVamos calcular as centralidades de um subgrafo da rede do Facebook. Primeiro, extraímos um subgrafo menor.",
"_____no_output_____"
]
],
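Before applying the metrics to the Facebook subgraph, they can be verified on a star graph, where the answers are known in closed form (a sketch; the leaf count 4 is arbitrary):

```python
import networkx as nx

G = nx.star_graph(4)   # hub 0 connected to leaves 1..4 (5 nodes total)

deg = nx.degree_centrality(G)        # degree / (n - 1)
bet = nx.betweenness_centrality(G)   # normalized by default

print(deg[0], deg[1])  # 1.0 for the hub, 0.25 for each leaf
print(bet[0])          # ~1.0: every leaf-to-leaf shortest path crosses the hub
```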
[
[
"# número de nós do subgrafo\nng = 400\n\n# identifica nós (nomes são strings)\nnodes_to_get = randint(1,fb.number_of_nodes(),ng).astype(str)\n\n# extrai subgrafo\nfb_sub_c = nx.subgraph(fb,nodes_to_get)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\n# centralidade de grau\ndeg = nx.degree_centrality(fb_sub_c)\nnx.draw_networkx(fb_sub_c,\n with_labels=False,\n node_color=list(deg.values()),\n alpha=0.6,\n cmap=plt.cm.afmhot)",
"_____no_output_____"
],
[
"# centralidade de intermediação\nbet = nx.betweenness_centrality(fb_sub_c)\nnx.draw_networkx(fb_sub_c,\n with_labels=False,\n node_color=list(bet.values()),\n alpha=0.6,\n cmap=plt.cm.afmhot)",
"_____no_output_____"
],
[
"# centralidade de proximidade\ncln = nx.closeness_centrality(fb_sub_c)\nnx.draw_networkx(fb_sub_c,\n with_labels=False,\n node_color=list(cln.values()),\n alpha=0.6,\n cmap=plt.cm.afmhot)",
"_____no_output_____"
],
[
"# centralidade de autovetor\neig = nx.eigenvector_centrality(fb_sub_c)\nnx.draw_networkx(fb_sub_c,\n with_labels=False,\n node_color=list(eig.values()),\n alpha=0.6,\n cmap=plt.cm.afmhot)",
"_____no_output_____"
]
],
[
[
"## Layouts de visualização\n\nPodemos melhorar a visualização das redes alterando os layouts. O exemplo a seguir dispõe o grafo em um layout melhor, chamado de `spring`. Este layout acomoda a posição dos nós iterativamente por meio de um algoritmo especial. Além disso, a centralidade de grau está normalizada no intervalo [0,1] e escalonada. \n\nCom o novo plot, é possível distinguir \"comunidades\", sendo os maiores nós os mais centrais.",
"_____no_output_____"
]
],
[
[
"from numpy import array\npos_fb = nx.spring_layout(fb_sub_c,iterations = 50)\n\nnsize = array([v for v in deg.values()])\nnsize = 500*(nsize - min(nsize))/(max(nsize) - min(nsize))\nnodes = nx.draw_networkx_nodes(fb_sub_c, pos = pos_fb, node_size = nsize)\nedges = nx.draw_networkx_edges(fb_sub_c, pos = pos_fb, alpha = .1)",
"_____no_output_____"
]
],
[
[
"Um layout aleatório pode ser plotado da seguinte forma:",
"_____no_output_____"
]
],
[
[
"pos_fb = nx.random_layout(fb_sub_c)\nnx.draw_networkx(fb_sub_c,pos_fb,with_labels=False,alpha=0.5)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
ecda63731efe348af643fd8510268f8866c29a8b | 153,322 | ipynb | Jupyter Notebook | choosing the best features/choosing features based on simple models.ipynb | RafalKazubowski/dataworkshop_3city | 5fa1c9cb30d4e5ae409a0291548d7a46b6ff937c | [
"MIT"
] | null | null | null | choosing the best features/choosing features based on simple models.ipynb | RafalKazubowski/dataworkshop_3city | 5fa1c9cb30d4e5ae409a0291548d7a46b6ff937c | [
"MIT"
] | null | null | null | choosing the best features/choosing features based on simple models.ipynb | RafalKazubowski/dataworkshop_3city | 5fa1c9cb30d4e5ae409a0291548d7a46b6ff937c | [
"MIT"
] | null | null | null | 43.019641 | 406 | 0.458101 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn import tree\nimport xgboost as xgb\n\n#from sklearn.metrics import accuracy_score \n#from sklearn.metrics import r2_score\n\nfrom sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn.model_selection import RandomizedSearchCV\n\nimport json",
"_____no_output_____"
],
[
"with open(r'gratkapl_mieszkania_sopot.json', 'r') as f:\n data2 = json.load(f)\ndf2 = pd.DataFrame(data2)\n\nwith open(r'gratkapl_mieszkania_gdynia.json', 'r') as f:\n data4 = json.load(f)\ndf4 = pd.DataFrame(data4)\n\nwith open(r'gratkapl_mieszkania_gdansk.json', 'r') as f:\n data5 = json.load(f)\ndf5 = pd.DataFrame(data5)\n",
"_____no_output_____"
]
],
[
[
"## Prepare data",
"_____no_output_____"
]
],
[
[
"df = pd.concat([ df2, df4, df5],sort=False, ignore_index =True)\ndf.shape",
"_____no_output_____"
],
[
"df_gratka = df[['adres','cena','cena_za_metr','data_aktualizacji','id_ogloszenia','liczba_pieter_budynku',\n 'liczba_pokoi','powierzchnia', 'rok_budowy', 'tresc', 'kuchnia',\n 'typ_nieruchomosci', 'tytul', 'wykonczenie', 'zdjecie','rodzaj_zabudowy', 'typ_domu', 'pietro',\n 'data_dostepne', 'miejsce-parkingowe', 'forma-wlasnosci', 'oplaty-czynsz-administracyjny-media',\n 'komunikacja', 'okna'\n ]]",
"_____no_output_____"
],
[
"df_gratka = df_gratka.assign(adres_miasto = df_gratka['adres'].str.split(',').apply(pd.Series, 1)[0])\ndf_gratka['adres_miasto'].unique()",
"_____no_output_____"
],
[
"df_gratka = df_gratka.assign(adres_dzielnica = df_gratka['adres'].str.split(',').apply(pd.Series, 1)[1])\ndf_gratka['adres_dzielnica'].unique()",
"_____no_output_____"
],
[
"df_gratka['adres_dzielnica'].replace({'-':''},regex=True, inplace=True)\ndf_gratka = df_gratka.assign(adres_dzielnica = df_gratka['adres_dzielnica'].str.lower())\ndf_gratka['adres_dzielnica'].replace({'pomorskie': np.nan},regex=True, inplace=True)\ndf_gratka['adres_dzielnica'].replace({'sopot': np.nan},regex=True, inplace=True)\ndf_gratka['adres_dzielnica'].replace({'gdynia.': np.nan},regex=True, inplace=True)\ndf_gratka['adres_dzielnica'].replace({'gdańsk': np.nan},regex=True, inplace=True)\ndf_gratka['adres_dzielnica'].replace({'os.aniołki': 'aniołki'},regex=True, inplace=True)\ndf_gratka['adres_dzielnica'].replace({'wzgórześw.maksymiliana': 'wzgórzeświętegomaksymiliana'},regex=True, inplace=True)\ndf_gratka['adres_dzielnica'].replace({'chyylonia': 'chylonia'},regex=True, inplace=True)\ndf_gratka['adres_dzielnica'].replace({'wyspasobieszewska': 'sobieszewo'},regex=True, inplace=True)\ndf_gratka['adres_dzielnica'].unique()",
"_____no_output_____"
],
[
"del df_gratka['adres']",
"_____no_output_____"
],
[
"df_gratka[\"powierzchnia\"].isnull().sum()",
"_____no_output_____"
],
[
"df_gratka[\"cena_za_metr\"].isnull().sum()",
"_____no_output_____"
],
[
"df_gratka[(df_gratka[\"powierzchnia\"].isnull()) & (df_gratka[\"cena_za_metr\"]> 0)]",
"_____no_output_____"
],
[
"df_gratka.loc[(df_gratka[\"powierzchnia\"].isnull()) & (df_gratka[\"cena_za_metr\"]> 0), \"powierzchnia\" ] = (df_gratka.loc[1161]['cena']/df_gratka.loc[1413]['cena_za_metr'])",
"_____no_output_____"
],
[
"df_gratka.shape",
"_____no_output_____"
],
[
"df_gratka.dropna(subset=[\"powierzchnia\"], inplace=True)\ndf_gratka.shape",
"_____no_output_____"
],
[
"df_flats = df_gratka[(df_gratka[\"powierzchnia\"] < 300)&(df_gratka[\"cena\"]< 2500000)&(df_gratka[\"cena_za_metr\"]< 25000)]\ndf_flats.shape",
"_____no_output_____"
],
[
"df_flats.head()",
"_____no_output_____"
],
[
"df_flats_features = df_flats[['cena','cena_za_metr','liczba_pieter_budynku','liczba_pokoi','powierzchnia', \n 'rok_budowy','kuchnia', 'wykonczenie', 'zdjecie','rodzaj_zabudowy','pietro',\n 'miejsce-parkingowe', 'forma-wlasnosci', 'oplaty-czynsz-administracyjny-media',\n 'komunikacja', 'okna','adres_dzielnica','adres_miasto'\n ]]",
"_____no_output_____"
],
[
"df_flats_features.head()",
"_____no_output_____"
],
[
"df_flats_features.shape",
"_____no_output_____"
],
[
"df_flats_features.isnull().sum()",
"_____no_output_____"
],
[
"df_flats_features['rok_budowy'].fillna(0, inplace=True)",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\pandas\\core\\generic.py:6130: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self._update_inplace(new_data)\n"
],
[
"df_flats_features['liczba_pieter_budynku'].unique()",
"_____no_output_____"
],
[
"df_flats_features['liczba_pieter_budynku'] = df_flats_features['liczba_pieter_budynku'].astype('float')\ndf_flats_features['liczba_pieter_budynku'].fillna(0, inplace=True)",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"df_flats_features['liczba_pokoi'].unique()",
"_____no_output_____"
],
[
"df_flats_features['liczba_pokoi'].replace({'więcej niż 8': '9'},regex=True, inplace=True)\ndf_flats_features['liczba_pokoi'] = df_flats_features['liczba_pokoi'].astype('float')",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\pandas\\core\\generic.py:6586: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self._update_inplace(new_data)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \n"
],
[
"df_flats_features['liczba_pokoi'].fillna(0, inplace=True)",
"_____no_output_____"
],
[
"df_flats_features['pietro'].unique()",
"_____no_output_____"
],
[
"df_flats_features['pietro'].replace({'parter': '0'},regex=True, inplace=True)\ndf_flats_features['pietro'] = df_flats_features['pietro'].astype('float')",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \n"
],
[
"df_flats_features['pietro'].fillna(-1, inplace=True)",
"_____no_output_____"
],
[
"SUFFIX_CAT = '__cat'\n\nfor feat in df_flats_features.columns:\n if isinstance(df_flats_features[feat][0], list):\n continue\n\n factorized_values = df_flats_features[feat].factorize()[0]\n if SUFFIX_CAT in feat:\n df_flats_features[feat] = factorized_values\n else:\n df_flats_features[feat + SUFFIX_CAT] = factorized_values",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:11: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n # This is added back by InteractiveShellApp.init_path()\n"
],
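The loop above leans on `pandas.factorize`, which assigns each distinct value an integer code and encodes missing values as -1; a minimal sketch with hypothetical city names:

```python
import numpy as np
import pandas as pd

# Hypothetical categorical column with a missing entry.
s = pd.Series(['gdansk', 'gdynia', 'gdansk', np.nan, 'sopot'])

codes, uniques = pd.factorize(s)
print(list(codes))    # [0, 1, 0, -1, 2]
print(list(uniques))  # ['gdansk', 'gdynia', 'sopot']
```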
[
"cat_feats = [x for x in df_flats_features.columns if SUFFIX_CAT in x ]\ncat_feats = [x for x in cat_feats if 'cena' not in x]\ncat_feats = [x for x in cat_feats if 'powierzchnia' not in x]\ncat_feats = [x for x in cat_feats if 'rok' not in x]\n#cat_feats = [x for x in cat_feats if 'liczba' not in x]\n#cat_feats = [x for x in cat_feats if 'pietro' not in x]\n#cat_feats.append('cena_za_metr')\ncat_feats.append('powierzchnia')\ncat_feats.append('rok_budowy')\ncat_feats.append('liczba_pieter_budynku')\ncat_feats.append('liczba_pokoi')\ncat_feats.append('pietro')\nprint(cat_feats)\nlen(cat_feats)",
"['liczba_pieter_budynku__cat', 'liczba_pokoi__cat', 'kuchnia__cat', 'wykonczenie__cat', 'zdjecie__cat', 'rodzaj_zabudowy__cat', 'pietro__cat', 'miejsce-parkingowe__cat', 'forma-wlasnosci__cat', 'oplaty-czynsz-administracyjny-media__cat', 'komunikacja__cat', 'okna__cat', 'adres_dzielnica__cat', 'adres_miasto__cat', 'powierzchnia', 'rok_budowy', 'liczba_pieter_budynku', 'liczba_pokoi', 'pietro']\n"
],
[
"df_flats_features.head()",
"_____no_output_____"
],
[
"df_flats_features.isnull().sum()",
"_____no_output_____"
],
[
"x = df_flats_features[cat_feats].values\ny = df_flats_features[['cena']]\n\nx_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=8)",
"_____no_output_____"
]
],
[
[
"## Choose the best features\n#### test based on several models",
"_____no_output_____"
],
[
"#### Decision Tree",
"_____no_output_____"
]
],
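The tuning cells below all follow the same `RandomizedSearchCV` pattern: sample hyper-parameter combinations from the given distributions and keep the one with the best cross-validated score. A compact, hedged sketch on synthetic data (the parameter ranges and data are illustrative, not the notebook's):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the flat-price data: price driven mostly by one feature.
rng = np.random.RandomState(0)
X = rng.rand(300, 3)
y = 100 * X[:, 0] + rng.rand(300)

search = RandomizedSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_distributions={'max_depth': range(2, 10),
                         'min_samples_leaf': range(1, 11)},
    n_iter=5, cv=3, random_state=0)
search.fit(X, y)

print(search.best_params_)   # best of the 5 sampled combinations
print(search.best_score_)    # cross-validated R^2; high on this easy problem
```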
[
[
"random_search = RandomizedSearchCV(tree.DecisionTreeRegressor(),\n param_distributions ={'criterion': [\"mse\", \"friedman_mse\", \"mae\"],\n 'min_samples_split': range(2, 21),\n 'min_samples_leaf' : range(1, 21),\n 'max_leaf_nodes' : range(2, 25), \n 'max_depth' : range(2, 25)\n },\n cv=5)\nrandom_search.fit(x_train, y_train)\nprint(random_search.best_score_)\nprint(random_search.best_params_)",
"0.7122711467340945\n{'min_samples_split': 13, 'min_samples_leaf': 10, 'max_leaf_nodes': 22, 'max_depth': 6, 'criterion': 'friedman_mse'}\n"
],
[
"tree_reg = tree.DecisionTreeRegressor(criterion = 'friedman_mse', max_depth = 6, max_leaf_nodes = 22, \n min_samples_leaf = 10, min_samples_split = 13, random_state=1)\ntree_cvs = cross_val_score(tree_reg, x_train, y_train, cv=5)\ntree_reg.fit(x_train, y_train)\ntree_score = tree_reg.score(x_test, y_test)\n\n\nprint('cross_val_score:', tree_cvs)\nprint(\"Score on train data (with 95%% conf. intervals): %0.2f (+/- %0.2f)\" % (tree_cvs.mean(), tree_cvs.std() * 2))\nprint('Score on test data:', tree_score)",
"cross_val_score: [0.6865368 0.73768263 0.68591128 0.72524994 0.72601164]\nScore on train data (with 95% conf. intervals): 0.71 (+/- 0.04)\nScore on test data: 0.5860539486696601\n"
],
[
"model = tree.DecisionTreeRegressor(criterion = 'friedman_mse', max_depth = 6, max_leaf_nodes = 22, \n min_samples_leaf = 10, min_samples_split = 13, random_state=1)\nscores = cross_val_score(model, x, y, cv=3, scoring ='neg_mean_absolute_error')\nnp.mean(scores)",
"_____no_output_____"
]
],
[
[
"#### Random Forest",
"_____no_output_____"
]
],
[
[
"random_search = RandomizedSearchCV(RandomForestRegressor(),\n param_distributions ={'n_estimators' : range(1,101),\n 'criterion' : [\"mse\", \"mae\"],\n 'max_depth' : range(2, 25)\n },\n cv=5)\nrandom_search.fit(x_train, y_train)\nprint(random_search.best_score_)\nprint(random_search.best_params_)",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\n"
],
[
"rf_reg = RandomForestRegressor(n_estimators =100, max_depth =19 ,criterion ='mse', random_state=1)\nrf_cvs = cross_val_score(rf_reg, x_train, y_train, cv=5)\nrf_reg.fit(x_train, y_train)\nrf_score = rf_reg.score(x_test, y_test)\n\nprint('cross_val_score:', rf_cvs)\nprint(\"Score on train data (with 95%% conf. intervals): %0.2f (+/- %0.2f)\" % (rf_cvs.mean(), rf_cvs.std() * 2))\nprint('Score on test data:', rf_score)",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:3: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n This is separate from the ipykernel package so we can avoid doing imports until\n"
],
[
"model = RandomForestRegressor(n_estimators =100, max_depth =19 ,criterion ='mse', random_state=1)\nscores = cross_val_score(model, x, y, cv=3, scoring ='neg_mean_absolute_error')\nnp.mean(scores)",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\sklearn\\model_selection\\_validation.py:514: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n estimator.fit(X_train, y_train, **fit_params)\n"
]
],
[
[
"#### XGBoost",
"_____no_output_____"
]
],
[
[
"random_search = RandomizedSearchCV(xgb.XGBRegressor(),\n param_distributions ={'booster' : [\"gbtree\", \"gblinear\"],\n 'max_depth' : range(2,50),\n 'learning_rate' : np.arange(0, 1.0, 0.1),\n 'reg_alpha': np.arange(0, 3.0, 0.1),\n 'reg_lambda': np.arange(0, 3.0, 0.1),\n 'n_estimators': range (1,300)\n },\n cv=5)\nrandom_search.fit(x_train, y_train)\nprint(random_search.best_score_)\nprint(random_search.best_params_)",
"[16:18:06] WARNING: src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[... same warning repeated for every fit during the random search ...]\n0.8391492442813899\n{'reg_lambda': 2.7, 'reg_alpha': 2.4000000000000004, 'n_estimators': 237, 'max_depth': 8, 'learning_rate': 0.1, 'booster': 'gbtree'}\n"
],
[
"xg_reg = xgb.XGBRegressor(reg_lambda= 2.7, reg_alpha= 2.4, n_estimators= 237, max_depth= 8,\n learning_rate = 0.1, booster ='gbtree',random_state=1)\nxg_cvs = cross_val_score(xg_reg, x_train, y_train, cv=5)\nxg_reg.fit(x_train, y_train)\nxg_score = xg_reg.score(x_test, y_test)\n\nprint('cross_val_score:', xg_cvs)\nprint(\"Score on train data (with 95%% conf. intervals): %0.2f (+/- %0.2f)\" % (xg_cvs.mean(), xg_cvs.std() * 2))\nprint('Score on test data:', xg_score)",
"[16:20:08] WARNING: src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[... same warning repeated for each CV fold ...]\ncross_val_score: [0.83646301 0.841349 0.8383229 0.84117094 0.83844419]\nScore on train data (with 95% conf. intervals): 0.84 (+/- 0.00)\nScore on test data: 0.7483211459960712\n"
],
[
"model = xgb.XGBRegressor(reg_lambda= 2.7, reg_alpha= 2.4, n_estimators= 237, max_depth= 8,\n learning_rate = 0.1, booster ='gbtree',random_state=1)\nscores = cross_val_score(model, x, y, cv=3, scoring ='neg_mean_absolute_error')\nnp.mean(scores)",
"[16:20:47] WARNING: src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[... same warning repeated for each CV fold ...]\n"
]
],
[
[
"#### K-Nearest Neighbors",
"_____no_output_____"
]
],
[
[
"random_search = RandomizedSearchCV(KNeighborsRegressor(),\n param_distributions ={'n_neighbors' : range(1, 50),\n 'weights' : [\"uniform\", \"distance\"],\n 'p' : [1, 2]},\n cv=5)\nrandom_search.fit(x_train, y_train)\nprint(random_search.best_score_)\nprint(random_search.best_params_)",
"0.756482583102475\n{'weights': 'distance', 'p': 1, 'n_neighbors': 8}\n"
],
[
"kn_reg = KNeighborsRegressor(n_neighbors = 8, p = 1, weights = 'distance')\nkn_cvs = cross_val_score(kn_reg, x_train, y_train, cv=5)\nkn_reg.fit(x_train, y_train)\nkn_score = kn_reg.score(x_test, y_test)\n\nprint('cross_val_score:', kn_cvs)\nprint(\"Score on train data (with 95%% conf. intervals): %0.2f (+/- %0.2f)\" % (kn_cvs.mean(), kn_cvs.std() * 2))\nprint('Score on test data:', kn_score)",
"cross_val_score: [0.71859276 0.75454998 0.76312491 0.77706662 0.76913246]\nScore on train data (with 95% conf. intervals): 0.76 (+/- 0.04)\nScore on test data: 0.7201880511649613\n"
],
[
"model = KNeighborsRegressor(n_neighbors = 8, p = 1, weights = 'distance')\nscores = cross_val_score(model, x, y, cv=3, scoring ='neg_mean_absolute_error')\nnp.mean(scores)",
"_____no_output_____"
],
[
"!pip install eli5",
"Requirement already satisfied: eli5 in c:\\programdata\\anaconda3\\lib\\site-packages (0.10.1)\nRequirement already satisfied: numpy>=1.9.0 in c:\\users\\rafał\\appdata\\roaming\\python\\python37\\site-packages (from eli5) (1.17.0)\nRequirement already satisfied: jinja2 in c:\\programdata\\anaconda3\\lib\\site-packages (from eli5) (2.10.1)\nRequirement already satisfied: six in c:\\users\\rafał\\appdata\\roaming\\python\\python37\\site-packages (from eli5) (1.12.0)\nRequirement already satisfied: graphviz in c:\\users\\rafał\\appdata\\roaming\\python\\python37\\site-packages (from eli5) (0.8.4)\nRequirement already satisfied: scipy in c:\\programdata\\anaconda3\\lib\\site-packages (from eli5) (1.2.1)\nRequirement already satisfied: attrs>16.0.0 in c:\\programdata\\anaconda3\\lib\\site-packages (from eli5) (19.1.0)\nRequirement already satisfied: scikit-learn>=0.18 in c:\\programdata\\anaconda3\\lib\\site-packages (from eli5) (0.21.2)\nRequirement already satisfied: tabulate>=0.7.7 in c:\\programdata\\anaconda3\\lib\\site-packages (from eli5) (0.8.7)\nRequirement already satisfied: MarkupSafe>=0.23 in c:\\programdata\\anaconda3\\lib\\site-packages (from jinja2->eli5) (1.1.1)\nRequirement already satisfied: joblib>=0.11 in c:\\programdata\\anaconda3\\lib\\site-packages (from scikit-learn>=0.18->eli5) (0.13.2)\n"
],
[
"import eli5\nfrom eli5.sklearn import PermutationImportance",
"Using TensorFlow backend.\nC:\\Users\\Rafał\\AppData\\Roaming\\Python\\Python37\\site-packages\\tensorflow\\python\\framework\\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n[... same FutureWarning repeated for the remaining tensorflow/tensorboard dtype stubs ...]\n"
],
[
"m = tree.DecisionTreeRegressor(criterion = 'friedman_mse', max_depth = 6, max_leaf_nodes = 22, \n min_samples_leaf = 10, min_samples_split = 13, random_state=1)\nm.fit(x,y)\n\nimp = PermutationImportance(m, random_state=0).fit(x,y)\neli5.show_weights(imp,feature_names=cat_feats)",
"_____no_output_____"
],
[
"m = RandomForestRegressor(n_estimators =100, max_depth =19 ,criterion ='mse', random_state=1)\nm.fit(x,y)\n\nimp = PermutationImportance(m, random_state=0).fit(x,y)\neli5.show_weights(imp,feature_names=cat_feats)",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:2: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n \n"
],
[
"m = xgb.XGBRegressor(reg_lambda= 2.7, reg_alpha= 2.4, n_estimators= 237, max_depth= 8,\n learning_rate = 0.1, booster ='gbtree',random_state=1)\nm.fit(x,y)\n\nimp = PermutationImportance(m, random_state=0).fit(x,y)\neli5.show_weights(imp,feature_names=cat_feats)",
"[16:30:12] WARNING: src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n"
],
[
"m = KNeighborsRegressor(n_neighbors = 8, p = 1, weights = 'distance')\nm.fit(x,y)\n\nimp = PermutationImportance(m, random_state=0).fit(x,y)\neli5.show_weights(imp,feature_names=cat_feats)",
"_____no_output_____"
],
[
"# THE BEST FEATURES:\ncat_feats =[\n'powierzchnia',\n'rodzaj_zabudowy__cat',\n'adres_miasto__cat',\n'adres_dzielnica__cat',\n'liczba_pieter_budynku__cat',\n'rok_budowy',\n'pietro',\n'liczba_pokoi__cat'\n]",
"_____no_output_____"
]
],
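The `PermutationImportance` calls above rank features by how much the model's score drops when a single column is shuffled. A minimal NumPy sketch of that idea follows; the `model_score` callable is a stand-in for a fitted sklearn estimator's `score`, not eli5's actual API.

```python
import numpy as np

def permutation_importance(model_score, X, y, seed=None):
    """Score drop after shuffling each column, one column at a time.

    model_score(X, y) -> float, higher is better. Columns whose shuffling
    hurts the score the most are the most important features.
    """
    rng = np.random.default_rng(seed)
    baseline = model_score(X, y)
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # in-place shuffle of one column (a view)
        importances.append(baseline - model_score(Xp, y))
    return np.array(importances)

# Toy check: y depends only on column 0, so only that column should matter.
data_rng = np.random.default_rng(0)
X = data_rng.normal(size=(200, 3))
y = 2.0 * X[:, 0]
score = lambda X, y: -np.mean((2.0 * X[:, 0] - y) ** 2)  # "model" uses col 0 only
imp = permutation_importance(score, X, y, seed=0)
```

Shuffling an ignored column leaves the score unchanged, so its importance comes out exactly zero — a useful sanity check when reading the eli5 tables above.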
[
[
"## Share data ",
"_____no_output_____"
]
],
[
[
"df_join = df[['cena','cena_za_metr', 'powierzchnia', 'liczba_pokoi','liczba_pieter_budynku','rodzaj_zabudowy'\n ,'rok_budowy', 'adres', 'pietro', 'wykonczenie', 'kuchnia'\n ]]",
"_____no_output_____"
],
[
"df_join = df_join.assign(miasto = df_join['adres'].str.split(',').apply(pd.Series, 1)[0])",
"_____no_output_____"
],
[
"df_join = df_join.assign(dzielnica = df_join['adres'].str.split(',').apply(pd.Series, 1)[1])",
"_____no_output_____"
],
[
"df_join.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 3979 entries, 0 to 3978\nData columns (total 13 columns):\ncena 3979 non-null float64\ncena_za_metr 3974 non-null float64\npowierzchnia 3973 non-null float64\nliczba_pokoi 3969 non-null object\nliczba_pieter_budynku 3906 non-null object\nrodzaj_zabudowy 3644 non-null object\nrok_budowy 2770 non-null float64\nadres 3979 non-null object\npietro 3957 non-null object\nwykonczenie 451 non-null object\nkuchnia 1321 non-null object\nmiasto 3979 non-null object\ndzielnica 3979 non-null object\ndtypes: float64(4), object(9)\nmemory usage: 404.2+ KB\n"
],
[
"df_join.to_json(r'C:\\...\\gratka.json')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecda65bbc08542482e6437605102fd06c0a5df94 | 2,954 | ipynb | Jupyter Notebook | notebooks/materials_project_notebooks/parallelization_notebooks/FInger_Pool4.ipynb | 3juholee/materialproject_ml | 6eb734fc1f92c6567c34845e917024dbb514e507 | [
"MIT"
] | null | null | null | notebooks/materials_project_notebooks/parallelization_notebooks/FInger_Pool4.ipynb | 3juholee/materialproject_ml | 6eb734fc1f92c6567c34845e917024dbb514e507 | [
"MIT"
] | null | null | null | notebooks/materials_project_notebooks/parallelization_notebooks/FInger_Pool4.ipynb | 3juholee/materialproject_ml | 6eb734fc1f92c6567c34845e917024dbb514e507 | [
"MIT"
] | null | null | null | 22.723077 | 319 | 0.560596 | [
[
[
"Finger_Pool notebooks are the ones used to create the fingerprints (without oxidation states) for structures with <50 atoms in the unit cell. This could have been done in one cell using the multiprocessing.Pool.map function; however, I wanted to track progress using tqdm, which is harder to do with the multiprocessing module.",
"_____no_output_____"
]
],
[
[
"import fingerprint as fp\nstruct_all=s_all=fp.read_pickle(\"struct_all.pickle\")",
"/usr/local/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.\n warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')\n"
],
[
"structs_lim_50=[x for x in struct_all if len(x.species)<50]",
"_____no_output_____"
],
[
"import tqdm\nimport numpy as np\nimport itertools\ndef phi_getter(i):\n phi_ones=fp.get_phi_scaled(i,obser='ones')\n phi_Z=fp.get_phi_scaled(i,obser='Z')\n phi_Chi=fp.get_phi_scaled(i,obser='Chi')\n return list(itertools.chain(phi_ones,phi_Z,phi_Chi))\n\n\n\n\nlim1=3700\nlim2=7400\nfinger_part=np.array([phi_getter(structs_lim_50[lim1+i]) for i in tqdm.tqdm_notebook(range(lim2-lim1))])\n\nfinger_part.shape",
"\n"
],
[
"np.savetxt(\"finger_part2.npz\",finger_part)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
ecda690915189eecd0c2df317060e3997ca943ae | 27,919 | ipynb | Jupyter Notebook | week11-stereo_depth/seminar11.ipynb | markovka17/deep_vision_and_graphics | 62c19c459dcbb05138fb096e5bd1fada7c04410f | [
"MIT"
] | 51 | 2021-09-15T12:07:12.000Z | 2022-03-29T02:19:58.000Z | week11-stereo_depth/seminar11.ipynb | ElephantT/deep_vision_and_graphics | dd014301da75883671a330744c1e13a40f6defca | [
"MIT"
] | 7 | 2021-09-26T16:33:21.000Z | 2021-12-13T09:05:19.000Z | week11-stereo_depth/seminar11.ipynb | ElephantT/deep_vision_and_graphics | dd014301da75883671a330744c1e13a40f6defca | [
"MIT"
] | 24 | 2021-09-12T21:41:26.000Z | 2022-02-18T15:48:04.000Z | 33.759371 | 546 | 0.54368 | [
[
[
"# Seminar 11 - Stereo Depth",
"_____no_output_____"
],
[
"This task is based on two papers:\n\n1) Mayer et al. \"A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation\", CVPR 2016, ([pdf](https://openaccess.thecvf.com/content_cvpr_2016/papers/Mayer_A_Large_Dataset_CVPR_2016_paper.pdf), [poster](https://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16/poster-MIFDB16.pdf), [supplementary materials](https://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16/supplementary-MIFDB16.pdf), [project page](https://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16/)) \n\n2) Fischer et al. \"FlowNet: Learning Optical Flow with Convolutional Networks\", ICCV 2015, [pdf](https://arxiv.org/pdf/1504.06852.pdf)\n",
"_____no_output_____"
],
[
"<img src=\"https://media.arxiv-vanity.com/render-output/4733381/images/monkaa/Cleanpass_0050_L.jpg\" style=\"width:60%\">",
"_____no_output_____"
]
],
[
[
"#!L\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision\nfrom torchvision import transforms\nimport tqdm\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport random\nfrom PIL import Image\nimport time\n%matplotlib inline",
"_____no_output_____"
],
[
"#!L\ndef get_computing_device():\n if torch.cuda.is_available():\n device = torch.device('cuda:0')\n else:\n device = torch.device('cpu')\n return device\n\ndevice = get_computing_device()\nprint(f\"Our main computing device is '{device}'\")",
"_____no_output_____"
]
],
[
[
"## 0. Warm up\n\n1) What is a rectified stereo pair?\n\n2) What is disparity?\n\n3) How does disparity help with depth computation?",
"_____no_output_____"
],
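The third warm-up question can be made concrete with a tiny sketch. For a rectified pair, depth and disparity are related by depth = focal · baseline / disparity. The focal length and baseline below are illustrative stand-ins, not the actual KITTI calibration.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth (meters) from disparity (pixels) for a rectified stereo pair.

    depth = focal_px * baseline_m / disparity_px
    A disparity of 0 is treated as "unknown" (as in the KITTI ground truth),
    so we map it to infinity instead of dividing by zero.
    """
    if disparity_px <= 0:
        return float('inf')
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only (NOT the real KITTI calibration):
focal_px, baseline_m = 700.0, 0.54
near = disparity_to_depth(100.0, focal_px, baseline_m)  # large disparity -> close object
far = disparity_to_depth(10.0, focal_px, baseline_m)    # small disparity -> distant object
```

Doubling the disparity halves the depth, which is why small disparity errors on distant (small-disparity) objects translate into large depth errors.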
[
"## 1. KITTI Stereo Depth 2012\n\nhttp://www.cvlibs.net/datasets/kitti/eval_stereo_flow.php?benchmark=stereo\n\n194 training image pairs, 195 test image pairs with hidden ground truth, ground truth depth captured by lidar.\n\nZero disparity value means that disparity is unknown.",
"_____no_output_____"
]
],
[
[
"import gfile\ngfile.download_list(\n 'https://drive.google.com/file/d/12zitJCsOVmoCHII5Ym_t2AAORXb6WMyU',\n filename='kitti_stereo_2012_training_data.zip',\n target_dir='.')",
"_____no_output_____"
],
[
"!unzip -q ./kitti_stereo_2012_training_data.zip ",
"_____no_output_____"
],
[
"def normalize_disparity(img):\n img = img.astype(np.float32) / 256\n return img",
"_____no_output_____"
],
[
"img_name = \"000002_10.png\"\n\nplt.figure(figsize=(15,5))\n\nplt.subplot(2,2,1)\nplt.title('left image')\nplt.imshow(Image.open(f'./kitti_stereo_2012_training_data/train/colored_0/{img_name}')); \nplt.xticks([])\nplt.yticks([])\n\nplt.subplot(2,2,2)\nplt.title('right image')\nplt.imshow(Image.open(f'./kitti_stereo_2012_training_data/train/colored_1/{img_name}')); \nplt.xticks([])\nplt.yticks([])\n\nplt.subplot(2,2,3)\nplt.title('disparity')\ndisp = np.array(Image.open(f'./kitti_stereo_2012_training_data/train/disp_noc/{img_name}'))\nplt.imshow(normalize_disparity(disp), 'gray')\nplt.xticks([])\nplt.yticks([])\n \nplt.subplot(2,2,4)\nplt.title('valid disparity mask')\nplt.imshow(disp > 0, 'gray')\nplt.xticks([])\nplt.yticks([])",
"_____no_output_____"
],
[
"sample_max_disparity = normalize_disparity(disp).max()\nsample_shape = disp.shape\n\nprint(f'max disp = {sample_max_disparity} , disp shape {sample_shape}')",
"_____no_output_____"
]
],
[
[
"## 2. Dataset loading",
"_____no_output_____"
]
],
[
[
"from kitti_dataset import KITTIStereoRAM",
"_____no_output_____"
],
[
"KITTIStereoRAM??",
"_____no_output_____"
],
[
"means = np.array([0.35715697, 0.37349922, 0.35886646] , dtype=np.float32)\nstds = np.array([0.27408948, 0.2807328, 0.27994434], dtype=np.float32)\n\ntransform_train = transforms.Compose([\n transforms.ToTensor(),\n transforms.ColorJitter(0.1, 0.1, 0.1, 0.1),\n transforms.Normalize(means, stds),\n])\n\n# min kitti shape is [370, 1226], max shape is [376, 1242]\nPAD_HEIGHT = 128*3 \nPAD_WIDTH = 1280\nCROP_WIDTH = 768\ndef transforms_train(left_image, right_image, disparity, valid_pixels_mask):\n disparity = torchvision.transforms.functional.to_tensor(disparity)\n valid_pixels_mask = torchvision.transforms.functional.to_tensor(valid_pixels_mask)\n left_image = transform_train(left_image)\n right_image = transform_train(right_image)\n left_image = pad_to_size(left_image, PAD_HEIGHT, PAD_WIDTH)\n right_image = pad_to_size(right_image, PAD_HEIGHT, PAD_WIDTH)\n disparity = pad_to_size(disparity, PAD_HEIGHT, PAD_WIDTH)\n valid_pixels_mask = pad_to_size(valid_pixels_mask, PAD_HEIGHT, PAD_WIDTH)\n\n shift = torch.randint(0, PAD_WIDTH-CROP_WIDTH, (1,))\n left_image = left_image[:,:,shift:shift+CROP_WIDTH]\n right_image = right_image[:, :, shift: shift+CROP_WIDTH]\n disparity = disparity[:, :, shift: shift+ CROP_WIDTH]\n valid_pixels_mask = valid_pixels_mask[:, :, shift: shift+CROP_WIDTH]\n return left_image, right_image, disparity, valid_pixels_mask\n\ndef pad_to_size(images, min_height, min_width):\n if images.shape[1] < min_height:\n images = torchvision.transforms.functional.pad(images, (0,0,0,min_height-images.shape[1]))\n if images.shape[2] < min_width:\n images = torchvision.transforms.functional.pad(images, (0,0, min_width - images.shape[2], 0))\n return images\n \ntransform_test = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize(means, stds),\n])\n\ndef transforms_test(left_image, right_image, disparity, valid_pixels_mask):\n disparity = torchvision.transforms.functional.to_tensor(disparity)\n valid_pixels_mask = 
torchvision.transforms.functional.to_tensor(valid_pixels_mask)\n left_image = transform_test(left_image)\n right_image = transform_test(right_image)\n left_image = pad_to_size(left_image, PAD_HEIGHT, PAD_WIDTH)\n right_image = pad_to_size(right_image, PAD_HEIGHT, PAD_WIDTH)\n disparity = pad_to_size(disparity, PAD_HEIGHT, PAD_WIDTH)\n valid_pixels_mask = pad_to_size(valid_pixels_mask, PAD_HEIGHT, PAD_WIDTH)\n \n return left_image, right_image, disparity, valid_pixels_mask",
"_____no_output_____"
],
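A stand-alone NumPy sketch of the shape logic behind `pad_to_size` above — zero-padding a (C, H, W) image on the bottom and right until it reaches the target size. The notebook's version does the same with torchvision's `pad`; this mirror only reproduces the behavior, it is not the code the loader actually runs.

```python
import numpy as np

def pad_to_size_np(img, min_height, min_width):
    """Zero-pad a (C, H, W) array on the bottom/right up to (min_height, min_width)."""
    c, h, w = img.shape
    out = np.zeros((c, max(h, min_height), max(w, min_width)), dtype=img.dtype)
    out[:, :h, :w] = img  # original pixels stay in the top-left corner
    return out

# Smallest KITTI 2012 frame padded to the notebook's PAD_HEIGHT x PAD_WIDTH:
x = np.ones((3, 370, 1226), dtype=np.float32)
y = pad_to_size_np(x, 384, 1280)
```

Padding to a multiple of 64 in height (384 = 128·3) keeps the six stride-2 encoder stages of the network below from producing fractional feature-map sizes.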
[
"train_loader = KITTIStereoRAM(root=\"./kitti_stereo_2012_training_data/\", train=True, transforms=transforms_train)\n\ntrain_batch_gen = torch.utils.data.DataLoader(train_loader, \n batch_size=8,\n shuffle=True,\n num_workers=16)\nval_loader = KITTIStereoRAM(root=\"./kitti_stereo_2012_training_data/\", train=False, transforms=transforms_test)\n\nval_batch_gen = torch.utils.data.DataLoader(val_loader, \n batch_size=1,\n shuffle=False,\n num_workers=16)\n",
"_____no_output_____"
],
[
"for elem in train_batch_gen:\n break",
"_____no_output_____"
]
],
[
[
"## 3. DispNet",
"_____no_output_____"
],
[
"[Paper](https://openaccess.thecvf.com/content_cvpr_2016/papers/Mayer_A_Large_Dataset_CVPR_2016_paper.pdf), [poster](https://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16/poster-MIFDB16.pdf), [supplementary materials](https://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16/supplementary-MIFDB16.pdf), [project page](https://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16/)",
"_____no_output_____"
],
[
"### 3.1 DispNet Simple",
"_____no_output_____"
],
[
"The simplest way to predict the disparity is to simply concatenate the pair of images and feed it to a U-Net-like architecture.\n\n[[FlowNet paper]](https://arxiv.org/pdf/1504.06852.pdf)",
"_____no_output_____"
],
[
"<img src=\"https://miro.medium.com/max/2400/0*LPtmtLr-mugr8OtN.png\" style=\"width:80%\">\n<img src=\"https://miro.medium.com/max/692/0*blFDiciN3KbPNeov.png\" style=\"width:80%\">\n\nNetwork architecture in more details:\n\n<img src=\"./dispnet.png\" style=\"width:50%\">",
"_____no_output_____"
]
],
[
[
"class ConvBNRelu(torch.nn.Module):\n def __init__(self, in_channels, out_channels, *args, **kwargs):\n super().__init__()\n self.conv = torch.nn.Conv2d(in_channels, out_channels, *args, **kwargs)\n self.bn = torch.nn.BatchNorm2d(out_channels)\n self.relu = torch.nn.ReLU()\n def forward(self, x):\n x = self.conv(x)\n x = self.bn(x)\n x = self.relu(x)\n return x\n \n \nclass UpConvBNRelu(torch.nn.Module):\n def __init__(self, in_channels, out_channels, *args, **kwargs):\n super().__init__()\n self.conv = torch.nn.ConvTranspose2d(in_channels, out_channels, *args, **kwargs)\n self.bn = torch.nn.BatchNorm2d(out_channels)\n self.relu = torch.nn.ReLU()\n def forward(self, x):\n x = self.conv(x)\n x = self.bn(x)\n x = self.relu(x)\n return x\n \n \nclass DispNetSimple(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = ConvBNRelu(6, 64, kernel_size=(7,7), stride=2, padding=(3,3))\n self.conv2 = ConvBNRelu(64, 128, kernel_size=(5,5), stride=2, padding=(2,2))\n self.conv3 = torch.nn.Sequential(\n ConvBNRelu(128, 256, kernel_size=(5,5), stride=2, padding=(2,2)),\n ConvBNRelu(256, 256, kernel_size=(3,3), stride=1, padding=(1,1)))\n self.conv4 = torch.nn.Sequential(\n ConvBNRelu(256, 512, kernel_size=(3,3), stride=2, padding=(1,1)),\n ConvBNRelu(512, 512, kernel_size=(3,3), stride=1, padding=(1,1)))\n self.conv5 = torch.nn.Sequential(\n ConvBNRelu(512, 512, kernel_size=(3,3), stride=2, padding=(1,1)),\n ConvBNRelu(512, 512, kernel_size=(3,3), stride=1, padding=(1,1)))\n self.conv6 = torch.nn.Sequential(\n ConvBNRelu(512, 1024, kernel_size=(3,3), stride=2, padding=(1,1)),\n ConvBNRelu(1024, 1024, kernel_size=(3,3), stride=1, padding=(1,1)))\n self.pred6 = torch.nn.Conv2d(1024, 1, kernel_size=3, stride=1, padding=(1,1))\n \n self.upconv5 = UpConvBNRelu(1024, 512, kernel_size=4, stride=2, padding=(1,1))\n self.iconv5 = ConvBNRelu(1025, 512, kernel_size=3, stride=1, padding=(1,1))\n self.pred5 = torch.nn.Conv2d(512, 1, kernel_size=3, stride=1, 
padding=(1,1))\n\n self.upconv4 = UpConvBNRelu(512, 256, kernel_size=4, stride=2, padding=(1,1))\n self.iconv4 = ConvBNRelu(256+512+1, 256, kernel_size=3, stride=1, padding=(1,1))\n self.pred4 = torch.nn.Conv2d(256, 1, kernel_size=3, stride=1, padding=(1,1))\n \n self.upconv3 = UpConvBNRelu(256, 128, kernel_size=4, stride=2, padding=(1,1))\n self.iconv3 = ConvBNRelu(128+256+1, 128, kernel_size=3, stride=1, padding=(1,1))\n self.pred3 = torch.nn.Conv2d(128, 1, kernel_size=3, stride=1, padding=(1,1))\n\n self.upconv2 = UpConvBNRelu(128, 64, kernel_size=4, stride=2, padding=(1,1))\n self.iconv2 = ConvBNRelu(64+128+1, 64, kernel_size=3, stride=1, padding=(1,1))\n self.pred2 = torch.nn.Conv2d(64, 1, kernel_size=3, stride=1, padding=(1,1))\n\n self.upconv1 = UpConvBNRelu(64, 32, kernel_size=4, stride=2, padding=(1,1))\n self.iconv1 = ConvBNRelu(32+64+1, 32, kernel_size=3, stride=1, padding=(1,1))\n self.pred1 = torch.nn.Conv2d(32, 1, kernel_size=3, stride=1, padding=(1,1))\n \n def forward(self, left_img, right_img):\n x = torch.cat([left_img, right_img], dim=1)\n \n # TODO apply dispnet \n return predictions_per_scale\n",
"_____no_output_____"
]
],
[
[
"Let's check that it works",
"_____no_output_____"
]
],
[
[
"dispnet = DispNetSimple()",
"_____no_output_____"
],
[
"for sample in train_batch_gen:\n left, right, target, mask = sample\n res = dispnet(left, right)\n break",
"_____no_output_____"
]
],
[
[
"### 3.2 Loss",
"_____no_output_____"
]
],
[
[
"def compute_loss(predicted, target, mask):\n losses = []\n target_masked = target[mask]\n for scale_pred in predicted:\n scale_pred = torch.nn.functional.interpolate(\n scale_pred, size=target.shape[-2:], mode='bilinear', align_corners=True)\n scale_pred = scale_pred[mask]\n losses.append(torch.nn.functional.huber_loss(scale_pred, target_masked))\n total_loss = sum(losses) / len(losses)\n return total_loss, losses",
"_____no_output_____"
],
[
"compute_loss(res, target, mask)",
"_____no_output_____"
]
],
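As a sanity check on the loss above: `torch.nn.functional.huber_loss` with its default `delta=1.0` is quadratic for small errors and linear for large ones, and `compute_loss` averages it over scales. A minimal pure-Python sketch of that per-element behaviour (function names here are illustrative, not part of the notebook):

```python
def huber(pred, target, delta=1.0):
    # Quadratic for |error| <= delta, linear beyond it (torch's default delta is 1.0).
    err = abs(pred - target)
    if err <= delta:
        return 0.5 * err ** 2
    return delta * (err - 0.5 * delta)

def multi_scale_loss(preds_per_scale, target, delta=1.0):
    # Mean Huber loss per scale, then the average across scales,
    # mirroring how compute_loss combines its per-scale losses.
    losses = [
        sum(huber(p, t, delta) for p, t in zip(scale, target)) / len(target)
        for scale in preds_per_scale
    ]
    return sum(losses) / len(losses), losses
```

For example, `multi_scale_loss([[1.0, 2.0], [0.0, 0.0]], [0.0, 0.0])` averages per-scale losses of 1.0 and 0.0 into a total of 0.5.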
[
[
"### 3.3 Training",
"_____no_output_____"
]
],
[
[
"torch.manual_seed(0)\nnp.random.seed(0)\nrandom.seed(0)\n\ndispnet = DispNetSimple()\ndispnet = dispnet.to(device)\n\nopt = torch.optim.Adam(dispnet.parameters(), lr=1e-3, weight_decay=1e-5)",
"_____no_output_____"
],
[
"def train_network(network, opt, num_epochs=20):\n    for epoch in range(num_epochs):\n        start_time = time.time()\n        train_loss = []\n        val_loss = []\n        train_scale_losses = []\n        val_scale_losses = []\n\n        network.train(True)\n\n        for x_left, x_right, gt, valid_pixels_mask in tqdm.tqdm(train_batch_gen):\n            opt.zero_grad()\n            x_left = x_left.to(device)\n            x_right = x_right.to(device)\n            valid_pixels_mask = valid_pixels_mask.to(device)\n            gt = gt.to(device)\n\n            pred = network(x_left, x_right)\n\n            loss, scale_losses = compute_loss(pred, gt, valid_pixels_mask)\n            loss.backward()\n            opt.step()\n\n            train_loss.append(loss.cpu().data.numpy())\n            train_scale_losses.append(np.array([elem.cpu().data.numpy() for elem in scale_losses]))\n\n        network.train(False)\n        with torch.no_grad():\n            for x_left, x_right, gt, valid_pixels_mask in val_batch_gen:\n                x_left = x_left.to(device)\n                x_right = x_right.to(device)\n                gt = gt.to(device)\n                valid_pixels_mask = valid_pixels_mask.to(device)\n\n                pred = network(x_left, x_right)\n\n                loss, scale_losses = compute_loss(pred, gt, valid_pixels_mask)\n\n                val_loss.append(loss.cpu().data.numpy())\n                val_scale_losses.append(np.array([elem.cpu().data.numpy() for elem in scale_losses]))\n\n        # Then we print the results for this epoch:\n        print(\"Epoch {} of {} took {:.3f}s\".format(\n            epoch + 1, num_epochs, time.time() - start_time))\n        print(\"  training loss (in-iteration): \\t{:.6f} , \\t component loss: {}\".format(\n            np.mean(train_loss), np.mean(np.stack(train_scale_losses), axis=0)))\n        print(\"  validation loss: \\t\\t\\t{:.2f} , \\t\\t component loss: {}\".format(\n            np.mean(val_loss), np.mean(np.stack(val_scale_losses), axis=0)))",
"_____no_output_____"
],
[
"train_network(dispnet, opt, num_epochs=20)",
"_____no_output_____"
],
[
"def visualize_result(network, img_index):\n network.train(False)\n for i, (x_left, x_right, target, mask) in enumerate(val_batch_gen):\n if i != img_index:\n continue\n pred = network(x_left.to(device), x_right.to(device))\n pred = pred[-1].cpu()\n break\n \n plt.figure(figsize=(20, 10))\n plt.subplot(3,1,1)\n plt.title('left image')\n plt.imshow(val_loader.left_images[img_index])\n plt.xticks([]), plt.yticks([])\n plt.subplot(3,1,2)\n plt.title('gt')\n plt.imshow(val_loader.targets[img_index])\n plt.xticks([]), plt.yticks([])\n plt.subplot(3,1,3)\n plt.title('pred')\n plt.imshow(pred.data.numpy()[0,0])\n plt.xticks([]), plt.yticks([])",
"_____no_output_____"
],
[
"visualize_result(dispnet, img_index=5)",
"_____no_output_____"
]
],
[
[
"### 3.4 DispNet-Corr1D",
"_____no_output_____"
],
[
"(image is taken from [poster](https://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16/poster-MIFDB16.pdf))",
"_____no_output_____"
],
[
"<img src=\"./dispnet-corr1d.png\" style=\"width:80%\">",
"_____no_output_____"
]
],
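The idea behind the Corr1D layer in the cell below — compare each left-image feature vector with right-image features shifted along the scan line — can be sketched in plain Python. The shift direction and zero-padding used here are assumptions about one reasonable convention, not the notebook's required answer:

```python
def corr1d(left_feats, right_feats, max_disp):
    # left_feats, right_feats: [width][channels] feature vectors for one scan line.
    # Returns corr[shift][x] = <left[x], right[x - shift]> / channels,
    # zero-padded where x - shift falls off the image.
    width, channels = len(left_feats), len(left_feats[0])
    corr = []
    for shift in range(max_disp):
        row = []
        for x in range(width):
            if x - shift < 0:
                row.append(0.0)
            else:
                dot = sum(l * r for l, r in zip(left_feats[x], right_feats[x - shift]))
                row.append(dot / channels)
        corr.append(row)
    return corr
```

Comparing a scan line with itself, shift 0 yields the per-pixel mean squared feature norms, while larger shifts pick up off-diagonal matches.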
[
[
"class Corr1DLayer(torch.nn.Module):\n def __init__(self, max_disp):\n super().__init__()\n self.max_disp = max_disp\n\n def forward(self, left_img, right_img):\n corr_result = []\n for shift in range(0, self.max_disp):\n # YOUR CODE\n ...\n corr = ...\n corr_result.append(corr)\n corr_result = torch.stack(corr_result, dim=1)\n return corr_result\n \n \nclass DispNetCorr1D(torch.nn.Module):\n def __init__(self, max_disp=40):\n super().__init__()\n self.conv1 = ConvBNRelu(3, 64, kernel_size=(7,7), stride=2, padding=(3,3))\n self.conv2 = ConvBNRelu(64, 128, kernel_size=(5,5), stride=2, padding=(2,2))\n \n self.corr1d = Corr1DLayer(max_disp)\n self.conv_refinement = ConvBNRelu(128, 64, kernel_size=(3,3), stride=1, padding=(1,1))\n \n self.conv3 = torch.nn.Sequential(\n ConvBNRelu(64+max_disp, 256, kernel_size=(5,5), stride=2, padding=(2,2)),\n ConvBNRelu(256, 256, kernel_size=(3,3), stride=1, padding=(1,1)))\n self.conv4 = torch.nn.Sequential(\n ConvBNRelu(256, 512, kernel_size=(3,3), stride=2, padding=(1,1)),\n ConvBNRelu(512, 512, kernel_size=(3,3), stride=1, padding=(1,1)))\n self.conv5 = torch.nn.Sequential(\n ConvBNRelu(512, 512, kernel_size=(3,3), stride=2, padding=(1,1)),\n ConvBNRelu(512, 512, kernel_size=(3,3), stride=1, padding=(1,1)))\n self.conv6 = torch.nn.Sequential(\n ConvBNRelu(512, 1024, kernel_size=(3,3), stride=2, padding=(1,1)),\n ConvBNRelu(1024, 1024, kernel_size=(3,3), stride=1, padding=(1,1)))\n self.pred6 = torch.nn.Conv2d(1024, 1, kernel_size=3, stride=1, padding=(1,1))\n \n self.upconv5 = UpConvBNRelu(1024, 512, kernel_size=4, stride=2, padding=(1,1))\n self.iconv5 = ConvBNRelu(1025, 512, kernel_size=3, stride=1, padding=(1,1))\n self.pred5 = torch.nn.Conv2d(512, 1, kernel_size=3, stride=1, padding=(1,1))\n\n self.upconv4 = UpConvBNRelu(512, 256, kernel_size=4, stride=2, padding=(1,1))\n self.iconv4 = ConvBNRelu(256+512+1, 256, kernel_size=3, stride=1, padding=(1,1))\n self.pred4 = torch.nn.Conv2d(256, 1, kernel_size=3, stride=1, 
padding=(1,1))\n \n self.upconv3 = UpConvBNRelu(256, 128, kernel_size=4, stride=2, padding=(1,1))\n self.iconv3 = ConvBNRelu(128+256+1, 128, kernel_size=3, stride=1, padding=(1,1))\n self.pred3 = torch.nn.Conv2d(128, 1, kernel_size=3, stride=1, padding=(1,1))\n\n self.upconv2 = UpConvBNRelu(128, 64, kernel_size=4, stride=2, padding=(1,1))\n self.iconv2 = ConvBNRelu(64+128+1, 64, kernel_size=3, stride=1, padding=(1,1))\n self.pred2 = torch.nn.Conv2d(64, 1, kernel_size=3, stride=1, padding=(1,1))\n\n self.upconv1 = UpConvBNRelu(64, 32, kernel_size=4, stride=2, padding=(1,1))\n self.iconv1 = ConvBNRelu(32+64+1, 32, kernel_size=3, stride=1, padding=(1,1))\n self.pred1 = torch.nn.Conv2d(32, 1, kernel_size=3, stride=1, padding=(1,1))\n \n def forward(self, left_img, right_img):\n \n # YOUR CODE\n \n return predictions\n",
"_____no_output_____"
],
[
"dispnet = DispNetCorr1D()",
"_____no_output_____"
],
[
"for sample in train_batch_gen:\n left, right, target, mask = sample\n res = dispnet(left, right)\n break",
"_____no_output_____"
]
],
[
[
"### 3.5 DispNet-Corr1D training",
"_____no_output_____"
]
],
[
[
"torch.manual_seed(0)\nnp.random.seed(0)\nrandom.seed(0)\n\ndispnet = DispNetCorr1D(max_disp=40)\ndispnet = dispnet.to(device)\n\nopt = torch.optim.Adam(dispnet.parameters(), lr=1e-3, weight_decay=1e-5)",
"_____no_output_____"
],
[
"train_network(dispnet, opt, num_epochs=20)",
"_____no_output_____"
],
[
"visualize_result(dispnet, img_index=5)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
ecda6c44d346e6ef45a9306ce203d416a16e3e3b | 16,193 | ipynb | Jupyter Notebook | docs/tutorial/demo.ipynb | JinwooPark00/otter-grader | 1e037fa56e7833650980347fc8db64bd0a152e09 | [
"BSD-3-Clause"
] | null | null | null | docs/tutorial/demo.ipynb | JinwooPark00/otter-grader | 1e037fa56e7833650980347fc8db64bd0a152e09 | [
"BSD-3-Clause"
] | null | null | null | docs/tutorial/demo.ipynb | JinwooPark00/otter-grader | 1e037fa56e7833650980347fc8db64bd0a152e09 | [
"BSD-3-Clause"
] | null | null | null | 32.979633 | 6,508 | 0.667572 | [
[
[
"```\nBEGIN ASSIGNMENT\nrequirements: requirements.txt\nsolutions_pdf: true\nexport_cell:\n instructions: \"These are some submission instructions.\"\ngenerate: \n pdf: true\n zips: false\n```",
"_____no_output_____"
],
[
"# Otter-Grader Tutorial\n\nThis notebook is part of the Otter-Grader tutorial. For more information about Otter, see our [documentation](https://otter-grader.rtfd.io).",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n%matplotlib inline\nimport otter\ngrader = otter.Notebook()",
"_____no_output_____"
]
],
[
[
"**Question 1:** Write a function `square` that returns the square of its argument.\n\n```\nBEGIN QUESTION\nname: q1\n```",
"_____no_output_____"
]
],
[
[
"def square(x):\n return x**2 # SOLUTION",
"_____no_output_____"
],
[
"# TEST\nsquare(1) == 1",
"_____no_output_____"
],
[
"# TEST\nsquare(0) == 0",
"_____no_output_____"
],
[
"# HIDDEN TEST\nsquare(2.5) == 6.25",
"_____no_output_____"
]
],
[
[
"**Question 2:** Write an infinite generator of the Fibonacci sequence `fiberator` that is *not* recursive.\n\n```\nBEGIN QUESTION\nname: q2\n```",
"_____no_output_____"
]
],
[
[
"def fiberator():\n # BEGIN SOLUTION\n yield 0\n yield 1\n x, y = 0, 1\n while True:\n x, y = y, x + y\n yield y\n # END SOLUTION",
"_____no_output_____"
],
[
"# TEST\nf = fiberator()\nassert next(f) == 0\nassert next(f) == 1",
"_____no_output_____"
],
[
"# HIDDEN TEST\nf = fiberator()\nassert next(f) == 0\nassert next(f) == 1\nassert next(f) == 1\nassert next(f) == 2\nassert next(f) == 3\nassert next(f) == 5\nassert next(f) == 8\nassert next(f) == 13\nassert next(f) == 21",
"_____no_output_____"
]
],
[
[
"**Question 3:** Create a DataFrame mirroring the table below and assign this to `data`. Then group by the `flavor` column and find the mean price for each flavor; assign this **series** to `price_by_flavor`.\n\n| flavor | scoops | price |\n|-----|-----|-----|\n| chocolate | 1 | 2 |\n| vanilla | 1 | 1.5 |\n| chocolate | 2 | 3 |\n| strawberry | 1 | 2 |\n| strawberry | 3 | 4 |\n| vanilla | 2 | 2 |\n| mint | 1 | 4 |\n| mint | 2 | 5 |\n| chocolate | 3 | 5 |\n\n```\nBEGIN QUESTION\nname: q3\n```",
"_____no_output_____"
]
],
[
[
"# BEGIN SOLUTION NO PROMPT\ndata = pd.DataFrame({\n \"flavor\": [\"chocolate\", \"vanilla\", \"chocolate\", \"strawberry\", \"strawberry\", \"vanilla\", \"mint\", \n \"mint\", \"chocolate\"],\n \"scoops\": [1, 1, 2, 1, 3, 2, 1, 2, 3],\n \"price\": [2, 1.5, 3, 2, 4, 2, 4, 5, 5]\n})\nprice_by_flavor = data.groupby(\"flavor\").mean()[\"price\"]\n# END SOLUTION\n\"\"\" # BEGIN PROMPT\ndata = ...\nprice_by_flavor = ...\n\"\"\" # END PROMPT\nprice_by_flavor",
"_____no_output_____"
],
[
"# TEST\nlen(data[\"flavor\"].unique()) == 4",
"_____no_output_____"
],
[
"# TEST\nfor l in [\"chocolate\", \"vanilla\", \"strawberry\", \"mint\"]:\n assert l in data[\"flavor\"].unique()",
"_____no_output_____"
],
[
"# TEST\nassert type(price_by_flavor) == pd.Series",
"_____no_output_____"
],
[
"# TEST\nassert len(price_by_flavor) == 4",
"_____no_output_____"
],
[
"# HIDDEN TEST\nnp.isclose(price_by_flavor[\"chocolate\"], 3.33333333)",
"_____no_output_____"
],
[
"# HIDDEN TEST\nnp.isclose(price_by_flavor[\"mint\"], 4.5)",
"_____no_output_____"
],
[
"# HIDDEN TEST\nnp.isclose(price_by_flavor[\"strawberry\"], 3)",
"_____no_output_____"
],
[
"# HIDDEN TEST\nnp.isclose(price_by_flavor[\"vanilla\"], 1.75)",
"_____no_output_____"
]
],
[
[
"**Question 4:** Create a barplot of `price_by_flavor`.\n\n```\nBEGIN QUESTION\nname: q4\nmanual: true\n```",
"_____no_output_____"
]
],
[
[
"price_by_flavor.plot.bar(); # SOLUTION",
"_____no_output_____"
]
],
[
[
"**Question 5:** What do you notice about the bar plot?\n\n```\nBEGIN QUESTION\nname: q5\nmanual: true\n```",
"_____no_output_____"
],
[
"**SOLUTION:** mint is the highest...?",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecda7b32d2c708fdcbeb263a6e26ec741c5d6005 | 705,880 | ipynb | Jupyter Notebook | Backorder_Prediction_reduced_features.ipynb | amar-chheda/backorder-prediction | 836ed53768799f84783c0141b0bc277f2f64dac1 | [
"MIT"
] | 1 | 2019-02-24T01:09:27.000Z | 2019-02-24T01:09:27.000Z | Backorder_Prediction_reduced_features.ipynb | amar-chheda/backorder-prediction | 836ed53768799f84783c0141b0bc277f2f64dac1 | [
"MIT"
] | null | null | null | Backorder_Prediction_reduced_features.ipynb | amar-chheda/backorder-prediction | 836ed53768799f84783c0141b0bc277f2f64dac1 | [
"MIT"
] | 1 | 2019-12-09T09:18:26.000Z | 2019-12-09T09:18:26.000Z | 296.090604 | 139,876 | 0.899243 | [
[
[
"# Backorder Prediction System\n\nBackorders are one of the major problems in supply chain and logistics. Backorders are products that are temporarily out of stock, but against which a customer is still permitted to place an order for future stock. Backorders are both good and bad: strong demand can drive them, but so can suboptimal planning. The problem is that when a product is not immediately available, customers may not have the luxury or patience to wait. This results in lost sales and low customer satisfaction.",
"_____no_output_____"
],
[
"# Problem Definition\n\nInventory management is a risky business. Too much product on hand increases carrying costs; too little increases the chance of a backorder. A review of data done by FedEx representatives suggests that a single backorder costs the company around \\$11 - \\$15. For example, an average company processing 1 million orders per year with a 20% backorder rate would experience 200,000 backorders during a year. At a cost of \\$13.28 per backorder, the total increased cost due to backorders is approximately \\$2,656,000. Thus, predicting backorders is an essential task in inventory control planning, and implementing machine learning on historical data is a natural way to solve it. Using the previous data, we are trying to predict backorders to prevent losses for a company.",
"_____no_output_____"
],
[
"## Studying the data\n\nWe have taken the Kaggle dataset for backorder prediction, which is a large dataset with over a million SKUs. We import and study the dataset for a better understanding of the correlations and the distributions of the various features.",
"_____no_output_____"
]
],
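The cost figures quoted in the problem definition can be checked directly (the numbers below are the ones cited above; \$13.28 sits within the \$11 - \$15 range):

```python
orders_per_year = 1_000_000
backorder_rate = 0.20
cost_per_backorder = 13.28  # within the $11-$15 range cited above

backorders_per_year = orders_per_year * backorder_rate   # 200,000 backorders
annual_backorder_cost = backorders_per_year * cost_per_backorder  # ~ $2,656,000
```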
[
[
"# Importing dependencies\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.neighbors import KNeighborsClassifier \nfrom sklearn.svm import SVC \nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import KFold\nfrom sklearn.utils import resample\nfrom sklearn.metrics import roc_curve, roc_auc_score, precision_recall_curve, confusion_matrix, accuracy_score\nfrom sklearn import svm\nfrom sklearn import linear_model",
"_____no_output_____"
],
[
"# Dictionary containing values for representing NaNs\nna_other = {'perf_6_month_avg':-99, 'perf_12_month_avg':-99}\n\n#reading the data\ntrain_data = pd.read_csv(\"Kaggle_Training_Dataset_v2.csv\", na_values = na_other)\ntest_data = pd.read_csv(\"Kaggle_Test_Dataset_v2.csv\", na_values = na_other)\n\n# The DtypeWarning below is caused by mixed types in the first (sku) column of this large file.",
"C:\\Users\\amar\\Anaconda3\\envs\\py35\\lib\\site-packages\\IPython\\core\\interactiveshell.py:2698: DtypeWarning: Columns (0) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n"
],
[
"train_data.head().transpose() #exploring the variables.",
"_____no_output_____"
],
[
"train_data.tail(2).transpose()",
"_____no_output_____"
],
[
"test_data.tail(2).transpose()",
"_____no_output_____"
]
],
[
[
"We see that the last row of both the test and train data is null. Thus, we remove it from each dataset.",
"_____no_output_____"
]
],
[
[
"# Drop the last row\ntrain_data = train_data[:-1]\ntest_data = test_data[:-1]",
"_____no_output_____"
],
[
"#Summarise the numerical values in training data\ntrain_data.describe().transpose()",
"_____no_output_____"
],
[
"#summarise non-numerical values in training data\ntrain_data.describe(include = ['O']).transpose()",
"_____no_output_____"
]
],
[
[
"## Notes on data above:\n\n* Data is a mix of categorical and numerical values\n* The SKU has a unique value for each row, thus it can be considered as an index and ignored\n* The categorical values contain \"Yes\" and \"No\" which can be converted to \"1\" and \"0\"\n* The numerical features have different scales, thus we can normalize / standardize them to the same scale\n* There are missing values in lead_time, perf_6_month_avg and perf_12_month_avg which will need to be handled\n* There are only 0.67% observations where the product went on a backorder, this needs to be balanced well",
"_____no_output_____"
]
],
[
[
"#Drop SKU column\ntrain_data = train_data.drop('sku', axis = 1)\ntest_data = test_data.drop('sku', axis = 1)",
"_____no_output_____"
],
[
"#change categorical to boolean \n\ncat_vars = ['potential_issue', 'deck_risk', 'oe_constraint', 'ppap_risk','stop_auto_buy', 'rev_stop', 'went_on_backorder']\n\nfor var in cat_vars:\n test_data[var] = test_data[var].map({'No':0,'Yes':1})\n train_data[var] = train_data[var].map({'No':0,'Yes':1})",
"_____no_output_____"
],
[
"#Looking at the distribution of perf_6_month_avg\nx = train_data.perf_6_month_avg.dropna()\nplt.hist(x, bins = 30)\nplt.xlim(0,1)\nplt.show()",
"_____no_output_____"
],
[
"#Looking at the distribution of perf_12_month_avg\nx = train_data.perf_12_month_avg.dropna()\nplt.hist(x, bins = 30)\nplt.xlim(0,1)\nplt.show()",
"_____no_output_____"
],
[
"#Looking at the distribution of the lead_time\nx = train_data.lead_time.dropna()\nplt.hist(x)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Replacing with median:\n\nObserving the plots, we can see that the distributions are either right- or left-skewed. Thus, replacing the missing values with the median is a better option than using the mean, which is pulled toward the outliers.",
"_____no_output_____"
]
],
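A tiny illustration of why the median is the safer fill value for a skewed column such as lead_time (the sample values below are made up for illustration, not taken from the dataset):

```python
from statistics import mean, median

# A right-skewed sample: one large lead time drags the mean upward.
lead_time_sample = [2, 2, 3, 4, 4, 5, 8, 9, 12, 52]

fill_mean = mean(lead_time_sample)      # pulled up by the outlier
fill_median = median(lead_time_sample)  # robust to the skew

def impute(values, fill):
    # Replace missing entries (None) with the chosen fill value,
    # analogous to Series.fillna on a pandas column.
    return [fill if v is None else v for v in values]
```

Here the mean (10.1) exceeds all but two of the observed values, while the median (4.5) stays near the bulk of the data.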
[
[
"# Replace NaNs in the dataset\n\n# perf_6_month_avg\ntrain_data.perf_6_month_avg = train_data.perf_6_month_avg.fillna(train_data.perf_6_month_avg.median())\ntest_data.perf_6_month_avg = test_data.perf_6_month_avg.fillna(test_data.perf_6_month_avg.median())\n\n# perf_12_month_avg\ntrain_data.perf_12_month_avg = train_data.perf_12_month_avg.fillna(train_data.perf_12_month_avg.median())\ntest_data.perf_12_month_avg = test_data.perf_12_month_avg.fillna(test_data.perf_12_month_avg.median())\n\n# lead_time\ntrain_data.lead_time = train_data.lead_time.fillna(train_data.lead_time.median())\ntest_data.lead_time = test_data.lead_time.fillna(test_data.lead_time.median())",
"_____no_output_____"
],
[
"forecasts = ['forecast_3_month','forecast_6_month', 'forecast_9_month']\nsns.pairplot(train_data, hue = 'went_on_backorder', vars = forecasts, size = 3 )",
"_____no_output_____"
],
[
"plt.show()",
"_____no_output_____"
],
[
"sales = ['sales_1_month', 'sales_3_month', 'sales_6_month', 'sales_9_month']\nsns.pairplot(train_data, vars=sales, hue='went_on_backorder', size=3)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Relation between variables\n\nWe see that the relationships between these variables are roughly linear and that they are highly correlated. We also observe that backorders happen only when the sales and forecast values are very low.",
"_____no_output_____"
]
],
[
[
"# Separate data by going on backorder or not\nno_bo = train_data.loc[train_data['went_on_backorder'] == 0] \nis_bo = train_data.loc[train_data['went_on_backorder'] == 1]",
"_____no_output_____"
],
[
"# Make scatter plots of the 3-month forecast against each of the sales\nfor col in sales:\n fig = plt.figure(figsize=(6, 6))\n ax = fig.gca()\n no_bo.plot(kind='scatter', x=col, y='forecast_3_month', ax=ax, color='DarkBlue', legend=True)\n is_bo.plot(kind='scatter', x=col, y='forecast_3_month', ax=ax, color='Red')\n plt.show()",
"_____no_output_____"
]
],
[
[
"Here we can see that the relation between the forecast and the sales follows a linear pattern, with a relatively high correlation between the two.",
"_____no_output_____"
]
],
[
[
"# Look at forecast, sales, in transit and recommended stock level in a pair-wise scatter plot\nfeature_set_1 = ['forecast_3_month', 'sales_1_month', 'in_transit_qty', 'min_bank']\nsns.pairplot(train_data, vars=feature_set_1, hue='went_on_backorder', size=3)\nplt.show()",
"_____no_output_____"
]
],
[
[
"The scatter plots show reasonably linear relationships between forecast, sales, in-transit quantity and recommended stock level. All the features range from 0 to over 300,000, and backorders only occur when the features are at low values.\n\nDue to the good correlations and sufficiently linear relationships between these features, they will all be represented by a single feature in the machine learning models. The feature chosen is sales_1_month, because past sales is directly measured, whereas the quantity in transit, recommended minimum stock and forecasts are likely derived from past sales.",
"_____no_output_____"
]
],
[
[
"# Filter out the data that will be used\n\n# Features chosen\nfeatures = ['national_inv', 'lead_time', 'sales_1_month', 'pieces_past_due', 'perf_6_month_avg',\n 'local_bo_qty', 'deck_risk', 'oe_constraint', 'ppap_risk', 'stop_auto_buy', 'rev_stop']\n\ntrain_df = train_data[features]\ntest_df = test_data[features]\n\n# Set labels\ntrain_label = train_data['went_on_backorder']\ntest_label = test_data['went_on_backorder']",
"_____no_output_____"
],
[
"# Change scale of data\n\n# Use MinMaxScaler to convert features to range 0-1\n# The label is already in the range 0-1, so it won't be affected by this.\npp_method = MinMaxScaler()\npp_method.fit(train_df)\n\nreduced_train_df = pp_method.transform(train_df)\nreduced_train_df = pd.DataFrame(reduced_train_df, columns=features)\n\nreduced_test_df = pp_method.transform(test_df)\nreduced_test_df = pd.DataFrame(reduced_test_df, columns=features)",
"_____no_output_____"
],
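What MinMaxScaler does per feature — learn the minimum and range on the training data, then reuse those statistics on the test data — can be sketched as follows (the helper names are illustrative, not sklearn API):

```python
def fit_minmax(column):
    # Learn the minimum and range of a training column.
    lo, hi = min(column), max(column)
    span = hi - lo
    return lo, span if span != 0 else 1.0  # guard against constant columns

def transform_minmax(column, lo, span):
    # Apply the training statistics; unseen values can land outside [0, 1].
    return [(v - lo) / span for v in column]
```

Fitting on [2, 4, 10] gives lo=2 and span=8, so the training column maps to [0.0, 0.25, 1.0], while a later test value of 12 maps to 1.25 — which is why, as in the cell above, the scaler is fit on the training set only.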
[
"train_complete = pd.concat([reduced_train_df,train_label], axis = 1)\ntest_complete = pd.concat([reduced_test_df,test_label], axis = 1)",
"_____no_output_____"
]
],
[
[
"# Applying various ML models:\n* SVM\n* Logistic Regression\n* KNN\n* Random Forest\n\nWhile running KNN, Logistic Regression and SVM on this data, the running time for each was more than 3 hours. Thus, given the time constraint, we stick to Random Forest for our modeling. Random Forest is computationally cheaper and tends to give better results than the other algorithms on an imbalanced dataset. This statement is based on a [research](https://elitedatascience.com/imbalanced-classes) article on [Elite Data Science](https://elitedatascience.com/).",
"_____no_output_____"
],
[
"# Applying Random Forest:\n\nTo observe the effect of undersampling on our results, we apply the Random Forest algorithm both with and without undersampling and then compare the results. We then use K-fold cross-validation to validate our results.\n",
"_____no_output_____"
]
],
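The undersampling step used inside the cross-validation loop below — keep every minority (backorder) row and sample the majority class down to the same count — can be sketched without sklearn (the seed and helper name here are illustrative):

```python
import random

def downsample_majority(rows, label_key="went_on_backorder", seed=123):
    # Keep all minority (label 1) rows; sample the majority (label 0) rows
    # without replacement down to the minority count, as resample(...) does.
    minority = [r for r in rows if r[label_key] == 1]
    majority = [r for r in rows if r[label_key] == 0]
    sampled = random.Random(seed).sample(majority, len(minority))
    return sampled + minority
```

Starting from 10 non-backorder rows and 3 backorder rows, the result is a balanced set of 6 rows, 3 of each class.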
[
[
"#Merging the train and test data:\ntotal_data = pd.concat([train_complete, test_complete])",
"_____no_output_____"
],
[
"# the unbalance in the data:\ntotal_data.went_on_backorder.value_counts()",
"_____no_output_____"
],
[
"#create a blank dataframe to fill\nmerged_pred = pd.DataFrame(data=None,index=total_data.index)\n\n#Define folds for 2-fold Cross Validation\nkf = KFold(n_splits=2,shuffle=True,random_state=123) \n\n#Define index of dataset (to help in data separation within folds)\nind=total_data.index",
"_____no_output_____"
],
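The 2-fold split defined above guarantees that every row lands in exactly one test fold, so the loop below produces an out-of-fold prediction for the whole dataset. A shuffled K-fold over row indices can be sketched as (the function name is illustrative, not sklearn API):

```python
import random

def kfold_indices(n_rows, n_splits=2, seed=123):
    # Shuffle the indices, deal them into n_splits folds, and return
    # (train, test) index lists with disjoint test folds covering all rows.
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n_splits] for i in range(n_splits)]
    splits = []
    for i, test_fold in enumerate(folds):
        train = [j for k, fold in enumerate(folds) if k != i for j in fold]
        splits.append((train, test_fold))
    return splits
```

Each (train, test) pair partitions the row indices, and the union of the test folds covers every row exactly once.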
[
"#----------fit models and produce predictions in each fold----------#\n\nfor train_index, test_index in kf.split(total_data):\n\n    #Define Training data\n    merged_train = total_data[ind.isin(train_index)]\n    y_train = merged_train['went_on_backorder']\n    X_train = merged_train.drop(['went_on_backorder'], axis=1)\n\n    #Define Test data (copy to avoid pandas SettingWithCopyWarning when adding prediction columns)\n    merged_test = total_data[ind.isin(test_index)].copy()\n    y_test = merged_test['went_on_backorder']\n    X_test = merged_test.drop(['went_on_backorder'], axis=1)\n\n    #Define down-sampled training data\n    train_majority = merged_train[y_train==0]\n    train_minority = merged_train[y_train==1]\n    n_minority = len(train_minority)\n    train_majority_downsampled = resample(train_majority,\n                                          replace=False,\n                                          n_samples=n_minority,\n                                          random_state=123)\n    train_downsampled = pd.concat([train_majority_downsampled, train_minority])\n    y_train_downsampled = train_downsampled['went_on_backorder']\n    X_train_downsampled = train_downsampled.drop(['went_on_backorder'], axis=1)\n\n    #---------------------------------------------------------------#\n    #Function to fit models\n    def fitrandomforests(n_est, maxfeat, minleaf):\n\n        #names of model predictions based on tuning parameter inputs\n        varname = \"pred_nest%s_feat%s_leaf%s\" % (n_est, maxfeat, minleaf)\n        varname2 = \"pred_down_nest%s_feat%s_leaf%s\" % (n_est, maxfeat, minleaf)\n\n        #Fit a Random Forest model\n        rf = RandomForestClassifier(n_estimators=n_est,\n                                    max_features=maxfeat,\n                                    min_samples_leaf=minleaf)\n        rf.fit(X_train, y_train)\n        preds = rf.predict_proba(X_test)[:,1]\n        merged_test[varname] = preds\n\n        #Fit a Random Forest model on downsampled data\n        rfd = RandomForestClassifier(n_estimators=n_est,\n                                     max_features=maxfeat,\n                                     min_samples_leaf=minleaf)\n        rfd.fit(X_train_downsampled, y_train_downsampled)\n        predsd = rfd.predict_proba(X_test)[:,1]\n        merged_test[varname2] = predsd\n    #---------------------------------------------------------------#\n\n    #Tuning parameter grids\n\n    #number of trees (more is better for prediction but slower)\n    n_est = 50\n    #maximum features tried\n    maxfeatgrid = [3, 5, 7]\n    #Minimum samples per leaf\n    minleafgrid = [5, 10, 30]\n\n    #fit models\n    for feat in maxfeatgrid:\n        for leaf in minleafgrid:\n            fitrandomforests(n_est, feat, leaf)\n\n    #Combine predictions for this fold with previous folds\n    merged_pred = pd.concat([merged_pred, merged_test])\n\n\n#drop NA's from the dataframe caused by the method for combining datasets from each loop iteration\nmerged_pred = merged_pred.dropna()",
"C:\\Users\\amar\\Anaconda3\\envs\\py35\\lib\\site-packages\\ipykernel_launcher.py:42: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\nC:\\Users\\amar\\Anaconda3\\envs\\py35\\lib\\site-packages\\ipykernel_launcher.py:50: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n"
],
[
"#View AUC for each model and each tuning parameter specification\nfor feat in maxfeatgrid:\n for leaf in minleafgrid:\n #Random forest for given tuning parameters\n varname1=\"pred_nest50_feat%s_leaf%s\" % (feat,leaf)\n rocscore1=roc_auc_score(merged_pred['went_on_backorder'],merged_pred[varname1])\n print( round(rocscore1,4 ) , varname1 )\n #Down Sampled Random Forest for given tuning parameters\n varname2=\"pred_down_nest50_feat%s_leaf%s\" % (feat,leaf)\n rocscore2=roc_auc_score(merged_pred['went_on_backorder'],merged_pred[varname2])\n print( round(rocscore2,4) , varname2 )",
"0.8665 pred_nest50_feat3_leaf5\n0.8955 pred_down_nest50_feat3_leaf5\n0.8577 pred_nest50_feat3_leaf10\n0.8856 pred_down_nest50_feat3_leaf10\n0.843 pred_nest50_feat3_leaf30\n0.8633 pred_down_nest50_feat3_leaf30\n0.8763 pred_nest50_feat5_leaf5\n0.9051 pred_down_nest50_feat5_leaf5\n0.8714 pred_nest50_feat5_leaf10\n0.8954 pred_down_nest50_feat5_leaf10\n0.8621 pred_nest50_feat5_leaf30\n0.874 pred_down_nest50_feat5_leaf30\n0.8746 pred_nest50_feat7_leaf5\n0.9089 pred_down_nest50_feat7_leaf5\n0.8742 pred_nest50_feat7_leaf10\n0.9008 pred_down_nest50_feat7_leaf10\n0.8685 pred_nest50_feat7_leaf30\n0.8777 pred_down_nest50_feat7_leaf30\n"
],
[
"#ROC Curves for top performing models\n\n#Define false positive rates/true positive rates / thresholds \n#Best random forest model\nfpr, tpr, thresholds = roc_curve(merged_pred['went_on_backorder'],\n merged_pred['pred_nest50_feat3_leaf5'])\n\n#Best down sampled random forest model\nfpr2, tpr2, thresholds2 = roc_curve(merged_pred['went_on_backorder'],\n merged_pred['pred_down_nest50_feat7_leaf5'])\n\n#AUC for best Random Forest and Random Forest Down sampled Models\nroc_auc = roc_auc_score(merged_pred['went_on_backorder'],\n merged_pred['pred_nest50_feat3_leaf5'])\nroc_auc2 = roc_auc_score(merged_pred['went_on_backorder'],\n merged_pred['pred_down_nest50_feat7_leaf5'])",
"_____no_output_____"
],
[
"#plot ROC Curve\nplt.title('ROC Curve')\nplt.plot(fpr, tpr, 'b', label='RF (AUC = %0.3f)'% roc_auc)\nplt.plot(fpr2, tpr2, 'g', label='RF Downsampled (AUC = %0.3f)'% roc_auc2)\nplt.plot([0,1],[0,1],'r--', label='Random Guess')\nplt.legend(loc='lower right')\nplt.xlim([0,1])\nplt.ylim([0,1])\nplt.show()",
"_____no_output_____"
],
[
"#define precision, recall, and corresponding threshold for model with highest AUC\nprecision, recall, threshold = precision_recall_curve(merged_pred['went_on_backorder'],\n merged_pred['pred_nest50_feat3_leaf5'])\n\n#plot Precision and Recall for a given threshold.\nplt.title('Precision and Recall')\nplt.plot(threshold,precision[1:],'purple',label='Precision')\nplt.plot(threshold,recall[1:],'orange', label='Recall')\nplt.axvline(x=.085,linestyle=\":\")\nplt.legend(loc=2,bbox_to_anchor=(1.05, 1))\nplt.xlim([0,1])\nplt.ylim([0,1])\nplt.ylabel('Precision and Recall Values')\nplt.xlabel('Threshold')\nplt.show()",
"_____no_output_____"
],
[
"#Confusion Matrix \nmerged_pred['optimal_classification']=merged_pred['pred_nest50_feat3_leaf5']>.4\npd.crosstab(merged_pred['went_on_backorder'],\n merged_pred['optimal_classification'],\n rownames=['Went on Backorder'],\n colnames=['Predicted going on Backorder?'])",
"_____no_output_____"
],
[
"#Accuracy of model\naccuracy_score(merged_pred['went_on_backorder'],merged_pred['optimal_classification'])",
"_____no_output_____"
],
[
"#Accuracy of \"naive\" (never-predict-backorder) model\nmerged_pred['naive_estimator']=0\naccuracy_score(merged_pred['went_on_backorder'],merged_pred['naive_estimator'])",
"_____no_output_____"
],
[
"from imblearn.over_sampling import SMOTE \nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"x = total_data.drop(['went_on_backorder'],axis=1)\ny = total_data['went_on_backorder']",
"_____no_output_____"
],
[
"x_train, x_val, y_train, y_val = train_test_split(x,y, test_size = 0.3, random_state = 123)",
"_____no_output_____"
],
[
"sm = SMOTE(random_state=12, ratio = 1.0)\nx_train_res, y_train_res = sm.fit_sample(x_train, y_train)",
"C:\\Users\\amar\\Anaconda3\\envs\\py35\\lib\\site-packages\\sklearn\\utils\\deprecation.py:75: DeprecationWarning: Function _ratio_float is deprecated; Use a float for 'ratio' is deprecated from version 0.2. The support will be removed in 0.4. Use a dict, str, or a callable instead.\n warnings.warn(msg, category=DeprecationWarning)\n"
],
[
"rfd=RandomForestClassifier(n_estimators=50,\n max_features=5,\n min_samples_leaf=5)\nrfd.fit(x_train_res, y_train_res)\npred = rfd.predict_proba(x_val)[:,1]",
"_____no_output_____"
],
[
"fpr, tpr, thresholds = roc_curve(y_val, pred)\n#AUC for best Random Forest and Random Forest Down sampled Models\nroc_auc = roc_auc_score(y_val,pred)",
"_____no_output_____"
],
[
"#plot ROC Curve\nplt.title('ROC Curve')\nplt.plot(fpr, tpr, 'b', label='RF (AUC = %0.3f)'% roc_auc)\nplt.plot([0,1],[0,1],'r--', label='Random Guess')\nplt.legend(loc='lower right')\nplt.xlim([0,1])\nplt.ylim([0,1])\nplt.show()",
"_____no_output_____"
],
[
"#define precision, recall, and corresponding threshold for model with highest AUC\nprecision, recall, threshold = precision_recall_curve(y_val,pred)\n\n#plot Precision and Recall for a given threshold.\nplt.title('Precision and Recall')\nplt.plot(threshold,precision[1:],'purple',label='Precision')\nplt.plot(threshold,recall[1:],'orange', label='Recall')\nplt.axvline(x=.66,linestyle=\":\")\nplt.legend(loc=2,bbox_to_anchor=(1.05, 1))\nplt.xlim([0,1])\nplt.ylim([0,1])\nplt.ylabel('Precision and Recall Values')\nplt.xlabel('Threshold')\nplt.show()",
"_____no_output_____"
],
[
"#Confusion Matrix \npd.crosstab(y_val,\n pred>0.66,\n rownames=['Went on Backorder'],\n colnames=['Predicted going on Backorder?'])",
"_____no_output_____"
],
[
"#Accuracy of model\naccuracy_score(y_val,pred>0.66)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
ecda816c824f52838339b11620cf00a9d9fe8d98 | 11,059 | ipynb | Jupyter Notebook | notebooks/Hacking_cortex.loom.ipynb | HumanCellAtlas/expression_matrix_2_ontology | 74048e0d4cd74b7d77199d1520e02d6c886964d9 | [
"Apache-2.0"
] | 1 | 2020-06-23T21:23:00.000Z | 2020-06-23T21:23:00.000Z | notebooks/Hacking_cortex.loom.ipynb | HumanCellAtlas/expression_matrix_2_ontology | 74048e0d4cd74b7d77199d1520e02d6c886964d9 | [
"Apache-2.0"
] | 1 | 2019-11-28T16:26:46.000Z | 2019-11-28T16:26:46.000Z | notebooks/Hacking_cortex.loom.ipynb | HumanCellAtlas/expression_matrix_2_ontology | 74048e0d4cd74b7d77199d1520e02d6c886964d9 | [
"Apache-2.0"
] | 1 | 2019-11-29T09:25:02.000Z | 2019-11-29T09:25:02.000Z | 32.91369 | 545 | 0.494529 | [
[
[
"import requests\n\ncortex_request = requests.get(\"http://loom.linnarssonlab.org/clone/Previously%20Published/Cortex.loom\", stream=True)\ncortex_file = open(\"cortex.loom\", \"wb\")\ncortex_file.write(cortex_request.raw.read())\ncortex_file.close()",
"_____no_output_____"
],
[
"import loompy\ncortex = loompy.connect(\"cortex.loom\")",
"_____no_output_____"
],
[
"list(cortex.attrs.items())",
"_____no_output_____"
],
[
"cortex.shape",
"_____no_output_____"
],
[
"def list_attributes(attrs):\n for k,v in attrs:\n uval = list(set(v))\n if len(uval) < len(v):\n print(k + ' => ' + str(uval[0:50]))",
"_____no_output_____"
],
[
"list_attributes(cortex.ra.items())",
"Gene => ['Mbl2', 'Cd163', 'Mir1898', 'Baz2b', 'P2ry13', 'Spag8', 'Rnase4', 'Mir654', 'Ip6k3', 'Gm16063', 'Fancf', 'Nfix', 'Cpb2', 'H2-Ab1', 'Ptpdc1', 'Hao2', 'Ammecr1', 'r_RLTR13G', 'Vmn1r31', 'Kcnh5', 'Gria1', 'Lims1', 'Rusc1', 'Chst14', 'Syf2', 'Glis3', '4833411C07Rik', 'r_MMERGLN-int', '8030462N17Rik', '1810007D17Rik', 'Gm8773', 'Map1lc3a', 'Speer6-ps1', 'Alpk3', 'Suclg2', 'Lmbrd2', 'Krt20', 'C430002E04Rik', 'Cilp', 'Slc44a4', 'Havcr2', 'A330040F15Rik', 'Ncoa3', 'Phka2', 'Cdca2', 'Zfp521', 'Kctd19', 'Ccer1', 'Chrm1', 'Acadl']\nGeneGroup => ['2', '9', '5', '1', '6', '-1', '0', '7', '8', '3', '4']\nGeneType => ['mRNA', 'Mitochondrial', 'Repeat', 'Spikein']\n"
],
[
"list_attributes(cortex.ca.items())",
"Age => ['23', '25', '27', '21', '28', '20', '31', '26', '22', '24']\nClass => ['oligodendrocytes', 'astrocytes_ependymal', 'pyramidal SS', 'endothelial-mural', 'pyramidal CA1', 'interneurons', 'microglia']\nDiameter => ['8.49', '9.72', '8.84', '9.88', '9.13', '9.82', '7.96', '19.1', '8.86', '20.3', '7.48', '8.19', '9.84', '10.2', '7.5', '9.62', '7.09', '8.36', '10.3', '21.4', '15.6', '17.4', '9.05', '6.75', '8.2', '7.82', '9.86', '13.7', '7.32', '25.6', '6.76', '18.2', '6.06', '6.59', '8.15', '19.2', '8.68', '9.91', '8.64', '8.34', '15', '9.81', '8.94', '6.45', '8.47', '7.54', '6.91', '18.8', '7.11', '9.92']\nGroup => ['2', '9', '5', '1', '6', '7', '8', '3', '4']\nSex => ['0', '-1', '1']\nSubclass => ['Int15', 'Mgl2', 'Int4', 'Int10', 'Peric', 'Pvm1', 'Oligo1', '(none)', 'S1PyrL5', 'Int8', 'CA1Pyr1', 'Pvm2', 'Vend1', 'Oligo5', 'Vend2', 'Int7', 'Int12', 'Oligo6', 'Oligo4', 'Vsmc', 'S1PyrL4', 'Astro2', 'Int2', 'Int11', 'Int13', 'CA2Pyr2', 'SubPyr', 'Int14', 'Int5', 'S1PyrL6b', 'Epend', 'Int1', 'ClauPyr', 'CA1Pyr2', 'Oligo3', 'S1PyrL23', 'Int6', 'Int9', 'CA1PyrInt', 'Int3', 'S1PyrDL', 'S1PyrL6', 'Int16', 'S1PyrL5a', 'Choroid', 'Astro1', 'Mgl1', 'Oligo2']\nTissue => ['sscortex', 'ca1hippocampus']\nTotal_mRNA => ['2778', '7179', '26162', '11662', '38564', '13051', '19995', '3768', '7731', '19655', '26628', '8953', '22101', '6762', '10936', '3842', '22626', '14242', '3173', '41451', '14306', '3276', '14983', '3558', '26744', '4913', '13949', '36481', '8350', '24596', '25910', '13509', '19108', '4022', '5327', '12920', '16869', '9711', '4509', '3285', '4016', '14998', '10302', '10820', '32466', '18509', '7931', '23328', '11542', '4568']\nWell => ['93', '31', '4', '48', '9', '45', '6', '43', '80', '61', '55', '89', '20', '92', '30', '27', '71', '63', '3', '70', '34', '96', '5', '47', '42', '28', '1', '78', '54', '15', '24', '37', '83', '88', '87', '58', '85', '18', '13', '51', '62', '2', '50', '36', '11', '68', '95', '22', '38', '77']\n"
]
],
[
[
"#### Fields we might provide semantic maps for:\n\n* Gene type\n* Class (= Cell type)\n* subClass ( = more specific Cell type)\n* Tissue\n* Sex (but what do the entries mean?)\n\nWith the help of the paper and some searching on the Ontology Lookup Service, these can easily be mapped to cell ontology terms: 'interneuron': \"CL:0000099\", 'oligodendrocyte': 'CL:0000128', 'microglial cell' : 'CL:0000129', 'pyramidal neuron' : 'CL:0000598', 'ependymal cell': 'CL:0000065', 'astrocyte' : 'CL:0000127', 'endothelial cell': 'CL:0000115'\n\nNote that some of the annotation strings map to multiple cell types, for example, astrocytes_ependymal corresponds to 'ependymal cell': 'CL:0000065' OR 'astrocyte' : 'CL:0000127'\n\nSimilarly, the two values in the tissue field can be mapped as follows:\n\n'CA1 field of hippocampus': 'UBERON:0003881', 'somatosensory cortex': 'UBERON:0008930'\n",
"_____no_output_____"
]
],
[
[
"# (Crude) Function to roll map element\n\ndef roll_map(name, applicable_to, maps_to, relation = '', obj = ''):\n \n out = { \"name\": name, \"applicable_to\": applicable_to,\n \"maps_to\": maps_to }\n if relation:\n out['subject_of'] = { \"relation\": relation, \"object\": obj }\n return out\n \n\nmappings = []\n\nmappings.append(roll_map(\"interneurons\", [\"ca.Class\"], { \"name\": \"interneuron\", \"id\": \"CL:0000099\" },\n relation={ \"name\": \"has_soma_location\", \"id\": \"RO:0002100\" }, obj=\"ca.Tissue\" ))\n\n\n# further mappings can be appended here in the same way:\n# mappings.append(roll_map(...))",
"_____no_output_____"
],
[
"# Validating JSON\n\nimport json\n\n# Inserting JSON into header",
"_____no_output_____"
],
[
"# Function to translate dot paths to Loom attributes.\n\nimport warnings\nimport json\nfrom jsonpath_rw import parse\nimport numpy\n\ndef check_string(x):\n if isinstance(x, numpy.str_):\n return True\n else:\n warnings.warn(\"Specified path does not point to list of strings.\")\n return False\n \ndef is_json(myjson):\n try:\n json_object = json.loads(myjson)\n except:\n return False\n return True\n\ndef dot_path2jpath(dot_path, json_string):\n j = json.loads(json_string)\n path = parse(dot_path)\n return path.find(j)\n\ndef check_attr(attr_type):\n pass\n \ndef resolve_dot_path(loom, path):\n elements = path.split('.')\n if elements[0] == \"ca\":\n if not elements[1] in loom.ca.keys():\n warnings.warn(\"There is no %s column present.\" % elements[1])\n return False\n column = loom.ca[elements[1]]\n print(type(column))\n print(type(column[0])) \n if check_string(column[0]):\n return list(set(column))\n if elements[0] == \"ra\":\n row = loom.ra[elements[1]] \n if check_string(row[0]):\n return list(set(row)) \n if elements[0] == \"attrs\":\n attr = loom.attrs[elements[1]]\n if is_json(attr):\n return dot_path2jpath('.'.join(elements[2:]), attr)\n else:\n return [attr]\n ",
"_____no_output_____"
],
[
"resolve_dot_path(cortex, \"ca.Class\")",
"<class 'numpy.ndarray'>\n<class 'numpy.str_'>\n"
],
[
" \n# Validation - do all maps resolve to terms used.\n\n# Validation - ontology term IRI resolution\n\n# Function to query by ontology term\n\ndef query_by_ontology_name():\n return\n \ndef query_by_ontology_id():\n return\n \n\n# Function for Ontology term query with grouping (via OLS)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
ecda82fa3d45e5722d1f18bfb9b9911f15e920cb | 17,002 | ipynb | Jupyter Notebook | valid/CompareCoNLL.ipynb | Ayushk4/Bi-LSTM-CNN-CRF | 4f207a38cadfa3498de6573ef7a61ebfcfec30ae | [
"MIT"
] | 1 | 2020-09-03T17:26:50.000Z | 2020-09-03T17:26:50.000Z | valid/CompareCoNLL.ipynb | Ayushk4/Bi-LSTM-CNN-CRF | 4f207a38cadfa3498de6573ef7a61ebfcfec30ae | [
"MIT"
] | null | null | null | valid/CompareCoNLL.ipynb | Ayushk4/Bi-LSTM-CNN-CRF | 4f207a38cadfa3498de6573ef7a61ebfcfec30ae | [
"MIT"
] | null | null | null | 41.671569 | 6,051 | 0.424715 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
ecda83a3cdb961a371e375098b9dc192d23c6f1b | 110,695 | ipynb | Jupyter Notebook | colab/decision_tree.ipynb | shamimreza/ml-python | 3cd57b455561cdac4b330fdd9ba9c899698c8b5e | [
"MIT"
] | 40 | 2018-10-14T18:15:22.000Z | 2022-03-24T16:08:54.000Z | colab/decision_tree.ipynb | Rifat007/ml-python | ca044bff54ec0edbfd47fd4612d4304b819868c7 | [
"MIT"
] | 1 | 2021-06-13T04:08:30.000Z | 2021-06-14T13:38:51.000Z | colab/decision_tree.ipynb | Rifat007/ml-python | ca044bff54ec0edbfd47fd4612d4304b819868c7 | [
"MIT"
] | 53 | 2019-04-16T09:50:44.000Z | 2022-02-04T19:22:47.000Z | 133.36747 | 76,482 | 0.748037 | [
[
[
"<a href=\"https://colab.research.google.com/github/raqueeb/ml-python/blob/master/colab/decision_tree.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# **How Does a Decision Tree Work? The Iris Dataset with the Naked Eye**",
"_____no_output_____"
],
[
"**Why so little talk about machine learning algorithms?**\n\nI usually talk very little about machine learning algorithms. First of all, I haven't sat down to do research; most of the time I work with the mindset of transforming the outcome of many years of research into a product. Secondly, priority. In this age of little time, rather than churning through old material, I spend more of my time on how to make it usable.\n\nIn my experience, which kind of algorithm works well for which application has already become fairly standard in industry. If you know that properly, you can start working right now. Many people spend so much time on how an algorithm works that they lose enthusiasm at the real moment, i.e. during product development. Now, if someone wants to do academic research, nobody is stopping them from building new algorithms. But there are so many algorithms in the world that just tuning their hyperparameters properly would take years; here I am content with the practical side of things. What I need is a product. Working on behalf of the government with the world's biggest companies is what drove this realization home. More important than how the insight came about is what we want to do with that insight.",
"_____no_output_____"
],
[
"**Model interpretation**\n\nMeanwhile, the room humans have for understanding model interpretation is gradually shrinking. For example, we have seen that a 'decision tree' is quite intuitive, and its decision-making logic can be interpreted easily. We usually call such models 'white box' models: from the outside you can tell how the model is making its decisions. With the naked eye.\n\nOn the other hand, what happens inside a 'random forest' model, the one I have seen used the most, or inside a neural network, is becoming quite hard for humans to know because of the complexity. By the power of mathematics they do give very good predictions, and the calculations around us show their outcomes reach noticeably better accuracy than earlier algorithms. But the thing people cannot grasp, or that we cannot capture in a plain way, is: why do these predictions happen? Working out that math over so much data has become impossible for ordinary people.\n\nSuppose our neural network has correctly picked a person out of a picture. As humans it is quite hard for us to understand which particular attributes actually 'contributed' to this prediction. Did the model recognize the person by the eyes, or by some features of the face, or the nose, or the way they stand? Or from the kind of background they always use? Which of the thousands of data points 'contribute' to these things, or correlate negatively with them, is hard to work out from an ordinary perspective.",
"_____no_output_____"
],
[
"**Why a decision tree?**\n\nMany algorithms have been worked through by now. Among them, my favorite model for explaining things to people is this decision tree. It shows us very simple classification/regression rules in a way humans can understand. I wrote a long chapter on decision trees in my previous book. Since the whole book is on the internet, take a quick look; no problem if you don't, though, because I will cover that material here with Python's libraries.\n\nThe very foundation of 'random forest', that incredibly popular and immensely powerful algorithm, comes from this 'decision tree'. And that is why in this chapter we will show how to make predictions on this Iris dataset using a decision tree, which amounts to an 'end to end' demonstration. Our scikit-learn uses a 'CART' ('Classification And Regression Tree') algorithm here to grow these decision trees, which is actually a binary tree.\n\nHere the non-leaf nodes always have two children, meaning the decision tree grows based only on yes-or-no answers to its questions. I brought the idea over from Aurélien Géron's book, a wonderful book for hands-on work.",
"_____no_output_____"
],
[
"**Decision trees, hands on**\n\nBefore digging into the decision tree, let's load our Iris dataset. After making a prediction, the graph definition in the output will tell us how this decision tree reached its decision. Ready, everyone?",
"_____no_output_____"
],
[
"We imported the decision tree classifier. For ease of understanding, we did not let our decision tree grow too many branches from the start. Also, building the decision tree with all four features of our Iris dataset could make the branches very large, so we cut it down to two features. Here we use NumPy array slicing to extract the features we need. Our aim is to understand the whole thing end to end, which is why we dropped the extra complexity via this hyperparameter.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_iris\nfrom sklearn.tree import DecisionTreeClassifier\n\niris = load_iris()",
"_____no_output_____"
],
[
"# লোড করি পুরো ডেটা, কী দেখাবে?\nX = iris.data\n# দেখি কী আছে ভেতরে?\nX",
"_____no_output_____"
],
[
"# We take one feature - you guessed it, petal width - via NumPy array slicing \nX = iris.data[:, 3:]\n# Let's see what came through now\nX",
"_____no_output_____"
],
[
"# We take the last two features - petal length and petal width \nX = iris.data[:, 2:]\n# Let's see what came through now\nX",
"_____no_output_____"
],
[
"# Training the model \ny = iris.target\n\n# A depth of only 2 \ntree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)\ntree_clf.fit(X, y)",
"_____no_output_____"
]
],
[
[
"**Let's create the picture**\n\nLet's convert our graph definition file into the viewable format we need: PNG. To see it in inline mode inside Google Colab, we call in a new method. Wonderful. Our first decision tree will look something like this. Now let's see how this decision tree makes its predictions. Its depth level is only 2, and the number of features used is also 2: the flower's petal length and petal width. Two numbers. This means we pruned away a lot of the decision tree's branches to avoid complexity.\n",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import export_graphviz\nfrom graphviz import Source\n\nexport_graphviz(\n tree_clf,\n out_file='tree.dot',\n feature_names=iris.feature_names[2:],\n class_names=iris.target_names,\n rounded=True,\n filled=True\n )",
"_____no_output_____"
]
],
[
[
"Let's convert our graph definition file into the viewable format we need: PNG. To see it in inline mode inside Google Colab, we call in a new method. \n\nWonderful. Our first decision tree will look something like this. Now let's see how this decision tree makes its predictions. Its depth level is only 2, and the number of features used is also 2: the flower's petal length and petal width. Two numbers. This means we pruned away a lot of the decision tree's branches to avoid complexity.\n",
"_____no_output_____"
]
],
[
[
"# Convert into the image format we need; since this is Google Colab \n# there is no need to install graphviz separately (it is needed in Jupyter Notebook) \nfrom subprocess import call\ncall(['dot', '-Tpng', 'tree.dot', '-o', 'tree.png'])\n\n# Display it in our notebook \nfrom IPython.display import Image\nImage(filename = 'tree.png')",
"_____no_output_____"
]
],
[
[
"**Prediction with the naked eye**\n\nSuppose your friend has given you a flower of the Iris genus as a gift, and you want to classify it by species. Naturally, you start with the root node. Its depth is 0; you begin at the very top. The question at this node is: is this flower's petal length less than 2.45 centimeters? If the answer is yes, i.e. true, the decision moves down the root node to the left child node. Its depth is 1; in the picture it is the left leaf node. It is a leaf node because it has no child nodes, meaning it faces no new true-or-false question. Looking inside this leaf node's box tells us the predicted class: here our decision tree has predicted that our flower is of the Iris setosa species.\n\n**Another prediction**\n\nSuppose you are now given another flower, but this time its petal length is more than 2.45 centimeters. What happens then? Since the answer is 'no', i.e. false, our decision moves down to the right child node of the root node. Its depth is 1, the box on the right. We can see that it is not a leaf node, which means we have to face one more question. The new question is: is the flower's petal width less than 1.75 centimeters? If the answer is true, the flower is most likely Iris versicolor: depth level 2, the left leaf node. If the answer is 'no', it will be Iris virginica: depth level 2, the right leaf node.\n\n**Training instances**\n\nA node's 'samples' attribute tells us how many training instances that node applies to. At the root node it applies to the full 150 training instances, since the whole dataset represents the three species with 50 flowers each: 50, 50, and 50. We can also see that 100 training instances have a petal length greater than 2.45 centimeters: the child node at depth 1 on the right. Of these 100, 54 flowers have a petal width of less than 1.75 centimeters: depth 2, the left leaf node. A node's 'value' attribute says how many training instances fall under each class. For example, the bottom-right leaf node says that of its instances 0 are Iris setosa, 1 is versicolor, and the remaining 45 are virginica. Wonderful, isn't it?\n",
"_____no_output_____"
],
[
"GitHub link\n\nhttps://github.com/raqueeb/ml-python/blob/master/colab/decision_tree.ipynb\n\nGoogle Colab link \n\nhttps://colab.research.google.com/github/raqueeb/ml-python/blob/master/colab/decision_tree.ipynb",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
ecda89ffa6bda133d0c7d45fe1b45697d7d95e48 | 636,469 | ipynb | Jupyter Notebook | Examples/Notebooks/Machine Learning.ipynb | porterchild/Plotly.swift | e5cf41733164bebed8e89a97d8c9ac8a61032a12 | [
"MIT"
] | 71 | 2020-01-30T18:34:31.000Z | 2022-03-31T11:17:47.000Z | Examples/Notebooks/Machine Learning.ipynb | porterchild/Plotly.swift | e5cf41733164bebed8e89a97d8c9ac8a61032a12 | [
"MIT"
] | 23 | 2020-02-06T15:17:29.000Z | 2022-01-08T18:31:01.000Z | Examples/Notebooks/Machine Learning.ipynb | porterchild/Plotly.swift | e5cf41733164bebed8e89a97d8c9ac8a61032a12 | [
"MIT"
] | 9 | 2020-04-03T22:12:13.000Z | 2022-03-08T04:32:07.000Z | 634.565304 | 596,779 | 0.656909 | [
[
[
"# Machine Learning with Swift for TensorFlow and Plotly\n\n[![GitHubBadge]][GitHubLink] [![ColabBadge]][ColabLink]\n\n\n[ColabBadge]: https://colab.research.google.com/assets/colab-badge.svg \"Run notebook in Google Colab\"\n[ColabLink]: https://colab.research.google.com/github/vojtamolda/Plotly.swift/blob/main/Examples/Notebooks/Machine%20Learning.ipynb\n\n[GitHubBadge]: https://img.shields.io/badge/|-Edit_on_GitHub-green.svg?logo=github \"Edit notebook's source code on GitHub\"\n[GitHubLink]: https://github.com/vojtamolda/Plotly.swift/blob/main/Examples/Notebooks/Machine%20Learning.ipynb",
"_____no_output_____"
],
[
"\n## Introduction\n\nIn this tutorial, we'll take a look at using Swift for TensorFlow in conjunction with Plotly to build a machine learning model and interactively visualize the resulting data in the `swift-jupyter` environment.\n\n### Swift for TensorFlow\n\nWith Tensorflow's Python API experiencing performance limitations and lacking features, Swift for TensorFlow began development as a next-generation platform for machine learning, incorporating the latest research across machine learning, compilers, differentiable programming, systems design, and beyond. Read more about it on its [website](https://www.tensorflow.org/swift) and see the source code on [GitHub](https://github.com/tensorflow/swift).\n\n### Plotly\n\nThe [Plotly.swift](https://github.com/vojtamolda/Plotly.swift) framework is an interactive plotting library that lets you plot data natively in Swift. It's based on converting the plots to an intermediate JSON representation that's later render via user-side JavaScript in the browser. Read more and inspect the source code on [GitHub](https://github.com/vojtamolda/Plotly.swift).\n\n### Significance\n\nPreviously, Swift for Tensorflow projects needed to fallback to Python libraries such as `matplotlib` and `numpy` for data analysis and visualization. However, as more and more features fallback to Python, this results in these projects taking on some (not all) of the previous performance limitations and issues that are supposed to be avoided by using Swift.\n\nThis tutorial creates a project to build, train, analyze, and visualize a machine learning model all natively in Swift, with no Python fallback. Hopefully, this will be a step forward towards the goal of having S4TF machine learning researchers using pure Swift in the future.",
"_____no_output_____"
]
],
[
[
"%install '.package(url: \"https://github.com/tensorflow/swift-models\", .branch(\"tensorflow-0.6\"))' Datasets\n%install '.package(url: \"https://github.com/vojtamolda/Plotly.swift\", .branch(\"main\"))' Plotly\nprint(\"\\u{001B}[2J\") //removes installation output\n\n%include \"EnableIPythonDisplay.swift\"",
"\r\n"
]
],
[
[
"Finally, we'll configure all of the necessary imports.",
"_____no_output_____"
]
],
[
[
"import TensorFlow\nimport Datasets\nimport Plotly\n\n// No PythonKit import! :D",
"_____no_output_____"
]
],
[
[
"## Initializing the Dataset\n\nFor our tutorial, we'll be using the MNIST dataset with a batch size of 500. There are 60,000 images in training and 10,000 in testing, both of which should divide into 500 very nicely.",
"_____no_output_____"
]
],
[
[
"let batchSize = 500\nlet mnist = Datasets.MNIST(batchSize: batchSize)",
"Loading resource: train-images-idx3-ubyte\r\nLoading local data at: /content/train-images-idx3-ubyte\nSuccesfully loaded resource: train-images-idx3-ubyte\nLoading resource: train-labels-idx1-ubyte\nLoading local data at: /content/train-labels-idx1-ubyte\nSuccesfully loaded resource: train-labels-idx1-ubyte\nLoading resource: t10k-images-idx3-ubyte\nLoading local data at: /content/t10k-images-idx3-ubyte\nSuccesfully loaded resource: t10k-images-idx3-ubyte\nLoading resource: t10k-labels-idx1-ubyte\nLoading local data at: /content/t10k-labels-idx1-ubyte\nSuccesfully loaded resource: t10k-labels-idx1-ubyte\n"
]
],
[
[
"## Building the Model\n\nNext, let's build a very simple model.\n\nFirst, we define a simple struct, `Model`, that conforms to the `Layer` protocol. This model will take in the input image and return the class output.\n\nWe'll use one `Flatten<Float>` layer with hidden and output `Dense<Float>` layers. Feel free to play around with different sizes, layers, etc. to see how the model would change. Note that the first input size must be set to `28 * 28`, as the images provided in MNIST are of that size, and the last output size must be set to `10`, since that is the number of classes in MNIST. Also, the `outputSize` of `hidden` and the `inputSize` of `output` must match.\n\nThe `Layer` protocol requires a function, `callAsFunction`, that is called to pass the `input` through our model.",
"_____no_output_____"
]
],
[
[
"struct Model: TensorFlow.Layer {\n var flatten = TensorFlow.Flatten<Float>()\n var hidden = TensorFlow.Dense<Float>(inputSize: 28 * 28, outputSize: 20, activation: relu)\n var output = TensorFlow.Dense<Float>(inputSize: 20, outputSize: 10, activation: softmax)\n \n @differentiable\n func callAsFunction(_ input: Tensor<Float>) -> Tensor<Float> {\n return input.sequenced(through: flatten, hidden, output)\n }\n}",
"_____no_output_____"
]
],
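To make the data flow concrete, here is a hypothetical shape walk-through (written in plain Python purely for illustration; the layer sizes mirror the Swift struct above): a 28×28 image is flattened to 784 values, projected to 20 hidden units, and finally mapped to 10 class scores.

```python
# Hypothetical sketch (Python, illustration only): the tensor shapes produced
# by each layer of the Swift `Model` above for one batch of MNIST images.
def forward_shapes(batch_size, height=28, width=28, hidden=20, classes=10):
    flattened = (batch_size, height * width)   # Flatten: 28 x 28 -> 784
    hidden_out = (batch_size, hidden)          # Dense: 784 -> 20 (relu)
    class_scores = (batch_size, classes)       # Dense: 20 -> 10 (softmax)
    return [flattened, hidden_out, class_scores]
```

With the tutorial's batch size of 500, `forward_shapes(500)` traces out `(500, 784)`, `(500, 20)`, and `(500, 10)`.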
[
[
"Lastly, we can initialize an instance with:",
"_____no_output_____"
]
],
[
[
"var model = Model()",
"_____no_output_____"
]
],
[
[
"## Training the Model\n\nIn order to train the model, we'll have to determine the number of epochs our model trains for. This will be the number of times our model will \"pass through\" the entire dataset. For our example, we'll set this number equal to 10.",
"_____no_output_____"
]
],
[
[
"let numEpochs = 10",
"_____no_output_____"
]
],
[
[
"We also need an optimizer. This will \"shape\" our model as we train it. Let's use the Adam optimizer, which is an adaptive learning rate optimization algorithm derived from adaptive moment estimation.",
"_____no_output_____"
]
],
[
[
"let optimizer = TensorFlow.Adam(for: model)",
"_____no_output_____"
]
],
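For reference, Adam maintains exponential moving averages of the gradient and of its elementwise square. A standard sketch of the update is below; the hyperparameter values (e.g. $\beta_1 = 0.9$, $\beta_2 = 0.999$) are the usual defaults from the original paper, and the Swift `Adam` initializer may use its own defaults:

```latex
m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2,

\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad
\theta_t = \theta_{t-1} - \frac{\eta\, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```

Here $g_t$ is the gradient at step $t$, $\eta$ the learning rate, and $\epsilon$ a small constant for numerical stability.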
[
[
"### Benchmarking\n\nWe can create a basic `struct` to hold and update some data while training.",
"_____no_output_____"
]
],
[
[
"struct Stat {\n var correct: Int = 0\n var loss: Float = 0\n mutating func update(logits: Tensor<Float>, labels: Tensor<Int32>) {\n self.correct += Int(Tensor<Int32>(logits.argmax(squeezingAxis: 1) .== labels).sum().scalarized())\n }\n}",
"_____no_output_____"
]
],
[
[
"We'll also create some arrays to record values during training, so that afterwards we can visualize them with Plotly.",
"_____no_output_____"
]
],
[
[
"var epochs = stride(from: Float(1), to: Float(numEpochs+1), by: Float(1))\nvar trainLoss: [Float] = []\nvar trainAccuracy: [Float] = []\nvar testLoss: [Float] = []\nvar testAccuracy: [Float] = []",
"_____no_output_____"
]
],
[
[
"Next, we can run the model training loop.",
"_____no_output_____"
]
],
[
[
"//training loop\nfor epoch in epochs {\n    //training phase\n    var trainStat = Stat()\n    Context.local.learningPhase = .training\n    for i in 0 ..< mnist.trainingSize / batchSize {\n        //get batch of images/labels\n        let images = mnist.trainingImages.minibatch(at: i, batchSize: batchSize)\n        let labels = mnist.trainingLabels.minibatch(at: i, batchSize: batchSize)\n        //compute gradient\n        let (loss, gradients) = valueWithGradient(at: model) { model -> Tensor<Float> in\n            let logits = model(images)\n            trainStat.update(logits: logits, labels: labels)\n            return softmaxCrossEntropy(logits: logits, labels: labels)\n        }\n        trainStat.loss += loss.scalarized()\n        //update model\n        optimizer.update(&model, along: gradients)\n    }\n    \n    //inference phase\n    var testStat = Stat()\n    Context.local.learningPhase = .inference\n    for i in 0 ..< mnist.testSize / batchSize {\n        //get batch of images/labels\n        let images = mnist.testImages.minibatch(at: i, batchSize: batchSize)\n        let labels = mnist.testLabels.minibatch(at: i, batchSize: batchSize)\n        //compute loss\n        let logits = model(images)\n        testStat.update(logits: logits, labels: labels)\n        let loss = softmaxCrossEntropy(logits: logits, labels: labels)\n        testStat.loss += loss.scalarized()\n    }\n    \n    //calculate and store data (120 train batches, 20 test batches)\n    trainLoss.append(Float(trainStat.loss)/Float(120))\n    trainAccuracy.append(Float(trainStat.correct)/Float(60000))\n    testLoss.append(Float(testStat.loss)/Float(20))\n    testAccuracy.append(Float(testStat.correct)/Float(10000))\n    \n    //print data\n    print(\"Epoch: \\(Int(epoch))\")\n    print(\"Train Loss: \\(trainLoss[Int(epoch)-1])\")\n    print(\"Train Accuracy: \\(trainAccuracy[Int(epoch)-1])\")\n    print(\"Test Loss: \\(testLoss[Int(epoch)-1])\")\n    print(\"Test Accuracy: \\(testAccuracy[Int(epoch)-1])\")\n    print(\"\\n\")\n}",
"Epoch: 1\r\nTrain Loss: 1.9428478\r\nTrain Accuracy: 0.6056833\r\nTest Loss: 1.6811168\r\nTest Accuracy: 0.8472\r\n\r\n\nEpoch: 2\nTrain Loss: 1.6384782\nTrain Accuracy: 0.87121665\nTest Loss: 1.5965374\nTest Accuracy: 0.8994\n\n\nEpoch: 3\nTrain Loss: 1.5905955\nTrain Accuracy: 0.9001333\nTest Loss: 1.5723908\nTest Accuracy: 0.914\n\n\nEpoch: 4\nTrain Loss: 1.572142\nTrain Accuracy: 0.91148335\nTest Loss: 1.5605278\nTest Accuracy: 0.9195\n\n\nEpoch: 5\nTrain Loss: 1.5617635\nTrain Accuracy: 0.91756666\nTest Loss: 1.5532496\nTest Accuracy: 0.923\n\n\nEpoch: 6\nTrain Loss: 1.5548134\nTrain Accuracy: 0.9211\nTest Loss: 1.5481648\nTest Accuracy: 0.9259\n\n\nEpoch: 7\nTrain Loss: 1.5496384\nTrain Accuracy: 0.9246333\nTest Loss: 1.5442741\nTest Accuracy: 0.9288\n\n\nEpoch: 8\nTrain Loss: 1.5454963\nTrain Accuracy: 0.92775\nTest Loss: 1.5412167\nTest Accuracy: 0.9304\n\n\nEpoch: 9\nTrain Loss: 1.5420247\nTrain Accuracy: 0.93065\nTest Loss: 1.5386671\nTest Accuracy: 0.9318\n\n\nEpoch: 10\nTrain Loss: 1.5390309\nTrain Accuracy: 0.9328833\nTest Loss: 1.5365183\nTest Accuracy: 0.9341\n\n\n"
]
],
[
[
"## Visualizing the Data\n\nNow, let's visualize our data with Plotly! As demonstrated below, Plotly is very flexible, easy, and intuitive to use.\n\nFrom our arrays, we've gathered the following data:",
"_____no_output_____"
]
],
[
[
"print(\"Train Loss: \\(trainLoss)\")\nprint(\"Train Accuracy: \\(trainAccuracy)\")\nprint(\"Test Loss: \\(testLoss)\")\nprint(\"Test Accuracy: \\(testAccuracy)\")",
"Train Loss: [1.9428478, 1.6384782, 1.5905955, 1.572142, 1.5617635, 1.5548134, 1.5496384, 1.5454963, 1.5420247, 1.5390309]\r\nTrain Accuracy: [0.6056833, 0.87121665, 0.9001333, 0.91148335, 0.91756666, 0.9211, 0.9246333, 0.92775, 0.93065, 0.9328833]\r\nTest Loss: [1.6811168, 1.5965374, 1.5723908, 1.5605278, 1.5532496, 1.5481648, 1.5442741, 1.5412167, 1.5386671, 1.5365183]\r\nTest Accuracy: [0.8472, 0.8994, 0.914, 0.9195, 0.923, 0.9259, 0.9288, 0.9304, 0.9318, 0.9341]\r\n"
]
],
[
[
"### Accuracy\nNext, we'll make two `Scatter` traces: one for training accuracy and the other for test accuracy:",
"_____no_output_____"
]
],
[
[
"var trainAccuracyTrace = Plotly.Scatter(\n    name: \"Train Accuracy\",\n    x: epochs,\n    y: trainAccuracy,\n    line: .init(color: .orange)\n)\nvar testAccuracyTrace = Plotly.Scatter(\n    name: \"Test Accuracy\",\n    x: epochs,\n    y: testAccuracy,\n    line: .init(color: .lightBlue)\n)",
"_____no_output_____"
]
],
[
[
"We can then configure things such as the title, axes labels, and line settings with a `Layout` object:",
"_____no_output_____"
]
],
[
[
"let accuracyLayout = Plotly.Layout(\n title: \"Accuracy\",\n xAxis: .preset(title: \"Epoch\"),\n yAxis: .preset(title: \"Score\")\n)",
"_____no_output_____"
]
],
[
[
"Finally, we can display an interactive figure in the notebook:",
"_____no_output_____"
]
],
[
[
"let accuracyFigure = Plotly.Figure(data: [trainAccuracyTrace, testAccuracyTrace], layout: accuracyLayout)\ntry accuracyFigure.display()",
"_____no_output_____"
]
],
[
[
"From this, we can see that at first, both accuracies started out relatively low, but after each epoch they improved in an inverse exponential pattern.",
"_____no_output_____"
],
[
"### Loss\n\nWe can do a similar plot for the losses that decrease in an exponential pattern:\n",
"_____no_output_____"
]
],
[
[
"var trainLossTrace = Plotly.Scatter(\n    name: \"Train Loss\",\n    x: epochs,\n    y: trainLoss,\n    line: .init(color: .orange)\n)\nvar testLossTrace = Plotly.Scatter(\n    name: \"Test Loss\",\n    x: epochs,\n    y: testLoss,\n    line: .init(color: .lightBlue)\n)\n\nlet lossLayout = Plotly.Layout(\n    title: \"Loss\",\n    xAxis: .preset(title: \"Epoch\"),\n    yAxis: .preset(title: \"Score\")\n)\n\nlet lossFigure = Plotly.Figure(data: [trainLossTrace, testLossTrace], layout: lossLayout)\ntry lossFigure.display()",
"_____no_output_____"
]
],
[
[
"### Confusion Matrix\n\nA [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) is a visualization of the kinds of errors our model makes. For a pair of classes `i` and `j`, the value at `[i][j]` shows how frequently the class `j` was classified as `i`. Correct predictions form the diagonal of the matrix where `i == j`, and off-diagonal values represent errors.\n\nThe following code calculates the confusion matrix on the test set:\n",
"_____no_output_____"
]
],
[
[
"let digits = Array(0...9)\nvar confusionMatrix = Tensor<Float>(zeros: [digits.count, digits.count])\n\nContext.local.learningPhase = .inference\nfor i in 0 ..< mnist.testSize / batchSize {\n let images = mnist.testImages.minibatch(at: i, batchSize: batchSize)\n let labels = mnist.testLabels.minibatch(at: i, batchSize: batchSize)\n\n let logits = model(images)\n let predictions = logits.argmax(squeezingAxis: 1)\n\n for (prediction, label) in zip(predictions.scalars, labels.scalars) {\n let iPrediction = TensorRange.index(Int(prediction))\n let iLabel = TensorRange.index(Int(label))\n confusionMatrix[iPrediction, iLabel] += 1\n }\n}",
"_____no_output_____"
]
],
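The bookkeeping the cell above performs is language-agnostic, so here is a hypothetical sketch of the same computation in plain Python (for illustration only; function names are made up): rows index predictions, columns index true labels, and each `(prediction, label)` pair increments one entry.

```python
# Hypothetical illustration (Python, not the Swift API) of the confusion-matrix
# accumulation performed above: matrix[prediction][label] counts occurrences.
def confusion_matrix(predictions, labels, num_classes):
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for prediction, label in zip(predictions, labels):
        matrix[prediction][label] += 1
    return matrix

def normalize_by_label_frequency(matrix):
    # Column sums give how often each true label occurs in the dataset;
    # dividing by them (and scaling by 100) yields percentages per label.
    num_classes = len(matrix)
    freq = [sum(row[j] for row in matrix) for j in range(num_classes)]
    return [[100.0 * matrix[i][j] / freq[j] if freq[j] else 0.0
             for j in range(num_classes)]
            for i in range(num_classes)]
```

For example, with predictions `[0, 1, 1, 2]` against labels `[0, 1, 2, 2]`, the entry at `[1][2]` becomes 1 (one digit "2" misread as "1"), and after normalization it reads 50% because the label "2" appears twice.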
[
[
"The confusion matrix is best visualized as a heatmap with its entries normalized by the frequency of each label in the dataset:",
"_____no_output_____"
]
],
[
[
"let datasetLabelFrequency = confusionMatrix.sum(squeezingAxes: 0)\nlet normalizedConfusionMatrix = confusionMatrix / datasetLabelFrequency * 100\n\nlet confusionMatrixHeatmap = Plotly.Heatmap(\n name: \"Accuracy\",\n z: normalizedConfusionMatrix,\n x: digits, y: digits,\n hoverTemplate: .constant(\"\"\"\n <span style='font-size: 1.5em'>%{z:.1f}%</span>\n <b>Prediction</b>: <span style='color: red'>%{x}</span> |\n <b>Label</b>: <span style='color: green'>%{y}</span>\n \"\"\"),\n zMin: 0, zMax: 5,\n colorScale: .blues,\n xAxis: .init(\n title: \"Model Prediction\",\n dTick: 1\n ),\n yAxis: .init(\n title: \"Correct Label\",\n dTick: 1\n )\n)\n\nlet layout = Plotly.Layout(\n title: \"Confusion Matrix\",\n width: 600, height: 600\n)\n\nlet confusionMatrixFigure = Plotly.Figure(data: [confusionMatrixHeatmap], layout: layout)\ntry confusionMatrixFigure.display()",
"_____no_output_____"
]
],
[
[
"### Examples of Errors\n\nThe following code collects all test images that are not correctly classified by our model:\n",
"_____no_output_____"
]
],
[
[
"var misclassified :[(image: Tensor<Float>, label: Int, prediction: Int)] = []\n\nContext.local.learningPhase = .inference\nfor i in 0 ..< mnist.testSize / batchSize {\n let images = mnist.testImages.minibatch(at: i, batchSize: batchSize)\n let labels = mnist.testLabels.minibatch(at: i, batchSize: batchSize)\n\n let logits = model(images)\n let predictions = logits.argmax(squeezingAxis: 1)\n\n for i in 0 ..< predictions.scalarCount where predictions[i] != labels[i] {\n let wrong = (image: images[i],\n label: Int(labels.array[i].scalar!),\n prediction: Int(predictions.array[i].scalar!))\n misclassified.append(wrong)\n }\n}",
"_____no_output_____"
]
],
[
[
"We display a random sample of the erroneously classified images in a grid:",
"_____no_output_____"
]
],
[
[
"let (rows, columns) = (5, 9)\nvar misclassifiedDigits: [Trace] = []\n\nfor row in 1...rows {\n for column in 1...columns {\n let randomlySelected = misclassified.randomElement()!\n let rgbComponents = Array(repeating: 255 * (1 - randomlySelected.image), count: 3)\n let grayscaleImage = Tensor<Float>(concatenating: rgbComponents, alongAxis: -1)\n \n let misclassifiedDigit = Plotly.Image(\n name: \"Error\",\n z: grayscaleImage,\n hoverTemplate: .constant(\"\"\"\n <b>Prediction</b>: <span style='color: red;'>\\(randomlySelected.prediction)</span> |\n <b>Label</b>: <span style='color: green;'>\\(randomlySelected.label)</span>\n \"\"\"),\n xAxis: .init(uid: UInt(column), ticks: .off, showTickLabels: false,\n showGrid: false, zeroLine: false),\n yAxis: .init(uid: UInt(row), ticks: .off, showTickLabels: false,\n showGrid: false, zeroLine: false)\n )\n misclassifiedDigits.append(misclassifiedDigit)\n }\n}\n\nlet gridLayout = Plotly.Layout(\n title: \"Examples of Errors\",\n grid: .init(rows: rows, columns: columns)\n)\n\nlet examplesOfErrors = Plotly.Figure(data: misclassifiedDigits, layout: gridLayout)\ntry examplesOfErrors.display()",
"_____no_output_____"
]
],
[
[
"## Troubleshooting\n\nDon't be sad if you run into some difficulty with this tutorial; these are new developments in machine learning, and there are bound to be some hiccups along the way :)\n\n### Swift for TensorFlow\n\nIf you're having trouble installing/running Swift for TensorFlow, please join the [Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/swift) and ask for help! Be as detailed as possible, and nice people will help you find a solution.\n\n### Plotly\n\nIf you're having trouble installing/running Plotly, please go to [GitHub Issues](https://github.com/vojtamolda/Plotly.swift/issues) and file an issue! We'll be happy to help you work out any problems you might be facing.\n\n## Conclusion\n\nAnd we're done! We successfully built, trained, analyzed, and visualized a machine learning model all natively in Swift!\n\nThis project was only a step in the right direction toward pure-Swift machine learning development, as Swift for TensorFlow as well as Plotly are still in early-stage active development, with programmers working hard to add new features, fix bugs, and improve usability.\n\nHopefully, through this tutorial it can be seen that Swift for TensorFlow and open source Swift libraries are real contenders for the future of machine learning, and it's entirely possible now to create complete (albeit simple) projects without the need for any Python fallback :P\n\n\n## Credits/Acknowledgments\n\nThis tutorial wouldn't be possible without the previous hard work of other people. It was adapted from the original [SwiftPlot Version](https://github.com/KarthikRIyer/swiftplot/blob/master/Notebooks/Machine%20Learning%20with%20Swift%20for%20TensorFlow%20and%20SwiftPlot.ipynb) for Plotly and is licensed under the Apache 2.0 license. Big thank-yous to the following:\n\n- Swift for TensorFlow team\n- SwiftPlot contributors\n- Karthik Iyer and Ayush Agrawal for their support and guidance\n\n### References\n\nHere are some references that I found helpful while working on this tutorial:\n\n- [S4TF Tutorial (Wierenga)](https://rickwierenga.com/blog/s4tf/s4tf-mnist.html)\n- [S4TF Tutorial (Bolella)](https://heartbeat.fritz.ai/swifty-ml-an-intro-to-swift-for-tensorflow-9edc7045bc0c)\n- [Swift for TensorFlow Github](https://github.com/tensorflow/swift)\n- [Swift for TensorFlow Documentation](https://www.tensorflow.org/swift)\n- [Plotly Github](https://github.com/vojtamolda/Plotly.swift)\n- [Plotly Documentation](https://vojtamolda.github.io/Plotly.swift)\n\n\nThanks for reading, and have fun playing around with Swift for TensorFlow and Plotly!\n\nWritten By: William Zhang\n\nAdapted By: Vojta Molda",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
ecdaaa05c6f252ea785c63d7003580f7a922da35 | 28,578 | ipynb | Jupyter Notebook | tutorials/ranking/bert.ipynb | Ambitioner-c/MatchZoo-py | bb088edce8e01c2c2326ca1a8ac647f0d23f088d | ["Apache-2.0"] | stars: 468 (2019-07-03T02:43:52.000Z – 2022-03-30T05:51:03.000Z) | issues: 126 (2019-07-04T15:51:57.000Z – 2021-07-31T13:14:40.000Z) | forks: 117 (2019-07-04T11:31:08.000Z – 2022-03-18T12:21:32.000Z) | 39.746871 | 172 | 0.487053 | [
[
[
"%run init.ipynb",
"matchzoo version 1.0\n`ranking_task` initialized with metrics [normalized_discounted_cumulative_gain@3(0.0), normalized_discounted_cumulative_gain@5(0.0), mean_average_precision(0.0)]\ndata loading ...\ndata loaded as `train_pack_raw` `dev_pack_raw` `test_pack_raw`\n"
],
[
"preprocessor = mz.models.Bert.get_default_preprocessor()",
"_____no_output_____"
],
[
"train_pack_processed = preprocessor.transform(train_pack_raw)\ndev_pack_processed = preprocessor.transform(dev_pack_raw)\ntest_pack_processed = preprocessor.transform(test_pack_raw)",
"Processing text_left with encode: 100%|██████████| 2118/2118 [00:00<00:00, 4920.66it/s]\nProcessing text_right with encode: 100%|██████████| 18841/18841 [00:11<00:00, 1574.02it/s]\nProcessing length_left with len: 100%|██████████| 2118/2118 [00:00<00:00, 688806.38it/s]\nProcessing length_right with len: 100%|██████████| 18841/18841 [00:00<00:00, 969547.18it/s]\nProcessing text_left with encode: 100%|██████████| 122/122 [00:00<00:00, 7249.90it/s]\nProcessing text_right with encode: 100%|██████████| 1115/1115 [00:00<00:00, 1583.57it/s]\nProcessing length_left with len: 100%|██████████| 122/122 [00:00<00:00, 222577.25it/s]\nProcessing length_right with len: 100%|██████████| 1115/1115 [00:00<00:00, 698946.19it/s]\nProcessing text_left with encode: 100%|██████████| 237/237 [00:00<00:00, 6706.45it/s]\nProcessing text_right with encode: 100%|██████████| 2300/2300 [00:01<00:00, 1709.31it/s]\nProcessing length_left with len: 100%|██████████| 237/237 [00:00<00:00, 315171.23it/s]\nProcessing length_right with len: 100%|██████████| 2300/2300 [00:00<00:00, 569743.63it/s]\n"
],
[
"trainset = mz.dataloader.Dataset(\n    data_pack=train_pack_processed,\n    mode='pair',\n    num_dup=2,\n    num_neg=1,\n    resample=True,\n    sort=False,\n    batch_size=20,\n)\ntestset = mz.dataloader.Dataset(\n    data_pack=test_pack_processed,\n    batch_size=20,\n)",
"_____no_output_____"
],
[
"padding_callback = mz.models.Bert.get_default_padding_callback()\ntrainloader = mz.dataloader.DataLoader(\n dataset=trainset,\n stage='train',\n callback=padding_callback\n)\ntestloader = mz.dataloader.DataLoader(\n dataset=testset,\n stage='dev',\n callback=padding_callback\n)",
"_____no_output_____"
],
[
"model = mz.models.Bert()\n\nmodel.params['task'] = ranking_task\nmodel.params['mode'] = 'bert-base-uncased'\nmodel.params['dropout_rate'] = 0.2\n\nmodel.build()\n\nprint(model)\nprint('Trainable params: ', sum(p.numel() for p in model.parameters() if p.requires_grad))",
"Bert(\n (bert): BertModule(\n (bert): BertModel(\n (embeddings): BertEmbeddings(\n (word_embeddings): Embedding(30522, 768, padding_idx=0)\n (position_embeddings): Embedding(512, 768)\n (token_type_embeddings): Embedding(2, 768)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (encoder): BertEncoder(\n (layer): ModuleList(\n (0): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (1): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (2): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, 
out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (3): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (4): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, 
bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (5): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (6): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (7): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): 
Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (8): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (9): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, 
inplace=False)\n )\n )\n (10): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (11): BertLayer(\n (attention): BertAttention(\n (self): BertSelfAttention(\n (query): Linear(in_features=768, out_features=768, bias=True)\n (key): Linear(in_features=768, out_features=768, bias=True)\n (value): Linear(in_features=768, out_features=768, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n )\n (output): BertSelfOutput(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n (intermediate): BertIntermediate(\n (dense): Linear(in_features=768, out_features=3072, bias=True)\n )\n (output): BertOutput(\n (dense): Linear(in_features=3072, out_features=768, bias=True)\n (LayerNorm): BertLayerNorm()\n (dropout): Dropout(p=0.1, inplace=False)\n )\n )\n )\n )\n (pooler): BertPooler(\n (dense): Linear(in_features=768, out_features=768, bias=True)\n (activation): Tanh()\n )\n )\n )\n (dropout): Dropout(p=0.2, inplace=False)\n (out): Linear(in_features=768, out_features=1, bias=True)\n)\nTrainable params: 109483009\n"
],
[
"no_decay = ['bias', 'LayerNorm.weight']\noptimizer_grouped_parameters = [\n {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': 5e-5},\n {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}\n]\n\nfrom pytorch_transformers import AdamW, WarmupLinearSchedule\n\noptimizer = AdamW(optimizer_grouped_parameters, lr=5e-5, betas=(0.9, 0.98), eps=1e-8)\nscheduler = WarmupLinearSchedule(optimizer, warmup_steps=6, t_total=-1)\n\ntrainer = mz.trainers.Trainer(\n model=model,\n optimizer=optimizer,\n scheduler=scheduler,\n trainloader=trainloader,\n validloader=testloader,\n validate_interval=None,\n epochs=10\n)",
"_____no_output_____"
],
[
"trainer.run()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |