hexsha (stringlengths 40) | size (int64, 6 to 14.9M) | ext (stringclasses, 1 value) | lang (stringclasses, 1 value) | max_stars_repo_path (stringlengths 6 to 260) | max_stars_repo_name (stringlengths 6 to 119) | max_stars_repo_head_hexsha (stringlengths 40 to 41) | max_stars_repo_licenses (sequence) | max_stars_count (int64, 1 to 191k, nullable) | max_stars_repo_stars_event_min_datetime (stringlengths 24, nullable) | max_stars_repo_stars_event_max_datetime (stringlengths 24, nullable) | max_issues_repo_path (stringlengths 6 to 260) | max_issues_repo_name (stringlengths 6 to 119) | max_issues_repo_head_hexsha (stringlengths 40 to 41) | max_issues_repo_licenses (sequence) | max_issues_count (int64, 1 to 67k, nullable) | max_issues_repo_issues_event_min_datetime (stringlengths 24, nullable) | max_issues_repo_issues_event_max_datetime (stringlengths 24, nullable) | max_forks_repo_path (stringlengths 6 to 260) | max_forks_repo_name (stringlengths 6 to 119) | max_forks_repo_head_hexsha (stringlengths 40 to 41) | max_forks_repo_licenses (sequence) | max_forks_count (int64, 1 to 105k, nullable) | max_forks_repo_forks_event_min_datetime (stringlengths 24, nullable) | max_forks_repo_forks_event_max_datetime (stringlengths 24, nullable) | avg_line_length (float64, 2 to 1.04M) | max_line_length (int64, 2 to 11.2M) | alphanum_fraction (float64, 0 to 1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e7b3c0d11e344a33b107c0e518a0366601bc292a | 6,187 | ipynb | Jupyter Notebook | site/_build/html/_sources/notebooks/02-intro-python/04-groupby.ipynb | rpi-techfundamentals/spring2020_website | b4b208ce7555f5574054ff5ff5d79b9e0e825499 | [
"MIT"
] | 2 | 2020-10-18T23:05:09.000Z | 2021-11-14T08:09:11.000Z | site/_build/html/_sources/notebooks/02-intro-python/04-groupby.ipynb | rpi-techfundamentals/spring2020_website | b4b208ce7555f5574054ff5ff5d79b9e0e825499 | [
"MIT"
] | 2 | 2020-12-31T14:33:02.000Z | 2020-12-31T14:38:26.000Z | site/_build/html/_sources/notebooks/02-intro-python/04-groupby.ipynb | rpi-techfundamentals/spring2020_website | b4b208ce7555f5574054ff5ff5d79b9e0e825499 | [
"MIT"
] | 3 | 2020-08-31T21:58:58.000Z | 2020-09-30T02:55:08.000Z | 23.614504 | 160 | 0.547761 | [
[
[
"[](http://introml.analyticsdojo.com)\n<center><h1>Introduction to Python - Groupby and Pivot Tables</h1></center>\n<center><h3><a href = 'http://introml.analyticsdojo.com'>introml.analyticsdojo.com</a></h3></center>\n\n\n",
"_____no_output_____"
],
[
"# Groupby and Pivot Tables",
"_____no_output_____"
]
],
[
[
"!wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/train.csv\n!wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/test.csv",
"_____no_output_____"
],
[
"import numpy as np \nimport pandas as pd \n\n# Input data files are available in the \"../input/\" directory.\n# Let's input them into a Pandas DataFrame\ntrain = pd.read_csv(\"train.csv\")\ntest = pd.read_csv(\"test.csv\")",
"_____no_output_____"
]
],
[
[
"### Groupby\n- Often it is useful to see statistics by different classes.\n- Can be used to examine different subpopulations",
"_____no_output_____"
]
],
[
[
"train.head()",
"_____no_output_____"
],
[
"print(train.dtypes)",
"_____no_output_____"
],
[
"#What does this tell us? \ntrain.groupby(['Sex']).Survived.mean()",
"_____no_output_____"
],
[
"#What does this tell us? \ntrain.groupby(['Sex','Pclass']).Survived.mean()",
"_____no_output_____"
],
[
"#What does this tell us? Here it doesn't look so clear. We could separate by set age ranges.\ntrain.groupby(['Sex','Age']).Survived.mean()",
"_____no_output_____"
]
],
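The cell above notes that grouping by raw Age is not very clear and suggests separating it into set age ranges; the notebook does not show that step, so here is a minimal sketch using pd.cut (the bin edges and labels are illustrative choices, not from the original notebook):

```python
# Bin Age into coarse ranges, then look at survival rates by Sex and age band.
# Bin edges and labels below are illustrative choices.
import pandas as pd

train = pd.read_csv("train.csv")
train["AgeBand"] = pd.cut(train["Age"], bins=[0, 12, 18, 40, 60, 80],
                          labels=["child", "teen", "adult", "middle-aged", "senior"])
print(train.groupby(["Sex", "AgeBand"]).Survived.mean())
```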
[
[
"### Combining Multiple Operations\n- *Splitting* the data into groups based on some criteria\n- *Applying* a function to each group independently\n- *Combining* the results into a data structure",
"_____no_output_____"
]
],
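The cell below builds the combined table with three separate groupby calls; as a point of comparison, here is a minimal sketch of the same split-apply-combine idea in a single .agg() call (uses pandas 0.25+ named aggregation; the output column names are illustrative):

```python
# Split by Sex/Pclass, apply several aggregations to Survived, combine into one frame.
import pandas as pd

train = pd.read_csv("train.csv")
s = (train.groupby(["Sex", "Pclass"], as_index=False)
          .agg(Survived=("Survived", "sum"),
               PerSurv=("Survived", "mean"),
               Count=("Survived", "count")))
s["PerSurv"] = s["PerSurv"] * 100  # convert the mean survival rate to a percentage
print(s)
```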
[
[
"s = train.groupby(['Sex','Pclass'], as_index=False).Survived.sum()\ns['PerSurv'] = train.groupby(['Sex','Pclass'], as_index=False).Survived.mean().Survived\ns['PerSurv']=s['PerSurv']*100\ns['Count'] = train.groupby(['Sex','Pclass'], as_index=False).Survived.count().Survived\nsurvived =s.Survived\ns",
"_____no_output_____"
],
[
"#What does this tell us? \nspmean=train.groupby(['Sex','Pclass']).Survived.mean()\nspcount=train.groupby(['Sex','Pclass']).Survived.sum()\nspsum=train.groupby(['Sex','Pclass']).Survived.count()\nspsum",
"_____no_output_____"
]
],
[
[
"### Pivot Tables\n- A pivot table is a data summarization tool, much easier than the syntax of groupBy. \n- It can be used to that sum, sort, averge, count, over a pandas dataframe. \n- Download and open data in excel to appreciate the ways that you can use Pivot Tables. ",
"_____no_output_____"
]
],
[
[
"#Load it and create a pivot table.\nfrom google.colab import files\nfiles.download('train.csv')",
"_____no_output_____"
],
[
"#List the index and the functions you want to aggregage by. \npd.pivot_table(train,index=[\"Sex\",\"Pclass\"],values=[\"Survived\"],aggfunc=['count','sum','mean',])",
"_____no_output_____"
]
]
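Following on from the pivot_table call above, a minimal sketch of two options that are often useful with it, margins and fill_value (these parameter choices are illustrative additions, not from the original notebook):

```python
# Same pivot as above, plus an "All" row of overall totals and explicit zeros
# for any missing Sex/Pclass combination.
import pandas as pd

train = pd.read_csv("train.csv")
pivot = pd.pivot_table(train, index=["Sex", "Pclass"], values=["Survived"],
                       aggfunc=["count", "sum", "mean"],
                       margins=True, fill_value=0)
print(pivot)
```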
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7b3c848613ec69481eaa154da1c779498a811c6 | 54,756 | ipynb | Jupyter Notebook | NYC311Complaints.ipynb | Sanchit112/NYC311 | 2b6843b6d426a11791602b0bdeb3adaeeeb730c5 | [
"MIT"
] | null | null | null | NYC311Complaints.ipynb | Sanchit112/NYC311 | 2b6843b6d426a11791602b0bdeb3adaeeeb730c5 | [
"MIT"
] | null | null | null | NYC311Complaints.ipynb | Sanchit112/NYC311 | 2b6843b6d426a11791602b0bdeb3adaeeeb730c5 | [
"MIT"
] | null | null | null | 27,378 | 54,755 | 0.71026 | [
[
[
"body = client_cred.get_object(Bucket=bucket,Key='311_Service_Requests_from_2010_to_Present_min.csv')['Body']\n# add missing __iter__ method, so pandas accepts body as file-like object\nif not hasattr(body, \"__iter__\"): body.__iter__ = types.MethodType( __iter__, body )\n\ndf_data_1 = pd.read_csv(body)\ndf_data_1.head()\n",
"_____no_output_____"
],
[
"df_data_1.columns",
"_____no_output_____"
],
[
"df_data_1['Complaint Type'].value_counts()",
"_____no_output_____"
],
[
"df_data_1['Incident Zip'].value_counts()",
"_____no_output_____"
],
[
"df1 = df_data_1[ df_data_1['Complaint Type'].isin(['HEATING', 'HEAT/HOT WATER'])]\ndf1 = df1[df1['Borough'] == 'BRONX']\ndf1['complaint'] = 1\ndf1.rename(columns={'Incident Zip': 'ZipCode', 'Incident Address': 'Address'}, inplace = True)\ndf1.head(5)",
"_____no_output_____"
],
[
"body = client_cred.get_object(Bucket=bucket,Key='BX_18v1.csv')['Body']\n# add missing __iter__ method, so pandas accepts body as file-like object\nif not hasattr(body, \"__iter__\"): body.__iter__ = types.MethodType( __iter__, body )\n\ndf_data_2 = pd.read_csv(body, usecols = ['Address', 'BldgArea', 'BldgDepth', 'BuiltFAR', 'CommFAR', 'FacilFAR', 'Lot', 'LotArea', 'LotDepth', 'NumBldgs', 'NumFloors', 'OfficeArea', 'ResArea', 'ResidFAR', 'RetailArea', 'YearBuilt', 'YearAlter1', 'ZipCode', 'YCoord', 'XCoord'])\ndf_data_2.head()",
"_____no_output_____"
],
[
"df_data_2.columns",
"_____no_output_____"
],
[
"df1.columns",
"_____no_output_____"
],
[
"df = pd.merge(df_data_2, df1, on=['ZipCode', 'Address'], how='outer')\ndf.head(5)",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df = df.drop(['XCoord', 'YCoord', 'Unnamed: 0', 'Unique Key','Created Date', 'Closed Date', 'Complaint Type', 'Borough', 'Latitude', 'Longitude'], axis=1)",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"df.fillna(0, inplace=True)\ndf['complaint'].value_counts()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 679322 entries, 0 to 679321\nData columns (total 25 columns):\nLot 679322 non-null float64\nZipCode 679322 non-null float64\nAddress 679322 non-null object\nLotArea 679322 non-null float64\nBldgArea 679322 non-null float64\nResArea 679322 non-null float64\nOfficeArea 679322 non-null float64\nRetailArea 679322 non-null float64\nNumBldgs 679322 non-null float64\nNumFloors 679322 non-null float64\nLotDepth 679322 non-null float64\nBldgDepth 679322 non-null float64\nYearBuilt 679322 non-null float64\nYearAlter1 679322 non-null float64\nBuiltFAR 679322 non-null float64\nResidFAR 679322 non-null float64\nCommFAR 679322 non-null float64\nFacilFAR 679322 non-null float64\nLocation Type 679322 non-null object\nStreet Name 679322 non-null object\nAddress Type 679322 non-null object\nCity 679322 non-null object\nStatus 679322 non-null object\nResolution Description 679322 non-null object\ncomplaint 679322 non-null float64\ndtypes: float64(18), object(7)\nmemory usage: 134.8+ MB\n"
],
[
"from scipy.stats import spearmanr\ndf.corr(method='pearson')['complaint']",
"_____no_output_____"
],
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import mean_squared_error, r2_score\nfrom sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\ny = df['complaint']\nx = df[['BldgArea', 'ResArea', 'NumFloors', 'BldgDepth', 'YearBuilt', 'YearAlter1', 'BuiltFAR', 'ResidFAR', 'FacilFAR']]\nx = scaler.fit_transform(x)\nX_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=17)",
"_____no_output_____"
],
[
"logisticRegr = LogisticRegression(solver = 'liblinear')\nlogisticRegr.fit(X_train, y_train)\ny_pred = logisticRegr.predict(X_test)\n\nscore = logisticRegr.score(X_test, y_test)\nprint(score)",
"0.9311618914900612\n"
],
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn import metrics\n\ncm = metrics.confusion_matrix(y_test, y_pred)\nplt.figure(figsize=(8,8))\nsns.heatmap(cm, annot=True, fmt=\".3f\", linewidths=.5, square = True, cmap = 'Blues_r');\nplt.ylabel('Actual label');\nplt.xlabel('Predicted label');\nall_sample_title = 'Accuracy Score: {0}'.format(score)\nplt.title(all_sample_title, size = 15);",
"_____no_output_____"
]
]
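A note on the evaluation above: with the complaint label this imbalanced (see the value_counts() cell), overall accuracy can look strong even when the minority class is predicted poorly, which is why the confusion matrix is worth reading closely. A minimal sketch of per-class metrics with scikit-learn, reusing the y_test and y_pred variables from the cells above:

```python
# Per-class precision, recall and F1; more informative than accuracy alone
# when the labels are heavily imbalanced.
from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred, digits=3))
```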
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b3d28bd7a2c68f496687d39eef885c9b239b2a | 72,626 | ipynb | Jupyter Notebook | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance | f05fc8e6c47f7816be0ac42ef46aa5455cecc7f2 | [
"MIT"
] | 113 | 2017-06-08T22:49:01.000Z | 2022-02-06T01:10:43.000Z | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance | f05fc8e6c47f7816be0ac42ef46aa5455cecc7f2 | [
"MIT"
] | 4 | 2019-04-26T12:42:24.000Z | 2020-04-01T10:13:07.000Z | Notebook_2_FeatureEngineering_RollingCompute.ipynb | yallic/PySpark-Predictive-Maintenance | f05fc8e6c47f7816be0ac42ef46aa5455cecc7f2 | [
"MIT"
] | 61 | 2017-06-15T01:25:24.000Z | 2021-11-06T03:37:25.000Z | 45.705475 | 672 | 0.613623 | [
[
[
"# Notebook #2\n\nIn this notebook, we worked with the result dataset from Notebook #1 and computed rolling statistics (mean, difference, std, max, min) for a list of features over various time windows. \nThis was the most time consuming and computational expensive part of the entire tutorial. We encountered some roadblocks and found some workarounds. Please see below for more details.\n\n## Outline\n\n- [Define Rolling Features and Window Sizes](#Define-list-of-features-for-rolling-compute,-window-sizes)\n- [Issues and Solutions](#What-issues-we-encountered-using-Pyspark-and-how-we-solved-them?)\n- [Rolling Compute](#Rolling-Compute)\n - [Rolling Mean](#Rolling-Mean)\n - [Rolling Difference](#Rolling-Difference)\n - [Rolling Std](#Rolling-Std)\n - [Rolling Max](#Rolling-Max)\n - [Rolling Min](#Rolling-Min)\n- [Join Results](#Join-result-dataset-from-the-five-rolling-compute-cells:)\n \n",
"_____no_output_____"
]
],
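Every rolling feature in this notebook is built from the same ingredient: a window specification partitioned by device, ordered by date, and bounded with rowsBetween(1-lag_n, 0), i.e. the current row plus the previous lag_n-1 rows. A minimal sketch on toy data, assuming a SparkSession named spark is available (the column values below are illustrative, but deviceid, date and warn_type1_total are column names used later in the notebook):

```python
# A 3-day rolling mean per device: the current row plus the two rows before it.
import pyspark.sql.functions as F
from pyspark.sql.functions import col
from pyspark.sql.window import Window

toy = spark.createDataFrame(
    [("d1", "2017-01-01", 1.0), ("d1", "2017-01-02", 3.0), ("d1", "2017-01-03", 5.0),
     ("d2", "2017-01-01", 2.0), ("d2", "2017-01-02", 4.0)],
    ["deviceid", "date", "warn_type1_total"])

lag_n = 3
wSpec = Window.partitionBy("deviceid").orderBy("date").rowsBetween(1 - lag_n, 0)
toy.withColumn("warn_type1_total_rollingmean_3",
               F.avg(col("warn_type1_total")).over(wSpec)).show()
```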
[
[
"import pyspark.sql.functions as F\nimport time\nimport subprocess\nimport sys\nimport os\nimport re\n\nfrom pyspark import SparkConf\nfrom pyspark import SparkContext\nfrom pyspark import SQLContext\nfrom pyspark.sql.types import *\nfrom pyspark.sql.functions import col,udf,lag,date_add,explode,lit,concat,unix_timestamp\nfrom pyspark.sql.dataframe import *\nfrom pyspark.sql.window import Window\nfrom pyspark.sql.types import DateType\nfrom datetime import datetime, timedelta\nfrom pyspark.sql import Row\n\nstart_time = time.time()\n",
"_____no_output_____"
]
],
[
[
"## Define list of features for rolling compute, window sizes",
"_____no_output_____"
]
],
[
[
"rolling_features = [\n 'warn_type1_total', 'warn_type2_total', \n 'pca_1_warn','pca_2_warn', 'pca_3_warn', 'pca_4_warn', 'pca_5_warn',\n 'pca_6_warn','pca_7_warn', 'pca_8_warn', 'pca_9_warn', 'pca_10_warn',\n 'pca_11_warn','pca_12_warn', 'pca_13_warn', 'pca_14_warn', 'pca_15_warn',\n 'pca_16_warn','pca_17_warn', 'pca_18_warn', 'pca_19_warn', 'pca_20_warn',\n 'problem_type_1', 'problem_type_2', 'problem_type_3','problem_type_4',\n 'problem_type_1_per_usage1','problem_type_2_per_usage1',\n 'problem_type_3_per_usage1','problem_type_4_per_usage1',\n 'problem_type_1_per_usage2','problem_type_2_per_usage2',\n 'problem_type_3_per_usage2','problem_type_4_per_usage2', \n 'fault_code_type_1_count', 'fault_code_type_2_count', 'fault_code_type_3_count', 'fault_code_type_4_count', \n 'fault_code_type_1_count_per_usage1','fault_code_type_2_count_per_usage1',\n 'fault_code_type_3_count_per_usage1', 'fault_code_type_4_count_per_usage1',\n 'fault_code_type_1_count_per_usage2','fault_code_type_2_count_per_usage2',\n 'fault_code_type_3_count_per_usage2', 'fault_code_type_4_count_per_usage2']\n \n# lag window 3, 7, 14, 30, 90 days\nlags = [3, 7, 14, 30, 90]\n\nprint(len(rolling_features))\n",
"46\n"
]
],
[
[
"## What issues we encountered using Pyspark and how we solved them?\n\n- If the entire list of **46 features** and **5 time windows** were computed for **5 different types of rolling** (mean, difference, std, max, min) all in one go, we always ran into \"StackOverFlow\" error. \n- It was because the lineage was too long and Spark could not handle it.\n- We could either create checkPoint and materialize it throughout the process.\n- OR break the workload into chunks and save the result from each chunk as parquet file.\n\n## A few things we found helpful:\n- Before the rolling compute, save the upstream work as a parquet file in Notebook_1 (\"Notebook_1_DataCleansing_FeatureEngineering\"). It will speed up the whole process because we no need to repeat all the previous steps. It will also help reduce the lineage.\n- Print out the lag and feature name to track progress.\n- Use \"htop\" command from the terminal to keep track how many CPUs are running for a particular task. For rolling compute, we were considering two potential approaches: 1) Use Spark clusters on HDInsight to perform rolling compute in parallel; 2) Use single node Spark on a powerful VM. By looking at htop dashboard, we saw all the 32 cores were running at the same time for a single task (for example compute rolling mean). So if say we divide the workload onto multiple nodes and each node runs a type of rolling compute, the amount of time taken will be comparable with running everything in a sequential manner on a single node Spark on a powerful machine.\n- Use \"%%time\" for each cell to get an estimate of the total run time, we will then have a better idea where and what to optimze the process.\n- Materialize the intermediate results by either caching in memory or writing as parquet files. We chose to save as parquet files because we did not want to repeat the compute again in case cache() did not work or any part of the rolling compute did not work.\n- Why parquet? There are many reasons, just to name a few: parquet not only saves the data but also the schema, it is a preferred file format by Spark, you are allowed to read only the data you need, etc..\n\n<br>\n",
"_____no_output_____"
],
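The checkpoint alternative mentioned above is not shown in the notebook; a minimal sketch of what it could look like for the rolling mean, assuming a SparkContext sc plus the sqlContext, lags and rolling_features objects used elsewhere in this notebook (the checkpoint directory and the every-other-lag cadence are illustrative choices):

```python
# Truncate the lineage periodically with checkpoints instead of writing
# intermediate parquet files between rolling computations.
import pyspark.sql.functions as F
from pyspark.sql.functions import col
from pyspark.sql.window import Window

sc.setCheckpointDir('/mnt/resource/PysparkExample/checkpoints')  # illustrative path

df = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')
for i, lag_n in enumerate(lags):
    wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1 - lag_n, 0)
    for col_name in rolling_features:
        df = df.withColumn(col_name + '_rollingmean_' + str(lag_n),
                           F.avg(col(col_name)).over(wSpec))
    if (i + 1) % 2 == 0:
        df = df.checkpoint(eager=True)  # materializes the plan and cuts the lineage
```

Here eager=True writes the checkpoint immediately, which is what actually shortens the plan before the next window size is processed.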
[
"## Rolling Compute\n### Rolling Mean",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Load result dataset from Notebook #1\ndf = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')\n\nfor lag_n in lags:\n wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)\n for col_name in rolling_features:\n df = df.withColumn(col_name+'_rollingmean_'+str(lag_n), F.avg(col(col_name)).over(wSpec))\n print(\"Lag = %d, Column = %s\" % (lag_n, col_name))\n\n# Save the intermediate result for downstream work\ndf.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/data_rollingmean.parquet')\n",
"Lag = 3, Column = warn_type1_total\nLag = 3, Column = warn_type2_total\nLag = 3, Column = pca_1_warn\nLag = 3, Column = pca_2_warn\nLag = 3, Column = pca_3_warn\nLag = 3, Column = pca_4_warn\nLag = 3, Column = pca_5_warn\nLag = 3, Column = pca_6_warn\nLag = 3, Column = pca_7_warn\nLag = 3, Column = pca_8_warn\nLag = 3, Column = pca_9_warn\nLag = 3, Column = pca_10_warn\nLag = 3, Column = pca_11_warn\nLag = 3, Column = pca_12_warn\nLag = 3, Column = pca_13_warn\nLag = 3, Column = pca_14_warn\nLag = 3, Column = pca_15_warn\nLag = 3, Column = pca_16_warn\nLag = 3, Column = pca_17_warn\nLag = 3, Column = pca_18_warn\nLag = 3, Column = pca_19_warn\nLag = 3, Column = pca_20_warn\nLag = 3, Column = problem_type_1\nLag = 3, Column = problem_type_2\nLag = 3, Column = problem_type_3\nLag = 3, Column = problem_type_4\nLag = 3, Column = problem_type_1_per_usage1\nLag = 3, Column = problem_type_2_per_usage1\nLag = 3, Column = problem_type_3_per_usage1\nLag = 3, Column = problem_type_4_per_usage1\nLag = 3, Column = problem_type_1_per_usage2\nLag = 3, Column = problem_type_2_per_usage2\nLag = 3, Column = problem_type_3_per_usage2\nLag = 3, Column = problem_type_4_per_usage2\nLag = 3, Column = fault_code_type_1_count\nLag = 3, Column = fault_code_type_2_count\nLag = 3, Column = fault_code_type_3_count\nLag = 3, Column = fault_code_type_4_count\nLag = 3, Column = fault_code_type_1_count_per_usage1\nLag = 3, Column = fault_code_type_2_count_per_usage1\nLag = 3, Column = fault_code_type_3_count_per_usage1\nLag = 3, Column = fault_code_type_4_count_per_usage1\nLag = 3, Column = fault_code_type_1_count_per_usage2\nLag = 3, Column = fault_code_type_2_count_per_usage2\nLag = 3, Column = fault_code_type_3_count_per_usage2\nLag = 3, Column = fault_code_type_4_count_per_usage2\nLag = 7, Column = warn_type1_total\nLag = 7, Column = warn_type2_total\nLag = 7, Column = pca_1_warn\nLag = 7, Column = pca_2_warn\nLag = 7, Column = pca_3_warn\nLag = 7, Column = pca_4_warn\nLag = 7, Column = pca_5_warn\nLag = 7, Column = pca_6_warn\nLag = 7, Column = pca_7_warn\nLag = 7, Column = pca_8_warn\nLag = 7, Column = pca_9_warn\nLag = 7, Column = pca_10_warn\nLag = 7, Column = pca_11_warn\nLag = 7, Column = pca_12_warn\nLag = 7, Column = pca_13_warn\nLag = 7, Column = pca_14_warn\nLag = 7, Column = pca_15_warn\nLag = 7, Column = pca_16_warn\nLag = 7, Column = pca_17_warn\nLag = 7, Column = pca_18_warn\nLag = 7, Column = pca_19_warn\nLag = 7, Column = pca_20_warn\nLag = 7, Column = problem_type_1\nLag = 7, Column = problem_type_2\nLag = 7, Column = problem_type_3\nLag = 7, Column = problem_type_4\nLag = 7, Column = problem_type_1_per_usage1\nLag = 7, Column = problem_type_2_per_usage1\nLag = 7, Column = problem_type_3_per_usage1\nLag = 7, Column = problem_type_4_per_usage1\nLag = 7, Column = problem_type_1_per_usage2\nLag = 7, Column = problem_type_2_per_usage2\nLag = 7, Column = problem_type_3_per_usage2\nLag = 7, Column = problem_type_4_per_usage2\nLag = 7, Column = fault_code_type_1_count\nLag = 7, Column = fault_code_type_2_count\nLag = 7, Column = fault_code_type_3_count\nLag = 7, Column = fault_code_type_4_count\nLag = 7, Column = fault_code_type_1_count_per_usage1\nLag = 7, Column = fault_code_type_2_count_per_usage1\nLag = 7, Column = fault_code_type_3_count_per_usage1\nLag = 7, Column = fault_code_type_4_count_per_usage1\nLag = 7, Column = fault_code_type_1_count_per_usage2\nLag = 7, Column = fault_code_type_2_count_per_usage2\nLag = 7, Column = fault_code_type_3_count_per_usage2\nLag = 7, Column = 
fault_code_type_4_count_per_usage2\nLag = 14, Column = warn_type1_total\nLag = 14, Column = warn_type2_total\nLag = 14, Column = pca_1_warn\nLag = 14, Column = pca_2_warn\nLag = 14, Column = pca_3_warn\nLag = 14, Column = pca_4_warn\nLag = 14, Column = pca_5_warn\nLag = 14, Column = pca_6_warn\nLag = 14, Column = pca_7_warn\nLag = 14, Column = pca_8_warn\nLag = 14, Column = pca_9_warn\nLag = 14, Column = pca_10_warn\nLag = 14, Column = pca_11_warn\nLag = 14, Column = pca_12_warn\nLag = 14, Column = pca_13_warn\nLag = 14, Column = pca_14_warn\nLag = 14, Column = pca_15_warn\nLag = 14, Column = pca_16_warn\nLag = 14, Column = pca_17_warn\nLag = 14, Column = pca_18_warn\nLag = 14, Column = pca_19_warn\nLag = 14, Column = pca_20_warn\nLag = 14, Column = problem_type_1\nLag = 14, Column = problem_type_2\nLag = 14, Column = problem_type_3\nLag = 14, Column = problem_type_4\nLag = 14, Column = problem_type_1_per_usage1\nLag = 14, Column = problem_type_2_per_usage1\nLag = 14, Column = problem_type_3_per_usage1\nLag = 14, Column = problem_type_4_per_usage1\nLag = 14, Column = problem_type_1_per_usage2\nLag = 14, Column = problem_type_2_per_usage2\nLag = 14, Column = problem_type_3_per_usage2\nLag = 14, Column = problem_type_4_per_usage2\nLag = 14, Column = fault_code_type_1_count\nLag = 14, Column = fault_code_type_2_count\nLag = 14, Column = fault_code_type_3_count\nLag = 14, Column = fault_code_type_4_count\nLag = 14, Column = fault_code_type_1_count_per_usage1\nLag = 14, Column = fault_code_type_2_count_per_usage1\nLag = 14, Column = fault_code_type_3_count_per_usage1\nLag = 14, Column = fault_code_type_4_count_per_usage1\nLag = 14, Column = fault_code_type_1_count_per_usage2\nLag = 14, Column = fault_code_type_2_count_per_usage2\nLag = 14, Column = fault_code_type_3_count_per_usage2\nLag = 14, Column = fault_code_type_4_count_per_usage2\nLag = 30, Column = warn_type1_total\nLag = 30, Column = warn_type2_total\nLag = 30, Column = pca_1_warn\nLag = 30, Column = pca_2_warn\nLag = 30, Column = pca_3_warn\nLag = 30, Column = pca_4_warn\nLag = 30, Column = pca_5_warn\nLag = 30, Column = pca_6_warn\nLag = 30, Column = pca_7_warn\nLag = 30, Column = pca_8_warn\nLag = 30, Column = pca_9_warn\nLag = 30, Column = pca_10_warn\nLag = 30, Column = pca_11_warn\nLag = 30, Column = pca_12_warn\nLag = 30, Column = pca_13_warn\nLag = 30, Column = pca_14_warn\nLag = 30, Column = pca_15_warn\nLag = 30, Column = pca_16_warn\nLag = 30, Column = pca_17_warn\nLag = 30, Column = pca_18_warn\nLag = 30, Column = pca_19_warn\nLag = 30, Column = pca_20_warn\nLag = 30, Column = problem_type_1\nLag = 30, Column = problem_type_2\nLag = 30, Column = problem_type_3\nLag = 30, Column = problem_type_4\nLag = 30, Column = problem_type_1_per_usage1\nLag = 30, Column = problem_type_2_per_usage1\nLag = 30, Column = problem_type_3_per_usage1\nLag = 30, Column = problem_type_4_per_usage1\nLag = 30, Column = problem_type_1_per_usage2\nLag = 30, Column = problem_type_2_per_usage2\nLag = 30, Column = problem_type_3_per_usage2\nLag = 30, Column = problem_type_4_per_usage2\nLag = 30, Column = fault_code_type_1_count\nLag = 30, Column = fault_code_type_2_count\nLag = 30, Column = fault_code_type_3_count\nLag = 30, Column = fault_code_type_4_count\nLag = 30, Column = fault_code_type_1_count_per_usage1\nLag = 30, Column = fault_code_type_2_count_per_usage1\nLag = 30, Column = fault_code_type_3_count_per_usage1\nLag = 30, Column = fault_code_type_4_count_per_usage1\nLag = 30, Column = fault_code_type_1_count_per_usage2\nLag = 30, Column = 
fault_code_type_2_count_per_usage2\nLag = 30, Column = fault_code_type_3_count_per_usage2\nLag = 30, Column = fault_code_type_4_count_per_usage2\nLag = 90, Column = warn_type1_total\nLag = 90, Column = warn_type2_total\nLag = 90, Column = pca_1_warn\nLag = 90, Column = pca_2_warn\nLag = 90, Column = pca_3_warn\nLag = 90, Column = pca_4_warn\nLag = 90, Column = pca_5_warn\nLag = 90, Column = pca_6_warn\nLag = 90, Column = pca_7_warn\nLag = 90, Column = pca_8_warn\nLag = 90, Column = pca_9_warn\nLag = 90, Column = pca_10_warn\nLag = 90, Column = pca_11_warn\nLag = 90, Column = pca_12_warn\nLag = 90, Column = pca_13_warn\nLag = 90, Column = pca_14_warn\nLag = 90, Column = pca_15_warn\nLag = 90, Column = pca_16_warn\nLag = 90, Column = pca_17_warn\nLag = 90, Column = pca_18_warn\nLag = 90, Column = pca_19_warn\nLag = 90, Column = pca_20_warn\nLag = 90, Column = problem_type_1\nLag = 90, Column = problem_type_2\nLag = 90, Column = problem_type_3\nLag = 90, Column = problem_type_4\nLag = 90, Column = problem_type_1_per_usage1\nLag = 90, Column = problem_type_2_per_usage1\nLag = 90, Column = problem_type_3_per_usage1\nLag = 90, Column = problem_type_4_per_usage1\nLag = 90, Column = problem_type_1_per_usage2\nLag = 90, Column = problem_type_2_per_usage2\nLag = 90, Column = problem_type_3_per_usage2\nLag = 90, Column = problem_type_4_per_usage2\nLag = 90, Column = fault_code_type_1_count\nLag = 90, Column = fault_code_type_2_count\nLag = 90, Column = fault_code_type_3_count\nLag = 90, Column = fault_code_type_4_count\nLag = 90, Column = fault_code_type_1_count_per_usage1\nLag = 90, Column = fault_code_type_2_count_per_usage1\nLag = 90, Column = fault_code_type_3_count_per_usage1\nLag = 90, Column = fault_code_type_4_count_per_usage1\nLag = 90, Column = fault_code_type_1_count_per_usage2\nLag = 90, Column = fault_code_type_2_count_per_usage2\nLag = 90, Column = fault_code_type_3_count_per_usage2\nLag = 90, Column = fault_code_type_4_count_per_usage2\nCPU times: user 848 ms, sys: 279 ms, total: 1.13 s\nWall time: 28min 6s\n"
]
],
[
[
"### Rolling Difference",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Load result dataset from Notebook #1\ndf = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')\n\nfor lag_n in lags:\n wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)\n for col_name in rolling_features:\n df = df.withColumn(col_name+'_rollingdiff_'+str(lag_n), col(col_name)-F.avg(col(col_name)).over(wSpec))\n print(\"Lag = %d, Column = %s\" % (lag_n, col_name))\n\nrollingdiff = df.select(['key'] + list(s for s in df.columns if \"rollingdiff\" in s))\n\n# Save the intermediate result for downstream work\nrollingdiff.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/rollingdiff.parquet')\n",
"Lag = 3, Column = warn_type1_total\nLag = 3, Column = warn_type2_total\nLag = 3, Column = pca_1_warn\nLag = 3, Column = pca_2_warn\nLag = 3, Column = pca_3_warn\nLag = 3, Column = pca_4_warn\nLag = 3, Column = pca_5_warn\nLag = 3, Column = pca_6_warn\nLag = 3, Column = pca_7_warn\nLag = 3, Column = pca_8_warn\nLag = 3, Column = pca_9_warn\nLag = 3, Column = pca_10_warn\nLag = 3, Column = pca_11_warn\nLag = 3, Column = pca_12_warn\nLag = 3, Column = pca_13_warn\nLag = 3, Column = pca_14_warn\nLag = 3, Column = pca_15_warn\nLag = 3, Column = pca_16_warn\nLag = 3, Column = pca_17_warn\nLag = 3, Column = pca_18_warn\nLag = 3, Column = pca_19_warn\nLag = 3, Column = pca_20_warn\nLag = 3, Column = problem_type_1\nLag = 3, Column = problem_type_2\nLag = 3, Column = problem_type_3\nLag = 3, Column = problem_type_4\nLag = 3, Column = problem_type_1_per_usage1\nLag = 3, Column = problem_type_2_per_usage1\nLag = 3, Column = problem_type_3_per_usage1\nLag = 3, Column = problem_type_4_per_usage1\nLag = 3, Column = problem_type_1_per_usage2\nLag = 3, Column = problem_type_2_per_usage2\nLag = 3, Column = problem_type_3_per_usage2\nLag = 3, Column = problem_type_4_per_usage2\nLag = 3, Column = fault_code_type_1_count\nLag = 3, Column = fault_code_type_2_count\nLag = 3, Column = fault_code_type_3_count\nLag = 3, Column = fault_code_type_4_count\nLag = 3, Column = fault_code_type_1_count_per_usage1\nLag = 3, Column = fault_code_type_2_count_per_usage1\nLag = 3, Column = fault_code_type_3_count_per_usage1\nLag = 3, Column = fault_code_type_4_count_per_usage1\nLag = 3, Column = fault_code_type_1_count_per_usage2\nLag = 3, Column = fault_code_type_2_count_per_usage2\nLag = 3, Column = fault_code_type_3_count_per_usage2\nLag = 3, Column = fault_code_type_4_count_per_usage2\nLag = 7, Column = warn_type1_total\nLag = 7, Column = warn_type2_total\nLag = 7, Column = pca_1_warn\nLag = 7, Column = pca_2_warn\nLag = 7, Column = pca_3_warn\nLag = 7, Column = pca_4_warn\nLag = 7, Column = pca_5_warn\nLag = 7, Column = pca_6_warn\nLag = 7, Column = pca_7_warn\nLag = 7, Column = pca_8_warn\nLag = 7, Column = pca_9_warn\nLag = 7, Column = pca_10_warn\nLag = 7, Column = pca_11_warn\nLag = 7, Column = pca_12_warn\nLag = 7, Column = pca_13_warn\nLag = 7, Column = pca_14_warn\nLag = 7, Column = pca_15_warn\nLag = 7, Column = pca_16_warn\nLag = 7, Column = pca_17_warn\nLag = 7, Column = pca_18_warn\nLag = 7, Column = pca_19_warn\nLag = 7, Column = pca_20_warn\nLag = 7, Column = problem_type_1\nLag = 7, Column = problem_type_2\nLag = 7, Column = problem_type_3\nLag = 7, Column = problem_type_4\nLag = 7, Column = problem_type_1_per_usage1\nLag = 7, Column = problem_type_2_per_usage1\nLag = 7, Column = problem_type_3_per_usage1\nLag = 7, Column = problem_type_4_per_usage1\nLag = 7, Column = problem_type_1_per_usage2\nLag = 7, Column = problem_type_2_per_usage2\nLag = 7, Column = problem_type_3_per_usage2\nLag = 7, Column = problem_type_4_per_usage2\nLag = 7, Column = fault_code_type_1_count\nLag = 7, Column = fault_code_type_2_count\nLag = 7, Column = fault_code_type_3_count\nLag = 7, Column = fault_code_type_4_count\nLag = 7, Column = fault_code_type_1_count_per_usage1\nLag = 7, Column = fault_code_type_2_count_per_usage1\nLag = 7, Column = fault_code_type_3_count_per_usage1\nLag = 7, Column = fault_code_type_4_count_per_usage1\nLag = 7, Column = fault_code_type_1_count_per_usage2\nLag = 7, Column = fault_code_type_2_count_per_usage2\nLag = 7, Column = fault_code_type_3_count_per_usage2\nLag = 7, Column = 
fault_code_type_4_count_per_usage2\nLag = 14, Column = warn_type1_total\nLag = 14, Column = warn_type2_total\nLag = 14, Column = pca_1_warn\nLag = 14, Column = pca_2_warn\nLag = 14, Column = pca_3_warn\nLag = 14, Column = pca_4_warn\nLag = 14, Column = pca_5_warn\nLag = 14, Column = pca_6_warn\nLag = 14, Column = pca_7_warn\nLag = 14, Column = pca_8_warn\nLag = 14, Column = pca_9_warn\nLag = 14, Column = pca_10_warn\nLag = 14, Column = pca_11_warn\nLag = 14, Column = pca_12_warn\nLag = 14, Column = pca_13_warn\nLag = 14, Column = pca_14_warn\nLag = 14, Column = pca_15_warn\nLag = 14, Column = pca_16_warn\nLag = 14, Column = pca_17_warn\nLag = 14, Column = pca_18_warn\nLag = 14, Column = pca_19_warn\nLag = 14, Column = pca_20_warn\nLag = 14, Column = problem_type_1\nLag = 14, Column = problem_type_2\nLag = 14, Column = problem_type_3\nLag = 14, Column = problem_type_4\nLag = 14, Column = problem_type_1_per_usage1\nLag = 14, Column = problem_type_2_per_usage1\nLag = 14, Column = problem_type_3_per_usage1\nLag = 14, Column = problem_type_4_per_usage1\nLag = 14, Column = problem_type_1_per_usage2\nLag = 14, Column = problem_type_2_per_usage2\nLag = 14, Column = problem_type_3_per_usage2\nLag = 14, Column = problem_type_4_per_usage2\nLag = 14, Column = fault_code_type_1_count\nLag = 14, Column = fault_code_type_2_count\nLag = 14, Column = fault_code_type_3_count\nLag = 14, Column = fault_code_type_4_count\nLag = 14, Column = fault_code_type_1_count_per_usage1\nLag = 14, Column = fault_code_type_2_count_per_usage1\nLag = 14, Column = fault_code_type_3_count_per_usage1\nLag = 14, Column = fault_code_type_4_count_per_usage1\nLag = 14, Column = fault_code_type_1_count_per_usage2\nLag = 14, Column = fault_code_type_2_count_per_usage2\nLag = 14, Column = fault_code_type_3_count_per_usage2\nLag = 14, Column = fault_code_type_4_count_per_usage2\nLag = 30, Column = warn_type1_total\nLag = 30, Column = warn_type2_total\nLag = 30, Column = pca_1_warn\nLag = 30, Column = pca_2_warn\nLag = 30, Column = pca_3_warn\nLag = 30, Column = pca_4_warn\nLag = 30, Column = pca_5_warn\nLag = 30, Column = pca_6_warn\nLag = 30, Column = pca_7_warn\nLag = 30, Column = pca_8_warn\nLag = 30, Column = pca_9_warn\nLag = 30, Column = pca_10_warn\nLag = 30, Column = pca_11_warn\nLag = 30, Column = pca_12_warn\nLag = 30, Column = pca_13_warn\nLag = 30, Column = pca_14_warn\nLag = 30, Column = pca_15_warn\nLag = 30, Column = pca_16_warn\nLag = 30, Column = pca_17_warn\nLag = 30, Column = pca_18_warn\nLag = 30, Column = pca_19_warn\nLag = 30, Column = pca_20_warn\nLag = 30, Column = problem_type_1\nLag = 30, Column = problem_type_2\nLag = 30, Column = problem_type_3\nLag = 30, Column = problem_type_4\nLag = 30, Column = problem_type_1_per_usage1\nLag = 30, Column = problem_type_2_per_usage1\nLag = 30, Column = problem_type_3_per_usage1\nLag = 30, Column = problem_type_4_per_usage1\nLag = 30, Column = problem_type_1_per_usage2\nLag = 30, Column = problem_type_2_per_usage2\nLag = 30, Column = problem_type_3_per_usage2\nLag = 30, Column = problem_type_4_per_usage2\nLag = 30, Column = fault_code_type_1_count\nLag = 30, Column = fault_code_type_2_count\nLag = 30, Column = fault_code_type_3_count\nLag = 30, Column = fault_code_type_4_count\nLag = 30, Column = fault_code_type_1_count_per_usage1\nLag = 30, Column = fault_code_type_2_count_per_usage1\nLag = 30, Column = fault_code_type_3_count_per_usage1\nLag = 30, Column = fault_code_type_4_count_per_usage1\nLag = 30, Column = fault_code_type_1_count_per_usage2\nLag = 30, Column = 
fault_code_type_2_count_per_usage2\nLag = 30, Column = fault_code_type_3_count_per_usage2\nLag = 30, Column = fault_code_type_4_count_per_usage2\nLag = 90, Column = warn_type1_total\nLag = 90, Column = warn_type2_total\nLag = 90, Column = pca_1_warn\nLag = 90, Column = pca_2_warn\nLag = 90, Column = pca_3_warn\nLag = 90, Column = pca_4_warn\nLag = 90, Column = pca_5_warn\nLag = 90, Column = pca_6_warn\nLag = 90, Column = pca_7_warn\nLag = 90, Column = pca_8_warn\nLag = 90, Column = pca_9_warn\nLag = 90, Column = pca_10_warn\nLag = 90, Column = pca_11_warn\nLag = 90, Column = pca_12_warn\nLag = 90, Column = pca_13_warn\nLag = 90, Column = pca_14_warn\nLag = 90, Column = pca_15_warn\nLag = 90, Column = pca_16_warn\nLag = 90, Column = pca_17_warn\nLag = 90, Column = pca_18_warn\nLag = 90, Column = pca_19_warn\nLag = 90, Column = pca_20_warn\nLag = 90, Column = problem_type_1\nLag = 90, Column = problem_type_2\nLag = 90, Column = problem_type_3\nLag = 90, Column = problem_type_4\nLag = 90, Column = problem_type_1_per_usage1\nLag = 90, Column = problem_type_2_per_usage1\nLag = 90, Column = problem_type_3_per_usage1\nLag = 90, Column = problem_type_4_per_usage1\nLag = 90, Column = problem_type_1_per_usage2\nLag = 90, Column = problem_type_2_per_usage2\nLag = 90, Column = problem_type_3_per_usage2\nLag = 90, Column = problem_type_4_per_usage2\nLag = 90, Column = fault_code_type_1_count\nLag = 90, Column = fault_code_type_2_count\nLag = 90, Column = fault_code_type_3_count\nLag = 90, Column = fault_code_type_4_count\nLag = 90, Column = fault_code_type_1_count_per_usage1\nLag = 90, Column = fault_code_type_2_count_per_usage1\nLag = 90, Column = fault_code_type_3_count_per_usage1\nLag = 90, Column = fault_code_type_4_count_per_usage1\nLag = 90, Column = fault_code_type_1_count_per_usage2\nLag = 90, Column = fault_code_type_2_count_per_usage2\nLag = 90, Column = fault_code_type_3_count_per_usage2\nLag = 90, Column = fault_code_type_4_count_per_usage2\nCPU times: user 1.18 s, sys: 383 ms, total: 1.56 s\nWall time: 45min 12s\n"
]
],
[
[
"### Rolling Std",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Load result dataset from Notebook #1\ndf = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')\n\nfor lag_n in lags:\n wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)\n for col_name in rolling_features:\n df = df.withColumn(col_name+'_rollingstd_'+str(lag_n), F.stddev(col(col_name)).over(wSpec))\n print(\"Lag = %d, Column = %s\" % (lag_n, col_name))\n\n# There are some missing values for rollingstd features\nrollingstd_features = list(s for s in df.columns if \"rollingstd\" in s)\ndf = df.fillna(0, subset=rollingstd_features)\nrollingstd = df.select(['key'] + list(s for s in df.columns if \"rollingstd\" in s))\n\n# Save the intermediate result for downstream work\nrollingstd.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/rollingstd.parquet')\n",
"Lag = 3, Column = warn_type1_total\nLag = 3, Column = warn_type2_total\nLag = 3, Column = pca_1_warn\nLag = 3, Column = pca_2_warn\nLag = 3, Column = pca_3_warn\nLag = 3, Column = pca_4_warn\nLag = 3, Column = pca_5_warn\nLag = 3, Column = pca_6_warn\nLag = 3, Column = pca_7_warn\nLag = 3, Column = pca_8_warn\nLag = 3, Column = pca_9_warn\nLag = 3, Column = pca_10_warn\nLag = 3, Column = pca_11_warn\nLag = 3, Column = pca_12_warn\nLag = 3, Column = pca_13_warn\nLag = 3, Column = pca_14_warn\nLag = 3, Column = pca_15_warn\nLag = 3, Column = pca_16_warn\nLag = 3, Column = pca_17_warn\nLag = 3, Column = pca_18_warn\nLag = 3, Column = pca_19_warn\nLag = 3, Column = pca_20_warn\nLag = 3, Column = problem_type_1\nLag = 3, Column = problem_type_2\nLag = 3, Column = problem_type_3\nLag = 3, Column = problem_type_4\nLag = 3, Column = problem_type_1_per_usage1\nLag = 3, Column = problem_type_2_per_usage1\nLag = 3, Column = problem_type_3_per_usage1\nLag = 3, Column = problem_type_4_per_usage1\nLag = 3, Column = problem_type_1_per_usage2\nLag = 3, Column = problem_type_2_per_usage2\nLag = 3, Column = problem_type_3_per_usage2\nLag = 3, Column = problem_type_4_per_usage2\nLag = 3, Column = fault_code_type_1_count\nLag = 3, Column = fault_code_type_2_count\nLag = 3, Column = fault_code_type_3_count\nLag = 3, Column = fault_code_type_4_count\nLag = 3, Column = fault_code_type_1_count_per_usage1\nLag = 3, Column = fault_code_type_2_count_per_usage1\nLag = 3, Column = fault_code_type_3_count_per_usage1\nLag = 3, Column = fault_code_type_4_count_per_usage1\nLag = 3, Column = fault_code_type_1_count_per_usage2\nLag = 3, Column = fault_code_type_2_count_per_usage2\nLag = 3, Column = fault_code_type_3_count_per_usage2\nLag = 3, Column = fault_code_type_4_count_per_usage2\nLag = 7, Column = warn_type1_total\nLag = 7, Column = warn_type2_total\nLag = 7, Column = pca_1_warn\nLag = 7, Column = pca_2_warn\nLag = 7, Column = pca_3_warn\nLag = 7, Column = pca_4_warn\nLag = 7, Column = pca_5_warn\nLag = 7, Column = pca_6_warn\nLag = 7, Column = pca_7_warn\nLag = 7, Column = pca_8_warn\nLag = 7, Column = pca_9_warn\nLag = 7, Column = pca_10_warn\nLag = 7, Column = pca_11_warn\nLag = 7, Column = pca_12_warn\nLag = 7, Column = pca_13_warn\nLag = 7, Column = pca_14_warn\nLag = 7, Column = pca_15_warn\nLag = 7, Column = pca_16_warn\nLag = 7, Column = pca_17_warn\nLag = 7, Column = pca_18_warn\nLag = 7, Column = pca_19_warn\nLag = 7, Column = pca_20_warn\nLag = 7, Column = problem_type_1\nLag = 7, Column = problem_type_2\nLag = 7, Column = problem_type_3\nLag = 7, Column = problem_type_4\nLag = 7, Column = problem_type_1_per_usage1\nLag = 7, Column = problem_type_2_per_usage1\nLag = 7, Column = problem_type_3_per_usage1\nLag = 7, Column = problem_type_4_per_usage1\nLag = 7, Column = problem_type_1_per_usage2\nLag = 7, Column = problem_type_2_per_usage2\nLag = 7, Column = problem_type_3_per_usage2\nLag = 7, Column = problem_type_4_per_usage2\nLag = 7, Column = fault_code_type_1_count\nLag = 7, Column = fault_code_type_2_count\nLag = 7, Column = fault_code_type_3_count\nLag = 7, Column = fault_code_type_4_count\nLag = 7, Column = fault_code_type_1_count_per_usage1\nLag = 7, Column = fault_code_type_2_count_per_usage1\nLag = 7, Column = fault_code_type_3_count_per_usage1\nLag = 7, Column = fault_code_type_4_count_per_usage1\nLag = 7, Column = fault_code_type_1_count_per_usage2\nLag = 7, Column = fault_code_type_2_count_per_usage2\nLag = 7, Column = fault_code_type_3_count_per_usage2\nLag = 7, Column = 
fault_code_type_4_count_per_usage2\nLag = 14, Column = warn_type1_total\nLag = 14, Column = warn_type2_total\nLag = 14, Column = pca_1_warn\nLag = 14, Column = pca_2_warn\nLag = 14, Column = pca_3_warn\nLag = 14, Column = pca_4_warn\nLag = 14, Column = pca_5_warn\nLag = 14, Column = pca_6_warn\nLag = 14, Column = pca_7_warn\nLag = 14, Column = pca_8_warn\nLag = 14, Column = pca_9_warn\nLag = 14, Column = pca_10_warn\nLag = 14, Column = pca_11_warn\nLag = 14, Column = pca_12_warn\nLag = 14, Column = pca_13_warn\nLag = 14, Column = pca_14_warn\nLag = 14, Column = pca_15_warn\nLag = 14, Column = pca_16_warn\nLag = 14, Column = pca_17_warn\nLag = 14, Column = pca_18_warn\nLag = 14, Column = pca_19_warn\nLag = 14, Column = pca_20_warn\nLag = 14, Column = problem_type_1\nLag = 14, Column = problem_type_2\nLag = 14, Column = problem_type_3\nLag = 14, Column = problem_type_4\nLag = 14, Column = problem_type_1_per_usage1\nLag = 14, Column = problem_type_2_per_usage1\nLag = 14, Column = problem_type_3_per_usage1\nLag = 14, Column = problem_type_4_per_usage1\nLag = 14, Column = problem_type_1_per_usage2\nLag = 14, Column = problem_type_2_per_usage2\nLag = 14, Column = problem_type_3_per_usage2\nLag = 14, Column = problem_type_4_per_usage2\nLag = 14, Column = fault_code_type_1_count\nLag = 14, Column = fault_code_type_2_count\nLag = 14, Column = fault_code_type_3_count\nLag = 14, Column = fault_code_type_4_count\nLag = 14, Column = fault_code_type_1_count_per_usage1\nLag = 14, Column = fault_code_type_2_count_per_usage1\nLag = 14, Column = fault_code_type_3_count_per_usage1\nLag = 14, Column = fault_code_type_4_count_per_usage1\nLag = 14, Column = fault_code_type_1_count_per_usage2\nLag = 14, Column = fault_code_type_2_count_per_usage2\nLag = 14, Column = fault_code_type_3_count_per_usage2\nLag = 14, Column = fault_code_type_4_count_per_usage2\nLag = 30, Column = warn_type1_total\nLag = 30, Column = warn_type2_total\nLag = 30, Column = pca_1_warn\nLag = 30, Column = pca_2_warn\nLag = 30, Column = pca_3_warn\nLag = 30, Column = pca_4_warn\nLag = 30, Column = pca_5_warn\nLag = 30, Column = pca_6_warn\nLag = 30, Column = pca_7_warn\nLag = 30, Column = pca_8_warn\nLag = 30, Column = pca_9_warn\nLag = 30, Column = pca_10_warn\nLag = 30, Column = pca_11_warn\nLag = 30, Column = pca_12_warn\nLag = 30, Column = pca_13_warn\nLag = 30, Column = pca_14_warn\nLag = 30, Column = pca_15_warn\nLag = 30, Column = pca_16_warn\nLag = 30, Column = pca_17_warn\nLag = 30, Column = pca_18_warn\nLag = 30, Column = pca_19_warn\nLag = 30, Column = pca_20_warn\nLag = 30, Column = problem_type_1\nLag = 30, Column = problem_type_2\nLag = 30, Column = problem_type_3\nLag = 30, Column = problem_type_4\nLag = 30, Column = problem_type_1_per_usage1\nLag = 30, Column = problem_type_2_per_usage1\nLag = 30, Column = problem_type_3_per_usage1\nLag = 30, Column = problem_type_4_per_usage1\nLag = 30, Column = problem_type_1_per_usage2\nLag = 30, Column = problem_type_2_per_usage2\nLag = 30, Column = problem_type_3_per_usage2\nLag = 30, Column = problem_type_4_per_usage2\nLag = 30, Column = fault_code_type_1_count\nLag = 30, Column = fault_code_type_2_count\nLag = 30, Column = fault_code_type_3_count\nLag = 30, Column = fault_code_type_4_count\nLag = 30, Column = fault_code_type_1_count_per_usage1\nLag = 30, Column = fault_code_type_2_count_per_usage1\nLag = 30, Column = fault_code_type_3_count_per_usage1\nLag = 30, Column = fault_code_type_4_count_per_usage1\nLag = 30, Column = fault_code_type_1_count_per_usage2\nLag = 30, Column = 
fault_code_type_2_count_per_usage2\nLag = 30, Column = fault_code_type_3_count_per_usage2\nLag = 30, Column = fault_code_type_4_count_per_usage2\nLag = 90, Column = warn_type1_total\nLag = 90, Column = warn_type2_total\nLag = 90, Column = pca_1_warn\nLag = 90, Column = pca_2_warn\nLag = 90, Column = pca_3_warn\nLag = 90, Column = pca_4_warn\nLag = 90, Column = pca_5_warn\nLag = 90, Column = pca_6_warn\nLag = 90, Column = pca_7_warn\nLag = 90, Column = pca_8_warn\nLag = 90, Column = pca_9_warn\nLag = 90, Column = pca_10_warn\nLag = 90, Column = pca_11_warn\nLag = 90, Column = pca_12_warn\nLag = 90, Column = pca_13_warn\nLag = 90, Column = pca_14_warn\nLag = 90, Column = pca_15_warn\nLag = 90, Column = pca_16_warn\nLag = 90, Column = pca_17_warn\nLag = 90, Column = pca_18_warn\nLag = 90, Column = pca_19_warn\nLag = 90, Column = pca_20_warn\nLag = 90, Column = problem_type_1\nLag = 90, Column = problem_type_2\nLag = 90, Column = problem_type_3\nLag = 90, Column = problem_type_4\nLag = 90, Column = problem_type_1_per_usage1\nLag = 90, Column = problem_type_2_per_usage1\nLag = 90, Column = problem_type_3_per_usage1\nLag = 90, Column = problem_type_4_per_usage1\nLag = 90, Column = problem_type_1_per_usage2\nLag = 90, Column = problem_type_2_per_usage2\nLag = 90, Column = problem_type_3_per_usage2\nLag = 90, Column = problem_type_4_per_usage2\nLag = 90, Column = fault_code_type_1_count\nLag = 90, Column = fault_code_type_2_count\nLag = 90, Column = fault_code_type_3_count\nLag = 90, Column = fault_code_type_4_count\nLag = 90, Column = fault_code_type_1_count_per_usage1\nLag = 90, Column = fault_code_type_2_count_per_usage1\nLag = 90, Column = fault_code_type_3_count_per_usage1\nLag = 90, Column = fault_code_type_4_count_per_usage1\nLag = 90, Column = fault_code_type_1_count_per_usage2\nLag = 90, Column = fault_code_type_2_count_per_usage2\nLag = 90, Column = fault_code_type_3_count_per_usage2\nLag = 90, Column = fault_code_type_4_count_per_usage2\nCPU times: user 1.03 s, sys: 411 ms, total: 1.44 s\nWall time: 30min 16s\n"
]
],
[
[
"### Rolling Max",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Load result dataset from Notebook #1\ndf = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')\n\nfor lag_n in lags:\n wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)\n for col_name in rolling_features:\n df = df.withColumn(col_name+'_rollingmax_'+str(lag_n), F.max(col(col_name)).over(wSpec))\n print(\"Lag = %d, Column = %s\" % (lag_n, col_name))\n\nrollingmax = df.select(['key'] + list(s for s in df.columns if \"rollingmax\" in s))\n\n# Save the intermediate result for downstream work\nrollingmax.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/rollingmax.parquet')\n",
"Lag = 3, Column = warn_type1_total\nLag = 3, Column = warn_type2_total\nLag = 3, Column = pca_1_warn\nLag = 3, Column = pca_2_warn\nLag = 3, Column = pca_3_warn\nLag = 3, Column = pca_4_warn\nLag = 3, Column = pca_5_warn\nLag = 3, Column = pca_6_warn\nLag = 3, Column = pca_7_warn\nLag = 3, Column = pca_8_warn\nLag = 3, Column = pca_9_warn\nLag = 3, Column = pca_10_warn\nLag = 3, Column = pca_11_warn\nLag = 3, Column = pca_12_warn\nLag = 3, Column = pca_13_warn\nLag = 3, Column = pca_14_warn\nLag = 3, Column = pca_15_warn\nLag = 3, Column = pca_16_warn\nLag = 3, Column = pca_17_warn\nLag = 3, Column = pca_18_warn\nLag = 3, Column = pca_19_warn\nLag = 3, Column = pca_20_warn\nLag = 3, Column = problem_type_1\nLag = 3, Column = problem_type_2\nLag = 3, Column = problem_type_3\nLag = 3, Column = problem_type_4\nLag = 3, Column = problem_type_1_per_usage1\nLag = 3, Column = problem_type_2_per_usage1\nLag = 3, Column = problem_type_3_per_usage1\nLag = 3, Column = problem_type_4_per_usage1\nLag = 3, Column = problem_type_1_per_usage2\nLag = 3, Column = problem_type_2_per_usage2\nLag = 3, Column = problem_type_3_per_usage2\nLag = 3, Column = problem_type_4_per_usage2\nLag = 3, Column = fault_code_type_1_count\nLag = 3, Column = fault_code_type_2_count\nLag = 3, Column = fault_code_type_3_count\nLag = 3, Column = fault_code_type_4_count\nLag = 3, Column = fault_code_type_1_count_per_usage1\nLag = 3, Column = fault_code_type_2_count_per_usage1\nLag = 3, Column = fault_code_type_3_count_per_usage1\nLag = 3, Column = fault_code_type_4_count_per_usage1\nLag = 3, Column = fault_code_type_1_count_per_usage2\nLag = 3, Column = fault_code_type_2_count_per_usage2\nLag = 3, Column = fault_code_type_3_count_per_usage2\nLag = 3, Column = fault_code_type_4_count_per_usage2\nLag = 7, Column = warn_type1_total\nLag = 7, Column = warn_type2_total\nLag = 7, Column = pca_1_warn\nLag = 7, Column = pca_2_warn\nLag = 7, Column = pca_3_warn\nLag = 7, Column = pca_4_warn\nLag = 7, Column = pca_5_warn\nLag = 7, Column = pca_6_warn\nLag = 7, Column = pca_7_warn\nLag = 7, Column = pca_8_warn\nLag = 7, Column = pca_9_warn\nLag = 7, Column = pca_10_warn\nLag = 7, Column = pca_11_warn\nLag = 7, Column = pca_12_warn\nLag = 7, Column = pca_13_warn\nLag = 7, Column = pca_14_warn\nLag = 7, Column = pca_15_warn\nLag = 7, Column = pca_16_warn\nLag = 7, Column = pca_17_warn\nLag = 7, Column = pca_18_warn\nLag = 7, Column = pca_19_warn\nLag = 7, Column = pca_20_warn\nLag = 7, Column = problem_type_1\nLag = 7, Column = problem_type_2\nLag = 7, Column = problem_type_3\nLag = 7, Column = problem_type_4\nLag = 7, Column = problem_type_1_per_usage1\nLag = 7, Column = problem_type_2_per_usage1\nLag = 7, Column = problem_type_3_per_usage1\nLag = 7, Column = problem_type_4_per_usage1\nLag = 7, Column = problem_type_1_per_usage2\nLag = 7, Column = problem_type_2_per_usage2\nLag = 7, Column = problem_type_3_per_usage2\nLag = 7, Column = problem_type_4_per_usage2\nLag = 7, Column = fault_code_type_1_count\nLag = 7, Column = fault_code_type_2_count\nLag = 7, Column = fault_code_type_3_count\nLag = 7, Column = fault_code_type_4_count\nLag = 7, Column = fault_code_type_1_count_per_usage1\nLag = 7, Column = fault_code_type_2_count_per_usage1\nLag = 7, Column = fault_code_type_3_count_per_usage1\nLag = 7, Column = fault_code_type_4_count_per_usage1\nLag = 7, Column = fault_code_type_1_count_per_usage2\nLag = 7, Column = fault_code_type_2_count_per_usage2\nLag = 7, Column = fault_code_type_3_count_per_usage2\nLag = 7, Column = 
fault_code_type_4_count_per_usage2\nLag = 14, Column = warn_type1_total\nLag = 14, Column = warn_type2_total\nLag = 14, Column = pca_1_warn\nLag = 14, Column = pca_2_warn\nLag = 14, Column = pca_3_warn\nLag = 14, Column = pca_4_warn\nLag = 14, Column = pca_5_warn\nLag = 14, Column = pca_6_warn\nLag = 14, Column = pca_7_warn\nLag = 14, Column = pca_8_warn\nLag = 14, Column = pca_9_warn\nLag = 14, Column = pca_10_warn\nLag = 14, Column = pca_11_warn\nLag = 14, Column = pca_12_warn\nLag = 14, Column = pca_13_warn\nLag = 14, Column = pca_14_warn\nLag = 14, Column = pca_15_warn\nLag = 14, Column = pca_16_warn\nLag = 14, Column = pca_17_warn\nLag = 14, Column = pca_18_warn\nLag = 14, Column = pca_19_warn\nLag = 14, Column = pca_20_warn\nLag = 14, Column = problem_type_1\nLag = 14, Column = problem_type_2\nLag = 14, Column = problem_type_3\nLag = 14, Column = problem_type_4\nLag = 14, Column = problem_type_1_per_usage1\nLag = 14, Column = problem_type_2_per_usage1\nLag = 14, Column = problem_type_3_per_usage1\nLag = 14, Column = problem_type_4_per_usage1\nLag = 14, Column = problem_type_1_per_usage2\nLag = 14, Column = problem_type_2_per_usage2\nLag = 14, Column = problem_type_3_per_usage2\nLag = 14, Column = problem_type_4_per_usage2\nLag = 14, Column = fault_code_type_1_count\nLag = 14, Column = fault_code_type_2_count\nLag = 14, Column = fault_code_type_3_count\nLag = 14, Column = fault_code_type_4_count\nLag = 14, Column = fault_code_type_1_count_per_usage1\nLag = 14, Column = fault_code_type_2_count_per_usage1\nLag = 14, Column = fault_code_type_3_count_per_usage1\nLag = 14, Column = fault_code_type_4_count_per_usage1\nLag = 14, Column = fault_code_type_1_count_per_usage2\nLag = 14, Column = fault_code_type_2_count_per_usage2\nLag = 14, Column = fault_code_type_3_count_per_usage2\nLag = 14, Column = fault_code_type_4_count_per_usage2\nLag = 30, Column = warn_type1_total\nLag = 30, Column = warn_type2_total\nLag = 30, Column = pca_1_warn\nLag = 30, Column = pca_2_warn\nLag = 30, Column = pca_3_warn\nLag = 30, Column = pca_4_warn\nLag = 30, Column = pca_5_warn\nLag = 30, Column = pca_6_warn\nLag = 30, Column = pca_7_warn\nLag = 30, Column = pca_8_warn\nLag = 30, Column = pca_9_warn\nLag = 30, Column = pca_10_warn\nLag = 30, Column = pca_11_warn\nLag = 30, Column = pca_12_warn\nLag = 30, Column = pca_13_warn\nLag = 30, Column = pca_14_warn\nLag = 30, Column = pca_15_warn\nLag = 30, Column = pca_16_warn\nLag = 30, Column = pca_17_warn\nLag = 30, Column = pca_18_warn\nLag = 30, Column = pca_19_warn\nLag = 30, Column = pca_20_warn\nLag = 30, Column = problem_type_1\nLag = 30, Column = problem_type_2\nLag = 30, Column = problem_type_3\nLag = 30, Column = problem_type_4\nLag = 30, Column = problem_type_1_per_usage1\nLag = 30, Column = problem_type_2_per_usage1\nLag = 30, Column = problem_type_3_per_usage1\nLag = 30, Column = problem_type_4_per_usage1\nLag = 30, Column = problem_type_1_per_usage2\nLag = 30, Column = problem_type_2_per_usage2\nLag = 30, Column = problem_type_3_per_usage2\nLag = 30, Column = problem_type_4_per_usage2\nLag = 30, Column = fault_code_type_1_count\nLag = 30, Column = fault_code_type_2_count\nLag = 30, Column = fault_code_type_3_count\nLag = 30, Column = fault_code_type_4_count\nLag = 30, Column = fault_code_type_1_count_per_usage1\nLag = 30, Column = fault_code_type_2_count_per_usage1\nLag = 30, Column = fault_code_type_3_count_per_usage1\nLag = 30, Column = fault_code_type_4_count_per_usage1\nLag = 30, Column = fault_code_type_1_count_per_usage2\nLag = 30, Column = 
fault_code_type_2_count_per_usage2\nLag = 30, Column = fault_code_type_3_count_per_usage2\nLag = 30, Column = fault_code_type_4_count_per_usage2\nLag = 90, Column = warn_type1_total\nLag = 90, Column = warn_type2_total\nLag = 90, Column = pca_1_warn\nLag = 90, Column = pca_2_warn\nLag = 90, Column = pca_3_warn\nLag = 90, Column = pca_4_warn\nLag = 90, Column = pca_5_warn\nLag = 90, Column = pca_6_warn\nLag = 90, Column = pca_7_warn\nLag = 90, Column = pca_8_warn\nLag = 90, Column = pca_9_warn\nLag = 90, Column = pca_10_warn\nLag = 90, Column = pca_11_warn\nLag = 90, Column = pca_12_warn\nLag = 90, Column = pca_13_warn\nLag = 90, Column = pca_14_warn\nLag = 90, Column = pca_15_warn\nLag = 90, Column = pca_16_warn\nLag = 90, Column = pca_17_warn\nLag = 90, Column = pca_18_warn\nLag = 90, Column = pca_19_warn\nLag = 90, Column = pca_20_warn\nLag = 90, Column = problem_type_1\nLag = 90, Column = problem_type_2\nLag = 90, Column = problem_type_3\nLag = 90, Column = problem_type_4\nLag = 90, Column = problem_type_1_per_usage1\nLag = 90, Column = problem_type_2_per_usage1\nLag = 90, Column = problem_type_3_per_usage1\nLag = 90, Column = problem_type_4_per_usage1\nLag = 90, Column = problem_type_1_per_usage2\nLag = 90, Column = problem_type_2_per_usage2\nLag = 90, Column = problem_type_3_per_usage2\nLag = 90, Column = problem_type_4_per_usage2\nLag = 90, Column = fault_code_type_1_count\nLag = 90, Column = fault_code_type_2_count\nLag = 90, Column = fault_code_type_3_count\nLag = 90, Column = fault_code_type_4_count\nLag = 90, Column = fault_code_type_1_count_per_usage1\nLag = 90, Column = fault_code_type_2_count_per_usage1\nLag = 90, Column = fault_code_type_3_count_per_usage1\nLag = 90, Column = fault_code_type_4_count_per_usage1\nLag = 90, Column = fault_code_type_1_count_per_usage2\nLag = 90, Column = fault_code_type_2_count_per_usage2\nLag = 90, Column = fault_code_type_3_count_per_usage2\nLag = 90, Column = fault_code_type_4_count_per_usage2\nCPU times: user 860 ms, sys: 316 ms, total: 1.18 s\nWall time: 24min 41s\n"
]
],
[
[
"### Rolling Min",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Load result dataset from Notebook #1\ndf = sqlContext.read.parquet('/mnt/resource/PysparkExample/notebook1_result.parquet')\n\nfor lag_n in lags:\n wSpec = Window.partitionBy('deviceid').orderBy('date').rowsBetween(1-lag_n, 0)\n for col_name in rolling_features:\n df = df.withColumn(col_name+'_rollingmin_'+str(lag_n), F.min(col(col_name)).over(wSpec))\n print(\"Lag = %d, Column = %s\" % (lag_n, col_name))\n\nrollingmin = df.select(['key'] + list(s for s in df.columns if \"rollingmin\" in s))\n\n# Save the intermediate result for downstream work\nrollingmin.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/rollingmin.parquet')\n",
"Lag = 3, Column = warn_type1_total\nLag = 3, Column = warn_type2_total\nLag = 3, Column = pca_1_warn\nLag = 3, Column = pca_2_warn\nLag = 3, Column = pca_3_warn\nLag = 3, Column = pca_4_warn\nLag = 3, Column = pca_5_warn\nLag = 3, Column = pca_6_warn\nLag = 3, Column = pca_7_warn\nLag = 3, Column = pca_8_warn\nLag = 3, Column = pca_9_warn\nLag = 3, Column = pca_10_warn\nLag = 3, Column = pca_11_warn\nLag = 3, Column = pca_12_warn\nLag = 3, Column = pca_13_warn\nLag = 3, Column = pca_14_warn\nLag = 3, Column = pca_15_warn\nLag = 3, Column = pca_16_warn\nLag = 3, Column = pca_17_warn\nLag = 3, Column = pca_18_warn\nLag = 3, Column = pca_19_warn\nLag = 3, Column = pca_20_warn\nLag = 3, Column = problem_type_1\nLag = 3, Column = problem_type_2\nLag = 3, Column = problem_type_3\nLag = 3, Column = problem_type_4\nLag = 3, Column = problem_type_1_per_usage1\nLag = 3, Column = problem_type_2_per_usage1\nLag = 3, Column = problem_type_3_per_usage1\nLag = 3, Column = problem_type_4_per_usage1\nLag = 3, Column = problem_type_1_per_usage2\nLag = 3, Column = problem_type_2_per_usage2\nLag = 3, Column = problem_type_3_per_usage2\nLag = 3, Column = problem_type_4_per_usage2\nLag = 3, Column = fault_code_type_1_count\nLag = 3, Column = fault_code_type_2_count\nLag = 3, Column = fault_code_type_3_count\nLag = 3, Column = fault_code_type_4_count\nLag = 3, Column = fault_code_type_1_count_per_usage1\nLag = 3, Column = fault_code_type_2_count_per_usage1\nLag = 3, Column = fault_code_type_3_count_per_usage1\nLag = 3, Column = fault_code_type_4_count_per_usage1\nLag = 3, Column = fault_code_type_1_count_per_usage2\nLag = 3, Column = fault_code_type_2_count_per_usage2\nLag = 3, Column = fault_code_type_3_count_per_usage2\nLag = 3, Column = fault_code_type_4_count_per_usage2\nLag = 7, Column = warn_type1_total\nLag = 7, Column = warn_type2_total\nLag = 7, Column = pca_1_warn\nLag = 7, Column = pca_2_warn\nLag = 7, Column = pca_3_warn\nLag = 7, Column = pca_4_warn\nLag = 7, Column = pca_5_warn\nLag = 7, Column = pca_6_warn\nLag = 7, Column = pca_7_warn\nLag = 7, Column = pca_8_warn\nLag = 7, Column = pca_9_warn\nLag = 7, Column = pca_10_warn\nLag = 7, Column = pca_11_warn\nLag = 7, Column = pca_12_warn\nLag = 7, Column = pca_13_warn\nLag = 7, Column = pca_14_warn\nLag = 7, Column = pca_15_warn\nLag = 7, Column = pca_16_warn\nLag = 7, Column = pca_17_warn\nLag = 7, Column = pca_18_warn\nLag = 7, Column = pca_19_warn\nLag = 7, Column = pca_20_warn\nLag = 7, Column = problem_type_1\nLag = 7, Column = problem_type_2\nLag = 7, Column = problem_type_3\nLag = 7, Column = problem_type_4\nLag = 7, Column = problem_type_1_per_usage1\nLag = 7, Column = problem_type_2_per_usage1\nLag = 7, Column = problem_type_3_per_usage1\nLag = 7, Column = problem_type_4_per_usage1\nLag = 7, Column = problem_type_1_per_usage2\nLag = 7, Column = problem_type_2_per_usage2\nLag = 7, Column = problem_type_3_per_usage2\nLag = 7, Column = problem_type_4_per_usage2\nLag = 7, Column = fault_code_type_1_count\nLag = 7, Column = fault_code_type_2_count\nLag = 7, Column = fault_code_type_3_count\nLag = 7, Column = fault_code_type_4_count\nLag = 7, Column = fault_code_type_1_count_per_usage1\nLag = 7, Column = fault_code_type_2_count_per_usage1\nLag = 7, Column = fault_code_type_3_count_per_usage1\nLag = 7, Column = fault_code_type_4_count_per_usage1\nLag = 7, Column = fault_code_type_1_count_per_usage2\nLag = 7, Column = fault_code_type_2_count_per_usage2\nLag = 7, Column = fault_code_type_3_count_per_usage2\nLag = 7, Column = 
fault_code_type_4_count_per_usage2\nLag = 14, Column = warn_type1_total\nLag = 14, Column = warn_type2_total\nLag = 14, Column = pca_1_warn\nLag = 14, Column = pca_2_warn\nLag = 14, Column = pca_3_warn\nLag = 14, Column = pca_4_warn\nLag = 14, Column = pca_5_warn\nLag = 14, Column = pca_6_warn\nLag = 14, Column = pca_7_warn\nLag = 14, Column = pca_8_warn\nLag = 14, Column = pca_9_warn\nLag = 14, Column = pca_10_warn\nLag = 14, Column = pca_11_warn\nLag = 14, Column = pca_12_warn\nLag = 14, Column = pca_13_warn\nLag = 14, Column = pca_14_warn\nLag = 14, Column = pca_15_warn\nLag = 14, Column = pca_16_warn\nLag = 14, Column = pca_17_warn\nLag = 14, Column = pca_18_warn\nLag = 14, Column = pca_19_warn\nLag = 14, Column = pca_20_warn\nLag = 14, Column = problem_type_1\nLag = 14, Column = problem_type_2\nLag = 14, Column = problem_type_3\nLag = 14, Column = problem_type_4\nLag = 14, Column = problem_type_1_per_usage1\nLag = 14, Column = problem_type_2_per_usage1\nLag = 14, Column = problem_type_3_per_usage1\nLag = 14, Column = problem_type_4_per_usage1\nLag = 14, Column = problem_type_1_per_usage2\nLag = 14, Column = problem_type_2_per_usage2\nLag = 14, Column = problem_type_3_per_usage2\nLag = 14, Column = problem_type_4_per_usage2\nLag = 14, Column = fault_code_type_1_count\nLag = 14, Column = fault_code_type_2_count\nLag = 14, Column = fault_code_type_3_count\nLag = 14, Column = fault_code_type_4_count\nLag = 14, Column = fault_code_type_1_count_per_usage1\nLag = 14, Column = fault_code_type_2_count_per_usage1\nLag = 14, Column = fault_code_type_3_count_per_usage1\nLag = 14, Column = fault_code_type_4_count_per_usage1\nLag = 14, Column = fault_code_type_1_count_per_usage2\nLag = 14, Column = fault_code_type_2_count_per_usage2\nLag = 14, Column = fault_code_type_3_count_per_usage2\nLag = 14, Column = fault_code_type_4_count_per_usage2\nLag = 30, Column = warn_type1_total\nLag = 30, Column = warn_type2_total\nLag = 30, Column = pca_1_warn\nLag = 30, Column = pca_2_warn\nLag = 30, Column = pca_3_warn\nLag = 30, Column = pca_4_warn\nLag = 30, Column = pca_5_warn\nLag = 30, Column = pca_6_warn\nLag = 30, Column = pca_7_warn\nLag = 30, Column = pca_8_warn\nLag = 30, Column = pca_9_warn\nLag = 30, Column = pca_10_warn\nLag = 30, Column = pca_11_warn\nLag = 30, Column = pca_12_warn\nLag = 30, Column = pca_13_warn\nLag = 30, Column = pca_14_warn\nLag = 30, Column = pca_15_warn\nLag = 30, Column = pca_16_warn\nLag = 30, Column = pca_17_warn\nLag = 30, Column = pca_18_warn\nLag = 30, Column = pca_19_warn\nLag = 30, Column = pca_20_warn\nLag = 30, Column = problem_type_1\nLag = 30, Column = problem_type_2\nLag = 30, Column = problem_type_3\nLag = 30, Column = problem_type_4\nLag = 30, Column = problem_type_1_per_usage1\nLag = 30, Column = problem_type_2_per_usage1\nLag = 30, Column = problem_type_3_per_usage1\nLag = 30, Column = problem_type_4_per_usage1\nLag = 30, Column = problem_type_1_per_usage2\nLag = 30, Column = problem_type_2_per_usage2\nLag = 30, Column = problem_type_3_per_usage2\nLag = 30, Column = problem_type_4_per_usage2\nLag = 30, Column = fault_code_type_1_count\nLag = 30, Column = fault_code_type_2_count\nLag = 30, Column = fault_code_type_3_count\nLag = 30, Column = fault_code_type_4_count\nLag = 30, Column = fault_code_type_1_count_per_usage1\nLag = 30, Column = fault_code_type_2_count_per_usage1\nLag = 30, Column = fault_code_type_3_count_per_usage1\nLag = 30, Column = fault_code_type_4_count_per_usage1\nLag = 30, Column = fault_code_type_1_count_per_usage2\nLag = 30, Column = 
fault_code_type_2_count_per_usage2\nLag = 30, Column = fault_code_type_3_count_per_usage2\nLag = 30, Column = fault_code_type_4_count_per_usage2\nLag = 90, Column = warn_type1_total\nLag = 90, Column = warn_type2_total\nLag = 90, Column = pca_1_warn\nLag = 90, Column = pca_2_warn\nLag = 90, Column = pca_3_warn\nLag = 90, Column = pca_4_warn\nLag = 90, Column = pca_5_warn\nLag = 90, Column = pca_6_warn\nLag = 90, Column = pca_7_warn\nLag = 90, Column = pca_8_warn\nLag = 90, Column = pca_9_warn\nLag = 90, Column = pca_10_warn\nLag = 90, Column = pca_11_warn\nLag = 90, Column = pca_12_warn\nLag = 90, Column = pca_13_warn\nLag = 90, Column = pca_14_warn\nLag = 90, Column = pca_15_warn\nLag = 90, Column = pca_16_warn\nLag = 90, Column = pca_17_warn\nLag = 90, Column = pca_18_warn\nLag = 90, Column = pca_19_warn\nLag = 90, Column = pca_20_warn\nLag = 90, Column = problem_type_1\nLag = 90, Column = problem_type_2\nLag = 90, Column = problem_type_3\nLag = 90, Column = problem_type_4\nLag = 90, Column = problem_type_1_per_usage1\nLag = 90, Column = problem_type_2_per_usage1\nLag = 90, Column = problem_type_3_per_usage1\nLag = 90, Column = problem_type_4_per_usage1\nLag = 90, Column = problem_type_1_per_usage2\nLag = 90, Column = problem_type_2_per_usage2\nLag = 90, Column = problem_type_3_per_usage2\nLag = 90, Column = problem_type_4_per_usage2\nLag = 90, Column = fault_code_type_1_count\nLag = 90, Column = fault_code_type_2_count\nLag = 90, Column = fault_code_type_3_count\nLag = 90, Column = fault_code_type_4_count\nLag = 90, Column = fault_code_type_1_count_per_usage1\nLag = 90, Column = fault_code_type_2_count_per_usage1\nLag = 90, Column = fault_code_type_3_count_per_usage1\nLag = 90, Column = fault_code_type_4_count_per_usage1\nLag = 90, Column = fault_code_type_1_count_per_usage2\nLag = 90, Column = fault_code_type_2_count_per_usage2\nLag = 90, Column = fault_code_type_3_count_per_usage2\nLag = 90, Column = fault_code_type_4_count_per_usage2\nCPU times: user 870 ms, sys: 306 ms, total: 1.18 s\nWall time: 23min 27s\n"
]
],
[
[
"## Join result dataset from the five rolling compute cells:\n- Join in Spark is usually very slow, it is better to reduce the number of partitions before the join.\n- Check the number of partitions of the pyspark dataframe.\n- **repartition vs coalesce**. If we only want to reduce the number of partitions, it is better to use coalesce because repartition involves reshuffling which is computational more expensive and takes more time.\n<br>\n",
"_____no_output_____"
]
],
[
[
"# Import result dataset \nrollingmean = sqlContext.read.parquet('/mnt/resource/PysparkExample/data_rollingmean.parquet')\nrollingdiff = sqlContext.read.parquet('/mnt/resource/PysparkExample/rollingdiff.parquet')\nrollingstd = sqlContext.read.parquet('/mnt/resource/PysparkExample/rollingstd.parquet')\nrollingmax = sqlContext.read.parquet('/mnt/resource/PysparkExample/rollingmax.parquet')\nrollingmin = sqlContext.read.parquet('/mnt/resource/PysparkExample/rollingmin.parquet')\n\n# Check the number of partitions for each dataset\nprint(rollingmean.rdd.getNumPartitions())\nprint(rollingdiff.rdd.getNumPartitions())\nprint(rollingstd.rdd.getNumPartitions())\nprint(rollingmax.rdd.getNumPartitions())\nprint(rollingmin.rdd.getNumPartitions())\n",
"33\n33\n33\n31\n31\n"
],
[
"%%time\n\n# To make join faster, reduce the number of partitions (not necessarily to \"1\")\nrollingmean = rollingmean.coalesce(1)\nrollingdiff = rollingdiff.coalesce(1)\nrollingstd = rollingstd.coalesce(1)\nrollingmax = rollingmax.coalesce(1)\nrollingmin = rollingmin.coalesce(1)\n\nrolling_result = rollingmean.join(rollingdiff, 'key', 'inner')\\\n .join(rollingstd, 'key', 'inner')\\\n .join(rollingmax, 'key', 'inner')\\\n .join(rollingmin, 'key', 'inner')\n \n\n## Write the final result as parquet file for downstream work in Notebook_3\nrolling_result.write.mode('overwrite').parquet('/mnt/resource/PysparkExample/notebook2_result.parquet')\n",
"CPU times: user 901 ms, sys: 303 ms, total: 1.2 s\nWall time: 1h 50min 38s\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7b3dc16e3f23ebd221d50e3affc74f25c6f6c19 | 12,416 | ipynb | Jupyter Notebook | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet | bd6fc4cb0962226a975057978d5de9f0053985ba | [
"Apache-2.0"
] | 7 | 2021-07-04T04:11:32.000Z | 2022-01-25T07:11:55.000Z | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet | bd6fc4cb0962226a975057978d5de9f0053985ba | [
"Apache-2.0"
] | 11 | 2021-04-30T18:16:16.000Z | 2022-03-18T17:39:36.000Z | misc/are-unique-graphs-unique-clusters.ipynb | exalearn/hydronet | bd6fc4cb0962226a975057978d5de9f0053985ba | [
"Apache-2.0"
] | 3 | 2020-12-04T17:48:42.000Z | 2021-10-07T20:06:07.000Z | 36.517647 | 4,932 | 0.692171 | [
[
[
"# Are Graphs Unique?\nThis notebook shows how to determine if each entry in the HydroNet dataset represents a unique graph. ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nfrom hydronet.data import graph_from_dict\nfrom multiprocessing import Pool\nfrom functools import partial\nfrom tqdm import tqdm\nimport networkx as nx\nimport pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"Configuration",
"_____no_output_____"
]
],
[
[
"cluster_size = 18",
"_____no_output_____"
]
],
[
[
"## Load in the Data\nLoad in a small dataset from disk",
"_____no_output_____"
]
],
[
[
"%%time\ndata = pd.read_json('../data/output/atomic_valid.json.gz', lines=True)\nprint(f'Loaded {len(data)} records')",
"Loaded 224018 records\nCPU times: user 33.1 s, sys: 5.7 s, total: 38.8 s\nWall time: 38.8 s\n"
]
],
[
[
"## Find Pairs of Isomorphic Graphs\nAssess how many training records are isomorphic",
"_____no_output_____"
]
],
[
[
"data.query(f'n_waters=={cluster_size}', inplace=True)\nprint(f'Downselected to {len(data)} graphs')",
"Downselected to 5714 graphs\n"
]
],
[
[
"Generate networkx objects for each",
"_____no_output_____"
]
],
[
[
"%%time\ndata['nx'] = data.apply(graph_from_dict, axis=1)",
"CPU times: user 1.66 s, sys: 170 ms, total: 1.83 s\nWall time: 1.83 s\n"
]
],
[
[
"Compute which graphs are isomorphic",
"_____no_output_____"
]
],
[
[
"data.reset_index(inplace=True)",
"_____no_output_____"
],
[
"matches = [[] for _ in range(len(data))]\nn_matches = 0\nwith Pool() as p:\n for i, g in tqdm(enumerate(data['nx']), total=len(data)):\n f = partial(nx.algorithms.is_isomorphic, g, node_match=dict.__eq__, edge_match=dict.__eq__)\n is_match = p.map(f, data['nx'].iloc[i+1:])\n for j, hit in enumerate(is_match):\n if hit:\n n_matches += 1\n j_real = i + j + 1\n matches[i].append(j_real)\n matches[j_real].append(i)\nprint(f'Found {n_matches} pairs of isomorphic graphs')",
"100%|██████████| 5714/5714 [26:46<00:00, 3.56it/s] \n"
]
],
[
[
"Add to the dataframe for safe keeping",
"_____no_output_____"
]
],
[
[
"data['matches'] = matches",
"_____no_output_____"
],
[
"data['n_matches'] = data['matches'].apply(len)",
"_____no_output_____"
]
],
[
[
"## Assess Energy Differences between Isomorphic Graphs\nWe want to know how large they are. Does each graph represent a local minimum, or they actually very different in energy?",
"_____no_output_____"
]
],
[
[
"energy_diffs = []\nfor rid, row in data.query('n_matches>0').iterrows():\n for m in row['matches']:\n if m > rid:\n energy_diffs.append(abs(row['energy'] - data.iloc[m]['energy']))",
"_____no_output_____"
],
[
"print(f'Maximum: {np.max(energy_diffs):.2e} kcal/mol')\nprint(f'Median: {np.percentile(energy_diffs, 50):.2e} kcal/mol')\nprint(f'Minimum: {np.min(energy_diffs):.2e} kcal/mol')",
"Maximum: 1.50e-01 kcal/mol\nMedian: 5.48e-02 kcal/mol\nMinimum: 1.04e-02 kcal/mol\n"
],
[
"fig, ax = plt.subplots(figsize=(3.5, 2.5))\n\nbins = np.logspace(-4, 1, 32)\nax.hist(energy_diffs, bins=bins)\nax.set_xscale('log')\n\nax.set_xlabel('$\\Delta E$ (kcal/mol)')\nax.set_ylabel('Frequency')\nfig.tight_layout()\nfig.savefig(f'figures/energy-difference-isomorphic-graphs-size-{cluster_size}.png', dpi=320)",
"_____no_output_____"
]
],
[
[
"For comparision, print out the range of energies for clusters",
"_____no_output_____"
]
],
[
[
"(data['energy'] - data['energy'].min()).describe()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b3df6f5ae2585433b00c9ae4a27f2eab9fb360 | 19,386 | ipynb | Jupyter Notebook | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks | a63c6115f89a1d911732e47f972af09ad03a1c91 | [
"Apache-2.0"
] | null | null | null | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks | a63c6115f89a1d911732e47f972af09ad03a1c91 | [
"Apache-2.0"
] | null | null | null | notebooks/213-question-answering/213-question-answering.ipynb | mzegla/openvino_notebooks | a63c6115f89a1d911732e47f972af09ad03a1c91 | [
"Apache-2.0"
] | null | null | null | 36.371482 | 757 | 0.591509 | [
[
[
"# Interactive question answering with OpenVINO\n\nThis demo shows interactive question answering with OpenVINO. We use [small BERT-large-like model](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/bert-small-uncased-whole-word-masking-squad-int8-0002) distilled and quantized to INT8 on SQuAD v1.1 training set from larger BERT-large model. The model comes from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). At the bottom of this notebook, you will see live inference results from your inputs.",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import time\nfrom urllib import parse\n\nimport numpy as np\nfrom openvino.runtime import Core, Dimension\n\nimport html_reader as reader\nimport tokens_bert as tokens",
"_____no_output_____"
]
],
[
[
"## The model\n\n### Download the model\n\nWe use `omz_downloader`, which is a command-line tool from the `openvino-dev` package. `omz_downloader` automatically creates a directory structure and downloads the selected model. If the model is already downloaded, this step is skipped.\n\nYou can download and use any of the following models: `bert-large-uncased-whole-word-masking-squad-0001`, `bert-large-uncased-whole-word-masking-squad-int8-0001`, `bert-small-uncased-whole-word-masking-squad-0001`, `bert-small-uncased-whole-word-masking-squad-0002`, `bert-small-uncased-whole-word-masking-squad-int8-0002`, just change the model name below. Any of these models are already converted to OpenVINO Intermediate Representation (IR), so there is no need to use `omz_converter`.",
"_____no_output_____"
]
],
[
[
"# directory where model will be downloaded\nbase_model_dir = \"model\"\n\n# desired precision\nprecision = \"FP16-INT8\"\n\n# model name as named in Open Model Zoo\nmodel_name = \"bert-small-uncased-whole-word-masking-squad-int8-0002\"\n\nmodel_path = f\"model/intel/{model_name}/{precision}/{model_name}.xml\"\nmodel_weights_path = f\"model/intel/{model_name}/{precision}/{model_name}.bin\"\n\ndownload_command = f\"omz_downloader \" \\\n f\"--name {model_name} \" \\\n f\"--precision {precision} \" \\\n f\"--output_dir {base_model_dir} \" \\\n f\"--cache_dir {base_model_dir}\"\n! $download_command",
"_____no_output_____"
]
],
[
[
"### Load the model\n\nDownloaded models are located in a fixed structure, which indicates vendor, model name and precision. Only a few lines of code are required to run the model. First, we create an Inference Engine object. Then we read the network architecture and model weights from the .xml and .bin files. Finally, we compile the network for the desired device. Because of using dynamic shapes we can run code only on `CPU`.",
"_____no_output_____"
]
],
[
[
"# initialize inference engine\nie_core = Core()\n\n# read the model and corresponding weights from file\nmodel = ie_core.read_model(model=model_path, weights=model_weights_path)\n\n# assign dynamic shapes to every input layer\nfor input_layer in model.inputs:\n input_shape = input_layer.partial_shape\n input_shape[1] = Dimension()\n model.reshape({input_layer: input_shape})\n\n# compile the model for the CPU\ncompiled_model = ie_core.compile_model(model=model, device_name=\"CPU\")\n\n# get input and output names of nodes\ninput_keys = list(compiled_model.inputs)\noutput_keys = list(compiled_model.outputs)",
"_____no_output_____"
]
],
[
[
"Input keys are the names of the input nodes and output keys contain names of output nodes of the network. In the case of the BERT-large-like model, we have four inputs and two outputs.",
"_____no_output_____"
]
],
[
[
"[i.any_name for i in input_keys], [o.any_name for o in output_keys]",
"_____no_output_____"
]
],
[
[
"## Processing\n\nNLP models usually take a list of tokens as standard input. A token is a single word converted to some integer. To provide the proper input, we need the vocabulary for such mapping. We also define some special tokens like separators and a function to load the content from provided URLs.",
"_____no_output_____"
]
],
[
[
"# path to vocabulary file\nvocab_file_path = \"data/vocab.txt\"\n\n# create dictionary with words and their indices\nvocab = tokens.load_vocab_file(vocab_file_path)\n\n# define special tokens\ncls_token = vocab[\"[CLS]\"]\nsep_token = vocab[\"[SEP]\"]\n\n\n# function to load text from given urls\ndef load_context(sources):\n input_urls = []\n paragraphs = []\n for source in sources:\n result = parse.urlparse(source)\n if all([result.scheme, result.netloc]):\n input_urls.append(source)\n else:\n paragraphs.append(source)\n\n paragraphs.extend(reader.get_paragraphs(input_urls))\n # produce one big context string\n return \"\\n\".join(paragraphs)",
"_____no_output_____"
]
],
[
[
"\n### Preprocessing\n\nThe main input (`input_ids`) to used BERT model consist of two parts: question tokens and context tokens separated by some special tokens. We also need to provide: `attention_mask`, which is a sequence of integer values representing the mask of valid values in the input; `token_type_ids`, which is a sequence of integer values representing the segmentation of the `input_ids` into question and context; `position_ids`, which is a sequence of integer values from 0 to length of input, extended by separation tokens, representing the position index for each input token. To know more about input, please read [this](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/bert-small-uncased-whole-word-masking-squad-int8-0002#input).",
"_____no_output_____"
]
],
[
[
"# generator of a sequence of inputs\ndef prepare_input(question_tokens, context_tokens, input_keys):\n input_ids = [cls_token] + question_tokens + [sep_token] + context_tokens + [sep_token]\n # 1 for any index\n attention_mask = [1] * len(input_ids)\n # 0 for question tokens, 1 for context part\n token_type_ids = [0] * (len(question_tokens) + 2) + [1] * (len(context_tokens) + 1)\n\n # create input to feed the model\n input_dict = {\n \"input_ids\": np.array([input_ids], dtype=np.int32),\n \"attention_mask\": np.array([attention_mask], dtype=np.int32),\n \"token_type_ids\": np.array([token_type_ids], dtype=np.int32),\n }\n\n # some models require additional position_ids\n if \"position_ids\" in [i_key.any_name for i_key in input_keys]:\n position_ids = np.arange(len(input_ids))\n input_dict[\"position_ids\"] = np.array([position_ids], dtype=np.int32)\n\n return input_dict",
"_____no_output_____"
]
],
[
[
"### Postprocessing\n\nThe results from the network are raw (logits). We need to use the softmax function to get the probability distribution. Then, we are looking for the best answer in the current part of the context (the highest score) and we return the score and the context range for the answer.",
"_____no_output_____"
]
],
[
[
"# based on https://github.com/openvinotoolkit/open_model_zoo/blob/bf03f505a650bafe8da03d2747a8b55c5cb2ef16/demos/common/python/openvino/model_zoo/model_api/models/bert.py#L163\ndef postprocess(output_start, output_end, question_tokens, context_tokens_start_end, input_size):\n\n def get_score(logits):\n out = np.exp(logits)\n return out / out.sum(axis=-1)\n\n # get start-end scores for context\n score_start = get_score(output_start)\n score_end = get_score(output_end)\n\n # index of first context token in tensor\n context_start_idx = len(question_tokens) + 2\n # index of last+1 context token in tensor\n context_end_idx = input_size - 1\n\n # find product of all start-end combinations to find the best one\n max_score, max_start, max_end = find_best_answer_window(start_score=score_start,\n end_score=score_end,\n context_start_idx=context_start_idx,\n context_end_idx=context_end_idx)\n\n # convert to context text start-end index\n max_start = context_tokens_start_end[max_start][0]\n max_end = context_tokens_start_end[max_end][1]\n\n return max_score, max_start, max_end\n\n\n# based on https://github.com/openvinotoolkit/open_model_zoo/blob/bf03f505a650bafe8da03d2747a8b55c5cb2ef16/demos/common/python/openvino/model_zoo/model_api/models/bert.py#L188\ndef find_best_answer_window(start_score, end_score, context_start_idx, context_end_idx):\n context_len = context_end_idx - context_start_idx\n\n score_mat = np.matmul(\n start_score[context_start_idx:context_end_idx].reshape((context_len, 1)),\n end_score[context_start_idx:context_end_idx].reshape((1, context_len)),\n )\n \n # reset candidates with end before start\n score_mat = np.triu(score_mat)\n # reset long candidates (>16 words)\n score_mat = np.tril(score_mat, 16)\n # find the best start-end pair\n max_s, max_e = divmod(score_mat.flatten().argmax(), score_mat.shape[1])\n max_score = score_mat[max_s, max_e]\n\n return max_score, max_s, max_e",
"_____no_output_____"
]
],
[
[
" Firstly, we need to create a list of tokens from the context and the question. Then, we are looking for the best answer in the context. The best answer should come with the highest score.",
"_____no_output_____"
]
],
[
[
"def get_best_answer(question, context, vocab, input_keys):\n # convert context string to tokens\n context_tokens, context_tokens_start_end = tokens.text_to_tokens(text=context.lower(),\n vocab=vocab)\n # convert question string to tokens\n question_tokens, _ = tokens.text_to_tokens(text=question.lower(), vocab=vocab)\n\n network_input = prepare_input(question_tokens, context_tokens, input_keys)\n input_size = len(context_tokens) + len(question_tokens) + 3\n\n # openvino inference\n request = compiled_model.create_infer_request()\n request.infer(inputs=network_input)\n\n # postprocess the result getting the score and context range for the answer\n score_start_end = postprocess(output_start=request.get_tensor(name=\"output_s\").data[0],\n output_end=request.get_tensor(name=\"output_e\").data[0],\n question_tokens=question_tokens,\n context_tokens_start_end=context_tokens_start_end,\n input_size=input_size)\n\n # return the part of the context, which is already an answer\n return context[score_start_end[1]:score_start_end[2]], score_start_end[0]",
"_____no_output_____"
]
],
[
[
"### Main Processing Function\n\nRun question answering on specific knowledge base and iterate through the questions.",
"_____no_output_____"
]
],
[
[
"def run_question_answering(sources):\n print(f\"Context: {sources}\", flush=True)\n context = load_context(sources)\n\n if len(context) == 0:\n print(\"Error: Empty context or outside paragraphs\")\n return\n\n while True:\n question = input()\n # if no question - break\n if question == \"\":\n break\n\n # measure processing time\n start_time = time.perf_counter()\n answer, score = get_best_answer(question=question, context=context, vocab=vocab, input_keys=input_keys)\n end_time = time.perf_counter()\n\n print(f\"Question: {question}\")\n print(f\"Answer: {answer}\")\n print(f\"Score: {score:.2f}\")\n print(f\"Time: {end_time - start_time:.2f}s\")",
"_____no_output_____"
]
],
[
[
"## Run\n\n### Run on local paragraphs\n\nChange sources to your own to answer your questions. You can use as many sources as you want. Usually, you need to wait a few seconds for the answer, but the longer context the longer the waiting time. The model is very limited and sensitive for the input. The answer can depend on whether there is a question mark at the end. The model will try to answer any of your questions even there is no good answer in the context, so in that case, you can see random results.\n\nSample source: Computational complexity theory paragraph (from [here](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/Computational_complexity_theory.html))\n\nSample questions:\n- What is the term for a task that generally lends itself to being solved by a computer?\n- By what main attribute are computational problems classified utilizing computational complexity theory?\n- What branch of theoretical computer science deals with broadly classifying computational problems by difficulty and class of relationship?\n\nIf you want to stop the processing just put an empty string.\n\n*Note: Firstly, run the code below and then put your questions in the box.*",
"_____no_output_____"
]
],
[
[
"sources = [\"Computational complexity theory is a branch of the theory of computation in theoretical computer \"\n \"science that focuses on classifying computational problems according to their inherent difficulty, \"\n \"and relating those classes to each other. A computational problem is understood to be a task that \"\n \"is in principle amenable to being solved by a computer, which is equivalent to stating that the \"\n \"problem may be solved by mechanical application of mathematical steps, such as an algorithm.\"]\n\nrun_question_answering(sources)",
"_____no_output_____"
]
],
[
[
"### Run on websites\n\nYou can also provide urls. Note that the context (knowledge base) is built from website paragraphs. If some information is outside the paragraphs, the algorithm won't able to find it.\n\nSample source: [OpenVINO wiki](https://en.wikipedia.org/wiki/OpenVINO)\n\nSample questions:\n- What does OpenVINO mean?\n- What is the license for OpenVINO?\n- Where can you deploy OpenVINO code?\n\nIf you want to stop the processing just put an empty string.\n\n*Note: Firstly, run the code below and then put your questions in the box.*",
"_____no_output_____"
]
],
[
[
"sources = [\"https://en.wikipedia.org/wiki/OpenVINO\"]\n\nrun_question_answering(sources)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b3e87dc31e615d0934fb31245fcac7eed7c71f | 16,671 | ipynb | Jupyter Notebook | labs/lab10.ipynb | AlvaroMarambio/mat281_portfolio | 570c55ce8b75603eca3cca2bf6460445828850a9 | [
"MIT"
] | null | null | null | labs/lab10.ipynb | AlvaroMarambio/mat281_portfolio | 570c55ce8b75603eca3cca2bf6460445828850a9 | [
"MIT"
] | null | null | null | labs/lab10.ipynb | AlvaroMarambio/mat281_portfolio | 570c55ce8b75603eca3cca2bf6460445828850a9 | [
"MIT"
] | null | null | null | 66.951807 | 10,032 | 0.80685 | [
[
[
"# Laboratorio 10",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.metrics import plot_confusion_matrix, classification_report, accuracy_score, recall_score, f1_score\n\n%matplotlib inline",
"_____no_output_____"
],
[
"breast_cancer = load_breast_cancer()\nX, y = breast_cancer.data, breast_cancer.target\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\ntarget_names = breast_cancer.target_names",
"_____no_output_____"
]
],
[
[
"## Ejercicio 1\n\n(1 pto.)\n\nAjusta una regresión logística a los datos de entrenamiento y obtén el _accuracy_ con los datos de test. Utiliza el argumento `n_jobs` igual a $-1$, si aún así no converge aumenta el valor de `max_iter`.\n\nHint: Recuerda que el _accuracy_ es el _score_ por defecto en los modelos de clasificación de scikit-learn.",
"_____no_output_____"
]
],
[
[
"# No hay que hace validacion cruzada pq la regresion logistica no tiene hiperparametros\nlr = LogisticRegression(max_iter=2100, n_jobs=-1) # ya con 2100 iteraciones aproximadamente converge aproximadamente al mismo valor que con mas iteraciones\nlr.fit(X_train, y_train)\ny_pred = lr.predict(X_test)\nprint(f\"Logistic Regression accuracy: {accuracy_score(y_test, y_pred):0.2f}\") #lr.score(X_test, y_test)",
"Logistic Regression accuracy: 0.98\n"
]
],
[
[
"## Ejercicio 2\n\n(1 pto.)\n\nUtiliza `GridSearchCV` con 5 _folds_ para encontrar el mejor valor de `n_neighbors` de un modelo KNN.",
"_____no_output_____"
]
],
[
[
"knn =KNeighborsClassifier() # Defino el modelo de KNN\nknn_grid = {\"n_neighbors\": np.arange(2, 31)} # Defino una grilla\n\nknn_cv = GridSearchCV( # Hago validacion cruzada\n KNeighborsClassifier(), # modelo a hiperparametrizar\n param_grid=knn_grid, #defino la grilla a utilizar\n cv=5, # Defino los 5 fold\n n_jobs=-1 # Uso todos los nucleos\n)\nknn_cv.fit(X_train, y_train) # Entrenarla\nknn_cv.best_params_\ny_pred1 = knn_cv.predict(X_test) ",
"_____no_output_____"
],
[
"#knn_cv.best_score_ \n#knn_cv.best_estimator_\nknn_cv.best_params_",
"_____no_output_____"
],
[
"print(f\"KNN accuracy: {accuracy_score(y_test, y_pred1):0.2f}\") #Imorimir el acurracy del moejor modelo de knn con el mejor n_neighbors",
"KNN accuracy: 0.96\n"
]
],
[
[
"## Ejercicio 3\n\n(1 pto.)\n\n¿Cuál modelo escogerías basándote en los resultados anteriores? Justifica",
"_____no_output_____"
],
[
"__Respuesta:__ El mejor modelo de los dos utilizados es el de regresion logistica ya que tiene un accuracy de 0.98 con 2100 iteraciones en comparacion con el de KNN que tiene un accuracy de 0.96 utilizando 5 fold.",
"_____no_output_____"
],
[
"## Ejercicio 4\n\n(1 pto.)\n\nPara el modelo seleccionado en el ejercicio anterior.\n\n* Grafica la matriz de confusión (no olvides colocar los nombres originales en los _labels_).\n* Imprime el reporte de clasificación.",
"_____no_output_____"
]
],
[
[
"# Me quedo con el modelo de regresion logistica \nplot_confusion_matrix(lr, X_test, y_test, display_labels=target_names) # en vez de tirar 0 y 1 te salagan valores reales malignt bening\nplt.show()",
"_____no_output_____"
],
[
"# Uso el y_pred que definí en el modelo de regresion logistica\nprint(classification_report(y_test, y_pred, target_names=breast_cancer.target_names)) ",
" precision recall f1-score support\n\n malignant 0.97 0.97 0.97 63\n benign 0.98 0.98 0.98 108\n\n accuracy 0.98 171\n macro avg 0.97 0.97 0.97 171\nweighted avg 0.98 0.98 0.98 171\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e7b3f3c54d90601eb5a2830945def88e5d0842c3 | 6,237 | ipynb | Jupyter Notebook | DSFS Chapter 11 - Machine Learning.ipynb | Kladar/dsfs | 09c77d0277d0899a7d929d0b37020a9ece24725e | [
"MIT"
] | null | null | null | DSFS Chapter 11 - Machine Learning.ipynb | Kladar/dsfs | 09c77d0277d0899a7d929d0b37020a9ece24725e | [
"MIT"
] | null | null | null | DSFS Chapter 11 - Machine Learning.ipynb | Kladar/dsfs | 09c77d0277d0899a7d929d0b37020a9ece24725e | [
"MIT"
] | 3 | 2020-01-28T06:40:29.000Z | 2021-07-30T20:21:09.000Z | 33.352941 | 1,196 | 0.579606 | [
[
[
"# Chapter 11 - Machine Learning\n\nWoo! #machinelearning #ml #AI",
"_____no_output_____"
],
[
"Data science is a lot of reformatting of business problems into data problems, and then collecting, cleaning, formatting, and restructuring data. ML is almost an afterthought. But it is an essential afterthought. Intro to this chapter is a good one. ",
"_____no_output_____"
]
],
[
[
"# supervised models \n# need to create some learnin data\n\ndef split_data(data, prob):\n results = [], []\n for row in data:\n results[0 if random.random() < prob else 1].append(row)\n return results\n\ndef train_test_split(x, y, test_pct):\n data = zip(x,y)\n train, test = split_data(data, 1 - test_pct)\n x_train, y_train = zip(*train)\n x_test, y_test = zip(*test)\n return x_train, x_test, y_train, y_test\n\n# then you can create your little model\nmodel = SomeKindOfModel()\nx_train, x_test, y_train, y_test = train_test_split(xs, ys, 0.33)\nmodel.train(x_train, y_train)\n\nperformance = model.test(x_test, y_test)\n\n",
"_____no_output_____"
]
],
[
[
"Models aren't necessarily graded on accuracy. If we said every person named Luke will not develop leukemia, we'd be right 98% of the time.\n\nSee the book (or a google search) for the confusion matrix, which describes true positives, true negatives, false positives (Type I error), and false negatives (Type 2 error)",
"_____no_output_____"
]
],
[
[
"def accuracy(tp, fp, fn, tn):\n correct = tp + tn\n total = tp+tn+fp+fn\n return correct / total\n\nprint(accuracy(70, 4930, 13930, 981070))",
"0.98114\n"
],
[
"# precision is accuracy of positive predictions\ndef precision(tp, fp, fn, tn):\n return tp / (tp+fp)\nprint(precision(70,4930,13930,981070))\n\n# recall is the fraction of the posinives identified\ndef recall(tp, fp, fn, tn):\n return tp / (tp + fn)\nprint(recall(70,4930,13930,981070))\n\n# both terrible = terrible model",
"0.014\n0.005\n"
],
[
"# sometimes these are combined into an f1 score\n\ndef f1_score(tp, fp, fn, tn):\n p = precision(tp, fp, fn, tn)\n r = recall(tp, fp, fn, tn)\n \n return 2 * p * r / (p+r)\n\nprint(f1_score(70,4930,13930,981070))",
"0.00736842105263158\n"
]
],
[
[
"## Bias vs Variance Trades-off\n\nFamously. Worth reading the chapter here to the end. The next chapters go into a bunch of different machine learing models we can build. I love this book so far and I like where it is headed.",
"_____no_output_____"
],
[
"## This concludes Chapter 10. \n\nA myriad of plethora of shittons of ML stuff exists on the web. After reading this book, I realize I should go finish my self-driving car nanodegree rather than doing the ML one. That I can learn if I have a source of data. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e7b43cc6a7f787284843b5505939a97bf1d8280f | 502,447 | ipynb | Jupyter Notebook | demo/mouse_data.ipynb | reactome/descartes | 7e7f21c5ccdf42b867db9e68fe0cb7a17d06fb25 | [
"Apache-2.0"
] | 2 | 2021-08-02T18:09:07.000Z | 2022-01-18T08:29:59.000Z | demo/mouse_data.ipynb | reactome/descartes | 7e7f21c5ccdf42b867db9e68fe0cb7a17d06fb25 | [
"Apache-2.0"
] | 5 | 2021-06-22T22:27:23.000Z | 2021-08-04T02:04:09.000Z | demo/mouse_data.ipynb | reactome/descartes_rpa | 7e7f21c5ccdf42b867db9e68fe0cb7a17d06fb25 | [
"Apache-2.0"
] | null | null | null | 129.429933 | 166,836 | 0.750742 | [
[
[
"# Annotating pathway into mouse Single-Cell clusters\n\nThis tutorial shows how to use the descartes_rpa module with scanpy formated data outside of Descartes. Data from the [Trajectory inference for hematopoiesis in mouse](https://scanpy-tutorials.readthedocs.io/en/latest/paga-paul15.html) tutorial will be used.",
"_____no_output_____"
]
],
[
[
"import scanpy as sc",
"_____no_output_____"
],
[
"adata = sc.datasets.paul15()",
"WARNING: In Scanpy 0.*, this returned logarithmized data. Now it returns non-logarithmized data.\n... storing 'paul15_clusters' as categorical\nTrying to set attribute `.uns` of view, copying.\n"
],
[
"adata.X = adata.X.astype('float64') # this is not required and results will be comparable without it",
"_____no_output_____"
],
[
"sc.pp.recipe_zheng17(adata)",
"_____no_output_____"
],
[
"sc.tl.pca(adata, svd_solver='arpack')",
"_____no_output_____"
],
[
"sc.pp.neighbors(adata, n_neighbors=4, n_pcs=20)",
"_____no_output_____"
],
[
"sc.tl.leiden(adata)",
"_____no_output_____"
]
],
[
[
"### Since this dataset is from mouse (Mus musculus), we pass its species as input",
"_____no_output_____"
]
],
[
[
"from descartes_rpa import get_pathways_for_group",
"_____no_output_____"
],
[
"get_pathways_for_group(adata, groupby=\"paul15_clusters\", species=\"Mus musculus\")",
"/home/joao/miniconda3/envs/descartes-rpa/lib/python3.9/site-packages/scanpy/tools/_rank_genes_groups.py:419: RuntimeWarning: invalid value encountered in log2\n self.stats[group_name, 'logfoldchanges'] = np.log2(\n"
]
],
[
[
"### We can look at the top 2 marker genes for each cluster",
"_____no_output_____"
]
],
[
[
"from descartes_rpa.pl import marker_genes",
"_____no_output_____"
],
[
"marker_genes(adata, n_genes=2)",
"WARNING: dendrogram data not found (using key=dendrogram_paul15_clusters). Running `sc.tl.dendrogram` with default parameters. For fine tuning it is recommended to run `sc.tl.dendrogram` independently.\nWARNING: saving figure to file dotplot_marker_genes.pdf\n"
]
],
[
[
"### Also, we can look at the shared pathways between clusters",
"_____no_output_____"
]
],
[
[
"from descartes_rpa.pl import shared_pathways",
"_____no_output_____"
],
[
"shared_pathways(adata, clusters=[\"9GMP\", \"1Ery\", \"17Neu\", \"4Ery\"])",
"_____no_output_____"
],
[
"from descartes_rpa import get_shared",
"_____no_output_____"
],
[
"get_shared(adata, clusters=[\"1Ery\", \"4Ery\"])",
"_____no_output_____"
],
[
"from descartes_rpa.pl import pathways",
"_____no_output_____"
],
[
"adata.uns[\"pathways\"].keys()",
"_____no_output_____"
],
[
"pathways(adata, \"18Eos\")",
"_____no_output_____"
],
[
"pathways(adata, \"3Ery\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b44f4fc42da5ec349a6dd8786c2d8420ecfe67 | 4,817 | ipynb | Jupyter Notebook | ipynb/Djibouti.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 2 | 2020-06-19T09:16:14.000Z | 2021-01-24T17:47:56.000Z | ipynb/Djibouti.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 8 | 2020-04-20T16:49:49.000Z | 2021-12-25T16:54:19.000Z | ipynb/Djibouti.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 4 | 2020-04-20T13:24:45.000Z | 2021-01-29T11:12:12.000Z | 28.844311 | 162 | 0.511314 | [
[
[
"# Djibouti\n\n* Homepage of project: https://oscovida.github.io\n* Plots are explained at http://oscovida.github.io/plots.html\n* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Djibouti.ipynb)",
"_____no_output_____"
]
],
[
[
"import datetime\nimport time\n\nstart = datetime.datetime.now()\nprint(f\"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}\")",
"_____no_output_____"
],
[
"%config InlineBackend.figure_formats = ['svg']\nfrom oscovida import *",
"_____no_output_____"
],
[
"overview(\"Djibouti\", weeks=5);",
"_____no_output_____"
],
[
"overview(\"Djibouti\");",
"_____no_output_____"
],
[
"compare_plot(\"Djibouti\", normalise=True);\n",
"_____no_output_____"
],
[
"# load the data\ncases, deaths = get_country_data(\"Djibouti\")\n\n# get population of the region for future normalisation:\ninhabitants = population(\"Djibouti\")\nprint(f'Population of \"Djibouti\": {inhabitants} people')\n\n# compose into one table\ntable = compose_dataframe_summary(cases, deaths)\n\n# show tables with up to 1000 rows\npd.set_option(\"max_rows\", 1000)\n\n# display the table\ntable",
"_____no_output_____"
]
],
[
[
"# Explore the data in your web browser\n\n- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Djibouti.ipynb)\n- and wait (~1 to 2 minutes)\n- Then press SHIFT+RETURN to advance code cell to code cell\n- See http://jupyter.org for more details on how to use Jupyter Notebook",
"_____no_output_____"
],
[
"# Acknowledgements:\n\n- Johns Hopkins University provides data for countries\n- Robert Koch Institute provides data for within Germany\n- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)\n- Open source and scientific computing community for the data tools\n- Github for hosting repository and html files\n- Project Jupyter for the Notebook and binder service\n- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))\n\n--------------------",
"_____no_output_____"
]
],
[
[
"print(f\"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and \"\n f\"deaths at {fetch_deaths_last_execution()}.\")",
"_____no_output_____"
],
[
"# to force a fresh download of data, run \"clear_cache()\"",
"_____no_output_____"
],
[
"print(f\"Notebook execution took: {datetime.datetime.now()-start}\")\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7b45746d59e918c474533b7ea04589dc5e6d72e | 10,803 | ipynb | Jupyter Notebook | notebooks/linear_regression_without_sklearn.ipynb | brospars/scikit-learn-mooc | 1f08c0349f185f6380c294b06c331d29ff9575f6 | [
"CC-BY-4.0"
] | null | null | null | notebooks/linear_regression_without_sklearn.ipynb | brospars/scikit-learn-mooc | 1f08c0349f185f6380c294b06c331d29ff9575f6 | [
"CC-BY-4.0"
] | null | null | null | notebooks/linear_regression_without_sklearn.ipynb | brospars/scikit-learn-mooc | 1f08c0349f185f6380c294b06c331d29ff9575f6 | [
"CC-BY-4.0"
] | null | null | null | 33.549689 | 183 | 0.626308 | [
[
[
"# Linear regression without scikit-learn\n\nIn this notebook, we introduce linear regression. Before presenting the\navailable scikit-learn classes, we will provide some insights with a simple\nexample. We will use a dataset that contains information about penguins.",
"_____no_output_____"
],
[
"<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">If you want a deeper overview regarding this dataset, you can refer to the\nAppendix - Datasets description section at the end of this MOOC.</p>\n</div>",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\npenguins = pd.read_csv(\"../datasets/penguins_regression.csv\")\npenguins.head()",
"_____no_output_____"
]
],
[
[
"This dataset contains measurements taken on penguins. We will formulate the\nfollowing problem: using the flipper length of a penguin, we would like\nto infer its mass.",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\n\nfeature_names = \"Flipper Length (mm)\"\ntarget_name = \"Body Mass (g)\"\ndata, target = penguins[[feature_names]], penguins[target_name]\n\nax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)\nax.set_title(\"Flipper length in function of the body mass\")",
"_____no_output_____"
]
],
[
[
"<div class=\"admonition tip alert alert-warning\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Tip</p>\n<p class=\"last\">The function <tt class=\"docutils literal\">scatterplot</tt> from searborn take as input the full dataframe\nand the parameter <tt class=\"docutils literal\">x</tt> and <tt class=\"docutils literal\">y</tt> allows to specify the name of the columns to\nbe plotted. Note that this function returns a matplotlib axis\n(named <tt class=\"docutils literal\">ax</tt> in the example above) that can be further used to add element on\nthe same matplotlib axis (such as a title).</p>\n</div>",
"_____no_output_____"
],
[
"<div class=\"admonition caution alert alert-warning\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Caution!</p>\n<p class=\"last\">Here and later, we use the name <tt class=\"docutils literal\">data</tt> and <tt class=\"docutils literal\">target</tt> to be explicit. In\nscikit-learn, documentation <tt class=\"docutils literal\">data</tt> is commonly named <tt class=\"docutils literal\">X</tt> and <tt class=\"docutils literal\">target</tt> is\ncommonly called <tt class=\"docutils literal\">y</tt>.</p>\n</div>",
"_____no_output_____"
],
[
"In this problem, penguin mass is our target. It is a continuous\nvariable that roughly varies between 2700 g and 6300 g. Thus, this is a\nregression problem (in contrast to classification). We also see that there is\nalmost a linear relationship between the body mass of the penguin and its\nflipper length. The longer the flipper, the heavier the penguin.\n\nThus, we could come up with a simple formula, where given a flipper length\nwe could compute the body mass of a penguin using a linear relationship\nof the form `y = a * x + b` where `a` and `b` are the 2 parameters of our\nmodel.",
"_____no_output_____"
]
],
[
[
"def linear_model_flipper_mass(flipper_length, weight_flipper_length,\n intercept_body_mass):\n \"\"\"Linear model of the form y = a * x + b\"\"\"\n body_mass = weight_flipper_length * flipper_length + intercept_body_mass\n return body_mass",
"_____no_output_____"
]
],
[
[
"Using the model we defined above, we can check the body mass values\npredicted for a range of flipper lengths. We will set `weight_flipper_length`\nto be 45 and `intercept_body_mass` to be -5000.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nweight_flipper_length = 45\nintercept_body_mass = -5000\n\nflipper_length_range = np.linspace(data.min(), data.max(), num=300)\npredicted_body_mass = linear_model_flipper_mass(\n flipper_length_range, weight_flipper_length, intercept_body_mass)",
"_____no_output_____"
]
],
[
[
"We can now plot all samples and the linear model prediction.",
"_____no_output_____"
]
],
[
[
"label = \"{0:.2f} (g / mm) * flipper length + {1:.2f} (g)\"\n\nax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)\nax.plot(flipper_length_range, predicted_body_mass, color=\"tab:orange\")\n_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))",
"_____no_output_____"
]
],
[
[
"The variable `weight_flipper_length` is a weight applied to the feature\n`flipper_length` in order to make the inference. When this coefficient is\npositive, it means that penguins with longer flipper lengths will have larger\nbody masses. If the coefficient is negative, it means that penguins with\nshorter flipper lengths have larger body masses. Graphically, this\ncoefficient is represented by the slope of the curve in the plot. Below we\nshow what the curve would look like when the `weight_flipper_length`\ncoefficient is negative.",
"_____no_output_____"
]
],
[
[
"weight_flipper_length = -40\nintercept_body_mass = 13000\n\npredicted_body_mass = linear_model_flipper_mass(\n flipper_length_range, weight_flipper_length, intercept_body_mass)",
"_____no_output_____"
]
],
[
[
"We can now plot all samples and the linear model prediction.",
"_____no_output_____"
]
],
[
[
"ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)\nax.plot(flipper_length_range, predicted_body_mass, color=\"tab:orange\")\n_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))",
"_____no_output_____"
]
],
[
[
"In our case, this coefficient has a meaningful unit: g/mm.\nFor instance, a coefficient of 40 g/mm, means that for each\nadditional millimeter in flipper length, the body weight predicted will\nincrease by 40 g.",
"_____no_output_____"
]
],
[
[
"body_mass_180 = linear_model_flipper_mass(\n flipper_length=180, weight_flipper_length=40, intercept_body_mass=0)\nbody_mass_181 = linear_model_flipper_mass(\n flipper_length=181, weight_flipper_length=40, intercept_body_mass=0)\n\nprint(f\"The body mass for a flipper length of 180 mm \"\n f\"is {body_mass_180} g and {body_mass_181} g \"\n f\"for a flipper length of 181 mm\")",
"_____no_output_____"
]
],
[
[
"We can also see that we have a parameter `intercept_body_mass` in our model.\nThis parameter corresponds to the value on the y-axis if `flipper_length=0`\n(which in our case is only a mathematical consideration, as in our data,\n the value of `flipper_length` only goes from 170mm to 230mm). This y-value\nwhen x=0 is called the y-intercept. If `intercept_body_mass` is 0, the curve\nwill pass through the origin:",
"_____no_output_____"
]
],
[
[
"weight_flipper_length = 25\nintercept_body_mass = 0\n\n# redefined the flipper length to start at 0 to plot the intercept value\nflipper_length_range = np.linspace(0, data.max(), num=300)\npredicted_body_mass = linear_model_flipper_mass(\n flipper_length_range, weight_flipper_length, intercept_body_mass)",
"_____no_output_____"
],
[
"ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)\nax.plot(flipper_length_range, predicted_body_mass, color=\"tab:orange\")\n_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))",
"_____no_output_____"
]
],
[
[
"Otherwise, it will pass through the `intercept_body_mass` value:",
"_____no_output_____"
]
],
[
[
"weight_flipper_length = 45\nintercept_body_mass = -5000\n\npredicted_body_mass = linear_model_flipper_mass(\n flipper_length_range, weight_flipper_length, intercept_body_mass)",
"_____no_output_____"
],
[
"ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)\nax.plot(flipper_length_range, predicted_body_mass, color=\"tab:orange\")\n_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))",
"_____no_output_____"
]
],
[
[
" In this notebook, we have seen the parametrization of a linear regression\n model and more precisely meaning of the terms weights and intercepts.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e7b48c071d878d4a37f39a80409cac394a355f59 | 421,857 | ipynb | Jupyter Notebook | vietocr_gettingstart.ipynb | lexuanthinh/vietocr | b7145e42e11492e58c1803a13106551ccd4dcdf0 | [
"Apache-2.0"
] | null | null | null | vietocr_gettingstart.ipynb | lexuanthinh/vietocr | b7145e42e11492e58c1803a13106551ccd4dcdf0 | [
"Apache-2.0"
] | null | null | null | vietocr_gettingstart.ipynb | lexuanthinh/vietocr | b7145e42e11492e58c1803a13106551ccd4dcdf0 | [
"Apache-2.0"
] | null | null | null | 345.784426 | 48,940 | 0.933956 | [
[
[
"<a href=\"https://colab.research.google.com/github/pbcquoc/vietocr/blob/master/vietocr_gettingstart.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"\n# Introduction\n<p align=\"center\">\n<img src=\"https://raw.githubusercontent.com/pbcquoc/vietocr/master/image/vietocr.jpg\" width=\"512\" height=\"512\">\n</p>\nThis notebook describe how you can use VietOcr to train OCR model\n\n\n",
"_____no_output_____"
]
],
[
[
"# pip install --quiet vietocr==0.3.2",
"\u001b[?25l\r\u001b[K |█████▌ | 10kB 26.4MB/s eta 0:00:01\r\u001b[K |███████████ | 20kB 1.7MB/s eta 0:00:01\r\u001b[K |████████████████▋ | 30kB 2.3MB/s eta 0:00:01\r\u001b[K |██████████████████████▏ | 40kB 2.5MB/s eta 0:00:01\r\u001b[K |███████████████████████████▋ | 51kB 2.0MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 61kB 1.8MB/s \n\u001b[?25h Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n Preparing wheel metadata ... \u001b[?25l\u001b[?25hdone\n\u001b[K |████████████████████████████████| 880kB 7.2MB/s \n\u001b[K |████████████████████████████████| 952kB 17.0MB/s \n\u001b[?25h Building wheel for gdown (PEP 517) ... \u001b[?25l\u001b[?25hdone\n Building wheel for lmdb (setup.py) ... \u001b[?25l\u001b[?25hdone\n\u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.4.0 which is incompatible.\u001b[0m\n"
]
],
[
[
"# Inference",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfrom PIL import Image\n\nfrom vietocr.tool.predictor import Predictor\nfrom vietocr.tool.config import Cfg",
"_____no_output_____"
],
[
"config = Cfg.load_config_from_name('vgg_transformer')",
"_____no_output_____"
]
],
[
[
"Change weights to your weights or using default weights from our pretrained model. Path can be url or local file",
"_____no_output_____"
]
],
[
[
"# config['weights'] = './weights/transformerocr.pth'\nconfig['weights'] = 'https://drive.google.com/uc?id=13327Y1tz1ohsm5YZMyXVMPIOjoOA0OaA'\nconfig['cnn']['pretrained']=False\nconfig['device'] = 'cpu'\nconfig['predictor']['beamsearch']=False",
"_____no_output_____"
],
[
"detector = Predictor(config)",
"Cached Downloading: C:\\Users\\thinhlx\\.cache/gdown\\https-COLON--SLASH--SLASH-drive.google.com-SLASH-uc-QUESTION-id-EQUAL-13327Y1tz1ohsm5YZMyXVMPIOjoOA0OaA\nDownloading...\nFrom: https://drive.google.com/uc?id=13327Y1tz1ohsm5YZMyXVMPIOjoOA0OaA\nTo: C:\\Users\\thinhlx\\.cache\\gdown\\tmpmvu_d_09\\dl\n100%|████████████████████████████████████████████████████████████████████████████████████████████| 152M/152M [00:15<00:00, 9.89MB/s]\n"
],
[
"! gdown --id 1uMVd6EBjY4Q0G2IkU5iMOQ34X0bysm0b\n! unzip -qq -o sample.zip",
"Downloading...\nFrom: https://drive.google.com/uc?id=1uMVd6EBjY4Q0G2IkU5iMOQ34X0bysm0b\nTo: D:\\HIPT\\vietocr\\sample.zip\n\n 0%| | 0.00/306k [00:00<?, ?B/s]\n100%|##########| 306k/306k [00:00<00:00, 1.42MB/s]\n100%|##########| 306k/306k [00:00<00:00, 1.41MB/s]\n'unzip' is not recognized as an internal or external command,\noperable program or batch file.\n"
],
[
"! ls sample | shuf |head -n 5",
"'ls' is not recognized as an internal or external command,\noperable program or batch file.\n"
],
[
"img = './image/3.png'\nimg = Image.open(img)\nplt.imshow(img)\ns = detector.predict(img)\ns",
"_____no_output_____"
]
],
[
[
"# Download sample dataset",
"_____no_output_____"
]
],
[
[
"! gdown https://drive.google.com/uc?id=19QU4VnKtgm3gf0Uw_N2QKSquW1SQ5JiE",
"Downloading...\nFrom: https://drive.google.com/uc?id=19QU4VnKtgm3gf0Uw_N2QKSquW1SQ5JiE\nTo: /content/data_line.zip\n61.2MB [00:00, 67.2MB/s]\n"
],
[
"! unzip -qq -o ./data_line.zip",
"_____no_output_____"
]
],
[
[
"# Train model",
"_____no_output_____"
],
[
"\n\n1. Load your config\n2. Train model using your dataset above\n\n",
"_____no_output_____"
],
[
"Load the default config, we adopt VGG for image feature extraction",
"_____no_output_____"
]
],
[
[
"from vietocr.tool.config import Cfg\nfrom vietocr.model.trainer import Trainer",
"_____no_output_____"
]
],
[
[
"# Change the config \n\n* *data_root*: the folder save your all images\n* *train_annotation*: path to train annotation\n* *valid_annotation*: path to valid annotation\n* *print_every*: show train loss at every n steps\n* *valid_every*: show validation loss at every n steps\n* *iters*: number of iteration to train your model\n* *export*: export weights to folder that you can use for inference\n* *metrics*: number of sample in validation annotation you use for computing full_sequence_accuracy, for large dataset it will take too long, then you can reuduce this number\n",
"_____no_output_____"
]
],
[
[
"config = Cfg.load_config_from_name('vgg_transformer')",
"_____no_output_____"
],
[
"#config['vocab'] = 'aAàÀảẢãÃáÁạẠăĂằẰẳẲẵẴắẮặẶâÂầẦẩẨẫẪấẤậẬbBcCdDđĐeEèÈẻẺẽẼéÉẹẸêÊềỀểỂễỄếẾệỆfFgGhHiIìÌỉỈĩĨíÍịỊjJkKlLmMnNoOòÒỏỎõÕóÓọỌôÔồỒổỔỗỖốỐộỘơƠờỜởỞỡỠớỚợỢpPqQrRsStTuUùÙủỦũŨúÚụỤưƯừỪửỬữỮứỨựỰvVwWxXyYỳỲỷỶỹỸýÝỵỴzZ0123456789!\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~ '\n\ndataset_params = {\n 'name':'hw',\n 'data_root':'./data_line/',\n 'train_annotation':'train_line_annotation.txt',\n 'valid_annotation':'test_line_annotation.txt'\n}\n\nparams = {\n 'print_every':200,\n 'valid_every':15*200,\n 'iters':20000,\n 'checkpoint':'./checkpoint/transformerocr_checkpoint.pth', \n 'export':'./weights/transformerocr.pth',\n 'metrics': 10000\n }\n\nconfig['trainer'].update(params)\nconfig['dataset'].update(dataset_params)\nconfig['device'] = 'cuda:0'",
"_____no_output_____"
]
],
[
[
"you can change any of these params in this full list below",
"_____no_output_____"
]
],
[
[
"config",
"_____no_output_____"
]
],
[
[
"You should train model from our pretrained ",
"_____no_output_____"
]
],
[
[
"trainer = Trainer(config, pretrained=True)",
"Downloading: \"https://download.pytorch.org/models/vgg19_bn-c79401a0.pth\" to /root/.cache/torch/hub/checkpoints/vgg19_bn-c79401a0.pth\n"
]
],
[
[
"Save model configuration for inference, load_config_from_file",
"_____no_output_____"
]
],
[
[
"trainer.config.save('config.yml')",
"_____no_output_____"
]
],
[
[
"Visualize your dataset to check data augmentation is appropriate",
"_____no_output_____"
]
],
[
[
"trainer.visualize_dataset()",
"_____no_output_____"
]
],
[
[
"Train now",
"_____no_output_____"
]
],
[
[
"trainer.train()",
"iter: 000200 - train loss: 1.657 - lr: 1.91e-05 - load time: 1.08 - gpu time: 158.33\niter: 000400 - train loss: 1.429 - lr: 3.95e-05 - load time: 0.76 - gpu time: 158.76\niter: 000600 - train loss: 1.331 - lr: 7.14e-05 - load time: 0.73 - gpu time: 158.38\niter: 000800 - train loss: 1.252 - lr: 1.12e-04 - load time: 1.29 - gpu time: 158.43\niter: 001000 - train loss: 1.218 - lr: 1.56e-04 - load time: 0.84 - gpu time: 158.86\niter: 001200 - train loss: 1.192 - lr: 2.01e-04 - load time: 0.78 - gpu time: 160.20\niter: 001400 - train loss: 1.140 - lr: 2.41e-04 - load time: 1.54 - gpu time: 158.48\niter: 001600 - train loss: 1.129 - lr: 2.73e-04 - load time: 0.70 - gpu time: 159.42\niter: 001800 - train loss: 1.095 - lr: 2.93e-04 - load time: 0.74 - gpu time: 158.03\niter: 002000 - train loss: 1.098 - lr: 3.00e-04 - load time: 0.66 - gpu time: 159.21\niter: 002200 - train loss: 1.060 - lr: 3.00e-04 - load time: 1.52 - gpu time: 157.63\niter: 002400 - train loss: 1.055 - lr: 3.00e-04 - load time: 0.80 - gpu time: 159.34\niter: 002600 - train loss: 1.032 - lr: 2.99e-04 - load time: 0.74 - gpu time: 159.13\niter: 002800 - train loss: 1.019 - lr: 2.99e-04 - load time: 1.42 - gpu time: 158.27\n"
]
],
[
[
"Visualize prediction from our trained model\n",
"_____no_output_____"
]
],
[
[
"trainer.visualize_prediction()",
"_____no_output_____"
]
],
[
[
"Compute full seq accuracy for full valid dataset",
"_____no_output_____"
]
],
[
[
"trainer.precision()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b4945fc51f25fe1854d0d8cc05449c03081dfb | 17,016 | ipynb | Jupyter Notebook | 3.3-RNN.ipynb | datamllab/automl-in-action-notebooks | 348f93809982c45dfa75e7d16aeeb9b66d5c5700 | [
"MIT"
] | 30 | 2020-04-17T22:18:48.000Z | 2022-03-18T12:11:01.000Z | 3.3-RNN.ipynb | datamllab/automl-in-action-notebooks | 348f93809982c45dfa75e7d16aeeb9b66d5c5700 | [
"MIT"
] | 1 | 2021-11-07T16:59:30.000Z | 2021-11-07T16:59:30.000Z | 3.3-RNN.ipynb | datamllab/automl-in-action-notebooks | 348f93809982c45dfa75e7d16aeeb9b66d5c5700 | [
"MIT"
] | 13 | 2021-02-05T16:46:18.000Z | 2021-12-21T00:43:43.000Z | 42.646617 | 963 | 0.529502 | [
[
[
"# RNN\nIn this section, we will introduce how to use recurrent neural networks for text classification.\nThe dataset we use is the IMDB Movie Reviews.\nWe use the reviews written by the users as the input and try to predict whether the they are positive or negative.\n\n## Preparing the data\n\nYou can use the following code to load the IMDB dataset.\n",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\ntf.random.set_seed(42)\n\nfrom tensorflow.keras.datasets import imdb\nfrom tensorflow.keras.preprocessing import sequence\n\nmax_words = 10000\nembedding_dim = 32\n\n(train_data, train_labels), (test_data, test_labels) = imdb.load_data(\n num_words=max_words)\nprint(train_data.shape)\nprint(train_labels.shape)\nprint(train_data[:2])\nprint(train_labels[:2])",
"(25000,)\n(25000,)\n[list([1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32])\n list([1, 194, 1153, 194, 8255, 78, 228, 5, 6, 1463, 4369, 5012, 134, 26, 4, 715, 8, 118, 1634, 14, 394, 20, 13, 119, 954, 189, 102, 5, 207, 110, 3103, 21, 14, 69, 188, 8, 30, 23, 7, 4, 249, 126, 93, 4, 114, 9, 2300, 1523, 5, 647, 4, 116, 9, 35, 8163, 4, 229, 9, 340, 1322, 4, 118, 9, 4, 130, 4901, 19, 4, 1002, 5, 89, 29, 952, 46, 37, 4, 455, 9, 45, 43, 38, 1543, 1905, 398, 4, 1649, 26, 6853, 5, 163, 11, 3215, 2, 4, 1153, 9, 194, 775, 7, 8255, 2, 349, 2637, 148, 605, 2, 8003, 15, 123, 125, 68, 2, 6853, 15, 349, 165, 4362, 98, 5, 4, 228, 9, 43, 2, 1157, 15, 299, 120, 5, 120, 174, 11, 220, 175, 136, 50, 9, 4373, 228, 8255, 5, 2, 656, 245, 2350, 5, 4, 9837, 131, 152, 491, 18, 2, 32, 7464, 1212, 14, 9, 6, 371, 78, 22, 625, 64, 1382, 9, 8, 168, 145, 23, 4, 1690, 15, 16, 4, 1355, 5, 28, 6, 52, 154, 462, 33, 89, 78, 285, 16, 145, 95])]\n[1 0]\n"
]
],
[
[
"The code above would load the reviews into train_data and test_data, load the labels (positive or negative) into train_labels and test_labels. As you can see the reviews in train_data are lists of integers instead of texts. It is because the raw texts cannot be used as an input to a neural network. Neural networks only accepts numerical data as inputs.\n\nThe integers we see above is the raw text data after a preprocessing step named tokenization. It first split each review into a list of words and assign an integer to each of the words. For example, a scentence \"How are you? How are you doing?\" will be transformed into a list of words as [\"how\", \"are\", \"you\", \"how\", \"are\", \"you\", \"doing\"]. Then transformed to [5, 8, 9, 5, 8, 9, 7]. The integers doesn't have special meanings but a representation of the words. Same integers represents the same words, different integers represents different words.\n\nThe labels are also integers, where 1 represents positive, 0 represents negative.\n\nThen, we pad the data to the same length.",
"_____no_output_____"
]
],
[
[
"# Pad the sequence to length max_len.\nmaxlen = 100\nprint(len(train_data[0]))\nprint(len(train_data[1]))\ntrain_data = sequence.pad_sequences(train_data, maxlen=maxlen)\ntest_data = sequence.pad_sequences(test_data, maxlen=maxlen)\nprint(train_data.shape)\nprint(train_labels.shape)",
"218\n189\n(25000, 100)\n(25000,)\n"
]
],
[
[
"## Building your network\nThe next step is to build your neural network model and train it.\nWe will introduce the neural network in three steps.\nThe first step is the embedding, which transform each integer list into a list of vectors.\nThe second step is to feed the vectors to the recurrent neural network.\nThe third step is to use the output of the recurrent neural network for classification.\n### Embedding\nEmbedding means find a corresponding numerical vector for each word, which is now an integer in the list.\nThe numerical vector can be seen as the coordinate of a point in the space.\nWe embed the words into specific points in the space.\nThat is why we call the process embedding.\n\nTo implement it, we use a Keras Embedding layer.\nFirst, we need to create a Keras Sequential model.\nThen, we add the layers one by one to the model.\nThe order of the layers is from the input to the output.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import Embedding\nfrom tensorflow.keras import Sequential\n\nmax_words = 10000\nembedding_dim = 32\n\nmodel = Sequential()\nmodel.add(Embedding(max_words, embedding_dim))\nmodel.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding (Embedding) (None, None, 32) 320000 \n=================================================================\nTotal params: 320,000\nTrainable params: 320,000\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"In the code above, we initialized an Embedding layer.\nThe max_words is the vocabulary size, which is an integer meaning how many different words are there in the input data.\nThe integer 16 means the length of the vector representation fro each word is 32.\nThe output tensor of the Embedding layer is (batch_size, max_len, embedding_dim).\n### Recurrent Neural Networks\nAfter the embedding layer, we need to use a recurrent neural network for the classification.\nRecurrent neural networks can handle sequential inputs.\nFor example, we input a movie review as a sequence of word embedding vectors to it, which are the output of the embedding layer.\nEach vector has a length of 32.\nEach review contains 100 vectors.\nIf we see the RNN as a whole, it takes 100 vectors of length 16 altogether.\nHowever, in the real case, it takes one vector at a time.\n\nEach time the RNN takes in a word embedding vector,\nit not only takes the word embedding vector, but another state vector as well.\nYou can think the state vector as the memory of the RNN.\nIt memorizes the previous words it taken as input.\nIn the first step, the RNN has no previous words to remember.\nIt takes an initial state, which is usually empty,\nand the first word embedding vector as input.\nThe output of the first step is actually the state to be input to the second step.\nFor the rest of the steps, the RNN will just take the previous output and the current input as input,\nand output the state for the next step.\nFor the last step, the output state is the final output we will use for the classification.\n\nWe can use the following python code to illustrate the process.\n```python\nstate = [0] * 32\nfor i in range(100):\n state = rnn(embedding[i], state)\nreturn state\n```\nThe returned state is the final output of the RNN.\n\nSometimes, we may also need to collect the output of each step as shown in the following code.\n```python\nstate = [0] * 32\noutput = []\nfor i in range(100):\n state = rnn(embedding[i], state)\n output.append(state)\nreturn output\n```\nIn the code above, the output of an RNN can also be a sequence of vectors, which is the same format as the input to the RNN.\nTherefore, we can make the RNN deeper by stacking multiple RNN layers together.\n\n\nTo implement the RNN described, we need the SimpleRNN layer in Keras.",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import SimpleRNN\n\nmodel.add(SimpleRNN(embedding_dim, return_sequences=True))\nmodel.add(SimpleRNN(embedding_dim, return_sequences=True))\nmodel.add(SimpleRNN(embedding_dim, return_sequences=True))\nmodel.add(SimpleRNN(embedding_dim))\nmodel.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding (Embedding) (None, None, 32) 320000 \n_________________________________________________________________\nsimple_rnn (SimpleRNN) (None, None, 32) 2080 \n_________________________________________________________________\nsimple_rnn_1 (SimpleRNN) (None, None, 32) 2080 \n_________________________________________________________________\nsimple_rnn_2 (SimpleRNN) (None, None, 32) 2080 \n_________________________________________________________________\nsimple_rnn_3 (SimpleRNN) (None, 32) 2080 \n=================================================================\nTotal params: 328,320\nTrainable params: 328,320\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"The return_sequences parameter controlls whether to collect all the output vectors of an RNN or only collect the last output. It is set to False by default.\n### Classification Head\nThen we will use the output of the last SimpleRNN layer, which is a vector of length 32, as the input to the classification head.\nIn the classification head, we use a fully-connected layer for the classification.\nThen we compile and train the model.\n",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.layers import Dense\n\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.compile(optimizer='adam', metrics=['acc'], loss='binary_crossentropy')\nmodel.fit(train_data, \n train_labels,\n epochs=2,\n batch_size=128)",
"Epoch 1/2\n196/196 [==============================] - 30s 119ms/step - loss: 0.6258 - acc: 0.6111\nEpoch 2/2\n196/196 [==============================] - 23s 117ms/step - loss: 0.3361 - acc: 0.8573\n"
]
],
[
[
"Then we can validate our model on the testing data.",
"_____no_output_____"
]
],
[
[
"model.evaluate(test_data, test_labels)",
"782/782 [==============================] - 28s 35ms/step - loss: 0.3684 - acc: 0.8402\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b49e92c829d20e1da1648fa5915cd3144522db | 58,139 | ipynb | Jupyter Notebook | code/Audio_Data_Analysis_Ex_5.ipynb | AIWithDaddy/AIWithDaddy.github.io | 1874ebdc0fcb3602d314eacafedba7a4d6f67fda | [
"MIT"
] | null | null | null | code/Audio_Data_Analysis_Ex_5.ipynb | AIWithDaddy/AIWithDaddy.github.io | 1874ebdc0fcb3602d314eacafedba7a4d6f67fda | [
"MIT"
] | null | null | null | code/Audio_Data_Analysis_Ex_5.ipynb | AIWithDaddy/AIWithDaddy.github.io | 1874ebdc0fcb3602d314eacafedba7a4d6f67fda | [
"MIT"
] | null | null | null | 319.445055 | 53,638 | 0.925025 | [
[
[
"import os, shutil\nfrom google.colab import drive\ndrive.mount('/content/gdrive')\n# 표시되는 화면에서 자신이 사용 중인 구글 드라이브를 선택하고 진행\nos.chdir('gdrive/My Drive')\n%cd Test/Audio/\n# gdrive/My Drive/Test/Audio/에 본 예제에 사용되는 rain.wav 파일이 존재 (테스트에 사용될 wav 파일을 미리 업로드 해 두어야 함)",
"Mounted at /content/gdrive\n/content/gdrive/My Drive/Test/Audio\n"
],
[
"import librosa, librosa.display\naudio_data = 'rain.wav'\nx , sr = librosa.load(audio_data, sr=44100)",
"_____no_output_____"
],
[
"import sklearn\nspectral_rolloff = librosa.feature.spectral_rolloff(x+0.01, sr=sr)[0]\nspectral_rolloff.shape",
"_____no_output_____"
],
[
"# Computing the time variable for visualization\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(12, 4))\nframes = range(len(spectral_rolloff))\nt = librosa.frames_to_time(frames)",
"_____no_output_____"
],
[
"# Normalising for visualisation\ndef normalize(x, axis=0):\n return sklearn.preprocessing.minmax_scale(x, axis=axis)",
"_____no_output_____"
],
[
"# Plotting the Spectral Rolloff along the waveform\nlibrosa.display.waveplot(x, sr=sr, alpha=0.4)\nplt.plot(t, normalize(spectral_rolloff), color='r')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b4aed8395121fc1078d152207f986868e71e1f | 11,228 | ipynb | Jupyter Notebook | business_results.ipynb | gustweing/house_rocket | 5718bbfff27be9b0f634c995b5ab871888e3a13a | [
"MIT"
] | null | null | null | business_results.ipynb | gustweing/house_rocket | 5718bbfff27be9b0f634c995b5ab871888e3a13a | [
"MIT"
] | null | null | null | business_results.ipynb | gustweing/house_rocket | 5718bbfff27be9b0f634c995b5ab871888e3a13a | [
"MIT"
] | null | null | null | 31.807365 | 256 | 0.426612 | [
[
[
"# Business Results",
"_____no_output_____"
],
[
"## Recomendação de compra dos imóveis:\n##### Para decidir quais imóveis deverão ser comprados, iremos comparar os imóveis pelo zipcode e selecionar as casas que estão abaixo da média. Como explicado anteriormente, estaremos selecionando as casas que possuem 'condition' maior ou igual a 3.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndata = pd.read_csv('datasets/kc_house_clean.csv')\npd.set_option( 'display.float_format', lambda x: '%.2f' % x)\npd.options.display.max_columns = None\npd.options.display.max_rows = None",
"_____no_output_____"
],
[
"#comparar a média do 'zipcode' e se o valor for abaixo da média e a condição >= 3 então comprar\ndf = data[['price','id','zipcode','condition']].copy()\n\nrecomendations = df[['zipcode','price']].groupby('zipcode').median().reset_index()\nrecomendations.columns = ['zipcode','median_price']\ndf = pd.merge(df, recomendations, on= 'zipcode', how = 'inner')\n\ndf['recomendations'] = 'na'\nfor i in range(len(df)):\n if (df.loc[i,'price'] < df.loc[i,'median_price'])&(df.loc[i,'condition'] >= 3):\n df.loc[i,'recomendations'] = 'comprar'\n else:\n df.loc[i,'recomendations'] = 'não comprar'\ndf.to_csv('datasets/recomendacoes_compras.csv')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"## Recomendação de venda dos imóveis\n##### Para decidir o melhor momento de venda dos imóveis estaremos comparando os imóveis pelo 'zipcode' e pela estação.\n##### Se o preço for menor que a mediana, então estaremos acrescentando 30% ao valor.\n##### Se o preço for maior que a mediana, então estaremos acrescentando 10% ao valor.\n\ndata = pd.read_csv('datasets/kc_house_clean.csv')\ndf1 = data[['price','date','zipcode','id']].copy()\n\ndf1['date'] = pd.to_datetime(df1['date']).dt.month\ndf1['season'] = df1['date'].apply(lambda x: 'spring' if ( x >= 3 )&( x <= 5 ) else\n 'summer' if ( x >= 6 )&( x <= 8 ) else\n 'fall' if ( x >= 9 )&( x <= 11 ) else \n 'winter')\n\n\nestacoes = df1[['zipcode','season','price']].groupby(['zipcode','season']).median().reset_index()\nestacoes.columns = ['zipcode','season','median_price']\nestacoes['zip_season'] = estacoes['zipcode'].astype(str) + \"_\" + estacoes['season'].astype(str)\nestacoes = estacoes.drop(['zipcode','season'], axis = 1)\n\ndf1['zip_season'] = df1['zipcode'].astype(str) + \"_\" + df1['season'].astype(str)\ndf1 = pd.merge( df1, estacoes, on='zip_season', how='inner')\n\ndf1['venda'] = 'na'\nfor i in range(len(data)):\n if (df1.loc[i,'price'] <= df1.loc[i,'median_price']):\n df1.loc[i,'venda'] = df1.loc[i,'price'] * 1.30\n else:\n df1.loc[i,'venda'] = df1.loc[i,'price'] * 1.10\ndf1.to_csv('datasets/recomendacoes_venda.csv')",
"_____no_output_____"
],
[
"df1.head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7b4b6c7b0dff01d7539ba3d0c26fd6b618bc3a7 | 11,043 | ipynb | Jupyter Notebook | IPython-parallel-tutorial/exercises/Remote Iteration.ipynb | sunny2309/scipy_conf_notebooks | 30a85d5137db95e01461ad21519bc1bdf294044b | [
"MIT"
] | 2 | 2021-01-09T15:57:26.000Z | 2021-11-29T01:44:21.000Z | IPython-parallel-tutorial/exercises/Remote Iteration.ipynb | sunny2309/scipy_conf_notebooks | 30a85d5137db95e01461ad21519bc1bdf294044b | [
"MIT"
] | 5 | 2019-11-15T02:00:26.000Z | 2021-01-06T04:26:40.000Z | IPython-parallel-tutorial/exercises/Remote Iteration.ipynb | sunny2309/scipy_conf_notebooks | 30a85d5137db95e01461ad21519bc1bdf294044b | [
"MIT"
] | null | null | null | 26.044811 | 205 | 0.493978 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7b4b97267a4edb431b305e4078c16a79b8a28d4 | 10,010 | ipynb | Jupyter Notebook | jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb | salman394/AI-ml--course | 2ed3a1382614dd00184e5179026623714ccc9e8c | [
"Unlicense"
] | null | null | null | jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb | salman394/AI-ml--course | 2ed3a1382614dd00184e5179026623714ccc9e8c | [
"Unlicense"
] | null | null | null | jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb | salman394/AI-ml--course | 2ed3a1382614dd00184e5179026623714ccc9e8c | [
"Unlicense"
] | null | null | null | 29.791667 | 569 | 0.573327 | [
[
[
"<center>\n<img src=\"../../img/ods_stickers.jpg\" />\n \n## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n\nAuthor: [Yury Kashnitskiy](https://yorko.github.io). Translated by [Sergey Oreshkov](https://www.linkedin.com/in/sergeoreshkov/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose.",
"_____no_output_____"
],
[
"# <center> Assignment #8 (demo). Solution\n\n## <center> Implementation of online regressor\n \n**Same assignment as a [Kaggle Kernel](https://www.kaggle.com/kashnitsky/a8-demo-implementing-online-regressor) + [solution](https://www.kaggle.com/kashnitsky/a8-demo-implementing-online-regressor-solution).**",
"_____no_output_____"
],
[
"Here we'll implement a regressor trained with stochastic gradient descent (SGD). Fill in the missing code. If you do evething right, you'll pass a simple embedded test.",
"_____no_output_____"
],
[
"## <center>Linear regression and Stochastic Gradient Descent",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.base import BaseEstimator\nfrom sklearn.metrics import log_loss, mean_squared_error, roc_auc_score\nfrom sklearn.model_selection import train_test_split\nfrom tqdm import tqdm\n\n%matplotlib inline\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\nfrom sklearn.preprocessing import StandardScaler",
"_____no_output_____"
]
],
[
[
"Implement class `SGDRegressor`. Specification:\n- class is inherited from `sklearn.base.BaseEstimator`\n- constructor takes parameters `eta` – gradient step ($10^{-3}$ by default) and `n_epochs` – dataset pass count (3 by default)\n- constructor also creates `mse_` and `weights_` lists in order to track mean squared error and weight vector during gradient descent iterations\n- Class has `fit` and `predict` methods\n- The `fit` method takes matrix `X` and vector `y` (`numpy.array` objects) as parameters, appends column of ones to `X` on the left side, initializes weight vector `w` with **zeros** and then makes `n_epochs` iterations of weight updates (you may refer to this [article](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-8-vowpal-wabbit-fast-learning-with-gigabytes-of-data-60f750086237) for details), and for every iteration logs mean squared error and weight vector `w` in corresponding lists we created in the constructor. \n- Additionally the `fit` method will create `w_` variable to store weights which produce minimal mean squared error\n- The `fit` method returns current instance of the `SGDRegressor` class, i.e. `self`\n- The `predict` method takes `X` matrix, adds column of ones to the left side and returns prediction vector, using weight vector `w_`, created by the `fit` method.",
"_____no_output_____"
]
],
[
[
"class SGDRegressor(BaseEstimator):\n def __init__(self, eta=1e-3, n_epochs=3):\n self.eta = eta\n self.n_epochs = n_epochs\n self.mse_ = []\n self.weights_ = []\n\n def fit(self, X, y):\n X = np.hstack([np.ones([X.shape[0], 1]), X])\n\n w = np.zeros(X.shape[1])\n\n for it in tqdm(range(self.n_epochs)):\n for i in range(X.shape[0]):\n\n new_w = w.copy()\n new_w[0] += self.eta * (y[i] - w.dot(X[i, :]))\n for j in range(1, X.shape[1]):\n new_w[j] += self.eta * (y[i] - w.dot(X[i, :])) * X[i, j]\n w = new_w.copy()\n\n self.weights_.append(w)\n self.mse_.append(mean_squared_error(y, X.dot(w)))\n\n self.w_ = self.weights_[np.argmin(self.mse_)]\n\n return self\n\n def predict(self, X):\n X = np.hstack([np.ones([X.shape[0], 1]), X])\n\n return X.dot(self.w_)",
"_____no_output_____"
]
],
[
[
"Let's test out the algorithm on height/weight data. We will predict heights (in inches) based on weights (in lbs).",
"_____no_output_____"
]
],
[
[
"data_demo = pd.read_csv(\"../../data/weights_heights.csv\")",
"_____no_output_____"
],
[
"plt.scatter(data_demo[\"Weight\"], data_demo[\"Height\"])\nplt.xlabel(\"Weight (lbs)\")\nplt.ylabel(\"Height (Inch)\")\nplt.grid();",
"_____no_output_____"
],
[
"X, y = data_demo[\"Weight\"].values, data_demo[\"Height\"].values",
"_____no_output_____"
]
],
[
[
"Perform train/test split and scale data.",
"_____no_output_____"
]
],
[
[
"X_train, X_valid, y_train, y_valid = train_test_split(\n X, y, test_size=0.3, random_state=17\n)",
"_____no_output_____"
],
[
"scaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train.reshape([-1, 1]))\nX_valid_scaled = scaler.transform(X_valid.reshape([-1, 1]))",
"_____no_output_____"
]
],
[
[
"Train created `SGDRegressor` with `(X_train_scaled, y_train)` data. Leave default parameter values for now.",
"_____no_output_____"
]
],
[
[
"# you code here\nsgd_reg = SGDRegressor()\nsgd_reg.fit(X_train_scaled, y_train)",
"_____no_output_____"
]
],
[
[
"Draw a chart with training process – dependency of mean squared error from the i-th SGD iteration number.",
"_____no_output_____"
]
],
[
[
"# you code here\nplt.plot(range(len(sgd_reg.mse_)), sgd_reg.mse_)\nplt.xlabel(\"#updates\")\nplt.ylabel(\"MSE\");",
"_____no_output_____"
]
],
[
[
"Print the minimal value of mean squared error and the best weights vector.",
"_____no_output_____"
]
],
[
[
"# you code here\nnp.min(sgd_reg.mse_), sgd_reg.w_",
"_____no_output_____"
]
],
[
[
"Draw chart of model weights ($w_0$ and $w_1$) behavior during training.",
"_____no_output_____"
]
],
[
[
"# you code here\nplt.subplot(121)\nplt.plot(range(len(sgd_reg.weights_)), [w[0] for w in sgd_reg.weights_])\nplt.subplot(122)\nplt.plot(range(len(sgd_reg.weights_)), [w[1] for w in sgd_reg.weights_]);",
"_____no_output_____"
]
],
[
[
"Make a prediction for hold-out set `(X_valid_scaled, y_valid)` and check MSE value.",
"_____no_output_____"
]
],
[
[
"# you code here\nsgd_holdout_mse = mean_squared_error(y_valid, sgd_reg.predict(X_valid_scaled))\nsgd_holdout_mse",
"_____no_output_____"
]
],
[
[
"Do the same thing for `LinearRegression` class from `sklearn.linear_model`. Evaluate MSE for hold-out set.",
"_____no_output_____"
]
],
[
[
"# you code here\nfrom sklearn.linear_model import LinearRegression\n\nlm = LinearRegression().fit(X_train_scaled, y_train)\nprint(lm.coef_, lm.intercept_)\nlinreg_holdout_mse = mean_squared_error(y_valid, lm.predict(X_valid_scaled))\nlinreg_holdout_mse",
"_____no_output_____"
],
[
"try:\n assert (sgd_holdout_mse - linreg_holdout_mse) < 1e-4\n print(\"Correct!\")\nexcept AssertionError:\n print(\n \"Something's not good.\\n Linreg's holdout MSE: {}\"\n \"\\n SGD's holdout MSE: {}\".format(linreg_holdout_mse, sgd_holdout_mse)\n )",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7b4bc8376e9956aa3b57b4b299fea6d2f2aeca9 | 20,693 | ipynb | Jupyter Notebook | collman15v2/201710/runPy.ipynb | neurodata-dev/synaptome-stats | 7637432355d4ccc169e5c3847dd123616f4594cb | [
"Apache-2.0"
] | null | null | null | collman15v2/201710/runPy.ipynb | neurodata-dev/synaptome-stats | 7637432355d4ccc169e5c3847dd123616f4594cb | [
"Apache-2.0"
] | null | null | null | collman15v2/201710/runPy.ipynb | neurodata-dev/synaptome-stats | 7637432355d4ccc169e5c3847dd123616f4594cb | [
"Apache-2.0"
] | null | null | null | 23.950231 | 109 | 0.263326 | [
[
[
"import importlib \nimport numpy as np\nimport toolbox\nimport annoStats\nimportlib.reload(toolbox)\nimportlib.reload(annoStats)",
"_____no_output_____"
],
[
"#cubes, loc, F0, nonzeros, ids = annoTightAll.testMain()\ncubes, loc, Fmax = annoStats.testMain()",
"Synapsin647\nVGluT1_647\n"
],
[
"loc",
"_____no_output_____"
],
[
"idx = np.argsort([3,2,1])\nnp.transpose(loc[:,idx])",
"_____no_output_____"
],
[
"toolbox.mainOUT(np.transpose(loc[:,idx]), ['x','y','z'], \"locations_test.csv\")",
"_____no_output_____"
],
[
"len(Fmax[0])",
"_____no_output_____"
],
[
"Fmax",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b4c7f26c660f21f743eb519818da00d32c9559 | 8,488 | ipynb | Jupyter Notebook | colabs/dynamic_costs.ipynb | Ressmann/starthinker | 301c5cf17e382afee346871974ca2f4ae905a94a | [
"Apache-2.0"
] | 138 | 2018-11-28T21:42:44.000Z | 2022-03-30T17:26:35.000Z | colabs/dynamic_costs.ipynb | Ressmann/starthinker | 301c5cf17e382afee346871974ca2f4ae905a94a | [
"Apache-2.0"
] | 36 | 2019-02-19T18:33:20.000Z | 2022-01-24T18:02:44.000Z | colabs/dynamic_costs.ipynb | Ressmann/starthinker | 301c5cf17e382afee346871974ca2f4ae905a94a | [
"Apache-2.0"
] | 54 | 2018-12-06T05:47:32.000Z | 2022-02-21T22:01:01.000Z | 37.065502 | 261 | 0.533106 | [
[
[
"#Dynamic Costs Reporting\nCalculate DV360 cost at the dynamic creative combination level.\n",
"_____no_output_____"
],
[
"#License\n\nCopyright 2020 Google LLC,\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n https://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n",
"_____no_output_____"
],
[
"#Disclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\n\nThis code generated (see starthinker/scripts for possible source):\n - **Command**: \"python starthinker_ui/manage.py colab\"\n - **Command**: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n\n",
"_____no_output_____"
],
[
"#1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.\n",
"_____no_output_____"
]
],
[
[
"!pip install git+https://github.com/google/starthinker\n",
"_____no_output_____"
]
],
[
[
"#2. Set Configuration\n\nThis code is required to initialize the project. Fill in required fields and press play.\n\n1. If the recipe uses a Google Cloud Project:\n - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).\n\n1. If the recipe has **auth** set to **user**:\n - If you have user credentials:\n - Set the configuration **user** value to your user credentials JSON.\n - If you DO NOT have user credentials:\n - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).\n\n1. If the recipe has **auth** set to **service**:\n - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).\n\n",
"_____no_output_____"
]
],
[
[
"from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n",
"_____no_output_____"
]
],
[
[
"#3. Enter Dynamic Costs Reporting Recipe Parameters\n 1. Add a sheet URL. This is where you will enter advertiser and campaign level details.\n 1. Specify the CM network ID.\n 1. Click run now once, and a tab called <strong>Dynamic Costs</strong> will be added to the sheet with instructions.\n 1. Follow the instructions on the sheet; this will be your configuration.\n 1. StarThinker will create two or three (depending on the case) reports in CM named <strong>Dynamic Costs - ...</strong>.\n 1. Wait for <b>BigQuery->->->Dynamic_Costs_Analysis</b> to be created or click Run Now.\n 1. Copy <a href='https://datastudio.google.com/open/1vBvBEiMbqCbBuJTsBGpeg8vCLtg6ztqA' target='_blank'>Dynamic Costs Sample Data ( Copy From This )</a>.\n 1. Click Edit Connection, and Change to <b>BigQuery->->->Dynamic_Costs_Analysis</b>.\n 1. Copy <a href='https://datastudio.google.com/open/1xulBAdx95SnvjnUzFP6r14lhkvvVbsP8' target='_blank'>Dynamic Costs Sample Report ( Copy From This )</a>.\n 1. When prompted, choose the new data source you just created.\n 1. Edit the table to include or exclude columns as desired.\n 1. Or, give the dashboard connection intructions to the client.\nModify the values below for your use case, can be done multiple times, then click play.\n",
"_____no_output_____"
]
],
[
[
"FIELDS = {\n 'dcm_account': '',\n 'auth_read': 'user', # Credentials used for reading data.\n 'configuration_sheet_url': '',\n 'auth_write': 'service', # Credentials used for writing data.\n 'bigquery_dataset': 'dynamic_costs',\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n",
"_____no_output_____"
]
],
[
[
"#4. Execute Dynamic Costs Reporting\nThis does NOT need to be modified unless you are changing the recipe, click play.\n",
"_____no_output_____"
]
],
[
[
"from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'dynamic_costs': {\n 'auth': 'user', \n 'account': {'field': {'name': 'dcm_account', 'kind': 'string', 'order': 0, 'default': ''}}, \n 'sheet': {\n 'template': {\n 'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing', \n 'tab': 'Dynamic Costs', \n 'range': 'A1'\n }, \n 'url': {'field': {'name': 'configuration_sheet_url', 'kind': 'string', 'order': 1, 'default': ''}}, \n 'tab': 'Dynamic Costs', \n 'range': 'A2:B'\n }, \n 'out': {\n 'auth': 'user', \n 'dataset': {'field': {'name': 'bigquery_dataset', 'kind': 'string', 'order': 2, 'default': 'dynamic_costs'}}\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b4ce0084e0fc94d7b59fb2183b8d9809fd44fe | 3,833 | ipynb | Jupyter Notebook | Cap2..ipynb | mleyvaz/Neutrosofia | 6aab3892a826d2c94a068717bda93715d32df3c7 | [
"MIT"
] | null | null | null | Cap2..ipynb | mleyvaz/Neutrosofia | 6aab3892a826d2c94a068717bda93715d32df3c7 | [
"MIT"
] | null | null | null | Cap2..ipynb | mleyvaz/Neutrosofia | 6aab3892a826d2c94a068717bda93715d32df3c7 | [
"MIT"
] | 3 | 2018-10-14T02:43:10.000Z | 2019-02-21T02:51:00.000Z | 19.261307 | 95 | 0.433081 | [
[
[
"## Capítulo 2: Toma de Decisiones y Neutrosofía",
"_____no_output_____"
],
[
"## Distancia Euclidiana entre números SVN",
"_____no_output_____"
]
],
[
[
"def euclideanNeu(a1,a2):\n a=0\n c=len(a1) \n for i in range(c): \n a=a+pow(a1[i][0]-a2[i][0],2)+pow(a1[i][1]-a2[i][1],2)+pow(a1[i][2]-a2[i][2],2)\n a=pow(1.0/3.0*a,0.5)\n return(a)",
"_____no_output_____"
]
],
[
[
"## Ejemplo de uso de la distancia Euclidiana",
"_____no_output_____"
]
],
[
[
"EB=(1,0,0)\nMMB=(0.9, 0.1, 0.1)\nMB=(0.8,0.15,0.20)\nB=(0.70,0.25,0.30)\nMDB=(0.60,0.35,0.40)\nM=(0.50,0.50,0.50)\nMDM=(0.40,0.65,0.60)\nMA=(0.30,0.75,0.70)\nMM=(0.20,0.85,0.80)\nMMM=(0.10,0.90,0.90)\nEM=(0,1,1)\nr1=[MDB,B,B]\ni=[MMB, MMB, MB]\neuclideanNeu(r1,i)",
"_____no_output_____"
]
],
[
[
"## Operador SVNWA",
"_____no_output_____"
]
],
[
[
"def SVNWA(list,W):\n t=1\n i=1\n f=1\n c=0\n for j in list:\n t=t*(1-j[0])**W[c]\n i=i*j[1]**W[c]\n f=f*j[2]**W[c]\n c=c+1\n return (1-t,i,f)\n ",
"_____no_output_____"
]
],
[
[
"## Ejemplo de uso SVNWA",
"_____no_output_____"
]
],
[
[
"A=[MDB,B,MDB]\nW = [0.55, 0.26, 0.19] # W:Vector de pesos\nSVNWA(A,W)",
"_____no_output_____"
]
],
[
[
"## Operador SVNGA",
"_____no_output_____"
]
],
[
[
"def SVNGA(list,W):\n t=1\n i=1\n f=1\n c=0\n for j in list:\n t=t*j[0]**W[c]\n i=i*j[1]**W[c]\n f=f*j[2]**W[c] \n c=c+1\n return (t,i,f)",
"_____no_output_____"
],
[
"A=[MDB,B,MDB]\nW = [0.55, 0.26, 0.19] # W:Vector de pesos\nSVNGA(A,W)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7b4d0e636986467eece660f0daf9bf336a7ddc2 | 7,773 | ipynb | Jupyter Notebook | _posts/Problem-005.ipynb | bru1987/euler | cc6d308c4aec32efd369861489aa6b8fc392cb9e | [
"MIT"
] | null | null | null | _posts/Problem-005.ipynb | bru1987/euler | cc6d308c4aec32efd369861489aa6b8fc392cb9e | [
"MIT"
] | null | null | null | _posts/Problem-005.ipynb | bru1987/euler | cc6d308c4aec32efd369861489aa6b8fc392cb9e | [
"MIT"
] | 1 | 2022-02-23T04:25:24.000Z | 2022-02-23T04:25:24.000Z | 25.237013 | 388 | 0.503409 | [
[
[
"---\nlayout: post\ntitle: Project Euler - Problem 5\npost-order: 005\n---",
"_____no_output_____"
],
[
"2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.\n\nWhat is the smallest positive number that is **evenly divisible** by all of the numbers from 1 to 20?",
"_____no_output_____"
],
[
"### Solution #1",
"_____no_output_____"
],
[
"Let's brute-force it and see what happens. One thing we know: the number we are trying to find **cannot be** smaller than the products of all primes up to $20$. And the reason for that is, the number we are trying to find has to be divisible by $1$ to $20$ and the only way that can be accomplished is if the prime factors (for example $13$) is in that product.",
"_____no_output_____"
]
],
[
[
"import math\nimport time\n\n# let's setle \"floor\" first: the products of all primes from 1 to 20\ndef primes_up_to(n):\n product = 2\n\n for candidate_for_prime in range (3,n+1):\n for i in range (2,candidate_for_prime): \n if (candidate_for_prime % i == 0):\n break\n elif ((candidate_for_prime % i != 0) & (i + 1 == candidate_for_prime)):\n product *= candidate_for_prime\n \n return product\n\nstart = time.time()\n\ndivisible_up_to = 20\nfloor = primes_up_to(divisible_up_to)\n\n# the highest we can go is to n! (n factorial)\nceiling = math.factorial(divisible_up_to)\n\nfound = False\n\nfor ii in range (floor,ceiling):\n for i in range(2,divisible_up_to+1):\n if ii % i != 0:\n break\n elif i == divisible_up_to:\n print (ii)\n elapsed = time.time() - start\n print(elapsed)\n found = True\n break\n \n if found:\n break",
"232792560\n162.78695583343506\n"
]
],
[
[
"As you may notice, the running time is extremely high (and it will get higher if we choose a greater `divisible_up_to`). We need to find a more efficient way to tacke this problem.",
"_____no_output_____"
],
[
"### Solution #2 - Greatest power of primes",
"_____no_output_____"
],
[
"The solution for this problem actually requires **no computation**. And the reason for that is, if we find the prime factorization of each number up to 20 and multiply the greatest power of each, we will find the correct solution.",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n2 &= 2^1\\\\\n3 &= 3^1\\\\\n4 &= 2^2\\\\\n5 &= 5^1\\\\\n\\vdots\\\\\n18 &= 2^1 \\cdot 3^2\\\\\n19 &= 19^1\\\\\n20 &= 2^2 \\cdot 5^1\n\\end{align}\n$$",
"_____no_output_____"
],
[
"We can take advantage of this and build a list with the prime factors and a list with the prime factor's greatest power that shows up from 1 to `divisible_up_to`. After that, we evaluate the first item of the list of primes raised to the first item of the list of powers, multiplied by the second item of the list of primes raised to the second item of the list of powers and so on.",
"_____no_output_____"
],
[
"|LISTS|<td colspan=11>|\n|----------------|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n| list of primes | [ | 2 | 3 | 5 | 7 | 11| 13| 17| 19| ] |\n| list of powers | [ |4 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | ] |",
"_____no_output_____"
],
[
"Answer = $2^4 \\cdot 3^2 \\cdot 5^1 \\cdot 7^1 \\cdot 11^1 \\cdot 13^1 \\cdot 17^1 \\cdot 19^1 = 232792560$",
"_____no_output_____"
]
],
[
[
"def primes_list(n):\n list_of_primes = [1,2]\n\n for i in range (3,n+1):\n for ii in range (2,i): \n if (i % ii == 0):\n break\n elif ((i % ii != 0) & (ii + 1 == i)):\n list_of_primes.append(i)\n \n return list_of_primes\n\nlist_of_primes = primes_list(20)\nprint (list_of_primes)",
"[1, 2, 3, 5, 7, 11, 13, 17, 19]\n"
]
],
[
[
"Let's build a new list, with the same number of elements, but now filled with zeros. It will receive the greatest power for each prime:",
"_____no_output_____"
]
],
[
[
"list_of_powers = [1] * len(list_of_primes)\nprint(list_of_powers)",
"[1, 1, 1, 1, 1, 1, 1, 1, 1]\n"
]
],
[
[
"Now we need to check what is the greatest power that shows up, for each of the primes, from 1 to 20.",
"_____no_output_____"
]
],
[
[
"for i in list_of_primes:\n print (i)\n\n# if it's prime, there's no need to check: the power will be 1\n# do the prime factorization of all numbers up to 20",
"1\n2\n3\n5\n7\n11\n13\n17\n19\n"
]
],
[
[
"Placing everything on a single code,",
"_____no_output_____"
],
[
"The performance of this second algorithm is much better. We can even increase `divisible_up_to` without a substantial compromise on its performance:",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7b4fbb9b01b114ba154daad0788b72c03b1940b | 551,324 | ipynb | Jupyter Notebook | notebooks/histograms.ipynb | altanova/stuff | d55a65e929d95bcb6729ea61f4a655dea6452d0d | [
"MIT"
] | 2 | 2021-11-12T15:05:09.000Z | 2022-01-06T21:39:43.000Z | notebooks/histograms.ipynb | altanova/stuff | d55a65e929d95bcb6729ea61f4a655dea6452d0d | [
"MIT"
] | null | null | null | notebooks/histograms.ipynb | altanova/stuff | d55a65e929d95bcb6729ea61f4a655dea6452d0d | [
"MIT"
] | null | null | null | 658.690562 | 122,408 | 0.947392 | [
[
[
"# tips on drawing histograms: bin alignment vs labels",
"_____no_output_____"
]
],
[
[
"import pandas as pd, numpy as np, seaborn as sns\nimport os\n# Remove the most annoying pandas warning\n# A value is trying to be set on a copy of a slice from a DataFrame.\npd.options.mode.chained_assignment = None\n\ndata_dir = '../../data'\nsrc_file = 'sample01.csv'\nf = os.path.join(data_dir, src_file)\n\nimport sys\nsys.path.insert(0, '../modules')\nimport handy as hd",
"_____no_output_____"
],
[
"df = pd.read_csv(f, sep = ';')\ndf.shape",
"_____no_output_____"
],
[
"# remember the original data under this variable\ndf0 = df.copy()",
"_____no_output_____"
]
],
[
[
"## example: 2d histogram\n\nshowing bin allignments: incorrect, and correct",
"_____no_output_____"
]
],
[
[
"data = df\nsns.set()\nimport matplotlib.pyplot as plt;\nimport matplotlib.colors as mcolors\n\nfig, ax = plt.subplots(1, 3, figsize=(20,4))\nax = ax.flatten()\n\nfig.suptitle(\"some histograms\", fontsize=20, y=1.1)\n\naxis = ax[0]\n\nh = axis.hist(x = data.weekday)\n\naxis.set_xlabel('days of week (0=Mon)')\naxis.set_ylabel('frequency')\naxis.set_title(\"sloppy bin allignment\", fontsize = 16)\n\naxis = ax[1]\n\naxis.hist(x = data.weekday, bins = 7)\n\naxis.set_xlabel('days of week (0=Mon)')\naxis.set_ylabel('frequency')\naxis.set_title(\"still sloppy: x labels not aligned)\", fontsize = 16)\n\naxis = ax[2]\n\naxis.hist(x = data.weekday, bins = (np.arange(7 + 1)) - 0.5)\n\naxis.set_xlabel('days of week (0=Mon)')\naxis.set_ylabel('frequency')\naxis.set_title(\"correct bin allignment\", fontsize = 16)\n\n\nplt.tight_layout()\n",
"_____no_output_____"
],
[
"data = df\n\nimport matplotlib.pyplot as plt;\nimport matplotlib.colors as mcolors\n\nfig, ax = plt.subplots(1, 2, figsize=(15,6))\n\nfig.suptitle(\"two 2d histograms\", fontsize=20, y=1.1)\n\nax1 = ax[0]\n# gammas = [0.8, 0.5, 0.3]\ngamma = 0.4\n\nh = ax1.hist2d(x = data.weekday, \n y = data.hour, \n bins = [7, 24], \n norm=mcolors.PowerNorm(gamma), \n cmap='Blues')\n\ncb = fig.colorbar(h[3], ax=ax1)\ncb.set_label('incidents per bin')\nax1.set_xlabel('days of week (0=Mon)')\nax1.set_ylabel('hours of day')\nax1.set_title(\"sloppy bin allignment\", fontsize = 16)\n\nplt.tight_layout()\n\nax1 =ax[1]\nxbins = np.arange(0, 7 + 1) - 0.5\nybins = np.arange(0, 24 + 1) - 0.5\n\nh = ax1.hist2d(x = data.weekday, \n y = data.hour, \n bins = [xbins, ybins], \n norm=mcolors.PowerNorm(gamma), \n cmap='Blues')\n #vmax = 100 )\ncb = fig.colorbar(h[3], ax=ax1)\ncb.set_label('incidents per bin')\nax1.set_xlabel('days of week (0=Mon)')\nax1.set_ylabel('hours of day')\nax1.set_title(\"bins aligned correctly\", fontsize = 16)\n\nplt.tight_layout()\n",
"_____no_output_____"
]
],
[
[
"# same problem with Seaborn (v 0.11) displot or joinplot, type hist",
"_____no_output_____"
]
],
[
[
"# bad \nsns.jointplot(x=\"weekday\", y=\"hour\", data=df.sample(1000), kind='hist', ax = ax[0])\n",
"_____no_output_____"
],
[
"# still bad\nsns.jointplot(x=\"weekday\", y=\"hour\", data=df.sample(1000), kind='hist',bins = [7, 24])",
"_____no_output_____"
],
[
"#good\nbins = (np.arange(7 + 1)-0.5, np.arange(24 + 1) - 0.5)\nsns.jointplot(x=\"weekday\", y=\"hour\", data=df.sample(1000), kind='hist',bins = bins)",
"_____no_output_____"
],
[
"# but the problem is gone, if displot type is other than kde\nsns.jointplot(x=\"weekday\", y=\"hour\", data=df.sample(1000), kind='kde', xlim=(0,6), ylim=(0,24))",
"_____no_output_____"
],
[
"sns.jointplot(x=\"weekday\", y=\"hour\", cmap = 'coolwarm', data=df.sample(10000), kind='kde', fill=True)",
"_____no_output_____"
],
[
"# same, with contour only\nsns.jointplot(x=\"weekday\", y=\"hour\", cmap = 'coolwarm', data=df.sample(10000), kind='kde')",
"_____no_output_____"
]
],
[
[
"# more problems with bin allignment",
"_____no_output_____"
]
],
[
[
"# I will now demonstrate how bad bins lead to bad conclusions",
"_____no_output_____"
],
[
"# store previous df under separate variable\ndf0 = df\n",
"_____no_output_____"
],
[
"data_dir = '../../data'\nsrc_file = 'sample02.csv'\nf = os.path.join(data_dir, src_file)\ndf = pd.read_csv(f, sep = ';')",
"_____no_output_____"
],
[
"df['created'] = pd.to_datetime(df['created'], format = hd.format_dash, errors = 'coerce')\ndf['resolved'] = pd.to_datetime(df['resolved'], format = hd.format_dash, errors = 'coerce')\ndf = hd.augment_columns(df)\n\n# remember this augmented data set\ndf1 = df",
"_____no_output_____"
],
[
"minweek, maxweek = df.week_nr.min(), df.week_nr.max()\nminweek, maxweek",
"_____no_output_____"
],
[
"data1 = df[(df.week_nr == maxweek -1) & (df.category == 'Alarm')].weekhour\ndata2 = df[(df.week_nr == maxweek -1)].weekhour",
"_____no_output_____"
],
[
"#INCORRECT\n\nslots = 24 * 7\nbins_sloppy = slots\n \nfig, ax = plt.subplots(2,1, figsize = (25,4))\n\naxis = ax[0]\nw1 = axis.hist(x= data1, bins = bins_sloppy)\naxis.set_title('category 1, sloppy bins', loc = 'left', fontsize = 16)\n\naxis = ax[1]\nw2 = axis.hist(x= data2, bins = bins_sloppy)\naxis.set_title('category 2, sloppy bins', loc = 'left', fontsize = 16)\n\nplt.tight_layout()",
"_____no_output_____"
],
[
"# the visualization above is wrong. Bars are mislined, due to sloppy bins definition",
"_____no_output_____"
],
[
"#CORRECT\n\ndef my_title_markers(axis, title, marker1, marker2):\n axis.set_title(title, loc = 'left', fontsize = 16)\n axis.axvline(x=marker1, color='r', linestyle='dashed', linewidth=2, label = str(marker1))\n axis.axvline(x=marker2, color='b', linestyle='dashed', linewidth=2, label = str(marker2))\n axis.legend(loc = 'upper left', fontsize = '12')\n\n \nslots = 24 * 7\nsloppy_bins = slots\nbins = np.arange(slots + 1) - 0.5\n\nfig, ax = plt.subplots(4,1, figsize = (25,10))\nmarker1, marker2 = 8, 40\n\naxis = ax[0]\naxis.hist(x= data1, bins = sloppy_bins)\nmy_title_markers(axis, 'category 1, sloppy bins', marker1, marker2)\n\naxis = ax[1]\naxis.hist(x= data2, bins = sloppy_bins)\nmy_title_markers(axis, 'category 2, sloppy bins', marker1, marker2)\n\naxis = ax[2]\nw1 = axis.hist(x= data1, bins = bins)\nmy_title_markers(axis, 'category 1, correct bins', marker1, marker2)\n\naxis = ax[3]\nw2 = axis.hist(x= data2, bins = bins)\nmy_title_markers(axis, 'category 2, correct bins', marker1, marker2)\n\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"# Problems with binning and rounding days and weeks (pd.Timestamp)\n\nThe below is a long version. The compact version has been summarized in handy-showcase workbook.",
"_____no_output_____"
]
],
[
[
"data_dir = '../../data'\nsrc_file = 'sample01.csv'\nf = os.path.join(data_dir, src_file)\ndf = pd.read_csv(f, sep = ';')\ndf['created'] = pd.to_datetime(df['created'], format = hd.format_dash, errors = 'coerce')\ndf['resolved'] = pd.to_datetime(df['resolved'], format = hd.format_dash, errors = 'coerce')\ndf = hd.augment_columns(df)\n",
"_____no_output_____"
],
[
"import seaborn as sns, matplotlib.pyplot as plt\nsns.set()\n\nstart, end = df.created.min(), df.created.max()\ndays = (end - start).days\nweeks = days / 7\n\nfig, ax = plt.subplots(1,1, figsize = (25,3))\ndata = df[(df.created > start) & (df.created < end)].created\n\naxis = ax\nw = axis.hist(x= data, bins = int(weeks))\naxis.set_title('Naive (incorrect) weekly histogram (1 bar = 1 week)', fontsize = 20)\n\nplt.show()\nprint('Basic statistics:\\n')\nprint('Total records:\\t{}'.format(len(df)))\nprint('start:\\t{}\\t{}\\nend:\\t{}\\t{}'.format(start, start.day_name(), end, end.day_name()))\nprint('weeks: {:.1f}\\trecords per week:{:.1f},\\t weekly min:{},\\t weekly max:{}'.format( weeks, len(df) / weeks, int(min(w[0])), int(max(w[0]))))\nprint('days: {}\\trecords per day:{:.1f}'.format(days, len(df) / days))\n",
"_____no_output_____"
],
[
"# return Monday 00:00:00 before given moment\ndef monday_before(now):\n monday_before = now - pd.Timedelta(now.weekday(), 'days')\n # Monday 00:00:00\n return pd.Timestamp(monday_before.date())\n\n# return Monday 00:00:00 after given moment \ndef monday_after(now):\n # trick: compute Monday before 1 week from now... it's the same.\n return monday_before(now + pd.Timedelta(7, 'days'))\n\n# use this to have full week span, spanning tne entire period\n# returns: Monday before, Monday after, number of weeks between\ndef outer_week_boundaries(series):\n start, end = monday_before(series.min()), monday_after(series.max())\n return start, end, (end - start).days // 7\n\ndef inner_week_boundaries(series):\n start, end = monday_after(series.min()), monday_before(series.max())\n return start, end, (end - start).days // 7 \n\n# exact number of days, including fraction of day (float)\ndef fractional_days(data_start, data_end):\n delta = data_end - data_start\n return delta.days + delta.seconds / (60 * 60 * 24)\n\n# number of full 24-hour periods\ndef inner_days(data_start, data_end):\n return (data_end - data_start).days \n\n# number of days between midnight-before-first-record and midnight-after-last-record\ndef outer_days(data_start, data_end):\n return (data_end.date() - data_start.date()).days + 1\n\ndef weekly_bin_edges(start, howmany):\n # add 1 for we count bin edges rather than bins\n WEEK = pd.Timedelta(7, 'days')\n return [outer_start + i * WEEK for i in np.arange(howmany + 1)]\n\ndef daily_bin_edges(start, howmany):\n # add 1 for we count bin edges rather than bins\n DAY = pd.Timedelta(1, 'days')\n return [data_start.date() + i * DAY for i in np.arange(howmany + 1)]\n\n \n ",
"_____no_output_____"
],
[
"# weekly bins\n\n\n#s = weekly_statistics()\nouter_start, outer_end, outer_weeks = outer_week_boundaries(df.created)\ninner_start, inner_end, inner_weeks = inner_week_boundaries(df.created)\ndata_start, data_end = df.created.min(), df.created.max()\nweekly_bins = weekly_bin_edges(outer_start, outer_weeks)\n\ndays = fractional_days(data_start, data_end)\nouter_days = outer_days(data_start, data_end)\ndaily_bins = daily_bin_edges(data_start, outer_days)\nweeks = days / 7",
"_____no_output_____"
],
[
"def draw(axis, outer_start, outer_end, inner_start, inner_end):\n axis.axvline(x=inner_start, color='r', linestyle='dashed', linewidth=2, label = 'inner (full) weeks range')\n axis.axvline(x=outer_start, color='b', linestyle='dashed', linewidth=2, label = 'outer (incomplete) weeks range')\n axis.axvline(x=inner_end, color='r', linestyle='dashed', linewidth=2)\n axis.axvline(x=outer_end, color='b', linestyle='dashed', linewidth=2)\n axis.legend()\n \nfig, ax = plt.subplots(2,1, figsize = (25,6))\ndata = df.created\n\naxis = ax[0]\nw = axis.hist(x= data, bins = weekly_bins)\ndraw(axis, outer_start, outer_end, inner_start, inner_end)\naxis.set_title('Correct weekly histogram (1 bar = 1 week)', fontsize = 20)\n \nweek_values = w[0]\nfullweek_values = week_values[1:-1]\n\naxis = ax[1]\ndraw(axis, outer_start, outer_end, inner_start, inner_end)\nd = axis.hist(x= data, bins = daily_bins, edgecolor = 'black')\naxis.set_title('Corresponding daily histogram (1 bar = 1 day)', fontsize = 20)\n\nday_values = d[0]\nfullday_values = day_values[1:-1]\n\nplt.tight_layout()\nplt.show()\n\n\nprint('Basic statistics:\\n')\nprint('Total records:\\t{}'.format(len(df)))\nprint('Histogram range (outer weeks):{:.0f}'.format(outer_weeks))\nstart, end = outer_start, outer_end\nprint('start:\\t{}\\t{}\\nend:\\t{}\\t{}'.format(start, start.day_name(), end, end.day_name()))\nprint('Data range:')\nstart, end = data_start, data_end\nprint('start:\\t{}\\t{}\\nend:\\t{}\\t{}'.format(start, start.day_name(), end, end.day_name()))\nprint('Full weeks (inner weeks):{:.0f}'.format(inner_weeks))\nstart, end = inner_start, inner_end\nprint('start:\\t{}\\t{}\\nend:\\t{}\\t{}'.format(start, start.day_name(), end, end.day_name()))\nprint('Data stats:')\nprint('weeks: {:.1f}\\trecords per week:{:.1f},\\t weekly min:{},\\t weekly max:{}'.\\\n format( weeks, len(df) / weeks, int(min(fullweek_values)), int(max(week_values))))\nprint('days: {:.1f}\\trecords per day:{:.1f},\\t daily min:{},\\t daily max:{}'.\\\n format(days, len(df) / days, int(min(fullday_values)), int(max(day_values))))\nprint('Note: The minima do not take into account the marginal (uncomplete) weeks or days')\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7b5115e3c35eb571e1778ea85a0406d9fac1e23 | 26,401 | ipynb | Jupyter Notebook | notebooks/bhno-with-adversary/21ChansOTNoFFTOtherClassDims.ipynb | robintibor/reversible2 | e6fea33ba41c7f76ee50295329b4ef27b879a7fa | [
"MIT"
] | null | null | null | notebooks/bhno-with-adversary/21ChansOTNoFFTOtherClassDims.ipynb | robintibor/reversible2 | e6fea33ba41c7f76ee50295329b4ef27b879a7fa | [
"MIT"
] | null | null | null | notebooks/bhno-with-adversary/21ChansOTNoFFTOtherClassDims.ipynb | robintibor/reversible2 | e6fea33ba41c7f76ee50295329b4ef27b879a7fa | [
"MIT"
] | null | null | null | 37.608262 | 127 | 0.552328 | [
[
[
"import logging\nimport importlib\nimportlib.reload(logging) # see https://stackoverflow.com/a/21475297/1469195\nlog = logging.getLogger()\nlog.setLevel('INFO')\nimport sys\n\nlogging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',\n level=logging.INFO, stream=sys.stdout)",
"_____no_output_____"
],
[
"%%capture\nimport os\nimport site\nos.sys.path.insert(0, '/home/schirrmr/code/reversible/')\nos.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/')\nos.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//')\n\n\n%load_ext autoreload\n%autoreload 2\nimport numpy as np\nimport logging\nlog = logging.getLogger()\nlog.setLevel('INFO')\nimport sys\nlogging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',\n level=logging.INFO, stream=sys.stdout)\nimport matplotlib\nfrom matplotlib import pyplot as plt\nfrom matplotlib import cm\n%matplotlib inline\n%config InlineBackend.figure_format = 'png'\nmatplotlib.rcParams['figure.figsize'] = (12.0, 1.0)\nmatplotlib.rcParams['font.size'] = 14\nimport seaborn\nseaborn.set_style('darkgrid')\n\nfrom reversible2.sliced import sliced_from_samples\nfrom numpy.random import RandomState\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport numpy as np\nimport copy\nimport math\n\nimport itertools\nimport torch as th\nfrom braindecode.torch_ext.util import np_to_var, var_to_np\nfrom reversible2.splitter import SubsampleSplitter\n\nfrom reversible2.view_as import ViewAs\n\nfrom reversible2.affine import AdditiveBlock\nfrom reversible2.plot import display_text, display_close\nfrom reversible2.bhno import load_file, create_inputs\nth.backends.cudnn.benchmark = True",
"_____no_output_____"
],
[
"sensor_names = ['Fz', \n 'FC3','FC1','FCz','FC2','FC4',\n 'C5','C3','C1','Cz','C2','C4','C6',\n 'CP3','CP1','CPz','CP2','CP4',\n 'P1','Pz','P2',\n 'POz']\norig_train_cnt = load_file('/data/schirrmr/schirrmr/HGD-public/reduced/train/4.mat')\ntrain_cnt = orig_train_cnt.reorder_channels(sensor_names)\n\ntrain_inputs = create_inputs(train_cnt, final_hz=256, half_before=True)\nn_split = len(train_inputs[0]) - 40\ntest_inputs = [t[-40:] for t in train_inputs]\ntrain_inputs = [t[:-40] for t in train_inputs]",
"_____no_output_____"
],
[
"cuda = True\nif cuda:\n train_inputs = [i.cuda() for i in train_inputs]\n test_inputs = [i.cuda() for i in test_inputs]",
"_____no_output_____"
],
[
"from reversible2.graph import Node\nfrom reversible2.branching import CatChans, ChunkChans, Select\nfrom reversible2.constantmemory import sequential_to_constant_memory\nfrom reversible2.constantmemory import graph_to_constant_memory\ndef invert(feature_model, out):\n return feature_model.invert(out)\n\nfrom copy import deepcopy\nfrom reversible2.graph import Node\nfrom reversible2.distribution import TwoClassDist\nfrom reversible2.wrap_invertible import WrapInvertible\nfrom reversible2.blocks import dense_add_no_switch, conv_add_3x3_no_switch\nfrom reversible2.rfft import RFFT, Interleave\nfrom reversible2.util import set_random_seeds\nfrom torch.nn import ConstantPad2d\nimport torch as th\nfrom reversible2.splitter import SubsampleSplitter\n\nset_random_seeds(2019011641, cuda)\nn_chans = train_inputs[0].shape[1]\nn_time = train_inputs[0].shape[2]\nbase_model = nn.Sequential(\n SubsampleSplitter(stride=[2,1],chunk_chans_first=False),\n conv_add_3x3_no_switch(2*n_chans,32),\n conv_add_3x3_no_switch(2*n_chans,32),\n SubsampleSplitter(stride=[2,1],chunk_chans_first=True), # 4 x 128\n conv_add_3x3_no_switch(4*n_chans,32),\n conv_add_3x3_no_switch(4*n_chans,32),\n SubsampleSplitter(stride=[2,1],chunk_chans_first=True), # 8 x 64\n conv_add_3x3_no_switch(8*n_chans,32),\n conv_add_3x3_no_switch(8*n_chans,32))\nbase_model.cuda();\n\nbranch_1_a = nn.Sequential(\n SubsampleSplitter(stride=[2,1],chunk_chans_first=False), # 8 x 32\n conv_add_3x3_no_switch(8*n_chans,32),\n conv_add_3x3_no_switch(8*n_chans,32),\n SubsampleSplitter(stride=[2,1],chunk_chans_first=True),# 16 x 16\n conv_add_3x3_no_switch(16*n_chans,32),\n conv_add_3x3_no_switch(16*n_chans,32),\n SubsampleSplitter(stride=[2,1],chunk_chans_first=True), # 32 x 8\n conv_add_3x3_no_switch(32*n_chans,32),\n conv_add_3x3_no_switch(32*n_chans,32),\n)\nbranch_1_b = nn.Sequential(\n *(list(deepcopy(branch_1_a).children()) + [\n ViewAs((-1, 32*n_chans,n_time//64,1), (-1,(n_time // 2)*n_chans)),\n dense_add_no_switch((n_time // 2)*n_chans,32),\n dense_add_no_switch((n_time // 2)*n_chans,32),\n dense_add_no_switch((n_time // 2)*n_chans,32),\n dense_add_no_switch((n_time // 2)*n_chans,32),\n]))\nbranch_1_a.cuda();\nbranch_1_b.cuda();\n\nbranch_2_a = nn.Sequential(\n SubsampleSplitter(stride=[2,1], chunk_chans_first=False),# 32 x 4\n conv_add_3x3_no_switch(32*n_chans,32),\n conv_add_3x3_no_switch(32*n_chans,32),\n SubsampleSplitter(stride=[2,1],chunk_chans_first=True),# 64 x 2\n conv_add_3x3_no_switch(64*n_chans,32),\n conv_add_3x3_no_switch(64*n_chans,32),\n ViewAs((-1, (n_time // 4)*n_chans,1,1), (-1,(n_time // 4)*n_chans)),\n dense_add_no_switch((n_time // 4)*n_chans,64),\n dense_add_no_switch((n_time // 4)*n_chans,64),\n dense_add_no_switch((n_time // 4)*n_chans,64),\n dense_add_no_switch((n_time // 4)*n_chans,64),\n)\n\n\nbranch_2_b = deepcopy(branch_2_a).cuda()\nbranch_2_a.cuda();\nbranch_2_b.cuda();\n\nfinal_model = nn.Sequential(\n dense_add_no_switch(n_time*n_chans,256),\n dense_add_no_switch(n_time*n_chans,256),\n dense_add_no_switch(n_time*n_chans,256),\n dense_add_no_switch(n_time*n_chans,256),\n)\nfinal_model.cuda();\no = Node(None, base_model)\no = Node(o, ChunkChans(2))\no1a = Node(o, Select(0))\no1b = Node(o, Select(1))\no1a = Node(o1a, branch_1_a)\no1b = Node(o1b, branch_1_b)\no2 = Node(o1a, ChunkChans(2))\no2a = Node(o2, Select(0))\no2b = Node(o2, Select(1))\no2a = Node(o2a, branch_2_a)\no2b = Node(o2b, branch_2_b)\no = Node([o1b,o2a,o2b], CatChans())\no = Node(o, final_model)\no = graph_to_constant_memory(o)\nfeature_model = o\nif 
cuda:\n feature_model.cuda()\nfeature_model.eval();",
"_____no_output_____"
]
],
[
[
"### set feature model weights and distribution to good start parametersm",
"_____no_output_____"
]
],
[
[
"n_dims = np.prod(train_inputs[0].shape[1:])\ni_class_dims = [int(n_dims*0.25), int(n_dims * 0.75)]\n\nfrom reversible2.constantmemory import clear_ctx_dicts\nfrom reversible2.distribution import TwoClassDist\n\nfeature_model.data_init(th.cat((train_inputs[0], train_inputs[1]), dim=0))\n\n# Check that forward + inverse is really identical\nt_out = feature_model(train_inputs[0][:2])\ninverted = invert(feature_model, t_out)\nclear_ctx_dicts(feature_model)\nassert th.allclose(train_inputs[0][:2], inverted, rtol=1e-3,atol=1e-4)\ndevice = list(feature_model.parameters())[0].device\nfrom reversible2.ot_exact import ot_euclidean_loss_for_samples\nclass_dist = TwoClassDist(2, np.prod(train_inputs[0].size()[1:]) - 2,\n i_class_inds=i_class_dims)\nclass_dist.cuda()\n\nfor i_class in range(2):\n with th.no_grad():\n this_outs = feature_model(train_inputs[i_class])\n mean = th.mean(this_outs, dim=0)\n std = th.std(this_outs, dim=0)\n class_dist.set_mean_std(i_class, mean, std)\n # Just check\n setted_mean, setted_std = class_dist.get_mean_std(i_class)\n assert th.allclose(mean, setted_mean)\n assert th.allclose(std, setted_std)\nclear_ctx_dicts(feature_model)\n\noptim_model = th.optim.Adam(feature_model.parameters(), lr=1e-3, betas=(0.9,0.999))\noptim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2, betas=(0.9,0.999))",
"_____no_output_____"
],
[
"%%writefile plot.py\nimport torch as th\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom reversible2.util import var_to_np\nfrom reversible2.plot import display_close\nfrom matplotlib.patches import Ellipse\nimport seaborn\n\ndef plot_outs(feature_model, train_inputs, test_inputs, class_dist):\n with th.no_grad():\n # Compute dist for mean/std of encodings\n data_cls_dists = []\n for i_class in range(len(train_inputs)):\n this_class_outs = feature_model(train_inputs[i_class])[:,class_dist.i_class_inds]\n data_cls_dists.append(\n th.distributions.MultivariateNormal(th.mean(this_class_outs, dim=0),\n covariance_matrix=th.diag(th.std(this_class_outs, dim=0) ** 2)))\n for setname, set_inputs in ((\"Train\", train_inputs), (\"Test\", test_inputs)):\n\n outs = [feature_model(ins) for ins in set_inputs]\n c_outs = [o[:,class_dist.i_class_inds] for o in outs]\n\n c_outs_all = th.cat(c_outs)\n\n cls_dists = []\n for i_class in range(len(c_outs)):\n mean, std = class_dist.get_mean_std(i_class)\n cls_dists.append(\n th.distributions.MultivariateNormal(mean[class_dist.i_class_inds],\n covariance_matrix=th.diag(std[class_dist.i_class_inds] ** 2)))\n\n preds_per_class = [th.stack([cls_dists[i_cls].log_prob(c_out)\n for i_cls in range(len(cls_dists))],\n dim=-1) for c_out in c_outs]\n\n pred_labels_per_class = [np.argmax(var_to_np(preds), axis=1)\n for preds in preds_per_class]\n\n labels = np.concatenate([np.ones(len(set_inputs[i_cls])) * i_cls \n for i_cls in range(len(train_inputs))])\n\n acc = np.mean(labels == np.concatenate(pred_labels_per_class))\n\n data_preds_per_class = [th.stack([data_cls_dists[i_cls].log_prob(c_out)\n for i_cls in range(len(cls_dists))],\n dim=-1) for c_out in c_outs]\n data_pred_labels_per_class = [np.argmax(var_to_np(data_preds), axis=1)\n for data_preds in data_preds_per_class]\n data_acc = np.mean(labels == np.concatenate(data_pred_labels_per_class))\n\n print(\"{:s} Accuracy: {:.1f}%\".format(setname, acc * 100))\n fig = plt.figure(figsize=(5,5))\n ax = plt.gca()\n for i_class in range(len(c_outs)):\n #if i_class == 0:\n # continue\n o = var_to_np(c_outs[i_class]).squeeze()\n incorrect_pred_mask = pred_labels_per_class[i_class] != i_class\n plt.scatter(o[:,0], o[:,1], s=20, alpha=0.75, label=[\"Right\", \"Rest\"][i_class])\n assert len(incorrect_pred_mask) == len(o)\n plt.scatter(o[incorrect_pred_mask,0], o[incorrect_pred_mask,1], marker='x', color='black',\n alpha=1, s=5)\n means, stds = class_dist.get_mean_std(i_class)\n means = var_to_np(means)[class_dist.i_class_inds]\n stds = var_to_np(stds)[class_dist.i_class_inds]\n for sigma in [0.5,1,2,3]:\n ellipse = Ellipse(means, stds[0]*sigma, stds[1]*sigma)\n ax.add_artist(ellipse)\n ellipse.set_edgecolor(seaborn.color_palette()[i_class])\n ellipse.set_facecolor(\"None\")\n for i_class in range(len(c_outs)):\n o = var_to_np(c_outs[i_class]).squeeze()\n plt.scatter(np.mean(o[:,0]), np.mean(o[:,1]),\n color=seaborn.color_palette()[i_class+2], s=80, marker=\"^\",\n label=[\"Right Mean\", \"Rest Mean\"][i_class])\n\n plt.title(\"{:6s} Accuracy: {:.1f}%\\n\"\n \"From data mean/std: {:.1f}%\".format(setname, acc * 100, data_acc * 100))\n plt.legend(bbox_to_anchor=(1,1,0,0))\n display_close(fig)\n return",
"_____no_output_____"
],
[
"import pandas as pd\ndf = pd.DataFrame()\n\nfrom reversible2.training import OTTrainer\ntrainer = OTTrainer(feature_model, class_dist,\n optim_model, optim_dist)",
"_____no_output_____"
],
[
"from reversible2.constantmemory import clear_ctx_dicts\nfrom reversible2.timer import Timer\nfrom plot import plot_outs\nfrom reversible2.gradient_penalty import gradient_penalty\n\n\ni_start_epoch_out = 4001\nn_epochs = 10001\nfor i_epoch in range(n_epochs):\n epoch_row = {}\n with Timer(name='EpochLoop', verbose=False) as loop_time:\n loss_on_outs = i_epoch >= i_start_epoch_out\n result = trainer.train(train_inputs, loss_on_outs=loss_on_outs)\n \n epoch_row.update(result)\n epoch_row['runtime'] = loop_time.elapsed_secs * 1000\n if i_epoch % (n_epochs // 20) != 0:\n df = df.append(epoch_row, ignore_index=True)\n # otherwise add ot loss in\n else:\n for i_class in range(len(train_inputs)):\n with th.no_grad():\n class_ins = train_inputs[i_class]\n samples = class_dist.get_samples(i_class, len(train_inputs[i_class]) * 4)\n inverted = feature_model.invert(samples)\n clear_ctx_dicts(feature_model)\n ot_loss_in = ot_euclidean_loss_for_samples(class_ins.view(class_ins.shape[0], -1),\n inverted.view(inverted.shape[0], -1)[:(len(class_ins))])\n epoch_row['ot_loss_in_{:d}'.format(i_class)] = ot_loss_in.item()\n df = df.append(epoch_row, ignore_index=True)\n print(\"Epoch {:d} of {:d}\".format(i_epoch, n_epochs))\n print(\"Loop Time: {:.0f} ms\".format(loop_time.elapsed_secs * 1000))\n display(df.iloc[-3:])\n plot_outs(feature_model, train_inputs, test_inputs,\n class_dist)\n fig = plt.figure(figsize=(8,2))\n plt.plot(var_to_np(th.cat((th.exp(class_dist.class_log_stds),\n th.exp(class_dist.non_class_log_stds)))),\n marker='o')\n display_close(fig)\n \n",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"for i_class in range(len(train_inputs)):\n with th.no_grad():\n class_ins = train_inputs[i_class]\n samples = class_dist.get_samples(i_class, len(train_inputs[i_class]) * 4)\n inverted = feature_model.invert(samples)\n clear_ctx_dicts(feature_model)\n ot_loss_in = ot_euclidean_loss_for_samples(class_ins.view(class_ins.shape[0], -1),\n inverted.view(inverted.shape[0], -1)[:(len(class_ins))])\n epoch_row['ot_loss_in_{:d}'.format(i_class)] = ot_loss_in.item()\nprint(\"Epoch {:d} of {:d}\".format(i_epoch, n_epochs))\nprint(\"Loop Time: {:.0f} ms\".format(loop_time.elapsed_secs * 1000))\ndisplay(df.iloc[-3:])\nplot_outs(feature_model, train_inputs, test_inputs,\n class_dist)\nfig = plt.figure(figsize=(8,2))\nplt.plot(var_to_np(th.cat((th.exp(class_dist.class_log_stds),\n th.exp(class_dist.non_class_log_stds)))),\n marker='o')\ndisplay_close(fig)",
"_____no_output_____"
],
[
"def get_non_class_outs(feature_model, inputs, class_dist):\n with th.no_grad():\n outs_per_class = [feature_model(t) for t in inputs]\n clear_ctx_dicts(feature_model)\n non_class_inds = np.setdiff1d(list(range(outs_per_class[0].shape[1])),\n class_dist.i_class_inds)\n non_class_outs = [o[:,non_class_inds].detach() for o in outs_per_class]\n return non_class_outs",
"_____no_output_____"
],
[
"train_non_class_outs = get_non_class_outs(feature_model, train_inputs, class_dist)\n\ntest_non_class_outs = get_non_class_outs(feature_model, test_inputs, class_dist)",
"_____no_output_____"
],
[
"class DistClassifier(nn.Module):\n def __init__(self, n_dims):\n super(DistClassifier, self).__init__()\n self.mean0 = th.nn.Parameter(th.zeros(n_dims))\n self.mean1 = th.nn.Parameter(th.zeros(n_dims))\n self.logstd0 = th.nn.Parameter(th.zeros(n_dims))\n self.logstd1 = th.nn.Parameter(th.zeros(n_dims))\n \n def predict(self, outs):\n dist0 = th.distributions.MultivariateNormal(self.mean0,\n th.diag(th.exp(self.logstd0) ** 2))\n dist1 = th.distributions.MultivariateNormal(self.mean1,\n th.diag(th.exp(self.logstd1) ** 2))\n return th.stack((dist0.log_prob(outs), dist1.log_prob(outs)), dim=-1)\n \n def predict_log_softmax(self, outs):\n probs = self.predict(outs)\n return F.log_softmax(probs, dim=1)\n \n ",
"_____no_output_____"
],
[
"clf = DistClassifier(train_non_class_outs[0].shape[1])\nclf.cuda()\noptim_clf = th.optim.Adam(clf.parameters(), lr=1e-2)",
"_____no_output_____"
],
[
"n_epochs = 200\nfor i_epoch in range(n_epochs):\n accs = []\n for i_class in range(2):\n preds = clf.predict_log_softmax(train_non_class_outs[i_class])\n labels = np_to_var((np.ones(len(preds)) * i_class).astype(np.int64), device='cuda')\n loss = F.nll_loss(preds, labels)\n optim_clf.zero_grad()\n loss.backward()\n optim_clf.step()\n \n with th.no_grad():\n for set_non_class_outs in [train_non_class_outs, test_non_class_outs]:\n accs = []\n for i_class in range(2):\n preds = clf.predict_log_softmax(set_non_class_outs[i_class])\n labels = np_to_var((np.ones(len(preds)) * i_class).astype(np.int64), device='cuda')\n acc = np.mean(np.argmax(var_to_np(preds), axis=1) == var_to_np(labels))\n accs.append(acc)\n print(\"Acc: {:.1f}%\".format(100*np.mean(accs)))\n print(\"\")",
"_____no_output_____"
],
[
"with th.no_grad():\n for set_non_class_outs in [train_non_class_outs, test_non_class_outs]:\n accs = []\n for i_class in range(2):\n preds = clf.predict_log_softmax(set_non_class_outs[i_class])\n labels = np_to_var((np.ones(len(preds)) * i_class).astype(np.int64), device='cuda')\n acc = np.mean(np.argmax(var_to_np(preds), axis=1) == var_to_np(labels))\n accs.append(acc)\n print(len(preds))\n print(\"Acc: {:.1f}%\".format(100*np.mean(accs)))\nprint(\"\")",
"_____no_output_____"
],
[
"%%javascript\nvar kernel = IPython.notebook.kernel;\nvar thename = window.document.getElementById(\"notebook_name\").innerHTML;\nvar command = \"nbname = \" + \"'\"+thename+\"'\";\nkernel.execute(command);",
"_____no_output_____"
],
[
"nbname",
"_____no_output_____"
],
[
"folder_path = '/data/schirrmr/schirrmr/reversible/models/notebooks/{:s}/'.format(nbname)\nos.makedirs(folder_path,exist_ok=True)",
"_____no_output_____"
],
[
"name_and_variable = [('feature_model', feature_model),\n ('class_dist', class_dist),\n ('non_class_log_stds', class_dist.non_class_log_stds),\n ('class_log_stds', class_dist.class_log_stds,),\n ('class_means', class_dist.class_means),\n ('non_class_means', class_dist.non_class_means),\n ('feature_model_params', feature_model.state_dict()),\n ('optim_model', optim_model.state_dict()),\n ('optim_dist', optim_dist.state_dict())]\n\nfor name, variable in name_and_variable:\n th.save(variable, os.path.join(folder_path, name + '.pkl'))",
"_____no_output_____"
],
[
"print(\"\\n\".join([\"{:30s}\\t{:.1f}\".format(f, os.path.getsize(os.path.join(folder_path, f)) / (1024.0 *1024.0))\n for f in os.listdir(folder_path)]))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b519225b7b966c683ccfe6f697013d8ec188f6 | 1,813 | ipynb | Jupyter Notebook | ch5/Softmax.ipynb | yoon-youngjin/deep-learning-from-scratch | 1f52b273d5c5b03442e9009b0ee8b5f312b2d25f | [
"MIT"
] | null | null | null | ch5/Softmax.ipynb | yoon-youngjin/deep-learning-from-scratch | 1f52b273d5c5b03442e9009b0ee8b5f312b2d25f | [
"MIT"
] | null | null | null | ch5/Softmax.ipynb | yoon-youngjin/deep-learning-from-scratch | 1f52b273d5c5b03442e9009b0ee8b5f312b2d25f | [
"MIT"
] | null | null | null | 21.583333 | 63 | 0.450083 | [
[
[
"import numpy as np",
"_____no_output_____"
],
[
"def softmax(x):\n \n c = np.max(x)\n exp_x = np.exp(x-c)\n exp_all = np.sum(exp_x)\n \n return exp_x / exp_all\n\ndef cross_entropy_loss(y, t):\n delta = 1e-7\n return -np.sum(t*np.log(y+delta))\n\nclass SoftmaxWithLoss:\n def __init__(self):\n self.loss = None\n self.y = None\n self.t = None\n \n def forward(self, x, t):\n self.t = t\n self.y = softmax(x)\n self.loss = cross_entropy_loss(self.y, self.t)\n \n return self.loss\n \n def backward(self, dout=1):\n batch_size = self.t.shape[0]\n dx = self.y - self.t / batch_size\n \n return dx\n ",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e7b525f3797599cc1f5c1324ebb7b759a9d576f8 | 145,286 | ipynb | Jupyter Notebook | docs/AHPQ_for_GloVe.ipynb | AxelvL/AHPQ.jl | 66efa2b1febf83ad852836bf22603fd488d0f45e | [
"MIT"
] | 3 | 2021-02-06T22:32:06.000Z | 2021-05-25T15:33:18.000Z | docs/AHPQ_for_GloVe.ipynb | AxelvL/AHPQ.jl | 66efa2b1febf83ad852836bf22603fd488d0f45e | [
"MIT"
] | null | null | null | docs/AHPQ_for_GloVe.ipynb | AxelvL/AHPQ.jl | 66efa2b1febf83ad852836bf22603fd488d0f45e | [
"MIT"
] | null | null | null | 346.74463 | 68,704 | 0.696784 | [
[
[
"# Installation cell\n%%shell\nif ! command -v julia 3>&1 > /dev/null\nthen\n wget -q 'https://julialang-s3.julialang.org/bin/linux/x64/1.5/julia-1.5.3-linux-x86_64.tar.gz' \\\n -O /tmp/julia.tar.gz\n tar -x -f /tmp/julia.tar.gz -C /usr/local --strip-components 1\n rm /tmp/julia.tar.gz\nfi\njulia -e 'using Pkg; pkg\"add IJulia; precompile;\"'\necho 'Done'",
"_____no_output_____"
],
[
"using Pkg\r\nPkg.add(PackageSpec(url=\"https://github.com/AxelvL/AHPQ.jl\", rev=\"master\"))\r\nusing AHPQ",
"_____no_output_____"
],
[
"Pkg.add(\"HTTP\")\r\nusing HTTP\r\nPkg.add(\"HDF5\")\r\nusing HDF5\r\nPkg.add(\"Statistics\")\r\nusing Statistics: norm\r\nPkg.add(\"Plots\")\r\nusing Plots",
"_____no_output_____"
]
],
[
[
"## Downloading GloVe",
"_____no_output_____"
]
],
[
[
"dir = HTTP.download(\"http://ann-benchmarks.com/glove-100-angular.hdf5\", update_period=60)\r\n\r\ndata = h5open(dir, \"r\") do file\r\n read(file);\r\nend",
"_____no_output_____"
],
[
"train = data[\"train\"]\r\nqueries = data[\"test\"]\r\ngroundtruth = data[\"neighbors\"].+1; #zero-indexed\r\ntrain = train ./ mapslices(norm, train, dims=1);\r\ntrain_backup = deepcopy(train);",
"_____no_output_____"
]
],
[
[
"### 100 Bits Graph",
"_____no_output_____"
]
],
[
[
"n_codebooks = 25\r\nn_centers = 16\r\nn_neighbors = 100\r\nstopcond=1e-1;",
"_____no_output_____"
]
],
[
[
"#### L2 loss",
"_____no_output_____"
]
],
[
[
"ahpq = builder(train; \r\n T=0,\r\n n_codebooks=n_codebooks, \r\n n_centers=n_centers,\r\n verbose=true,\r\n stopcond=stopcond,\r\n a=0,\r\n inverted_index=true,\r\n multithreading=false,\r\n training_points=25_000,\r\n increment_steps=3);",
"_____no_output_____"
],
[
"yhat_L2_100bits = MIPS(ahpq, queries, n_neighbors)\r\nL2_scores_1 = get1atNscores(yhat_L2_100bits, groundtruth, n_neighbors)",
"_____no_output_____"
]
],
[
[
"#### Anisotropic Loss",
"_____no_output_____"
]
],
[
[
"train = deepcopy(train_backup)\r\nahpq = builder(train; \r\n T=0.2,\r\n n_codebooks=n_codebooks, \r\n n_centers=n_centers,\r\n verbose=true,\r\n stopcond=stopcond,\r\n a=0,\r\n inverted_index=false,\r\n multithreading=true,\r\n training_points=250_000,\r\n increment_steps=3);",
"_____no_output_____"
],
[
"yhat_anisotropic_100bits = MIPS(ahpq, queries, n_neighbors)\r\nanisotropic_scores_1 = get1atNscores(yhat_anisotropic_100bits, groundtruth, n_neighbors);",
"_____no_output_____"
]
],
[
[
"#### Comparison",
"_____no_output_____"
]
],
[
[
"plot(1:100, anisotropic_scores_1, label=\"Anisotropic Loss\")\r\nplot!(1:100, L2_scores_1, label=\"Reconstruction Loss\")\r\nplot!(title=\"Recall of Glove-1.2M - 100 bits\", \r\n xlabel=\"N\",\r\n ylabel=\"Recall 1@N\",\r\n legend=:bottomright,\r\n xticks=0:20:100,\r\n yticks=0.1:0.1:0.9)",
"_____no_output_____"
]
],
[
[
"### 200 Bits Graph",
"_____no_output_____"
]
],
[
[
"n_codebooks = 50\r\nn_centers = 16\r\nn_neighbors = 100\r\nstopcond=1e-1;",
"_____no_output_____"
]
],
[
[
"#### L2 loss",
"_____no_output_____"
]
],
[
[
"train = deepcopy(train_backup)\r\nahpq = builder(train; \r\n T=0,\r\n n_codebooks=n_codebooks, \r\n n_centers=n_centers,\r\n verbose=true,\r\n stopcond=stopcond,\r\n a=0,\r\n inverted_index=true,\r\n multithreading=false,\r\n training_points=250_000,\r\n increment_steps=3);",
"_____no_output_____"
],
[
"yhat_L2_200bits = MIPS(ahpq, queries, n_neighbors)\r\nL2_scores_2 = get1atNscores(yhat_L2_200bits, groundtruth, n_neighbors);",
"_____no_output_____"
]
],
[
[
"#### Anisotropic Loss",
"_____no_output_____"
]
],
[
[
"train = deepcopy(train_backup)\r\nahpq = builder(train; \r\n T=0.2,\r\n n_codebooks=n_codebooks, \r\n n_centers=n_centers,\r\n verbose=true,\r\n stopcond=stopcond,\r\n a=0,\r\n inverted_index=true,\r\n multithreading=false,\r\n training_points=250_000,\r\n increment_steps=3);",
"_____no_output_____"
],
[
"yhat_anisotropic_200bits = MIPS(ahpq, queries, n_neighbors)\r\nanisotropic_scores_2 = get1atNscores(yhat_anisotropic_200bits, groundtruth, n_neighbors)",
"_____no_output_____"
]
],
[
[
"#### Comparison",
"_____no_output_____"
]
],
[
[
"plot(1:100, anisotropic_scores_2, label=\"Anisotropic Loss\")\r\nplot!(1:100, L2_scores_2, label=\"Reconstruction Loss\")\r\nplot!(title=\"Recall of Glove-1.2M - 200 bits\", \r\n xlabel=\"N\",\r\n ylabel=\"Recall 1@N\",\r\n legend=:bottomright,\r\n xticks=0:20:100,\r\n yticks=0.1:0.1:0.9)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b5265268e1072b9de13b685df03c013c2e6fa7 | 552 | ipynb | Jupyter Notebook | 6-alternatives.ipynb | vaa-msu/ranking-tests | b284d6668c57930bc8001e40407e4d7c6fde69b8 | [
"MIT"
] | null | null | null | 6-alternatives.ipynb | vaa-msu/ranking-tests | b284d6668c57930bc8001e40407e4d7c6fde69b8 | [
"MIT"
] | 1 | 2021-01-30T13:28:03.000Z | 2021-01-30T13:28:03.000Z | 6-alternatives.ipynb | vaa-msu/ranking-tests | b284d6668c57930bc8001e40407e4d7c6fde69b8 | [
"MIT"
] | null | null | null | 16.235294 | 51 | 0.512681 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7b52c8e4090cfe58f7122d659f5004ddab03f5b | 792,119 | ipynb | Jupyter Notebook | es11/11.2.ipynb | lorycontixd/PNS | 23f61833ae75a2425d05ca9757ef471aec930654 | [
"MIT"
] | null | null | null | es11/11.2.ipynb | lorycontixd/PNS | 23f61833ae75a2425d05ca9757ef471aec930654 | [
"MIT"
] | null | null | null | es11/11.2.ipynb | lorycontixd/PNS | 23f61833ae75a2425d05ca9757ef471aec930654 | [
"MIT"
] | null | null | null | 440.555617 | 30,584 | 0.930076 | [
[
[
"# <span style=\"color:orange\"> Exercise 11.2 </span>\n## <span style=\"color:green\"> Task </span>\n\nTry to extend the model to obtain a reasonable fit of the following polynomial of order 3:\n\n$$\nf(x)=4-3x-2x^2+3x^3\n$$\nfor $x \\in [-1,1]$.\n\nIn order to make practice with NN, explore reasonable different choices for:\n\n- the number of layers\n- the number of neurons in each layer\n- the activation function\n- the optimizer\n- the loss function\n \nMake graphs comparing fits for different NNs.\nCheck your NN models by seeing how well your fits predict newly generated test data (including on data outside the range you fit. How well do your NN do on points in the range of $x$ where you trained the model? How about points outside the original training data set? \nSummarize what you have learned about the relationship between model complexity (number of parameters), goodness of fit on training data, and the ability to predict well.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport math\nfrom tensorflow import keras\nfrom matplotlib import pyplot as plt\n\n#function = 3x^3 - 2x^2 - 3x + 4\ndef polynomial(x_array,a,b,c,d):\n x_array = np.asfarray(x_array)\n return a*x_array**3 + b*x_array**2 + c*x_array + d\n\n#\nnp.random.seed(0)\nx_train = np.random.uniform(-1, 1, 1000) # dataset for training\nx_valid = np.random.uniform(-1, 1, 100) #dataset for testing/validation\nx_valid.sort()\n\na=3\nb=-2\nc=-3\nd=4\ny_target = polynomial(x_valid,a,b,c,d)\n\nsigma = 0.0 # noise standard deviation\ny_train = np.random.normal(polynomial(x_train,a,b,c,d), sigma) # actual measures from which we want to guess regression parameters\ny_valid = np.random.normal(polynomial(x_valid,a,b,c,d), sigma)\n\nplt.plot(x_valid, y_target)\nplt.scatter(x_valid, y_valid, color='r')\nplt.grid(True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Using model from Ex11.1\nThis section utilizes the linear model used in Exercise 11.1, just to prove that it should not function and another model must be defined for polynomials.",
"_____no_output_____"
]
],
[
[
"# Using model from ex11.1\n\n# Load previous model for extension\noldmodel = keras.models.load_model('models/model_ex1')\noldmodel.summary()\nprint()\n\nhistory = oldmodel.fit(x=x_train, y=y_train, \n batch_size=32, epochs=100,\n shuffle=True, #\n validation_data=(x_valid, y_valid),\n verbose=0\n)\n\nscore = oldmodel.evaluate(x_valid, y_valid, batch_size=32, verbose=0)\n\n# print performance\nprint()\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\nprint()\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Model loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Train', 'Test'], loc='best')\nplt.show()\n\nx_predicted = np.random.uniform(-1, 1, 100)\ny_predicted = oldmodel.predict(x_predicted)\nplt.scatter(x_predicted, y_predicted,color='r')\nplt.plot(x_valid, y_target)\nplt.grid(True)\nplt.show()",
"Model: \"sequential_184\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_184 (Dense) (None, 1) 2 \n=================================================================\nTotal params: 2\nTrainable params: 2\nNon-trainable params: 0\n_________________________________________________________________\n\n\nTest loss: 0.8131828904151917\nTest accuracy: 0.8131828904151917\n\n"
]
],
[
[
"### New model\n\n",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras import models\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import optimizers\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras import callbacks\nfrom tensorflow.keras import losses\nfrom tensorflow.keras import activations\nfrom tensorflow.keras.utils import get_custom_objects, plot_model \n\ndef run(layers:list,optimizer='sgd',loss='mse',batch_size=32,epochs=60,show_summary=True,outputs=False,testing=True,graph=True,logger=True):\n global x_train, y_train, x_valid, y_valid, y_target\n model = models.Sequential(layers)\n model.compile(optimizer=optimizer, loss=loss, metrics=['mse'])\n optname = optimizer if isinstance(optimizer,str) else optimizer.__class__.__name__\n lossname = loss if isinstance(loss,str) else loss.__class__.__name__\n if logger:\n print(f\"************** {len(layers)} Layers\\t{layers[0].output_shape[1]} input neurons\\t optimizer={optname}\\tloss={lossname}\")\n if show_summary:\n model.summary()\n \n history = model.fit(x=x_train, y=y_train, \n batch_size=batch_size, epochs=epochs,\n shuffle=True, # a good idea is to shuffle input before at each epoch\n validation_data=(x_valid, y_valid),\n verbose=0\n )\n\n score = model.evaluate(x_valid, y_valid, batch_size=batch_size, verbose=0)\n if outputs:\n print()\n print(\"Validation performance\")\n print('Test loss:', score[0])\n print('Test accuracy:', score[1])\n\n score = model.evaluate(x_valid, y_target, batch_size=batch_size, verbose=0)\n if outputs:\n print()\n print(\"Testing performance\")\n print('Test loss:', score[0])\n print('Test accuracy:', score[1])\n \n if testing:\n plt.plot(history.history['loss'])\n plt.plot(history.history['val_loss'])\n plt.title('Model loss')\n plt.ylabel('Loss')\n plt.xlabel('Epoch')\n plt.legend(['Train', 'Test'], loc='best')\n plt.show()\n\n if graph:\n fig=plt.figure(figsize=(10, 5))\n x_predicted = np.random.uniform(-1.5, 1.5, 100)\n x_predicted.sort()\n y_predicted = model.predict(x_predicted)\n plt.scatter(x_predicted, y_predicted,color='r')\n plt.plot(x_predicted, polynomial(x_predicted,a,b,c,d))\n plt.title(f\"{len(layers)} layers with {layers[0].output_shape[1]} input neurons - opt.= {optname}, loss = {lossname} \")\n plt.grid(True)\n plt.tight_layout()\n plt.show()\n \n if logger:\n print(\"\\n\\n\")",
"_____no_output_____"
]
],
[
[
"## <span style=\"color:green\">Dependance on layers & neurons </span>\nUsing the code above, various NNs have been trained with different layers and neurons for each layer, to study the accuracy in approximating the polynomial in $[\\frac{-3}{2},\\frac{3}{2}]$\n\n### Different Neural Networks\n",
"_____no_output_____"
]
],
[
[
"opt = optimizers.SGD(learning_rate=0.1)\nrun([layers.Dense(500,input_shape=(1,)),layers.Dense(1,activation=\"relu\")],optimizer=opt,testing=False)\nrun([layers.Dense(1,input_shape=(1,)),layers.Dense(30,activation=\"relu\"),layers.Dense(1,activation=\"relu\")],optimizer=opt,testing=False)\nrun([layers.Dense(500,input_shape=(1,)),layers.Dense(100,activation=\"relu\"),layers.Dense(1,activation=\"relu\")],optimizer=opt,testing=False,outputs=True)\nrun([layers.Dense(500,input_shape=(1,)),layers.Dense(250,activation=\"relu\"),layers.Dense(100,activation=\"relu\"),layers.Dense(10,activation=\"relu\"),layers.Dense(1,activation=\"relu\")],optimizer=opt,testing=False)\nrun([layers.Dense(1000,input_shape=(1,)),layers.Dense(500,activation=\"relu\"),layers.Dense(250,activation=\"relu\"),layers.Dense(175,activation=\"relu\"),layers.Dense(100,activation=\"relu\"),layers.Dense(50,activation=\"relu\"),layers.Dense(1,activation=\"relu\")],testing=False)\n",
"************** 2 Layers\t500 input neurons\t optimizer=SGD\tloss=mse\nModel: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_3 (Dense) (None, 500) 1000 \n_________________________________________________________________\ndense_4 (Dense) (None, 1) 501 \n=================================================================\nTotal params: 1,501\nTrainable params: 1,501\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"### Results\nDiscuss results on layers/neurons\n\n<br>",
"_____no_output_____"
],
[
"## <span style=\"color:green\"> <strong> <u> Dependance on optimizers </u> </strong> </span>\nThe following section defines different optimizer functions for a Neural Network with 3 layers and 500 input neurons.\n\nWhich optimizers to chose<br>\nExpectations<br>\n\n### <u> <strong> Adam Optimizer </strong> </u>\nAdam, short for adaptive moment estimation, is an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure to update network weights iteratively based in training data.\nBetween the numerous advantages brought by this algorithm, the most important for this case are:\n- Computationally efficient\n- Straightforward to implement\n- Appropriate for problems with very noisy/or sparse gradients\n\n\n#### Different optimizers",
"_____no_output_____"
]
],
[
[
"opt_layers = [\n layers.Dense(1,input_shape=(1,)),\n layers.Dense(50,activation=\"relu\"),\n layers.Dense(1,activation=\"selu\")\n]\nopts = [\n optimizers.SGD(learning_rate=1e-1),\n optimizers.Adam(learning_rate=1e-1),\n optimizers.RMSprop(learning_rate=1e-1),\n optimizers.Adagrad(learning_rate=1e-1)\n]\nfor oo in opts:\n run(opt_layers,optimizer=oo,testing=False,batch_size=64,epochs=150,outputs=True)\nprint()",
"************** 3 Layers\t1 input neurons\t optimizer=SGD\tloss=mse\nModel: \"sequential_6\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_23 (Dense) (None, 1) 2 \n_________________________________________________________________\ndense_24 (Dense) (None, 50) 100 \n_________________________________________________________________\ndense_25 (Dense) (None, 1) 51 \n=================================================================\nTotal params: 153\nTrainable params: 153\nNon-trainable params: 0\n_________________________________________________________________\n\nValidation performance\nTest loss: 0.004281164612621069\nTest accuracy: 0.004281164612621069\n\nTesting performance\nTest loss: 0.004281164612621069\nTest accuracy: 0.004281164612621069\n"
]
],
[
[
"#### Learning rate\nAs with all optimization algorithms, it is defined by numerous parameters, including the learning rate, which determines the step size at each iteration while moving toward a minimum of a loss function.\nIn the following section, I will study the impact of the learning rate parameter on the learning efficiency of the model.\nAt first, I chose to explore three different values for the learning rate:<br>\n<ul>\n <li> lr = 0.001 (default value for Adam optimizer) </li>\n <li> lr = 0.1</li>\n <li> lr = 0.00001</li>\n</ul>\nin order to choose values that could be too high, too low or decent.\nThen I will use a learning rate schedule called \"Step decay\" which systematically drops the learning rate at specific times during training, formally defined by $$LR = LR_0 * \\text{droprate}^{\\text{floor}(\\text{epoch} / \\text{epochs_drop})}$$",
"_____no_output_____"
]
],
[
[
"#--- LR = 0.001\n\nlr = 0.001\nadam1 = optimizers.Adam(\n learning_rate=lr,\n beta_1=0.9,\n beta_2=0.999,\n epsilon=1e-07,\n amsgrad=False,\n name=\"Adam2\"\n)\nprint(\"---> Learning rate: \",lr)\n\nrun(\n [\n layers.Dense(500,input_shape=(1,)),\n layers.Dense(100, activation=\"relu\"),\n layers.Dense(1)\n ],\n optimizer=adam1\n)",
"---> Learning rate: 0.001\n************** 3 Layers\t500 input neurons\t optimizer=Adam\tloss=mse\nModel: \"sequential_10\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_26 (Dense) (None, 500) 1000 \n_________________________________________________________________\ndense_27 (Dense) (None, 100) 50100 \n_________________________________________________________________\ndense_28 (Dense) (None, 1) 101 \n=================================================================\nTotal params: 51,201\nTrainable params: 51,201\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"#--- LR = 0.1\n\nlr = 0.1\nadam1 = optimizers.Adam(\n learning_rate=lr,\n beta_1=0.9,\n beta_2=0.999,\n epsilon=1e-07,\n amsgrad=False,\n name=\"Adam2\"\n)\nprint(\"---> Learning rate: \",lr)\nrun(\n [\n layers.Dense(500,input_shape=(1,)),\n layers.Dense(100, activation=\"relu\"),\n layers.Dense(1)\n ],\n optimizer=adam1\n)",
"---> Learning rate: 0.1\n************** 3 Layers\t500 input neurons\t optimizer=Adam\tloss=mse\nModel: \"sequential_11\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_29 (Dense) (None, 500) 1000 \n_________________________________________________________________\ndense_30 (Dense) (None, 100) 50100 \n_________________________________________________________________\ndense_31 (Dense) (None, 1) 101 \n=================================================================\nTotal params: 51,201\nTrainable params: 51,201\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"lr = 0.00001\nadam1 = optimizers.Adam(\n learning_rate=lr,\n beta_1=0.9,\n beta_2=0.999,\n epsilon=1e-07,\n amsgrad=False,\n name=\"Adam2\"\n)\nprint(\"---> Learning rate: \",lr)\nrun(\n [\n layers.Dense(500,input_shape=(1,)),\n layers.Dense(100, activation=\"relu\"),\n layers.Dense(1)\n ],\n optimizer=adam1\n)",
"---> Learning rate: 1e-05\n************** 3 Layers\t500 input neurons\t optimizer=Adam\tloss=mse\nModel: \"sequential_12\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_32 (Dense) (None, 500) 1000 \n_________________________________________________________________\ndense_33 (Dense) (None, 100) 50100 \n_________________________________________________________________\ndense_34 (Dense) (None, 1) 101 \n=================================================================\nTotal params: 51,201\nTrainable params: 51,201\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"### Step decay\nimport math\ninitial_learning_rate = 0.01\ndef lr_step_decay(epoch, lr):\n drop_rate = 0.55\n epochs_drop = 10.5\n return initial_learning_rate * math.pow(drop_rate, math.floor(epoch/epochs_drop))\n\n\nmodellayers = [\n layers.Dense(500, input_shape=(1,)),\n layers.Dense(100, activation=\"relu\"),\n layers.Dense(1)\n]\nstepmodel = models.Sequential(modellayers)\nstepmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=['mse'])\nstepmodel.summary()\n\n# Fit the model to the training data\nhistory_step_decay = stepmodel.fit(\n x_train, \n y_train, \n epochs=100, \n validation_split=0.3,\n batch_size=64,\n callbacks=[callbacks.LearningRateScheduler(lr_step_decay, verbose=0)],\n verbose=0\n)\n\nscore = stepmodel.evaluate(x_valid, y_valid, batch_size=64, verbose=0)\nprint()\nprint(\"Validation performance\")\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n\nscore = stepmodel.evaluate(x_valid, y_target, batch_size=64, verbose=0)\nprint()\nprint(\"Testing performance\")\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Model loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Train', 'Test'], loc='best')\nplt.show()\n\nfig=plt.figure(figsize=(10, 5))\nx_predicted = np.random.uniform(-1.5, 1.5, 100)\nx_predicted.sort()\ny_predicted = stepmodel.predict(x_predicted)\nplt.scatter(x_predicted, y_predicted,color='r')\nplt.plot(x_predicted, polynomial(x_predicted,a,b,c,d))\nplt.title(f\"{len(modellayers)} layers with 500 input neurons, activ. fun.= 'relu', opt.= adam, loss = mse \")\nplt.grid(True)\nplt.tight_layout()\nplt.show()\nprint(\"\\n Final learning rate: \",K.eval(stepmodel.optimizer.lr))",
"Model: \"sequential_13\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_35 (Dense) (None, 500) 1000 \n_________________________________________________________________\ndense_36 (Dense) (None, 100) 50100 \n_________________________________________________________________\ndense_37 (Dense) (None, 1) 101 \n=================================================================\nTotal params: 51,201\nTrainable params: 51,201\nNon-trainable params: 0\n_________________________________________________________________\n\nValidation performance\nTest loss: 0.012978918850421906\nTest accuracy: 0.012978918850421906\n\nTesting performance\nTest loss: 0.012978918850421906\nTest accuracy: 0.012978918850421906\n"
]
],
[
[
"The choice of the value for learning rate can impact two things:\n<ol>\n <li>How fast the algorithm learns</li>\n <li>Whether the cost function is minimized or not</li>\n</ol>\nFor an optimal value of the learning rate, the cost function value is minimized in a few iterations. If your learning rate is too low, training will progress very slowly as the fine-tunings to the weights in the network are small, and the number of iterations/epochs required to minimize the cost function is high.<br>\nIf the learning rate is set too high, the cost function could saturate at a value higher than the minimum value, causing undesirable divergent behavior in the loss function.<br>\n",
"_____no_output_____"
],
[
"## <span style=\"color:green\">Dependance on loss function </span>\nIn this section, I will try to explore the dependance of the model on the type of loss function.\nFor this purpose, I will keep all the other parameters as fixed as possible to highlight as much as possible the quested dependancy.<br>\nThe parameters are:\n- A neural network with 3 layers, respectively with 500, 100 and 1 neuron, all following the ReLU activation function\n- 100 epochs\n- A batch size of 64\n- SGD optimizer\n\nThe chosen loss functions are:\n- Mean squared error (Regression Loss)\nThe average squared difference between the estimated values and the actual value: $\\text{MSE}=\\frac{1}{N}\\sum_{i=1}^N(Y_i - <Y_i>)^2$\n- Mean absolute error (Regression Loss)\nThe measure of errors between paired observations expressing the same phenomenon, including observed versus predicted.\n$\\text{MAE}=\\frac{1}{N}\\sum_{i=1}^N |Y_i - X_i|$\n- Cross-entropy (Probabilistic Loss)\nIt measures the performance of a classification model whose output is a probability value between 0 and 1\n- Poisson",
"_____no_output_____"
]
],
[
[
"loss_functions = [\n losses.MeanSquaredError(),\n losses.MeanAbsoluteError(reduction=\"auto\", name=\"mean_absolute_error\"),\n losses.CategoricalCrossentropy(reduction=\"auto\",name=\"categorical_crossentropy\"),\n losses.Poisson(reduction=\"auto\", name=\"poisson\")\n]\nopt = optimizers.SGD(learning_rate=0.01)\nll = [\n layers.Dense(500,input_shape=(1,)),\n layers.Dense(250, activation=\"relu\"),\n layers.Dense(1)\n]\n\nfor func in loss_functions:\n run(ll,optimizer=opt,loss=func,testing=False,outputs=True,batch_size=64,epochs=100)",
"************** 3 Layers\t500 input neurons\t optimizer=SGD\tloss=MeanSquaredError\nModel: \"sequential_14\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_38 (Dense) (None, 500) 1000 \n_________________________________________________________________\ndense_39 (Dense) (None, 250) 125250 \n_________________________________________________________________\ndense_40 (Dense) (None, 1) 251 \n=================================================================\nTotal params: 126,501\nTrainable params: 126,501\nNon-trainable params: 0\n_________________________________________________________________\n\nValidation performance\nTest loss: 0.040039416402578354\nTest accuracy: 0.040039416402578354\n\nTesting performance\nTest loss: 0.040039416402578354\nTest accuracy: 0.040039416402578354\n"
]
],
[
[
"### Results\n...",
"_____no_output_____"
],
[
"## <span style=\"color:green\">Dependance on activation function </span>\nLastly, I reported a study on the dependance of the model on the activation function of the neurons. The structure of the Neural Network is defined by 4 layers, respectively with 500, 250, 100 and 1 neurons.<br>\nThe chosen activation functions are:\n...\n\n",
"_____no_output_____"
]
],
[
[
"lossfunc = losses.MeanSquaredError()\nopt = optimizers.SGD(learning_rate=0.01)\nall_layers = [[\n layers.Dense(500, input_shape=(1,)),\n layers.Dense(250, activation=\"relu\"),\n layers.Dense(100, activation=\"relu\"),\n layers.Dense(1, activation=\"relu\")\n],[\n layers.Dense(500, input_shape=(1,)),\n layers.Dense(250, activation=\"relu\"),\n layers.Dense(100, activation=\"relu\"),\n layers.Dense(1, activation=\"sigmoid\")\n],[\n layers.Dense(500, input_shape=(1,)),\n layers.Dense(250, activation=\"selu\"),\n layers.Dense(100, activation=\"selu\"),\n layers.Dense(1, activation=\"softmax\")\n]\n]\n\nfor l in all_layers:\n run(l,optimizer=opt,loss=lossfunc,testing=False,outputs=True,batch_size=64,epochs=100)",
"************** 4 Layers\t500 input neurons\t optimizer=SGD\tloss=MeanSquaredError\nModel: \"sequential_18\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_41 (Dense) (None, 500) 1000 \n_________________________________________________________________\ndense_42 (Dense) (None, 250) 125250 \n_________________________________________________________________\ndense_43 (Dense) (None, 100) 25100 \n_________________________________________________________________\ndense_44 (Dense) (None, 1) 101 \n=================================================================\nTotal params: 151,451\nTrainable params: 151,451\nNon-trainable params: 0\n_________________________________________________________________\n\nValidation performance\nTest loss: 0.009869810193777084\nTest accuracy: 0.009869810193777084\n\nTesting performance\nTest loss: 0.009869810193777084\nTest accuracy: 0.009869810193777084\n"
],
[
"from IPython.display import clear_output\n\nclass PlotCurrentEstimate(callbacks.Callback):\n def __init__(self, x_valid, y_valid):\n \"\"\"Keras Callback which plot current model estimate against reference target\"\"\"\n \n # convert numpy arrays into lists for plotting purposes\n self.x_valid = list(x_valid[:])\n self.y_valid = list(y_valid[:])\n self.iter=0\n\n def on_epoch_end(self, epoch, logs={}):\n \n temp = self.model.predict(self.x_valid, batch_size=None, verbose=False, steps=None)\n self.y_curr = list(temp[:]) # convert numpy array into list\n \n self.iter+=1\n if self.iter%10 == 0:\n clear_output(wait=True) \n self.eplot = plt.subplot(1,1,1)\n self.eplot.clear() \n self.eplot.scatter(self.x_valid, self.y_curr, color=\"blue\", s=4, marker=\"o\", label=\"estimate\")\n self.eplot.scatter(self.x_valid, self.y_valid, color=\"red\", s=4, marker=\"x\", label=\"valid\")\n self.eplot.legend()\n\n plt.show()",
"_____no_output_____"
],
[
"np.random.seed(0)\nfinalx_train = np.random.uniform(-1, 1, 10000) # dataset for training\nfinalx_valid = np.random.uniform(-1, 1, 1000) #dataset for testing/validation\nfinalx_valid.sort()\n\nfinaly_target = polynomial(finalx_valid,a,b,c,d)\n\nsigma = 0.0 # noise standard deviation\nfinaly_train = np.random.normal(polynomial(finalx_train,a,b,c,d), sigma) # actual measures from which we want to guess regression parameters\nfinaly_valid = np.random.normal(polynomial(finalx_valid,a,b,c,d), sigma)\n\nfinalmodel = models.Sequential()\nfinalmodel.add(layers.Dense(units=1, input_dim=1))\nfinalmodel.add(layers.Activation('relu'))\nfinalmodel.add(layers.Dense(units=40))\nfinalmodel.add(layers.Activation('relu'))\nfinalmodel.add(layers.Dense(units=1))\nfinalmodel.compile(loss='mean_squared_error',optimizer='adam', metrics=['mse'])\nfinalmodel.summary()\n\nhistory = finalmodel.fit(x=finalx_train, y=finaly_train, \n batch_size=64, epochs=150,\n shuffle=True, # a good idea is to shuffle input before at each epoch\n validation_data=(finalx_valid, finaly_valid),\n verbose=0\n)\n\nscore = finalmodel.evaluate(finalx_valid, finaly_valid, batch_size=64, verbose=0)\nprint()\nprint(\"Validation performance\")\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n\nscore = finalmodel.evaluate(finalx_valid, finaly_target, batch_size=64, verbose=0)\nprint()\nprint(\"Testing performance\")\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n\nplt.plot(history.history['loss'][20:])\nplt.plot(history.history['val_loss'][20:])\nplt.title('Model loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Train', 'Test'], loc='best')\nplt.show()\n\nfig=plt.figure(figsize=(10, 5))\nfinalx_predicted = np.random.uniform(-1.5, 1.5, 1000)\nfinalx_predicted.sort()\nfinaly_predicted = finalmodel.predict(finalx_predicted)\nplt.scatter(finalx_predicted, finaly_predicted,color='r')\nplt.plot(finalx_predicted, polynomial(finalx_predicted,a,b,c,d))\nplt.title(f\"{len(finalmodel.layers)} layers with {finalmodel.layers[0].output_shape[1]} input neurons - opt.= {opt.__class__.__name__}, loss = {lossfunc.__class__.__name__} \")\nplt.grid(True)\nplt.tight_layout()\nplt.show()\n\nfinalmodel.save(\"models/model_ex2\")",
"Model: \"sequential_22\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_56 (Dense) (None, 1) 2 \n_________________________________________________________________\nactivation_4 (Activation) (None, 1) 0 \n_________________________________________________________________\ndense_57 (Dense) (None, 40) 80 \n_________________________________________________________________\nactivation_5 (Activation) (None, 40) 0 \n_________________________________________________________________\ndense_58 (Dense) (None, 1) 41 \n=================================================================\nTotal params: 123\nTrainable params: 123\nNon-trainable params: 0\n_________________________________________________________________\n\nValidation performance\nTest loss: 0.022854315117001534\nTest accuracy: 0.022854315117001534\n\nTesting performance\nTest loss: 0.022854315117001534\nTest accuracy: 0.022854315117001534\n"
],
[
"plot_estimate = PlotCurrentEstimate(finalx_valid, finaly_valid)\n\nearlystop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',\n min_delta=0, patience=100, mode='auto')\n\nmodel.fit(finalx_valid, finaly_valid, batch_size=32, epochs=150,\n validation_data=(finalx_valid, finalx_valid),\n callbacks=[ plot_estimate, earlystop]\n )\n\nmodel.get_weights()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7b534992408ea8d7f51084187c34d33ab08190f | 8,822 | ipynb | Jupyter Notebook | Expressions_and_Operations.ipynb | khloemaritonigloria08/CPEN-21A-ECE-2-1 | 0bcdaa40d68700a77fb0b73a872a04b41bbf4a9d | [
"Apache-2.0"
] | null | null | null | Expressions_and_Operations.ipynb | khloemaritonigloria08/CPEN-21A-ECE-2-1 | 0bcdaa40d68700a77fb0b73a872a04b41bbf4a9d | [
"Apache-2.0"
] | null | null | null | Expressions_and_Operations.ipynb | khloemaritonigloria08/CPEN-21A-ECE-2-1 | 0bcdaa40d68700a77fb0b73a872a04b41bbf4a9d | [
"Apache-2.0"
] | null | null | null | 21.836634 | 261 | 0.386534 | [
[
[
"<a href=\"https://colab.research.google.com/github/khloemaritonigloria08/CPEN-21A-ECE-2-1/blob/main/Expressions_and_Operations.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"###Boolean Operator",
"_____no_output_____"
]
],
[
[
"x=1\ny=2\n\nprint(x>y)\nprint(10>11)\nprint(10==10)\nprint(10!=11)",
"False\nFalse\nTrue\nTrue\n"
],
[
"#using bool()function\n\nprint(bool(\"Hello\"))\nprint(bool(15))\nprint(bool(1))\n\nprint(bool(True))\nprint(bool(False))\nprint(bool(None))\nprint(bool(0))\nprint(bool([]))",
"True\nTrue\nTrue\nTrue\nFalse\nFalse\nFalse\nFalse\n"
]
],
[
[
"##Functions can return Boolean",
"_____no_output_____"
]
],
[
[
"def myFunction():return False\nprint(myFunction())",
"False\n"
],
[
"def yourFunction():return False\nif yourFunction():\n print(\"Yes!\")\nelse:\n print(\"No\")",
"No\n"
]
],
[
[
"### You Try!",
"_____no_output_____"
]
],
[
[
"a=6\nb=7\nprint(a==b)\nprint(a!=a)",
"False\nFalse\n"
]
],
[
[
"##Arithmetic Operators",
"_____no_output_____"
]
],
[
[
"print(10+5)\nprint(10-5)\nprint(10*5)\nprint(10/5)\nprint(10%5) #modulo division, remainder\nprint(10//5) #floor division\nprint(10//3) #floor division\nprint(10%3) #3x3=9+1",
"15\n5\n50\n2.0\n0\n2\n3\n1\n"
]
],
[
[
"##Bitwise Operators",
"_____no_output_____"
]
],
[
[
"a=60 #0011 1100\nb=13 #0000 1101\n\nprint(a&b)\nprint(a|b)\nprint(a^b)\nprint(~a)\nprint(a<<1) #0111 1000\nprint(a<<2) #1111 0000\nprint(b>>1) #1 0000 0110\nprint(b>>2) #0000 0110 carry flag bit = 01",
"12\n61\n49\n-61\n120\n240\n6\n3\n"
]
],
[
[
"##Python Assignment Operators",
"_____no_output_____"
]
],
[
[
"a+=3 #Same As a=a+3\n #Same As a=60+3, a=63\nprint(a)",
"63\n"
]
],
[
[
"###Logical Operators",
"_____no_output_____"
]
],
[
[
"#and Logical Operator\na=True\nb=False\n\nprint(a and b)\nprint(not(a and b))\nprint(a or b)\nprint(not(a or b))",
"False\nTrue\nTrue\nFalse\n"
],
[
"print(a is b)\nprint(a is not b)",
"False\nTrue\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7b5357627258b800663a3baf05b5e33e99d6d8d | 28,440 | ipynb | Jupyter Notebook | TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb | Quananhle/Python | 96b5901159ca18d5908d6dfb46a94080f8ca2ec6 | [
"Apache-2.0"
] | 1 | 2021-06-10T22:08:21.000Z | 2021-06-10T22:08:21.000Z | TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb | Quananhle/Python | 96b5901159ca18d5908d6dfb46a94080f8ca2ec6 | [
"Apache-2.0"
] | null | null | null | TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb | Quananhle/Python | 96b5901159ca18d5908d6dfb46a94080f8ca2ec6 | [
"Apache-2.0"
] | 1 | 2021-05-13T20:48:43.000Z | 2021-05-13T20:48:43.000Z | 40.283286 | 712 | 0.57602 | [
[
[
"# Searching For Simple Patterns\n\nBeing able to match letters and metacharacters is the simplest task that regular expressions can do. In this section we will see how we can use regular expressions to perform more complex pattern matching. We can form any pattern we want by using the metacharacters mentioned in the previous lesson.\n\nThe first metacharacter we are going to look at is the backslash (`\\`). We already saw that the backslash can be used to escape all the metacharacters, so that you can search for them directly. However, the backslash can also be followed by various characters to signal various special sequences. Here is a list of the special sequences we are going to look at in this notebook:\n\n* `\\d` - Matches any decimal digit; this is equivalent to the set [0-9]\n\n\n* `\\D` - Matches any non-digit character; this is equivalent to the set [^0-9]\n\n\n* `\\s` - Matches any whitespace character, this is equivalent to the set [ \\t\\n\\r\\f\\v]\n\n\n* `\\S` - Matches any non-whitespace character; this is equivalent to the set [^ \\t\\n\\r\\f\\v]\n\n\n* `\\w` - Matches any alphanumeric character and the underscore; this is equivalent to the set [a-zA-Z0-9_]\n\n\n* `\\W` - Matches any non-alphanumeric character; this is equivalent to the set [^a-zA-Z0-9_]\n\nWe can see that there is a difference between lowercase and uppercase sequences. For example, while `\\d` matches any digit, `\\D` matches everything that is **not** a digit. Similarly, while `\\s` matches any whitespace character, `\\S` matches everything that is **not** a whitespace character; and while `\\w` matches any alphanumeric character, `\\W` matches everything that is **not** an alphanumeric character.\n\nLet's start by learning how to use `\\d` to search for decimal digits.",
"_____no_output_____"
],
[
"### Matching Numbers Using `\\d`\n\nIn the code below, we will use `'\\d'` as our regular expression to find all the decimal digits in our `sample_text` string:",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = 'Alice lives in 1230 First St., Ocean City, MD 156789.'\n\n# Create a regular expression object with the regular expression '\\d'\nregex = re.compile(r'\\d')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n# Print all the matches\nfor match in matches:\n print(match)",
"<_sre.SRE_Match object; span=(15, 16), match='1'>\n<_sre.SRE_Match object; span=(16, 17), match='2'>\n<_sre.SRE_Match object; span=(17, 18), match='3'>\n<_sre.SRE_Match object; span=(18, 19), match='0'>\n<_sre.SRE_Match object; span=(46, 47), match='1'>\n<_sre.SRE_Match object; span=(47, 48), match='5'>\n<_sre.SRE_Match object; span=(48, 49), match='6'>\n<_sre.SRE_Match object; span=(49, 50), match='7'>\n<_sre.SRE_Match object; span=(50, 51), match='8'>\n<_sre.SRE_Match object; span=(51, 52), match='9'>\n"
]
],
[
[
"As we can see, all the matches found above correspond to only decimal digits between 0 and 9.\n\nConversely, if wanted to find all the characters that are **not** decimal digits, we will use `\\D` as our regular expression, as shown below:",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = 'Alice lives in 1230 First St., Ocean City, MD 156789.'\n\n# Create a regular expression object with the regular expression '\\D'\nregex = re.compile(r'\\D')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n# Print all the matches\nfor match in matches:\n print(match)",
"<_sre.SRE_Match object; span=(0, 1), match='A'>\n<_sre.SRE_Match object; span=(1, 2), match='l'>\n<_sre.SRE_Match object; span=(2, 3), match='i'>\n<_sre.SRE_Match object; span=(3, 4), match='c'>\n<_sre.SRE_Match object; span=(4, 5), match='e'>\n<_sre.SRE_Match object; span=(5, 6), match=' '>\n<_sre.SRE_Match object; span=(6, 7), match='l'>\n<_sre.SRE_Match object; span=(7, 8), match='i'>\n<_sre.SRE_Match object; span=(8, 9), match='v'>\n<_sre.SRE_Match object; span=(9, 10), match='e'>\n<_sre.SRE_Match object; span=(10, 11), match='s'>\n<_sre.SRE_Match object; span=(11, 12), match=' '>\n<_sre.SRE_Match object; span=(12, 13), match='i'>\n<_sre.SRE_Match object; span=(13, 14), match='n'>\n<_sre.SRE_Match object; span=(14, 15), match=' '>\n<_sre.SRE_Match object; span=(19, 20), match=' '>\n<_sre.SRE_Match object; span=(20, 21), match='F'>\n<_sre.SRE_Match object; span=(21, 22), match='i'>\n<_sre.SRE_Match object; span=(22, 23), match='r'>\n<_sre.SRE_Match object; span=(23, 24), match='s'>\n<_sre.SRE_Match object; span=(24, 25), match='t'>\n<_sre.SRE_Match object; span=(25, 26), match=' '>\n<_sre.SRE_Match object; span=(26, 27), match='S'>\n<_sre.SRE_Match object; span=(27, 28), match='t'>\n<_sre.SRE_Match object; span=(28, 29), match='.'>\n<_sre.SRE_Match object; span=(29, 30), match=','>\n<_sre.SRE_Match object; span=(30, 31), match=' '>\n<_sre.SRE_Match object; span=(31, 32), match='O'>\n<_sre.SRE_Match object; span=(32, 33), match='c'>\n<_sre.SRE_Match object; span=(33, 34), match='e'>\n<_sre.SRE_Match object; span=(34, 35), match='a'>\n<_sre.SRE_Match object; span=(35, 36), match='n'>\n<_sre.SRE_Match object; span=(36, 37), match=' '>\n<_sre.SRE_Match object; span=(37, 38), match='C'>\n<_sre.SRE_Match object; span=(38, 39), match='i'>\n<_sre.SRE_Match object; span=(39, 40), match='t'>\n<_sre.SRE_Match object; span=(40, 41), match='y'>\n<_sre.SRE_Match object; span=(41, 42), match=','>\n<_sre.SRE_Match object; span=(42, 43), match=' '>\n<_sre.SRE_Match object; span=(43, 44), match='M'>\n<_sre.SRE_Match object; span=(44, 45), match='D'>\n<_sre.SRE_Match object; span=(45, 46), match=' '>\n<_sre.SRE_Match object; span=(52, 53), match='.'>\n"
]
],
[
[
"We can see that none of the matches are decimal digits. We also see, that by using `\\D` we were able to match all characters, including periods (`.`) and white spaces.",
"_____no_output_____"
],
[
"# TODO: Find IP Addresses\n\nIn the cell below, our `sample_text` string contains three IP addresses. Write a single regular expression that can match any IP address and save the regular expression object in a variable called `regex`. Then use the `.finditer()` method to search the `sample_text` string for the given regular expression. Finally, write a loop to print all the `matches` found by the `.finditer()` method.\n\n**HINT :** Use the special sequence `\\d` and take advantage that all IP addresses have the same pattern.",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = 'Here are three IP address: 123.456.789.123, 999.888.777.666, 111.222.333.444'\n\n# Create a regular expression object with the regular expression\nregex = re.compile(r'\\d\\d\\d.\\d\\d\\d.\\d\\d\\d.\\d\\d\\d')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n# Print all the matches\nfor match in matches:\n print(match)",
"<_sre.SRE_Match object; span=(27, 42), match='123.456.789.123'>\n<_sre.SRE_Match object; span=(44, 59), match='999.888.777.666'>\n<_sre.SRE_Match object; span=(61, 76), match='111.222.333.444'>\n"
]
],
[
[
"If you wrote your regex correctly you should see three matches above corresponding to the three IP addresses in our `sample_text` string.",
"_____no_output_____"
],
[
"### Matching Whitespace Characters Using `\\s`\n\nIn the code below, we will use `\\s` as our regular expression to find all the whitespace characters in our `sample_text` string. For this example, we will use a string literal that spans multiple lines. To create this multi-line string, we will use triple-quotes (`'''`) both at the beginning and at the end of the multi-line string.",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = '''\n\\tAlice lives in:\\f\n1230 First St.\\r\nOcean City, MD 156789.\\v\n'''\n\n# Create a regular expression object with the regular expression '\\s'\nregex = re.compile(r'\\s')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n# Print all the matches\nfor match in matches:\n print(match)",
"<_sre.SRE_Match object; span=(0, 1), match='\\n'>\n<_sre.SRE_Match object; span=(1, 2), match='\\t'>\n<_sre.SRE_Match object; span=(7, 8), match=' '>\n<_sre.SRE_Match object; span=(13, 14), match=' '>\n<_sre.SRE_Match object; span=(17, 18), match='\\x0c'>\n<_sre.SRE_Match object; span=(18, 19), match='\\n'>\n<_sre.SRE_Match object; span=(23, 24), match=' '>\n<_sre.SRE_Match object; span=(29, 30), match=' '>\n<_sre.SRE_Match object; span=(33, 34), match='\\r'>\n<_sre.SRE_Match object; span=(34, 35), match='\\n'>\n<_sre.SRE_Match object; span=(40, 41), match=' '>\n<_sre.SRE_Match object; span=(46, 47), match=' '>\n<_sre.SRE_Match object; span=(49, 50), match=' '>\n<_sre.SRE_Match object; span=(57, 58), match='\\x0b'>\n<_sre.SRE_Match object; span=(58, 59), match='\\n'>\n"
]
],
[
[
"As we can see, all the matches found correspond to white spaces, tabs (`\\t`), newlines (`\\n`), carriage returns (`\\r`), form feeds (`\\f`), and vertical tabs (`\\v`). Notice that form feeds appear as `\\x0c` and vertical tabs as `\\x0b`. \n\nConversely, if wanted to find all the characters that are **not** whitespace characters, we will use `\\S` as our regular expression, as shown below:",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = '''\n\\tAlice lives in:\\f\n1230 First St.\\r\nOcean City, MD 156789.\\v\n'''\n\n# Create a regular expression object with the regular expression '\\S'\nregex = re.compile(r'\\S')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n# Print all the matches\nfor match in matches:\n print(match)",
"<_sre.SRE_Match object; span=(2, 3), match='A'>\n<_sre.SRE_Match object; span=(3, 4), match='l'>\n<_sre.SRE_Match object; span=(4, 5), match='i'>\n<_sre.SRE_Match object; span=(5, 6), match='c'>\n<_sre.SRE_Match object; span=(6, 7), match='e'>\n<_sre.SRE_Match object; span=(8, 9), match='l'>\n<_sre.SRE_Match object; span=(9, 10), match='i'>\n<_sre.SRE_Match object; span=(10, 11), match='v'>\n<_sre.SRE_Match object; span=(11, 12), match='e'>\n<_sre.SRE_Match object; span=(12, 13), match='s'>\n<_sre.SRE_Match object; span=(14, 15), match='i'>\n<_sre.SRE_Match object; span=(15, 16), match='n'>\n<_sre.SRE_Match object; span=(16, 17), match=':'>\n<_sre.SRE_Match object; span=(19, 20), match='1'>\n<_sre.SRE_Match object; span=(20, 21), match='2'>\n<_sre.SRE_Match object; span=(21, 22), match='3'>\n<_sre.SRE_Match object; span=(22, 23), match='0'>\n<_sre.SRE_Match object; span=(24, 25), match='F'>\n<_sre.SRE_Match object; span=(25, 26), match='i'>\n<_sre.SRE_Match object; span=(26, 27), match='r'>\n<_sre.SRE_Match object; span=(27, 28), match='s'>\n<_sre.SRE_Match object; span=(28, 29), match='t'>\n<_sre.SRE_Match object; span=(30, 31), match='S'>\n<_sre.SRE_Match object; span=(31, 32), match='t'>\n<_sre.SRE_Match object; span=(32, 33), match='.'>\n<_sre.SRE_Match object; span=(35, 36), match='O'>\n<_sre.SRE_Match object; span=(36, 37), match='c'>\n<_sre.SRE_Match object; span=(37, 38), match='e'>\n<_sre.SRE_Match object; span=(38, 39), match='a'>\n<_sre.SRE_Match object; span=(39, 40), match='n'>\n<_sre.SRE_Match object; span=(41, 42), match='C'>\n<_sre.SRE_Match object; span=(42, 43), match='i'>\n<_sre.SRE_Match object; span=(43, 44), match='t'>\n<_sre.SRE_Match object; span=(44, 45), match='y'>\n<_sre.SRE_Match object; span=(45, 46), match=','>\n<_sre.SRE_Match object; span=(47, 48), match='M'>\n<_sre.SRE_Match object; span=(48, 49), match='D'>\n<_sre.SRE_Match object; span=(50, 51), match='1'>\n<_sre.SRE_Match object; span=(51, 52), match='5'>\n<_sre.SRE_Match object; span=(52, 53), match='6'>\n<_sre.SRE_Match object; span=(53, 54), match='7'>\n<_sre.SRE_Match object; span=(54, 55), match='8'>\n<_sre.SRE_Match object; span=(55, 56), match='9'>\n<_sre.SRE_Match object; span=(56, 57), match='.'>\n"
]
],
[
[
"We can see that none of the matches above are whitespace characters. We also see, that by using `\\S` we were able to match all characters, including periods (`.`), letters, and numbers.",
"_____no_output_____"
],
[
"# TODO: Print The Numbers Between Whitespace Characters\n\nIn the cell below, our `sample_text` consists of a multi-line string with numbers in between whitespace characters:\n\n```python\n123\t45\t7895\n1\t222\t33\n```\n\nNotice that not all the numbers have the same number of digits. For example, the first number (`123` ) has three digits, but the second number (`45` ) only has two digits. \n\nNotice that not all the numbers have the same number of digits. For example, the first number (`123` ) has three digits, but the second number (`45` ) only has two digits. \n\nWrite a single regular expression that finds the tabs (`\\t`) and the newlines (`\\n`) in this multi-line string and save the regular expression object in a variable called `regex`. Then use the `.finditer()` method to search the `sample_text` string for the given regular expression. Then, write a loop that uses the span information from each `match` to only print the numbers found in the original multi-line string. Your code should work in the general case where the numbers can have any number of digits. For example, if the numbers in the string were to change your code should still be able to find them and print them. Finally, in this exercise you cannot use `\\d` in your regular expression. \n\n**HINT :** Notice that there are no whites paces in the multiline string. Use the `\\s` sequence to find the tabs and newlines. Then notice that you can use the span's `end` and `start` index from consecutive matches to figure out the number of digits of each number. Use these indices to print the numbers found in the original multi-line string. You can use the `match.span()` method we saw before to find the `start` and `end` indices of each `match`. Alternatively, you can also use the `.start()` and `.end()` methods to extract the `start` and `end` indices of each match. The `match.start()` is equivalent to `match.span()[0]` and `match.end()` is equivalent to `match.span()[1]`.",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = '''\n123\\t45\\t7895\n1\\t222\\t33\n'''\n\n# Print sample_text\nprint('Sample Text:\\n', sample_text)\n\n# Create a regular expression object with the regular expression\nregex = re.compile(r'\\s')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n\n# Write a loop to print all the numbers found in the original string\ncounter = 0\nfor match in matches:\n if counter != 0:\n start_idx = match.start()\n print(sample_text[end_idx:start_idx])\n end_idx = match.end() \n counter += 1",
"Sample Text:\n \n123\t45\t7895\n1\t222\t33\n\n123\n45\n7895\n1\n222\n33\n"
]
],
[
[
"### Matching Alphanumeric Characters Using `\\w`\n\nIn the code below, we will use `\\w` as our regular expression to find all the alphanumeric characters in our `sample_text` string. This includes the underscore ( `_` ), all the numbers from 0 through 9, and all the uppercase and lowercase letters:",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = '''\nYou can contact FAKE Company at:\[email protected].\n'''\n\n# Create a regular expression object with the regular expression '\\w'\nregex = re.compile(r'\\w')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n# Print all the matches\nfor match in matches:\n print(match)",
"<_sre.SRE_Match object; span=(1, 2), match='Y'>\n<_sre.SRE_Match object; span=(2, 3), match='o'>\n<_sre.SRE_Match object; span=(3, 4), match='u'>\n<_sre.SRE_Match object; span=(5, 6), match='c'>\n<_sre.SRE_Match object; span=(6, 7), match='a'>\n<_sre.SRE_Match object; span=(7, 8), match='n'>\n<_sre.SRE_Match object; span=(9, 10), match='c'>\n<_sre.SRE_Match object; span=(10, 11), match='o'>\n<_sre.SRE_Match object; span=(11, 12), match='n'>\n<_sre.SRE_Match object; span=(12, 13), match='t'>\n<_sre.SRE_Match object; span=(13, 14), match='a'>\n<_sre.SRE_Match object; span=(14, 15), match='c'>\n<_sre.SRE_Match object; span=(15, 16), match='t'>\n<_sre.SRE_Match object; span=(17, 18), match='F'>\n<_sre.SRE_Match object; span=(18, 19), match='A'>\n<_sre.SRE_Match object; span=(19, 20), match='K'>\n<_sre.SRE_Match object; span=(20, 21), match='E'>\n<_sre.SRE_Match object; span=(22, 23), match='C'>\n<_sre.SRE_Match object; span=(23, 24), match='o'>\n<_sre.SRE_Match object; span=(24, 25), match='m'>\n<_sre.SRE_Match object; span=(25, 26), match='p'>\n<_sre.SRE_Match object; span=(26, 27), match='a'>\n<_sre.SRE_Match object; span=(27, 28), match='n'>\n<_sre.SRE_Match object; span=(28, 29), match='y'>\n<_sre.SRE_Match object; span=(30, 31), match='a'>\n<_sre.SRE_Match object; span=(31, 32), match='t'>\n<_sre.SRE_Match object; span=(34, 35), match='f'>\n<_sre.SRE_Match object; span=(35, 36), match='a'>\n<_sre.SRE_Match object; span=(36, 37), match='k'>\n<_sre.SRE_Match object; span=(37, 38), match='e'>\n<_sre.SRE_Match object; span=(38, 39), match='_'>\n<_sre.SRE_Match object; span=(39, 40), match='c'>\n<_sre.SRE_Match object; span=(40, 41), match='o'>\n<_sre.SRE_Match object; span=(41, 42), match='m'>\n<_sre.SRE_Match object; span=(42, 43), match='p'>\n<_sre.SRE_Match object; span=(43, 44), match='a'>\n<_sre.SRE_Match object; span=(44, 45), match='n'>\n<_sre.SRE_Match object; span=(45, 46), match='y'>\n<_sre.SRE_Match object; span=(46, 47), match='1'>\n<_sre.SRE_Match object; span=(47, 48), match='2'>\n<_sre.SRE_Match object; span=(49, 50), match='e'>\n<_sre.SRE_Match object; span=(50, 51), match='m'>\n<_sre.SRE_Match object; span=(51, 52), match='a'>\n<_sre.SRE_Match object; span=(52, 53), match='i'>\n<_sre.SRE_Match object; span=(53, 54), match='l'>\n<_sre.SRE_Match object; span=(55, 56), match='c'>\n<_sre.SRE_Match object; span=(56, 57), match='o'>\n<_sre.SRE_Match object; span=(57, 58), match='m'>\n"
]
],
[
[
"As we can see, all the matches found correspond to alphanumeric characters only, including the underscore in the email address.\n\nConversely, if wanted to find all the characters that are **not** alphanumeric characters, we will use `\\W` as our regular expression, as shown below:",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = '''\nYou can contact FAKE Company at:\[email protected].\n'''\n\n# Create a regular expression object with the regular expression '\\W'\nregex = re.compile(r'\\W')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n# Print all the matches\nfor match in matches:\n print(match)",
"<_sre.SRE_Match object; span=(0, 1), match='\\n'>\n<_sre.SRE_Match object; span=(4, 5), match=' '>\n<_sre.SRE_Match object; span=(8, 9), match=' '>\n<_sre.SRE_Match object; span=(16, 17), match=' '>\n<_sre.SRE_Match object; span=(21, 22), match=' '>\n<_sre.SRE_Match object; span=(29, 30), match=' '>\n<_sre.SRE_Match object; span=(32, 33), match=':'>\n<_sre.SRE_Match object; span=(33, 34), match='\\n'>\n<_sre.SRE_Match object; span=(48, 49), match='@'>\n<_sre.SRE_Match object; span=(54, 55), match='.'>\n<_sre.SRE_Match object; span=(58, 59), match='.'>\n<_sre.SRE_Match object; span=(59, 60), match='\\n'>\n"
]
],
[
[
"We can see that none of the matches are alphanumeric characters. We also see, that by using `\\W` we were able to match all whitespace characters, and the `@` symbol in the email address.",
"_____no_output_____"
],
[
"# TODO: Find emails\n\nIn the cell below, our `sample_text` consists of a multi-line string that contains three email addresses:\n\n```\[email protected]\[email protected]\[email protected]\n```\n\nNotice, that all three email address have the same pattern, namely, the first name initial, followed by a dot (`.`), followed by the last name initial, and ending in ``` @email.com```. \n\nTake advantage of the fact that all three email addresses have the same pattern to write a single regular expression that can find all three email addresses in our `sample_text` string. As usual, save the regular expression object in a variable called `regex`. Then use the `.finditer()` method to search the `sample_text` string for the given regular expression. Finally, write a loop to print all the `matches` found by the `.finditer()` method.",
"_____no_output_____"
]
],
[
[
"# Import re module\nimport re\n\n# Sample text\nsample_text = '''\nJohn Sanders: [email protected]\nAlice Walters: [email protected]\nMary Jones: [email protected]\n'''\n\n# Print sample_text\nprint('Sample Text:\\n', sample_text)\n\n# Create a regular expression object with the regular expression\nregex = re.compile(r'[0-9a-zA-Z].[0-9a-zA-Z]@email.com')\n\n# Search the sample_text for the regular expression\nmatches = regex.finditer(sample_text)\n\n# Print all the matches\nfor match in matches:\n print(match)",
"Sample Text:\n \nJohn Sanders: [email protected]\nAlice Walters: [email protected]\nMary Jones: [email protected]\n\n<_sre.SRE_Match object; span=(15, 28), match='[email protected]'>\n<_sre.SRE_Match object; span=(44, 57), match='[email protected]'>\n<_sre.SRE_Match object; span=(70, 83), match='[email protected]'>\n"
]
],
[
[
"If you wrote your regex correctly you should see three matches above corresponding to the three email addresses found in our `sample_text` string.",
"_____no_output_____"
],
[
"# Solution\n\n[Solution notebook](simple_patterns_solution.ipynb)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7b55373c2091bf5a4546a69c4e7d3bdcebc3fb7 | 269,753 | ipynb | Jupyter Notebook | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes | a04370246399297a4311b42b389600a8e3b4710c | [
"MIT"
] | 1 | 2022-01-17T16:05:24.000Z | 2022-01-17T16:05:24.000Z | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes | a04370246399297a4311b42b389600a8e3b4710c | [
"MIT"
] | null | null | null | Week0/Week0-notes-python-fundamentals.ipynb | Magica-Chen/WebSNA-notes | a04370246399297a4311b42b389600a8e3b4710c | [
"MIT"
] | 1 | 2022-01-24T20:42:03.000Z | 2022-01-24T20:42:03.000Z | 58.073843 | 33,228 | 0.713734 | [
[
[
"# Creating variables",
"_____no_output_____"
],
[
"In this notebook we will look into the concept of variables.\n\nPython, like R, is a dynamically-typed language, meaning you can change the class/type of a variable on the go. This is convenient in many places, but dangerous in many other ways. It is impossible to rely on the type of the variable, and you should always retrace your steps throughout the code to see what the variable is currently representing. This is sometimes hard, especially in places like this notebook where we can execute different bits of code in any order.\n\n**You're suggested to run this script on Colab**\n[](https://colab.research.google.com/github/Magica-Chen/WebSNA-notes/blob/main/Week0/Week0-notes-python-fundamentals.ipynb)",
"_____no_output_____"
],
[
"## Intro and strings",
"_____no_output_____"
],
[
"Let's create a variable:",
"_____no_output_____"
]
],
[
[
"name = \"Edinburgh\"\nname",
"_____no_output_____"
]
],
[
[
"This generates a string variable. They can be easily printed, although it is safer to use the print function:",
"_____no_output_____"
]
],
[
[
"print(name)",
"Edinburgh\n"
]
],
[
[
"It is also wise to check the type of the variable, in case you are lost:",
"_____no_output_____"
]
],
[
[
"type(name)",
"_____no_output_____"
]
],
[
[
"This confirms that we are dealing with a string. There are a few things we can do with strings (which we can denote by using one or two apostrophes):",
"_____no_output_____"
]
],
[
[
"name = 'university of edinburgh'\nprint(name.lower())\nprint(name.upper())\nprint(name.title())",
"university of edinburgh\nUNIVERSITY OF EDINBURGH\nUniversity Of Edinburgh\n"
]
],
[
[
"We can concatenate strings easily using +, or using a comma in a print statement:",
"_____no_output_____"
]
],
[
[
"print('University', 'of Edinburgh')\nprint('University' + ' ' + 'of Edinburgh')",
"University of Edinburgh\nUniversity of Edinburgh\n"
]
],
[
[
"Writing print('The University of Edinburgh is '+ 439) will not work, as the + operator only works for strings, we can convert any object into a string however:",
"_____no_output_____"
]
],
[
[
"print('The University of Edinburgh is '+ str(439))",
"The University of Edinburgh is 439\n"
]
],
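[
[
"As a small aside (not part of the original notebook): f-strings offer another way to mix text and numbers, without calling str() explicitly:\n\n```python\nage = 439\nprint(f\"The University of Edinburgh is {age} years old\")\n```",
"_____no_output_____"
]
],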
[
[
"A few other useful tricks:",
"_____no_output_____"
]
],
[
[
"name = \" edinburgh \"\nprint(\"|\"+name.lstrip()+\"|\")\nprint(\"|\"+name.rstrip()+\"|\")\nprint(\"|\"+name.strip()+\"|\")",
"|edinburgh |\n| edinburgh|\n|edinburgh|\n"
]
],
[
[
"You can use control characters as well:",
"_____no_output_____"
]
],
[
[
"print('Edinburgh\\thas a university\\nrunning web & social network analytics course')",
"Edinburgh\thas a university\nrunning web & social network analytics course\n"
]
],
[
[
"## Numbers",
"_____no_output_____"
]
],
[
[
"a = 10\nb = -10.1023\n\n#Some operations illustrated (\\t stands for a tab)\nprint(\"a: \\t\\t\\t\" + str(a))\nprint(\"b: \\t\\t\\t\" + str(b))\nprint(\"absolute of b: \\t\\t\" + str(abs(b)))\nprint(\"rounded b: \\t\\t\" + str(round(b,3)))\nprint(\"square of a: \\t\\t\" + str(pow(a,2)))\nprint(\"cube of a: \\t\\t\" + str(a**3))\nprint(\"integer part of b: \\t\" + str(int(b)))",
"a: \t\t\t10\nb: \t\t\t-10.1023\nabsolute of b: \t\t10.1023\nrounded b: \t\t-10.102\nsquare of a: \t\t100\ncube of a: \t\t1000\ninteger part of b: \t-10\n"
]
],
[
[
"# Flow Control",
"_____no_output_____"
],
[
"Control flow statements help you to structure the code and direct it towards your convenience and introduce loops and so on.",
"_____no_output_____"
],
[
"## If statements",
"_____no_output_____"
]
],
[
[
"price = -5;\n\nif price <0:\n print(\"Price is negative!\")\nelif price <1:\n print(\"Price is too small!\")\nelse:\n print(\"Price is suitable.\")",
"Price is negative!\n"
]
],
[
[
"Especially in text mining, comparing strings is very important:",
"_____no_output_____"
]
],
[
[
"#Comparing strings\nname1 = \"edinburgh\"\nname2 = \"Edinburgh\"\n\nif name1 == name2:\n print(\"Equal\")\nelse:\n print(\"Not equal\")\n\nif name1.lower() == name2.lower():\n print(\"Equal\")\nelse:\n print(\"Not equal\")",
"Not equal\nEqual\n"
]
],
[
[
"Using multiple conditions:",
"_____no_output_____"
]
],
[
[
"number = 9\nif number > 1 and not number > 9:\n print(\"Number is between 1 and 10\")\n \nnumber = 9\nname = 'johannes'\nif number < 5 or 'j' in name:\n print(\"Number is lower than 5 or the name contains a 'j'\")",
"Number is between 1 and 10\nNumber is lower than 5 or the name contains a 'j'\n"
]
],
[
[
"## While loops",
"_____no_output_____"
]
],
[
[
"number = 4\nwhile number > 1:\n print(number)\n number = number -1",
"4\n3\n2\n"
]
],
[
[
"## For loops",
"_____no_output_____"
],
[
"For loops allow you to iteratre over elements in a certain collection, for example a list:",
"_____no_output_____"
]
],
[
[
"# We'll look into lists in a minute\nnumber_list = [1, 2, 3, 4]\nfor item in number_list:\n print(item)",
"1\n2\n3\n4\n"
],
[
"list = ['a', 'b', 'c']\nfor item in list:\n print(item)",
"a\nb\nc\n"
]
],
[
[
"Ranges are also useful. Note that the upper element is not included and we can adjust the step size:",
"_____no_output_____"
]
],
[
[
"for i in range(1,4):\n print(i)",
"1\n2\n3\n"
],
[
"for i in range(30,100, 10):\n print(i)",
"30\n40\n50\n60\n70\n80\n90\n"
]
],
[
[
"## Indentation",
"_____no_output_____"
],
[
"Please be very careful with indentation",
"_____no_output_____"
]
],
[
[
"number_1 = 3\nnumber_2 = 5\n\nprint('No indent (no tabs used)')\nif number_1 > 1:\n print('\\tNumber 1 higher than 1.')\n if number_2 > 5:\n print('\\t\\tnumber 2 higher than 5')\n print('\\tnumber 2 higher than 5')\n\nnumber_1 = 3\nnumber_2 = 6\n\nprint('No indent (no tabs used)')\nif number_1 > 1:\n print('\\tNumber 1 higher than 1.')\n if number_2 > 5:\n print('\\t\\tnumber 2 higher than 5')\n print('\\tnumber 2 higher than 5')",
"No indent (no tabs used)\n\tNumber 1 higher than 1.\n\tnumber 2 higher than 5\nNo indent (no tabs used)\n\tNumber 1 higher than 1.\n\t\tnumber 2 higher than 5\n\tnumber 2 higher than 5\n"
]
],
[
[
"# List & Tuple",
"_____no_output_____"
],
[
"## Lists",
"_____no_output_____"
],
[
"Lists are great for collecting anything. They can contain objects of different types. For example:",
"_____no_output_____"
]
],
[
[
"names = [5, \"Giovanni\", \"Rose\", \"Yongzhe\", \"Luciana\", \"Imani\"]",
"_____no_output_____"
]
],
[
[
"Although that is not best practice. Let's start with a list of names:",
"_____no_output_____"
]
],
[
[
"names = [\"Johannes\", \"Giovanni\", \"Rose\", \"Yongzhe\", \"Luciana\", \"Imani\"]",
"_____no_output_____"
],
[
"# Loop names\nfor name in names:\n print('Name: '+name)\n\n# Get 'Giovanni' from list\n# Lists start counting at 0\ngiovanni = names[1]\nprint(giovanni.upper())\n\n# Get last item\nname = names[-1]\nprint(name.upper())\n\n# Get second to last item\nname = names[-2]\nprint(name.upper())\n\nprint(\"First three: \"+str(names[0:3]))\nprint(\"First four: \"+str(names[:4]))\nprint(\"Up until the second to last one: \"+str(names[:-2]))\nprint(\"Last two: \"+str(names[-2:]))",
"Name: Johannes\nName: Giovanni\nName: Rose\nName: Yongzhe\nName: Luciana\nName: Imani\nGIOVANNI\nIMANI\nLUCIANA\nFirst three: ['Johannes', 'Giovanni', 'Rose']\nFirst four: ['Johannes', 'Giovanni', 'Rose', 'Yongzhe']\nUp until the second to last one: ['Johannes', 'Giovanni', 'Rose', 'Yongzhe']\nLast two: ['Luciana', 'Imani']\n"
]
],
[
[
"## Enumeration",
"_____no_output_____"
],
[
"We can enumerate collections/lists that adds an index to every element:",
"_____no_output_____"
]
],
[
[
"for index, name in enumerate(names):\n print(str(index) , \" \" , name, \" is in the list.\")",
"0 Johannes is in the list.\n1 Giovanni is in the list.\n2 Rose is in the list.\n3 Yongzhe is in the list.\n4 Luciana is in the list.\n5 Imani is in the list.\n"
]
],
[
[
"## Searching and editing",
"_____no_output_____"
]
],
[
[
"names = [\"Johannes\", \"Giovanni\", \"Rose\", \"Yongzhe\", \"Luciana\", \"Imani\"]\n\n# Finding an element\nprint(names.index(\"Johannes\"))\n\n# Adding an element\nnames.append(\"Kumiko\")\n\n# Adding an element at a specific location\nnames.insert(2, \"Roberta\")\n\nprint(names)\n\n#Removal\nfruits = [\"apple\",\"orange\",\"pear\"]\ndel fruits[0]\nfruits.remove(\"pear\")\nprint('Fruits: ', fruits)\n\n# Modifying an element\nnames[5] = \"Tom\"\nprint(names)\n\n# Test whether an item is in the list (best do this before removing to avoid raising errors)\nprint(\"Tom\" in names)\n\n# Length of a list\nprint(\"Length of the list: \" + str(len(names)))",
"0\n['Johannes', 'Giovanni', 'Roberta', 'Rose', 'Yongzhe', 'Luciana', 'Imani', 'Kumiko']\nFruits: ['orange']\n['Johannes', 'Giovanni', 'Roberta', 'Rose', 'Yongzhe', 'Tom', 'Imani', 'Kumiko']\nTrue\nLength of the list: 8\n"
]
],
[
[
"Python starts at 0!!!",
"_____no_output_____"
],
[
"## Sorting and copying",
"_____no_output_____"
]
],
[
[
"# Temporary sorting:\nprint(sorted(names))\nprint(names)\n\n# Make changes permanent\nnames.sort()\nprint(\"Sorted names: \" + str(names))\nnames.sort(reverse=True)\nprint(\"Reverse sorted names: \" + str(names))",
"['Giovanni', 'Imani', 'Johannes', 'Kumiko', 'Roberta', 'Rose', 'Tom', 'Yongzhe']\n['Johannes', 'Giovanni', 'Roberta', 'Rose', 'Yongzhe', 'Tom', 'Imani', 'Kumiko']\nSorted names: ['Giovanni', 'Imani', 'Johannes', 'Kumiko', 'Roberta', 'Rose', 'Tom', 'Yongzhe']\nReverse sorted names: ['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Johannes', 'Imani', 'Giovanni']\n"
],
[
"# Copying list (a shallow copy just duplicates the pointer to the memory address)\nnamez = names\nnamez.remove(\"Johannes\")\nprint(namez)\nprint(names)\n\n# Now a 'deep' copy\nprint(\"After deep copy\")\n\nnamez = names.copy()\nnamez.remove(\"Giovanni\")\nprint(namez)\nprint(names)\n\n#Alternative\nnamez = names[:]\nprint(namez)",
"['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani', 'Giovanni']\n['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani', 'Giovanni']\nAfter deep copy\n['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani']\n['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani', 'Giovanni']\n['Yongzhe', 'Tom', 'Rose', 'Roberta', 'Kumiko', 'Imani', 'Giovanni']\n"
]
],
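[
[
"A hedged aside (not in the original notebook): `.copy()` and slicing create *shallow* copies, which is enough for a flat list of strings like `names`. For nested lists, the inner lists would still be shared; the standard `copy` module provides `deepcopy()` for that case:\n\n```python\nimport copy\n\nnested = [[1, 2], [3, 4]]\nshallow = nested.copy() # copies the outer list only\ndeep = copy.deepcopy(nested) # copies the inner lists as well\n\nnested[0].append(99)\nprint(shallow[0]) # the inner list is shared, so the 99 appears here\nprint(deep[0]) # fully independent, so it does not\n```",
"_____no_output_____"
]
],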
[
[
"## Strings as lists",
"_____no_output_____"
],
[
"Strings can be manipulated and used just like lists. This is especially handy in text mining:",
"_____no_output_____"
]
],
[
[
"course = \"Predictive analytics\"\nprint(\"Last nine letters: \"+course[-9:])\nprint(\"Analytics in course title? \" + str(\"analytics\" in course))\nprint(\"Start location of 'analytics': \" + str(course.find(\"analytics\")))\nprint(course.replace(\"analytics\",\"analysis\"))\nlist_of_words = course.split(\" \")\nfor index, word in enumerate(list_of_words):\n print(\"Word \", index, \": \"+word)",
"Last nine letters: analytics\nAnalytics in course title? True\nStart location of 'analytics': 11\nPredictive analysis\nWord 0 : Predictive\nWord 1 : analytics\n"
]
],
[
[
"## Sets",
"_____no_output_____"
],
[
"Sets only contain unique elements. They have to be declared upfront using set() and allow for operations such as intersection():",
"_____no_output_____"
]
],
[
[
"name_set = set(names)\nprint(name_set)\n\n# Add an element\nname_set.add(\"Galina\")\nprint(name_set)\n\n# Discard an element\nname_set.discard(\"Johannes\")\nprint(name_set)\n\nname_set2 = set([\"Rose\", \"Tom\"])\n# Difference and intersection\ndifference = name_set - name_set2\nprint(difference)\nintersection = name_set.intersection(name_set2)\nprint(intersection)",
"{'Yongzhe', 'Kumiko', 'Roberta', 'Giovanni', 'Tom', 'Imani', 'Rose'}\n{'Yongzhe', 'Kumiko', 'Roberta', 'Giovanni', 'Tom', 'Imani', 'Galina', 'Rose'}\n{'Yongzhe', 'Kumiko', 'Roberta', 'Giovanni', 'Tom', 'Imani', 'Galina', 'Rose'}\n{'Yongzhe', 'Kumiko', 'Roberta', 'Giovanni', 'Imani', 'Galina'}\n{'Tom', 'Rose'}\n"
]
],
[
[
"# Dictionary & Function",
"_____no_output_____"
],
[
"## Dictionaries",
"_____no_output_____"
],
[
"Dictionaries are a great way to store particular data as key-value pairs, which mimics the basic structure of a simple database.",
"_____no_output_____"
]
],
[
[
"courses = {\"Johannes\" : \"Predictive analytics\", \"Kumiko\" : \"Prescriptive analytics\", \"Luciana\" : \"Descriptive analytics\"}\n\nfor organizer in courses:\n print(organizer + \" teaches \" + courses[organizer])",
"Johannes teaches Predictive analytics\nKumiko teaches Prescriptive analytics\nLuciana teaches Descriptive analytics\n"
]
],
[
[
"We can also write:",
"_____no_output_____"
]
],
[
[
"for organizer, course in courses.items():\n print(organizer + \" teaches \" + course)",
"Johannes teaches Predictive analytics\nKumiko teaches Prescriptive analytics\nLuciana teaches Descriptive analytics\n"
],
[
"# Adding items\ncourses[\"Imani\"] = \"Other analytics\"\nprint(courses)\n\n# Overwrite\ncourses[\"Johannes\"] = \"Business analytics\"\nprint(courses)",
"{'Johannes': 'Predictive analytics', 'Kumiko': 'Prescriptive analytics', 'Luciana': 'Descriptive analytics', 'Imani': 'Other analytics'}\n{'Johannes': 'Business analytics', 'Kumiko': 'Prescriptive analytics', 'Luciana': 'Descriptive analytics', 'Imani': 'Other analytics'}\n"
],
[
"# Remove\ndel courses[\"Johannes\"]\nprint(courses)",
"{'Kumiko': 'Prescriptive analytics', 'Luciana': 'Descriptive analytics', 'Imani': 'Other analytics'}\n"
],
[
"# Looping values\nfor course in courses.values():\n print(course)",
"Prescriptive analytics\nDescriptive analytics\nOther analytics\n"
],
[
"# Sorted output (on keys)\nfor organizer, course in sorted(courses.items()):\n print(organizer +\" teaches \" + course)",
"Imani teaches Other analytics\nKumiko teaches Prescriptive analytics\nLuciana teaches Descriptive analytics\n"
]
],
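[
[
"A small aside (not in the original notebook, and assuming the cells above have been run): `.get()` looks up a key but returns a default value instead of raising a KeyError when the key is missing:\n\n```python\n# 'Johannes' was removed above, so the default is returned\nprint(courses.get(\"Johannes\", \"no course found\"))\nprint(courses.get(\"Kumiko\", \"no course found\"))\n```",
"_____no_output_____"
]
],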
[
[
"## Functions",
"_____no_output_____"
],
[
"Functions form the backbone of all code. You have already used some, like print(). They can be easily defined by yourself as well.",
"_____no_output_____"
]
],
[
[
"def my_function(a, b):\n a = a.title()\n b = b.upper()\n print(a+ \" \"+b)",
"_____no_output_____"
],
[
"def my_function2(a, b):\n a = a.title()\n b = b.upper()\n return a + \" \" + b",
"_____no_output_____"
],
[
"my_function(\"johannes\",\"de smedt\")\noutput = my_function2(\"johannes\",\"de smedt\")\nprint(output)",
"Johannes DE SMEDT\nJohannes DE SMEDT\n"
]
],
[
[
"Notice how the first function already prints, while the second returns a string we have to print ourselves. Python is weakly-typed, so a function can produce different results, like in this example:",
"_____no_output_____"
]
],
[
[
"# Different output type\ndef calculate_mean(a, b):\n if (a>0):\n return (a+b)/2\n else:\n return \"a is negative\"\n\noutput = calculate_mean(1,2)\nprint(output)\noutput = calculate_mean(0,1)\nprint(output)",
"1.5\na is negative\n"
]
],
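[
[
"As a brief aside (not in the original notebook), parameters can be given default values, which makes them optional when the function is called:\n\n```python\ndef greet(name, greeting=\"Hello\"):\n    # greeting is optional; \"Hello\" is used when it is not supplied\n    return greeting + \" \" + name.title()\n\nprint(greet(\"johannes\"))\nprint(greet(\"johannes\", \"Hi\"))\n```",
"_____no_output_____"
]
],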
[
[
"## Comprehensions",
"_____no_output_____"
],
[
"Comprehensions allow you to quickly/efficiently write lists/dictionaries:",
"_____no_output_____"
]
],
[
[
"# Finding even numbers\nevens = [i for i in range(1,11) if i % 2 ==0]\nprint(evens)",
"[2, 4, 6, 8, 10]\n"
]
],
[
[
"In Python, you can easily make tuples such as pairs, like here:",
"_____no_output_____"
]
],
[
[
"# Double fun\npairs = [(x,y) for x in range(1,11) for y in range(5,11) if x>y]\nprint(pairs)",
"[(6, 5), (7, 5), (7, 6), (8, 5), (8, 6), (8, 7), (9, 5), (9, 6), (9, 7), (9, 8), (10, 5), (10, 6), (10, 7), (10, 8), (10, 9)]\n"
]
],
[
[
"They are also useful to perform some pre-processing, e.g., on strings:",
"_____no_output_____"
]
],
[
[
"# Operations\nnames = [\"jamal\", \"maurizio\", \"johannes\"]\n\ntitled_names = [name.title() for name in names]\nprint(titled_names)\n\nj_s = [name.title() for name in names if name.lower()[0] == 'j']\nprint(j_s)",
"['Jamal', 'Maurizio', 'Johannes']\n['Jamal', 'Johannes']\n"
]
],
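[
[
"The comprehension examples above build lists; a minimal sketch of a *dictionary* comprehension (not in the original notebook) follows the same pattern, with a key and a value:\n\n```python\nnames = [\"jamal\", \"maurizio\", \"johannes\"]\n\n# Map each name to its length\nname_lengths = {name: len(name) for name in names}\nprint(name_lengths)\n```",
"_____no_output_____"
]
],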
[
[
"# IO & Library",
"_____no_output_____"
]
],
[
[
"# Download some datasets\n# If you are using git, then you don't need to run the following.\n!wget -q https://raw.githubusercontent.com/Magica-Chen/WebSNA-notes/main/Week0/data/DM_1.csv\n!wget -q https://raw.githubusercontent.com/Magica-Chen/WebSNA-notes/main/Week0/data/DM_2.csv\n!wget -q https://raw.githubusercontent.com/Magica-Chen/WebSNA-notes/main/Week0/data/ordered_amounts_per_person.csv\n!mkdir data\n!mv *.csv ./data",
"_____no_output_____"
]
],
[
[
"## Reading files",
"_____no_output_____"
],
[
"In Python, we can easily open any file type. Naturally, it is most suitable for plainly-structured formats such as .txt., .csv., as so on. You can also open Excel files with appropriate packages, such as pandas (more on this later). Let's read in a .csv file:",
"_____no_output_____"
]
],
[
[
"# Open a file for reading ('r')\nfile = open('data/DM_1.csv','r')\n\nfor line in file:\n print(line)",
"Name,Email,City,Salary\n\nBrent Hopkins,[email protected],Mount Pearl,38363\n\nColt Bender,[email protected],Castle Douglas,21506\n\nArthur Hammond,[email protected],Biloxi,27511\n\nSean Warner,[email protected],Moere,25201\n\nTate Greene,[email protected],Ipswich,35052\n\nGavin Gibson,[email protected],Oordegem,37126\n\nKelly Garza,[email protected],Kukatpalle,39420\n\nZane Preston,[email protected],Neudšrfl,28553\n\nCole Cunningham,[email protected],Catemu,27972\n\nTarik Hendricks,[email protected],Newbury,39027\n\nElvis Collier,[email protected],Paradise,22568\n\nJackson Huber,[email protected],Veere,29922\n\nMacaulay Cline,[email protected],Campobasso,24163\n\nElijah Chase,[email protected],Grantham,23881\n\nDennis Anthony,[email protected],Cedar Rapids,27969\n\nFulton Snyder,[email protected],San Pedro,21594\n\nLeo Willis,[email protected],Kester,31203\n\nMatthew Hooper,[email protected],Bellefontaine,33222\n\nTodd Jones,[email protected],Toledo,24809\n\nPalmer Byrd,[email protected],Bissegem,29045\n"
]
],
[
[
"We can store this information in objects and start using it:",
"_____no_output_____"
]
],
[
[
"# File is looped now, hence, reread file\nfile = open('data/DM_1.csv','r')\n# ignore the header\nnext(file)\n\n# Store names with amount (i.e. columns 1 & 2)\namount_per_person = {}\nfor line in file:\n cells = line.split(\",\")\n amount_per_person[cells[0]] = int(cells[3])\n\nfor person, amount in sorted(amount_per_person.items()):\n if amount > 25000:\n print(person , \" has \" , amount)",
"Arthur Hammond has 27511\nBrent Hopkins has 38363\nCole Cunningham has 27972\nDennis Anthony has 27969\nGavin Gibson has 37126\nJackson Huber has 29922\nKelly Garza has 39420\nLeo Willis has 31203\nMatthew Hooper has 33222\nPalmer Byrd has 29045\nSean Warner has 25201\nTarik Hendricks has 39027\nTate Greene has 35052\nZane Preston has 28553\n"
],
[
"# Now we use 'w' for write \noutput_file = open('data/ordered_amounts_per_person.csv','w')\n\nfor person, amount in sorted(amount_per_person.items()):\n output_file.write(person.lower()+\",\"+str(amount)) \noutput_file.close()",
"_____no_output_____"
]
],
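[
[
"A common alternative worth knowing (a hedged aside, not used in the original notebook): opening files in a `with` block closes them automatically, even if an error occurs while reading or writing:\n\n```python\nwith open('data/DM_1.csv', 'r') as file:\n    for line in file:\n        print(line.strip())\n```",
"_____no_output_____"
]
],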
[
[
"## Libraries",
"_____no_output_____"
],
[
"Libraries are imported by using `import`:",
"_____no_output_____"
]
],
[
[
"import numpy\nimport pandas\nimport sklearn",
"_____no_output_____"
]
],
[
[
"If you haven't installed sklearn, please install it by:",
"_____no_output_____"
]
],
[
[
"!pip install sklearn",
"Collecting sklearn\n Downloading sklearn-0.0.tar.gz (1.1 kB)\nRequirement already satisfied: scikit-learn in c:\\users\\zchen112\\anaconda3\\lib\\site-packages (from sklearn) (0.24.1)\nRequirement already satisfied: joblib>=0.11 in c:\\users\\zchen112\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn) (1.0.1)\nRequirement already satisfied: scipy>=0.19.1 in c:\\users\\zchen112\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn) (1.6.2)\nRequirement already satisfied: numpy>=1.13.3 in c:\\users\\zchen112\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn) (1.20.1)\nRequirement already satisfied: threadpoolctl>=2.0.0 in c:\\users\\zchen112\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn) (2.1.0)\nBuilding wheels for collected packages: sklearn\n Building wheel for sklearn (setup.py): started\n Building wheel for sklearn (setup.py): finished with status 'done'\n Created wheel for sklearn: filename=sklearn-0.0-py2.py3-none-any.whl size=1316 sha256=de54cc32dd40e89c8b3d1fd541e221f659546c0f556371dadf408a133d078ca0\n Stored in directory: c:\\users\\zchen112\\appdata\\local\\pip\\cache\\wheels\\22\\0b\\40\\fd3f795caaa1fb4c6cb738bc1f56100be1e57da95849bfc897\nSuccessfully built sklearn\nInstalling collected packages: sklearn\nSuccessfully installed sklearn-0.0\n"
]
],
[
[
"We can import just a few bits using `from`, or create aliases using `as`:",
"_____no_output_____"
]
],
[
[
"import math as m\nfrom math import pi",
"_____no_output_____"
],
[
"print(numpy.add(1, 2))\nprint(pi)\nprint(m.sin(1))",
"3\n3.141592653589793\n0.8414709848078965\n"
]
],
[
[
"In the next part, some basic procedures that exist in NumPy, pandas, and scikit-learn are covered. This only scratches the surface of the possibilities, and many other functions and code will be used later on. Make sure to search around for the possiblities that exist yourself, and get a grasp of how the modules are called and used. Let's import them in this notebook to start with:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport sklearn",
"_____no_output_____"
]
],
[
[
"## Numpy",
"_____no_output_____"
]
],
[
[
"# Create empty arrays/matrices\nempty_array = np.zeros(5)\n\nempty_matrix = np.zeros((5,2))\n\nprint('Empty array: \\n',empty_array)\nprint('Empty matrix: \\n',empty_matrix)",
"Empty array: \n [0. 0. 0. 0. 0.]\nEmpty matrix: \n [[0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]]\n"
],
[
"# Create matrices\nmat = np.array([[1,2,3],[4,5,6]])\nprint('Matrix: \\n', mat)\nprint('Transpose: \\n', mat.T)\nprint('Item 2,2: ', mat[1,1])\nprint('Item 2,3: ', mat[1,2])\nprint('rows and columns: ', np.shape(mat))\nprint('Sum total matrix: ', np.sum(mat))\nprint('Sum row 1: ' , np.sum(mat[0]))\nprint('Sum row 2: ', np.sum(mat[1]))\nprint('Sum column 2: ', np.sum(mat,axis=0)[2])",
"Matrix: \n [[1 2 3]\n [4 5 6]]\nTranspose: \n [[1 4]\n [2 5]\n [3 6]]\nItem 2,2: 5\nItem 2,3: 6\nrows and columns: (2, 3)\nSum total matrix: 21\nSum row 1: 6\nSum row 2: 15\nSum column 2: 9\n"
]
],
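[
[
"A small aside (not in the original notebook): NumPy operations are element-wise, so arithmetic applies to every entry of a matrix at once:\n\n```python\nmat = np.array([[1, 2, 3], [4, 5, 6]])\nprint(mat * 2) # every element doubled\nprint(mat + mat) # element-wise addition\nprint(mat.mean()) # mean over all elements\n```",
"_____no_output_____"
]
],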
[
[
"## pandas",
"_____no_output_____"
],
[
"### Creating dataframes",
"_____no_output_____"
],
[
"pandas is great for reading and creating datasets, as well as performing basic operations on them.",
"_____no_output_____"
]
],
[
[
"# Creating a matrix with three rows of data\ndata = [['johannes',10], ['giovanni',2], ['john',3]]\n\n# Creating and printing a pandas DataFrame object from the matrix\ndf = pd.DataFrame(data)\nprint(df)",
" 0 1\n0 johannes 10\n1 giovanni 2\n2 john 3\n"
],
[
"# Adding columns to the DataFrame object\ndf.columns = ['names', 'years']\nprint(df)",
" names years\n0 johannes 10\n1 giovanni 2\n2 john 3\n"
],
[
"df_2 = pd.DataFrame(data = data, columns = ['names', 'years'])\nprint(df_2)",
" names years\n0 johannes 10\n1 giovanni 2\n2 john 3\n"
],
[
"# Taking out a single column and calculating its sum\n# This also shows the type of the variable: a 64 bit integer (array)\nprint(df['years'])\nprint('Sum of all values in column: ', df['years'].sum())",
"0 10\n1 2\n2 3\nName: years, dtype: int64\nSum of all values in column: 15\n"
],
[
"# Creating a larger matrix\ndata = [['johannes',10], ['giovanni',2], ['john',3], ['giovanni',2], ['john',3], ['giovanni',2], ['john',3], ['giovanni',2], ['john',3], ['johannes',10]]\n\n# Again, creating a DataFrame object, now with columns\ndf = pd.DataFrame(data, columns = ['names','years'])\n\n# Print the 5 first (head) and 5 last (tail) observations\nprint(df.head())\nprint('\\n')\nprint(df.tail())",
" names years\n0 johannes 10\n1 giovanni 2\n2 john 3\n3 giovanni 2\n4 john 3\n\n\n names years\n5 giovanni 2\n6 john 3\n7 giovanni 2\n8 john 3\n9 johannes 10\n"
]
],
[
[
"### Reading files",
"_____no_output_____"
],
[
"You can read files:",
"_____no_output_____"
]
],
[
[
"dataset = pd.read_csv('data/DM_1.csv')\nprint(dataset.head())",
" Name Email City \\\n0 Brent Hopkins [email protected] Mount Pearl \n1 Colt Bender [email protected] Castle Douglas \n2 Arthur Hammond [email protected] Biloxi \n3 Sean Warner [email protected] Moere \n4 Tate Greene [email protected] Ipswich \n\n Salary \n0 38363 \n1 21506 \n2 27511 \n3 25201 \n4 35052 \n"
]
],
[
[
"### Using dataframes",
"_____no_output_____"
]
],
[
[
"# Print all unique values of the column names\nprint(df['names'].unique())",
"['johannes' 'giovanni' 'john']\n"
],
[
"# Print all values and their frequency:\nprint(df['names'].value_counts())\nprint(df['years'].value_counts())",
"john 4\ngiovanni 4\njohannes 2\nName: names, dtype: int64\n2 4\n3 4\n10 2\nName: years, dtype: int64\n"
],
[
"# Add a column names 'code' with all zeros\ndf['code'] = np.zeros(10)\nprint(df)",
" names years code\n0 johannes 10 0.0\n1 giovanni 2 0.0\n2 john 3 0.0\n3 giovanni 2 0.0\n4 john 3 0.0\n5 giovanni 2 0.0\n6 john 3 0.0\n7 giovanni 2 0.0\n8 john 3 0.0\n9 johannes 10 0.0\n"
]
],
[
[
"You can also easily find things in a DataFrame use `.loc`:",
"_____no_output_____"
]
],
[
[
"# Rows 2 to 5 and all columns:\nprint(df.loc[2:5, :])",
" names years code\n2 john 3 0.0\n3 giovanni 2 0.0\n4 john 3 0.0\n5 giovanni 2 0.0\n"
],
[
"# Looping columns\nfor variable in df.columns:\n print(df[variable])",
"0 johannes\n1 giovanni\n2 john\n3 giovanni\n4 john\n5 giovanni\n6 john\n7 giovanni\n8 john\n9 johannes\nName: names, dtype: object\n0 10\n1 2\n2 3\n3 2\n4 3\n5 2\n6 3\n7 2\n8 3\n9 10\nName: years, dtype: int64\n0 0.0\n1 0.0\n2 0.0\n3 0.0\n4 0.0\n5 0.0\n6 0.0\n7 0.0\n8 0.0\n9 0.0\nName: code, dtype: float64\n"
],
[
"# Looping columns and obtaining the values (which returns an array)\nfor variable in df.columns:\n print(df[variable].values)",
"['johannes' 'giovanni' 'john' 'giovanni' 'john' 'giovanni' 'john'\n 'giovanni' 'john' 'johannes']\n[10 2 3 2 3 2 3 2 3 10]\n[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n"
]
],
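[
[
"Another common way of finding things (a hedged aside, not in the original notebook) is boolean filtering, which keeps only the rows that satisfy a condition:\n\n```python\n# Assuming the df with 'names' and 'years' from the cells above\nprint(df[df['years'] > 2])\n```",
"_____no_output_____"
]
],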
[
[
"### preparing datasets",
"_____no_output_____"
]
],
[
[
"dataset_1 = pd.read_csv('data/DM_1.csv', encoding='latin1')\ndataset_2 = pd.read_csv('data/DM_2.csv', encoding='latin1')",
"_____no_output_____"
],
[
"dataset_1",
"_____no_output_____"
],
[
"dataset_2",
"_____no_output_____"
],
[
"dataset_2.columns = ['First name', 'Last name', 'Days active']\ndataset_2",
"_____no_output_____"
]
],
[
[
"We can convert the second dataset to only have 1 column for names:",
"_____no_output_____"
]
],
[
[
"# .title() can be used to only make the first letter a capital\nnames = [dataset_2.loc[i,'First name'] + \" \" + dataset_2.loc[i,'Last name'].title() for i in range(0, len(dataset_2))]\n\n# Make a new column for the name\ndataset_2['Name'] = names\n\n# Remove the old columns\ndataset_2 = dataset_2.drop(['First name', 'Last name'], axis=1)\ndataset_2",
"_____no_output_____"
]
],
[
[
"### Bringing together the datasets",
"_____no_output_____"
],
[
"Now the datasets are made compatible, we can merge them in a few different ways.",
"_____no_output_____"
]
],
[
[
"# A left join starts from the left dataset, in this case dataset_1, and for every row matches the value in the \n# column used for joining. As you will see, the result has 22 rows since some names appear multiple times in \n# the second dataset dataset_2.\n\nboth = pd.merge(dataset_1, dataset_2, on='Name', how='left')\nboth",
"_____no_output_____"
],
[
"# A right join does the opposite: now, dataset_2 is used to match all names with the corresponding \n# observations in dataset_1. There are as many observations as there are in dataset_2, as the rows \n# in dataset_1 are unique. The last row cannot be matched with any observation in dataset_1.\n\nboth = pd.merge(dataset_1, dataset_2, on='Name', how='right')\nboth",
"_____no_output_____"
],
[
"# Inner and outer join\n# It is also possible to only retain the values that are matched in both tables, or match any value \n# that matches. This is using an inner and outer join respectively.\n\nboth = pd.merge(dataset_1, dataset_2, on='Name', how='inner')\nboth",
"_____no_output_____"
]
],
[
[
"Notice how observation 12 is missing, as there is no corresponding value in `dataset_1`.",
"_____no_output_____"
]
],
[
[
"both = pd.merge(dataset_1, dataset_2, on='Name', how='outer')\nboth",
"_____no_output_____"
]
],
[
[
"In the last table, we have 23 rows, as both matching and non-matching values are returned.\n\nMerging datasets can be really helpful. This code should give you ample ideas on how to do this quickly yourself. As always, there are a number of ways of achieving the same result. Don't hold back to explore other solutions that might be quicker or easier.",
"_____no_output_____"
],
[
"# scikit-learn",
"_____no_output_____"
],
[
"scikit-learn is great for performing all major data analysis operations. It also contains datasets. In this code, we will load a dataset and fit a simple linear regression.",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets as ds",
"_____no_output_____"
],
[
"# Load the Boston Housing dataset\ndataset = ds.load_boston()\n\n# It is a dictionary, see the keys for details:\nprint(dataset.keys())",
"dict_keys(['data', 'target', 'feature_names', 'DESCR', 'filename'])\n"
],
[
"# The 'DESCR' key holds a description text for the whole dataset\nprint(dataset['DESCR'])",
".. _boston_dataset:\n\nBoston house prices dataset\n---------------------------\n\n**Data Set Characteristics:** \n\n :Number of Instances: 506 \n\n :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.\n\n :Attribute Information (in order):\n - CRIM per capita crime rate by town\n - ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n - INDUS proportion of non-retail business acres per town\n - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n - NOX nitric oxides concentration (parts per 10 million)\n - RM average number of rooms per dwelling\n - AGE proportion of owner-occupied units built prior to 1940\n - DIS weighted distances to five Boston employment centres\n - RAD index of accessibility to radial highways\n - TAX full-value property-tax rate per $10,000\n - PTRATIO pupil-teacher ratio by town\n - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town\n - LSTAT % lower status of the population\n - MEDV Median value of owner-occupied homes in $1000's\n\n :Missing Attribute Values: None\n\n :Creator: Harrison, D. and Rubinfeld, D.L.\n\nThis is a copy of UCI ML housing dataset.\nhttps://archive.ics.uci.edu/ml/machine-learning-databases/housing/\n\n\nThis dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.\n\nThe Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic\nprices and the demand for clean air', J. Environ. Economics & Management,\nvol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics\n...', Wiley, 1980. N.B. Various transformations are used in the table on\npages 244-261 of the latter.\n\nThe Boston house-price data has been used in many machine learning papers that address regression\nproblems. \n \n.. topic:: References\n\n - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.\n - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.\n\n"
],
[
"# The data (independent variables) are stored under the 'data' key\n# The names of the independent variables are stored in the 'feature_names' key\n# Let's use them to create a DataFrame object:\ndf = pd.DataFrame(data=dataset['data'], columns=dataset['feature_names'])\nprint(df.head())",
" CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX \\\n0 0.00632 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0 \n1 0.02731 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0 \n2 0.02729 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0 \n3 0.03237 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0 \n4 0.06905 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0 \n\n PTRATIO B LSTAT \n0 15.3 396.90 4.98 \n1 17.8 396.90 9.14 \n2 17.8 392.83 4.03 \n3 18.7 394.63 2.94 \n4 18.7 396.90 5.33 \n"
],
[
"# The dependent variable is stored separately\ndf_y = pd.DataFrame(data=dataset['target'], columns=['target'])\nprint(df_y.head())",
" target\n0 24.0\n1 21.6\n2 34.7\n3 33.4\n4 36.2\n"
],
[
"# Now, let's build a linear regression model\nfrom sklearn.linear_model import LinearRegression as LR\n\n# First we create a linear regression object\nregression = LR()\n\n# Then, we fit the independent and dependent data\nregression.fit(df, df_y)\n\n# We can obtain the R^2 score (more on this later)\nprint(regression.score(df, df_y))",
"0.7406426641094095\n"
]
],
[
[
"Very often, we need to perform an operation on a single observation. In that case, we have to reshape the data using numpy:",
"_____no_output_____"
]
],
[
[
"# Consider a single observation \nso = df.loc[2, :]\nprint(so)\n\n# Just the values of the observation without meta data\nprint(so.values)\n\n# Reshaping yields a new matrix with one row with as many columns as the original observation (indicated by the -1)\nprint(np.reshape(so.values, (1, -1)))",
"CRIM 0.02729\nZN 0.00000\nINDUS 7.07000\nCHAS 0.00000\nNOX 0.46900\nRM 7.18500\nAGE 61.10000\nDIS 4.96710\nRAD 2.00000\nTAX 242.00000\nPTRATIO 17.80000\nB 392.83000\nLSTAT 4.03000\nName: 2, dtype: float64\n[2.7290e-02 0.0000e+00 7.0700e+00 0.0000e+00 4.6900e-01 7.1850e+00\n 6.1100e+01 4.9671e+00 2.0000e+00 2.4200e+02 1.7800e+01 3.9283e+02\n 4.0300e+00]\n[[2.7290e-02 0.0000e+00 7.0700e+00 0.0000e+00 4.6900e-01 7.1850e+00\n 6.1100e+01 4.9671e+00 2.0000e+00 2.4200e+02 1.7800e+01 3.9283e+02\n 4.0300e+00]]\n"
],
[
"# For two observations:\nso_2 = df.loc[2:3, :]\nprint(np.reshape(so_2.values, (2, -1)))",
"[[2.7290e-02 0.0000e+00 7.0700e+00 0.0000e+00 4.6900e-01 7.1850e+00\n 6.1100e+01 4.9671e+00 2.0000e+00 2.4200e+02 1.7800e+01 3.9283e+02\n 4.0300e+00]\n [3.2370e-02 0.0000e+00 2.1800e+00 0.0000e+00 4.5800e-01 6.9980e+00\n 4.5800e+01 6.0622e+00 3.0000e+00 2.2200e+02 1.8700e+01 3.9463e+02\n 2.9400e+00]]\n"
]
],
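[
[
"A short sketch (not part of the original notebook, and assuming the cells above have been run): the fitted regression object can also produce predictions for reshaped observations with `.predict()`:\n\n```python\nsingle = np.reshape(df.loc[2, :].values, (1, -1))\nprint(regression.predict(single))\n```",
"_____no_output_____"
]
],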
[
[
"This concludes our quick run-through of some basic functionality of the modules. Later on, we will use more and more specialized functions and objects, but for now this allows you to play around with data already.",
"_____no_output_____"
],
[
"# Visualisation",
"_____no_output_____"
],
[
"The visualisations often require a bit of tricks and extra lines of code to make things look better. This is often confusing at first, but it will become more and more intuitive once you get the hang of how the general ideas work. We will be working mostly with Matplotlib (often imported as plt), Numpy (np), and pandas (pd). Often, both Matplotlib and pandas offer similar solutions, but one is often slightly more convenient than the other in various situations. Make sure to look up some of the alternatives, as they might also make more sense to you.",
"_____no_output_____"
]
],
[
[
"# First, we need to import our packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"## Pie and bar chart",
"_____no_output_____"
]
],
[
[
"# Data to plot\nlabels = 'classification', 'regression', 'time series'\nsizes = [10, 22, 2]\n\ncolors = ['lightblue', 'lightgreen', 'pink']\n\n# Allows us to highlight a certain piece of the pie chart\nexplode = (0.1, 0, 0) \n \n# Plot a pie chart with the pie() function. Notice how various parameters are given for coloring, labels, etc.\n# They should be relatively self-explanatory\nplt.pie(sizes, explode=explode, labels=labels, colors=colors,\n autopct='%1.1f%%', shadow=True, startangle=90)\n \n# This function makes the axes equal, so the circle is round\nplt.axis('equal')\n\n# Add a title to the plot\nplt.title(\"Pie chart of modelling techniques\")\n\n# Finally, show the plot\nplt.show()",
"_____no_output_____"
]
],
[
[
"Adding a legend:",
"_____no_output_____"
]
],
[
[
"patches, texts = plt.pie(sizes, colors=colors, shadow=True, startangle=90)\nplt.legend(patches, labels, loc=\"best\")\nplt.axis('equal')\nplt.title(\"Pie chart of modelling techniques\")\nplt.show()",
"_____no_output_____"
],
[
"# Bar charts are relatively similar. Here we use the bar() function\nplt.bar(labels, sizes, align='center')\nplt.xticks(labels)\nplt.ylabel('#use cases')\nplt.title('Bar chart of modelling technique')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Histogram",
"_____no_output_____"
]
],
[
[
"# This function plots a diagram with the 'data' object providing the data\n# bins are calculated automatically, as indicated by the 'auto' option, which makes them relatively balanced and\n# sets appropriate boundaries\n# color sets the color of the bars\n# the rwidth sets the bars to somewhat slightly less wide than the bins are wide to leave space between the bars\ndata = np.random.normal(10, 2, 1000)\nplt.hist(x= data, bins='auto', color='#008000', rwidth=0.85)\n\n# For more information on colour codes, please visit: https://htmlcolorcodes.com/\n\n# Additionally, some options are added:\n\n# This option sets the grid of the plot to follow the values on the y-axis\nplt.grid(axis='y')\n\n# Adds a label to the x-axis\nplt.xlabel('Value')\n\n# Adds a label to the y-axis\nplt.ylabel('Frequency')\n\n# Adds a title to the plot\nplt.title('Histogram of x')\n\n# Makes the plot visible in the program\nplt.show()",
"_____no_output_____"
],
[
"# Here, a different color and manually-specified bins are used\nplt.hist(x= data, bins=[0,1,2,3,4,5,6,7,8,9,10], color='olive', rwidth=0.85)\nplt.grid(axis='y')\nplt.xlabel('Value')\nplt.ylabel('Frequency')\nplt.title('Histogram of x and y')\nplt.show()",
"_____no_output_____"
]
],
[
[
"See how we cut the tail off the distribution.",
"_____no_output_____"
]
],
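If cutting the tail off is not what you want, one option (a sketch that assumes the same kind of N(10, 2) data as above; not code from the original notebook) is to derive the bin edges from the data range itself:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.normal(10, 2, 1000)                              # same distribution as above
edges = np.arange(np.floor(data.min()), np.ceil(data.max()) + 1)  # 1-wide bins covering all the data
plt.hist(data, bins=edges, color='olive', rwidth=0.85)
plt.title('Histogram without the cut-off tail')
plt.show()
```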
[
[
"# Now, let's build a histogram with radomly generated data that follows a normal distribution\n# Mean = 10, stddev = 15, sample size = 1,000\n# More on random numbers will follow in module 2\ns = np.random.normal(10, 15, 1000)\n\nplt.hist(x=s, bins='auto', color='#008000', rwidth=0.85)\nplt.grid(axis='y')\nplt.xlabel('Value')\nplt.ylabel('Frequency')\nplt.title('Histogram of x')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Boxplot",
"_____no_output_____"
]
],
[
[
"# Boxplots are even easier. We can just use the boxplot() function without many parameters\n# We use the implementation of Pandas, which relies on Matplotlib in the background\n# We now use subplots.\ndata = [3,8,3,4,1,7,5,3,8,2,7,3,1,6,10,10,3,6,5,10]\n# Subplot with 1 row, 2 columns, here we add figure 1 of 2 (first row, first column)\nplt.subplot(1,2,1) \nplt.boxplot(data)\n\ndata_2 = [3,8,3,4,1,7,5,3,8,2,7,3,1,6,10,10,3,6,5,10, 99,87,45,-20]\n# Here we add figure 2 of 2, hence it will be positioned in the second column of the first row\nplt.subplot(1,2,2) \nplt.boxplot(data_2)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Boxplot for multiple variables:",
"_____no_output_____"
]
],
[
[
"# Generate 4 columns with 10 observations\ndf = pd.DataFrame(data = np.random.random(size=(10,3)), columns = ['class.','reg.','time series'])\nprint(df)\n\nboxplot = df.boxplot()\nplt.title('Triple boxplot')\nplt.show()\n\ndf = pd.DataFrame(data = np.random.random(size=(10,3)), columns = ['class.','reg.','time series'])\ndf['number_of_runs'] = [0,0,0,1,1,2,0,1,2,0]\n\nboxplot = df.boxplot(by='number_of_runs')\nplt.show()",
" class. reg. time series\n0 0.402362 0.348025 0.893360\n1 0.496534 0.454527 0.631422\n2 0.268591 0.815153 0.371747\n3 0.596372 0.121358 0.591864\n4 0.575830 0.964928 0.908575\n5 0.380839 0.435604 0.488436\n6 0.788519 0.562830 0.303210\n7 0.424057 0.888664 0.476388\n8 0.699300 0.380225 0.776302\n9 0.463731 0.239730 0.686004\n"
]
],
[
[
"## Scatterplot",
"_____no_output_____"
]
],
[
[
"# We load the data gain\nx = [3,8,3,4,1,7,5,3,8,2,7,3,1,6,10,10,3,6,5,10]\ny = [10,7,2,7,5,4,2,3,4,1,5,7,8,4,10,2,3,4,5,6]\n\n# Here, we build a simple scatterplot of the two variables\nplt.scatter(x,y)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.title('Simple scatterplot')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Hard to tell which variable is what, but it gives an overall impression of the data.",
"_____no_output_____"
]
],
[
[
"# A simple line plot\n\n# We use the plot function for this. 'o-' indicates we want to use circles for markers and connect them with lines\nplt.plot(x,'o-',color='blue',)\n\n# Here we use 'x--' for cross-shaped markers connected with intermittent lines\nplt.plot(y,'x--',color='red')\nplt.xlabel('Time')\nplt.ylabel('Value')\nplt.title(\"x and y over time\")\n\n# This function sets the range limits for the x axis at 0 and 20\nplt.xlim(0,20)\n\n# Adding a grid\nplt.grid(True)\n\n# Adding markets on the x and y axis. We start at zero, make our way to 10 (the last integer is not included,\n# hence we use 21 and 11)\n# We add steps of 4 for the x axis, and 4 for the y axis\nplt.xticks(range(0,21,4))\nplt.yticks(range(0,11,2))\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b5542d30719c22bf402bd1a2089fde47fe12d8 | 83,861 | ipynb | Jupyter Notebook | regression/regression-ud.ipynb | blockchain99/ml-udacity | cfb32b529787f624e39b94f8d26bf71877fddb40 | [
"MIT"
] | 13 | 2016-04-29T07:21:44.000Z | 2021-09-29T03:20:51.000Z | regression/regression-ud.ipynb | blockchain99/ml-udacity | cfb32b529787f624e39b94f8d26bf71877fddb40 | [
"MIT"
] | 1 | 2017-02-07T07:37:20.000Z | 2017-02-19T08:37:17.000Z | ml-udacity/regression/regression-ud.ipynb | napjon/moocs_solution | 5c96f43f6cb2ae643f482580446869953a99beb6 | [
"MIT"
] | 13 | 2016-01-25T03:23:57.000Z | 2019-10-13T15:29:23.000Z | 95.405006 | 26,153 | 0.820632 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e7b56471b71f83cc31451c8ed6244516eb082198 | 23,571 | ipynb | Jupyter Notebook | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan | 6ffc4d011f4c3dfb8148cc12280da7df8ae58692 | [
"MIT"
] | null | null | null | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan | 6ffc4d011f4c3dfb8148cc12280da7df8ae58692 | [
"MIT"
] | null | null | null | 4-5 Applied Text Mining in Python/Assignment 4.ipynb | MLunov/Applied-Data-Science-with-Python-Specialization-Michigan | 6ffc4d011f4c3dfb8148cc12280da7df8ae58692 | [
"MIT"
] | null | null | null | 34.21045 | 442 | 0.536634 | [
[
[
"---\n\n_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._\n\n---",
"_____no_output_____"
],
[
"# Assignment 4 - Document Similarity & Topic Modelling",
"_____no_output_____"
],
[
"## Part 1 - Document Similarity\n\nFor the first part of this assignment, you will complete the functions `doc_to_synsets` and `similarity_score` which will be used by `document_path_similarity` to find the path similarity between two documents.\n\nThe following functions are provided:\n* **`convert_tag:`** converts the tag given by `nltk.pos_tag` to a tag used by `wordnet.synsets`. You will need to use this function in `doc_to_synsets`.\n* **`document_path_similarity:`** computes the symmetrical path similarity between two documents by finding the synsets in each document using `doc_to_synsets`, then computing similarities using `similarity_score`.\n\nYou will need to finish writing the following functions:\n* **`doc_to_synsets:`** returns a list of synsets in document. This function should first tokenize and part of speech tag the document using `nltk.word_tokenize` and `nltk.pos_tag`. Then it should find each tokens corresponding synset using `wn.synsets(token, wordnet_tag)`. The first synset match should be used. If there is no match, that token is skipped.\n* **`similarity_score:`** returns the normalized similarity score of a list of synsets (s1) onto a second list of synsets (s2). For each synset in s1, find the synset in s2 with the largest similarity value. Sum all of the largest similarity values together and normalize this value by dividing it by the number of largest similarity values found. Be careful with data types, which should be floats. Missing values should be ignored.\n\nOnce `doc_to_synsets` and `similarity_score` have been completed, submit to the autograder which will run `test_document_path_similarity` to test that these functions are running correctly. \n\n*Do not modify the functions `convert_tag`, `document_path_similarity`, and `test_document_path_similarity`.*",
"_____no_output_____"
]
],
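A minimal sketch of the two building blocks described above may help; it is illustrative only, assumes the WordNet corpus has already been downloaded, and the printed cat/dog value is approximate.

```python
from nltk.corpus import wordnet as wn

# Path similarity between two individual synsets (first synset match for each word)
cat = wn.synsets('cat')[0]
dog = wn.synsets('dog')[0]
print(cat.path_similarity(dog))              # a value in (0, 1]; roughly 0.2 for these two synsets

# The normalisation step: average the best match found for each synset in s1
best_matches = [1.0, 0.5, 0.25]              # hypothetical largest similarity values
print(sum(best_matches) / len(best_matches)) # 0.5833...
```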
[
[
"import numpy as np\nimport nltk\n\nnltk.download('punkt')\nnltk.download('averaged_perceptron_tagger')\nnltk.download('wordnet')\n\nfrom nltk.corpus import wordnet as wn\nimport pandas as pd\n\n\ndef convert_tag(tag):\n \"\"\"Convert the tag given by nltk.pos_tag to the tag used by wordnet.synsets\"\"\"\n \n tag_dict = {'N': 'n', 'J': 'a', 'R': 'r', 'V': 'v'}\n try:\n return tag_dict[tag[0]]\n except KeyError:\n return None\n\n\ndef doc_to_synsets(doc):\n \"\"\"\n Returns a list of synsets in document.\n\n Tokenizes and tags the words in the document doc.\n Then finds the first synset for each word/tag combination.\n If a synset is not found for that combination it is skipped.\n\n Args:\n doc: string to be converted\n\n Returns:\n list of synsets\n\n Example:\n doc_to_synsets('Fish are nvqjp friends.')\n Out: [Synset('fish.n.01'), Synset('be.v.01'), Synset('friend.n.01')]\n \"\"\"\n \n # Your Code Here\n tokens = nltk.word_tokenize(doc)\n tags = [tag[1] for tag in nltk.pos_tag(tokens)]\n wordnet_tags = [convert_tag(tag) for tag in tags]\n synsets = [wn.synsets(token, wordnet_tag) for token, wordnet_tag in list(zip(tokens, wordnet_tags))]\n answer = [i[0] for i in synsets if len(i) > 0]\n \n return answer # Your Answer Here\n\n\ndef similarity_score(s1, s2):\n \"\"\"\n Calculate the normalized similarity score of s1 onto s2\n\n For each synset in s1, finds the synset in s2 with the largest similarity value.\n Sum of all of the largest similarity values and normalize this value by dividing it by the\n number of largest similarity values found.\n\n Args:\n s1, s2: list of synsets from doc_to_synsets\n\n Returns:\n normalized similarity score of s1 onto s2\n\n Example:\n synsets1 = doc_to_synsets('I like cats')\n synsets2 = doc_to_synsets('I like dogs')\n similarity_score(synsets1, synsets2)\n Out: 0.73333333333333339\n \"\"\"\n \n \n # Your Code Here\n \n lvs = [] # largest similarity values\n for i1 in s1:\n scores=[x for x in [i1.path_similarity(i2) for i2 in s2] if x is not None]\n if scores:\n lvs.append(max(scores))\n \n return sum(lvs) / len(lvs)# Your Answer Here\n\n\ndef document_path_similarity(doc1, doc2):\n \"\"\"Finds the symmetrical similarity between doc1 and doc2\"\"\"\n\n synsets1 = doc_to_synsets(doc1)\n synsets2 = doc_to_synsets(doc2)\n\n return (similarity_score(synsets1, synsets2) + similarity_score(synsets2, synsets1)) / 2",
"[nltk_data] Downloading package punkt to /home/jovyan/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n[nltk_data] Downloading package averaged_perceptron_tagger to\n[nltk_data] /home/jovyan/nltk_data...\n[nltk_data] Package averaged_perceptron_tagger is already up-to-\n[nltk_data] date!\n[nltk_data] Downloading package wordnet to /home/jovyan/nltk_data...\n[nltk_data] Package wordnet is already up-to-date!\n"
]
],
[
[
"### test_document_path_similarity\n\nUse this function to check if doc_to_synsets and similarity_score are correct.\n\n*This function should return the similarity score as a float.*",
"_____no_output_____"
]
],
[
[
"def test_document_path_similarity():\n doc1 = 'This is a function to test document_path_similarity.'\n doc2 = 'Use this function to see if your code in doc_to_synsets \\\n and similarity_score is correct!'\n return document_path_similarity(doc1, doc2)",
"_____no_output_____"
],
[
"test_document_path_similarity()",
"_____no_output_____"
]
],
[
[
"<br>\n___\n`paraphrases` is a DataFrame which contains the following columns: `Quality`, `D1`, and `D2`.\n\n`Quality` is an indicator variable which indicates if the two documents `D1` and `D2` are paraphrases of one another (1 for paraphrase, 0 for not paraphrase).",
"_____no_output_____"
]
],
[
[
"# Use this dataframe for questions most_similar_docs and label_accuracy\nparaphrases = pd.read_csv('paraphrases.csv')\nparaphrases.head()",
"_____no_output_____"
]
],
[
[
"___\n\n### most_similar_docs\n\nUsing `document_path_similarity`, find the pair of documents in paraphrases which has the maximum similarity score.\n\n*This function should return a tuple `(D1, D2, similarity_score)`*",
"_____no_output_____"
]
],
[
[
"def most_similar_docs():\n \n # Your Code Here\n \n return max(map(document_path_similarity, paraphrases['D1'], paraphrases['D2'])) # Your Answer Here",
"_____no_output_____"
],
[
"most_similar_docs()",
"_____no_output_____"
]
],
[
[
"### label_accuracy\n\nProvide labels for the twenty pairs of documents by computing the similarity for each pair using `document_path_similarity`. Let the classifier rule be that if the score is greater than 0.75, label is paraphrase (1), else label is not paraphrase (0). Report accuracy of the classifier using scikit-learn's accuracy_score.\n\n*This function should return a float.*",
"_____no_output_____"
]
],
[
[
"def label_accuracy():\n from sklearn.metrics import accuracy_score\n\n paraphrases['labels'] = [1 if i > 0.75 else 0 for i in map(document_path_similarity, paraphrases['D1'], paraphrases['D2'])] # Your Code Here\n \n return accuracy_score(paraphrases['Quality'], paraphrases['labels']) # Your Answer Here",
"_____no_output_____"
],
[
"label_accuracy()",
"_____no_output_____"
]
],
[
[
"## Part 2 - Topic Modelling\n\nFor the second part of this assignment, you will use Gensim's LDA (Latent Dirichlet Allocation) model to model topics in `newsgroup_data`. You will first need to finish the code in the cell below by using gensim.models.ldamodel.LdaModel constructor to estimate LDA model parameters on the corpus, and save to the variable `ldamodel`. Extract 10 topics using `corpus` and `id_map`, and with `passes=25` and `random_state=34`.",
"_____no_output_____"
]
],
[
[
"import pickle\nimport gensim\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n# Load the list of documents\nwith open('newsgroups', 'rb') as f:\n newsgroup_data = pickle.load(f)\n\n# Use CountVectorizor to find three letter tokens, remove stop_words, \n# remove tokens that don't appear in at least 20 documents,\n# remove tokens that appear in more than 20% of the documents\nvect = CountVectorizer(min_df=20, max_df=0.2, stop_words='english', \n token_pattern='(?u)\\\\b\\\\w\\\\w\\\\w+\\\\b')\n# Fit and transform\nX = vect.fit_transform(newsgroup_data)\n\n# Convert sparse matrix to gensim corpus.\ncorpus = gensim.matutils.Sparse2Corpus(X, documents_columns=False)\n\n# Mapping from word IDs to words (To be used in LdaModel's id2word parameter)\nid_map = dict((v, k) for k, v in vect.vocabulary_.items())\n",
"_____no_output_____"
],
[
"# Use the gensim.models.ldamodel.LdaModel constructor to estimate \n# LDA model parameters on the corpus, and save to the variable `ldamodel`\n\n# Your code here:\nldamodel = gensim.models.ldamodel.LdaModel(corpus=corpus, num_topics=10, id2word=id_map, passes=25, random_state=34)",
"_____no_output_____"
]
],
[
[
"### lda_topics\n\nUsing `ldamodel`, find a list of the 10 topics and the most significant 10 words in each topic. This should be structured as a list of 10 tuples where each tuple takes on the form:\n\n`(9, '0.068*\"space\" + 0.036*\"nasa\" + 0.021*\"science\" + 0.020*\"edu\" + 0.019*\"data\" + 0.017*\"shuttle\" + 0.015*\"launch\" + 0.015*\"available\" + 0.014*\"center\" + 0.014*\"sci\"')`\n\nfor example.\n\n*This function should return a list of tuples.*",
"_____no_output_____"
]
],
[
[
"def lda_topics():\n \n # Your Code Here\n \n return ldamodel.print_topics(num_topics=10, num_words=10) # Your Answer Here",
"_____no_output_____"
],
[
"lda_topics()",
"_____no_output_____"
]
],
[
[
"### topic_distribution\n\nFor the new document `new_doc`, find the topic distribution. Remember to use vect.transform on the the new doc, and Sparse2Corpus to convert the sparse matrix to gensim corpus.\n\n*This function should return a list of tuples, where each tuple is `(#topic, probability)`*",
"_____no_output_____"
]
],
[
[
"new_doc = [\"\\n\\nIt's my understanding that the freezing will start to occur because \\\nof the\\ngrowing distance of Pluto and Charon from the Sun, due to it's\\nelliptical orbit. \\\nIt is not due to shadowing effects. \\n\\n\\nPluto can shadow Charon, and vice-versa.\\n\\nGeorge \\\nKrumins\\n-- \"]",
"_____no_output_____"
],
[
"def topic_distribution():\n \n # Your Code Here\n \n # Transform\n X = vect.transform(new_doc)\n\n # Convert sparse matrix to gensim corpus.\n corpus = gensim.matutils.Sparse2Corpus(X, documents_columns=False)\n \n return list(ldamodel[corpus])[0] # Your Answer Here",
"_____no_output_____"
],
[
"topic_distribution()",
"_____no_output_____"
]
],
[
[
"### topic_names\n\nFrom the list of the following given topics, assign topic names to the topics you found. If none of these names best matches the topics you found, create a new 1-3 word \"title\" for the topic.\n\nTopics: Health, Science, Automobiles, Politics, Government, Travel, Computers & IT, Sports, Business, Society & Lifestyle, Religion, Education.\n\n*This function should return a list of 10 strings.*",
"_____no_output_____"
]
],
[
[
"def topic_names():\n \n # Your Code Here\n \n return ['Automobiles', 'Health', 'Science',\n 'Politics',\n 'Sports',\n 'Business', 'Society & Lifestyle',\n 'Religion', 'Education', 'Computers & IT'] # Your Answer Here",
"_____no_output_____"
],
[
"topic_names()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7b564cf933121f358c885086e871249b87fa316 | 711,290 | ipynb | Jupyter Notebook | notebooks/DMA8 - NOAA weather events by coords.ipynb | KimDuclos/liveSafe-data | a2ac3b2ad95e43a475bfaf2dda4dcafbcbc0437b | [
"MIT"
] | 4 | 2019-04-24T23:48:13.000Z | 2019-05-26T01:59:43.000Z | notebooks/DMA8 - NOAA weather events by coords.ipynb | labs12-should-i-live-here/DS | a2ac3b2ad95e43a475bfaf2dda4dcafbcbc0437b | [
"MIT"
] | 3 | 2019-04-29T19:07:35.000Z | 2019-05-09T18:02:10.000Z | notebooks/DMA8 - NOAA weather events by coords.ipynb | labs12-should-i-live-here/DS | a2ac3b2ad95e43a475bfaf2dda4dcafbcbc0437b | [
"MIT"
] | null | null | null | 469.49835 | 546,012 | 0.930156 | [
[
[
"# NOAA extreme weather events\nThe [National Oceanic and Atmospheric Administration](https://en.wikipedia.org/wiki/National_Oceanic_and_Atmospheric_Administration) has a database of extreme weather events that contains lots of detail for every year ([Link](https://www.climate.gov/maps-data/dataset/severe-storms-and-extreme-events-data-table)). In this notebook I will create map files for individual weather events, mapped to their coordinates.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport random\nimport geopandas\nimport matplotlib.pyplot as plt\npd.set_option('display.max_columns', None) # Unlimited columns\n\n# Custom function for displaying the shape and head of a dataframe\ndef display(df, n=5):\n print(df.shape)\n return df.head(n) ",
"_____no_output_____"
]
],
[
[
"# Get map of US counties",
"_____no_output_____"
]
],
[
[
"# Import a shape file with all the counties in the US.\n# Note how it doesn't include all the same territories as the \n# quake contour map.\ncounties = geopandas.read_file('../data_input/1_USCounties/')\n\n# Turn state codes from strings to integers\nfor col in ['STATE_FIPS', 'CNTY_FIPS', 'FIPS']:\n counties[col] = counties[col].astype(int)",
"_____no_output_____"
]
],
[
[
"# Process NOAA data for one year only\nAs a starting point that I'll generalize later.",
"_____no_output_____"
]
],
[
[
"# Get NOAA extreme weather event data for one year\ndf1 = pd.read_csv('../data_local/NOAA/StormEvents_details-ftp_v1.0_d2018_c20190422.csv')\nprint(df1.shape)\nprint(df1.columns)\ndf1.head(2)",
"(62169, 51)\nIndex(['BEGIN_YEARMONTH', 'BEGIN_DAY', 'BEGIN_TIME', 'END_YEARMONTH',\n 'END_DAY', 'END_TIME', 'EPISODE_ID', 'EVENT_ID', 'STATE', 'STATE_FIPS',\n 'YEAR', 'MONTH_NAME', 'EVENT_TYPE', 'CZ_TYPE', 'CZ_FIPS', 'CZ_NAME',\n 'WFO', 'BEGIN_DATE_TIME', 'CZ_TIMEZONE', 'END_DATE_TIME',\n 'INJURIES_DIRECT', 'INJURIES_INDIRECT', 'DEATHS_DIRECT',\n 'DEATHS_INDIRECT', 'DAMAGE_PROPERTY', 'DAMAGE_CROPS', 'SOURCE',\n 'MAGNITUDE', 'MAGNITUDE_TYPE', 'FLOOD_CAUSE', 'CATEGORY', 'TOR_F_SCALE',\n 'TOR_LENGTH', 'TOR_WIDTH', 'TOR_OTHER_WFO', 'TOR_OTHER_CZ_STATE',\n 'TOR_OTHER_CZ_FIPS', 'TOR_OTHER_CZ_NAME', 'BEGIN_RANGE',\n 'BEGIN_AZIMUTH', 'BEGIN_LOCATION', 'END_RANGE', 'END_AZIMUTH',\n 'END_LOCATION', 'BEGIN_LAT', 'BEGIN_LON', 'END_LAT', 'END_LON',\n 'EPISODE_NARRATIVE', 'EVENT_NARRATIVE', 'DATA_SOURCE'],\n dtype='object')\n"
],
[
"# Extract only a few useful columns\ndf2 = df1[['TOR_F_SCALE','EVENT_TYPE','BEGIN_LAT','BEGIN_LON']].copy()\n\n# Remove any rows with null coordinates\ndf2 = df2.dropna(subset=['BEGIN_LAT','BEGIN_LON'])\n\n# Create geoDF of all the points \ndf3 = geopandas.GeoDataFrame(\n df2, geometry=geopandas.points_from_xy(df2.BEGIN_LON, df2.BEGIN_LAT))\n\n# Trim the list of events to only include those that happened within one of our official counties.\ndf4 = geopandas.sjoin(df3, counties, how='left', op='within').dropna(subset=['FIPS'])\n\n# Drop useless columns\ndf4 = df4[['TOR_F_SCALE','EVENT_TYPE','geometry']]\n\n# Add new columns for event categories\n\n\nflood_types =['Flood','Flash Flood','Coastal Flood',\n 'Storm Surge/Tide','Lakeshore Flood','Debris Flow'] \ndf4['Flood'] = df4['EVENT_TYPE'].isin(flood_types)\n\n\n\nstorm_types = ['Thunderstorm Wind','Marine Thunderstorm Wind','Marine High Wind',\n 'High Wind','Funnel Cloud','Dust Storm',\n 'Strong Wind','Dust Devil','Tropical Depression','Lightning',\n 'Tropical Storm','High Surf','Heavy Rain','Hail','Marine Hail',\n 'Marine Strong Wind','Waterspout']\ndf4['Storm'] = df4['EVENT_TYPE'].isin(storm_types)\n\n\ndf4['Tornado'] = df4['EVENT_TYPE'].isin(['Tornado'])\n\n\n# Reorganize columns\ntype_columns = ['Storm','Flood','Tornado']\ndf4 = df4[['TOR_F_SCALE','EVENT_TYPE','geometry'] + type_columns]\n\ndisplay(df4)",
"(35940, 6)\n"
],
[
"# Plot over a map of US counties\nfig, ax = plt.subplots(figsize=(20,20))\ncounties.plot(ax=ax, color='white', edgecolor='black');\ndf4.plot(ax=ax, marker='o')\n# ax.set_xlim(-125,-114)\nax.set_ylim(15,75)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## NOAA file processing function\nGeneralize the previous operations so they can apply to the data for any year",
"_____no_output_____"
]
],
[
[
"def process_noaa(filepath):\n \"\"\"\n Process one year of NOAA Extreme weather events. Requires\n the list of official counties and the list of official weather\n event types.\n \n Inputs\n ------\n filepath (string) : file path for the list of events from one year.\n \n \n Outputs\n -------\n result (pandas.DataFrame) : Dataframe each event for that year, with\n boolean columns for each event category.\n \n \"\"\"\n \n df1 = pd.read_csv(filepath)\n \n # Extract only a few useful columns\n df2 = df1[['TOR_F_SCALE','EVENT_TYPE','BEGIN_LAT','BEGIN_LON']].copy()\n\n # Remove any rows with null coordinates\n df2 = df2.dropna(subset=['BEGIN_LAT','BEGIN_LON'])\n\n # Create geoDF of all the points \n df3 = geopandas.GeoDataFrame(\n df2, geometry=geopandas.points_from_xy(df2.BEGIN_LON, df2.BEGIN_LAT))\n\n # Trim the list of events to only include those that happened within one of our official counties.\n df4 = geopandas.sjoin(df3, counties, how='left', op='within').dropna(subset=['FIPS'])\n\n # Drop useless columns\n df4 = df4[['TOR_F_SCALE','EVENT_TYPE','geometry']]\n\n # Add new columns for event categories\n\n\n flood_types =['Flood','Flash Flood','Coastal Flood',\n 'Storm Surge/Tide','Lakeshore Flood','Debris Flow'] \n df4['Flood'] = df4['EVENT_TYPE'].isin(flood_types)\n\n\n\n storm_types = ['Thunderstorm Wind','Marine Thunderstorm Wind','Marine High Wind',\n 'High Wind','Funnel Cloud','Dust Storm',\n 'Strong Wind','Dust Devil','Tropical Depression','Lightning',\n 'Tropical Storm','High Surf','Heavy Rain','Hail','Marine Hail',\n 'Marine Strong Wind','Waterspout']\n df4['Storm'] = df4['EVENT_TYPE'].isin(storm_types)\n\n\n df4['Tornado'] = df4['EVENT_TYPE'].isin(['Tornado'])\n\n\n # Reorganize columns\n type_columns = ['Storm','Flood','Tornado']\n df4 = df4[['TOR_F_SCALE','EVENT_TYPE','geometry'] + type_columns]\n \n # Add a column for the year of this file\n year = int(filepath[49:53])\n df4['year'] = year\n\n return df4",
"_____no_output_____"
],
[
"# Example\ntest_2018 = process_noaa('../data_local/NOAA/StormEvents_details-ftp_v1.0_d2018_c20190422.csv')\ndisplay(test_2018)",
"(35940, 7)\n"
],
[
"# These are the extreme weather events recorded in 2018\ntest_2018[type_columns].sum().sort_values(ascending=False)",
"_____no_output_____"
]
],
[
[
"# Process all the available data",
"_____no_output_____"
]
],
[
[
"import glob\nimport os\n\n# Read the CSV files for each year going back to 1996 (the first year \n# when many of these event types started being recorded)\npath = '../data_local/NOAA/'\nfilenames = sorted(glob.glob(os.path.join(path, '*.csv')))\nlayers = []\n\n# Aggregate the dataframes in a list\nfor name in filenames:\n year = int(name[49:53])\n print(f'Processing {year}')\n layers.append(process_noaa(name))\n\n# Concatenate all these dataframes into a single dataframe\nnoaa = pd.concat(layers)",
"Processing 1996\nProcessing 1997\nProcessing 1998\nProcessing 1999\nProcessing 2000\nProcessing 2001\nProcessing 2002\nProcessing 2003\nProcessing 2004\nProcessing 2005\nProcessing 2006\nProcessing 2007\nProcessing 2008\nProcessing 2009\nProcessing 2010\nProcessing 2011\nProcessing 2012\nProcessing 2013\nProcessing 2014\nProcessing 2015\nProcessing 2016\nProcessing 2017\nProcessing 2018\n"
],
[
"display(noaa)",
"(731944, 7)\n"
],
[
"# total events per type\nnoaa[type_columns].sum()",
"_____no_output_____"
],
[
"# Aggregate event types into different geopandas dataframes.\nstorms = noaa[noaa['Storm']][['EVENT_TYPE','year','geometry']].reset_index(drop=True)\nfloods = noaa[noaa['Flood']][['EVENT_TYPE','year','geometry']].reset_index(drop=True)\ntornadoes = noaa[noaa['Tornado']][['TOR_F_SCALE','year','geometry']].reset_index(drop=True)\n\nstorms.shape, floods.shape, tornadoes.shape",
"_____no_output_____"
]
],
[
[
"## Process tornado data\nIn 2007, the National Weather Service (NWS) switched their scale for measuring tornado intensity, from the Fujita (F) scale to the Enhanced Fujita (EF) scale. I will lump them together here and just make a note for the user that the scale means something slightly different before and after 2007. Also, I'll cast unknown magnitudes (EFU) as if they were EF0.",
"_____no_output_____"
]
],
[
[
"# Tornadoes by magnitude, using the NWS's original labels.\n# Notice the two different scales and also a label for 'unknown'\ntornadoes.TOR_F_SCALE.value_counts()",
"_____no_output_____"
],
[
"# Function that extracts the scale level and sets unkwnown to zero.\ndef process_fujita(x):\n if x[-1] == 'U':\n return 0\n else:\n return int(x[-1])\n\ntornadoes['intensity'] = tornadoes['TOR_F_SCALE'].apply(process_fujita)\ntornadoes = tornadoes.drop(columns='TOR_F_SCALE')\n\ndisplay(tornadoes)",
"(30898, 3)\n"
],
[
"# Distribution of tornado intensities.\ntornadoes.intensity.hist();",
"_____no_output_____"
]
],
[
[
"# Visualizing the data",
"_____no_output_____"
]
],
[
[
"# Sample of 2000 storms in the Lower48\nfig, ax = plt.subplots(figsize=(20,20))\ncounties.plot(ax=ax, color='white', edgecolor='black');\nstorms.sample(2000).plot(ax=ax, marker='o')\nax.set_xlim(-125.0011,-66.9326)\nax.set_ylim(24.9493, 49.5904)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Floods and tornadoes show basically the same distribution, so I won't plot them separately. For reference, this is what the dataframes that we're about to export look like.",
"_____no_output_____"
]
],
[
[
"display(storms)",
"(622110, 3)\n"
],
[
"display(floods)",
"(78936, 3)\n"
],
[
"display(tornadoes)",
"(30898, 3)\n"
]
],
[
[
"# Export!",
"_____no_output_____"
]
],
[
[
"storms.to_file(\"../data_output/5__NOAA/storms.geojson\",\n driver='GeoJSON')\nfloods.to_file(\"../data_output/5__NOAA/floods.geojson\",\n driver='GeoJSON')\ntornadoes.to_file(\"../data_output/5__NOAA/tornadoes.geojson\",\n driver='GeoJSON')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b56a3661e74b46c51a60aa1735776cb7a1cce7 | 252,618 | ipynb | Jupyter Notebook | examples/simpsons_000paradox.ipynb | csbrown/pylomo | 377aa386427a32da8b42fe53aacbe3281fbf2bf6 | [
"MIT"
] | null | null | null | examples/simpsons_000paradox.ipynb | csbrown/pylomo | 377aa386427a32da8b42fe53aacbe3281fbf2bf6 | [
"MIT"
] | 1 | 2020-04-01T17:41:36.000Z | 2020-04-01T17:41:36.000Z | examples/simpsons_000paradox.ipynb | csbrown/pylomo | 377aa386427a32da8b42fe53aacbe3281fbf2bf6 | [
"MIT"
] | null | null | null | 388.643077 | 61,446 | 0.92919 | [
[
[
"import local_models.local_models as local_models\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.linear_model\nimport sklearn.cluster\nfrom importlib import reload\nfrom ml_battery.utils import cmap\nimport matplotlib as mpl\nreload(local_models)\nmpl.rcParams['figure.figsize'] = [8.0, 8.0]\nmpl.rcParams['font.size'] = 20\nmpl.rcParams['figure.autolayout'] = True\nnp.random.seed(1)",
"_____no_output_____"
],
[
"n = 100\nsigma = 0.05\nk = 15",
"_____no_output_____"
],
[
"X1 = np.linspace(1.,3.,n)\nX2 = np.linspace(0.,2.,n)",
"_____no_output_____"
],
[
"y1 = -3*X1 + 8 + np.random.normal(0,sigma,n)",
"_____no_output_____"
],
[
"y2 = -3*X2 + 0 + np.random.normal(0,sigma,n)",
"_____no_output_____"
],
[
"colors = np.concatenate((np.ones(n), np.zeros(n)))",
"_____no_output_____"
],
[
"y = np.concatenate((y1,y2))\nx = np.concatenate((X1,X2))",
"_____no_output_____"
],
[
"print(x.shape, y.shape)",
"(200,) (200,)\n"
],
[
"plt.scatter(x,y,c=cmap(colors))\nplt.show()",
"/usr/local/lib/python3.5/dist-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not compatible \"\n"
],
[
"lr = sklearn.linear_model.LinearRegression()\nlr.fit(x.reshape((-1,1)),y)\nxx = np.linspace(0,4,100)\npred = lr.predict(xx.reshape((-1,1)))\nplt.plot(xx, pred, c='r')\nplt.scatter(x,y)#,c=cmap(colors))\nplt.xlabel('x')\nplt.ylabel('y')\nplt.savefig(\"parallel_lines_w_regression.png\")\nplt.show()",
"/usr/local/lib/python3.5/dist-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not compatible \"\n"
],
[
"km = sklearn.cluster.KMeans(2)\nkm.fit(x.reshape((-1,1)))\npred = km.predict(x.reshape((-1,1)))\nplt.scatter(x,y,c=cmap(pred))\nplt.show()",
"/usr/local/lib/python3.5/dist-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not compatible \"\n"
],
[
"models = local_models.LocalModels(sklearn.linear_model.LinearRegression())",
"_____no_output_____"
],
[
"models.fit(x.reshape((-1,1)), y, np.stack((x,y)).T)",
"_____no_output_____"
],
[
"lm_params = models.transform(np.stack((x,y)).T,k=k)",
"_____no_output_____"
],
[
"lr.__dict__",
"_____no_output_____"
],
[
"plt.scatter(lm_params[:,0], lm_params[:,1])\nplt.scatter([lr.coef_], [lr.intercept_], c='r')\nplt.xlabel('m')\nplt.ylabel('b')\nplt.savefig('parallel_lines_local_linear_feature_transform.png')\nplt.show()",
"/usr/local/lib/python3.5/dist-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not compatible \"\n"
],
[
"clf = sklearn.cluster.KMeans(2)",
"_____no_output_____"
],
[
"clf.fit(lm_params)",
"_____no_output_____"
],
[
"plt.scatter(x,y,c=cmap(clf.predict(lm_params)))\nplt.show()",
"/usr/local/lib/python3.5/dist-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not compatible \"\n"
],
[
"g = np.mgrid[0:4:0.02, -6:6:0.06]\nxx = np.vstack(map(np.ravel, g)).T",
"_____no_output_____"
],
[
"pred = clf.predict(models.transform(xx,k=k))",
"_____no_output_____"
],
[
"pred.shape",
"_____no_output_____"
],
[
"cc = np.array(list(map(mpl.colors.to_rgb, np.unique(cmap(pred)))))\nplt.contourf(g[0], g[1], pred.reshape(g[0].shape), 1, colors=cc)\nplt.scatter(x,y, c=cmap(clf.predict(lm_params)), linewidths=1, edgecolors='k')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.savefig('parallel_lines_kmeans_w_local_linear_features.png')\nplt.show()",
"/usr/local/lib/python3.5/dist-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not compatible \"\n"
],
[
"clf2 = sklearn.cluster.KMeans(2)",
"_____no_output_____"
],
[
"clf2.fit(np.stack((x,y)).T)",
"_____no_output_____"
],
[
"pred2 = clf2.predict(xx)",
"_____no_output_____"
],
[
"plt.scatter(xx[:,0], xx[:,1], c=cmap(pred2), s=6)\nplt.scatter(x,y, c=cmap(clf2.predict(np.stack((x,y)).T)), linewidths=1, edgecolors=\"k\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.savefig('parallel_lines_kmeans.png')\nplt.show()",
"/usr/local/lib/python3.5/dist-packages/matplotlib/figure.py:1999: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not compatible \"\n"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\nimport matplotlib.animation as animation\nfrom IPython.display import HTML\n%matplotlib inline",
"_____no_output_____"
],
[
"print(animation.writers.list())",
"['imagemagick_file', 'ffmpeg_file', 'imagemagick', 'ffmpeg', 'html']\n"
],
[
"xdata, ydata = x, y\noffsets = np.stack((xdata, ydata)).T\nmultioffsets = []\nfor i in range(100):\n offsets[:,1] += np.random.normal(0,0.05,offsets.shape[0])\n multioffsets.append(np.copy(offsets))\n\nfig = plt.figure()\ndecision_surface = plt.scatter(xx[:,0], xx[:,1], c=cmap(pred), s=6, animated=True)\nscat = plt.scatter(x,y, c=cmap(clf.predict(lm_params)), linewidths=1, edgecolors='k', animated=True)\n\ndef init():\n return scat, decision_surface\n\ndef update(frame):\n offsets = multioffsets[frame]\n models.fit(offsets[:,:1], offsets[:,1:], offsets)\n pred = clf.predict(models.transform(xx,k=k))\n decision_surface.set_color(cmap(pred))\n scat.set_offsets(np.copy(multioffsets[frame]))\n return scat, decision_surface\n\n\nani = FuncAnimation(fig, update, frames=range(100),\n init_func=init, blit=True)\n\n# Set up formatting for the movie files\nWriter = animation.writers['ffmpeg']\nwriter = Writer(fps=15, metadata=dict(artist='CScott!'), bitrate=1800)\nani.save('ani_k10_double.mp4', writer=writer)\n\nHTML(ani.to_html5_video())\n#offsets[:,1] += np.random.normal(0,0.05,offsets.shape[0])\n#scat.set_offsets(offsets)\n#scat",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b579ab6a9d5dd5705152d8648e7b0775fa39a4 | 143,825 | ipynb | Jupyter Notebook | code/clustering_and_classification/1D_CNN.ipynb | iotanalytics/IoTTutorial | 33666ca918cdece60df4684f0a2ec3465e9663b6 | [
"MIT"
] | 3 | 2021-07-20T18:02:51.000Z | 2021-08-18T13:26:57.000Z | code/clustering_and_classification/1D_CNN.ipynb | iotanalytics/IoTTutorial | 33666ca918cdece60df4684f0a2ec3465e9663b6 | [
"MIT"
] | null | null | null | code/clustering_and_classification/1D_CNN.ipynb | iotanalytics/IoTTutorial | 33666ca918cdece60df4684f0a2ec3465e9663b6 | [
"MIT"
] | null | null | null | 416.884058 | 106,508 | 0.911983 | [
[
[
"<a href=\"https://colab.research.google.com/github/iotanalytics/IoTTutorial/blob/main/code/clustering_and_classification/1D_CNN.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## **1D Convolutional Neural Networks**\n\n\"A Convolutional Neural Network (ConvNet/CNN) is a Deep Learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image and be able to differentiate one from the other. The pre-processing required in a ConvNet is much lower as compared to other classification algorithms. While in primitive methods filters are hand-engineered, with enough training, ConvNets have the ability to learn these filters/characteristics.\" [4]\n\n\"The architecture of a ConvNet is analogous to that of the connectivity pattern of Neurons in the Human Brain and was inspired by the organization of the Visual Cortex. Individual neurons respond to stimuli only in a restricted region of the visual field known as the Receptive Field. A collection of such fields overlap to cover the entire visual area.\" [4]\n\n\"Convolutional neural network models were developed for image classification problems, where the model learns an internal representation of a two-dimensional input, in a process referred to as feature learning.\" [1]\n\n\"This same process can be harnessed on one-dimensional sequences of data, such as in the case of acceleration and gyroscopic data for human activity recognition. The model learns to extract features from sequences of observations and how to map the internal features to different activity types.\" [1]\n\n\"The benefit of using CNNs for sequence classification is that they can learn from the raw time series data directly, and in turn do not require domain expertise to manually engineer input features. The model can learn an internal representation of the time series data and ideally achieve comparable performance to models fit on a version of the dataset with engineered features.\" [1]\n\n**Convolutional Neural Network Architecture**\n\"A CNN typically has three layers: a convolutional layer, a pooling layer, and a fully connected layer.\" [5]\n\n\n\n**Convolution Layer**\n\"The convolution layer is the core building block of the CNN. It carries the main portion of the network’s computational load.\" [5]\n\n\"This layer performs a dot product between two matrices, where one matrix is the set of learnable parameters otherwise known as a kernel, and the other matrix is the restricted portion of the receptive field. The kernel is spatially smaller than an image but is more in-depth. This means that, if the image is composed of three (RGB) channels, the kernel height and width will be spatially small, but the depth extends up to all three channels.\" [5]\n\n\"During the forward pass, the kernel slides across the height and width of the image-producing the image representation of that receptive region. This produces a two-dimensional representation of the image known as an activation map that gives the response of the kernel at each spatial position of the image. The sliding size of the kernel is called a stride.\nIf we have an input of size W x W x D and Dout number of kernels with a spatial size of F with stride S and amount of padding P, then the size of output volume can be determined by the following formula:\" [5]\n\n\n\n**Pooling Layer**\n\"The pooling layer replaces the output of the network at certain locations by deriving a summary statistic of the nearby outputs. This helps in reducing the spatial size of the representation, which decreases the required amount of computation and weights. 
The pooling operation is processed on every slice of the representation individually.\" [5]\n\n\"There are several pooling functions such as the average of the rectangular neighborhood, L2 norm of the rectangular neighborhood, and a weighted average based on the distance from the central pixel. However, the most popular process is max pooling, which reports the maximum output from the neighborhood.\" [5]\n\n\"If we have an activation map of size W x W x D, a pooling kernel of spatial size F, and stride S, then the size of output volume can be determined by the following formula:\" [5]\n\n\n\n\"This will yield an output volume of size Wout x Wout x D.\nIn all cases, pooling provides some translation invariance which means that an object would be recognizable regardless of where it appears on the frame.\" [5]\n\n**Fully Connected Layer**\n\"Neurons in this layer have full connectivity with all neurons in the preceding and succeeding layer as seen in regular FCNN. This is why it can be computed as usual by a matrix multiplication followed by a bias effect.\" [5]\n\n\"The FC layer helps to map the representation between the input and the output.\" [5]\n\n**Non-Linearity Layers**\n\"Since convolution is a linear operation and images are far from linear, non-linearity layers are often placed directly after the convolutional layer to introduce non-linearity to the activation map.\" [5]\n\n\"There are several types of non-linear operations, the popular ones being:\" [5]\n1. Sigmoid\n2. Tanh\n3. ReLU\n\n</br>\n\n**Advantages:**\n1. Speed vs. other type of neural networks [2]\n2. Capacity to extract the most important features automatically [2]\n\n**Disadvantages:**\n1. Classification of similar objects with different Positions [3]\n2. Vulnerable to adversarial examples [3]\n3. Coordinate Frame [3]\n4. Other minor disadvantages like performance [3]\n\n**References:**\n1. https://machinelearningmastery.com/cnn-models-for-human-activity-recognition-time-series-classification/\n2. https://cai.tools.sap/blog/ml-spotlight-cnn/\n3. https://iq.opengenus.org/disadvantages-of-cnn/\n4. https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53\n5. https://towardsdatascience.com/convolutional-neural-networks-explained-9cc5188c4939\n\n",
"_____no_output_____"
]
],
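The formula images referenced in the markdown above did not survive extraction; the snippet below is a hedged sketch of the standard output-size formulas the text describes, applied to the 128-step, 9-channel windows used later in this notebook.

```python
def conv_output_size(w, f, s, p):
    # (W - F + 2P) / S + 1 for a convolution with kernel size F, stride S, padding P
    return (w - f + 2 * p) // s + 1

def pool_output_size(w, f, s):
    # (W - F) / S + 1 for a pooling window of size F with stride S
    return (w - f) // s + 1

# One Conv1D(kernel_size=3, no padding) layer followed by MaxPooling1D(pool_size=2):
print(conv_output_size(128, 3, 1, 0))  # 126
print(pool_output_size(126, 2, 2))     # 63
```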
[
[
"pip install tensorflow",
"Requirement already satisfied: tensorflow in /usr/local/lib/python3.7/dist-packages (2.5.0)\nRequirement already satisfied: tensorflow-estimator<2.6.0,>=2.5.0rc0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (2.5.0)\nRequirement already satisfied: flatbuffers~=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.12)\nRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (3.17.3)\nRequirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.6.3)\nRequirement already satisfied: keras-nightly~=2.5.0.dev in /usr/local/lib/python3.7/dist-packages (from tensorflow) (2.5.0.dev2021032900)\nRequirement already satisfied: wrapt~=1.12.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.12.1)\nRequirement already satisfied: absl-py~=0.10 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (0.12.0)\nRequirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (0.37.0)\nRequirement already satisfied: numpy~=1.19.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.19.5)\nRequirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (0.2.0)\nRequirement already satisfied: tensorboard~=2.5 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (2.5.0)\nRequirement already satisfied: h5py~=3.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (3.1.0)\nRequirement already satisfied: six~=1.15.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.15.0)\nRequirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (3.3.0)\nRequirement already satisfied: typing-extensions~=3.7.4 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (3.7.4.3)\nRequirement already satisfied: grpcio~=1.34.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.34.1)\nRequirement already satisfied: keras-preprocessing~=1.1.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.1.2)\nRequirement already satisfied: gast==0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (0.4.0)\nRequirement already satisfied: termcolor~=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.1.0)\nRequirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py~=3.1.0->tensorflow) (1.5.2)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (1.0.1)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (1.34.0)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (2.23.0)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (0.4.5)\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (0.6.1)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (3.3.4)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (1.8.0)\nRequirement already satisfied: setuptools>=41.0.0 in 
/usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (57.2.0)\nRequirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow) (4.7.2)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow) (4.2.2)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow) (0.2.8)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow) (1.3.0)\nRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard~=2.5->tensorflow) (4.6.3)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow) (0.4.8)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow) (2021.5.30)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow) (3.0.4)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow) (3.1.1)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->markdown>=2.6.8->tensorboard~=2.5->tensorflow) (3.5.0)\n"
],
[
"!pip install fsspec",
"Collecting fsspec\n Downloading fsspec-2021.7.0-py3-none-any.whl (118 kB)\n\u001b[?25l\r\u001b[K |██▊ | 10 kB 21.1 MB/s eta 0:00:01\r\u001b[K |█████▌ | 20 kB 27.4 MB/s eta 0:00:01\r\u001b[K |████████▎ | 30 kB 12.9 MB/s eta 0:00:01\r\u001b[K |███████████ | 40 kB 9.5 MB/s eta 0:00:01\r\u001b[K |█████████████▉ | 51 kB 5.2 MB/s eta 0:00:01\r\u001b[K |████████████████▋ | 61 kB 5.3 MB/s eta 0:00:01\r\u001b[K |███████████████████▍ | 71 kB 6.0 MB/s eta 0:00:01\r\u001b[K |██████████████████████▏ | 81 kB 6.7 MB/s eta 0:00:01\r\u001b[K |█████████████████████████ | 92 kB 6.3 MB/s eta 0:00:01\r\u001b[K |███████████████████████████▊ | 102 kB 5.4 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▌ | 112 kB 5.4 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 118 kB 5.4 MB/s \n\u001b[?25hInstalling collected packages: fsspec\nSuccessfully installed fsspec-2021.7.0\n"
],
[
"# cnn model\nfrom numpy import mean\nfrom numpy import std\nfrom numpy import dstack\nfrom pandas import read_csv\nfrom matplotlib import pyplot\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Flatten\nfrom keras.layers import Dropout\nfrom keras.layers.convolutional import Conv1D\nfrom keras.layers.convolutional import MaxPooling1D\n#from keras.utils import to_categorical\nfrom tensorflow.keras.utils import to_categorical # previous commented out line does not work\n \n# load a single file as a numpy array\ndef load_file(filepath):\n\tdataframe = read_csv(filepath, header=None, delim_whitespace=True)\n\treturn dataframe.values\n \n# load a list of files and return as a 3d numpy array\ndef load_group(filenames, prefix=''):\n\tloaded = list()\n\tfor name in filenames:\n\t\tdata = load_file(prefix + name)\n\t\tloaded.append(data)\n\t# stack group so that features are the 3rd dimension\n\tloaded = dstack(loaded)\n\treturn loaded\n \n# load a dataset group, such as train or test\ndef load_dataset_group(group, prefix=''):\n #filepath = prefix + group + '/Inertial Signals/' \n filepath = 'https://raw.githubusercontent.com/iotanalytics/IoTTutorial/main/data/UCI%20HAR%20Dataset/' + group + '/Inertial%20Signals/'\n\t# load all 9 files as a single array\n filenames = list()\n\t# total acceleration\n filenames += ['total_acc_x_'+group+'.txt', 'total_acc_y_'+group+'.txt', 'total_acc_z_'+group+'.txt']\n\t# body acceleration\n filenames += ['body_acc_x_'+group+'.txt', 'body_acc_y_'+group+'.txt', 'body_acc_z_'+group+'.txt']\n\t# body gyroscope\n filenames += ['body_gyro_x_'+group+'.txt', 'body_gyro_y_'+group+'.txt', 'body_gyro_z_'+group+'.txt']\n\t# load input data\n X = load_group(filenames, filepath)\n\t# load class output\n #y = load_file(prefix + group + '/y_'+group+'.txt')\n y = load_file('https://raw.githubusercontent.com/iotanalytics/IoTTutorial/main/data/UCI%20HAR%20Dataset/'+group+'/y_'+group+'.txt')\n return X, y\n\n# load the dataset, returns train and test X and y elements\ndef load_dataset(prefix=''):\n\t# load all train\n #trainX, trainy = load_dataset_group('train', prefix + 'HARDataset/')\n trainX, trainy = load_dataset_group('train', prefix) \n print(trainX.shape, trainy.shape)\n\t# load all test\n #testX, testy = load_dataset_group('test', prefix + 'HARDataset/')\n testX, testy = load_dataset_group('test', prefix)\n print(testX.shape, testy.shape)\n\t# zero-offset class values\n trainy = trainy - 1\n testy = testy - 1\n\t# one hot encode y\n trainy = to_categorical(trainy)\n testy = to_categorical(testy)\n print(trainX.shape, trainy.shape, testX.shape, testy.shape)\n return trainX, trainy, testX, testy\n \n# fit and evaluate a model\ndef evaluate_model(trainX, trainy, testX, testy):\n\tverbose, epochs, batch_size = 0, 10, 32\n\tn_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]\n\tmodel = Sequential()\n\tmodel.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features)))\n\tmodel.add(Conv1D(filters=64, kernel_size=3, activation='relu'))\n\tmodel.add(Dropout(0.5))\n\tmodel.add(MaxPooling1D(pool_size=2))\n\tmodel.add(Flatten())\n\tmodel.add(Dense(100, activation='relu'))\n\tmodel.add(Dense(n_outputs, activation='softmax'))\n\tmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n\t# fit network\n\tmodel.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)\n\t# evaluate model\n\t_, accuracy = model.evaluate(testX, 
testy, batch_size=batch_size, verbose=0)\n\treturn accuracy\n \n# summarize scores\ndef summarize_results(scores):\n\tprint(scores)\n\tm, s = mean(scores), std(scores)\n\tprint('Accuracy: %.3f%% (+/-%.3f)' % (m, s))\n \n# run an experiment\ndef run_experiment(repeats=10):\n\t# load data\n\ttrainX, trainy, testX, testy = load_dataset()\n\t# repeat experiment\n\tscores = list()\n\tfor r in range(repeats):\n\t\tscore = evaluate_model(trainX, trainy, testX, testy)\n\t\tscore = score * 100.0\n\t\tprint('>#%d: %.3f' % (r+1, score))\n\t\tscores.append(score)\n\t# summarize results\n\tsummarize_results(scores)\n \n# run the experiment\nrun_experiment()",
"(7352, 128, 9) (7352, 1)\n(2947, 128, 9) (2947, 1)\n(7352, 128, 9) (7352, 6) (2947, 128, 9) (2947, 6)\n>#1: 90.363\n>#2: 88.157\n>#3: 92.467\n>#4: 90.601\n>#5: 90.227\n>#6: 90.058\n>#7: 91.992\n>#8: 90.363\n>#9: 89.786\n>#10: 91.211\n[90.3630793094635, 88.15745115280151, 92.46691465377808, 90.60060977935791, 90.22734761238098, 90.05768299102783, 91.99185371398926, 90.3630793094635, 89.78622555732727, 91.21140241622925]\nAccuracy: 90.523% (+/-1.136)\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7b58184135ef70c243505d2b9e0ded925c2e7e0 | 135,415 | ipynb | Jupyter Notebook | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib | 24a2ac920bcd55192e8c162801d1d64478e02d77 | [
"ADSL"
] | null | null | null | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib | 24a2ac920bcd55192e8c162801d1d64478e02d77 | [
"ADSL"
] | null | null | null | pymaceuticals_starter_with_plots.ipynb | vaideheeshah13/MatPlotLib | 24a2ac920bcd55192e8c162801d1d64478e02d77 | [
"ADSL"
] | null | null | null | 112.65807 | 19,768 | 0.806646 | [
[
[
"import this",
"The Zen of Python, by Tim Peters\n\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\nSpecial cases aren't special enough to break the rules.\nAlthough practicality beats purity.\nErrors should never pass silently.\nUnless explicitly silenced.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nAlthough that way may not be obvious at first unless you're Dutch.\nNow is better than never.\nAlthough never is often better than *right* now.\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\nNamespaces are one honking great idea -- let's do more of those!\n"
]
],
[
[
"# Pymaceuticals Inc.\n---\n\n### Analysis\n* Overall, it is clear that Capomulin is a viable drug regimen to reduce tumor growth.\n* Capomulin had the most number of mice complete the study, with the exception of Remicane, all other regimens observed a number of mice deaths across the duration of the study. \n* There is a strong correlation between mouse weight and tumor volume, indicating that mouse weight may be contributing to the effectiveness of any drug regimen.\n* There was one potential outlier within the Infubinol regimen. While most mice showed tumor volume increase, there was one mouse that had a reduction in tumor growth in the study. ",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport scipy.stats as st\n\n# Study data files\nmouse_metadata_path = \"data/Mouse_metadata.csv\"\nstudy_results_path = \"data/Study_results.csv\"\n\n# Read the mouse data and the study results\nmouse_metadata = pd.read_csv(mouse_metadata_path)\nstudy_results = pd.read_csv(study_results_path)\n\n# Combine the data into a single dataset\n\n\n# Display the data table for preview\n",
"_____no_output_____"
],
[
"# Checking the number of mice.\n",
"_____no_output_____"
],
[
"# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint. \n",
"_____no_output_____"
],
[
"# Optional: Get all the data for the duplicate mouse ID. \n",
"_____no_output_____"
],
[
"# Create a clean DataFrame by dropping the duplicate mouse by its ID.\n",
"_____no_output_____"
],
[
"# Checking the number of mice in the clean DataFrame.\n",
"_____no_output_____"
]
],
[
[
"## Summary Statistics",
"_____no_output_____"
]
],
[
[
"# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen\n\n# Use groupby and summary statistical methods to calculate the following properties of each drug regimen: \n# mean, median, variance, standard deviation, and SEM of the tumor volume. \n# Assemble the resulting series into a single summary dataframe.\n\n",
"_____no_output_____"
],
[
"# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen\n\n# Using the aggregation method, produce the same summary statistics in a single line\n",
"_____no_output_____"
]
],
[
[
"## Bar and Pie Charts",
"_____no_output_____"
]
],
[
[
"# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.\n",
"_____no_output_____"
],
[
"# Generate a bar plot showing the total number of measurements taken on each drug regimen using using pyplot.\n",
"_____no_output_____"
],
[
"# Generate a pie plot showing the distribution of female versus male mice using pandas\n",
"_____no_output_____"
],
[
"# Generate a pie plot showing the distribution of female versus male mice using pyplot\n",
"_____no_output_____"
]
],
[
[
"## Quartiles, Outliers and Boxplots",
"_____no_output_____"
]
],
[
[
"# Calculate the final tumor volume of each mouse across four of the treatment regimens: \n# Capomulin, Ramicane, Infubinol, and Ceftamin\n\n# Start by getting the last (greatest) timepoint for each mouse\n\n\n# Merge this group df with the original dataframe to get the tumor volume at the last timepoint\n",
"_____no_output_____"
],
[
"# Put treatments into a list for for loop (and later for plot labels)\n\n\n# Create empty list to fill with tumor vol data (for plotting)\n\n\n# Calculate the IQR and quantitatively determine if there are any potential outliers. \n\n \n # Locate the rows which contain mice on each drug and get the tumor volumes\n \n \n # add subset \n \n \n # Determine outliers using upper and lower bounds\n ",
"Capomulin's potential outliers: Series([], Name: Tumor Volume (mm3), dtype: float64)\nRamicane's potential outliers: Series([], Name: Tumor Volume (mm3), dtype: float64)\nInfubinol's potential outliers: 31 36.321346\nName: Tumor Volume (mm3), dtype: float64\nCeftamin's potential outliers: Series([], Name: Tumor Volume (mm3), dtype: float64)\n"
],
[
"# Generate a box plot of the final tumor volume of each mouse across four regimens of interest\n",
"_____no_output_____"
]
],
[
[
"## Line and Scatter Plots",
"_____no_output_____"
]
],
[
[
"# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin\n",
"_____no_output_____"
],
[
"# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen\n",
"_____no_output_____"
]
],
[
[
"## Correlation and Regression",
"_____no_output_____"
]
],
[
[
"# Calculate the correlation coefficient and linear regression model \n# for mouse weight and average tumor volume for the Capomulin regimen\n",
"The correlation between mouse weight and the average tumor volume is 0.84\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b59dd7e2b4cc4dba6871d03ffc4ddfcbfaafa3 | 7,791 | ipynb | Jupyter Notebook | modules/Microsoft_Data/Microsoft_Education_Insights_Premium/notebook/Insights_module_ingestion.ipynb | ahalabi/OpenEduAnalytics | 0af2d38cba225a60d7c43278d938ab89a4fdb7b0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | modules/Microsoft_Data/Microsoft_Education_Insights_Premium/notebook/Insights_module_ingestion.ipynb | ahalabi/OpenEduAnalytics | 0af2d38cba225a60d7c43278d938ab89a4fdb7b0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-11T16:38:33.000Z | 2021-11-11T16:38:33.000Z | modules/Microsoft_Data/Microsoft_Education_Insights_Premium/notebook/Insights_module_ingestion.ipynb | ahalabi/OpenEduAnalytics | 0af2d38cba225a60d7c43278d938ab89a4fdb7b0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-11-11T00:09:55.000Z | 2021-11-11T00:09:55.000Z | 37.277512 | 2,216 | 0.528174 | [
[
[
"# Microsoft Insights Module Example Notebook",
"_____no_output_____"
]
],
[
[
"%run /OEA_py",
"_____no_output_____"
],
[
"%run /NEW_Insights_py",
"_____no_output_____"
],
[
"# 0) Initialize the OEA framework and Insights module class notebook.\r\noea = OEA()\r\ninsights = Insights()",
"_____no_output_____"
],
[
"insights.ingest()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7b5a48d8fd139ed6ccde0e9668a0d0cb4fa5865 | 428,547 | ipynb | Jupyter Notebook | starter_code/old/WeatherPy.ipynb | rbvancleave/python-api-challenge | d1e3de625326bc5d1fdc4536f6f89c6e2cd52810 | [
"ADSL"
] | null | null | null | starter_code/old/WeatherPy.ipynb | rbvancleave/python-api-challenge | d1e3de625326bc5d1fdc4536f6f89c6e2cd52810 | [
"ADSL"
] | null | null | null | starter_code/old/WeatherPy.ipynb | rbvancleave/python-api-challenge | d1e3de625326bc5d1fdc4536f6f89c6e2cd52810 | [
"ADSL"
] | null | null | null | 225.669826 | 51,040 | 0.896989 | [
[
[
"# WeatherPy\n----\n\n#### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"w_api = 'f85af5acc7275a9eb032d03a3cca5913'",
"_____no_output_____"
],
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport requests\nimport time\nfrom scipy.stats import linregress\n\n# Import API key\n# from api_keys import weather_api_key\n\n\n# Incorporated citipy to determine city based on latitude and longitude\nfrom citipy import citipy\n\n# Output File (CSV)\noutput_data_file = \"output_data/cities.csv\"\n\n# Range of latitudes and longitudes\nlat_range = (-90, 90)\nlng_range = (-180, 180)",
"_____no_output_____"
]
],
[
[
"## Generate Cities List",
"_____no_output_____"
]
],
[
[
"# List for holding lat_lngs and cities\nlat_lngs = []\ncities = []\n\n# Create a set of random lat and lng combinations\nlats = np.random.uniform(lat_range[0], lat_range[1], size=1500)\nlngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)\nlat_lngs = zip(lats, lngs)\n\n# Identify nearest city for each lat, lng combination\nfor lat_lng in lat_lngs:\n city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name\n \n # If the city is unique, then add it to a our cities list\n if city not in cities:\n cities.append(city)\n\n# Print the city count to confirm sufficient count\nlen(cities)",
"_____no_output_____"
]
],
[
[
"### Perform API Calls\n* Perform a weather check on each city using a series of successive API calls.\n* Include a print log of each city as it'sbeing processed (with the city number and city name).\n",
"_____no_output_____"
],
[
"### Convert Raw Data to DataFrame\n* Export the city data into a .csv.\n* Display the DataFrame",
"_____no_output_____"
],
[
"## Inspect the data and remove the cities where the humidity > 100%.\n----\nSkip this step if there are no cities that have humidity > 100%. ",
"_____no_output_____"
]
],
[
[
"# Get the indices of cities that have humidity over 100%.\n",
"_____no_output_____"
],
[
"# Make a new DataFrame equal to the city data to drop all humidity outliers by index.\n# Passing \"inplace=False\" will make a copy of the city_data DataFrame, which we call \"clean_city_data\".\n",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
],
[
[
"## Plotting the Data\n* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.\n* Save the plotted figures as .pngs.",
"_____no_output_____"
],
[
"## Latitude vs. Temperature Plot",
"_____no_output_____"
],
[
"## Latitude vs. Humidity Plot",
"_____no_output_____"
],
[
"## Latitude vs. Cloudiness Plot",
"_____no_output_____"
],
[
"## Latitude vs. Wind Speed Plot",
"_____no_output_____"
],
[
"## Linear Regression",
"_____no_output_____"
],
[
"#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression",
"_____no_output_____"
],
[
"#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression",
"_____no_output_____"
],
[
"#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression",
"_____no_output_____"
],
[
"#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression",
"_____no_output_____"
],
[
"#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression",
"_____no_output_____"
],
[
"#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression",
"_____no_output_____"
],
[
"#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
"_____no_output_____"
],
[
"#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7b5c5d7ee650519469a1fe4819d67af4212a654 | 106,067 | ipynb | Jupyter Notebook | Phase_1/ds-data_visualization-main/Matplotlib_Applied.ipynb | BenJMcCarty/ds-east-042621-lectures | 01a4c0a4e070f039e1ce3a7e6cf1b5a038ca4078 | [
"MIT"
] | null | null | null | Phase_1/ds-data_visualization-main/Matplotlib_Applied.ipynb | BenJMcCarty/ds-east-042621-lectures | 01a4c0a4e070f039e1ce3a7e6cf1b5a038ca4078 | [
"MIT"
] | null | null | null | Phase_1/ds-data_visualization-main/Matplotlib_Applied.ipynb | BenJMcCarty/ds-east-042621-lectures | 01a4c0a4e070f039e1ce3a7e6cf1b5a038ca4078 | [
"MIT"
] | 20 | 2021-04-27T19:27:58.000Z | 2021-06-16T15:08:50.000Z | 582.785714 | 60,672 | 0.943621 | [
[
[
"# Matplotlib Applied",
"_____no_output_____"
],
[
"**Aim: SWBAT create a figure with 4 subplots of varying graph types.**",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"from numpy.random import seed, randint\nseed(100)\n\n# Create Figure and Subplots\nfig, axes = plt.subplots(2,2, figsize=(10,6), sharex=True, sharey=True, dpi=100)\n\n# Define the colors and markers to use\ncolors = {0:'g', 1:'b', 2:'r', 3:'y'}\nmarkers = {0:'o', 1:'x', 2:'*', 3:'p'}\n\n# Plot each axes\nfor i, ax in enumerate(axes.ravel()):\n ax.plot(sorted(randint(0,10,10)), sorted(randint(0,10,10)), marker=markers[i], color=colors[i]) \n ax.set_title('Ax: ' + str(i))\n ax.yaxis.set_ticks_position('right')\n\nplt.suptitle('Four Subplots in One Figure', verticalalignment='bottom', fontsize=16) \nplt.tight_layout()\n# plt.show()",
"_____no_output_____"
]
],
[
[
"### Go through and play with the code above to try answer the questions below:\n\n- What do you think `sharex` and `sharey` do?\n\n- What does the `dpi` argument control?\n\n- What does `numpy.ravel()` do, and why do they call it here?\n\n- What does `yaxis.set_ticks_position()` do?\n\n- How do they use the `colors` and `markers` dictionaries?",
"_____no_output_____"
],
[
"### Your turn:\n\n- Create a figure that has 4 sub plots on it.\n\n- Plot 1: a line blue graph (`.plot()`) using data `x` and `y`\n\n- Plot 2: a scatter plot (`.scatter()`) using data `x2` and `y2` with red markers that are non-filled circles.\n\n- Plot 3: a plot that has both a line graph (x and y data) and a scatterplot (x2, y2) that only use 1 y axis\n\n- plot 4: a plot that is similiar to plot3 except the scatterplot has it own axis on the right hand side. \n\n- Put titles on each subplot.\n\n- Create a title for the entire figure.\n\n- Save figure as png.",
"_____no_output_____"
]
],
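A small hedged hint for the exercise above, not a full solution: it only illustrates the right-hand secondary axis needed for Plot 4, using made-up data rather than the notebook's `x`, `x2`, `y`, `y2` arrays.

```
# Hint sketch for Plot 4 (illustrative only): a secondary y-axis on the
# right-hand side comes from ax.twinx().
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(10)
fig, ax = plt.subplots()
ax.plot(x, x, color='b')             # line graph against the left y-axis
ax2 = ax.twinx()                     # independent y-axis on the right
ax2.scatter(x, x * 2, color='r')     # scatter against the right y-axis
ax.set_title('Twin-axis hint')
plt.show()
```

`twinx()` keeps the shared x-axis but gives the second artist its own y scale, which is what Plot 4 asks for.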
[
[
"from numpy.random import seed, randint\nseed(100)\n\nx = sorted(randint(0,10,10))\nx2 = sorted(randint(0,20,10))\ny = sorted(randint(0,10,10))\ny2 = sorted(randint(0,20,10))",
"_____no_output_____"
]
],
[
[
"## Great tutorial on matplotlib\n\nhttps://www.machinelearningplus.com/plots/matplotlib-tutorial-complete-guide-python-plot-examples/",
"_____no_output_____"
]
],
[
[
"fig",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b5c7a5512f36d16675d27445a43b00830e0845 | 5,196 | ipynb | Jupyter Notebook | content/lessons/04/Now-You-Code/NYC2-Paint-Matching.ipynb | MahopacHS/spring2019-rizzenM | 2860b3338c8f452e45aac04f04388a417b2cf506 | [
"MIT"
] | null | null | null | content/lessons/04/Now-You-Code/NYC2-Paint-Matching.ipynb | MahopacHS/spring2019-rizzenM | 2860b3338c8f452e45aac04f04388a417b2cf506 | [
"MIT"
] | null | null | null | content/lessons/04/Now-You-Code/NYC2-Paint-Matching.ipynb | MahopacHS/spring2019-rizzenM | 2860b3338c8f452e45aac04f04388a417b2cf506 | [
"MIT"
] | null | null | null | 68.368421 | 1,096 | 0.59719 | [
[
[
"# Now You Code 2: Paint Pricing\n\nHouse Depot, a big-box hardware retailer, has contracted you to create an app to calculate paint prices. \n\nThe price of paint is determined by the following factors:\n- Everyday quality paint is `$19.99` per gallon.\n- Select quality paint is `$24.99` per gallon.\n- Premium quality paint is `$32.99` per gallon.\n\nIn addition if the customer wants computerized color-matching that incurs an additional fee of `$4.99` per gallon. \n\nWrite a program to ask the user to select a paint quality: 'everyday', 'select' or 'premium' and then whether they need color matching and then outputs the price per gallon of the paint.\n\nExample Run 1:\n\n```\nWhich paint quality do you require ['everyday', 'select', 'premium'] ?select\nDo you require color matching [y/n] ?y\nTotal price of select paint with color matching is $29.98\n```\n\nExample Run 2:\n\n```\nWhich paint quality do you require ['everyday', 'select', 'premium'] ?premium\nDo you require color matching [y/n] ?n\nTotal price of premium paint without color matching is $32.99\n```",
"_____no_output_____"
],
[
"## Step 1: Problem Analysis\n\nInputs:\n\nOutputs:\n\nAlgorithm (Steps in Program):\n\n",
"_____no_output_____"
]
],
[
[
"# Step 2: Write code here\nchoices = [\"everyday\", \"select\", \"premium\"]\ncolorChoices = [\"y\", \"n\"]\nquality = input(\"which paint quality would you like? [\"everyday\", \"select\", \"premium\"]\")\nif quality in choices \n if quality == \"everyday\":\n quality =19.99\n elif quality == \"select\":\n quality = 24.99\n elif quality ==\"premium\":\n quality = 32.99\ncolorMatching = input(\"do you requrie color matching [yes/no]\")\n if colorMatching in colorChoices:\n if colorMatching == \"yes\":\n colorMatching = 4.99 \n elif colorMatching == \"no\": \n colorMatching = 0\n final = quality + colorMatching\n if colorMatching == \"yes\":\n print(\"total price of paint with color matching is %.2f\" %(final))\n else:\n print(\"total price of paint without color matching is %.2f\" %(final))\n else:\n print(\"you must enter yes or no\")\n else:\n print(\"That is not a paint quality\")\n ",
"_____no_output_____"
]
],
[
[
"## Step 3: Questions\n\n1. When you enter something other than `'everyday', 'select',` or `'premium'` what happens? Modify the program to print `that is not a paint quality` and then exit in those cases. the program errors \n2. What happens when you enter something other than `'y'` or `'n'` for color matching? Re-write the program to print `you must enter y or n` whenever you enter something other than those two values.the program does nothing \n3. Why can't we use Python's `try...except` in this example?\n4. How many times (at minimum) must we execute this program and check the results before we can be reasonably assured it is correct?\n",
"_____no_output_____"
],
[
"## Reminder of Evaluation Criteria\n\n1. What the problem attempted (analysis, code, and answered questions) ?\n2. What the problem analysis thought out? (does the program match the plan?)\n3. Does the code execute without syntax error?\n4. Does the code solve the intended problem?\n5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7b5d78a0aebae9e2f735e537c29d7c3b3a3a70d | 595,998 | ipynb | Jupyter Notebook | JupyterWorkflow.ipynb | 118010484/JupyterWorkflow | 090dae13bfbbc8e8c9a51a1b643d828f4babc7e2 | [
"MIT"
] | null | null | null | JupyterWorkflow.ipynb | 118010484/JupyterWorkflow | 090dae13bfbbc8e8c9a51a1b643d828f4babc7e2 | [
"MIT"
] | null | null | null | JupyterWorkflow.ipynb | 118010484/JupyterWorkflow | 090dae13bfbbc8e8c9a51a1b643d828f4babc7e2 | [
"MIT"
] | null | null | null | 860.025974 | 99,980 | 0.949104 | [
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use(\"seaborn\")\nimport os\nfrom urllib.request import urlretrieve\nimport pandas as pd\nURL = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'\n\ndef get_fremont_data(filename = 'Fremont.csv', url=URL, force_download = False):\n if force_download or not os.path.exists(filename):\n urlretrieve(url,filename)\n data = pd.read_csv(filename,index_col='Date',parse_dates = True)\n data.columns = ['Total','West','East']\n return data",
"_____no_output_____"
],
[
"data = get_fremont_data()\ndata.head()",
"_____no_output_____"
],
[
"# visualisation\n\n# put all the plots in the notebook itself, rather than separate windows\ndata.plot()",
"_____no_output_____"
],
[
"data.resample('W').sum().plot()",
"_____no_output_____"
],
[
"%matplotlib inline\ndata.resample('W').sum().plot()",
"_____no_output_____"
],
[
"\n\n\ndata.resample('W').sum().plot()",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.resample('W').sum().plot()",
"_____no_output_____"
],
[
"data.resample('D').sum().rolling(365).sum().plot()",
"_____no_output_____"
],
[
"ax = data.resample('D').sum().rolling(365).sum().plot()\nax.set_ylim(0,None) # setting bottom line of y to be y = 0",
"_____no_output_____"
],
[
"ax = data.resample('D').sum().rolling(365).sum().plot()\nax.set_ylim(0,None) # setting bottom line of y to be y = 0",
"_____no_output_____"
],
[
"# look at trends at individual days\ndata.groupby(data.index.time).mean().plot()",
"_____no_output_____"
],
[
"pivoted = data.pivot_table('Total', index = data.index.time, columns = data.index.date)\npivoted.iloc[:5,:5]",
"_____no_output_____"
],
[
"pivoted.plot(legend=False)",
"_____no_output_____"
],
[
"pivoted.plot(legend=False, alpha=0.01) # add transparency to better visualise",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b5f16b72226e029aeb68668dd013f8ec0b3bb2 | 22,451 | ipynb | Jupyter Notebook | Final Project Plan.ipynb | lzctony/data-512-finalproject | 72830848b01294c698df60a761496c92fdd7c6c1 | [
"MIT"
] | null | null | null | Final Project Plan.ipynb | lzctony/data-512-finalproject | 72830848b01294c698df60a761496c92fdd7c6c1 | [
"MIT"
] | null | null | null | Final Project Plan.ipynb | lzctony/data-512-finalproject | 72830848b01294c698df60a761496c92fdd7c6c1 | [
"MIT"
] | null | null | null | 51.493119 | 1,144 | 0.544564 | [
[
[
"### Kaggle ML and Data Science Survey Analysis\n#### Data 512, Final Project Plan - Zicong Liang",
"_____no_output_____"
],
[
"### Project Motivation\nThis project an analysis for a survey about Machine Learning and Data Science. Recently, lots of people are talking about machine learning and Data Science. In addition, more and more companies hire data science talents and invest in data science in order to make their business more data-driven. According to Wikipedia, Data science is also known as data-driven science, is an interdisciplinary field combined from mathematics, statistics and computer science to extract insights from data. On the one hand, I remember Oliver commented about nobody's really done a proper survey related to Data Science. I think it is a great opportunity to do some research in this field of study because I find out Kaggle releases a survey responses about Data Science and Machine Learning. On the other hand, there are a few reasons drive me to know more about data science and machine learning field of study.\n1.\tI am one of the graduate students in the master of science data science program at University of Washington.\n2.\tI'd like to get a job in data science filed after graduating from this program.\n3.\tI'd like to know how those people related to this field think about data science and machine learning, then analyses their response to draw some conclusions that I am looking for. \n\nAs one of the beginner in Data Science, it is better to know more about what kind of skills we need to well-equipped in order to be ready to step into this field. That's why I am planning to do this analysis.\nThroughout this project, we are going to conduct some analyses for Data Science and Machine Learning in various aspects, such as gender, age, programming skills, algorithms, education degree and salary and so on. It helps us gain new insights of Data Science that different from what we have learnt from school. In addition, the analysis of this project can be a reference to those who people for pursuing a career in Data Science, not only for switching job but also looking for their first job in this field.",
"_____no_output_____"
],
[
"### Project Dataset\nThe data I am going to use for this project is from Kaggle. It is one of the most popular datasets in Kaggle recently. To be more specifically, the data is about a survey which conducted by Kaggle for an industry-wide to establish a comprehensive view of the state of Data Science and Machine Learning. This survey received more than 16000 responses. According to this data, we would be able to learn a ton about who is working in the Data Science Field, what’s happening at the cutting edge of machine learning across industries, and how new data scientists can best break into the field.\n\nHere is the link to the main page of [Kaggle ML and Data Science Survey, 2017](https://www.kaggle.com/kaggle/kaggle-survey-2017).\n\nThe dataset consists of 5 files:\n1. **schema.csv**: a CSV file with survey schema. This schema includes the questions that correspond to each column name in both the **multipleChoiceResponses.csv** and **freeformResponses.csv**.\n2. **multipleChoiceResponses.csv**: Respondents' answers to multiple choice and ranking questions.\n3. **freeformResponses.csv**: Respondents' freeform answers to Kaggle's survey questions.\n4. **conversionRates.csv**: Currency conversion rates to USD (accessed from the R package \"quantmod\" on September 14, 2017)\n5. **RespondentTypeREADME.txt**: This is a schema for decoding the responses in the \"Asked\" column of the **schema.csv** file.\n\nClick [here](https://www.kaggle.com/kaggle/kaggle-survey-2017/data) to the data page.\n\nBase on the description above, I am going to use two files from the data.\n1. **multipleChoiceResponses.csv**\n2. **conversionRates.csv**\nThe **conversionRates.csv** data is pretty straightforward, so let's look at the **multipleChoiceResponses.csv** before the start any analysis.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nmy_data = pd.read_csv(\"multipleChoiceResponses.csv\", encoding='ISO-8859-1', delimiter=',', low_memory=False)\nmy_data.head()",
"_____no_output_____"
],
[
"my_data.shape",
"_____no_output_____"
]
],
[
[
"We can see the first 5 rows from the above data (**multipleChoiceResponses.csv**). There are lots of variables that contain NaN values. This file contains 16716 observations and 228 variables. During the project, I will focus on the subset of this file, keeping those varibles that necessary for my analysis and dropping those NaN values.\n\n**Note: Since there are 228 columns in the *multipleChoiceResponses.csv*, I might need further exploration of this file in order to determine how many variables I will need throughout this project. As long as I have my decision while answering the questions and hypotheses below, I would follow what I have done for the previous assignments to make a final dataset that include all the variables I need for this project as well as the descriptions.**\n\n**Note: Since the data is from an online survey responses, I think bias would be possibly exist in the data, I will analysis the findings from data as well as combing facts to draw the my conclusions.**\n\n\n#### License\nReleased UnderDatabase: [Open Database](http://opendatacommons.org/licenses/odbl/1.0/), Contents: © Original Authors\n\nHere is some information about the Open Database License on Wikipedia. Click [here](https://en.wikipedia.org/wiki/Open_Database_License) to see more about it.\n\n**Note: It seems there is some issue with the site, hopefully it will be fixed soon. (Mark at 4:47PM November 9th 2017)**\n\nFor more information about Kaggle please see [Terms](https://www.kaggle.com/terms) and [Privacy](https://www.kaggle.com/about/privacy)",
"_____no_output_____"
],
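A minimal sketch of the subsetting step described above. The column names used here (`GenderSelect`, `Age`, `FormalEducation`, `CompensationAmount`) are assumptions about the survey file and must be checked against `schema.csv`.

```
import pandas as pd

# Load the multiple-choice responses with the same options used above.
responses = pd.read_csv("multipleChoiceResponses.csv",
                        encoding="ISO-8859-1", low_memory=False)

# Keep only the columns needed for the analysis (placeholder names --
# verify each one against schema.csv) and drop rows with missing values.
columns_of_interest = ["GenderSelect", "Age",
                       "FormalEducation", "CompensationAmount"]
subset = responses[columns_of_interest].dropna()

print(subset.shape)
```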
[
"### Research Question & Hypothesis",
"_____no_output_____"
],
[
"#### Research Question\nI did some research that most people are using Python and R in the Data Science field, some use MATLAB and SAS. I'd \nAs a student in the data science program, most of our courses suggest us to use Python as our main programming language instead of R. As I mention above, I'd like to become a data scientist after I graduate from the program, it's good to know what kind of companies that data scientists would most interested in working for and the salary of course. Moreover, I have listed some of the questions that raise in mind I want to know about from this research.\n\nQ1: What's the gender diversity among this survey? (What's the difference between male and female in this field?)\n\nQ2: As a student, really want to know much time should I spend studying in Data Science every week? And how important my MS Data Science Degree would help me success in this filed?\n\nQ3: What is dominant programming language that people would suggestion to use or learn as in Data Science and Machine Learning, Python or R?\n\nQ4: What are 10 most popular algorithms that people think are important to know as in the Data Science filed?\n\nQ5: What are the challenges as working in Data Science filed that most people take for?\n\nQ6: What's the mean/median salary as a data scientist? Moreover, what are the important factors that people consider would consider affecting the salary?\n\nMore questions might be added later\n\n**Note: During the process of the project, I think there will be more interesting findings from the data (since there are lots of variables in the data file, I will explore more about the context and try to raise more related questions throughout the process in order to expand my analysis. **\n\n#### Hypothesis\nThe hypothesis that first come to my mind is about gender imbalance.\n\n*Hypothesis: There is a gender gap between male and female in Data Science field.*\nAnother hypothesis is about dominant programming language. \n\n*Hypothesis: Python is the most popular tool for data scientist compare to the other programming language.*\n\nMoreover, I'd like to raise a hypothesis about the relationship between some important factors (e.g. skills, degree, experience) and salary. However, to address this hypothesis properly, I will need to finalize the variables selection.\n\n*Hypothesis: There is relationship between some factors (e.g. skills, degree, experience) and salary.*\n\n**Note: the project will focus on raising questions and try to answer those questions from the analysis more emphasis on using visulizations.**",
"_____no_output_____"
],
[
"### Technical Approach & Method",
"_____no_output_____"
],
[
"During this project, I am going to Using python and the following packages to acheive my analysis\n1. **Pandas**\n2. **Matplotlib**\n3. **Seaborn**\n4. **Scikit Learn**\n\nThe purpose of this project is to analyze the survey response and draw some conclusion base on the analysis. Therefore, I will use the Python Package Pandas to manipulate the data such as grouping, filtering and aggregating the data. Moreover, I will also use the Matplotlib and Seaborn libraries together to generate some visualizations from the data for the questions I raise above. As in human-centered Data Science, providing appropriate visualizations corresponding to the questions should be better for interpretation as well as easier for others to understand the questions and results. In the end, as time permits, I might also consider developing a regression model to determine what factors would impact the salary the most by using the Sklearn package (e.g. education degree, programming skills, machine learning algorithms, gender, experience and location). The final approach might be change a bit while working on the project (I might need more packages later if it is necessary). However, the overall analysis should be supported by visualizations as well as explanation to address the research questions and hypotheses.\n\nTo be more specifically, I will follow what I have learnt from the course HCDE 511 (human centered design & engineering) to encode the data properly for all the visualizations. Since there are lots of comparisons within each visualization, I think using bar charts (both vertical and horizontal) and pie charts should be the best choice to meet the requirement. Moreover, hue is also helpful to identify different categories in the visualizations.",
"_____no_output_____"
],
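An illustrative sketch of the grouping-plus-bar-chart encoding described above, assuming a hypothetical `GenderSelect` column; the real column name must be confirmed against the survey schema.

```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

responses = pd.read_csv("multipleChoiceResponses.csv",
                        encoding="ISO-8859-1", low_memory=False)

# Count respondents per gender category ("GenderSelect" is assumed).
gender_counts = responses["GenderSelect"].value_counts()

# A horizontal bar chart, one of the encodings proposed above.
sns.barplot(x=gender_counts.values, y=gender_counts.index, orient="h")
plt.xlabel("Number of respondents")
plt.title("Survey respondents by gender")
plt.tight_layout()
plt.show()
```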
[
"### Any unknowns or dependencies",
"_____no_output_____"
],
[
"As I mention above, the data file (*multipleChoiceResponses.csv*) I am going to use has 228 variables. I need further exploration about the content to determine how many variables are needed throughout this project. At the end, I will make a final data set that contains some of the original variable and might have some new generated variables (such as counting, mean, sum) which depend on the other variables. In addition, I will have a full description about the final data set. The analysis will base on the final dataset.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7b603da31047c57e4a43c3fb481ed4712c45999 | 2,947 | ipynb | Jupyter Notebook | unit_1/dmvpn.ipynb | alsigna/skillfactory_rds | ca309fb83640816fddc6329462c824cbb5c9e4ea | [
"MIT"
] | null | null | null | unit_1/dmvpn.ipynb | alsigna/skillfactory_rds | ca309fb83640816fddc6329462c824cbb5c9e4ea | [
"MIT"
] | null | null | null | unit_1/dmvpn.ipynb | alsigna/skillfactory_rds | ca309fb83640816fddc6329462c824cbb5c9e4ea | [
"MIT"
] | null | null | null | 38.272727 | 1,293 | 0.451306 | [
[
[
"import pandas as pd\ndf = pd.read_csv(\"dmvpn.csv\")\n",
"_____no_output_____"
],
[
"df.groupby(df.ip).count().sort_values(\"status\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e7b6053e17f483a00d8b42aad0292fb7b75c02ae | 198,025 | ipynb | Jupyter Notebook | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp | ad21ab53de18f7044a5044af2d4d1183df6dac14 | [
"Apache-2.0"
] | null | null | null | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp | ad21ab53de18f7044a5044af2d4d1183df6dac14 | [
"Apache-2.0"
] | null | null | null | on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | bharathraja23/mlops-on-gcp | ad21ab53de18f7044a5044af2d4d1183df6dac14 | [
"Apache-2.0"
] | null | null | null | 162.849507 | 139,937 | 0.573311 | [
[
[
"# Continuous training pipeline with Kubeflow Pipeline and AI Platform",
"_____no_output_____"
],
[
"**Learning Objectives:**\n1. Learn how to use Kubeflow Pipeline(KFP) pre-build components (BiqQuery, AI Platform training and predictions)\n1. Learn how to use KFP lightweight python components\n1. Learn how to build a KFP with these components\n1. Learn how to compile, upload, and run a KFP with the command line\n\n\nIn this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **AI Platform** services to train, tune, and deploy a **scikit-learn** model.",
"_____no_output_____"
],
[
"## Understanding the pipeline design\n",
"_____no_output_____"
],
[
"The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below.\n\nThe pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.\n",
"_____no_output_____"
]
],
[
[
"#!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py\n!pip list | grep kfp",
"kfp 1.0.0\nkfp-pipeline-spec 0.1.7\nkfp-server-api 1.5.0\n"
]
],
[
[
"The pipeline uses a mix of custom and pre-build components.\n\n- Pre-build components. The pipeline uses the following pre-build components that are included with the KFP distribution:\n - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query)\n - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train)\n - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy)\n- Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-build components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file:\n - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of a AI Platform Training hyperparameter tuning job.\n - **Evaluate Model**. This component evaluates a *sklearn* trained model using a provided metric and a testing dataset.\n ",
"_____no_output_____"
]
],
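The Lightweight Python Components mechanism referenced above can be sketched as follows. This is a toy example under KFP SDK 1.0, not the lab's actual helper code (the real functions live in `helper_components.py`).

```
from kfp.components import func_to_container_op

# Any plain Python function with serializable, type-annotated inputs and
# outputs can be wrapped as a pipeline component.
def add(a: float, b: float) -> float:
    """Toy component: returns the sum of two numbers."""
    return a + b

# base_image controls which container image the function runs in.
add_op = func_to_container_op(add, base_image='python:3.7')
```

In the lab's pipeline, `retrieve_best_run` and `evaluate_model` are wrapped the same way, with the image supplied through the `BASE_IMAGE` environment variable.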
[
[
"%%writefile ./pipeline/covertype_training_pipeline.py\n# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"KFP orchestrating BigQuery and Cloud AI Platform services.\"\"\"\n\nimport os\n\nfrom helper_components import evaluate_model\nfrom helper_components import retrieve_best_run\nfrom jinja2 import Template\nimport kfp\nfrom kfp.components import func_to_container_op\nfrom kfp.dsl.types import Dict\nfrom kfp.dsl.types import GCPProjectID\nfrom kfp.dsl.types import GCPRegion\nfrom kfp.dsl.types import GCSPath\nfrom kfp.dsl.types import String\nfrom kfp.gcp import use_gcp_secret\n\n# Defaults and environment settings\nBASE_IMAGE = os.getenv('BASE_IMAGE')\nTRAINER_IMAGE = os.getenv('TRAINER_IMAGE')\nRUNTIME_VERSION = os.getenv('RUNTIME_VERSION')\nPYTHON_VERSION = os.getenv('PYTHON_VERSION')\nCOMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')\nUSE_KFP_SA = os.getenv('USE_KFP_SA')\n\nTRAINING_FILE_PATH = 'datasets/training/data.csv'\nVALIDATION_FILE_PATH = 'datasets/validation/data.csv'\nTESTING_FILE_PATH = 'datasets/testing/data.csv'\n\n# Parameter defaults\nSPLITS_DATASET_ID = 'splits'\nHYPERTUNE_SETTINGS = \"\"\"\n{\n \"hyperparameters\": {\n \"goal\": \"MAXIMIZE\",\n \"maxTrials\": 6,\n \"maxParallelTrials\": 3,\n \"hyperparameterMetricTag\": \"accuracy\",\n \"enableTrialEarlyStopping\": True,\n \"params\": [\n {\n \"parameterName\": \"max_iter\",\n \"type\": \"DISCRETE\",\n \"discreteValues\": [500, 1000]\n },\n {\n \"parameterName\": \"alpha\",\n \"type\": \"DOUBLE\",\n \"minValue\": 0.0001,\n \"maxValue\": 0.001,\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n }\n ]\n }\n}\n\"\"\"\n\n\n# Helper functions\ndef generate_sampling_query(source_table_name, num_lots, lots):\n \"\"\"Prepares the data sampling query.\"\"\"\n\n sampling_query_template = \"\"\"\n SELECT *\n FROM \n `{{ source_table }}` AS cover\n WHERE \n MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})\n \"\"\"\n query = Template(sampling_query_template).render(\n source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])\n\n return query\n\n\n# Create component factories\ncomponent_store = kfp.components.ComponentStore(\n local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])\n\nbigquery_query_op = component_store.load_component('bigquery/query')\nmlengine_train_op = component_store.load_component('ml_engine/train')\nmlengine_deploy_op = component_store.load_component('ml_engine/deploy')\nretrieve_best_run_op = func_to_container_op(\n retrieve_best_run, base_image=BASE_IMAGE)\nevaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)\n\n\[email protected](\n name='Covertype Classifier Training',\n description='The pipeline training and deploying the Covertype classifierpipeline_yaml'\n)\ndef covertype_train(project_id,\n region,\n source_table_name,\n gcs_root,\n dataset_id,\n evaluation_metric_name,\n evaluation_metric_threshold,\n model_id,\n version_id,\n replace_existing_version,\n 
hypertune_settings=HYPERTUNE_SETTINGS,\n dataset_location='US'):\n \"\"\"Orchestrates training and deployment of an sklearn model.\"\"\"\n\n # Create the training split\n query = generate_sampling_query(\n source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])\n\n training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)\n\n create_training_split = bigquery_query_op(\n query=query,\n project_id=project_id,\n dataset_id=dataset_id,\n table_id='',\n output_gcs_path=training_file_path,\n dataset_location=dataset_location)\n\n # Create the validation split\n query = generate_sampling_query(\n source_table_name=source_table_name, num_lots=10, lots=[8])\n\n validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)\n\n create_validation_split = bigquery_query_op(\n query=query,\n project_id=project_id,\n dataset_id=dataset_id,\n table_id='',\n output_gcs_path=validation_file_path,\n dataset_location=dataset_location)\n\n # Create the testing split\n query = generate_sampling_query(\n source_table_name=source_table_name, num_lots=10, lots=[9])\n\n testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)\n\n create_testing_split = bigquery_query_op(\n query=query,\n project_id=project_id,\n dataset_id=dataset_id,\n table_id='',\n output_gcs_path=testing_file_path,\n dataset_location=dataset_location)\n\n # Tune hyperparameters\n tune_args = [\n '--training_dataset_path',\n create_training_split.outputs['output_gcs_path'],\n '--validation_dataset_path',\n create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'\n ]\n\n job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',\n kfp.dsl.RUN_ID_PLACEHOLDER)\n\n hypertune = mlengine_train_op(\n project_id=project_id,\n region=region,\n master_image_uri=TRAINER_IMAGE,\n job_dir=job_dir,\n args=tune_args,\n training_input=hypertune_settings)\n\n # Retrieve the best trial\n get_best_trial = retrieve_best_run_op(\n project_id, hypertune.outputs['job_id'])\n\n # Train the model on a combined training and validation datasets\n job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)\n\n train_args = [\n '--training_dataset_path',\n create_training_split.outputs['output_gcs_path'],\n '--validation_dataset_path',\n create_validation_split.outputs['output_gcs_path'], '--alpha',\n get_best_trial.outputs['alpha'], '--max_iter',\n get_best_trial.outputs['max_iter'], '--hptune', 'False'\n ]\n\n train_model = mlengine_train_op(\n project_id=project_id,\n region=region,\n master_image_uri=TRAINER_IMAGE,\n job_dir=job_dir,\n args=train_args)\n\n # Evaluate the model on the testing split\n eval_model = evaluate_model_op(\n dataset_path=str(create_testing_split.outputs['output_gcs_path']),\n model_path=str(train_model.outputs['job_dir']),\n metric_name=evaluation_metric_name)\n\n # Deploy the model if the primary metric is better than threshold\n with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):\n deploy_model = mlengine_deploy_op(\n model_uri=train_model.outputs['job_dir'],\n project_id=project_id,\n model_id=model_id,\n version_id=version_id,\n runtime_version=RUNTIME_VERSION,\n python_version=PYTHON_VERSION,\n replace_existing_version=replace_existing_version)\n\n # Configure the pipeline to run using the service account defined\n # in the user-gcp-sa k8s secret\n if USE_KFP_SA == 'True':\n kfp.dsl.get_pipeline_conf().add_op_transformer(\n use_gcp_secret('user-gcp-sa'))",
"_____no_output_____"
]
],
[
[
"The custom components execute in a container image defined in `base_image/Dockerfile`.",
"_____no_output_____"
]
],
[
[
"!cat base_image/Dockerfile",
"_____no_output_____"
]
],
[
[
"The training step in the pipeline employes the AI Platform Training component to schedule a AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.",
"_____no_output_____"
]
],
[
[
"!cat trainer_image/Dockerfile",
"_____no_output_____"
]
],
[
[
"## Building and deploying the pipeline\n\nBefore deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also refered to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. \n",
"_____no_output_____"
],
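A hedged sketch of what compilation looks like with the KFP 1.0 SDK, using a toy pipeline rather than `covertype_train`; the lab's own build steps may use the command-line compiler instead.

```
# Illustrative only: compile a @dsl.pipeline function into the
# Argo-based YAML package that AI Platform Pipelines executes.
import kfp
from kfp import dsl


@dsl.pipeline(name='toy-pipeline', description='Compilation demo only')
def toy_pipeline(message: str = 'hello'):
    dsl.ContainerOp(
        name='echo',
        image='alpine:3.12',
        command=['echo', message],
    )


kfp.compiler.Compiler().compile(toy_pipeline, 'toy_pipeline.yaml')
```

The resulting `.yaml` file is the pipeline package that later gets uploaded to the AI Platform Pipelines endpoint.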
[
"### Configure environment settings\n\nUpdate the below constants with the settings reflecting your lab environment. \n\n- `REGION` - the compute region for AI Platform Training and Prediction\n- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default`.\n- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint to your AI Platform Pipelines instance. Then endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.\n\n1. Open the **SETTINGS** for your instance\n2. Use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD** section of the **SETTINGS** window.\n\nRun gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.",
"_____no_output_____"
]
],
[
[
"!gsutil ls",
"_____no_output_____"
]
],
[
[
"**HINT:** \n\nFor **ENDPOINT**, use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window.\n\nFor **ARTIFACT_STORE_URI**, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like **'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'**",
"_____no_output_____"
]
],
[
[
"REGION = 'us-central1'\nENDPOINT = '627be4a1d4049ed3-dot-us-central1.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT\nARTIFACT_STORE_URI = 'gs://dna-gcp-data-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME \nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]",
"_____no_output_____"
]
],
[
[
"### Build the trainer image",
"_____no_output_____"
]
],
[
[
"IMAGE_NAME='trainer_image'\nTAG='test'\nTRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)",
"_____no_output_____"
]
],
[
[
"#### **Note**: Please ignore any **incompatibility ERROR** that may appear for the packages visions as it will not affect the lab's functionality.",
"_____no_output_____"
]
],
[
[
"!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image",
"_____no_output_____"
]
],
[
[
"### Build the base image for custom components",
"_____no_output_____"
]
],
[
[
"IMAGE_NAME='base_image'\nTAG='test2'\nBASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)\n!pwd",
"/home/jupyter/demo/mlops-on-gcp/on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline\n"
],
[
"!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image",
"Creating temporary tarball archive of 2 file(s) totalling 290 bytes before compression.\nUploading tarball of [base_image] to [gs://dna-gcp-data_cloudbuild/source/1621581960.433286-cef9441cb3234402ad8faeccf31ce5fe.tgz]\nCreated [https://cloudbuild.googleapis.com/v1/projects/dna-gcp-data/locations/global/builds/d2e1016b-599c-4537-b03f-3a8e0039c2fc].\nLogs are available at [https://console.cloud.google.com/cloud-build/builds/d2e1016b-599c-4537-b03f-3a8e0039c2fc?project=1011566672334].\n----------------------------- REMOTE BUILD OUTPUT ------------------------------\nstarting build \"d2e1016b-599c-4537-b03f-3a8e0039c2fc\"\n\nFETCHSOURCE\nFetching storage object: gs://dna-gcp-data_cloudbuild/source/1621581960.433286-cef9441cb3234402ad8faeccf31ce5fe.tgz#1621581960754721\nCopying gs://dna-gcp-data_cloudbuild/source/1621581960.433286-cef9441cb3234402ad8faeccf31ce5fe.tgz#1621581960754721...\n/ [1 files][ 299.0 B/ 299.0 B] \nOperation completed over 1 objects/299.0 B. \nBUILD\nAlready have image (with digest): gcr.io/cloud-builders/docker\nSending build context to Docker daemon 3.584kB\nStep 1/3 : FROM gcr.io/deeplearning-platform-release/base-cpu\nlatest: Pulling from deeplearning-platform-release/base-cpu\n01bf7da0a88c: Pulling fs layer\nf3b4a5f15c7a: Pulling fs layer\n57ffbe87baa1: Pulling fs layer\n424e7c9d5d89: Pulling fs layer\n9b397537aef0: Pulling fs layer\n2bd5028f4b85: Pulling fs layer\n4f4fb700ef54: Pulling fs layer\nb2ed56b85d3a: Pulling fs layer\n8bfb788e9874: Pulling fs layer\n0618fb353339: Pulling fs layer\n42045a665612: Pulling fs layer\n031d8d7b75f7: Pulling fs layer\n5780cc9addac: Pulling fs layer\n8fbe78107b3d: Pulling fs layer\neee173fc570a: Pulling fs layer\n424e7c9d5d89: Waiting\n9b397537aef0: Waiting\n2bd5028f4b85: Waiting\n4f4fb700ef54: Waiting\nb2ed56b85d3a: Waiting\n8bfb788e9874: Waiting\n0618fb353339: Waiting\n42045a665612: Waiting\n031d8d7b75f7: Waiting\n5780cc9addac: Waiting\n8fbe78107b3d: Waiting\neee173fc570a: Waiting\n9334ecc802d5: Pulling fs layer\nc631c38965fd: Pulling fs layer\n9334ecc802d5: Waiting\nc631c38965fd: Waiting\n1aada407354a: Pulling fs layer\n151efcb7d3c3: Pulling fs layer\n151efcb7d3c3: Waiting\n1aada407354a: Waiting\n57ffbe87baa1: Verifying Checksum\n57ffbe87baa1: Download complete\nf3b4a5f15c7a: Verifying Checksum\nf3b4a5f15c7a: Download complete\n424e7c9d5d89: Verifying Checksum\n424e7c9d5d89: Download complete\n01bf7da0a88c: Verifying Checksum\n01bf7da0a88c: Download complete\n4f4fb700ef54: Verifying Checksum\n4f4fb700ef54: Download complete\nb2ed56b85d3a: Verifying Checksum\nb2ed56b85d3a: Download complete\n2bd5028f4b85: Verifying Checksum\n2bd5028f4b85: Download complete\n0618fb353339: Download complete\n42045a665612: Verifying Checksum\n42045a665612: Download complete\n031d8d7b75f7: Verifying Checksum\n031d8d7b75f7: Download complete\n5780cc9addac: Download complete\n8fbe78107b3d: Verifying Checksum\n8fbe78107b3d: Download complete\neee173fc570a: Verifying Checksum\neee173fc570a: Download complete\n9334ecc802d5: Verifying Checksum\n9334ecc802d5: Download complete\nc631c38965fd: Verifying Checksum\nc631c38965fd: Download complete\n9b397537aef0: Verifying Checksum\n9b397537aef0: Download complete\n8bfb788e9874: Verifying Checksum\n8bfb788e9874: Download complete\n151efcb7d3c3: Verifying Checksum\n151efcb7d3c3: Download complete\n01bf7da0a88c: Pull complete\nf3b4a5f15c7a: Pull complete\n57ffbe87baa1: Pull complete\n424e7c9d5d89: Pull complete\n1aada407354a: Verifying Checksum\n1aada407354a: Download complete\n9b397537aef0: Pull 
complete\n2bd5028f4b85: Pull complete\n4f4fb700ef54: Pull complete\nb2ed56b85d3a: Pull complete\n8bfb788e9874: Pull complete\n0618fb353339: Pull complete\n42045a665612: Pull complete\n031d8d7b75f7: Pull complete\n5780cc9addac: Pull complete\n8fbe78107b3d: Pull complete\neee173fc570a: Pull complete\n9334ecc802d5: Pull complete\nc631c38965fd: Pull complete\n1aada407354a: Pull complete\n151efcb7d3c3: Pull complete\nDigest: sha256:76ee9c0261dbcfb75e201ce21fd666f61127fe6b9ff74e6cf78b6ef09751de95\nStatus: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest\n ---> c1e1d5999dc3\nStep 2/3 : RUN pip install -U fire scikit-learn==0.20.4 pandas==0.24.2 kfp==1.0.0\n ---> Running in c4bf640bbcee\nCollecting fire\n Downloading fire-0.4.0.tar.gz (87 kB)\nCollecting scikit-learn==0.20.4\n Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)\nCollecting pandas==0.24.2\n Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)\nCollecting kfp==1.0.0\n Downloading kfp-1.0.0.tar.gz (116 kB)\nRequirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.6.3)\nRequirement already satisfied: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.19.5)\nRequirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.1)\nRequirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.1)\nRequirement already satisfied: PyYAML in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (5.4.1)\nRequirement already satisfied: google-cloud-storage>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (1.38.0)\nCollecting kubernetes<12.0.0,>=8.0.0\n Downloading kubernetes-11.0.0-py3-none-any.whl (1.5 MB)\nRequirement already satisfied: google-auth>=1.6.1 in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (1.30.0)\nCollecting requests_toolbelt>=0.8.0\n Downloading requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)\nRequirement already satisfied: cloudpickle in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (1.6.0)\nCollecting kfp-server-api<2.0.0,>=0.2.5\n Downloading kfp-server-api-1.5.0.tar.gz (50 kB)\nRequirement already satisfied: jsonschema>=3.0.1 in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (3.2.0)\nCollecting tabulate\n Downloading tabulate-0.8.9-py3-none-any.whl (25 kB)\nRequirement already satisfied: click in /opt/conda/lib/python3.7/site-packages (from kfp==1.0.0) (7.1.2)\nCollecting Deprecated\n Downloading Deprecated-1.2.12-py2.py3-none-any.whl (9.5 kB)\nCollecting strip-hints\n Downloading strip-hints-0.1.9.tar.gz (30 kB)\nRequirement already satisfied: setuptools>=40.3.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (49.6.0.post20210108)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (4.2.2)\nRequirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (4.7.2)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (0.2.7)\nRequirement already satisfied: six>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==1.0.0) (1.16.0)\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in 
/opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==1.0.0) (2.25.1)\nRequirement already satisfied: google-resumable-media<2.0dev,>=1.2.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==1.0.0) (1.2.0)\nRequirement already satisfied: google-cloud-core<2.0dev,>=1.4.1 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==1.0.0) (1.6.0)\nRequirement already satisfied: google-api-core<2.0.0dev,>=1.21.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (1.26.3)\nRequirement already satisfied: packaging>=14.3 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.21.0->google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (20.9)\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.21.0->google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (1.53.0)\nRequirement already satisfied: protobuf>=3.12.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.21.0->google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (3.16.0)\nRequirement already satisfied: google-crc32c<2.0dev,>=1.0 in /opt/conda/lib/python3.7/site-packages (from google-resumable-media<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (1.1.2)\nRequirement already satisfied: cffi>=1.0.0 in /opt/conda/lib/python3.7/site-packages (from google-crc32c<2.0dev,>=1.0->google-resumable-media<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (1.14.5)\nRequirement already satisfied: pycparser in /opt/conda/lib/python3.7/site-packages (from cffi>=1.0.0->google-crc32c<2.0dev,>=1.0->google-resumable-media<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (2.20)\nRequirement already satisfied: attrs>=17.4.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==1.0.0) (21.2.0)\nRequirement already satisfied: importlib-metadata in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==1.0.0) (4.0.1)\nRequirement already satisfied: pyrsistent>=0.14.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==1.0.0) (0.17.3)\nRequirement already satisfied: urllib3>=1.15 in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.0.0) (1.26.4)\nRequirement already satisfied: certifi in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=0.2.5->kfp==1.0.0) (2020.12.5)\nRequirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from kubernetes<12.0.0,>=8.0.0->kfp==1.0.0) (0.57.0)\nRequirement already satisfied: requests-oauthlib in /opt/conda/lib/python3.7/site-packages (from kubernetes<12.0.0,>=8.0.0->kfp==1.0.0) (1.3.0)\nRequirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging>=14.3->google-api-core<2.0.0dev,>=1.21.0->google-cloud-core<2.0dev,>=1.4.1->google-cloud-storage>=1.13.0->kfp==1.0.0) (2.4.7)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth>=1.6.1->kfp==1.0.0) (0.4.8)\nRequirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (2.10)\nRequirement already satisfied: 
chardet<5,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-cloud-storage>=1.13.0->kfp==1.0.0) (4.0.0)\nCollecting termcolor\n Downloading termcolor-1.1.0.tar.gz (3.9 kB)\nRequirement already satisfied: wrapt<2,>=1.10 in /opt/conda/lib/python3.7/site-packages (from Deprecated->kfp==1.0.0) (1.12.1)\nRequirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->jsonschema>=3.0.1->kfp==1.0.0) (3.4.1)\nRequirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->jsonschema>=3.0.1->kfp==1.0.0) (3.7.4.3)\nRequirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from requests-oauthlib->kubernetes<12.0.0,>=8.0.0->kfp==1.0.0) (3.0.1)\nRequirement already satisfied: wheel in /opt/conda/lib/python3.7/site-packages (from strip-hints->kfp==1.0.0) (0.36.2)\nBuilding wheels for collected packages: kfp, kfp-server-api, fire, strip-hints, termcolor\n Building wheel for kfp (setup.py): started\n Building wheel for kfp (setup.py): finished with status 'done'\n Created wheel for kfp: filename=kfp-1.0.0-py3-none-any.whl size=159769 sha256=74a8b1d2b91b9957b95f2f878e91ba0a2b847ba6b479ba6090b199fee94f8de1\n Stored in directory: /root/.cache/pip/wheels/81/39/f2/ee01d785a5bd135e42e7721fedb05857badf763fc465a4e822\n Building wheel for kfp-server-api (setup.py): started\n Building wheel for kfp-server-api (setup.py): finished with status 'done'\n Created wheel for kfp-server-api: filename=kfp_server_api-1.5.0-py3-none-any.whl size=92524 sha256=b672a04ca3bcf1257f2981061c5a9b460c20f5e816ede414eb22892649d86973\n Stored in directory: /root/.cache/pip/wheels/1e/ab/eb/1608f904a1a3f2a28696129c6dbd3cac00bea2cdad226ee60e\n Building wheel for fire (setup.py): started\n Building wheel for fire (setup.py): finished with status 'done'\n Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115928 sha256=040e97679ac4b63443c3b3127e2f2b0afb4d70d927adcc11efa1f94be0084ba3\n Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226\n Building wheel for strip-hints (setup.py): started\n Building wheel for strip-hints (setup.py): finished with status 'done'\n Created wheel for strip-hints: filename=strip_hints-0.1.9-py2.py3-none-any.whl size=20993 sha256=9481cd3b4b52c0e9713db3553c23c1bda587a1075bcd59d71e99110b6f1b6533\n Stored in directory: /root/.cache/pip/wheels/2d/b8/4e/a3ec111d2db63cec88121bd7c0ab1a123bce3b55dd19dda5c1\n Building wheel for termcolor (setup.py): started\n Building wheel for termcolor (setup.py): finished with status 'done'\n Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4829 sha256=b67e2ecf8cbb39455760408250cf67d6ddf4f873c94b486c5e753b4e84f4c8db\n Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2\nSuccessfully built kfp kfp-server-api fire strip-hints termcolor\nInstalling collected packages: termcolor, tabulate, strip-hints, requests-toolbelt, kubernetes, kfp-server-api, Deprecated, scikit-learn, pandas, kfp, fire\n Attempting uninstall: kubernetes\n Found existing installation: kubernetes 12.0.1\n Uninstalling kubernetes-12.0.1:\n Successfully uninstalled kubernetes-12.0.1\n Attempting uninstall: scikit-learn\n Found existing installation: scikit-learn 0.24.2\n Uninstalling scikit-learn-0.24.2:\n Successfully uninstalled scikit-learn-0.24.2\n Attempting uninstall: pandas\n Found existing 
installation: pandas 1.2.4\n Uninstalling pandas-1.2.4:\n Successfully uninstalled pandas-1.2.4\n\u001b[91mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\nvisions 0.7.1 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.\nphik 0.11.2 requires pandas>=0.25.1, but you have pandas 0.24.2 which is incompatible.\npandas-profiling 3.0.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3, but you have pandas 0.24.2 which is incompatible.\n\u001b[0m\u001b[91mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv\n\u001b[0mSuccessfully installed Deprecated-1.2.12 fire-0.4.0 kfp-1.0.0 kfp-server-api-1.5.0 kubernetes-11.0.0 pandas-0.24.2 requests-toolbelt-0.9.1 scikit-learn-0.20.4 strip-hints-0.1.9 tabulate-0.8.9 termcolor-1.1.0\nRemoving intermediate container c4bf640bbcee\n ---> 94e41aa9f2e0\nStep 3/3 : RUN pip list | grep kfp\n ---> Running in cf3f27276384\nkfp 1.0.0\nkfp-server-api 1.5.0\nRemoving intermediate container cf3f27276384\n ---> e31322a505ba\nSuccessfully built e31322a505ba\nSuccessfully tagged gcr.io/dna-gcp-data/base_image:test2\nPUSH\nPushing gcr.io/dna-gcp-data/base_image:test2\nThe push refers to repository [gcr.io/dna-gcp-data/base_image]\nce2e56091c30: Preparing\nfed09d378d72: Preparing\n896901ec1a67: Preparing\n06a5bf49b163: Preparing\nb34dae69fc5d: Preparing\n0ffb7465dde9: Preparing\ne2563d1ada9a: Preparing\n42b027d1e826: Preparing\n636a7c2e7d03: Preparing\n1ba1158adf89: Preparing\n96e46d1341e8: Preparing\n954f6dc3f7f5: Preparing\n8760a171b659: Preparing\n5f70bf18a086: Preparing\na0710233fd2d: Preparing\n05449afa4be9: Preparing\n5b9e34b5cf74: Preparing\n8cafc6d2db45: Preparing\na5d4bacb0351: Preparing\n5153e1acaabc: Preparing\n96e46d1341e8: Waiting\n954f6dc3f7f5: Waiting\n8760a171b659: Waiting\n5f70bf18a086: Waiting\na0710233fd2d: Waiting\n05449afa4be9: Waiting\n5b9e34b5cf74: Waiting\n8cafc6d2db45: Waiting\n0ffb7465dde9: Waiting\ne2563d1ada9a: Waiting\n42b027d1e826: Waiting\n636a7c2e7d03: Waiting\n1ba1158adf89: Waiting\n5153e1acaabc: Waiting\n896901ec1a67: Layer already exists\nfed09d378d72: Layer already exists\nb34dae69fc5d: Layer already exists\n06a5bf49b163: Layer already exists\n0ffb7465dde9: Layer already exists\ne2563d1ada9a: Layer already exists\n42b027d1e826: Layer already exists\n636a7c2e7d03: Layer already exists\n1ba1158adf89: Layer already exists\n954f6dc3f7f5: Layer already exists\n8760a171b659: Layer already exists\n5f70bf18a086: Layer already exists\n96e46d1341e8: Layer already exists\n8cafc6d2db45: Layer already exists\n5b9e34b5cf74: Layer already exists\n05449afa4be9: Layer already exists\na0710233fd2d: Layer already exists\na5d4bacb0351: Layer already exists\n5153e1acaabc: Layer already exists\nce2e56091c30: Pushed\ntest2: digest: sha256:09e11684e6de0ac905077560c50a24b94b8b3d22afa167ccb7f2b92345068511 size: 4499\nDONE\n--------------------------------------------------------------------------------\n\nID CREATE_TIME DURATION SOURCE IMAGES STATUS\nd2e1016b-599c-4537-b03f-3a8e0039c2fc 2021-05-21T07:26:00+00:00 2M25S gs://dna-gcp-data_cloudbuild/source/1621581960.433286-cef9441cb3234402ad8faeccf31ce5fe.tgz gcr.io/dna-gcp-data/base_image:test2 SUCCESS\n"
]
],
[
[
"### Compile the pipeline\n\nYou can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler.\n\nTo compile the pipeline DSL using the **KFP** compiler.",
"_____no_output_____"
],
[
"#### Set the pipeline's compile time settings\n\nThe pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting KFP. If you want to use the `user-gcp-sa` service account you change the value of `USE_KFP_SA` to `True`.\n\nNote that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.",
"_____no_output_____"
]
],
[
[
"USE_KFP_SA = False\n\nCOMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'\nRUNTIME_VERSION = '1.15'\nPYTHON_VERSION = '3.7'\nENDPOINT='https://627be4a1d4049ed3-dot-us-central1.pipelines.googleusercontent.com'\n\n%env USE_KFP_SA={USE_KFP_SA}\n%env BASE_IMAGE={BASE_IMAGE}\n%env TRAINER_IMAGE={TRAINER_IMAGE}\n%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}\n%env RUNTIME_VERSION={RUNTIME_VERSION}\n%env PYTHON_VERSION={PYTHON_VERSION}\n%env ENDPOINT={ENDPOINT}",
"env: USE_KFP_SA=False\nenv: BASE_IMAGE=gcr.io/dna-gcp-data/base_image:test2\nenv: TRAINER_IMAGE=gcr.io/dna-gcp-data/trainer_image:test\nenv: COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/\nenv: RUNTIME_VERSION=1.15\nenv: PYTHON_VERSION=3.7\nenv: ENDPOINT=https://627be4a1d4049ed3-dot-us-central1.pipelines.googleusercontent.com\n"
]
],
[
[
"#### Use the CLI compiler to compile the pipeline",
"_____no_output_____"
]
],
[
[
"!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml",
"_____no_output_____"
]
],
[
[
"The result is the `covertype_training_pipeline.yaml` file. ",
"_____no_output_____"
]
],
[
[
"!head covertype_training_pipeline.yaml",
"_____no_output_____"
]
],
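[
[
"As noted above, the pipeline DSL can also be compiled with the **KFP SDK** instead of the `dsl-compile` CLI. The cell below is a minimal sketch of that approach and is not executed here; it assumes the pipeline function defined in `pipeline/covertype_training_pipeline.py` is named `covertype_train`, so adjust the import if your module uses a different name.",
"_____no_output_____"
]
],
[
[
"# Sketch only (not executed): compile the pipeline with the KFP SDK compiler.\n# Assumption: the pipeline function in pipeline/covertype_training_pipeline.py is named covertype_train.\nfrom kfp import compiler\nfrom pipeline.covertype_training_pipeline import covertype_train\n\ncompiler.Compiler().compile(\n    pipeline_func=covertype_train,\n    package_path='covertype_training_pipeline.yaml')",
"_____no_output_____"
]
],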
[
[
"### Deploy the pipeline package",
"_____no_output_____"
]
],
[
[
"PIPELINE_NAME='covertype_continuous_training'\n\n!kfp --endpoint $ENDPOINT pipeline upload \\\n-p $PIPELINE_NAME \\\ncovertype_training_pipeline.yaml",
"Pipeline 7eda6268-681e-41eb-8f65-a9c853030888 has been submitted\n\nPipeline Details\n------------------\nID 7eda6268-681e-41eb-8f65-a9c853030888\nName covertype_continuous_training\nDescription\nUploaded at 2021-05-21T08:50:00+00:00\n+--------------------------+--------------------------------------------------+\n| Parameter Name | Default Value |\n+==========================+==================================================+\n| project_id | |\n+--------------------------+--------------------------------------------------+\n| region | |\n+--------------------------+--------------------------------------------------+\n| source_table_name | |\n+--------------------------+--------------------------------------------------+\n| gcs_root | |\n+--------------------------+--------------------------------------------------+\n| dataset_id | |\n+--------------------------+--------------------------------------------------+\n| evaluation_metric_name | |\n+--------------------------+--------------------------------------------------+\n| model_id | |\n+--------------------------+--------------------------------------------------+\n| version_id | |\n+--------------------------+--------------------------------------------------+\n| replace_existing_version | |\n+--------------------------+--------------------------------------------------+\n| experiment_id | |\n+--------------------------+--------------------------------------------------+\n| hypertune_settings | { |\n| | \"hyperparameters\": { |\n| | \"goal\": \"MAXIMIZE\", |\n| | \"maxTrials\": 6, |\n| | \"maxParallelTrials\": 3, |\n| | \"hyperparameterMetricTag\": \"accuracy\", |\n| | \"enableTrialEarlyStopping\": True, |\n| | \"params\": [ |\n| | { |\n| | \"parameterName\": \"max_iter\", |\n| | \"type\": \"DISCRETE\", |\n| | \"discreteValues\": [500, 1000] |\n| | }, |\n| | { |\n| | \"parameterName\": \"alpha\", |\n| | \"type\": \"DOUBLE\", |\n| | \"minValue\": 0.0001, |\n| | \"maxValue\": 0.001, |\n| | \"scaleType\": \"UNIT_LINEAR_SCALE\" |\n| | } |\n| | ] |\n| | } |\n| | } |\n+--------------------------+--------------------------------------------------+\n| dataset_location | US |\n+--------------------------+--------------------------------------------------+\n"
]
],
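[
[
"The upload can also be performed from Python with the KFP SDK client. The cell below is a sketch only and is not executed here; it reuses the `ENDPOINT` and `PIPELINE_NAME` values defined above.",
"_____no_output_____"
]
],
[
[
"# Sketch only (not executed): upload the compiled package with the KFP SDK client.\nimport kfp\n\nclient = kfp.Client(host=ENDPOINT)\npipeline = client.upload_pipeline(\n    pipeline_package_path='covertype_training_pipeline.yaml',\n    pipeline_name=PIPELINE_NAME)\nprint(pipeline.id)",
"_____no_output_____"
]
],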
[
[
"## Submitting pipeline runs\n\nYou can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.\n\n### List the pipelines in AI Platform Pipelines",
"_____no_output_____"
]
],
[
[
"!kfp --endpoint $ENDPOINT experiment list",
"+--------------------------------------+-------------------------------+---------------------------+\n| Experiment ID | Name | Created at |\n+======================================+===============================+===========================+\n| 889c1532-fee9-4b06-bc2b-10b1cd332c9a | Covertype_Classifier_Training | 2021-05-19T12:54:04+00:00 |\n+--------------------------------------+-------------------------------+---------------------------+\n| 3794c159-24a7-41e0-89be-f23152971870 | helloworld-dev | 2021-05-06T16:07:23+00:00 |\n+--------------------------------------+-------------------------------+---------------------------+\n| 821a36b0-8db9-4604-9e65-035b8f70c77d | my_pipeline | 2021-05-06T10:35:19+00:00 |\n+--------------------------------------+-------------------------------+---------------------------+\n| 6587995a-9b11-4a8e-a2fc-d0b80534dfe8 | Default | 2021-05-04T02:29:12+00:00 |\n+--------------------------------------+-------------------------------+---------------------------+\n"
]
],
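[
[
"The same listing can be retrieved with the KFP SDK client; the cell below is a sketch only and is not executed here.",
"_____no_output_____"
]
],
[
[
"# Sketch only (not executed): list experiments with the KFP SDK client.\nimport kfp\n\nclient = kfp.Client(host=ENDPOINT)\nexperiments = client.list_experiments(page_size=100)\nfor experiment in experiments.experiments or []:\n    print(experiment.id, experiment.name, experiment.created_at)",
"_____no_output_____"
]
],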
[
[
"### Submit a run\n\nFind the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID` .\n\n",
"_____no_output_____"
]
],
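[
[
"You can copy the ID from the output of the `kfp pipeline upload` command above. Alternatively, the cell below sketches how the ID could be looked up by name with the KFP SDK client (not executed here).",
"_____no_output_____"
]
],
[
[
"# Sketch only (not executed): look up the pipeline ID by name with the KFP SDK client.\nimport kfp\n\nclient = kfp.Client(host=ENDPOINT)\npipelines = client.list_pipelines(page_size=100).pipelines or []\nprint([p.id for p in pipelines if p.name == PIPELINE_NAME])",
"_____no_output_____"
]
],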
[
[
"PIPELINE_ID='7eda6268-681e-41eb-8f65-a9c853030888' # TO DO: REPLACE WITH YOUR PIPELINE ID ",
"_____no_output_____"
],
[
"EXPERIMENT_NAME = 'Covertype_Classifier_Training'\nRUN_ID = 'Run_001'\nSOURCE_TABLE = 'covertype_dataset.covertype'\nDATASET_ID = 'covertype_dataset'\nEVALUATION_METRIC = 'accuracy'\nMODEL_ID = 'covertype_classifier'\nVERSION_ID = 'v01'\nREPLACE_EXISTING_VERSION = 'True'\nEXPERIMENT_ID = '889c1532-fee9-4b06-bc2b-10b1cd332c9a'\nGCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)",
"_____no_output_____"
]
],
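[
[
"With these variables set, the run could also be submitted directly from Python with the KFP SDK client. The cell below is a sketch only and is not executed here; it mirrors the parameters passed to the CLI command and assumes `PROJECT_ID` and `REGION` were defined earlier in the notebook. The CLI-based submission used here is described next.",
"_____no_output_____"
]
],
[
[
"# Sketch only (not executed): submit the run with the KFP SDK client.\n# Assumption: PROJECT_ID and REGION were defined earlier in the notebook.\nimport kfp\n\nclient = kfp.Client(host=ENDPOINT)\nexperiment = client.get_experiment(experiment_name=EXPERIMENT_NAME)\nrun = client.run_pipeline(\n    experiment_id=experiment.id,\n    job_name=RUN_ID,\n    pipeline_id=PIPELINE_ID,\n    params={\n        'project_id': PROJECT_ID,\n        'gcs_root': GCS_STAGING_PATH,\n        'region': REGION,\n        'source_table_name': SOURCE_TABLE,\n        'dataset_id': DATASET_ID,\n        'evaluation_metric_name': EVALUATION_METRIC,\n        'model_id': MODEL_ID,\n        'version_id': VERSION_ID,\n        'replace_existing_version': REPLACE_EXISTING_VERSION,\n        'experiment_id': EXPERIMENT_ID,\n    })\nprint(run.id)",
"_____no_output_____"
]
],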
[
[
"Run the pipeline using the `kfp` command line by retrieving the variables from the environment to pass to the pipeline where:\n\n- EXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want. If the experiment does not exist it will be created by the command\n- RUN_ID is the name of the run. You can use an arbitrary name\n- PIPELINE_ID is the id of your pipeline. Use the value retrieved by the `kfp pipeline list` command\n- GCS_STAGING_PATH is the URI to the Cloud Storage location used by the pipeline to store intermediate files. By default, it is set to the `staging` folder in your artifact store.\n- REGION is a compute region for AI Platform Training and Prediction. \n\nYou should be already familiar with these and other parameters passed to the command. If not go back and review the pipeline code.",
"_____no_output_____"
]
],
[
[
"!kfp --endpoint $ENDPOINT run submit \\\n-e $EXPERIMENT_NAME \\\n-r $RUN_ID \\\n-p $PIPELINE_ID \\\nproject_id=$PROJECT_ID \\\ngcs_root=$GCS_STAGING_PATH \\\nregion=$REGION \\\nsource_table_name=$SOURCE_TABLE \\\ndataset_id=$DATASET_ID \\\nevaluation_metric_name=$EVALUATION_METRIC \\\nmodel_id=$MODEL_ID \\\nversion_id=$VERSION_ID \\\nreplace_existing_version=$REPLACE_EXISTING_VERSION \\\nexperiment_id=$EXPERIMENT_ID",
"Run b1b45663-292c-4d0d-ba93-66671abde854 is submitted\n+--------------------------------------+---------+----------+---------------------------+\n| run id | name | status | created at |\n+======================================+=========+==========+===========================+\n| b1b45663-292c-4d0d-ba93-66671abde854 | Run_001 | | 2021-05-21T09:13:53+00:00 |\n+--------------------------------------+---------+----------+---------------------------+\n"
],
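[
"# Optional sketch (not executed): monitor the run submitted above with the KFP SDK client\n# instead of the Pipelines UI. Assumptions: ENDPOINT is reachable from this notebook and\n# RUN_UUID is set to the run id printed by the CLI command above.\nimport kfp\n\nRUN_UUID = 'b1b45663-292c-4d0d-ba93-66671abde854'  # replace with your run id\nclient = kfp.Client(host=ENDPOINT)\nresult = client.wait_for_run_completion(run_id=RUN_UUID, timeout=7200)\nprint(result.run.status)",
"_____no_output_____"
],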
[
"#!kfp --endpoint $ENDPOINT experiment list\nfrom typing import NamedTuple\ndef get_previous_run_metric( ENDPOINT: str, experiment_id: str ) -> NamedTuple('Outputs', [('run_id', str), ('accuracy', float)]):\n\n import kfp as kfp\n runs_details= kfp.Client(host=ENDPOINT).list_runs(experiment_id=experiment_id, sort_by='created_at desc').to_dict()\n# print(runs_details)\n latest_success_run_details=''\n print(\"runs_details['runs'] type {}\".format(type(runs_details['runs'])))\n for i in runs_details['runs']:\n print(\"i['status'] type {}\".format(type(i['status'])))\n if i['status'] == 'Succeeded':\n run_id=i['id']\n accuracy=i['metrics'][0]['number_value']\n break;\n print(\"accuracy={}\".format(accuracy))\n print(type(run_id))\n return (run_id, accuracy)\n\na=get_previous_run_metric(ENDPOINT, experiment_id)\nprint(a)\n",
"runs_details['runs'] type <class 'list'>\ni['status'] type <class 'str'>\naccuracy=0.7225525168450257\n<class 'str'>\n('d16368c5-24bb-4292-9060-315755f79b0b', 0.7225525168450257)\n"
],
[
"import kfp as kfp\nruns_details= kfp.Client(host=ENDPOINT).list_runs(experiment_id=experiment_id, sort_by='created_at desc').to_dict()\nlatest_success_run_details=''\nprint(\"runs_details['runs'] type {}\".format(type(runs_details['runs'])))\nfor i in runs_details['runs']:\n print(\"i['status'] type {}\".format(type(i['status'])))\n if i['status'] == 'Succeeded':\n latest_success_run_details=i\n break;\nrun_id=latest_success_run_details['id']\nrun_id_details=kfp.Client(host=ENDPOINT).get_run(run_id=run_id).to_dict()\nprint(run_id_details)\naccuracy=run_id_details['run']['metrics'][0]['number_value']\nprint(accuracy)",
"runs_details['runs'] type <class 'list'>\ni['status'] type <class 'str'>\n{'run': {'id': 'd16368c5-24bb-4292-9060-315755f79b0b', 'name': 'Run_001', 'storage_state': None, 'description': None, 'pipeline_spec': {'pipeline_id': None, 'pipeline_name': None, 'workflow_manifest': '{\"kind\":\"Workflow\",\"apiVersion\":\"argoproj.io/v1alpha1\",\"metadata\":{\"generateName\":\"covertype-classifier-training-\",\"creationTimestamp\":null,\"annotations\":{\"pipelines.kubeflow.org/pipeline_spec\":\"{\\\\\"description\\\\\": \\\\\"The pipeline training and deploying the Covertype classifierpipeline_yaml\\\\\", \\\\\"inputs\\\\\": [{\\\\\"name\\\\\": \\\\\"project_id\\\\\"}, {\\\\\"name\\\\\": \\\\\"region\\\\\"}, {\\\\\"name\\\\\": \\\\\"source_table_name\\\\\"}, {\\\\\"name\\\\\": \\\\\"gcs_root\\\\\"}, {\\\\\"name\\\\\": \\\\\"dataset_id\\\\\"}, {\\\\\"name\\\\\": \\\\\"evaluation_metric_name\\\\\"}, {\\\\\"name\\\\\": \\\\\"evaluation_metric_threshold\\\\\"}, {\\\\\"name\\\\\": \\\\\"model_id\\\\\"}, {\\\\\"name\\\\\": \\\\\"version_id\\\\\"}, {\\\\\"name\\\\\": \\\\\"replace_existing_version\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\\\\n{\\\\\\\\n \\\\\\\\\\\\\"hyperparameters\\\\\\\\\\\\\": {\\\\\\\\n \\\\\\\\\\\\\"goal\\\\\\\\\\\\\": \\\\\\\\\\\\\"MAXIMIZE\\\\\\\\\\\\\",\\\\\\\\n \\\\\\\\\\\\\"maxTrials\\\\\\\\\\\\\": 6,\\\\\\\\n \\\\\\\\\\\\\"maxParallelTrials\\\\\\\\\\\\\": 3,\\\\\\\\n \\\\\\\\\\\\\"hyperparameterMetricTag\\\\\\\\\\\\\": \\\\\\\\\\\\\"accuracy\\\\\\\\\\\\\",\\\\\\\\n \\\\\\\\\\\\\"enableTrialEarlyStopping\\\\\\\\\\\\\": True,\\\\\\\\n \\\\\\\\\\\\\"params\\\\\\\\\\\\\": [\\\\\\\\n {\\\\\\\\n \\\\\\\\\\\\\"parameterName\\\\\\\\\\\\\": \\\\\\\\\\\\\"max_iter\\\\\\\\\\\\\",\\\\\\\\n \\\\\\\\\\\\\"type\\\\\\\\\\\\\": \\\\\\\\\\\\\"DISCRETE\\\\\\\\\\\\\",\\\\\\\\n \\\\\\\\\\\\\"discreteValues\\\\\\\\\\\\\": [500, 1000]\\\\\\\\n },\\\\\\\\n {\\\\\\\\n \\\\\\\\\\\\\"parameterName\\\\\\\\\\\\\": \\\\\\\\\\\\\"alpha\\\\\\\\\\\\\",\\\\\\\\n \\\\\\\\\\\\\"type\\\\\\\\\\\\\": \\\\\\\\\\\\\"DOUBLE\\\\\\\\\\\\\",\\\\\\\\n \\\\\\\\\\\\\"minValue\\\\\\\\\\\\\": 0.0001,\\\\\\\\n \\\\\\\\\\\\\"maxValue\\\\\\\\\\\\\": 0.001,\\\\\\\\n \\\\\\\\\\\\\"scaleType\\\\\\\\\\\\\": \\\\\\\\\\\\\"UNIT_LINEAR_SCALE\\\\\\\\\\\\\"\\\\\\\\n }\\\\\\\\n ]\\\\\\\\n }\\\\\\\\n}\\\\\\\\n\\\\\", \\\\\"name\\\\\": \\\\\"hypertune_settings\\\\\", \\\\\"optional\\\\\": true}, {\\\\\"default\\\\\": \\\\\"US\\\\\", \\\\\"name\\\\\": \\\\\"dataset_location\\\\\", \\\\\"optional\\\\\": true}], \\\\\"name\\\\\": \\\\\"Covertype Classifier Training\\\\\"}\"}},\"spec\":{\"templates\":[{\"name\":\"bigquery-query\",\"arguments\":{},\"inputs\":{\"parameters\":[{\"name\":\"dataset_id\"},{\"name\":\"dataset_location\"},{\"name\":\"gcs_root\"},{\"name\":\"project_id\"},{\"name\":\"source_table_name\"}]},\"outputs\":{\"parameters\":[{\"name\":\"bigquery-query-output_gcs_path\",\"valueFrom\":{\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}}],\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\"},{\"name\":\"bigquery-query-output_gcs_path\",\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}]},\"metadata\":{\"annotations\":{\"pipelines.kubeflow.org/component_spec\":\"{\\\\\"description\\\\\": \\\\\"A Kubeflow Pipeline component to submit a query to Google Cloud Bigquery \\\\\\\\nservice and dump outputs to a Google Cloud Storage blob. 
\\\\\\\\n\\\\\", \\\\\"inputs\\\\\": [{\\\\\"description\\\\\": \\\\\"The query used by Bigquery service to fetch the results.\\\\\", \\\\\"name\\\\\": \\\\\"query\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"description\\\\\": \\\\\"The project to execute the query job.\\\\\", \\\\\"name\\\\\": \\\\\"project_id\\\\\", \\\\\"type\\\\\": \\\\\"GCPProjectID\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The ID of the persistent dataset to keep the results of the query.\\\\\", \\\\\"name\\\\\": \\\\\"dataset_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The ID of the table to keep the results of the query. If absent, the operation will generate a random id for the table.\\\\\", \\\\\"name\\\\\": \\\\\"table_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The path to the Cloud Storage bucket to store the query output.\\\\\", \\\\\"name\\\\\": \\\\\"output_gcs_path\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"default\\\\\": \\\\\"US\\\\\", \\\\\"description\\\\\": \\\\\"The location to create the dataset. Defaults to `US`.\\\\\", \\\\\"name\\\\\": \\\\\"dataset_location\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The full config spec for the query job.See [QueryJobConfig](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.job.QueryJobConfig.html#google.cloud.bigquery.job.QueryJobConfig) for details.\\\\\", \\\\\"name\\\\\": \\\\\"job_config\\\\\", \\\\\"type\\\\\": \\\\\"Dict\\\\\"}], \\\\\"metadata\\\\\": {\\\\\"labels\\\\\": {\\\\\"add-pod-env\\\\\": \\\\\"true\\\\\"}}, \\\\\"name\\\\\": \\\\\"Bigquery - Query\\\\\", \\\\\"outputs\\\\\": [{\\\\\"description\\\\\": \\\\\"The path to the Cloud Storage bucket containing the query output in CSV format.\\\\\", \\\\\"name\\\\\": \\\\\"output_gcs_path\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"name\\\\\": \\\\\"MLPipeline UI metadata\\\\\", \\\\\"type\\\\\": \\\\\"UI metadata\\\\\"}]}\"},\"labels\":{\"add-pod-env\":\"true\"}},\"container\":{\"name\":\"\",\"image\":\"gcr.io/ml-pipeline/ml-pipeline-gcp:e66dcb18607406330f953bf99b04fe7c3ed1a4a8\",\"args\":[\"--ui_metadata_path\",\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"kfp_component.google.bigquery\",\"query\",\"--query\",\"\\\\n SELECT *\\\\n FROM \\\\n `{{inputs.parameters.source_table_name}}` AS cover\\\\n WHERE \\\\n MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)\\\\n 
\",\"--project_id\",\"{{inputs.parameters.project_id}}\",\"--dataset_id\",\"{{inputs.parameters.dataset_id}}\",\"--table_id\",\"\",\"--dataset_location\",\"{{inputs.parameters.dataset_location}}\",\"--output_gcs_path\",\"{{inputs.parameters.gcs_root}}/datasets/training/data.csv\",\"--job_config\",\"\"],\"env\":[{\"name\":\"KFP_POD_NAME\",\"value\":\"{{pod.name}}\"},{\"name\":\"KFP_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"KFP_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"resources\":{}}},{\"name\":\"bigquery-query-2\",\"arguments\":{},\"inputs\":{\"parameters\":[{\"name\":\"dataset_id\"},{\"name\":\"dataset_location\"},{\"name\":\"gcs_root\"},{\"name\":\"project_id\"},{\"name\":\"source_table_name\"}]},\"outputs\":{\"parameters\":[{\"name\":\"bigquery-query-2-output_gcs_path\",\"valueFrom\":{\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}}],\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\"},{\"name\":\"bigquery-query-2-output_gcs_path\",\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}]},\"metadata\":{\"annotations\":{\"pipelines.kubeflow.org/component_spec\":\"{\\\\\"description\\\\\": \\\\\"A Kubeflow Pipeline component to submit a query to Google Cloud Bigquery \\\\\\\\nservice and dump outputs to a Google Cloud Storage blob. \\\\\\\\n\\\\\", \\\\\"inputs\\\\\": [{\\\\\"description\\\\\": \\\\\"The query used by Bigquery service to fetch the results.\\\\\", \\\\\"name\\\\\": \\\\\"query\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"description\\\\\": \\\\\"The project to execute the query job.\\\\\", \\\\\"name\\\\\": \\\\\"project_id\\\\\", \\\\\"type\\\\\": \\\\\"GCPProjectID\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The ID of the persistent dataset to keep the results of the query.\\\\\", \\\\\"name\\\\\": \\\\\"dataset_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The ID of the table to keep the results of the query. If absent, the operation will generate a random id for the table.\\\\\", \\\\\"name\\\\\": \\\\\"table_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The path to the Cloud Storage bucket to store the query output.\\\\\", \\\\\"name\\\\\": \\\\\"output_gcs_path\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"default\\\\\": \\\\\"US\\\\\", \\\\\"description\\\\\": \\\\\"The location to create the dataset. 
Defaults to `US`.\\\\\", \\\\\"name\\\\\": \\\\\"dataset_location\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The full config spec for the query job.See [QueryJobConfig](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.job.QueryJobConfig.html#google.cloud.bigquery.job.QueryJobConfig) for details.\\\\\", \\\\\"name\\\\\": \\\\\"job_config\\\\\", \\\\\"type\\\\\": \\\\\"Dict\\\\\"}], \\\\\"metadata\\\\\": {\\\\\"labels\\\\\": {\\\\\"add-pod-env\\\\\": \\\\\"true\\\\\"}}, \\\\\"name\\\\\": \\\\\"Bigquery - Query\\\\\", \\\\\"outputs\\\\\": [{\\\\\"description\\\\\": \\\\\"The path to the Cloud Storage bucket containing the query output in CSV format.\\\\\", \\\\\"name\\\\\": \\\\\"output_gcs_path\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"name\\\\\": \\\\\"MLPipeline UI metadata\\\\\", \\\\\"type\\\\\": \\\\\"UI metadata\\\\\"}]}\"},\"labels\":{\"add-pod-env\":\"true\"}},\"container\":{\"name\":\"\",\"image\":\"gcr.io/ml-pipeline/ml-pipeline-gcp:e66dcb18607406330f953bf99b04fe7c3ed1a4a8\",\"args\":[\"--ui_metadata_path\",\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"kfp_component.google.bigquery\",\"query\",\"--query\",\"\\\\n SELECT *\\\\n FROM \\\\n `{{inputs.parameters.source_table_name}}` AS cover\\\\n WHERE \\\\n MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)\\\\n \",\"--project_id\",\"{{inputs.parameters.project_id}}\",\"--dataset_id\",\"{{inputs.parameters.dataset_id}}\",\"--table_id\",\"\",\"--dataset_location\",\"{{inputs.parameters.dataset_location}}\",\"--output_gcs_path\",\"{{inputs.parameters.gcs_root}}/datasets/validation/data.csv\",\"--job_config\",\"\"],\"env\":[{\"name\":\"KFP_POD_NAME\",\"value\":\"{{pod.name}}\"},{\"name\":\"KFP_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"KFP_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"resources\":{}}},{\"name\":\"bigquery-query-3\",\"arguments\":{},\"inputs\":{\"parameters\":[{\"name\":\"dataset_id\"},{\"name\":\"dataset_location\"},{\"name\":\"gcs_root\"},{\"name\":\"project_id\"},{\"name\":\"source_table_name\"}]},\"outputs\":{\"parameters\":[{\"name\":\"bigquery-query-3-output_gcs_path\",\"valueFrom\":{\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}}],\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\"},{\"name\":\"bigquery-query-3-output_gcs_path\",\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}]},\"metadata\":{\"annotations\":{\"pipelines.kubeflow.org/component_spec\":\"{\\\\\"description\\\\\": \\\\\"A Kubeflow Pipeline component to submit a query to Google Cloud Bigquery \\\\\\\\nservice and dump outputs to a Google Cloud Storage blob. 
\\\\\\\\n\\\\\", \\\\\"inputs\\\\\": [{\\\\\"description\\\\\": \\\\\"The query used by Bigquery service to fetch the results.\\\\\", \\\\\"name\\\\\": \\\\\"query\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"description\\\\\": \\\\\"The project to execute the query job.\\\\\", \\\\\"name\\\\\": \\\\\"project_id\\\\\", \\\\\"type\\\\\": \\\\\"GCPProjectID\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The ID of the persistent dataset to keep the results of the query.\\\\\", \\\\\"name\\\\\": \\\\\"dataset_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The ID of the table to keep the results of the query. If absent, the operation will generate a random id for the table.\\\\\", \\\\\"name\\\\\": \\\\\"table_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The path to the Cloud Storage bucket to store the query output.\\\\\", \\\\\"name\\\\\": \\\\\"output_gcs_path\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"default\\\\\": \\\\\"US\\\\\", \\\\\"description\\\\\": \\\\\"The location to create the dataset. Defaults to `US`.\\\\\", \\\\\"name\\\\\": \\\\\"dataset_location\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"The full config spec for the query job.See [QueryJobConfig](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.job.QueryJobConfig.html#google.cloud.bigquery.job.QueryJobConfig) for details.\\\\\", \\\\\"name\\\\\": \\\\\"job_config\\\\\", \\\\\"type\\\\\": \\\\\"Dict\\\\\"}], \\\\\"metadata\\\\\": {\\\\\"labels\\\\\": {\\\\\"add-pod-env\\\\\": \\\\\"true\\\\\"}}, \\\\\"name\\\\\": \\\\\"Bigquery - Query\\\\\", \\\\\"outputs\\\\\": [{\\\\\"description\\\\\": \\\\\"The path to the Cloud Storage bucket containing the query output in CSV format.\\\\\", \\\\\"name\\\\\": \\\\\"output_gcs_path\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"name\\\\\": \\\\\"MLPipeline UI metadata\\\\\", \\\\\"type\\\\\": \\\\\"UI metadata\\\\\"}]}\"},\"labels\":{\"add-pod-env\":\"true\"}},\"container\":{\"name\":\"\",\"image\":\"gcr.io/ml-pipeline/ml-pipeline-gcp:e66dcb18607406330f953bf99b04fe7c3ed1a4a8\",\"args\":[\"--ui_metadata_path\",\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"kfp_component.google.bigquery\",\"query\",\"--query\",\"\\\\n SELECT *\\\\n FROM \\\\n `{{inputs.parameters.source_table_name}}` AS cover\\\\n WHERE \\\\n MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (9)\\\\n 
\",\"--project_id\",\"{{inputs.parameters.project_id}}\",\"--dataset_id\",\"{{inputs.parameters.dataset_id}}\",\"--table_id\",\"\",\"--dataset_location\",\"{{inputs.parameters.dataset_location}}\",\"--output_gcs_path\",\"{{inputs.parameters.gcs_root}}/datasets/testing/data.csv\",\"--job_config\",\"\"],\"env\":[{\"name\":\"KFP_POD_NAME\",\"value\":\"{{pod.name}}\"},{\"name\":\"KFP_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"KFP_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"resources\":{}}},{\"name\":\"condition-1\",\"arguments\":{},\"inputs\":{\"parameters\":[{\"name\":\"model_id\"},{\"name\":\"project_id\"},{\"name\":\"replace_existing_version\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\"},{\"name\":\"version_id\"}]},\"outputs\":{},\"metadata\":{},\"dag\":{\"tasks\":[{\"name\":\"deploying-a-trained-model-to-cloud-machine-learning-engine\",\"template\":\"deploying-a-trained-model-to-cloud-machine-learning-engine\",\"arguments\":{\"parameters\":[{\"name\":\"model_id\",\"value\":\"{{inputs.parameters.model_id}}\"},{\"name\":\"project_id\",\"value\":\"{{inputs.parameters.project_id}}\"},{\"name\":\"replace_existing_version\",\"value\":\"{{inputs.parameters.replace_existing_version}}\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\",\"value\":\"{{inputs.parameters.submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir}}\"},{\"name\":\"version_id\",\"value\":\"{{inputs.parameters.version_id}}\"}]}}]}},{\"name\":\"covertype-classifier-training\",\"arguments\":{},\"inputs\":{\"parameters\":[{\"name\":\"dataset_id\"},{\"name\":\"dataset_location\"},{\"name\":\"evaluation_metric_name\"},{\"name\":\"evaluation_metric_threshold\"},{\"name\":\"gcs_root\"},{\"name\":\"hypertune_settings\"},{\"name\":\"model_id\"},{\"name\":\"project_id\"},{\"name\":\"region\"},{\"name\":\"replace_existing_version\"},{\"name\":\"source_table_name\"},{\"name\":\"version_id\"}]},\"outputs\":{},\"metadata\":{},\"dag\":{\"tasks\":[{\"name\":\"bigquery-query\",\"template\":\"bigquery-query\",\"arguments\":{\"parameters\":[{\"name\":\"dataset_id\",\"value\":\"{{inputs.parameters.dataset_id}}\"},{\"name\":\"dataset_location\",\"value\":\"{{inputs.parameters.dataset_location}}\"},{\"name\":\"gcs_root\",\"value\":\"{{inputs.parameters.gcs_root}}\"},{\"name\":\"project_id\",\"value\":\"{{inputs.parameters.project_id}}\"},{\"name\":\"source_table_name\",\"value\":\"{{inputs.parameters.source_table_name}}\"}]}},{\"name\":\"bigquery-query-2\",\"template\":\"bigquery-query-2\",\"arguments\":{\"parameters\":[{\"name\":\"dataset_id\",\"value\":\"{{inputs.parameters.dataset_id}}\"},{\"name\":\"dataset_location\",\"value\":\"{{inputs.parameters.dataset_location}}\"},{\"name\":\"gcs_root\",\"value\":\"{{inputs.parameters.gcs_root}}\"},{\"name\":\"project_id\",\"value\":\"{{inputs.parameters.project_id}}\"},{\"name\":\"source_table_name\",\"value\":\"{{inputs.parameters.source_table_name}}\"}]}},{\"name\":\"bigquery-query-3\",\"template\":\"bigquery-query-3\",\"arguments\":{\"parameters\":[{\"name\":\"dataset_id\",\"value\":\"{{inputs.parameters.dataset_id}}\"},{\"name\":\"dataset_location\",\"value\":\"{{inputs.parameters.dataset_location}}\"},{\"name\":\"gcs_root\",\"value\":\"{{inputs.parameters.gcs_root}}\"},{\"name\":\"project_id\",\"value\":\"{{inputs.parameters.project_id}}\"},{\"name\":\"source_table_name\",\"value\":\"{{inputs.parameters.source_table_name}}\"}]}},{\"name\":\"cond
ition-1\",\"template\":\"condition-1\",\"arguments\":{\"parameters\":[{\"name\":\"model_id\",\"value\":\"{{inputs.parameters.model_id}}\"},{\"name\":\"project_id\",\"value\":\"{{inputs.parameters.project_id}}\"},{\"name\":\"replace_existing_version\",\"value\":\"{{inputs.parameters.replace_existing_version}}\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\",\"value\":\"{{tasks.submitting-a-cloud-ml-training-job-as-a-pipeline-step-2.outputs.parameters.submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir}}\"},{\"name\":\"version_id\",\"value\":\"{{inputs.parameters.version_id}}\"}]},\"dependencies\":[\"evaluate-model\",\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2\"],\"when\":\"{{tasks.evaluate-model.outputs.parameters.evaluate-model-metric_value}} \\\\u003e {{inputs.parameters.evaluation_metric_threshold}}\"},{\"name\":\"evaluate-model\",\"template\":\"evaluate-model\",\"arguments\":{\"parameters\":[{\"name\":\"bigquery-query-3-output_gcs_path\",\"value\":\"{{tasks.bigquery-query-3.outputs.parameters.bigquery-query-3-output_gcs_path}}\"},{\"name\":\"evaluation_metric_name\",\"value\":\"{{inputs.parameters.evaluation_metric_name}}\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\",\"value\":\"{{tasks.submitting-a-cloud-ml-training-job-as-a-pipeline-step-2.outputs.parameters.submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir}}\"}]},\"dependencies\":[\"bigquery-query-3\",\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2\"]},{\"name\":\"retrieve-best-run\",\"template\":\"retrieve-best-run\",\"arguments\":{\"parameters\":[{\"name\":\"project_id\",\"value\":\"{{inputs.parameters.project_id}}\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_id\",\"value\":\"{{tasks.submitting-a-cloud-ml-training-job-as-a-pipeline-step.outputs.parameters.submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_id}}\"}]},\"dependencies\":[\"submitting-a-cloud-ml-training-job-as-a-pipeline-step\"]},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step\",\"template\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step\",\"arguments\":{\"parameters\":[{\"name\":\"bigquery-query-2-output_gcs_path\",\"value\":\"{{tasks.bigquery-query-2.outputs.parameters.bigquery-query-2-output_gcs_path}}\"},{\"name\":\"bigquery-query-output_gcs_path\",\"value\":\"{{tasks.bigquery-query.outputs.parameters.bigquery-query-output_gcs_path}}\"},{\"name\":\"gcs_root\",\"value\":\"{{inputs.parameters.gcs_root}}\"},{\"name\":\"hypertune_settings\",\"value\":\"{{inputs.parameters.hypertune_settings}}\"},{\"name\":\"project_id\",\"value\":\"{{inputs.parameters.project_id}}\"},{\"name\":\"region\",\"value\":\"{{inputs.parameters.region}}\"}]},\"dependencies\":[\"bigquery-query\",\"bigquery-query-2\"]},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2\",\"template\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2\",\"arguments\":{\"parameters\":[{\"name\":\"bigquery-query-2-output_gcs_path\",\"value\":\"{{tasks.bigquery-query-2.outputs.parameters.bigquery-query-2-output_gcs_path}}\"},{\"name\":\"bigquery-query-output_gcs_path\",\"value\":\"{{tasks.bigquery-query.outputs.parameters.bigquery-query-output_gcs_path}}\"},{\"name\":\"gcs_root\",\"value\":\"{{inputs.parameters.gcs_root}}\"},{\"name\":\"project_id\",\"value\":\"{{inputs.parameters.project_id}}\"},{\"name\":\"region\",\"value\":\"{{inputs.parameters.region}}\"},{\"name\":\"retrieve-best-run-alpha\",\"value\":\"{{tasks.re
trieve-best-run.outputs.parameters.retrieve-best-run-alpha}}\"},{\"name\":\"retrieve-best-run-max_iter\",\"value\":\"{{tasks.retrieve-best-run.outputs.parameters.retrieve-best-run-max_iter}}\"}]},\"dependencies\":[\"bigquery-query\",\"bigquery-query-2\",\"retrieve-best-run\"]}]}},{\"name\":\"deploying-a-trained-model-to-cloud-machine-learning-engine\",\"arguments\":{},\"inputs\":{\"parameters\":[{\"name\":\"model_id\"},{\"name\":\"project_id\"},{\"name\":\"replace_existing_version\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\"},{\"name\":\"version_id\"}]},\"outputs\":{\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\"},{\"name\":\"deploying-a-trained-model-to-cloud-machine-learning-engine-model_name\",\"path\":\"/tmp/kfp/output/ml_engine/model_name.txt\"},{\"name\":\"deploying-a-trained-model-to-cloud-machine-learning-engine-model_uri\",\"path\":\"/tmp/kfp/output/ml_engine/model_uri.txt\"},{\"name\":\"deploying-a-trained-model-to-cloud-machine-learning-engine-version_name\",\"path\":\"/tmp/kfp/output/ml_engine/version_name.txt\"}]},\"metadata\":{\"annotations\":{\"pipelines.kubeflow.org/component_spec\":\"{\\\\\"description\\\\\": \\\\\"A Kubeflow Pipeline component to deploy a trained model from a Cloud Storage\\\\\\\\npath to a Cloud Machine Learning Engine service.\\\\\\\\n\\\\\", \\\\\"inputs\\\\\": [{\\\\\"description\\\\\": \\\\\"Required. The Cloud Storage URI which contains a model file. Commonly used TF model search paths (export/exporter) will be used if they exist.\\\\\", \\\\\"name\\\\\": \\\\\"model_uri\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"description\\\\\": \\\\\"Required.The ID of the parent project of the serving model.\\\\\", \\\\\"name\\\\\": \\\\\"project_id\\\\\", \\\\\"type\\\\\": \\\\\"GCPProjectID\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"Optional. The user-specified name of the model. If it is not provided, the operation uses a random name.\\\\\", \\\\\"name\\\\\": \\\\\"model_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"Optional. The user-specified name of the version. If it is not provided, the operation uses a random name.\\\\\", \\\\\"name\\\\\": \\\\\"version_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"Optional. The [Cloud ML Engine runtime version](https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list) to use for this deployment. If it is not set, the Cloud ML Engine uses the default stable version, 1.0.\\\\\", \\\\\"name\\\\\": \\\\\"runtime_version\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"Optional. The version of Python used in the prediction. If it is not set, the default version is `2.7`. Python `3.5` is available when the runtime_version is set to `1.4` and above. Python `2.7` works with all supported runtime versions.\\\\\", \\\\\"name\\\\\": \\\\\"python_version\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"Optional. The JSON payload of the new [Model](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models), if it does not exist.\\\\\", \\\\\"name\\\\\": \\\\\"model\\\\\", \\\\\"type\\\\\": \\\\\"Dict\\\\\"}, {\\\\\"default\\\\\": \\\\\"\\\\\", \\\\\"description\\\\\": \\\\\"Optional. 
The JSON payload of the new [Version](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions).\\\\\", \\\\\"name\\\\\": \\\\\"version\\\\\", \\\\\"type\\\\\": \\\\\"Dict\\\\\"}, {\\\\\"default\\\\\": \\\\\"Fasle\\\\\", \\\\\"description\\\\\": \\\\\"A Boolean flag that indicates whether to replace existing version in case of conflict.\\\\\", \\\\\"name\\\\\": \\\\\"replace_existing_version\\\\\", \\\\\"type\\\\\": \\\\\"Bool\\\\\"}, {\\\\\"default\\\\\": \\\\\"False\\\\\", \\\\\"description\\\\\": \\\\\"A Boolean flag that indicates whether to set the new version as default version in the model.\\\\\", \\\\\"name\\\\\": \\\\\"set_default\\\\\", \\\\\"type\\\\\": \\\\\"Bool\\\\\"}, {\\\\\"default\\\\\": \\\\\"30\\\\\", \\\\\"description\\\\\": \\\\\"A time-interval to wait for in case the operation has a long run time.\\\\\", \\\\\"name\\\\\": \\\\\"wait_interval\\\\\", \\\\\"type\\\\\": \\\\\"Integer\\\\\"}], \\\\\"metadata\\\\\": {\\\\\"labels\\\\\": {\\\\\"add-pod-env\\\\\": \\\\\"true\\\\\"}}, \\\\\"name\\\\\": \\\\\"Deploying a trained model to Cloud Machine Learning Engine\\\\\", \\\\\"outputs\\\\\": [{\\\\\"description\\\\\": \\\\\"The Cloud Storage URI of the trained model.\\\\\", \\\\\"name\\\\\": \\\\\"model_uri\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"description\\\\\": \\\\\"The name of the deployed model.\\\\\", \\\\\"name\\\\\": \\\\\"model_name\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"description\\\\\": \\\\\"The name of the deployed version.\\\\\", \\\\\"name\\\\\": \\\\\"version_name\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"name\\\\\": \\\\\"MLPipeline UI metadata\\\\\", \\\\\"type\\\\\": \\\\\"UI metadata\\\\\"}]}\"},\"labels\":{\"add-pod-env\":\"true\"}},\"container\":{\"name\":\"\",\"image\":\"gcr.io/ml-pipeline/ml-pipeline-gcp:e66dcb18607406330f953bf99b04fe7c3ed1a4a8\",\"args\":[\"--ui_metadata_path\",\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"kfp_component.google.ml_engine\",\"deploy\",\"--model_uri\",\"{{inputs.parameters.submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir}}\",\"--project_id\",\"{{inputs.parameters.project_id}}\",\"--model_id\",\"{{inputs.parameters.model_id}}\",\"--version_id\",\"{{inputs.parameters.version_id}}\",\"--runtime_version\",\"1.15\",\"--python_version\",\"3.7\",\"--model\",\"\",\"--version\",\"\",\"--replace_existing_version\",\"{{inputs.parameters.replace_existing_version}}\",\"--set_default\",\"False\",\"--wait_interval\",\"30\"],\"env\":[{\"name\":\"KFP_POD_NAME\",\"value\":\"{{pod.name}}\"},{\"name\":\"KFP_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"KFP_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"resources\":{}}},{\"name\":\"evaluate-model\",\"arguments\":{},\"inputs\":{\"parameters\":[{\"name\":\"bigquery-query-3-output_gcs_path\"},{\"name\":\"evaluation_metric_name\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\"}]},\"outputs\":{\"parameters\":[{\"name\":\"evaluate-model-metric_value\",\"valueFrom\":{\"path\":\"/tmp/outputs/metric_value/data\"}}],\"artifacts\":[{\"name\":\"mlpipeline-metrics\",\"path\":\"/tmp/outputs/mlpipeline_metrics/data\"},{\"name\":\"evaluate-model-metric_name\",\"path\":\"/tmp/outputs/metric_name/data\"},{\"name\":\"evaluate-model-metric_value\",\"path\":\"/tmp/outputs/metric_value/data\"}]},\"metadata\":{\"annotations\":{\"pipelines.kubeflow.org/component_spec\":\"{\\\\\"description\\\\\": \\\\\"Evaluates a trained 
sklearn model.\\\\\", \\\\\"inputs\\\\\": [{\\\\\"name\\\\\": \\\\\"dataset_path\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"name\\\\\": \\\\\"model_path\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"name\\\\\": \\\\\"metric_name\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}], \\\\\"name\\\\\": \\\\\"Evaluate model\\\\\", \\\\\"outputs\\\\\": [{\\\\\"name\\\\\": \\\\\"metric_name\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"name\\\\\": \\\\\"metric_value\\\\\", \\\\\"type\\\\\": \\\\\"Float\\\\\"}, {\\\\\"name\\\\\": \\\\\"mlpipeline_metrics\\\\\", \\\\\"type\\\\\": \\\\\"Metrics\\\\\"}]}\"}},\"container\":{\"name\":\"\",\"image\":\"gcr.io/dna-gcp-data/base_image:test\",\"command\":[\"python3\",\"-u\",\"-c\",\"def evaluate_model(\\\\n dataset_path , model_path , metric_name \\\\n): \\\\n \\\\n \\\\\"\\\\\"\\\\\"Evaluates a trained sklearn model.\\\\\"\\\\\"\\\\\"\\\\n #import joblib\\\\n import pickle\\\\n import json\\\\n import pandas as pd\\\\n import subprocess\\\\n import sys\\\\n\\\\n from sklearn.metrics import accuracy_score, recall_score\\\\n\\\\n df_test = pd.read_csv(dataset_path)\\\\n\\\\n X_test = df_test.drop(\\'Cover_Type\\', axis=1)\\\\n y_test = df_test[\\'Cover_Type\\']\\\\n\\\\n # Copy the model from GCS\\\\n model_filename = \\'model.pkl\\'\\\\n gcs_model_filepath = \\'{}/{}\\'.format(model_path, model_filename)\\\\n print(gcs_model_filepath)\\\\n subprocess.check_call([\\'gsutil\\', \\'cp\\', gcs_model_filepath, model_filename],\\\\n stderr=sys.stdout)\\\\n\\\\n with open(model_filename, \\'rb\\') as model_file:\\\\n model = pickle.load(model_file)\\\\n\\\\n y_hat = model.predict(X_test)\\\\n\\\\n if metric_name == \\'accuracy\\':\\\\n metric_value = accuracy_score(y_test, y_hat)\\\\n elif metric_name == \\'recall\\':\\\\n metric_value = recall_score(y_test, y_hat)\\\\n else:\\\\n metric_name = \\'N/A\\'\\\\n metric_value = 0\\\\n\\\\n # Export the metric\\\\n metrics = {\\\\n \\'metrics\\': [{\\\\n \\'name\\': metric_name,\\\\n \\'numberValue\\': float(metric_value)\\\\n }]\\\\n }\\\\n\\\\n return (metric_name, metric_value, json.dumps(metrics))\\\\n\\\\ndef _serialize_float(float_value: float) -\\\\u003e str:\\\\n if isinstance(float_value, str):\\\\n return float_value\\\\n if not isinstance(float_value, (float, int)):\\\\n raise TypeError(\\'Value \\\\\"{}\\\\\" has type \\\\\"{}\\\\\" instead of float.\\'.format(str(float_value), str(type(float_value))))\\\\n return str(float_value)\\\\n\\\\ndef _serialize_str(str_value: str) -\\\\u003e str:\\\\n if not isinstance(str_value, str):\\\\n raise TypeError(\\'Value \\\\\"{}\\\\\" has type \\\\\"{}\\\\\" instead of str.\\'.format(str(str_value), str(type(str_value))))\\\\n return str_value\\\\n\\\\nimport argparse\\\\n_parser = argparse.ArgumentParser(prog=\\'Evaluate model\\', description=\\'Evaluates a trained sklearn model.\\')\\\\n_parser.add_argument(\\\\\"--dataset-path\\\\\", dest=\\\\\"dataset_path\\\\\", type=str, required=True, default=argparse.SUPPRESS)\\\\n_parser.add_argument(\\\\\"--model-path\\\\\", dest=\\\\\"model_path\\\\\", type=str, required=True, default=argparse.SUPPRESS)\\\\n_parser.add_argument(\\\\\"--metric-name\\\\\", dest=\\\\\"metric_name\\\\\", type=str, required=True, default=argparse.SUPPRESS)\\\\n_parser.add_argument(\\\\\"----output-paths\\\\\", dest=\\\\\"_output_paths\\\\\", type=str, nargs=3)\\\\n_parsed_args = vars(_parser.parse_args())\\\\n_output_files = _parsed_args.pop(\\\\\"_output_paths\\\\\", [])\\\\n\\\\n_outputs = 
evaluate_model(**_parsed_args)\\\\n\\\\nif not hasattr(_outputs, \\'__getitem__\\') or isinstance(_outputs, str):\\\\n _outputs = [_outputs]\\\\n\\\\n_output_serializers = [\\\\n _serialize_str,\\\\n _serialize_float,\\\\n str,\\\\n\\\\n]\\\\n\\\\nimport os\\\\nfor idx, output_file in enumerate(_output_files):\\\\n try:\\\\n os.makedirs(os.path.dirname(output_file))\\\\n except OSError:\\\\n pass\\\\n with open(output_file, \\'w\\') as f:\\\\n f.write(_output_serializers[idx](_outputs[idx]))\\\\n\"],\"args\":[\"--dataset-path\",\"{{inputs.parameters.bigquery-query-3-output_gcs_path}}\",\"--model-path\",\"{{inputs.parameters.submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir}}\",\"--metric-name\",\"{{inputs.parameters.evaluation_metric_name}}\",\"----output-paths\",\"/tmp/outputs/metric_name/data\",\"/tmp/outputs/metric_value/data\",\"/tmp/outputs/mlpipeline_metrics/data\"],\"resources\":{}}},{\"name\":\"retrieve-best-run\",\"arguments\":{},\"inputs\":{\"parameters\":[{\"name\":\"project_id\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_id\"}]},\"outputs\":{\"parameters\":[{\"name\":\"retrieve-best-run-alpha\",\"valueFrom\":{\"path\":\"/tmp/outputs/alpha/data\"}},{\"name\":\"retrieve-best-run-max_iter\",\"valueFrom\":{\"path\":\"/tmp/outputs/max_iter/data\"}}],\"artifacts\":[{\"name\":\"retrieve-best-run-alpha\",\"path\":\"/tmp/outputs/alpha/data\"},{\"name\":\"retrieve-best-run-max_iter\",\"path\":\"/tmp/outputs/max_iter/data\"},{\"name\":\"retrieve-best-run-metric_value\",\"path\":\"/tmp/outputs/metric_value/data\"}]},\"metadata\":{\"annotations\":{\"pipelines.kubeflow.org/component_spec\":\"{\\\\\"description\\\\\": \\\\\"Retrieves the parameters of the best Hypertune run.\\\\\", \\\\\"inputs\\\\\": [{\\\\\"name\\\\\": \\\\\"project_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"name\\\\\": \\\\\"job_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}], \\\\\"name\\\\\": \\\\\"Retrieve best run\\\\\", \\\\\"outputs\\\\\": [{\\\\\"name\\\\\": \\\\\"metric_value\\\\\", \\\\\"type\\\\\": \\\\\"Float\\\\\"}, {\\\\\"name\\\\\": \\\\\"alpha\\\\\", \\\\\"type\\\\\": \\\\\"Float\\\\\"}, {\\\\\"name\\\\\": \\\\\"max_iter\\\\\", \\\\\"type\\\\\": \\\\\"Integer\\\\\"}]}\"}},\"container\":{\"name\":\"\",\"image\":\"gcr.io/dna-gcp-data/base_image:test\",\"command\":[\"python3\",\"-u\",\"-c\",\"def retrieve_best_run(\\\\n project_id , job_id \\\\n): \\\\n \\\\n \\\\\"\\\\\"\\\\\"Retrieves the parameters of the best Hypertune run.\\\\\"\\\\\"\\\\\"\\\\n\\\\n from googleapiclient import discovery\\\\n from googleapiclient import errors\\\\n\\\\n ml = discovery.build(\\'ml\\', \\'v1\\')\\\\n\\\\n job_name = \\'projects/{}/jobs/{}\\'.format(project_id, job_id)\\\\n request = ml.projects().jobs().get(name=job_name)\\\\n\\\\n try:\\\\n response = request.execute()\\\\n except errors.HttpError as err:\\\\n print(err)\\\\n except:\\\\n print(\\'Unexpected error\\')\\\\n\\\\n print(response)\\\\n\\\\n best_trial = response[\\'trainingOutput\\'][\\'trials\\'][0]\\\\n\\\\n metric_value = best_trial[\\'finalMetric\\'][\\'objectiveValue\\']\\\\n alpha = float(best_trial[\\'hyperparameters\\'][\\'alpha\\'])\\\\n max_iter = int(best_trial[\\'hyperparameters\\'][\\'max_iter\\'])\\\\n\\\\n return (metric_value, alpha, max_iter)\\\\n\\\\ndef _serialize_float(float_value: float) -\\\\u003e str:\\\\n if isinstance(float_value, str):\\\\n return float_value\\\\n if not isinstance(float_value, (float, int)):\\\\n raise TypeError(\\'Value \\\\\"{}\\\\\" has type \\\\\"{}\\\\\" 
[Truncated output of a Kubeflow Pipelines run-detail (ApiRunDetail) object for the "Covertype Classifier Training" pipeline; the escaped Argo workflow manifests have been condensed to their recoverable contents.]

Run: "Run_001" (run id d16368c5-24bb-4292-9060-315755f79b0b, workflow covertype-classifier-training-qmb6n, namespace kfpdemo), created from pipeline version "covertype_continuous_training_test" in experiment "Covertype_Classifier_Training"; service account pipeline-runner.

Run parameters: project_id=dna-gcp-data, region=us-central1, source_table_name=covertype_dataset.covertype, gcs_root=gs://dna-gcp-data-kubeflowpipelines-default/staging, dataset_id=covertype_dataset, evaluation_metric_name=accuracy, evaluation_metric_threshold=0.69, model_id=covertype_classifier, version_id=v01, replace_existing_version=True, dataset_location=US, plus hypertune_settings (MAXIMIZE accuracy, 6 trials with 3 in parallel, early stopping enabled; max_iter DISCRETE {500, 1000}, alpha DOUBLE 0.0001 to 0.001 on a UNIT_LINEAR_SCALE).

Workflow templates in the embedded Argo manifest: three BigQuery query steps that split the source table by MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) into training (1-4), validation (8), and testing (9) CSV extracts under {gcs_root}/datasets/...; a hyperparameter-tuning Cloud ML Engine training job using gcr.io/dna-gcp-data/trainer_image:test; a "retrieve-best-run" step that reads the best trial's alpha and max_iter from the completed job; a second training job that retrains with those values; an "evaluate-model" step (gcr.io/dna-gcp-data/base_image:test) that copies model.pkl from the job directory and computes the requested metric on the test split; and a conditional deploy step that pushes the model to Cloud ML Engine (runtime 1.15, Python 3.7, version v01) when the evaluation metric exceeds evaluation_metric_threshold.

Run outcome: status Succeeded; started 2021-05-21 08:46:52, finished 2021-05-21 09:10:48; reported metric accuracy = 0.7226, above the 0.69 deployment threshold. (Remainder of the raw manifest output truncated.)
Defaults to 30.\\'\\\\\", \\\\\"name\\\\\": \\\\\"wait_interval\\\\\", \\\\\"type\\\\\": \\\\\"Integer\\\\\"}], \\\\\"metadata\\\\\": {\\\\\"labels\\\\\": {\\\\\"add-pod-env\\\\\": \\\\\"true\\\\\"}}, \\\\\"name\\\\\": \\\\\"Submitting a Cloud ML training job as a pipeline step\\\\\", \\\\\"outputs\\\\\": [{\\\\\"description\\\\\": \\\\\"The ID of the created job.\\\\\", \\\\\"name\\\\\": \\\\\"job_id\\\\\", \\\\\"type\\\\\": \\\\\"String\\\\\"}, {\\\\\"description\\\\\": \\\\\"The output path in Cloud Storage of the trainning job, which contains the trained model files.\\\\\", \\\\\"name\\\\\": \\\\\"job_dir\\\\\", \\\\\"type\\\\\": \\\\\"GCSPath\\\\\"}, {\\\\\"name\\\\\": \\\\\"MLPipeline UI metadata\\\\\", \\\\\"type\\\\\": \\\\\"UI metadata\\\\\"}]}\",\"sidecar.istio.io/inject\":\"false\"},\"labels\":{\"add-pod-env\":\"true\",\"pipelines.kubeflow.org/cache_enabled\":\"true\"}},\"container\":{\"name\":\"\",\"image\":\"gcr.io/ml-pipeline/ml-pipeline-gcp:e66dcb18607406330f953bf99b04fe7c3ed1a4a8\",\"args\":[\"--ui_metadata_path\",\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"kfp_component.google.ml_engine\",\"train\",\"--project_id\",\"{{inputs.parameters.project_id}}\",\"--python_module\",\"\",\"--package_uris\",\"\",\"--region\",\"{{inputs.parameters.region}}\",\"--args\",\"[\\\\\"--training_dataset_path\\\\\", \\\\\"{{inputs.parameters.bigquery-query-output_gcs_path}}\\\\\", \\\\\"--validation_dataset_path\\\\\", \\\\\"{{inputs.parameters.bigquery-query-2-output_gcs_path}}\\\\\", \\\\\"--alpha\\\\\", \\\\\"{{inputs.parameters.retrieve-best-run-alpha}}\\\\\", \\\\\"--max_iter\\\\\", \\\\\"{{inputs.parameters.retrieve-best-run-max_iter}}\\\\\", \\\\\"--hptune\\\\\", \\\\\"False\\\\\"]\",\"--job_dir\",\"{{inputs.parameters.gcs_root}}/jobdir/d16368c5-24bb-4292-9060-315755f79b0b\",\"--python_version\",\"\",\"--runtime_version\",\"\",\"--master_image_uri\",\"gcr.io/dna-gcp-data/trainer_image:test\",\"--worker_image_uri\",\"\",\"--training_input\",\"\",\"--job_id_prefix\",\"\",\"--wait_interval\",\"30\"],\"env\":[{\"name\":\"KFP_POD_NAME\",\"value\":\"{{pod.name}}\"},{\"name\":\"KFP_POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"KFP_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"resources\":{}}}],\"entrypoint\":\"covertype-classifier-training\",\"arguments\":{\"parameters\":[{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"region\",\"value\":\"us-central1\"},{\"name\":\"source_table_name\",\"value\":\"covertype_dataset.covertype\"},{\"name\":\"gcs_root\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging\"},{\"name\":\"dataset_id\",\"value\":\"covertype_dataset\"},{\"name\":\"evaluation_metric_name\",\"value\":\"accuracy\"},{\"name\":\"evaluation_metric_threshold\",\"value\":\"0.69\"},{\"name\":\"model_id\",\"value\":\"covertype_classifier\"},{\"name\":\"version_id\",\"value\":\"v01\"},{\"name\":\"replace_existing_version\",\"value\":\"True\"},{\"name\":\"hypertune_settings\",\"value\":\"{\\\\n \\\\\"hyperparameters\\\\\": {\\\\n \\\\\"goal\\\\\": \\\\\"MAXIMIZE\\\\\",\\\\n \\\\\"maxTrials\\\\\": 6,\\\\n \\\\\"maxParallelTrials\\\\\": 3,\\\\n \\\\\"hyperparameterMetricTag\\\\\": \\\\\"accuracy\\\\\",\\\\n \\\\\"enableTrialEarlyStopping\\\\\": True,\\\\n \\\\\"params\\\\\": [\\\\n {\\\\n \\\\\"parameterName\\\\\": \\\\\"max_iter\\\\\",\\\\n \\\\\"type\\\\\": \\\\\"DISCRETE\\\\\",\\\\n \\\\\"discreteValues\\\\\": [500, 1000]\\\\n },\\\\n {\\\\n \\\\\"parameterName\\\\\": 
\\\\\"alpha\\\\\",\\\\n \\\\\"type\\\\\": \\\\\"DOUBLE\\\\\",\\\\n \\\\\"minValue\\\\\": 0.0001,\\\\n \\\\\"maxValue\\\\\": 0.001,\\\\n \\\\\"scaleType\\\\\": \\\\\"UNIT_LINEAR_SCALE\\\\\"\\\\n }\\\\n ]\\\\n }\\\\n}\"},{\"name\":\"dataset_location\",\"value\":\"US\"}]},\"serviceAccountName\":\"pipeline-runner\"},\"status\":{\"phase\":\"Succeeded\",\"startedAt\":\"2021-05-21T08:46:52Z\",\"finishedAt\":\"2021-05-21T09:10:48Z\",\"nodes\":{\"covertype-classifier-training-qmb6n\":{\"id\":\"covertype-classifier-training-qmb6n\",\"name\":\"covertype-classifier-training-qmb6n\",\"displayName\":\"covertype-classifier-training-qmb6n\",\"type\":\"DAG\",\"templateName\":\"covertype-classifier-training\",\"phase\":\"Succeeded\",\"startedAt\":\"2021-05-21T08:46:52Z\",\"finishedAt\":\"2021-05-21T09:10:48Z\",\"inputs\":{\"parameters\":[{\"name\":\"dataset_id\",\"value\":\"covertype_dataset\"},{\"name\":\"dataset_location\",\"value\":\"US\"},{\"name\":\"evaluation_metric_name\",\"value\":\"accuracy\"},{\"name\":\"evaluation_metric_threshold\",\"value\":\"0.69\"},{\"name\":\"gcs_root\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging\"},{\"name\":\"hypertune_settings\",\"value\":\"{\\\\n \\\\\"hyperparameters\\\\\": {\\\\n \\\\\"goal\\\\\": \\\\\"MAXIMIZE\\\\\",\\\\n \\\\\"maxTrials\\\\\": 6,\\\\n \\\\\"maxParallelTrials\\\\\": 3,\\\\n \\\\\"hyperparameterMetricTag\\\\\": \\\\\"accuracy\\\\\",\\\\n \\\\\"enableTrialEarlyStopping\\\\\": True,\\\\n \\\\\"params\\\\\": [\\\\n {\\\\n \\\\\"parameterName\\\\\": \\\\\"max_iter\\\\\",\\\\n \\\\\"type\\\\\": \\\\\"DISCRETE\\\\\",\\\\n \\\\\"discreteValues\\\\\": [500, 1000]\\\\n },\\\\n {\\\\n \\\\\"parameterName\\\\\": \\\\\"alpha\\\\\",\\\\n \\\\\"type\\\\\": \\\\\"DOUBLE\\\\\",\\\\n \\\\\"minValue\\\\\": 0.0001,\\\\n \\\\\"maxValue\\\\\": 0.001,\\\\n \\\\\"scaleType\\\\\": \\\\\"UNIT_LINEAR_SCALE\\\\\"\\\\n }\\\\n ]\\\\n }\\\\n}\"},{\"name\":\"model_id\",\"value\":\"covertype_classifier\"},{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"region\",\"value\":\"us-central1\"},{\"name\":\"replace_existing_version\",\"value\":\"True\"},{\"name\":\"source_table_name\",\"value\":\"covertype_dataset.covertype\"},{\"name\":\"version_id\",\"value\":\"v01\"}]},\"children\":[\"covertype-classifier-training-qmb6n-1896224863\",\"covertype-classifier-training-qmb6n-1858401567\",\"covertype-classifier-training-qmb6n-1879447244\"],\"outboundNodes\":[\"covertype-classifier-training-qmb6n-2274394652\"]},\"covertype-classifier-training-qmb6n-1057815578\":{\"id\":\"covertype-classifier-training-qmb6n-1057815578\",\"name\":\"covertype-classifier-training-qmb6n.submitting-a-cloud-ml-training-job-as-a-pipeline-step\",\"displayName\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step\",\"type\":\"Pod\",\"templateName\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n\",\"startedAt\":\"2021-05-21T08:47:07Z\",\"finishedAt\":\"2021-05-21T09:01:46Z\",\"inputs\":{\"parameters\":[{\"name\":\"bigquery-query-2-output_gcs_path\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/datasets/validation/data.csv\"},{\"name\":\"bigquery-query-output_gcs_path\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/datasets/training/data.csv\"},{\"name\":\"gcs_root\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging\"},{\"name\":\"hypertune_settings\",\"value\":\"{\\\\n \\\\\"hyperparameters\\\\\": {\\\\n \\\\\"goal\\\\\": 
\\\\\"MAXIMIZE\\\\\",\\\\n \\\\\"maxTrials\\\\\": 6,\\\\n \\\\\"maxParallelTrials\\\\\": 3,\\\\n \\\\\"hyperparameterMetricTag\\\\\": \\\\\"accuracy\\\\\",\\\\n \\\\\"enableTrialEarlyStopping\\\\\": True,\\\\n \\\\\"params\\\\\": [\\\\n {\\\\n \\\\\"parameterName\\\\\": \\\\\"max_iter\\\\\",\\\\n \\\\\"type\\\\\": \\\\\"DISCRETE\\\\\",\\\\n \\\\\"discreteValues\\\\\": [500, 1000]\\\\n },\\\\n {\\\\n \\\\\"parameterName\\\\\": \\\\\"alpha\\\\\",\\\\n \\\\\"type\\\\\": \\\\\"DOUBLE\\\\\",\\\\n \\\\\"minValue\\\\\": 0.0001,\\\\n \\\\\"maxValue\\\\\": 0.001,\\\\n \\\\\"scaleType\\\\\": \\\\\"UNIT_LINEAR_SCALE\\\\\"\\\\n }\\\\n ]\\\\n }\\\\n}\"},{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"region\",\"value\":\"us-central1\"}]},\"outputs\":{\"parameters\":[{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_id\",\"value\":\"job_6432b8d633522e4d387b6335bfbf3639\",\"valueFrom\":{\"path\":\"/tmp/kfp/output/ml_engine/job_id.txt\"}}],\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"optional\":true},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_dir\",\"path\":\"/tmp/kfp/output/ml_engine/job_dir.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1057815578/submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_dir.tgz\"}},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_id\",\"path\":\"/tmp/kfp/output/ml_engine/job_id.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1057815578/submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_id.tgz\"}},{\"name\":\"main-logs\",\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1057815578/main.log\"}}]},\"children\":[\"covertype-classifier-training-qmb6n-1075982665\"]},\"covertype-classifier-training-qmb6n-1075982665\":{\"id\":\"covertype-classifier-training-qmb6n-1075982665\",\"name\":\"covertype-classifier-training-qmb6n.retrieve-best-run\",\"displayName\":\"retrieve-best-run\",\"type\":\"Pod\",\"templateName\":\"retrieve-best-run\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n\",\"startedAt\":\"2021-05-21T09:01:47Z\",\"finishedAt\":\"2021-05-21T09:01:52Z\",\"inputs\":{\"parameters\":[{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-job_id\",\"value\":\"job_6432b8d633522e4d387b6335bfbf3639\"}]},\"outputs\":{\"parameters\":[{\"name\":\"retrieve-best-run-alpha\",\"value\":\"0.00012415813718618125\",\"valueFrom\":{\"path\":\"/tmp/outputs/alpha/data\"}},{\"name\":\"retrieve-best-run-max_iter\",\"va
lue\":\"500\",\"valueFrom\":{\"path\":\"/tmp/outputs/max_iter/data\"}}],\"artifacts\":[{\"name\":\"retrieve-best-run-alpha\",\"path\":\"/tmp/outputs/alpha/data\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1075982665/retrieve-best-run-alpha.tgz\"}},{\"name\":\"retrieve-best-run-max_iter\",\"path\":\"/tmp/outputs/max_iter/data\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1075982665/retrieve-best-run-max_iter.tgz\"}},{\"name\":\"retrieve-best-run-metric_value\",\"path\":\"/tmp/outputs/metric_value/data\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1075982665/retrieve-best-run-metric_value.tgz\"}},{\"name\":\"main-logs\",\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1075982665/main.log\"}}]},\"children\":[\"covertype-classifier-training-qmb6n-1089717477\"]},\"covertype-classifier-training-qmb6n-1089717477\":{\"id\":\"covertype-classifier-training-qmb6n-1089717477\",\"name\":\"covertype-classifier-training-qmb6n.submitting-a-cloud-ml-training-job-as-a-pipeline-step-2\",\"displayName\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2\",\"type\":\"Pod\",\"templateName\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n\",\"startedAt\":\"2021-05-21T09:01:53Z\",\"finishedAt\":\"2021-05-21T09:08:59Z\",\"inputs\":{\"parameters\":[{\"name\":\"bigquery-query-2-output_gcs_path\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/datasets/validation/data.csv\"},{\"name\":\"bigquery-query-output_gcs_path\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/datasets/training/data.csv\"},{\"name\":\"gcs_root\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging\"},{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"region\",\"value\":\"us-central1\"},{\"name\":\"retrieve-best-run-alpha\",\"value\":\"0.00012415813718618125\"},{\"name\":\"retrieve-best-run-max_iter\",\"value\":\"500\"}]},\"outputs\":{\"parameters\":[{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/jobdir/d16368c5-24bb-4292-9060-315755f79b0b\",\"valueFrom\":{\"path\":\"/tmp/kfp/output/ml_engine/job_dir.txt\"}}],\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\
"/tmp/outputs/MLPipeline_UI_metadata/data\",\"optional\":true},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\",\"path\":\"/tmp/kfp/output/ml_engine/job_dir.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1089717477/submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir.tgz\"}},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_id\",\"path\":\"/tmp/kfp/output/ml_engine/job_id.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1089717477/submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_id.tgz\"}},{\"name\":\"main-logs\",\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1089717477/main.log\"}}]},\"children\":[\"covertype-classifier-training-qmb6n-1971586539\",\"covertype-classifier-training-qmb6n-2585557679\"]},\"covertype-classifier-training-qmb6n-1858401567\":{\"id\":\"covertype-classifier-training-qmb6n-1858401567\",\"name\":\"covertype-classifier-training-qmb6n.bigquery-query\",\"displayName\":\"bigquery-query\",\"type\":\"Pod\",\"templateName\":\"bigquery-query\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n\",\"startedAt\":\"2021-05-21T08:46:52Z\",\"finishedAt\":\"2021-05-21T08:47:05Z\",\"inputs\":{\"parameters\":[{\"name\":\"dataset_id\",\"value\":\"covertype_dataset\"},{\"name\":\"dataset_location\",\"value\":\"US\"},{\"name\":\"gcs_root\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging\"},{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"source_table_name\",\"value\":\"covertype_dataset.covertype\"}]},\"outputs\":{\"parameters\":[{\"name\":\"bigquery-query-output_gcs_path\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/datasets/training/data.csv\",\"valueFrom\":{\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}}],\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"optional\":true},{\"name\":\"bigquery-query-output_gcs_path\",\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1858401567/bigquery-query-output_gcs_path.tgz\"}},{\"name\":\"main-logs\",\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"na
me\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1858401567/main.log\"}}]},\"children\":[\"covertype-classifier-training-qmb6n-1057815578\",\"covertype-classifier-training-qmb6n-1089717477\"]},\"covertype-classifier-training-qmb6n-1879447244\":{\"id\":\"covertype-classifier-training-qmb6n-1879447244\",\"name\":\"covertype-classifier-training-qmb6n.bigquery-query-2\",\"displayName\":\"bigquery-query-2\",\"type\":\"Pod\",\"templateName\":\"bigquery-query-2\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n\",\"startedAt\":\"2021-05-21T08:46:52Z\",\"finishedAt\":\"2021-05-21T08:47:05Z\",\"inputs\":{\"parameters\":[{\"name\":\"dataset_id\",\"value\":\"covertype_dataset\"},{\"name\":\"dataset_location\",\"value\":\"US\"},{\"name\":\"gcs_root\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging\"},{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"source_table_name\",\"value\":\"covertype_dataset.covertype\"}]},\"outputs\":{\"parameters\":[{\"name\":\"bigquery-query-2-output_gcs_path\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/datasets/validation/data.csv\",\"valueFrom\":{\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}}],\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"optional\":true},{\"name\":\"bigquery-query-2-output_gcs_path\",\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1879447244/bigquery-query-2-output_gcs_path.tgz\"}},{\"name\":\"main-logs\",\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1879447244/main.log\"}}]},\"children\":[\"covertype-classifier-training-qmb6n-1057815578\",\"covertype-classifier-training-qmb6n-1089717477\"]},\"covertype-classifier-training-qmb6n-1896224863\":{\"id\":\"covertype-classifier-training-qmb6n-1896224863\",\"name\":\"covertype-classifier-training-qmb6n.bigquery-query-3\",\"displayName\":\"bigquery-query-3\",\"type\":\"Pod\",\"templateName\":\"bigquery-query-3\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n\",\"startedAt\":\"2021-05-21T08:46:52Z\",\"finishedAt\":\"2021-05-21T08:47:05Z\",\"inputs\":{\"parameters\":[{\"name\":\"dataset_id\",\"value\":\"covertype_dataset\"},{\"name\":\"dataset_location\",\"value\":\"US\"},{\"name\":\"gcs_root\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging\"},{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"source_table_name\",\"value\":\"covertype_dataset.covertype\"}]},\"outputs\":{\"parameters\":[{\"name\":\"bigquery-query-3-output_gcs_path\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/datasets/testing/data.csv\",\"valueFrom\":{\"p
ath\":\"/tmp/kfp/output/bigquery/query-output-path.txt\"}}],\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"optional\":true},{\"name\":\"bigquery-query-3-output_gcs_path\",\"path\":\"/tmp/kfp/output/bigquery/query-output-path.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1896224863/bigquery-query-3-output_gcs_path.tgz\"}},{\"name\":\"main-logs\",\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1896224863/main.log\"}}]},\"children\":[\"covertype-classifier-training-qmb6n-1971586539\"]},\"covertype-classifier-training-qmb6n-1971586539\":{\"id\":\"covertype-classifier-training-qmb6n-1971586539\",\"name\":\"covertype-classifier-training-qmb6n.evaluate-model\",\"displayName\":\"evaluate-model\",\"type\":\"Pod\",\"templateName\":\"evaluate-model\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n\",\"startedAt\":\"2021-05-21T09:09:00Z\",\"finishedAt\":\"2021-05-21T09:09:08Z\",\"inputs\":{\"parameters\":[{\"name\":\"bigquery-query-3-output_gcs_path\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/datasets/testing/data.csv\"},{\"name\":\"evaluation_metric_name\",\"value\":\"accuracy\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/jobdir/d16368c5-24bb-4292-9060-315755f79b0b\"}]},\"outputs\":{\"parameters\":[{\"name\":\"evaluate-model-metric_value\",\"value\":\"0.7225525168450257\",\"valueFrom\":{\"path\":\"/tmp/outputs/metric_value/data\"}}],\"artifacts\":[{\"name\":\"mlpipeline-metrics\",\"path\":\"/tmp/outputs/mlpipeline_metrics/data\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1971586539/mlpipeline-metrics.tgz\"},\"optional\":true},{\"name\":\"evaluate-model-metric_name\",\"path\":\"/tmp/outputs/metric_name/data\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1971586539/evaluate-model-metric_name.tgz\"}},{\"name\":\"evaluate-model-metric_value\",\"path\":\"/tmp/outputs/metric_value/data\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts
/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1971586539/evaluate-model-metric_value.tgz\"}},{\"name\":\"main-logs\",\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-1971586539/main.log\"}}]},\"children\":[\"covertype-classifier-training-qmb6n-2585557679\"]},\"covertype-classifier-training-qmb6n-2274394652\":{\"id\":\"covertype-classifier-training-qmb6n-2274394652\",\"name\":\"covertype-classifier-training-qmb6n.condition-1.deploying-a-trained-model-to-cloud-machine-learning-engine\",\"displayName\":\"deploying-a-trained-model-to-cloud-machine-learning-engine\",\"type\":\"Pod\",\"templateName\":\"deploying-a-trained-model-to-cloud-machine-learning-engine\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n-2585557679\",\"startedAt\":\"2021-05-21T09:09:10Z\",\"finishedAt\":\"2021-05-21T09:10:46Z\",\"inputs\":{\"parameters\":[{\"name\":\"model_id\",\"value\":\"covertype_classifier\"},{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"replace_existing_version\",\"value\":\"True\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/jobdir/d16368c5-24bb-4292-9060-315755f79b0b\"},{\"name\":\"version_id\",\"value\":\"v01\"}]},\"outputs\":{\"artifacts\":[{\"name\":\"mlpipeline-ui-metadata\",\"path\":\"/tmp/outputs/MLPipeline_UI_metadata/data\",\"optional\":true},{\"name\":\"deploying-a-trained-model-to-cloud-machine-learning-engine-model_name\",\"path\":\"/tmp/kfp/output/ml_engine/model_name.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-2274394652/deploying-a-trained-model-to-cloud-machine-learning-engine-model_name.tgz\"}},{\"name\":\"deploying-a-trained-model-to-cloud-machine-learning-engine-model_uri\",\"path\":\"/tmp/kfp/output/ml_engine/model_uri.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-2274394652/deploying-a-trained-model-to-cloud-machine-learning-engine-model_uri.tgz\"}},{\"name\":\"deploying-a-trained-model-to-cloud-machine-learning-engine-version_name\",\"path\":\"/tmp/kfp/output/ml_engine/version_name.txt\",\"s3\":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-2274394652/deploying-a-trained-model-to-cloud-machine-learning-engine-version_name.tgz\"}},{\"name\":\"main-logs\",\"archiveLogs\":true,\"s3\
":{\"endpoint\":\"minio-service.kfpdemo:9000\",\"bucket\":\"mlpipeline\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"mlpipeline-minio-artifact\",\"key\":\"secretkey\"},\"key\":\"artifacts/covertype-classifier-training-qmb6n/covertype-classifier-training-qmb6n-2274394652/main.log\"}}]}},\"covertype-classifier-training-qmb6n-2585557679\":{\"id\":\"covertype-classifier-training-qmb6n-2585557679\",\"name\":\"covertype-classifier-training-qmb6n.condition-1\",\"displayName\":\"condition-1\",\"type\":\"DAG\",\"templateName\":\"condition-1\",\"phase\":\"Succeeded\",\"boundaryID\":\"covertype-classifier-training-qmb6n\",\"startedAt\":\"2021-05-21T09:09:10Z\",\"finishedAt\":\"2021-05-21T09:10:48Z\",\"inputs\":{\"parameters\":[{\"name\":\"model_id\",\"value\":\"covertype_classifier\"},{\"name\":\"project_id\",\"value\":\"dna-gcp-data\"},{\"name\":\"replace_existing_version\",\"value\":\"True\"},{\"name\":\"submitting-a-cloud-ml-training-job-as-a-pipeline-step-2-job_dir\",\"value\":\"gs://dna-gcp-data-kubeflowpipelines-default/staging/jobdir/d16368c5-24bb-4292-9060-315755f79b0b\"},{\"name\":\"version_id\",\"value\":\"v01\"}]},\"children\":[\"covertype-classifier-training-qmb6n-2274394652\"],\"outboundNodes\":[\"covertype-classifier-training-qmb6n-2274394652\"]}},\"conditions\":[{\"type\":\"Completed\",\"status\":\"True\"}]}}'}}\n0.7225525168450257\n"
],
[
"from googleapiclient import discovery\nml = discovery.build('ml', 'v1')\njob_name = 'projects/{}/jobs/{}'.format('dna-gcp-data', 'job_1dae51e7dd77989943e0aaf271f1effd')\nrequest = ml.projects().jobs().get(name=job_name)\nprint(type(request.execute())",
"_____no_output_____"
]
],
[
[
"### Monitoring the run\n\nYou can monitor the run using KFP UI. Follow the instructor who will walk you through the KFP UI and monitoring techniques.\n\nTo access the KFP UI in your environment use the following URI:\n\nhttps://[ENDPOINT]\n\n\n**NOTE that your pipeline run may fail due to the bug in a BigQuery component that does not handle certain race conditions. If you observe the pipeline failure, re-run the last cell of the notebook to submit another pipeline run or retry the run from the KFP UI**\n",
"_____no_output_____"
],
[
"<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)\n\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
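The final code cell of the record above builds a `projects.jobs.get` request for the completed training job but only prints the type of the response. As a rough sketch of how the same client could be used to wait for a job to finish — assuming the `ml` discovery client and `job_name` defined in that cell, and the standard `state` field of the Cloud ML Engine v1 job resource — one might poll until a terminal state is reached:

```python
import time

from googleapiclient import discovery

# Same client and job name as in the notebook cell above.
ml = discovery.build('ml', 'v1')
job_name = 'projects/{}/jobs/{}'.format('dna-gcp-data', 'job_1dae51e7dd77989943e0aaf271f1effd')

TERMINAL_STATES = {'SUCCEEDED', 'FAILED', 'CANCELLED'}

def wait_for_job(name, poll_interval=30):
    """Poll projects.jobs.get until the job reaches a terminal state (assumes the 'state' field)."""
    while True:
        job = ml.projects().jobs().get(name=name).execute()
        state = job.get('state', 'STATE_UNSPECIFIED')
        print('job state:', state)
        if state in TERMINAL_STATES:
            return job
        time.sleep(poll_interval)

# job = wait_for_job(job_name)  # the returned resource also carries trainingOutput for Hypertune runs
```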
e7b60ed5b495d046fb3c46aa3de57559042c4770 | 8,269 | ipynb | Jupyter Notebook | notebooks/feature-tfidf.ipynb | MinuteswithMetrics/kaggle-quora-question-pairs | 009bf3bd029d7598293f1944596d2db31aaf5710 | [
"MIT"
] | 125 | 2017-06-21T08:14:19.000Z | 2021-10-20T03:46:51.000Z | notebooks/feature-tfidf.ipynb | MinuteswithMetrics/kaggle-quora-question-pairs | 009bf3bd029d7598293f1944596d2db31aaf5710 | [
"MIT"
] | 6 | 2018-03-20T22:05:08.000Z | 2018-11-09T06:02:32.000Z | notebooks/feature-tfidf.ipynb | MinuteswithMetrics/kaggle-quora-question-pairs | 009bf3bd029d7598293f1944596d2db31aaf5710 | [
"MIT"
] | 39 | 2017-06-21T19:35:39.000Z | 2021-04-30T13:15:38.000Z | 20.6725 | 123 | 0.524731 | [
[
[
"# Feature: TF-IDF Distances",
"_____no_output_____"
],
[
"Create TF-IDF vectors from question texts and compute vector distances between them.",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
],
[
"This utility package imports `numpy`, `pandas`, `matplotlib` and a helper `kg` module into the root namespace.",
"_____no_output_____"
]
],
[
[
"from pygoose import *",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_distances, euclidean_distances",
"_____no_output_____"
]
],
[
[
"## Config",
"_____no_output_____"
],
[
"Automatically discover the paths to various data folders and compose the project structure.",
"_____no_output_____"
]
],
[
[
"project = kg.Project.discover()",
"_____no_output_____"
]
],
[
[
"Identifier for storing these features on disk and referring to them later.",
"_____no_output_____"
]
],
[
[
"feature_list_id = 'tfidf'",
"_____no_output_____"
]
],
[
[
"## Read Data",
"_____no_output_____"
],
[
"Preprocessed and tokenized questions.",
"_____no_output_____"
]
],
[
[
"tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_train.pickle')\ntokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_lowercase_spellcheck_no_stopwords_test.pickle')",
"_____no_output_____"
],
[
"tokens = tokens_train + tokens_test",
"_____no_output_____"
]
],
[
[
"Extract a set of unique question texts (document corpus).",
"_____no_output_____"
]
],
[
[
"all_questions_flat = np.array(tokens).ravel()",
"_____no_output_____"
],
[
"documents = list(set(' '.join(question) for question in all_questions_flat))",
"_____no_output_____"
],
[
"del all_questions_flat",
"_____no_output_____"
]
],
[
[
"## Train TF-IDF vectorizer",
"_____no_output_____"
],
[
"Create a bag-of-token-unigrams vectorizer.",
"_____no_output_____"
]
],
[
[
"vectorizer = TfidfVectorizer(\n encoding='utf-8',\n analyzer='word',\n strip_accents='unicode',\n ngram_range=(1, 1),\n lowercase=True,\n norm='l2',\n use_idf=True,\n smooth_idf=True,\n sublinear_tf=True,\n)",
"_____no_output_____"
],
[
"vectorizer.fit(documents)",
"_____no_output_____"
],
[
"model_filename = 'tfidf_vectorizer_{}_ngrams_{}_{}_penalty_{}.pickle'.format(\n vectorizer.analyzer,\n vectorizer.ngram_range[0],\n vectorizer.ngram_range[1],\n vectorizer.norm,\n)",
"_____no_output_____"
],
[
"kg.io.save(vectorizer, project.trained_model_dir + model_filename)",
"_____no_output_____"
]
],
[
[
"## Vectorize train and test sets, compute distances",
"_____no_output_____"
]
],
[
[
"def compute_pair_distances(pair):\n q1_doc = ' '.join(pair[0])\n q2_doc = ' '.join(pair[1])\n \n pair_dtm = vectorizer.transform([q1_doc, q2_doc])\n q1_doc_vec = pair_dtm[0]\n q2_doc_vec = pair_dtm[1]\n \n return [\n cosine_distances(q1_doc_vec, q2_doc_vec)[0][0],\n euclidean_distances(q1_doc_vec, q2_doc_vec)[0][0],\n ]",
"_____no_output_____"
],
[
"features = kg.jobs.map_batch_parallel(\n tokens,\n item_mapper=compute_pair_distances,\n batch_size=1000,\n)",
"Batches: 100%|██████████| 2751/2751 [16:11<00:00, 3.01it/s]\n"
],
[
"X_train = np.array(features[:len(tokens_train)], dtype='float64')\nX_test = np.array(features[len(tokens_train):], dtype='float64')",
"_____no_output_____"
],
[
"print('X_train:', X_train.shape)\nprint('X_test: ', X_test.shape)",
"X_train: (404290, 2)\nX_test: (2345796, 2)\n"
]
],
[
[
"## Save features",
"_____no_output_____"
]
],
[
[
"feature_names = [\n 'tfidf_cosine',\n 'tfidf_euclidean',\n]",
"_____no_output_____"
],
[
"project.save_features(X_train, X_test, feature_names, feature_list_id)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
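The feature notebook above fits a `TfidfVectorizer` on the deduplicated question corpus and then computes cosine and Euclidean distances per question pair. A small self-contained sketch of the same idea on two hypothetical toy sentences, using only the scikit-learn calls already present in the notebook:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances, euclidean_distances

# Hypothetical stand-ins for a preprocessed question pair.
q1 = "how do i learn python quickly"
q2 = "what is the fastest way to learn python"

vectorizer = TfidfVectorizer(analyzer='word', ngram_range=(1, 1), norm='l2', sublinear_tf=True)
dtm = vectorizer.fit_transform([q1, q2])  # 2 x vocabulary sparse document-term matrix

cosine = cosine_distances(dtm[0], dtm[1])[0][0]
euclid = euclidean_distances(dtm[0], dtm[1])[0][0]
print('tfidf_cosine:   ', round(cosine, 4))
print('tfidf_euclidean:', round(euclid, 4))
```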
e7b610eb6937b331cc1b325a9f6e8196ea05de31 | 199,259 | ipynb | Jupyter Notebook | chap_5/regression.ipynb | bibbycodes/PDSH | a7b0ad9f2d22ca85e1115251fd2acb445170ddf6 | [
"MIT"
] | null | null | null | chap_5/regression.ipynb | bibbycodes/PDSH | a7b0ad9f2d22ca85e1115251fd2acb445170ddf6 | [
"MIT"
] | null | null | null | chap_5/regression.ipynb | bibbycodes/PDSH | a7b0ad9f2d22ca85e1115251fd2acb445170ddf6 | [
"MIT"
] | null | null | null | 787.58498 | 63,632 | 0.951581 | [
[
[
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport seaborn as sns\nimport os\nif not os.path.exists('figures'):\n os.makedirs('figures')",
"_____no_output_____"
],
[
"from sklearn.linear_model import LinearRegression\n# Create some data for the regression\nrng = np.random.RandomState(1)\n\nX = rng.randn(200, 2)\ny = np.dot(X, [-2, 1]) + 0.1 * rng.randn(X.shape[0])\n\n# fit the regressions model\nmodel = LinearRegression()\nmodel.fit(X, y)\n\n# create some new points to predict\nX2 = rng.randn(100, 2)\n\n# predict the labels\ny2 = model.predict(X2)",
"_____no_output_____"
],
[
"def format_plot(ax, title):\n ax.xaxis.set_major_formatter(plt.NullFormatter())\n ax.yaxis.set_major_formatter(plt.NullFormatter())\n ax.set_xlabel('feature 1', color='gray')\n ax.set_ylabel('feature 2', color='gray')\n ax.set_title(title, color='gray')",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\npoints = ax.scatter(X[:, 0], X[:, 1], c=y, s=50,\n cmap='viridis')\n\n# format plot\nformat_plot(ax, 'Input Data')\nax.axis([-4, 4, -3, 3])\nfig.savefig('figures/05.01-regression-1.png')",
"_____no_output_____"
],
[
"from mpl_toolkits.mplot3d.art3d import Line3DCollection\n\npoints = np.hstack([X, y[:, None]]).reshape(-1, 1, 3)\nsegments = np.hstack([points, points])\nsegments[:, 0, 2] = -8\n\n# plot points in 3D\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.scatter(X[:, 0], X[:, 1], y, c=y, s=35,\n cmap='viridis')\nax.add_collection3d(Line3DCollection(segments, colors='gray', alpha=0.2))\nax.scatter(X[:, 0], X[:, 1], -8 + np.zeros(X.shape[0]), c=y, s=10, cmap='viridis')\n\n# format \nax.patch.set_facecolor('white')\nax.view_init(elev=20, azim=-70)\nax.set_zlim3d(-8, 8)\nax.xaxis.set_major_formatter(plt.NullFormatter())\nax.yaxis.set_major_formatter(plt.NullFormatter())\nax.zaxis.set_major_formatter(plt.NullFormatter())\nax.set(xlabel='feature 1', ylabel='feature 2', zlabel='label')\n\n# Hide axes\n\nax.w_xaxis.line.set_visible(False)\nax.w_yaxis.line.set_visible(False)\nax.w_zaxis.line.set_visible(False)\n\nfor tick in ax.w_xaxis.get_ticklines():\n tick.set_visible(False)\n \nfor tick in ax.w_yaxis.get_ticklines():\n tick.set_visible(False)\n \nfor tick in ax.w_zaxis.get_ticklines():\n tick.set_visible(False)\n \nfig.savefig('figures/05.01-regression-2.png')",
"_____no_output_____"
],
[
"from matplotlib.collections import LineCollection\n\n# plot data points\nfig, ax = plt.subplots()\npts = ax.scatter(X[:, 0], X[:, 1], c=y, s=50,\n cmap='viridis', zorder=2)\n\nxx, yy = np.meshgrid(np.linspace(-4, 4),\n np.linspace(-3, 3))\nXfit = np.vstack([xx.ravel(), yy.ravel()]).T\nyfit = model.predict(Xfit)\nzz = yfit.reshape(xx.shape)\nax.pcolorfast([-4, 4], [-3, 3], zz, alpha=0.5,\n cmap='viridis', norm=pts.norm, zorder=1)\n\n# format plot\nformat_plot(ax, 'Input Data with Linear Fit')\nax.axis([-4, 4, -3, 3])\nfig.savefig('figures/05.01-regression-3.png')",
"_____no_output_____"
],
[
"# plot the model fit\nfig, ax = plt.subplots(1, 2, figsize=(16, 6))\nfig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)\n\nax[0].scatter(X2[:, 0], X2[:, 1], c='gray', s=50)\nax[0].axis([-4, 4, -3, 3])\n\nax[1].scatter(X2[:, 0], X2[:, 1], c=y2, s=50,\n cmap='viridis', norm=pts.norm)\n\nax[1].axis([-4, 4, -3, 3])\n\n# format plots\nformat_plot(ax[0], 'Unknown Data')\nformat_plot(ax[1], 'Predicted Labels')\n\nfig.savefig('figures/05.01-regression-4.png')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
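Because the labels in the regression record above are generated as `np.dot(X, [-2, 1])` plus small Gaussian noise, a quick sanity check on the fitted model is to compare its learned parameters with those true weights — a minimal sketch reusing the same setup:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(1)
X = rng.randn(200, 2)
y = np.dot(X, [-2, 1]) + 0.1 * rng.randn(X.shape[0])

model = LinearRegression().fit(X, y)
print('coefficients:', model.coef_)       # expected to be close to [-2, 1]
print('intercept:   ', model.intercept_)  # expected to be close to 0
print('train R^2:   ', model.score(X, y))
```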
e7b61113828a41ccd262eb3f1c99f4245466986b | 569,941 | ipynb | Jupyter Notebook | pandas/pandas_2021_04_14.ipynb | risker93/king | b7cac65595960a81236cb0d9d004d4f3ffe1edf0 | [
"Apache-2.0"
] | 2 | 2021-05-02T12:23:27.000Z | 2021-05-02T12:56:25.000Z | pandas/pandas_2021_04_14.ipynb | risker93/king | b7cac65595960a81236cb0d9d004d4f3ffe1edf0 | [
"Apache-2.0"
] | null | null | null | pandas/pandas_2021_04_14.ipynb | risker93/king | b7cac65595960a81236cb0d9d004d4f3ffe1edf0 | [
"Apache-2.0"
] | null | null | null | 62.748101 | 85,748 | 0.695802 | [
[
[
"#예제 1-1 딕셔너리 > 시리즈 변환\n# -*- coding: utf-8 -*-\n\n#pandas 불러오기\nimport pandas as pd\n\n#key:value 쌍으로 딕셔너리를 만들고, 변수 dict_data에 저장\ndict_data = {'a':1, 'b':2, 'c':3}\n\n#판다스 Series() 함수로 dictionary를 Series로 변환, 변수 sr에 저장\nsr = pd.Series(dict_data)\n\n#sr 의 자료형 출력\nprint(type(sr))\nprint('\\n')\n#변수 sr에 저장되어 있는 시리즈 객체를 출력\nprint(sr)",
"<class 'pandas.core.series.Series'>\n\n\na 1\nb 2\nc 3\ndtype: int64\n"
],
[
"#예제 1-2 시리즈 인덱스 \n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#리스트를 시리즈로 변환하여 변수 sr 에 저장.\nlist_data = ['2019-01-02', 3.14, 'abc', 100, True]\n\n#딕셔너리의 키 처럼 인덱스로 변환될 값이 없다.\n#인덱스를 별도로 정하지 않으면 디폴트로 정수형 위치 인덱스가 자동지정된다\n\nsr = pd.Series(list_data)\nprint(sr)",
"0 2019-01-02\n1 3.14\n2 abc\n3 100\n4 True\ndtype: object\n"
],
[
"#인덱스 배열은 idx에 저장, 값은 val에 저장\nidx = sr.index\nval = sr.values\nprint(idx)\nprint()\nprint(val)",
"RangeIndex(start=0, stop=5, step=1)\n\n['2019-01-02' 3.14 'abc' 100 True]\n"
],
[
"dict_data = {'a' :10, 'b':20, 'c':30, 'd':40, 'e':50, 'f':60}\nsr = pd.Series(dict_data)",
"_____no_output_____"
],
[
"#인덱스 이름 으로 접근\nprint(sr['a'])\nprint()\n#인덱스 번호로 접근\nprint(sr[-1])\nprint()\n",
"10\n\n60\n\n"
],
[
"print(sr[['d','e']])\nprint(sr['d':'e'])\nprint(sr[3:5])\nprint(sr[[3,4]])",
"d 40\ne 50\ndtype: int64\nd 40\ne 50\ndtype: int64\nd 40\ne 50\ndtype: int64\nd 40\ne 50\ndtype: int64\n"
],
[
"#예제 1-3 시리즈 원소선택\n\n# -*- coding: utf-8 -*-\nimport pandas as pd\n\n#튜플을 시리즈로 변환(인덱스 옵션 지정)\ntup_data = ('영인', '2010-05-01', '여', True)\nsr = pd.Series(tup_data, index=['이름','생년월일','성별','학생여부'])\nprint(sr)",
"이름 영인\n생년월일 2010-05-01\n성별 여\n학생여부 True\ndtype: object\n"
],
[
"#원소를 1개 선택\nprint(sr[0])\nprint(sr['이름'])",
"영인\n영인\n"
],
[
"#여러 개의 원소를 선택 (인덱스 리스트 활용)\nprint(sr[[0,3]])\nprint()\nprint(sr[['이름', '학생여부']])",
"이름 영인\n학생여부 True\ndtype: object\n\n이름 영인\n학생여부 True\ndtype: object\n"
],
[
"#여러 개의 원소를 선택(인덱스 범위 지정)\nprint(sr[1:2])\nprint()\nprint(sr['생년월일':'성별'])\n#인덱스로 선택시에는 1부터 2(n-1) 까지 출력\n#인덱스 이름으로 슬라이싱 시에는 생년월일 부터 성별 까지",
"생년월일 2010-05-01\ndtype: object\n\n생년월일 2010-05-01\n성별 여\ndtype: object\n"
],
[
"#데이터 프레임은 여러개의 시리즈를 모아둔 집합으로 생각하면된다.",
"_____no_output_____"
],
[
"#예제 1-4 딕셔너리 > 데이터 프레임 변환\n# -*- coding: utf-8 -*-\nimport pandas as pd\nimport numpy as np\n#열 이름을 key로 하고, 리스트를 value로 갖는 딕셔너리 정의 (2차원 배열)\ndict_data = {'c0':[1,2,3], 'c1':[4,5,6], 'c2':[7,8,9], 'c3':[10,11,12], 'c4':[13,14,15]}\n\ndf = pd.DataFrame(dict_data)\n\n#df의 자료형 출력\nprint(type(df))\nprint('\\n')\nprint(df)\n",
"<class 'pandas.core.frame.DataFrame'>\n\n\n c0 c1 c2 c3 c4\n0 1 4 7 10 13\n1 2 5 8 11 14\n2 3 6 9 12 15\n"
],
[
"#예제 1-5 행 인덱스/열 이름설정\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#행 인덱스/ 열 이름 지정하여 데이터 프레임 만들기\ndf= pd.DataFrame([[15,'남','덕영중'],\n [17,'여','수리중']],\n index=['준서','예은'],\n columns=['나이','성별','학교'])\n",
"_____no_output_____"
],
[
"#행 인덱스, 열 이름 확인하기\nprint(df) #데이터 프레임\nprint()\nprint(df.index)# 행 인덱스\nprint()\nprint(df.columns)#열 이름\n\n#실행 결과에서 리스트가 행으로 변환 되는 점에 유의한다.\n",
" 나이 성별 학교\n준서 15 남 덕영중\n예은 17 여 수리중\n\nIndex(['준서', '예은'], dtype='object')\n\nIndex(['나이', '성별', '학교'], dtype='object')\n"
],
[
"#예제 1-5 행 인덱스/열 이름 설정\ndf.index=['학생1','학생2']\ndf.columns=['연령','남녀','소속']",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"#데이터 프레임에 rename()메소드를 적용하면 \n#행 인덱스 또는 열 이름의 일부를 선택하여 변경가능\n#단, 원본 객체를 직접 수정하는것이 아니라\n#새로운 데이터 프레임 객체를 반환한다.\n#원본 객체를 변경하려면 inplace=True 옵션을 사용한다.",
"_____no_output_____"
],
[
"#예제 1-6 행 인덱스/ 열 이름 변경\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#행 인덱스/ 열 이름 지정하여 데이터 프레임 만들기\ndf = pd.DataFrame([[15,'남','덕영중'], [17,'여','수리중']],\n index=['준서','예은'],\n columns=['나이', '성별', '학교'])\n#데이터프레임 df 출력\nprint(df)",
" 나이 성별 학교\n준서 15 남 덕영중\n예은 17 여 수리중\n"
],
[
"#열 이름 중, '나이' 를 '연령'으로, '성별'을 '남녀'로,'학교'를''소속'으로\ndf.rename(columns={'나이':'연령', '성별':'남녀','학교':'소속'},inplace=True)\n\n#df 의 행 인덱스 중에서, '준서'를 '학생1'로, '예은'을 '학생2'로 바꾸기\ndf.rename(index={'준서':'학생1','예은':'학생2'}, inplace=True)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df = pd.DataFrame([[90, 98, 85, 100],\n [90, 98, 85, 100],\n [90, 98, 85, 100]],\n index=['서준','우현','인아'],\n columns=['수학','영어','음악','체육']\n \n)\ndf",
"_____no_output_____"
],
[
"#DataFrame()함수로 데이터 프레임 변환, 변수 df에 저장\n\nexam_data = {'수학':[90,80,70],'영어':[98,98,98],'음악':[85,85,85],'체육':[100,100,100]}\ndf = pd.DataFrame(exam_data, index=['서준','우현','인하'])",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"#데이터프레임 df를 복제하여 변수 df2에 저장, df2의 1개 행(row)삭제\n\ndf2=df[:]\ndf2.drop('우현',inplace=True)\nprint(df2)",
" 수학 영어 음악 체육\n서준 90 98 85 100\n인하 70 98 85 100\n"
],
[
"df",
"_____no_output_____"
],
[
"#데이터 프레임 df 를 복제하여 변수 df3에 저장, df3의 2개행(row) 삭제\ndf3 = df[:]\ndf3.drop(['우현','인하'], axis=0,inplace=True)\ndf3",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\pandas\\core\\frame.py:4163: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return super().drop(\n"
],
[
"#예제 1-8 열 삭제\ndf",
"_____no_output_____"
],
[
"#데이터 프레임 df 를 복제하여 변수 df4에 저장, df4의 1개열(column)삭제\ndf4 =df[:]\ndf4.drop('수학', axis=1, inplace=True)\ndf4",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\pandas\\core\\frame.py:4163: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return super().drop(\n"
],
[
"#데이터프레임 df를 복제하여 변수 df5에 저장, df5의 2개 열(column)삭제\ndf5=df[:]\ndf5.drop(['영어','음악'],axis=1, inplace=True)\ndf5",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\pandas\\core\\frame.py:4163: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return super().drop(\n"
],
[
"#예제 1-9 행 선택\ndf",
"_____no_output_____"
],
[
"label1 = df.loc['서준']\nposition1 = df.iloc[0]\nprint(label1)\nprint()\nprint(position1)",
"수학 90\n영어 98\n음악 85\n체육 100\nName: 서준, dtype: int64\n\n수학 90\n영어 98\n음악 85\n체육 100\nName: 서준, dtype: int64\n"
],
[
"#행 여러개 선택\nlabel2 = df.loc[['서준','우현']]\nposition2 = df.iloc[[0,1]]\nprint(label2)\nprint()\nprint(position2)",
" 수학 영어 음악 체육\n서준 90 98 85 100\n우현 80 98 85 100\n\n 수학 영어 음악 체육\n서준 90 98 85 100\n우현 80 98 85 100\n"
],
[
"#행 슬라이싱\nlabel3 = df.loc['서준':'우현']\nposition3 = df.iloc[0:1]\nprint(label3)\nprint()\nprint(position3)\n#슬라이싱은 둘의 결과가 다름을 주의해야한다",
" 수학 영어 음악 체육\n서준 90 98 85 100\n우현 80 98 85 100\n\n 수학 영어 음악 체육\n서준 90 98 85 100\n"
],
[
"#예제 1-10 열 선택\n\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#DataFrame() 함수로 데이터프레임 변환, 변수 df에 저장\nexam_data = {'이름':['서준', '우현', '인아'],\n '수학':[90,80,70],\n '영어':[98,89,95],\n '음악':[85,95,100],\n '체육':[100,90,90]\n }\ndf = pd.DataFrame(exam_data)\ndf",
"_____no_output_____"
],
[
"#수학 점수 데이터만 선택, 변수 math1에 저장\nmath1 = df['수학']\nprint(math1)",
"0 90\n1 80\n2 70\nName: 수학, dtype: int64\n"
],
[
"#영어 점수 데이터만 선택, 변수english에 저장\nenglish = df['영어']\nprint(english)",
"0 98\n1 89\n2 95\nName: 영어, dtype: int64\n"
],
[
"#'음악', '체육' 점수 데이터를 선택, 변수 music_gym에 저장\nmusic_gym = df[['음악','체육']]\nprint(music_gym)",
" 음악 체육\n0 85 100\n1 95 90\n2 100 90\n"
],
[
"#'수학' 점수 데이터만 선택 변수 math2에 저장\nmath2 = df[['수학']]\nprint(math2)\nprint(type(math2))",
" 수학\n0 90\n1 80\n2 70\n<class 'pandas.core.frame.DataFrame'>\n"
],
[
"df.iloc[::2]",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.iloc[0:3:2]",
"_____no_output_____"
],
[
"#역순으로 인덱싱하려면 슬라이싱 간격에 -1을 입력",
"_____no_output_____"
],
[
"df.iloc[::-1]",
"_____no_output_____"
],
[
"df.iloc[3::-1]",
"_____no_output_____"
],
[
"#예제 1-11 원소 선택\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#DataFrame() 함수로 데이터프레임 변환, 변수 df에 저장\nexam_data = {'이름':['서준','우현','인아'],\n '수학':[90,80,70],\n '영어':[98,89,95],\n '음악':[85,95,100],\n '체육':[100,90,90]\n }\ndf = pd.DataFrame(exam_data)\ndf",
"_____no_output_____"
],
[
"#'이름' 열을 새로운 인덱스로 지정하고, df 객체에 변경 사항 반영\ndf.set_index('이름', inplace=True)\ndf",
"_____no_output_____"
],
[
"df.loc['서준','음악']",
"_____no_output_____"
],
[
"df.iloc[0,2]",
"_____no_output_____"
],
[
"df.loc['서준',['음악','체육']]",
"_____no_output_____"
],
[
"df.iloc[0,[2,3]]",
"_____no_output_____"
],
[
"df.iloc[0, 2:]",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.loc[['서준','우현'],['음악','체육']]",
"_____no_output_____"
],
[
"df.iloc[[0,1],[2,3]]",
"_____no_output_____"
],
[
"#슬라이싱으로 동일하게\ndf.loc['서준':'우현', '음악':'체육']",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.iloc[0:2, 2:]",
"_____no_output_____"
],
[
"#예제 1-12 열 추가\ndf",
"_____no_output_____"
],
[
"df['국어']=80\ndf",
"_____no_output_____"
],
[
"#예제 1-13 행추가, 기존 행 인덱스와 겹칠경우, 기존행의 원소값을 변경\ndf.loc[3]=0\ndf",
"_____no_output_____"
],
[
"#새로운 행 추가- 원소값 여러 개의 배열 입력\ndf.loc[4] = ['동규', 90, 80, 70, 60]\ndf",
"_____no_output_____"
],
[
"#새로운 행 추가- 기존 행 복사\ndf.loc['행5'] = df.loc[3]\ndf",
"_____no_output_____"
],
[
"#예제 1-14 원소 값 변경\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#DataFrame() 함수로 데이터프레임 변환, 변수df에 저장\nexam_data = {\n '이름':['서준','우현','인아'],\n '수학':[90,80,70],\n '영어':[98,89,95],\n '음악':[85,95,100],\n '체육':[100,90,90]\n}\n\ndf = pd.DataFrame(exam_data)\ndf",
"_____no_output_____"
],
[
"#'이름' 열을 새로운 인덱스로 지정하고, df 객체에 변경사항 반영\ndf.set_index('이름', inplace=True)\ndf",
"_____no_output_____"
],
[
"#데이터프레임 df의 특정 원소를 변경하는 방법: '서준'의 '체육' 점수\ndf.iloc[0,3] =60\ndf",
"_____no_output_____"
],
[
"df.loc['서준','체육']=100\ndf",
"_____no_output_____"
],
[
"df.loc['서준']['체육'] = 80\ndf",
"_____no_output_____"
],
[
"#예제 1-14 여러개의 원소를 선택해 새로운 값을 할당\ndf.loc['서준',['음악','체육']] = 50\ndf",
"_____no_output_____"
],
[
"df.loc['서준']['음악','체육']=100\ndf",
"_____no_output_____"
],
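[
"# A minimal sketch: the chained form df.loc['서준']['음악','체육'] indexes twice and may assign to a temporary copy.\n# Passing the row label and the column list to a single .loc call is the reliable way to set several values at once.\ndf.loc['서준', ['음악','체육']] = 100\ndf",
"_____no_output_____"
],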
[
"#예제 1-15 행, 열 바꾸기\n#-*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#DataFrame() 함수로 데이터프레임 변환, 변수df에 저장\nexam_data = {\n '이름':['서준','우현','인아'],\n '수학':[90,80,70],\n '영어':[98,89,95],\n '음악':[85,95,100],\n '체육':[100,90,90]\n}\n\ndf = pd.DataFrame(exam_data)\ndf",
"_____no_output_____"
],
[
"#데이터프레임 df를 전치하기(메소드 활용)\ndf = df.transpose()\ndf",
"_____no_output_____"
],
[
"#데이터프레임 df를 다시 전치하기(클래스 속성 활용)\ndf = df.T\ndf",
"_____no_output_____"
],
[
"#예제 1-16 특정 열을 행 인덱스로 설정\ndf",
"_____no_output_____"
],
[
"#특정 열(column)을 데이터프레임의 행 인덱스(index)로 설정\nndf = df.set_index(['이름'])\nndf",
"_____no_output_____"
],
[
"ndf2 = ndf.set_index(['음악'])\nndf2",
"_____no_output_____"
],
[
"ndf3 = ndf.set_index(['수학', '음악'])\nndf3",
"_____no_output_____"
],
[
"#예제 1-17 새로운 배열로 행 인덱스 재지정\n#-*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#딕셔너리 정의\ndict_data = {'c0':[1,2,3], 'c1':[4,5,6], 'c2':[7,8,9],'c3':[10,11,12], 'c4':[13,14,15]}\n\n#딕셔너리를 데이터프레임으로 변환, 익덱스를 [r0,r1,r2]로 지정\ndf = pd.DataFrame(dict_data, index=['r0','r1','r2'])\ndf",
"_____no_output_____"
],
[
"#인덱스를 [r0,r1,r2,r3,r4]로 재지정\nnew_index = ['r0','r1','r2','r3','r4']\nndf = df.reindex(['r0','r1','r2','r3','r4'])\nndf",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"new_index = ['r0','r1','r2','r3','r4']\nndf = df.reindex(new_index)\nndf",
"_____no_output_____"
],
[
"#reindex로 발생한 NaN값을 숫자 0으로 채우기\nnew_index = ['r0','r1','r2','r3','r4']\nndf = df.reindex(new_index, fill_value=0)\nndf",
"_____no_output_____"
],
[
"#예제 1-18 정수형 위치 인덱스로 초기화\n#-*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#딕셔너리 정의\ndict_data = {'c0':[1,2,3], 'c1':[4,5,6], 'c2':[7,8,9],'c3':[10,11,12], 'c4':[13,14,15]}\n\n#딕셔너리를 데이터프레임으로 변환, 익덱스를 [r0,r1,r2]로 지정\ndf = pd.DataFrame(dict_data, index=['r0','r1','r2'])\ndf",
"_____no_output_____"
],
[
"#행 인덱스를 정수형으로 초기화\nndf = df.reset_index()\nndf",
"_____no_output_____"
],
[
"#예제 1-19 데이터프레임 정렬\n#-*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#딕셔너리 정의\ndict_data = {'c0':[1,2,3], 'c1':[4,5,6], 'c2':[7,8,9],'c3':[10,11,12], 'c4':[13,14,15]}\n\n#딕셔너리를 데이터프레임으로 변환, 익덱스를 [r0,r1,r2]로 지정\ndf = pd.DataFrame(dict_data, index=['r0','r1','r2'])\ndf",
"_____no_output_____"
],
[
"#내림차순으로 행 인덱스 정렬\nndf = df.sort_index(ascending=False)\nndf",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"#c1열을 기준으로 내림차순 정렬\nndf = df.sort_values(by='c1', ascending=False)\nndf",
"_____no_output_____"
],
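[
"# A minimal sketch: sort_values() also accepts a list of columns, with one ascending flag per column (ndf_multi is a hypothetical name).\nndf_multi = df.sort_values(by=['c1', 'c2'], ascending=[False, True])\nndf_multi",
"_____no_output_____"
],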
[
"#예제 1-21 시리즈를 숫자로 나누기\n# -*- coding: utf-8 -*-\n\n#라이브러리 불러오기\nimport pandas as pd\n\n#딕셔너리 데이터로 판다스 시리즈 만들기\nstudent1 = pd.Series({'국어':100,'영어': 80, '수학':90})\nstudent1",
"_____no_output_____"
],
[
"#학생의 과목별 점수를 100으로 나누기\npercentage = student1/100\npercentage",
"_____no_output_____"
],
[
"#예제 1-22 시리즈 사칙연산\n# -*- coding: utf-8 -*-\n\n#라이브러리 불러오기\nimport pandas as pd\n\n#딕셔너리 데이터로 판다스 시리즈 만들기\nstudent1 = pd.Series({'국어':100, '영어':80, '수학':90})\nstudent2 = pd.Series({'수학':80,'국어':90,'영어':80})\nprint(student1)\nprint(student2)",
"국어 100\n영어 80\n수학 90\ndtype: int64\n수학 80\n국어 90\n영어 80\ndtype: int64\n"
],
[
"#두 학생의 과목별 점수로 사칙연산 수행\nadd = student1 + student2\nsub = student1 - student2\nmul = student1 * student2\ndiv = student1 / student2\nprint(type(div))",
"<class 'pandas.core.series.Series'>\n"
],
[
"#사칙연산 결과를 데이터 프레임으로 합치기(시리즈 > 데이터프레임)\n\nresult = pd.DataFrame([add, sub, mul, div],\n index=['덧셈','뺄셈','곱셈','나눗셈'])\nresult",
"_____no_output_____"
],
[
"#예제 1-23 NaN 값이 있는 시리즈 연산\n# -*- coding: utf-8 -*-\n\n#라이브러리 불러오기\nimport pandas as pd\nimport numpy as np\n\n#딕셔너리 데이터로 판다스 시리즈 만들기\nstudent1 = pd.Series({'국어':np.nan, '영어':80, '수학':90})\nstudent2 = pd.Series({'수학':80, '국어':90})\n\nprint(student1)\nprint()\nprint(student2)",
"국어 NaN\n영어 80.0\n수학 90.0\ndtype: float64\n\n수학 80\n국어 90\ndtype: int64\n"
],
[
"#두 학생의 과목별 점수로 사칙연산 수행(시리즈 vs 시리즈)\nadd = student1 + student2\nsub = student1 - student2\nmul = student1 * student2\ndiv = student1 / student2",
"_____no_output_____"
],
[
"#사칙연산 결과를 데이터프레임으로 합치기\nresult = pd.DataFrame([add, sub, mul, div],\n index=['덧셈','뺄셈','곱샘','나눗셈'])\nresult",
"_____no_output_____"
],
[
"#예제 1-24 연산 메소드 사용- 시리즈 연산\n# -*- coding: utf-8 -*-\n\n#라이브러리 불러오기\nimport pandas as pd\nimport numpy as np\n\n#딕셔너리 데이터로 판다스 시리즈 만들기\nstudent1 = pd.Series({'국어':np.nan, '영어':80, '수학':90})\nstudent2 = pd.Series({'수학':80, '국어':90})\n\nprint(student1)\nprint()\nprint(student2)\n",
"국어 NaN\n영어 80.0\n수학 90.0\ndtype: float64\n\n수학 80\n국어 90\ndtype: int64\n"
],
[
"#두 학생의 과목별 점수로 사칙연산 수행(연산 메소드 사용)\nsr_add = student1.add(student2, fill_value=0)\nsr_sub = student1.sub(student2, fill_value=0)\nsr_mul = student1.mul(student2, fill_value=0)\nsr_div = student1.div(student2, fill_value=0)",
"_____no_output_____"
],
[
"#사칙연산 결과를 데이터 프레임으로 합치기\nresult = pd.DataFrame([sr_add, sr_sub, sr_mul, sr_div],\n index =['덧셈','뺄셈','곱셈','나눗셈'])\nprint(result)",
" 국어 수학 영어\n덧셈 90.0 170.000 80.0\n뺄셈 -90.0 10.000 80.0\n곱셈 0.0 7200.000 0.0\n나눗셈 0.0 1.125 inf\n"
],
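[
"# A minimal sketch: the inf above comes from dividing 80 by the fill value 0 for the missing '영어' score.\n# One option is to convert infinite results back to NaN after the division.\nimport numpy as np\nprint(result.replace([np.inf, -np.inf], np.nan))",
"_____no_output_____"
],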
[
"#예제 1-25 데이터프레임에 숫자 더하기\n# -*- coding: utf-8 -*-\n#라이브러리 불러오기\nimport pandas as pd\nimport seaborn as sns",
"_____no_output_____"
],
[
"#titanic 데이터셋에서 age, fare 2개 열을 선택하여 데이터프레임 만들기\ntitanic = sns.load_dataset('titanic')\ndf = titanic.loc[:, ['age','fare']]\nprint(df.head()) #첫 5행만 표시\nprint()\nprint(type(df))",
" age fare\n0 22.0 7.2500\n1 38.0 71.2833\n2 26.0 7.9250\n3 35.0 53.1000\n4 35.0 8.0500\n\n<class 'pandas.core.frame.DataFrame'>\n"
],
[
"#데이터프레임에 숫자 10 더하기\naddition = df + 10\nprint(addition.head())",
" age fare\n0 32.0 17.2500\n1 48.0 81.2833\n2 36.0 17.9250\n3 45.0 63.1000\n4 45.0 18.0500\n"
],
[
"#예제 1-26 데이터프레임끼리 더하기\n# -*- coding: utf-8 -*-\n\n#라이브러리 불러오기\nimport pandas as pd\nimport seaborn as sns\n\n#titanic 데이터셋에서 age, fare 2개 열을 선택하여 데이터프레임 만들기\ntitanic = sns.load_dataset('titanic')\ndf =titanic.loc[:,['age','fare']]\nprint(df.tail()) #마지막 5행 표시\nprint()\nprint(type(df))",
" age fare\n886 27.0 13.00\n887 19.0 30.00\n888 NaN 23.45\n889 26.0 30.00\n890 32.0 7.75\n\n<class 'pandas.core.frame.DataFrame'>\n"
],
[
"#데이터프레임에 숫자 10 더하기\naddition = df + 10\nprint(addition.tail())",
" age fare\n886 37.0 23.00\n887 29.0 40.00\n888 NaN 33.45\n889 36.0 40.00\n890 42.0 17.75\n"
],
[
"#데이터프레임끼리 연산하기(addition-df)\nsubtraction = addition - df\nprint(subtraction.tail())",
" age fare\n886 10.0 10.0\n887 10.0 10.0\n888 NaN 10.0\n889 10.0 10.0\n890 10.0 10.0\n"
],
[
"#예제 2-1 csv파일 읽기\n# -*- coding: utf-8 -*-\n\n#라이브러리 불러오기\nimport pandas as pd\n\n#파일 경로(파이썬 파일과 같은 폴더)를 찾고, 변수 file_path에 저장\nfile_path = './read_csv_sample.csv'",
"_____no_output_____"
],
[
"#read_csv()함수로 데이터프레임 변환, 변수 df1에 저장\ndf1 = pd.read_csv(file_path)\ndf1",
"_____no_output_____"
],
[
"#read_csv() 함수로 데이터 프레임변환, 변수df2 에 저장\n#header=None 옵션\ndf2=pd.read_csv(file_path, header=None)\ndf2",
"_____no_output_____"
],
[
"#read_csv()함수로 데이터프레임 변환, df3에 저장 \n#index_col=None 옵션\ndf3 = pd.read_csv(file_path, index_col=None)\ndf3",
"_____no_output_____"
],
[
"#read_csv()함수로 데이터프레임 변환, 변수 df4에 저장\n#index_col='c0'옵션\ndf4 = pd.read_csv(file_path, index_col='c0')\nprint(df4)",
" c1 c2 c3\nc0 \n0 1 4 7\n1 2 5 8\n2 3 6 9\n"
],
[
"#예제 2-2 Excel 파일 읽기\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#read_excel() 함수로 데이터프레임 변환\ndf1 = pd.read_excel('./남북한발전전력량.xlsx')\ndf2 = pd.read_excel('./남북한발전전력량.xlsx', header=None)\n\nprint(df1)\nprint('\\n')\nprint(df2)",
" 전력량 (억㎾h) 발전 전력별 1990 1991 1992 1993 1994 1995 1996 1997 ... 2007 \\\n0 남한 합계 1077 1186 1310 1444 1650 1847 2055 2244 ... 4031 \n1 NaN 수력 64 51 49 60 41 55 52 54 ... 50 \n2 NaN 화력 484 573 696 803 1022 1122 1264 1420 ... 2551 \n3 NaN 원자력 529 563 565 581 587 670 739 771 ... 1429 \n4 NaN 신재생 - - - - - - - - ... - \n5 북한 합계 277 263 247 221 231 230 213 193 ... 236 \n6 NaN 수력 156 150 142 133 138 142 125 107 ... 133 \n7 NaN 화력 121 113 105 88 93 88 88 86 ... 103 \n8 NaN 원자력 - - - - - - - - ... - \n\n 2008 2009 2010 2011 2012 2013 2014 2015 2016 \n0 4224 4336 4747 4969 5096 5171 5220 5281 5404 \n1 56 56 65 78 77 84 78 58 66 \n2 2658 2802 3196 3343 3430 3581 3427 3402 3523 \n3 1510 1478 1486 1547 1503 1388 1564 1648 1620 \n4 - - - - 86 118 151 173 195 \n5 255 235 237 211 215 221 216 190 239 \n6 141 125 134 132 135 139 130 100 128 \n7 114 110 103 79 80 82 86 90 111 \n8 - - - - - - - - - \n\n[9 rows x 29 columns]\n\n\n 0 1 2 3 4 5 6 7 8 9 ... \\\n0 전력량 (억㎾h) 발전 전력별 1990 1991 1992 1993 1994 1995 1996 1997 ... \n1 남한 합계 1077 1186 1310 1444 1650 1847 2055 2244 ... \n2 NaN 수력 64 51 49 60 41 55 52 54 ... \n3 NaN 화력 484 573 696 803 1022 1122 1264 1420 ... \n4 NaN 원자력 529 563 565 581 587 670 739 771 ... \n5 NaN 신재생 - - - - - - - - ... \n6 북한 합계 277 263 247 221 231 230 213 193 ... \n7 NaN 수력 156 150 142 133 138 142 125 107 ... \n8 NaN 화력 121 113 105 88 93 88 88 86 ... \n9 NaN 원자력 - - - - - - - - ... \n\n 19 20 21 22 23 24 25 26 27 28 \n0 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 \n1 4031 4224 4336 4747 4969 5096 5171 5220 5281 5404 \n2 50 56 56 65 78 77 84 78 58 66 \n3 2551 2658 2802 3196 3343 3430 3581 3427 3402 3523 \n4 1429 1510 1478 1486 1547 1503 1388 1564 1648 1620 \n5 - - - - - 86 118 151 173 195 \n6 236 255 235 237 211 215 221 216 190 239 \n7 133 141 125 134 132 135 139 130 100 128 \n8 103 114 110 103 79 80 82 86 90 111 \n9 - - - - - - - - - - \n\n[10 rows x 29 columns]\n"
],
[
"#예제 2-3 json 파일 읽기\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\n#read_json() 함수로 데이터프레임 변환\ndf = pd.read_json('./read_json_sample.json')\nprint(df)\nprint('\\n')\nprint(df.index)",
" name year developer opensource\npandas 2008 Wes Mckinneye True\nNumPy 2006 Travis Oliphant True\nmatplotlib 2003 John D. Hunter True\n\n\nIndex(['pandas', 'NumPy', 'matplotlib'], dtype='object')\n"
],
[
"#예제 2-4 웹에서 표 정보 읽기\n# -*- coding: utf-8 -*-\nimport pandas as pd\n\n#HTML 파일 경로 or 웹 페이지 주소를 url 변수에 저장\nurl='./sample.html'\n\n#HTML 웹페이지의 표(table) 를 가져와서 데이터프레임으로 변환\ntables = pd.read_html(url)",
"_____no_output_____"
],
[
"#표(table)의 개수 확인\nprint(len(tables))\nprint('\\n')",
"2\n\n\n"
],
[
"#tables 리스트의 원소를 iteration 하면서 각각 화면 출력\nfor i in range(len(tables)):\n print(\"tables[%s]\"%i)\n print(tables[i])\n print('\\n')\n ",
"tables[0]\n Unnamed: 0 c0 c1 c2 c3\n0 0 0 1 4 7\n1 1 1 2 5 8\n2 2 2 3 6 9\n\n\ntables[1]\n name year developer opensource\n0 NumPy 2006 Travis Oliphant True\n1 matplotlib 2003 John D. Hunter True\n2 pandas 2008 Wes Mckinneye True\n\n\n"
],
[
"#파이썬 패키지 정보가 들어 있는 두 번째 데이터프레임을 선택하여 df 변수에 저장\ndf = tables[1]\nprint(df)",
" name year developer opensource\n0 NumPy 2006 Travis Oliphant True\n1 matplotlib 2003 John D. Hunter True\n2 pandas 2008 Wes Mckinneye True\n"
],
[
"#'name' 열을 인덱스로 지정\ndf.set_index(['name'], inplace=True)\nprint(df)",
" year developer opensource\nname \nNumPy 2006 Travis Oliphant True\nmatplotlib 2003 John D. Hunter True\npandas 2008 Wes Mckinneye True\n"
],
[
"#예제 2-5 미국 etf 리스트 가져오기\n\n# -*- coding: utf-8 -*-\n\n#라이브러리 불러오기\n\nfrom bs4 import BeautifulSoup\nimport requests\nimport re\nimport pandas as pd",
"_____no_output_____"
],
[
"#위키피디아 미국 ETF 웹 페이지에서 필요한 정보를 스크래핑하여 딕셔너리 형태로 변수 etfs에 저장\nurl = \"https://en.wikipedia.org/wiki/List_of_American_exchange-traded_funds\"\nresp = requests.get(url)\nsoup = BeautifulSoup(resp.text, 'lxml')\nrows = soup.select('div > ul > li')",
"_____no_output_____"
],
[
"etfs = {}\nfor row in rows:\n try:\n etf_name = re.findall('^(.*) \\(NYSE', row.text)\n etf_market = re.findall('\\((.*)\\|',row.text)\n etf_ticker = re.findall('NYSE Arca\\|(.*)\\)',row.text)\n \n if (len(etf_ticker) > 0) & (len(etf_market) > 0) & (len(etf_name) > 0):\n etfs[etf_ticker[0]] = [etf_market[0], etf_name[0]]\n \n except AttributeError as err:\n pass\n ",
"_____no_output_____"
],
[
"#etfs 딕셔너리 출력\nprint(etfs)",
"{'DIA': ['NYSE Arca', 'DIAMONDS Trust, Series 1'], 'RSP': ['NYSE Arca', 'Guggenheim S&P 500 Equal Weight'], 'IOO': ['NYSE Arca', 'iShares S&P Global 100 Index'], 'IVV': ['NYSE Arca', 'iShares S&P 500 Index'], 'SPY': ['NYSE Arca', 'SPDR S&P 500'], 'VOO': ['NYSE Arca', 'Vanguard S&P 500'], 'IWM': ['NYSE Arca', 'iShares Russell 2000 Index'], 'OEF': ['NYSE Arca', 'iShares S&P 100 Index'], 'CVY': ['NYSE Arca', 'Guggenheim Multi-Asset Income'], 'RPG': ['NYSE Arca', 'Guggenheim S&P 500 Pure Growth ETF'], 'RPV': ['NYSE Arca', 'Guggenheim S&P 500 Pure Value ETF'], 'IWB': ['NYSE Arca', 'iShares Russell 1000 Index'], 'PKW': ['NYSE Arca', 'PowerShares Buyback Achievers'], 'PRF': ['NYSE Arca', 'PowerShares FTSE RAFI US 1000'], 'SPLV': ['NYSE Arca', 'PowerShares S&P 500 Low Volatility'], 'SCHX': ['NYSE Arca', 'Schwab US Large-Cap ETF'], 'SCHD': ['NYSE Arca', 'Schwab US Dividend Equity ETF'], 'FNDX': ['NYSE Arca', 'Schwab Fundamental U.S. Large Company Index ETF'], 'SDY': ['NYSE Arca', 'SPDR S&P Dividend ETF'], 'VV': ['NYSE Arca', 'Vanguard Large-Cap'], 'MGC': ['NYSE Arca', 'Vanguard Mega-Cap 300'], 'VONE': ['NYSE Arca', 'Vanguard Russell 1000'], 'VIG': ['NYSE Arca', 'Vanguard Dividend Appreciation'], 'VYM': ['NYSE Arca', 'Vanguard High Dividend Yield'], 'DTN': ['NYSE Arca', 'WisdomTree Dividend ex-Financials'], 'DLN': ['NYSE Arca', 'WisdomTree LargeCap Dividend'], 'MDY': ['NYSE Arca', 'MidCap SPDR'], 'DVY': ['NYSE Arca', 'iShares Select Dividend'], 'IWR': ['NYSE Arca', 'iShares Russell Midcap Index'], 'IJH': ['NYSE Arca', 'iShares S&P MidCap 400 Index'], 'PDP': ['NYSE Arca', 'PowerShares DWA Mom Port'], 'SCHM': ['NYSE Arca', 'Schwab US Mid-Cap'], 'IVOO': ['NYSE Arca', 'Vanguard S&P Mid-Cap 400'], 'VO': ['NYSE Arca', 'Vanguard Mid-Cap'], 'VXF': ['NYSE Arca', 'Vanguard Extended Market'], 'DON': ['NYSE Arca', 'WisdomTree MidCap Dividend ETF'], 'IWC': ['NYSE Arca', 'iShares Micro-Cap'], 'IJR': ['NYSE Arca', 'iShares S&P SmallCap 600 Index'], 'SCHA': ['NYSE Arca', 'Schwab US Small-Cap ETF'], 'FNDA': ['NYSE Arca', 'Schwab Fundamental U.S. 
Small Company Index ETF'], 'VIOO': ['NYSE Arca', 'Vanguard S&P Small-Cap 600'], 'VB': ['NYSE Arca', 'Vanguard Small-Cap'], 'VTWO': ['NYSE Arca', 'Vanguard Russell 2000'], 'EEB': ['NYSE Arca', 'Claymore/BNY BRIC'], 'ECON': ['NYSE Arca', 'EGShares Emerging Markets Consumer'], 'IDV': ['NYSE Arca', 'iShares International Select Div'], 'BKF': ['NYSE Arca', 'iShares MSCI BRIC Index'], 'EFA': ['NYSE Arca', 'iShares MSCI EAFE Index'], 'SCZ': ['NYSE Arca', 'iShares MSCI EAFE Small-Cap'], 'EEM': ['NYSE Arca', 'iShares MSCI Emerging Markets Index'], 'PID': ['NYSE Arca', 'PowerShares Intl Dividend Achievers'], 'SCHC': ['NYSE Arca', 'Schwab International Small-Cap Equity'], 'SCHE': ['NYSE Arca', 'Schwab Emerging Markets Equity ETF'], 'SCHF': ['NYSE Arca', 'Schwab International Equity ETF'], 'FNDF': ['NYSE Arca', 'Schwab Fundamental International Large Company Index ETF'], 'FNDC': ['NYSE Arca', 'Schwab Fundamental International Small Company Index ETF'], 'FNDE': ['NYSE Arca', 'Schwab Fundamental Emerging Markets Large Company Index ETF'], 'DWX': ['NYSE Arca', 'SPDR S&P International Dividend'], 'VEA': ['NYSE Arca', 'Vanguard MSCI EAFE'], 'VWO': ['NYSE Arca', 'Vanguard MSCI Emerging Markets'], 'VXUS': ['NYSE Arca', 'Vanguard Total International Stock'], 'VEU': ['NYSE Arca', 'Vanguard FTSE All-World ex-US'], 'VSS': ['NYSE Arca', 'Vanguard FTSE All-World ex-US Small-Cap'], 'DEM': ['NYSE Arca', 'WisdomTree Emerging Markets Equity Inc'], 'DGS': ['NYSE Arca', 'WisdomTree Emerging Mkts SmallCap Div'], 'EZU': ['NYSE Arca', 'iShares MSCI EMU Index'], 'EPP': ['NYSE Arca', 'iShares MSCI Pacific ex-Japan'], 'IEV': ['NYSE Arca', 'iShares S&P Europe 350 Index'], 'ILF': ['NYSE Arca', 'iShares S&P Latin America 40 Index'], 'FEZ': ['NYSE Arca', 'SPDR EURO STOXX 50'], 'VGK': ['NYSE Arca', 'Vanguard MSCI Europe'], 'VPL': ['NYSE Arca', 'Vanguard MSCI Pacific'], 'HEDJ': ['NYSE Arca', 'WisdomTree Europe Hedged Equity ETF'], 'DFE': ['NYSE Arca', 'WisdomTree Europe SmallCap Dividend'], 'AND': ['NYSE Arca', 'Global X FTSE Andean 40 ETF'], 'GXF': ['NYSE Arca', 'Global X FTSE Nordic Region ETF'], 'EWA': ['NYSE Arca', 'iShares MSCI Australia Index'], 'EWC': ['NYSE Arca', 'iShares MSCI Canada Index'], 'EWG': ['NYSE Arca', 'iShares MSCI German Index'], 'EIS': ['NYSE Arca', 'iShares MSCI Israel ETF'], 'EWI': ['NYSE Arca', 'iShares MSCI Italy Capped'], 'EWJ': ['NYSE Arca', 'iShares MSCI Japan Index'], 'EWY': ['NYSE Arca', 'iShares MSCI Korea Index'], 'EWD': ['NYSE Arca', 'iShares MSCI Sweden Index'], 'EWL': ['NYSE Arca', 'iShares MSCI Switzerland Capped'], 'EWP': ['NYSE Arca', 'iShares MSCI Spain Capped'], 'EWU': ['NYSE Arca', 'iShares MSCI United Kingdom Index'], 'DXJ': ['NYSE Arca', 'WisdomTree Japan Hedged Equity'], 'NORW': ['NYSE Arca', 'Global X FTSE Norway 30 ETF'], 'INDF': ['NYSE Arca', 'Nifty India Financials ETF'], 'EWZ': ['NYSE Arca', 'iShares MSCI Brazil Index'], 'FXI': ['NYSE Arca', 'iShares FTSE/Xinhua China 25 Index'], 'EWH': ['NYSE Arca', 'iShares MSCI Hong Kong Index'], 'EWW': ['NYSE Arca', 'iShares MSCI Mexico Index'], 'EPHE': ['NYSE Arca', 'iShares MSCI Philippines Index'], 'RSX': ['NYSE Arca', 'Market Vectors Russia ETF'], 'EWS': ['NYSE Arca', 'iShares MSCI Singapore Index'], 'EWM': ['NYSE Arca', 'iShares MSCI Malaysia Index'], 'EWT': ['NYSE Arca', 'iShares MSCI Taiwan Index'], 'EPI': ['NYSE Arca', 'WisdomTree India Earnings ETF'], 'ARGT': ['NYSE Arca', 'Global X FTSE Argentina 20 ETF'], 'BRAF': ['NYSE Arca', 'Global X Brazil Financials ETF'], 'BRAQ': ['NYSE Arca', 'Global X Brazil Consumer ETF'], 'BRAZ': ['NYSE 
Arca', 'Global X Brazil Mid Cap ETF'], 'GXG': ['NYSE Arca', 'Global X FTSE Colombia 20 ETF'], 'XLY': ['NYSE Arca', 'Consumer Discretionary Select Sector SPDR'], 'IYC': ['NYSE Arca', 'iShares Dow Jones US Consumer Services'], 'ITB': ['NYSE Arca', 'iShares US Home Construction'], 'XHB': ['NYSE Arca', 'SPDR S&P Homebuilders ETF'], 'VCR': ['NYSE Arca', 'Vanguard Consumer Discretionary'], 'XLP': ['NYSE Arca', 'Consumer Staples Select Sector SPDR'], 'IYK': ['NYSE Arca', 'iShares Dow Jones US Consumer Goods'], 'VDC': ['NYSE Arca', 'Vanguard Consumer Staples'], 'AMLP': ['NYSE Arca', 'ALPS Alerian MLP ETF'], 'XLE': ['NYSE Arca', 'Energy Select Sector SPDR'], 'IYE': ['NYSE Arca', 'iShares Dow Jones US Energy'], 'IGE': ['NYSE Arca', 'iShares North American Natural Resources'], 'OIH': ['NYSE Arca', 'Market Vectors Oil Services ETF'], 'XOP': ['NYSE Arca', 'SPDR S&P Oil & Gas Explor & Prod ETF'], 'VDE': ['NYSE Arca', 'Vanguard Energy'], 'ESPO': ['NYSE Arca', 'VanEck Vectors Video Gaming and eSports ETF'], 'XLF': ['NYSE Arca', 'Financial Select Sector SPDR'], 'IYF': ['NYSE Arca', 'iShares Dow Jones US Financial'], 'KBE': ['NYSE Arca', 'SPDR S&P Bank ETF'], 'KRE': ['NYSE Arca', 'SPDR S&P Regional Banking ETF'], 'VFH': ['NYSE Arca', 'Vanguard Financials'], 'FXH': ['NYSE Arca', 'First Trust Health Care AlphaDEX'], 'FBT': ['NYSE Arca', 'First Trust NYSE Arca Biotech Index'], 'XLV': ['NYSE Arca', 'Health Care Select Sector SPDR'], 'IYH': ['NYSE Arca', 'iShares Dow Jones US Health Care'], 'PJP': ['NYSE Arca', 'PowerShares Dynamic Pharmaceuticals'], 'XBI': ['NYSE Arca', 'SPDR S&P Biotech ETF'], 'VHT': ['NYSE Arca', 'Vanguard Health Care'], 'XLI': ['NYSE Arca', 'Industrial Select Sector SPDR'], 'IYJ': ['NYSE Arca', 'iShares Dow Jones US Industrial'], 'VIS': ['NYSE Arca', 'Vanguard Industrials'], 'XLB': ['NYSE Arca', 'Materials Select Sector SPDR'], 'IYM': ['NYSE Arca', 'iShares Dow Jones US Materials'], 'GDX': ['NYSE Arca', 'Market Vectors Gold Miners ETF'], 'GDXJ': ['NYSE Arca', 'Market Vectors Junior Gold Miners ETF'], 'VAW': ['NYSE Arca', 'Vanguard Materials'], 'FDN': ['NYSE Arca', 'First Trust Dow Jones Internet Index'], 'XLK': ['NYSE Arca', 'Technology Select Sector SPDR'], 'IYW': ['NYSE Arca', 'iShares Dow Jones US Technology'], 'IGV': ['NYSE Arca', 'iShares North American Tech-Software'], 'VGT': ['NYSE Arca', 'Vanguard Information Technology'], 'IYZ': ['NYSE Arca', 'iShares Dow Jones US Telecommunications'], 'VOX': ['NYSE Arca', 'Vanguard Telecommunication Services'], 'XLU': ['NYSE Arca', 'Utilities Select Sector SPDR'], 'IDU': ['NYSE Arca', 'iShares Dow Jones US Utilities'], 'VPU': ['NYSE Arca', 'Vanguard Utilities'], 'IPD': ['NYSE Arca', 'SPDR S&P International Consumer Discretionary'], 'RXI': ['NYSE Arca', 'iShares S&P Global Consumer Discretionary'], 'IPS': ['NYSE Arca', 'SPDR S&P International Consumer Staples'], 'KXI': ['NYSE Arca', 'iShares S&P Global Consumer Staples'], 'IPW': ['NYSE Arca', 'SPDR S&P International Energy'], 'IXC': ['NYSE Arca', 'iShares S&P Global Energy'], 'IPF': ['NYSE Arca', 'SPDR S&P International Financial'], 'IXG': ['NYSE Arca', 'iShares S&P Global Financial'], 'IRY': ['NYSE Arca', 'SPDR S&P International Health Care'], 'IXJ': ['NYSE Arca', 'iShares S&P Global Healthcare'], 'IPN': ['NYSE Arca', 'SPDR S&P International Industrial'], 'EXI': ['NYSE Arca', 'iShares S&P Global Industrials'], 'GUNR': ['NYSE Arca', 'FlexShares Mstar Gbl Upstrm Nat Res ETF'], 'IRV': ['NYSE Arca', 'SPDR S&P International Materials'], 'MXI': ['NYSE Arca', 'iShares S7P Global Materials'], 'IPK': ['NYSE 
Arca', 'SPDR S&P International Technology'], 'IXN': ['NYSE Arca', 'iShares S&P Global Technology'], 'IST': ['NYSE Arca', 'SPDR S&P International Telecommunications'], 'IXP': ['NYSE Arca', 'iShares S&P Global Telecommunications'], 'IPU': ['NYSE Arca', 'SPDR S&P International Utilities'], 'JXI': ['NYSE Arca', 'iShares S&P Global Utilities'], 'HYLD': ['NYSE Arca', 'AdvisorShares Peritus High Yield ETF'], 'TDTT': ['NYSE Arca', 'FlexShares iBoxx 3Yr Target Dur TIPS ETF'], 'CSJ': ['NYSE Arca', 'iShares 1-3 Year Credit Bond'], 'IEI': ['NYSE Arca', 'iShares 3-7 Year Treasury Bond'], 'AGG': ['NYSE Arca', 'iShares Core U.S. Aggregate Bond'], 'SHY': ['NYSE Arca', 'iShares Barclays 1-3 Year Treasury Bond'], 'TIP': ['NYSE Arca', 'iShares Barclays TIPS Bond'], 'HYG': ['NYSE Arca', 'iShares iBoxx $ High Yield Corp Bond'], 'LQD': ['NYSE Arca', 'iShares iBoxx $ Invest Grade Corp Bond'], 'IEF': ['NYSE Arca', 'iShares Barclays 7-10 Year Treasury'], 'TLT': ['NYSE Arca', 'iShares Barclays 20+ Year Treas Bond'], 'FLOT': ['NYSE Arca', 'iShares Floating Rate Bond'], 'CIU': ['NYSE Arca', 'iShares Intermediate Credit Bd'], 'GVI': ['NYSE Arca', 'iShares Intm Government/Credit Bond'], 'EMB': ['NYSE Arca', 'iShares JPMorgan USD Emerg Markets Bond'], 'MBB': ['NYSE Arca', 'iShares MBS'], 'MUB': ['NYSE Arca', 'iShares National AMT-Free Muni Bond'], 'SHV': ['NYSE Arca', 'iShares Short Treasury Bond'], 'HYD': ['NYSE Arca', 'Market Vectors High-Yield Muni ETF'], 'HYS': ['NYSE Arca', 'PIMCO 0-5 Year High Yld Corp Bd Idx ETF'], 'STPZ': ['NYSE Arca', 'PIMCO 1-5 Year US TIPS Index ETF'], 'MINT': ['NYSE Arca', 'PIMCO Enhanced Short Duration ETF'], 'BOND': ['NYSE Arca', 'PIMCO Total Return ETF'], 'PCY': ['NYSE Arca', 'PowerShares Emerging Mkts Sovereign Debt'], 'BKLN': ['NYSE Arca', 'PowerShares Senior Loan Port'], 'SCHZ': ['NYSE Arca', 'Schwab US Aggregate Bond'], 'SCHP': ['NYSE Arca', 'Schwab US TIPS'], 'SCHO': ['NYSE Arca', 'Schwab Short-Term US Treasury'], 'SCHR': ['NYSE Arca', 'Schwab Intermediate-Term US Treasury'], 'JNK': ['NYSE Arca', 'SPDR Barclays Capital High Yield Bond ETF'], 'BIL': ['NYSE Arca', 'SPDR Barclays 1-3 Month T-Bill'], 'SCPB': ['NYSE Arca', 'SPDR Barclays Capital Short Term Corp Bd'], 'BWX': ['NYSE Arca', 'SPDR Barclays International Treasury Bd'], 'SJNK': ['NYSE Arca', 'SPDR Barclays Short Term Hi Yld Bd ETF'], 'TFI': ['NYSE Arca', 'SPDR Nuveen Barclays Capital Muni Bond'], 'SHM': ['NYSE Arca', 'SPDR Nuveen Barclays Capital S/T Muni Bd'], 'EDV': ['NYSE Arca', 'Vanguard Extended Duration Treasury'], 'BIV': ['NYSE Arca', 'Vanguard Intermediate-Term Bond'], 'VCIT': ['NYSE Arca', 'Vanguard Intermediate-Term Corporate Bond'], 'VGIT': ['NYSE Arca', 'Vanguard Intermediate-Term Government Bond'], 'BLV': ['NYSE Arca', 'Vanguard Long-Term Bond'], 'VCLT': ['NYSE Arca', 'Vanguard Long-Term Corporate Bond'], 'VGLT': ['NYSE Arca', 'Vanguard Long-Term Government Bond'], 'VMBS': ['NYSE Arca', 'Vanguard Mortgage-Backed Securities'], 'BSV': ['NYSE Arca', 'Vanguard Short-Term Bond'], 'VCSH': ['NYSE Arca', 'Vanguard Short-Term Corporate Bond'], 'VGSH': ['NYSE Arca', 'Vanguard Short-Term Government Bond'], 'BND': ['NYSE Arca', 'Vanguard Total Bond Market'], 'RJI': ['NYSE Arca', 'ELEMENTS Rogers International Commodity Index ETN'], 'DJP': ['NYSE Arca', 'iPath Dow Jones-UBS Commodity Idx TR ETN'], 'GSG': ['NYSE Arca', 'iShares S&P GSCI Commodity-Indexed Trust'], 'DBC': ['NYSE Arca', 'PowerShares DB Commodity Idx Trking Fund'], 'RJA': ['NYSE Arca', 'ELEMENTS Rogers Agriculture ETN'], 'JJA': ['NYSE Arca', 'iPath Dow Jones-UBS 
Agriculture ETN'], 'DBA': ['NYSE Arca', 'PowerShares DB Agriculture'], 'RJN': ['NYSE Arca', 'ELEMENTS Rogers Energy ETN'], 'OIL': ['NYSE Arca', 'iPath Dow Jones-UBS Crude Oil ETN'], 'GAZ': ['NYSE Arca', 'iPath Dow Jones-UBS Natural Gas ETN'], 'UNG': ['NYSE Arca', 'United States Natural Gas Fund'], 'USO': ['NYSE Arca', 'United States Oil Fund'], 'RJZ': ['NYSE Arca', 'ELEMENTS Rogers Metal ETN'], 'JJM': ['NYSE Arca', 'iPath Dow Jones-UBS Industrial Metals ETN'], 'DBB': ['NYSE Arca', 'PowerShares DB Base Metals'], 'SGOL': ['NYSE Arca', 'ETFS Physical Swiss Gold Shares'], 'IAU': ['NYSE Arca', 'iShares Gold Trust'], 'GLD': ['NYSE Arca', 'SPDR Gold Shares'], 'SIVR': ['NYSE Arca', 'ETFS Physical Silver Shares'], 'SLV': ['NYSE Arca', 'iShares Silver Trust'], 'PALL': ['NYSE Arca', 'ETFS Physical Palladium Shares'], 'PPLT': ['NYSE Arca', 'ETFS Physical Platinum Shares'], 'ICF': ['NYSE Arca', 'iShares Cohen & Steers Realty Majors'], 'IFAS': ['NYSE Arca', 'iShares Dow Jones Asia Real Estate'], 'IFEU': ['NYSE Arca', 'iShares Dow Jones Europe Real Estate'], 'IYR': ['NYSE Arca', 'iShares Dow Jones US Real Estate'], 'REM': ['NYSE Arca', 'iShares Mortgage Real Estate Capped'], 'SCHH': ['NYSE Arca', 'Schwab US REIT'], 'RWO': ['NYSE Arca', 'SPDR Dow Jones Global Real Estate'], 'RWX': ['NYSE Arca', 'SPDR Dow Jones Intl Real Estate'], 'RWR': ['NYSE Arca', 'SPDR Dow Jones REIT ETF'], 'WREI': ['NYSE Arca', 'Wilshire US REIT ETF'], 'VNQ': ['NYSE Arca', 'Vanguard Real Estate'], 'VNQI': ['NYSE Arca', 'Vanguard Global ex-US Real Estate'], 'HDGE': ['NYSE Arca', 'AdvisorShares Ranger Equity Bear ETF'], 'HDGI': ['NYSE Arca', 'AdvisorShares Athena International Bear ETF'], 'DOG': ['NYSE Arca', 'ProShares Short Dow 30'], 'SH': ['NYSE Arca', 'ProShares Short S&P 500'], 'MYY': ['NYSE Arca', 'ProShares Short S&P MidCap 400'], 'SBB': ['NYSE Arca', 'ProShares Short S&P SmallCap 600'], 'PSQ': ['NYSE Arca', 'ProShares Short Nasdaq 100'], 'RWM': ['NYSE Arca', 'ProShares Short Russell 2000'], 'EFZ': ['NYSE Arca', 'ProShares Short MSCI EAFE'], 'FBGX': ['NYSE Arca', 'UBS AG FI Enhanced Large Cap Growth 2x ETF'], 'FLGE': ['NYSE Arca', 'Credit Suisse FI Large Cap Growth Enhanced ETF'], 'MIDU': ['NYSE Arca', 'Direxion Daily Mid Cap Bull 3x ETF'], 'SPUU': ['NYSE Arca', 'Direxion Daily S&P 500 Bull 2x ETF'], 'SPXL': ['NYSE Arca', 'Direxion Daily S&P 500 Bull 3x ETF'], 'ERX': ['NYSE Arca', 'Direxion Energy Bull 3x ETF'], 'FAS': ['NYSE Arca', 'Direxion Financials Bull 3x ETF'], 'BGU': ['NYSE Arca', 'Direxion Large Cap Bull 3x'], 'TNA': ['NYSE Arca', 'Direxion Small Cap Bull 3x'], 'DDM': ['NYSE Arca', 'ProShares Ultra Dow 30'], 'QLD': ['NYSE Arca', 'ProShares Ultra NASDAQ-100'], 'UWM': ['NYSE Arca', 'ProShares Ultra Russell 2000'], 'SSO': ['NYSE Arca', 'ProShares Ultra S&P 500'], 'UPRO': ['NYSE Arca', 'ProShares S&P 500 3x'], 'SDS': ['NYSE Arca', 'UltraShort S&P 500 ProShares 2x'], 'SPXU': ['NYSE Arca', 'ProShares S&P 500 Direxionshares Bear 3x ETF'], 'TZA': ['NYSE Arca', 'Direxion Russell 2000 Direxionshares Bear 3x ETF'], 'SQQQ': ['NYSE Arca', 'UltraPro Short QQQ'], 'QID': ['NYSE Arca', 'UltraShort NASDAQ-100 ProShares 2X'], 'SKF': ['NYSE Arca', 'UltraShort Financials ProShares'], 'TWM': ['NYSE Arca', 'UltraShort Russell 2000 ProShares'], 'DXD': ['NYSE Arca', 'UltraShort Dow 30 ProShares'], 'SRS': ['NYSE Arca', 'UltraShort Real Estate ProShares'], 'MZZ': ['NYSE Arca', 'UltraShort MidCap 400 ProShares'], 'DUG': ['NYSE Arca', 'UltraShort Oil & Gas ProShares'], 'BGZ': ['NYSE Arca', 'Direxion Large Cap Bear 3x ETF'], 'ERY': ['NYSE Arca', 
'Direxion Energy Bear 3x ETF'], 'FAZ': ['NYSE Arca', 'Direxion Financials Bear 3x ETF'], 'AADR': ['NYSE Arca', 'AdvisorShares WCM/BNY Mellon Focused Growth ADR ETF'], 'ACCU': ['NYSE Arca', 'AdvisorShares Accuvest Global Opportunities ETF'], 'DBIZ': ['NYSE Arca', 'AdvisorShares Pring Turner Business Cycle ETF'], 'EPRO': ['NYSE Arca', 'AdvisorShares EquityPro ETF'], 'FWDB': ['NYSE Arca', 'AdvisorShares Madrona Global Bond ETF'], 'FWDD': ['NYSE Arca', 'AdvisorShares Madrona Domestic ETF'], 'FWDI': ['NYSE Arca', 'AdvisorShares Madrona International ETF'], 'GEUR': ['NYSE Arca', 'AdvisorShares Gartman Gold/Euro ETF'], 'GGBP': ['NYSE Arca', 'AdvisorShares Gartman Gold/British Pound ETF'], 'GIVE': ['NYSE Arca', 'AdvisorShares Global Echo ETF'], 'GLDE': ['NYSE Arca', 'AdvisorShares International Gold ETF'], 'GYEN': ['NYSE Arca', 'AdvisorShares Gartman Gold/Yen ETF'], 'GTAA': ['NYSE Arca', 'AdvisorShares Cambria Global Tactical ETF'], 'HOLD': ['NYSE Arca', 'AdvisorShares Sage Core Reserves ETF'], 'MATH': ['NYSE Arca', 'AdvisorShares Meidell Tactical Advantage ETF'], 'MINC': ['NYSE Arca', 'AdvisorShares Newfleet Multi-Sector Income ETF'], 'QEH': ['NYSE Arca', 'AdvisorShares QAM Equity Hedge ETF'], 'TTFS': ['NYSE Arca', 'AdvisorShares TrimTabs Float Shrink ETF'], 'VEGA': ['NYSE Arca', 'AdvisorShares STAR Global Buy-Write ETF'], 'YPRO': ['NYSE Arca', 'AdvisorShares YieldPro ETF'], 'RIGS': ['NYSE Arca', 'Riverfront Strategic Income Fund'], 'ARKG': ['NYSE Arca', 'ARK Genomic Revolution Multi-Sector ETF'], 'ARKQ': ['NYSE Arca', 'ARK Industrial Innovation ETF'], 'ARKW': ['NYSE Arca', 'ARK Web x.0 ETF'], 'ARKK': ['NYSE Arca', 'ARK Innovation ETF'], 'SYLD': ['NYSE Arca', 'Cambria Shareholder Yield ETF'], 'GMMB': ['NYSE Arca', 'Columbia Intermediate Municipal Bond ETF'], 'GMTB': ['NYSE Arca', 'Columbia Core Bond ETF'], 'GVT': ['NYSE Arca', 'Columbia Select Large Cap Value ETF'], 'RPX': ['NYSE Arca', 'Columbia Large Cap Growth ETF'], 'RWG': ['NYSE Arca', 'Columbia Select Large Cap Growth ETF'], 'EMLP': ['NYSE Arca', 'First Trust North American Energy Infrastructure Fund'], 'FMB': ['NYSE Arca', 'First Trust Managed Municipal ETF'], 'FMF': ['NYSE Arca', 'First Trust Morningstar Managed Futures Strategy Fund'], 'FPE': ['NYSE Arca', 'First Trust Preferred Securities and Income ETF'], 'FTGS': ['NYSE Arca', 'First Trust Global Tactical Commodity Strategy Fund'], 'FTHI': ['NYSE Arca', 'First Trust High Income ETF'], 'FTLB': ['NYSE Arca', 'First Trust Low Beta Income ETF'], 'FTSL': ['NYSE Arca', 'First Trust Senior Loan ETF'], 'HYLS': ['NYSE Arca', 'First Trust Tactical High Yield ETF'], 'RAVI': ['NYSE Arca', 'Flexshares Ready Access Variable Income Fund'], 'FTSD': ['NYSE Arca', 'Franklin Short Duration US Government ETF'], 'GSY': ['NYSE Arca', 'Guggenheim Enhanced Short Duration Bond ETF'], 'HECO': ['NYSE Arca', 'Huntington EcoLogical Strategy ETF'], 'HUSE': ['NYSE Arca', 'Huntington U.S. Equity Rotation Strategy ETF'], 'ICSH': ['NYSE Arca', 'iShares Liquidity Income ETF'], 'IEIL': ['NYSE Arca', 'iShares Enhanced International Large-Cap ETF'], 'IEIS': ['NYSE Arca', 'iShares Enhanced International Small-Cap ETF'], 'IELG': ['NYSE Arca', 'iShares Enhanced U.S. Large-Cap ETF'], 'IESM': ['NYSE Arca', 'iShares Enhanced U.S. 
Small-Cap ETF'], 'NEAR': ['NYSE Arca', 'iShares Short Maturity Bond ETF'], 'BABZ': ['NYSE Arca', 'PIMCO Build America Bond Strategy'], 'DI': ['NYSE Arca', 'PIMCO Diversified Income ETF'], 'FORX': ['NYSE Arca', 'PIMCO Foreign Currency Strategy ETF'], 'ILB': ['NYSE Arca', 'PIMCO Global Advantage Inflation-Linked Bond Strategy'], 'LDUR': ['NYSE Arca', 'PIMCO Low Duration ETF'], 'MUNI': ['NYSE Arca', 'PIMCO Intermediate Muni Bond Strategy ETF'], 'SMMU': ['NYSE Arca', 'PIMCO Short Term Muni Bond Strategy ETF'], 'CHNA': ['NYSE Arca', 'PowerShares China-A Share Portfolio'], 'LALT': ['NYSE Arca', 'PowerShares Multi-Strategy Alternative Portfolio'], 'PHDG': ['NYSE Arca', 'S&P 500 Downside Hedged Portfolio'], 'PSR': ['NYSE Arca', 'Active U.S. Real Estate Fund ETF'], 'ONEF': ['NYSE Arca', 'Russell Equity ETF'], 'GAL': ['NYSE Arca', 'SPDR SSgA Global Allocation'], 'INKM': ['NYSE Arca', 'SPDR SSgA Income Allocation'], 'RLY': ['NYSE Arca', 'SPDR SSgA Multi-Asset Real Return'], 'SYE': ['NYSE Arca', 'SPDR MFS Systematic Core Equity ETF'], 'SYG': ['NYSE Arca', 'SPDR MFS Systematic Growth Equity ETF'], 'SYV': ['NYSE Arca', 'SPDR MFS Systematic Value Equity ETF'], 'SRLN': ['NYSE Arca', 'SPDR Blackstone/GSO Senior Loan ETF'], 'ULST': ['NYSE Arca', 'SPDR SSgA Ultra Short Term Bond ETF'], 'ALD': ['NYSE Arca', 'WisdomTree Asia Local Debt'], 'AUNZ': ['NYSE Arca', 'WisdomTree Australia & New Zealand Debt Fund'], 'BZF': ['NYSE Arca', 'WisdomTree Dreyfus Brazilian Real Fund'], 'CCX': ['NYSE Arca', 'WisdomTree Commodity Currency'], 'CEW': ['NYSE Arca', 'WisdomTree Dreyfus Emerging Currency'], 'CRDT': ['NYSE Arca', 'WisdomTree Strategic Corporate Bond Fund'], 'CYB': ['NYSE Arca', 'WisdomTree Dreyfus Chinese Yuan'], 'ELD': ['NYSE Arca', 'WisdomTree Emerging Markets Local Debts Fund'], 'EMCB': ['NYSE Arca', 'WisdomTree Emerging Markets Corporate Bond Fund'], 'EU': ['NYSE Arca', 'WisdomTree Euro Debt Fund'], 'ICB': ['NYSE Arca', 'WisdomTree Dreyfus Indian Rupee'], 'RRF': ['NYSE Arca', 'WisdomTree Global Real Return'], 'USDU': ['NYSE Arca', 'WisdomTree Bloomberg U.S. Dollar Bullish Fund'], 'WDTI': ['NYSE Arca', 'WisdomTree Managed Futures Strategy Fund']}\n"
],
[
"#etfs 딕셔너리를 데이터 프레임으로 변환\ndf = pd.DataFrame(etfs)\ndf",
"_____no_output_____"
],
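[
"# A minimal sketch: a layout with one row per ticker is often handier than the one-column-per-ticker frame above.\n# from_dict(orient='index') uses the dictionary keys as the row index; the column names here are chosen for illustration.\ndf_etf = pd.DataFrame.from_dict(etfs, orient='index', columns=['market', 'name'])\ndf_etf.head()",
"_____no_output_____"
],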
[
"#예제 2-7 csv 파일로 저장\n# -*- coding: utf-8 -*-\nimport pandas as pd\n\n#판다스 DataFrame() 함수로 데이터프레임 변환. df에 저장\ndata = {\n 'name':['Jerry', 'Riah', 'Paul'],\n 'algol' :[\"A\", \"A+\", \"B\"],\n 'basic': [\"C\", \"B\", \"B+\"],\n 'c++' :[\"B+\", \"C\", \"C+\"]\n}\ndf = pd.DataFrame(data)\ndf.set_index('name', inplace=True) #name 열을 인덱스로 지정\nprint(df)",
" algol basic c++\nname \nJerry A C B+\nRiah A+ B C\nPaul B B+ C+\n"
],
[
"#to_csv() 메소드를 사용하여 csv파일로 내보내기.\ndf.to_csv(\"./df_sample.csv\")",
"_____no_output_____"
],
[
"#to_json() 메소드를 사용하여 json 파일로 내보내기\ndf.to_json(\"./df_sample.json\")",
"_____no_output_____"
],
[
"#to_excel() 메소드를 사용하여 excel 파일로 내보내기\ndf.to_excel(\"./df_sample.xlsx\")",
"_____no_output_____"
],
[
"#예제 2-10 ExcelWriter() 활용 ExcelWriter 함수는 엑셀 워크북 객체를 생성\n#파일이라고 보면된다. \n#시트 1, 2 만들기\n\n# -*- coding: utf-8 -*-\nimport pandas as pd\n\n#판다스 DataFrame() 함수로 데이터프레임 변환. 변수 df1,df2에 저장\ndata1 = {\n 'name':['Jerry', 'Riah', 'Paul'],\n 'algol' :[\"A\", \"A+\", \"B\"],\n 'basic': [\"C\", \"B\", \"B+\"],\n 'c++' :[\"B+\", \"C\", \"C+\"]\n}\n\ndata2 = {\n 'c0':[1,2,3],\n 'c1':[4,5,6],\n 'c2':[7,8,9],\n 'c3':[10,11,12],\n 'c4':[13,14,15]\n}\n\ndf1 = pd.DataFrame(data1)\ndf1.set_index('name',inplace=True) # name 열을 인덱스로 지정\nprint(df1)",
" algol basic c++\nname \nJerry A C B+\nRiah A+ B C\nPaul B B+ C+\n"
],
[
"df2 = pd.DataFrame(data2)\ndf2.set_index('c0', inplace=True) #c0 열을 인덱스로 지정\nprint(df2)",
" c1 c2 c3 c4\nc0 \n1 4 7 10 13\n2 5 8 11 14\n3 6 9 12 15\n"
],
[
"#df1을 'sheet1'으로, df2를 'sheet2'로 저장(Excel 파일명은 \"df_excelwriter.xlsx\")\nwriter = pd.ExcelWriter(\"./df_excelwriter.xlsx\")\ndf1.to_excel(writer, sheet_name=\"sheet1\")\ndf2.to_excel(writer, sheet_name=\"sheet2\")\nwriter.save()",
"_____no_output_____"
],
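[
"# A minimal sketch: ExcelWriter also works as a context manager, which saves and closes the workbook automatically\n# (recent pandas versions deprecate calling writer.save() directly). The output file name here is chosen for illustration.\nwith pd.ExcelWriter('./df_excelwriter2.xlsx') as writer2:\n    df1.to_excel(writer2, sheet_name='sheet1')\n    df2.to_excel(writer2, sheet_name='sheet2')",
"_____no_output_____"
],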
[
"#예제 3-1 데이터 살펴보기\n# -*- coding: utf-8 -*-\nimport pandas as pd\n\n#read_csv() 함수로 df 생성\ndf = pd.read_csv('./auto-mpg.csv', header=None)\ndf.head()\n\n#열 이름 지정\ndf.columns = ['mpg','cylinders', 'displacement','horsepower','weight',\n 'acceleration', 'model year', 'origin', 'name']\n\n#데이터프레임 df 의 내용을 일부 확인\nprint(df.head())",
" mpg cylinders displacement horsepower weight acceleration model year \\\n0 18.0 8 307.0 130.0 3504.0 12.0 70 \n1 15.0 8 350.0 165.0 3693.0 11.5 70 \n2 18.0 8 318.0 150.0 3436.0 11.0 70 \n3 16.0 8 304.0 150.0 3433.0 12.0 70 \n4 17.0 8 302.0 140.0 3449.0 10.5 70 \n\n origin name \n0 1 chevrolet chevelle malibu \n1 1 buick skylark 320 \n2 1 plymouth satellite \n3 1 amc rebel sst \n4 1 ford torino \n"
],
[
"df.tail()",
"_____no_output_____"
],
[
"#df의 모양과 크기 확인: (행의 개수, 열의 개수)를 튜플로 반환\nprint(df.shape)",
"(398, 9)\n"
],
[
"#데이터프레임 df의 내용 확인\nprint(df.info())",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 398 entries, 0 to 397\nData columns (total 9 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 mpg 398 non-null float64\n 1 cylinders 398 non-null int64 \n 2 displacement 398 non-null float64\n 3 horsepower 398 non-null object \n 4 weight 398 non-null float64\n 5 acceleration 398 non-null float64\n 6 model year 398 non-null int64 \n 7 origin 398 non-null int64 \n 8 name 398 non-null object \ndtypes: float64(4), int64(3), object(2)\nmemory usage: 28.1+ KB\nNone\n"
],
[
"#데이터프레임 df의 자료형 확인\nprint(df.dtypes)",
"mpg float64\ncylinders int64\ndisplacement float64\nhorsepower object\nweight float64\nacceleration float64\nmodel year int64\norigin int64\nname object\ndtype: object\n"
],
[
"#시리즈 (mpg 열)의 자료형 확인\nprint(df.mpg.dtypes)",
"float64\n"
],
[
"#데이터프레임 df의 기술 통계정보 확인\nprint(df.describe())",
" mpg cylinders displacement weight acceleration \\\ncount 398.000000 398.000000 398.000000 398.000000 398.000000 \nmean 23.514573 5.454774 193.425879 2970.424623 15.568090 \nstd 7.815984 1.701004 104.269838 846.841774 2.757689 \nmin 9.000000 3.000000 68.000000 1613.000000 8.000000 \n25% 17.500000 4.000000 104.250000 2223.750000 13.825000 \n50% 23.000000 4.000000 148.500000 2803.500000 15.500000 \n75% 29.000000 8.000000 262.000000 3608.000000 17.175000 \nmax 46.600000 8.000000 455.000000 5140.000000 24.800000 \n\n model year origin \ncount 398.000000 398.000000 \nmean 76.010050 1.572864 \nstd 3.697627 0.802055 \nmin 70.000000 1.000000 \n25% 73.000000 1.000000 \n50% 76.000000 1.000000 \n75% 79.000000 2.000000 \nmax 82.000000 3.000000 \n"
],
[
"print(df.describe(include='all')) \n#산술 데이터가 아닌 정보까지 포함 include='all'",
" mpg cylinders displacement horsepower weight \\\ncount 398.000000 398.000000 398.000000 398 398.000000 \nunique NaN NaN NaN 94 NaN \ntop NaN NaN NaN 150.0 NaN \nfreq NaN NaN NaN 22 NaN \nmean 23.514573 5.454774 193.425879 NaN 2970.424623 \nstd 7.815984 1.701004 104.269838 NaN 846.841774 \nmin 9.000000 3.000000 68.000000 NaN 1613.000000 \n25% 17.500000 4.000000 104.250000 NaN 2223.750000 \n50% 23.000000 4.000000 148.500000 NaN 2803.500000 \n75% 29.000000 8.000000 262.000000 NaN 3608.000000 \nmax 46.600000 8.000000 455.000000 NaN 5140.000000 \n\n acceleration model year origin name \ncount 398.000000 398.000000 398.000000 398 \nunique NaN NaN NaN 305 \ntop NaN NaN NaN ford pinto \nfreq NaN NaN NaN 6 \nmean 15.568090 76.010050 1.572864 NaN \nstd 2.757689 3.697627 0.802055 NaN \nmin 8.000000 70.000000 1.000000 NaN \n25% 13.825000 73.000000 1.000000 NaN \n50% 15.500000 76.000000 1.000000 NaN \n75% 17.175000 79.000000 2.000000 NaN \nmax 24.800000 82.000000 3.000000 NaN \n"
],
[
"#데이터 개수확인\ndf",
"_____no_output_____"
],
[
"#데이터프레임 df의 각 열이 가지고 있는 원소 개수 확인\nprint(df.count())",
"mpg 398\ncylinders 398\ndisplacement 398\nhorsepower 398\nweight 398\nacceleration 398\nmodel year 398\norigin 398\nname 398\ndtype: int64\n"
],
[
"#df.count() 가 반환하는 객체 타입 출력\nprint(type(df.count()))",
"<class 'pandas.core.series.Series'>\n"
],
[
"#데이터 개수 확인\n#데이터프레임 df의 특정 열이 가지고 있는 고유값 확인\nunique_values = df['origin'].value_counts()\nprint(unique_values)\nprint()",
"1 249\n3 79\n2 70\nName: origin, dtype: int64\n\n"
],
[
"#평균값\nprint(df.mean())",
"mpg 23.514573\ncylinders 5.454774\ndisplacement 193.425879\nweight 2970.424623\nacceleration 15.568090\nmodel year 76.010050\norigin 1.572864\ndtype: float64\n"
],
[
"print(df['mpg'].mean())",
"23.514572864321615\n"
],
[
"print(df[['mpg','weight']].mean())",
"mpg 23.514573\nweight 2970.424623\ndtype: float64\n"
],
[
"#통계함수\n#중간값\nprint(df.median())",
"mpg 23.0\ncylinders 4.0\ndisplacement 148.5\nweight 2803.5\nacceleration 15.5\nmodel year 76.0\norigin 1.0\ndtype: float64\n"
],
[
"print(df['mpg'].median())",
"23.0\n"
],
[
"#최대값\nprint(df.max())",
"mpg 46.6\ncylinders 8\ndisplacement 455\nhorsepower ?\nweight 5140\nacceleration 24.8\nmodel year 82\norigin 3\nname vw rabbit custom\ndtype: object\n"
],
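[
"# A minimal sketch: 'horsepower' is stored as object because some rows hold the placeholder '?', which is why max() returns '?' above.\n# Replacing '?' with NaN and casting to float gives a numeric series without modifying df (hp is a hypothetical name).\nimport numpy as np\nhp = df['horsepower'].replace('?', np.nan).astype('float64')\nprint(hp.max())",
"_____no_output_____"
],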
[
"print(df['mpg'].max())",
"46.6\n"
],
[
"#최소값\nprint(df.min())",
"mpg 9\ncylinders 3\ndisplacement 68\nhorsepower 100.0\nweight 1613\nacceleration 8\nmodel year 70\norigin 1\nname amc ambassador brougham\ndtype: object\n"
],
[
"print(df['mpg'].min())",
"9.0\n"
],
[
"#표준편차\nprint(df.std())",
"mpg 7.815984\ncylinders 1.701004\ndisplacement 104.269838\nweight 846.841774\nacceleration 2.757689\nmodel year 3.697627\norigin 0.802055\ndtype: float64\n"
],
[
"print(df['mpg'].std())",
"7.815984312565782\n"
],
[
"#상관계수\nprint(df.corr())\n",
" mpg cylinders displacement weight acceleration \\\nmpg 1.000000 -0.775396 -0.804203 -0.831741 0.420289 \ncylinders -0.775396 1.000000 0.950721 0.896017 -0.505419 \ndisplacement -0.804203 0.950721 1.000000 0.932824 -0.543684 \nweight -0.831741 0.896017 0.932824 1.000000 -0.417457 \nacceleration 0.420289 -0.505419 -0.543684 -0.417457 1.000000 \nmodel year 0.579267 -0.348746 -0.370164 -0.306564 0.288137 \norigin 0.563450 -0.562543 -0.609409 -0.581024 0.205873 \n\n model year origin \nmpg 0.579267 0.563450 \ncylinders -0.348746 -0.562543 \ndisplacement -0.370164 -0.609409 \nweight -0.306564 -0.581024 \nacceleration 0.288137 0.205873 \nmodel year 1.000000 0.180662 \norigin 0.180662 1.000000 \n"
],
[
"print(df[['mpg', 'weight']].corr())",
" mpg weight\nmpg 1.000000 -0.831741\nweight -0.831741 1.000000\n"
],
[
"#예제 3-4 선 그래프 그리기 \n\n# -*- coding: utf-8 -*-\n\nimport pandas as pd\n\ndf = pd.read_excel('./남북한발전전력량.xlsx') #데이터프레임 변환\n\ndf_ns = df.iloc[[0,5],3:] #남한, 북한 발전량 합계 데이터만 추출\ndf_ns",
"_____no_output_____"
],
[
"df_ns.index=['South', 'North'] #행 인덱스 변경\ndf_ns.columns = df_ns.columns.map(int) #열 이름의 자료형을 정수형으로 변경\nprint(df_ns.head())",
" 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 ... 2007 \\\nSouth 1186 1310 1444 1650 1847 2055 2244 2153 2393 2664 ... 4031 \nNorth 263 247 221 231 230 213 193 170 186 194 ... 236 \n\n 2008 2009 2010 2011 2012 2013 2014 2015 2016 \nSouth 4224 4336 4747 4969 5096 5171 5220 5281 5404 \nNorth 255 235 237 211 215 221 216 190 239 \n\n[2 rows x 26 columns]\n"
],
[
"df_ns",
"_____no_output_____"
],
[
"#선 그래프 그리기\ndf_ns.plot()",
"_____no_output_____"
],
[
"#행 ,열 전치하여 다시 그리기\ntdf_ns = df_ns.T\nprint(tdf_ns.head())",
" South North\n1991 1186 263\n1992 1310 247\n1993 1444 221\n1994 1650 231\n1995 1847 230\n"
],
[
"tdf_ns.plot()",
"_____no_output_____"
],
[
"tdf_ns.plot(kind='bar')",
"_____no_output_____"
],
[
"tdf_ns.plot(kind='barh')",
"_____no_output_____"
],
[
"tdf_ns.plot(kind='hist')",
"_____no_output_____"
],
[
"# -*- coding: utf-8 -*-\nimport pandas as pd\n\n#read_csv() 함수로 df 생성\ndf = pd.read_csv('./auto-mpg.csv', header=None)\n\n\n#열 이름 지정\ndf.columns = ['mpg','cylinders', 'displacement','horsepower','weight',\n 'acceleration', 'model year', 'origin', 'name']\n\n#2개의 열을 선택하여 산저도 그리기\ndf.plot(x='weight', y='mpg', kind='scatter')",
"_____no_output_____"
],
[
"#열을 선택하여 박스 플롯 그리기\ndf[['mpg','cylinders']].plot(kind='box')",
"_____no_output_____"
],
[
"#예제 4-1 선 그래프\n\n# -*- coding: utf-8 -*-\n\n#라이브러리 불러오기\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n#Excel 데이터를 데이터프레임으로 변환\ndf = pd.read_excel('시도별 전출입 인구수.xlsx', header=0)\ndf.head()",
"_____no_output_____"
],
[
"#누락된값(NaN)을 앞 데이터로 채움(엑셀 양식 병합 부분)\ndf = df.fillna(method='ffill')\ndf.head()",
"_____no_output_____"
],
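[
"# A minimal sketch: recent pandas versions deprecate the method= argument of fillna(); .ffill() is the equivalent spelling.\ndf = df.ffill()",
"_____no_output_____"
],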
[
"#서울에서 다른지역으로 이동한 데이터만 추출하여 정리\nmask = (df['전출지별']=='서울특별시') & (df['전입지별'] != '서울특별시')\ndf_seoul = df[mask]\ndf_seoul = df_seoul.drop(['전출지별'],axis = 1)\ndf_seoul.rename({'전입지별':'전입지'}, axis=1,inplace=True)\ndf_seoul.set_index('전입지', inplace=True)",
"_____no_output_____"
],
[
"df_seoul.head()",
"_____no_output_____"
],
[
"#서울에서 경기도로 이동한 인구 데이터 값만 선택\nsr_one = df_seoul.loc['경기도']\nsr_one.tail()",
"_____no_output_____"
],
[
"#x, y 축 데이터를 plot 함수에 입력\nplt.plot(sr_one.index, sr_one.values)",
"_____no_output_____"
],
[
"plt.plot(sr_one)",
"_____no_output_____"
],
[
"#x, y축 데이터를 plot 함수에 입력\nplt.plot(sr_one.index, sr_one.values)\n\n#차트 제목 추가\nplt.title('서울-> 경기 인구 이동')\n\n#축 이름 추가\nplt.xlabel('기간')\nplt.ylabel('이동 인구수')\n\nplt.show() #변경사항 저장하고 그래프 출력",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 49436 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 50872 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 44221 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 44592 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 51064 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 44396 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 51060 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 46041 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 44036 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:238: RuntimeWarning: Glyph 49688 missing from current font.\n font.set_text(s, 0.0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 44592 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 44036 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 51060 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 46041 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 51064 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 44396 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 49688 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 49436 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: RuntimeWarning: Glyph 50872 missing from current font.\n font.set_text(s, 0, flags=flags)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\backends\\backend_agg.py:201: 
RuntimeWarning: Glyph 44221 missing from current font.\n font.set_text(s, 0, flags=flags)\n"
],
[
"#matplotlib 한글 폰트 오류 문제해결\nfrom matplotlib import font_manager, rc\nfont_path = \"./malgun.ttf\" #폰트 파일 위치\nfont_name = font_manager.FontProperties(fname=font_path).get_name()\nrc('font', family=font_name)",
"_____no_output_____"
],
[
"#x, y축 데이터를 plot 함수에 입력\n#plt.plot(sr_one.index, sr_one.values)\n\n#차트 제목 추가\n#plt.title('서울-> 경기 인구 이동')\n\n#축 이름 추가\n#plt.xlabel('기간')\n#plt.ylabel('이동 인구수')\n\n#그래프 x축 글씨 문제 해결\n#그림사이즈 지정(가로 14인치, 세로 5인치)\nplt.figure(figsize=(14,5))\n\n#x축 눈금 라벨 회전하기\nplt.xticks(rotation='vertical')\n\n#x,y축 데이터를 plot 함수에 입력\nplt.plot(sr_one.index, sr_one.values)\n\nplt.title('서울 -> 경기 인구 이동') #차트 제목\nplt.xlabel('기간') # x축 이름\nplt.ylabel('이동 인구수') # y축 이름\n\n\nplt.legend(labels=['서울-> 경기'], loc='best')\n\nplt.show() #변경사항 저장하고 그래프 출력",
"_____no_output_____"
],
[
"#예제 4-5 스타일 서식 지정 등\n\n#서울에서 경기도로 이동한 인구 데이터 값만 선택\nsr_one = df_seoul.loc['경기도']\n\n#스타일 서식 지정\nplt.style.use('ggplot')\n\n#그림 사이즈 지정\nplt.figure(figsize=(14,5))\n\n#x축 눈금 라벨 회전하기\nplt.xticks(size=10, rotation='vertical')\n\n#x,y축 데이터를 plot 함수에 입력\nplt.plot(sr_one.index, sr_one.values, marker='o', markersize=10)\n\nplt.title('서울 -> 경기 인구 이동', size=30) #차트제목\nplt.xlabel('기간', size=20) #x축 이름\nplt.ylabel('이동 인구수', size=20) #y축 이름\n\nplt.legend(labels=['서울 -> 경기'], loc='best', fontsize=15)\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b62d61cc3808e1ce9add1acadc8c03d738a814 | 15,133 | ipynb | Jupyter Notebook | BERT_pretrained_transformers.ipynb | ToshihikoSakai/bert-ja-transformers | 63ed606b3fa050dc52dbc26f75e585179380a248 | [
"MIT"
] | null | null | null | BERT_pretrained_transformers.ipynb | ToshihikoSakai/bert-ja-transformers | 63ed606b3fa050dc52dbc26f75e585179380a248 | [
"MIT"
] | null | null | null | BERT_pretrained_transformers.ipynb | ToshihikoSakai/bert-ja-transformers | 63ed606b3fa050dc52dbc26f75e585179380a248 | [
"MIT"
] | null | null | null | 35.440281 | 395 | 0.502346 | [
[
[
"# 各種パッケージのインストールとバージョン\n!pip install transformers\n!pip install tokenizers\n!pip install sentencepiece\n!pip list | grep torch\n!pip list | grep transformers\n!pip list | grep tokenizers\n!pip list | grep sentencepiece",
"_____no_output_____"
],
[
"# ディレクトリの指定\ndir = \"./\"\n\n",
"_____no_output_____"
],
[
"# 事前学習用コーパスの準備\n# 1行に1文章となるようなテキストを準備する\n\ndf_header = pd.read_csv('XXX.csv')\nprint(df_header)\n\n",
"_____no_output_____"
],
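[
"# A minimal sketch of producing the one-sentence-per-line corpus.txt used by the following cells.\n# The source column name 'text' is hypothetical; the real CSV layout is not shown here.\nwith open(dir + 'corpus/corpus.txt', 'w', encoding='utf-8') as f:\n    for sentence in df_header['text'].astype(str):\n        f.write(sentence.strip() + '\\n')",
"_____no_output_____"
],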
[
"# Tokenization\nfrom sentencepiece import SentencePieceTrainer\n\n# sentencepieceの学習\nSentencePieceTrainer.Train(\n '--input='+dir+'corpus/corpus.txt, --model_prefix='+dir+'model/sentencepiece --character_coverage=0.9995 --vocab_size=100'\n)\n\n# sentencepieceのパラメータ\n# https://github.com/google/sentencepiece#train-sentencepiece-model\n# training options\n# https://github.com/google/sentencepiece/blob/master/doc/options.md\n\n\n",
"_____no_output_____"
],
[
"# sentencepieceのモデルをTokenizerで読み込み\n\n# sentencepieceを使ったTokenizerは現時点では以下。\n# >All transformers models in the library that use SentencePiece use it \n# in combination with unigram. Examples of models using SentencePiece are ALBERT, XLNet, Marian, and T5.\n# https://huggingface.co/transformers/tokenizer_summary.html\n\nfrom transformers import AlbertTokenizer\n\n# ALBERTのトークナイザを定義\ntokenizer = AlbertTokenizer.from_pretrained(dir+'model/sentencepiece.model', keep_accents=True)\n\n# textをトークナイズ\ntext = \"吾輩は猫である。名前はまだ無い。\"\nprint(tokenizer.tokenize(text))",
"/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py:1641: FutureWarning: Calling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.\n FutureWarning,\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n"
],
[
"# BERTモデルのconfigを設定\nfrom transformers import BertConfig\nfrom transformers import BertForMaskedLM\n\n# BERTconfigを定義\nconfig = BertConfig(vocab_size=32003, num_hidden_layers=12, intermediate_size=768, num_attention_heads=12)\n\n# BERT MLMのインスタンスを生成\nmodel = BertForMaskedLM(config)\n\n# パラメータ数を表示\nprint('No of parameters: ', model.num_parameters())",
"No of parameters: 68158211\n"
],
[
"# 事前学習用のデータセットを準備\nfrom transformers import LineByLineTextDataset\nfrom transformers import DataCollatorForLanguageModeling\n\n# textを1行ずつ読み込んでトークンへ変換\ndataset = LineByLineTextDataset(\n tokenizer=tokenizer,\n file_path=dir + 'corpus/corpus.txt',\n block_size=256, # tokenizerのmax_length\n)\n\n# データセットからサンプルのリストを受け取り、それらをテンソルの辞書としてバッチに照合するための関数\ndata_collator = DataCollatorForLanguageModeling(\n tokenizer=tokenizer, \n mlm=True,\n mlm_probability= 0.15\n)\n\n",
"/usr/local/lib/python3.7/dist-packages/transformers/data/datasets/language_modeling.py:124: FutureWarning: This dataset will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py\n FutureWarning,\n"
],
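[
"# A minimal sketch of the replacement hinted at by the FutureWarning above: build the training data with the\n# Hugging Face datasets library instead of LineByLineTextDataset. Assumes the datasets package is installed;\n# the 'text' loader yields a 'text' column.\nfrom datasets import load_dataset\nraw_dataset = load_dataset('text', data_files={'train': dir + 'corpus/corpus.txt'})\ndef tokenize_function(examples):\n    return tokenizer(examples['text'], truncation=True, max_length=256)\ntokenized_dataset = raw_dataset['train'].map(tokenize_function, batched=True, remove_columns=['text'])",
"_____no_output_____"
],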
[
"# 事前学習を行う\nfrom transformers import TrainingArguments\nfrom transformers import Trainer\n\n# 事前学習のパラメータを定義\ntraining_args = TrainingArguments(\n output_dir= drive_dir + 'outputBERT/',\n overwrite_output_dir=True,\n num_train_epochs=10,\n per_device_train_batch_size=32,\n save_steps=10000,\n save_total_limit=2,\n prediction_loss_only=True\n)\n\n# trainerインスタンスの生成\ntrainer = Trainer(\n model=model,\n args=training_args,\n data_collator=data_collator,\n train_dataset=dataset\n)\n\n# 学習\ntrainer.train()\n\n# 学習したモデルの保存\ntrainer.save_model(dir + 'outputBERT/')",
"PyTorch: setting up devices\nThe default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).\n***** Running training *****\n Num examples = 5\n Num Epochs = 10\n Instantaneous batch size per device = 32\n Total train batch size (w. parallel, distributed & accumulation) = 32\n Gradient Accumulation steps = 1\n Total optimization steps = 10\n"
],
[
"# 言語モデルの確認\nfrom transformers import pipeline\n\n# tokenizerとmodel\ntokenizer = AlbertTokenizer.from_pretrained(drive_dir+'model/sentencepiece.model', keep_accents=True)\nmodel = BertForMaskedLM.from_pretrained(drive_dir + 'outputBERT')\n\nfill_mask = pipeline(\n \"fill-mask\",\n model=model,\n tokenizer=tokenizer\n)\n\nMASK_TOKEN = tokenizer.mask_token\n\n# コーパスに応じた文章から穴埋めをとく\n\ntext = \"XXX{}XXX\".format(MASK_TOKEN)\nfill_mask(text)",
"/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py:1641: FutureWarning: Calling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.\n FutureWarning,\nloading file /content/drive/MyDrive/BERT-pretrained-transformers/model/sentencepiece.model\nAdding [CLS] to the vocabulary\nAdding [SEP] to the vocabulary\nAdding <pad> to the vocabulary\nAdding [MASK] to the vocabulary\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\nloading configuration file /content/drive/MyDrive/BERT-pretrained-transformers/outputBERT/config.json\nModel config BertConfig {\n \"architectures\": [\n \"BertForMaskedLM\"\n ],\n \"attention_probs_dropout_prob\": 0.1,\n \"gradient_checkpointing\": false,\n \"hidden_act\": \"gelu\",\n \"hidden_dropout_prob\": 0.1,\n \"hidden_size\": 768,\n \"initializer_range\": 0.02,\n \"intermediate_size\": 768,\n \"layer_norm_eps\": 1e-12,\n \"max_position_embeddings\": 512,\n \"model_type\": \"bert\",\n \"num_attention_heads\": 12,\n \"num_hidden_layers\": 12,\n \"pad_token_id\": 0,\n \"position_embedding_type\": \"absolute\",\n \"torch_dtype\": \"float32\",\n \"transformers_version\": \"4.9.1\",\n \"type_vocab_size\": 2,\n \"use_cache\": true,\n \"vocab_size\": 32003\n}\n\nloading weights file /content/drive/MyDrive/BERT-pretrained-transformers/outputBERT/pytorch_model.bin\nAll model checkpoint weights were used when initializing BertForMaskedLM.\n\nAll the weights of BertForMaskedLM were initialized from the model checkpoint at /content/drive/MyDrive/BERT-pretrained-transformers/outputBERT.\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use BertForMaskedLM for predictions without further training.\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b6353aa384c373279578d5119dec9994ea898e | 823,043 | ipynb | Jupyter Notebook | docs/notebooks/GLM-robust-with-outlier-detection.ipynb | ds7788/hello-world | f16466f4d4de8cd412771415a361738ba020e568 | [
"Apache-2.0"
] | 1 | 2018-08-16T22:03:21.000Z | 2018-08-16T22:03:21.000Z | docs/notebooks/GLM-robust-with-outlier-detection.ipynb | ds7788/hello-world | f16466f4d4de8cd412771415a361738ba020e568 | [
"Apache-2.0"
] | null | null | null | docs/notebooks/GLM-robust-with-outlier-detection.ipynb | ds7788/hello-world | f16466f4d4de8cd412771415a361738ba020e568 | [
"Apache-2.0"
] | null | null | null | 83.864174 | 413 | 0.826451 | [
[
[
"##### PyMC3 Examples\n\n# GLM Robust Regression with Outlier Detection\n\n**A minimal reproducable example of Robust Regression with Outlier Detection using Hogg 2010 Signal vs Noise method.**\n\n+ This is a complementary approach to the Student-T robust regression as illustrated in Thomas Wiecki's notebook in the [PyMC3 documentation](http://pymc-devs.github.io/pymc3/GLM-robust/), that approach is also compared here.\n+ This model returns a robust estimate of linear coefficients and an indication of which datapoints (if any) are outliers.\n+ The likelihood evaluation is essentially a copy of eqn 17 in \"Data analysis recipes: Fitting a model to data\" - [Hogg 2010](http://arxiv.org/abs/1008.4686).\n+ The model is adapted specifically from Jake Vanderplas' [implementation](http://www.astroml.org/book_figures/chapter8/fig_outlier_rejection.html) (3rd model tested).\n+ The dataset is tiny and hardcoded into this Notebook. It contains errors in both the x and y, but we will deal here with only errors in y.\n\n\n**Note:**\n\n+ Python 3.4 project using latest available [PyMC3](https://github.com/pymc-devs/pymc3)\n+ Developed using [ContinuumIO Anaconda](https://www.continuum.io/downloads) distribution on a Macbook Pro 3GHz i7, 16GB RAM, OSX 10.10.5.\n+ During development I've found that 3 data points are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is slightly unstable between runs: the posterior surface appears to have a small number of solutions with similar probability. \n+ Finally, if runs become unstable or Theano throws weird errors, try clearing the cache `$> theano-cache clear` and rerunning the notebook.\n\n\n**Package Requirements (shown as a conda-env YAML):**\n```\n$> less conda_env_pymc3_examples.yml\n\nname: pymc3_examples\n channels:\n - defaults\n dependencies:\n - python=3.4\n - ipython\n - ipython-notebook\n - ipython-qtconsole\n - numpy\n - scipy\n - matplotlib\n - pandas\n - seaborn\n - patsy \n - pip\n\n$> conda env create --file conda_env_pymc3_examples.yml\n\n$> source activate pymc3_examples\n\n$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3\n\n```",
"_____no_output_____"
],
[
"# Setup",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n%qtconsole --colors=linux\n\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom scipy import optimize\nimport pymc3 as pm\nimport theano as thno\nimport theano.tensor as T \n\n# configure some basic options\nsns.set(style=\"darkgrid\", palette=\"muted\")\npd.set_option('display.notebook_repr_html', True)\nplt.rcParams['figure.figsize'] = 12, 8\nnp.random.seed(0)",
"_____no_output_____"
]
],
[
[
"## Load and Prepare Data",
"_____no_output_____"
],
[
"We'll use the Hogg 2010 data available at https://github.com/astroML/astroML/blob/master/astroML/datasets/hogg2010test.py\n\nIt's a very small dataset so for convenience, it's hardcoded below",
"_____no_output_____"
]
],
[
[
"#### cut & pasted directly from the fetch_hogg2010test() function\n## identical to the original dataset as hardcoded in the Hogg 2010 paper\n\ndfhogg = pd.DataFrame(np.array([[1, 201, 592, 61, 9, -0.84],\n [2, 244, 401, 25, 4, 0.31],\n [3, 47, 583, 38, 11, 0.64],\n [4, 287, 402, 15, 7, -0.27],\n [5, 203, 495, 21, 5, -0.33],\n [6, 58, 173, 15, 9, 0.67],\n [7, 210, 479, 27, 4, -0.02],\n [8, 202, 504, 14, 4, -0.05],\n [9, 198, 510, 30, 11, -0.84],\n [10, 158, 416, 16, 7, -0.69],\n [11, 165, 393, 14, 5, 0.30],\n [12, 201, 442, 25, 5, -0.46],\n [13, 157, 317, 52, 5, -0.03],\n [14, 131, 311, 16, 6, 0.50],\n [15, 166, 400, 34, 6, 0.73],\n [16, 160, 337, 31, 5, -0.52],\n [17, 186, 423, 42, 9, 0.90],\n [18, 125, 334, 26, 8, 0.40],\n [19, 218, 533, 16, 6, -0.78],\n [20, 146, 344, 22, 5, -0.56]]),\n columns=['id','x','y','sigma_y','sigma_x','rho_xy'])\n\n\n## for convenience zero-base the 'id' and use as index\ndfhogg['id'] = dfhogg['id'] - 1\ndfhogg.set_index('id', inplace=True)\n\n## standardize (mean center and divide by 1 sd)\ndfhoggs = (dfhogg[['x','y']] - dfhogg[['x','y']].mean(0)) / dfhogg[['x','y']].std(0)\ndfhoggs['sigma_y'] = dfhogg['sigma_y'] / dfhogg['y'].std(0)\ndfhoggs['sigma_x'] = dfhogg['sigma_x'] / dfhogg['x'].std(0)\n\n## create xlims ylims for plotting\nxlims = (dfhoggs['x'].min() - np.ptp(dfhoggs['x'])/5\n ,dfhoggs['x'].max() + np.ptp(dfhoggs['x'])/5)\nylims = (dfhoggs['y'].min() - np.ptp(dfhoggs['y'])/5\n ,dfhoggs['y'].max() + np.ptp(dfhoggs['y'])/5)\n\n## scatterplot the standardized data\ng = sns.FacetGrid(dfhoggs, size=8)\n_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker=\"o\", ls='')\n_ = g.axes[0][0].set_ylim(ylims)\n_ = g.axes[0][0].set_xlim(xlims)\n\nplt.subplots_adjust(top=0.92)\n_ = g.fig.suptitle('Scatterplot of Hogg 2010 dataset after standardization', fontsize=16)",
"_____no_output_____"
]
],
[
[
"**Observe**: \n\n+ Even judging just by eye, you can see these datapoints mostly fall on / around a straight line with positive gradient\n+ It looks like a few of the datapoints may be outliers from such a line",
"_____no_output_____"
],
[
"---\n\n---",
"_____no_output_____"
],
[
"# Create Conventional OLS Model",
"_____no_output_____"
],
[
"The *linear model* is really simple and conventional:\n\n$$\\bf{y} = \\beta^{T} \\bf{X} + \\bf{\\sigma}$$\n\nwhere: \n\n$\\beta$ = coefs = $\\{1, \\beta_{j \\in X_{j}}\\}$ \n$\\sigma$ = the measured error in $y$ in the dataset `sigma_y`",
"_____no_output_____"
],
[
"##### Define model\n\n**NOTE:**\n+ We're using a simple linear OLS model with Normally distributed priors so that it behaves like a ridge regression",
"_____no_output_____"
]
],
[
[
"with pm.Model() as mdl_ols:\n \n ## Define weakly informative Normal priors to give Ridge regression\n b0 = pm.Normal('b0_intercept', mu=0, sd=100)\n b1 = pm.Normal('b1_slope', mu=0, sd=100)\n \n ## Define linear model\n yest = b0 + b1 * dfhoggs['x']\n \n ## Use y error from dataset, convert into theano variable\n sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],\n dtype=thno.config.floatX), name='sigma_y')\n\n ## Define Normal likelihood\n likelihood = pm.Normal('likelihood', mu=yest, sd=sigma_y, observed=dfhoggs['y'])\n",
"_____no_output_____"
]
],
[
[
"##### Sample",
"_____no_output_____"
]
],
[
[
"with mdl_ols:\n\n ## find MAP using Powell, seems to be more robust\n start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)\n\n ## take samples\n traces_ols = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True)",
"Optimization terminated successfully.\n Current function value: 145.777745\n Iterations: 3\n Function evaluations: 118\n [-----------------100%-----------------] 2000 of 2000 complete in 0.9 sec"
]
],
[
[
"##### View Traces\n\n**NOTE**: I'll 'burn' the traces to only retain the final 1000 samples",
"_____no_output_____"
]
],
[
[
"_ = pm.traceplot(traces_ols[-1000:], figsize=(12,len(traces_ols.varnames)*1.5),\n lines={k: v['mean'] for k, v in pm.df_summary(traces_ols[-1000:]).iterrows()})",
"_____no_output_____"
]
],
[
[
"**NOTE:** We'll illustrate this OLS fit and compare to the datapoints in the final plot",
"_____no_output_____"
],
[
"---\n\n---",
"_____no_output_____"
],
[
"# Create Robust Model: Student-T Method",
"_____no_output_____"
],
[
"I've added this brief section in order to directly compare the Student-T based method exampled in Thomas Wiecki's notebook in the [PyMC3 documentation](http://pymc-devs.github.io/pymc3/GLM-robust/)\n\nInstead of using a Normal distribution for the likelihood, we use a Student-T, which has fatter tails. In theory this allows outliers to have a smaller mean square error in the likelihood, and thus have less influence on the regression estimation. This method does not produce inlier / outlier flags but is simpler and faster to run than the Signal Vs Noise model below, so a comparison seems worthwhile.\n\n**Note:** we'll constrain the Student-T 'degrees of freedom' parameter `nu` to be an integer, but otherwise leave it as just another stochastic to be inferred: no need for prior knowledge.",
"_____no_output_____"
],
[
"##### Define Model",
"_____no_output_____"
]
],
[
[
"with pm.Model() as mdl_studentt:\n \n ## Define weakly informative Normal priors to give Ridge regression\n b0 = pm.Normal('b0_intercept', mu=0, sd=100)\n b1 = pm.Normal('b1_slope', mu=0, sd=100)\n \n ## Define linear model\n yest = b0 + b1 * dfhoggs['x']\n \n ## Use y error from dataset, convert into theano variable\n sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],\n dtype=thno.config.floatX), name='sigma_y')\n \n ## define prior for Student T degrees of freedom\n nu = pm.DiscreteUniform('nu', lower=1, upper=100)\n\n ## Define Student T likelihood\n likelihood = pm.StudentT('likelihood', mu=yest, sd=sigma_y, nu=nu\n ,observed=dfhoggs['y'])\n",
"_____no_output_____"
]
],
[
[
"##### Sample",
"_____no_output_____"
]
],
[
[
"with mdl_studentt:\n\n ## find MAP using Powell, seems to be more robust\n start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)\n\n ## two-step sampling to allow Metropolis for nu (which is discrete)\n step1 = pm.NUTS([b0, b1])\n step2 = pm.Metropolis([nu])\n \n ## take samples\n traces_studentt = pm.sample(2000, start=start_MAP, step=[step1, step2], progressbar=True)",
"Optimization terminated successfully.\n Current function value: 107.488021\n Iterations: 3\n Function evaluations: 77\n [-----------------100%-----------------] 2000 of 2000 complete in 1.0 sec"
]
],
[
[
"##### View Traces",
"_____no_output_____"
]
],
[
[
"_ = pm.traceplot(traces_studentt[-1000:]\n ,figsize=(12,len(traces_studentt.varnames)*1.5)\n ,lines={k: v['mean'] for k, v in pm.df_summary(traces_studentt[-1000:]).iterrows()})",
"_____no_output_____"
]
],
[
[
"**Observe:**\n\n+ Both parameters `b0` and `b1` show quite a skew to the right, possibly this is the action of a few samples regressing closer to the OLS estimate which is towards the left\n+ The `nu` parameter seems very happy to stick at `nu = 1`, indicating that a fat-tailed Student-T likelihood has a better fit than a thin-tailed (Normal-like) Student-T likelihood.\n+ The inference sampling also ran very quickly, almost as quickly as the conventional OLS\n\n\n**NOTE:** We'll illustrate this Student-T fit and compare to the datapoints in the final plot",
"_____no_output_____"
],
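[
"A quick, purely illustrative check of the claim above: tallying the sampled values of `nu` over the retained draws shows how strongly it prefers `nu = 1` (exact counts will differ between runs).",
"_____no_output_____"
],
[
"## tally the sampled values of nu over the final 1000 samples (quick illustrative check)\npd.Series(traces_studentt['nu', -1000:]).value_counts().head()",
"_____no_output_____"
],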
[
"---\n\n---",
"_____no_output_____"
],
[
"# Create Robust Model with Outliers: Hogg Method",
"_____no_output_____"
],
[
"Please read the paper (Hogg 2010) and Jake Vanderplas' code for more complete information about the modelling technique.\n\nThe general idea is to create a 'mixture' model whereby datapoints can be described by either the linear model (inliers) or a modified linear model with different mean and larger variance (outliers).\n\n\nThe likelihood is evaluated over a mixture of two likelihoods, one for 'inliers', one for 'outliers'. A Bernouilli distribution is used to randomly assign datapoints in N to either the inlier or outlier groups, and we sample the model as usual to infer robust model parameters and inlier / outlier flags:\n\n$$\n\\mathcal{logL} = \\sum_{i}^{i=N} log \\left[ \\frac{(1 - B_{i})}{\\sqrt{2 \\pi \\sigma_{in}^{2}}} exp \\left( - \\frac{(x_{i} - \\mu_{in})^{2}}{2\\sigma_{in}^{2}} \\right) \\right] + \\sum_{i}^{i=N} log \\left[ \\frac{B_{i}}{\\sqrt{2 \\pi (\\sigma_{in}^{2} + \\sigma_{out}^{2})}} exp \\left( - \\frac{(x_{i}- \\mu_{out})^{2}}{2(\\sigma_{in}^{2} + \\sigma_{out}^{2})} \\right) \\right]\n$$\n\nwhere: \n$\\bf{B}$ is Bernoulli-distibuted $B_{i} \\in [0_{(inlier)},1_{(outlier)}]$\n\n",
"_____no_output_____"
],
[
"##### Define model",
"_____no_output_____"
]
],
[
[
"def logp_signoise(yobs, is_outlier, yest_in, sigma_y_in, yest_out, sigma_y_out):\n '''\n Define custom loglikelihood for inliers vs outliers. \n NOTE: in this particular case we don't need to use theano's @as_op \n decorator because (as stated by Twiecki in conversation) that's only \n required if the likelihood cannot be expressed as a theano expression.\n We also now get the gradient computation for free.\n ''' \n \n # likelihood for inliers\n pdfs_in = T.exp(-(yobs - yest_in + 1e-4)**2 / (2 * sigma_y_in**2)) \n pdfs_in /= T.sqrt(2 * np.pi * sigma_y_in**2)\n logL_in = T.sum(T.log(pdfs_in) * (1 - is_outlier))\n\n # likelihood for outliers\n pdfs_out = T.exp(-(yobs - yest_out + 1e-4)**2 / (2 * (sigma_y_in**2 + sigma_y_out**2))) \n pdfs_out /= T.sqrt(2 * np.pi * (sigma_y_in**2 + sigma_y_out**2))\n logL_out = T.sum(T.log(pdfs_out) * is_outlier)\n\n return logL_in + logL_out\n",
"_____no_output_____"
],
[
"with pm.Model() as mdl_signoise:\n \n ## Define weakly informative Normal priors to give Ridge regression\n b0 = pm.Normal('b0_intercept', mu=0, sd=100)\n b1 = pm.Normal('b1_slope', mu=0, sd=100)\n \n ## Define linear model\n yest_in = b0 + b1 * dfhoggs['x']\n\n ## Define weakly informative priors for the mean and variance of outliers\n yest_out = pm.Normal('yest_out', mu=0, sd=100)\n sigma_y_out = pm.HalfNormal('sigma_y_out', sd=100)\n\n ## Define Bernoulli inlier / outlier flags according to a hyperprior \n ## fraction of outliers, itself constrained to [0,.5] for symmetry\n frac_outliers = pm.Uniform('frac_outliers', lower=0., upper=.5)\n is_outlier = pm.Bernoulli('is_outlier', p=frac_outliers, shape=dfhoggs.shape[0]) \n \n ## Extract observed y and sigma_y from dataset, encode as theano objects\n yobs = thno.shared(np.asarray(dfhoggs['y'], dtype=thno.config.floatX), name='yobs')\n sigma_y_in = thno.shared(np.asarray(dfhoggs['sigma_y']\n , dtype=thno.config.floatX), name='sigma_y_in')\n \n ## Use custom likelihood using DensityDist\n likelihood = pm.DensityDist('likelihood', logp_signoise,\n observed={'yobs':yobs, 'is_outlier':is_outlier,\n 'yest_in':yest_in, 'sigma_y_in':sigma_y_in,\n 'yest_out':yest_out, 'sigma_y_out':sigma_y_out})\n",
"_____no_output_____"
]
],
[
[
"##### Sample",
"_____no_output_____"
]
],
[
[
"with mdl_signoise:\n\n ## two-step sampling to create Bernoulli inlier/outlier flags\n step1 = pm.NUTS([frac_outliers, yest_out, sigma_y_out, b0, b1])\n step2 = pm.BinaryMetropolis([is_outlier], tune_interval=100)\n\n ## find MAP using Powell, seems to be more robust\n start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)\n\n ## take samples\n traces_signoise = pm.sample(2000, start=start_MAP, step=[step1,step2], progressbar=True)",
"Optimization terminated successfully.\n Current function value: 155.449990\n Iterations: 3\n Function evaluations: 213\n [-----------------100%-----------------] 2000 of 2000 complete in 169.1 sec"
]
],
[
[
"##### View Traces",
"_____no_output_____"
]
],
[
[
"_ = pm.traceplot(traces_signoise[-1000:], figsize=(12,len(traces_signoise.varnames)*1.5),\n lines={k: v['mean'] for k, v in pm.df_summary(traces_signoise[-1000:]).iterrows()})",
"_____no_output_____"
]
],
[
[
"**NOTE:**\n\n+ During development I've found that 3 datapoints id=[1,2,3] are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is unstable between runs: the posterior surface appears to have a small number of solutions with very similar probability.\n+ The NUTS sampler seems to work okay, and indeed it's a nice opportunity to demonstrate a custom likelihood which is possible to express as a theano function (thus allowing a gradient-based sampler like NUTS). However, with a more complicated dataset, I would spend time understanding this instability and potentially prefer using more samples under Metropolis-Hastings.",
"_____no_output_____"
],
[
"---\n\n---",
"_____no_output_____"
],
[
"# Declare Outliers and Compare Plots",
"_____no_output_____"
],
[
"##### View ranges for inliers / outlier predictions",
"_____no_output_____"
],
[
"At each step of the traces, each datapoint may be either an inlier or outlier. We hope that the datapoints spend an unequal time being one state or the other, so let's take a look at the simple count of states for each of the 20 datapoints.",
"_____no_output_____"
]
],
[
[
"outlier_melt = pd.melt(pd.DataFrame(traces_signoise['is_outlier', -1000:],\n columns=['[{}]'.format(int(d)) for d in dfhoggs.index]),\n var_name='datapoint_id', value_name='is_outlier')\nax0 = sns.pointplot(y='datapoint_id', x='is_outlier', data=outlier_melt,\n kind='point', join=False, ci=None, size=4, aspect=2)\n\n_ = ax0.vlines([0,1], 0, 19, ['b','r'], '--')\n\n_ = ax0.set_xlim((-0.1,1.1))\n_ = ax0.set_xticks(np.arange(0, 1.1, 0.1))\n_ = ax0.set_xticklabels(['{:.0%}'.format(t) for t in np.arange(0,1.1,0.1)])\n\n_ = ax0.yaxis.grid(True, linestyle='-', which='major', color='w', alpha=0.4)\n_ = ax0.set_title('Prop. of the trace where datapoint is an outlier')\n_ = ax0.set_xlabel('Prop. of the trace where is_outlier == 1')",
"_____no_output_____"
]
],
[
[
"**Observe**:\n\n+ The plot above shows the number of samples in the traces in which each datapoint is marked as an outlier, expressed as a percentage.\n+ In particular, 3 points [1, 2, 3] spend >=95% of their time as outliers\n+ Contrastingly, points at the other end of the plot close to 0% are our strongest inliers.\n+ For comparison, the mean posterior value of `frac_outliers` is ~0.35, corresponding to roughly 7 of the 20 datapoints. You can see these 7 datapoints in the plot above, all those with a value >50% or thereabouts.\n+ However, only 3 of these points are outliers >=95% of the time. \n+ See note above regarding instability between runs.\n\nThe 95% cutoff we choose is subjective and arbitrary, but I prefer it for now, so let's declare these 3 to be outliers and see how it looks compared to Jake Vanderplas' outliers, which were declared in a slightly different way as points with means above 0.68.",
"_____no_output_____"
],
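[
"As a quick illustrative check of the ~0.35 figure quoted above, we can take the posterior mean of `frac_outliers` directly from the trace (the exact value varies slightly between runs).",
"_____no_output_____"
],
[
"## posterior mean fraction of outliers over the final 1000 samples (quick illustrative check)\ntraces_signoise['frac_outliers', -1000:].mean()",
"_____no_output_____"
],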
[
"##### Declare outliers\n\n**Note:**\n+ I will declare outliers to be datapoints that have value == 1 at the 5-percentile cutoff, i.e. in the percentiles from 5 up to 100, their values are 1. \n+ Try for yourself altering cutoff to larger values, which leads to an objective ranking of outlier-hood.",
"_____no_output_____"
]
],
[
[
"cutoff = 5\ndfhoggs['outlier'] = np.percentile(traces_signoise[-1000:]['is_outlier'],cutoff, axis=0)\ndfhoggs['outlier'].value_counts()",
"_____no_output_____"
]
],
[
[
"## Posterior Prediction Plots for OLS vs StudentT vs SignalNoise",
"_____no_output_____"
]
],
[
[
"g = sns.FacetGrid(dfhoggs, size=8, hue='outlier', hue_order=[True,False],\n palette='Set1', legend_out=False)\n\nlm = lambda x, samp: samp['b0_intercept'] + samp['b1_slope'] * x\n\npm.glm.plot_posterior_predictive(traces_ols[-1000:],\n eval=np.linspace(-3, 3, 10), lm=lm, samples=200, color='#22CC00', alpha=.2)\n\npm.glm.plot_posterior_predictive(traces_studentt[-1000:], lm=lm,\n eval=np.linspace(-3, 3, 10), samples=200, color='#FFA500', alpha=.5)\n\npm.glm.plot_posterior_predictive(traces_signoise[-1000:], lm=lm,\n eval=np.linspace(-3, 3, 10), samples=200, color='#357EC7', alpha=.3)\n\n_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker=\"o\", ls='').add_legend()\n\n_ = g.axes[0][0].annotate('OLS Fit: Green\\nStudent-T Fit: Orange\\nSignal Vs Noise Fit: Blue',\n size='x-large', xy=(1,0), xycoords='axes fraction',\n xytext=(-160,10), textcoords='offset points')\n_ = g.axes[0][0].set_ylim(ylims)\n_ = g.axes[0][0].set_xlim(xlims)",
"_____no_output_____"
]
],
[
[
"**Observe**:\n\n+ The posterior preditive fit for:\n + the **OLS model** is shown in **Green** and as expected, it doesn't appear to fit the majority of our datapoints very well, skewed by outliers\n + the **Robust Student-T model** is shown in **Orange** and does appear to fit the 'main axis' of datapoints quite well, ignoring outliers\n + the **Robust Signal vs Noise model** is shown in **Blue** and also appears to fit the 'main axis' of datapoints rather well, ignoring outliers.\n \n \n+ We see that the **Robust Signal vs Noise model** also yields specific estimates of _which_ datapoints are outliers:\n + 17 'inlier' datapoints, in **Blue** and\n + 3 'outlier' datapoints shown in **Red**.\n + From a simple visual inspection, the classification seems fair, and agrees with Jake Vanderplas' findings.\n \n \n+ Overall, it seems that:\n + the **Signal vs Noise model** behaves as promised, yielding a robust regression estimate and explicit labelling of inliers / outliers, but\n + the **Signal vs Noise model** is quite complex and whilst the regression seems robust and stable, the actual inlier / outlier labelling seems slightly unstable\n + if you simply want a robust regression without inlier / outlier labelling, the **Student-T model** may be a good compromise, offering a simple model, quick sampling, and a very similar estimate.",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"Example originally contributed by Jonathan Sedar 2015-12-21 [github.com/jonsedar](https://github.com/jonsedar)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
e7b636491df6d3177123a9accd070c7b00bcdf20 | 403,609 | ipynb | Jupyter Notebook | datascience/plots_ds_python_pandas_1.ipynb | futureseadev/hgwxx7 | 282b370afc7d9c277e6c1f5b31282f14f9236f7b | [
"MIT"
] | 6 | 2018-06-21T09:44:36.000Z | 2021-10-01T18:37:41.000Z | datascience/plots_ds_python_pandas_1.ipynb | futureseadev/hgwxx7 | 282b370afc7d9c277e6c1f5b31282f14f9236f7b | [
"MIT"
] | 15 | 2020-01-28T22:56:15.000Z | 2022-03-11T23:55:52.000Z | datascience/plots_ds_python_pandas_1.ipynb | praveentn/hgwxx7 | 282b370afc7d9c277e6c1f5b31282f14f9236f7b | [
"MIT"
] | 2 | 2018-06-25T16:40:20.000Z | 2021-10-01T18:37:42.000Z | 165.957648 | 164,272 | 0.82186 | [
[
[
"# Sample plots",
"_____no_output_____"
]
],
[
[
"# load libraries\nimport os\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"_____no_output_____"
],
[
"os.listdir()",
"_____no_output_____"
]
],
[
[
"### sample plot 1",
"_____no_output_____"
]
],
[
[
"# load csv\ntrain = pd.read_csv('restaurant-and-market-health-inspections.csv')",
"_____no_output_____"
],
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 58872 entries, 0 to 58871\nData columns (total 20 columns):\nserial_number 58872 non-null object\nactivity_date 58872 non-null object\nfacility_name 58872 non-null object\nscore 58872 non-null int64\ngrade 58872 non-null object\nservice_code 58872 non-null int64\nservice_description 58872 non-null object\nemployee_id 58872 non-null object\nfacility_address 58872 non-null object\nfacility_city 58872 non-null object\nfacility_id 58872 non-null object\nfacility_state 58872 non-null object\nfacility_zip 58872 non-null object\nowner_id 58872 non-null object\nowner_name 58872 non-null object\npe_description 58872 non-null object\nprogram_element_pe 58872 non-null int64\nprogram_name 58740 non-null object\nprogram_status 58872 non-null object\nrecord_id 58872 non-null object\ndtypes: int64(3), object(17)\nmemory usage: 5.2+ MB\n"
],
[
"train.head()",
"_____no_output_____"
],
[
"train['grade'].unique()",
"_____no_output_____"
],
[
"train.select_dtypes('object')",
"_____no_output_____"
],
[
"# plot the count of Unique Values in integer Columns\ntrain.select_dtypes(np.int64).nunique().value_counts().sort_index().plot.bar(color = 'red', \n figsize = (9, 6), edgecolor = 'c', linewidth = 2)\nplt.xlabel('Number of Unique Values')\nplt.ylabel('Count')\nplt.title('Count of Unique Values in Integer Columns')",
"_____no_output_____"
]
],
[
[
"# sample 2",
"_____no_output_____"
]
],
[
[
"# plotting different categories\nfrom collections import OrderedDict\n\nplt.figure(figsize = (20, 16))\nplt.style.use('fivethirtyeight')\n\n# Color mapping\ncolors = OrderedDict({'A': 'red', 'B': 'orange', 'C': 'blue', ' ': 'green'})\nmapping = OrderedDict({'A': 'extreme', 'B': 'moderate', 'C': 'vulnerable', ' ': 'non vulnerable'})\n\n# Iterate through the float columns\nfor i, col in enumerate(train.select_dtypes('int64')):\n ax = plt.subplot(4, 2, i + 1)\n # Iterate through the poverty levels\n for level, color in colors.items():\n # Plot each poverty level as a separate line\n sns.kdeplot(train.loc[train['grade'] == level, col].dropna(), \n ax = ax, color = color, label = mapping[level])\n \n plt.title(f'{col.capitalize()} Distribution'); plt.xlabel(f'{col}')\n plt.ylabel('Density')\n\nplt.subplots_adjust(top = 2)",
"c:\\py37\\lib\\site-packages\\scipy\\stats\\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\nc:\\py37\\lib\\site-packages\\numpy\\core\\_methods.py:140: RuntimeWarning: Degrees of freedom <= 0 for slice\n keepdims=keepdims)\nc:\\py37\\lib\\site-packages\\numpy\\core\\_methods.py:132: RuntimeWarning: invalid value encountered in double_scalars\n ret = ret.dtype.type(ret / rcount)\nc:\\py37\\lib\\site-packages\\statsmodels\\nonparametric\\bandwidths.py:20: RuntimeWarning: invalid value encountered in minimum\n return np.minimum(np.std(X, axis=0, ddof=1), IQR)\nc:\\py37\\lib\\site-packages\\numpy\\core\\fromnumeric.py:83: RuntimeWarning: invalid value encountered in reduce\n return ufunc.reduce(obj, axis, dtype, out, **passkwargs)\n"
]
],
[
[
"## Plot Categoricals",
"_____no_output_____"
]
],
[
[
"# plot two categorical variables\ndef plot_categoricals(x, y, data, annotate=False):\n \"\"\"Plot counts of two categoricals.\n Size is raw count for each grouping.\n Percentages are for a given value of y.\"\"\"\n \n # Raw counts \n raw_counts = pd.DataFrame(data.groupby(y)[x].value_counts(normalize = False))\n raw_counts = raw_counts.rename(columns = {x: 'raw_count'})\n \n # Calculate counts for each group of x and y\n counts = pd.DataFrame(data.groupby(y)[x].value_counts(normalize = True))\n \n # Rename the column and reset the index\n counts = counts.rename(columns = {x: 'normalized_count'}).reset_index()\n counts['percent'] = 100 * counts['normalized_count']\n \n # Add the raw count\n counts['raw_count'] = list(raw_counts['raw_count'])\n \n plt.figure(figsize = (14, 10))\n # Scatter plot sized by percent\n plt.scatter(counts[x], counts[y], edgecolor = 'k', color = 'lightgreen',\n s = 100 * np.sqrt(counts['raw_count']), marker = 'o',\n alpha = 0.6, linewidth = 1.5)\n \n if annotate:\n # Annotate the plot with text\n for i, row in counts.iterrows():\n # Put text with appropriate offsets\n plt.annotate(xy = (row[x] - (1 / counts[x].nunique()), \n row[y] - (0.15 / counts[y].nunique())),\n color = 'navy',\n s = f\"{round(row['percent'], 1)}%\")\n \n # Set tick marks\n plt.yticks(counts[y].unique())\n plt.xticks(counts[x].unique())\n \n # Transform min and max to evenly space in square root domain\n sqr_min = int(np.sqrt(raw_counts['raw_count'].min()))\n sqr_max = int(np.sqrt(raw_counts['raw_count'].max()))\n \n # 5 sizes for legend\n msizes = list(range(sqr_min, sqr_max,\n int(( sqr_max - sqr_min) / 5)))\n markers = []\n \n # Markers for legend\n for size in msizes:\n markers.append(plt.scatter([], [], s = 100 * size, \n label = f'{int(round(np.square(size) / 100) * 100)}', \n color = 'lightgreen',\n alpha = 0.6, edgecolor = 'k', linewidth = 1.5))\n \n # Legend and formatting\n plt.legend(handles = markers, title = 'Counts',\n labelspacing = 3, handletextpad = 2,\n fontsize = 16,\n loc = (1.10, 0.19))\n \n plt.annotate(f'* Size represents raw count while % is for a given y value.',\n xy = (0, 1), xycoords = 'figure points', size = 10)\n \n # Adjust axes limits\n #plt.xlim((counts[x].min() - (6 / counts[x].nunique()), \n # counts[x].max() + (6 / counts[x].nunique())))\n #plt.ylim((counts[y].min() - (4 / counts[y].nunique()), \n # counts[y].max() + (4 / counts[y].nunique())))\n plt.grid(None)\n plt.xlabel(f\"{x}\"); plt.ylabel(f\"{y}\"); plt.title(f\"{y} vs {x}\")\n\n\n# plots two categorical columns in the dataframe train\nplot_categoricals('grade', 'program_status', train);",
"_____no_output_____"
]
],
[
[
"## Value Count Plots",
"_____no_output_____"
]
],
[
[
"# plot value counts of a column\ndef plot_value_counts(df, col, condition=False):\n \"\"\"Plot value counts of a column, optionally with only the heads of a household\"\"\"\n # apply condition here if required\n if condition:\n # define condition below <>\n df = df.loc[df[col] == condition].copy()\n \n plt.figure(figsize = (8, 6))\n df[col].value_counts().sort_index().plot.bar(color = 'red',\n edgecolor = 'g',\n linewidth = 2)\n plt.xlabel(f'{col}')\n plt.title(f'{col} Value Counts')\n plt.ylabel('Count')\n plt.show()\n\n# plot\nplot_value_counts(train, 'grade')\nplot_value_counts(train, 'program_status')\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b63ae2a3db72b32ad4b0bf83ee42afb0e04403 | 9,645 | ipynb | Jupyter Notebook | python/hacker-rank/playground.ipynb | asciito/simple-scripts | 80fd414120f2d2c7d5f22fc940752eb5240a89c6 | [
"MIT"
] | null | null | null | python/hacker-rank/playground.ipynb | asciito/simple-scripts | 80fd414120f2d2c7d5f22fc940752eb5240a89c6 | [
"MIT"
] | null | null | null | python/hacker-rank/playground.ipynb | asciito/simple-scripts | 80fd414120f2d2c7d5f22fc940752eb5240a89c6 | [
"MIT"
] | null | null | null | 40.52521 | 1,719 | 0.314567 | [
[
[
"a = ord('a')\nz = ord('z')\n\nfor c in range(a, z + 1):\n print(chr(c), end='')",
"abcdefghijklmnopqrstuvwxyz"
],
[
"def printAlphabet(size):\n a = ord('a')\n output = ''\n piramid = []\n\n if (0 >= size or size > 26 or size == 1):\n print('a')\n else:\n for c in range(a + size - 1, a - 1, -1):\n output = chr(c) + output\n piramid.append((output[len(output):0:-1] + output).center(size + size - 1, '-'))\n \n for step in piramid[len(piramid) - 2::-1]:\n piramid.append(step)\n \n for char in [*piramid]:\n print(*char, sep=\"-\")",
"_____no_output_____"
],
[
"printAlphabet(3)",
"----c----\n--c-b-c--\nc-b-a-b-c\n--c-b-c--\n----c----\n"
],
[
"printAlphabet(5)",
"--------e--------\n------e-d-e------\n----e-d-c-d-e----\n--e-d-c-b-c-d-e--\ne-d-c-b-a-b-c-d-e\n--e-d-c-b-c-d-e--\n----e-d-c-d-e----\n------e-d-e------\n--------e--------\n"
],
[
"printAlphabet(10)",
"------------------j------------------\n----------------j-i-j----------------\n--------------j-i-h-i-j--------------\n------------j-i-h-g-h-i-j------------\n----------j-i-h-g-f-g-h-i-j----------\n--------j-i-h-g-f-e-f-g-h-i-j--------\n------j-i-h-g-f-e-d-e-f-g-h-i-j------\n----j-i-h-g-f-e-d-c-d-e-f-g-h-i-j----\n--j-i-h-g-f-e-d-c-b-c-d-e-f-g-h-i-j--\nj-i-h-g-f-e-d-c-b-a-b-c-d-e-f-g-h-i-j\n--j-i-h-g-f-e-d-c-b-c-d-e-f-g-h-i-j--\n----j-i-h-g-f-e-d-c-d-e-f-g-h-i-j----\n------j-i-h-g-f-e-d-e-f-g-h-i-j------\n--------j-i-h-g-f-e-f-g-h-i-j--------\n----------j-i-h-g-f-g-h-i-j----------\n------------j-i-h-g-h-i-j------------\n--------------j-i-h-i-j--------------\n----------------j-i-j----------------\n------------------j------------------\n"
],
[
"printAlphabet(15)",
"----------------------------o----------------------------\n--------------------------o-n-o--------------------------\n------------------------o-n-m-n-o------------------------\n----------------------o-n-m-l-m-n-o----------------------\n--------------------o-n-m-l-k-l-m-n-o--------------------\n------------------o-n-m-l-k-j-k-l-m-n-o------------------\n----------------o-n-m-l-k-j-i-j-k-l-m-n-o----------------\n--------------o-n-m-l-k-j-i-h-i-j-k-l-m-n-o--------------\n------------o-n-m-l-k-j-i-h-g-h-i-j-k-l-m-n-o------------\n----------o-n-m-l-k-j-i-h-g-f-g-h-i-j-k-l-m-n-o----------\n--------o-n-m-l-k-j-i-h-g-f-e-f-g-h-i-j-k-l-m-n-o--------\n------o-n-m-l-k-j-i-h-g-f-e-d-e-f-g-h-i-j-k-l-m-n-o------\n----o-n-m-l-k-j-i-h-g-f-e-d-c-d-e-f-g-h-i-j-k-l-m-n-o----\n--o-n-m-l-k-j-i-h-g-f-e-d-c-b-c-d-e-f-g-h-i-j-k-l-m-n-o--\no-n-m-l-k-j-i-h-g-f-e-d-c-b-a-b-c-d-e-f-g-h-i-j-k-l-m-n-o\n--o-n-m-l-k-j-i-h-g-f-e-d-c-b-c-d-e-f-g-h-i-j-k-l-m-n-o--\n----o-n-m-l-k-j-i-h-g-f-e-d-c-d-e-f-g-h-i-j-k-l-m-n-o----\n------o-n-m-l-k-j-i-h-g-f-e-d-e-f-g-h-i-j-k-l-m-n-o------\n--------o-n-m-l-k-j-i-h-g-f-e-f-g-h-i-j-k-l-m-n-o--------\n----------o-n-m-l-k-j-i-h-g-f-g-h-i-j-k-l-m-n-o----------\n------------o-n-m-l-k-j-i-h-g-h-i-j-k-l-m-n-o------------\n--------------o-n-m-l-k-j-i-h-i-j-k-l-m-n-o--------------\n----------------o-n-m-l-k-j-i-j-k-l-m-n-o----------------\n------------------o-n-m-l-k-j-k-l-m-n-o------------------\n--------------------o-n-m-l-k-l-m-n-o--------------------\n----------------------o-n-m-l-m-n-o----------------------\n------------------------o-n-m-n-o------------------------\n--------------------------o-n-o--------------------------\n----------------------------o----------------------------\n"
],
[
"printAlphabet(20)",
"--------------------------------------t--------------------------------------\n------------------------------------t-s-t------------------------------------\n----------------------------------t-s-r-s-t----------------------------------\n--------------------------------t-s-r-q-r-s-t--------------------------------\n------------------------------t-s-r-q-p-q-r-s-t------------------------------\n----------------------------t-s-r-q-p-o-p-q-r-s-t----------------------------\n--------------------------t-s-r-q-p-o-n-o-p-q-r-s-t--------------------------\n------------------------t-s-r-q-p-o-n-m-n-o-p-q-r-s-t------------------------\n----------------------t-s-r-q-p-o-n-m-l-m-n-o-p-q-r-s-t----------------------\n--------------------t-s-r-q-p-o-n-m-l-k-l-m-n-o-p-q-r-s-t--------------------\n------------------t-s-r-q-p-o-n-m-l-k-j-k-l-m-n-o-p-q-r-s-t------------------\n----------------t-s-r-q-p-o-n-m-l-k-j-i-j-k-l-m-n-o-p-q-r-s-t----------------\n--------------t-s-r-q-p-o-n-m-l-k-j-i-h-i-j-k-l-m-n-o-p-q-r-s-t--------------\n------------t-s-r-q-p-o-n-m-l-k-j-i-h-g-h-i-j-k-l-m-n-o-p-q-r-s-t------------\n----------t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t----------\n--------t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t--------\n------t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t------\n----t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-d-c-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t----\n--t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-d-c-b-c-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t--\nt-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-d-c-b-a-b-c-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t\n--t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-d-c-b-c-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t--\n----t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-d-c-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t----\n------t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t------\n--------t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t--------\n----------t-s-r-q-p-o-n-m-l-k-j-i-h-g-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t----------\n------------t-s-r-q-p-o-n-m-l-k-j-i-h-g-h-i-j-k-l-m-n-o-p-q-r-s-t------------\n--------------t-s-r-q-p-o-n-m-l-k-j-i-h-i-j-k-l-m-n-o-p-q-r-s-t--------------\n----------------t-s-r-q-p-o-n-m-l-k-j-i-j-k-l-m-n-o-p-q-r-s-t----------------\n------------------t-s-r-q-p-o-n-m-l-k-j-k-l-m-n-o-p-q-r-s-t------------------\n--------------------t-s-r-q-p-o-n-m-l-k-l-m-n-o-p-q-r-s-t--------------------\n----------------------t-s-r-q-p-o-n-m-l-m-n-o-p-q-r-s-t----------------------\n------------------------t-s-r-q-p-o-n-m-n-o-p-q-r-s-t------------------------\n--------------------------t-s-r-q-p-o-n-o-p-q-r-s-t--------------------------\n----------------------------t-s-r-q-p-o-p-q-r-s-t----------------------------\n------------------------------t-s-r-q-p-q-r-s-t------------------------------\n--------------------------------t-s-r-q-r-s-t--------------------------------\n----------------------------------t-s-r-s-t----------------------------------\n------------------------------------t-s-t------------------------------------\n--------------------------------------t--------------------------------------\n"
],
[
"printAlphabet(2)",
"--b--\nb-a-b\n--b--\n"
],
[
"printAlphabet(1)",
"a\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b64ed62bcc1d315a0a6b717f58c197009e8239 | 7,533 | ipynb | Jupyter Notebook | demos/Lecture04-Demos.ipynb | annabellegrimes/CPEN-400Q | 044d521f8109567ec004a9c882898f9e2eb5a19e | [
"MIT"
] | 6 | 2022-01-12T22:57:13.000Z | 2022-03-15T21:20:59.000Z | demos/Lecture04-Demos.ipynb | annabellegrimes/CPEN-400Q | 044d521f8109567ec004a9c882898f9e2eb5a19e | [
"MIT"
] | null | null | null | demos/Lecture04-Demos.ipynb | annabellegrimes/CPEN-400Q | 044d521f8109567ec004a9c882898f9e2eb5a19e | [
"MIT"
] | 3 | 2022-02-04T07:48:01.000Z | 2022-03-22T21:40:06.000Z | 22.621622 | 160 | 0.48188 | [
[
[
"# Demos: Lecture 4",
"_____no_output_____"
]
],
[
[
"import pennylane as qml\nimport numpy as np",
"/opt/conda/envs/pennylane/lib/python3.8/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.\n warnings.warn(\"Setuptools is replacing distutils.\")\n"
]
],
[
[
"## Demo 1: `qml.ctrl`",
"_____no_output_____"
]
],
[
[
"def some_function():\n qml.PauliX(wires=1)\n qml.CNOT(wires=[1, 2])\n \n qml.Hadamard(wires=2)\n qml.CRX(0.3, wires=[2, 1])",
"_____no_output_____"
],
[
"dev = qml.device('default.qubit', wires=3)\n\[email protected](dev)\ndef control_the_thing():\n qml.Hadamard(wires=0)\n \n qml.ctrl(some_function, control=0)()\n \n return qml.state()",
"_____no_output_____"
],
[
"control_the_thing()",
"_____no_output_____"
],
[
"print(qml.draw(control_the_thing, expansion_strategy='device')())",
" 0: ──H──╭C──╭C──╭ControlledPhaseShift(1.57)──╭C─────────╭ControlledPhaseShift(1.57)──╭C─────────╭C─────────╭C──╭C──────────╭C──╭C──────────╭┤ State \n 1: ─────╰X──├C──│────────────────────────────│──────────│────────────────────────────╰RZ(1.57)──╰RY(0.15)──├X──╰RY(-0.15)──├X──╰RZ(-1.57)──├┤ State \n 2: ─────────╰X──╰ControlledPhaseShift(1.57)──╰RX(1.57)──╰ControlledPhaseShift(1.57)────────────────────────╰C──────────────╰C──────────────╰┤ State \n\n"
]
],
[
[
"## Demo 2: multi-qubit measurements",
"_____no_output_____"
]
],
[
[
"dev = qml.device('default.qubit', wires=3)#, shots=10)\n\[email protected](dev)\ndef something_parametrized(x, y):\n qml.Hadamard(wires=0)\n qml.CRX(x, wires=[0, 1])\n qml.CRY(y, wires=[1, 2])\n \n return qml.probs(wires=[0])",
"_____no_output_____"
],
[
"something_parametrized(0.1, 0.2)",
"_____no_output_____"
]
],
[
[
"## Demo 3: multi-qubit expectation values",
"_____no_output_____"
]
],
[
[
"dev = qml.device('default.qubit', wires=3)#, shots=10)\n\[email protected](dev)\ndef something_parametrized(x, y):\n qml.Hadamard(wires=0)\n qml.CRX(x, wires=[0, 1])\n qml.CRY(y, wires=[1, 2])\n \n return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1)), qml.expval(qml.PauliZ(2))",
"_____no_output_____"
],
[
"something_parametrized(0.3, 0.4)",
"_____no_output_____"
],
[
"dev = qml.device('default.qubit', wires=3)#, shots=10)\n\[email protected](dev)\ndef something_parametrized(x, y):\n qml.Hadamard(wires=0)\n qml.CRX(x, wires=[0, 1])\n qml.CRY(y, wires=[1, 2])\n \n return qml.expval(qml.PauliX(0) @ qml.PauliX(1) @ qml.PauliY(2))",
"_____no_output_____"
],
[
"something_parametrized(0.3, 0.4)",
"_____no_output_____"
]
],
[
[
"## Demo 4: superdense coding",
"_____no_output_____"
],
[
"<img src=\"fig/superdense.png\" width=\"600px\">",
"_____no_output_____"
]
],
[
[
"dev = qml.device('default.qubit', wires=2, shots=1)\n\ndef create_entangled_state(wires=None):\n qml.Hadamard(wires=wires[0])\n qml.CNOT(wires=[wires[0], wires[1]])\n \[email protected](dev)\ndef superdense_coding(b1=0, b2=0):\n create_entangled_state(wires=[0, 1])\n \n if b1 == 1:\n qml.PauliZ(wires=0)\n \n if b2 == 1:\n qml.PauliX(wires=0)\n \n qml.adjoint(create_entangled_state)(wires=[0, 1])\n \n return qml.sample()",
"_____no_output_____"
],
[
"superdense_coding(b1=1, b2=0)",
"_____no_output_____"
]
],
[
[
"## Demo 5: teleportation ",
"_____no_output_____"
],
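[
"As a minimal, purely illustrative sketch (the test state, wire labels and function name below are arbitrary choices, not given in the lecture), the deferred-measurement circuit shown in the figure below can be written in PennyLane by replacing the classically-controlled corrections with controlled gates, reusing `create_entangled_state` from the superdense coding demo for the shared Bell pair.",
"_____no_output_____"
],
[
"# Minimal teleportation sketch (illustrative only; it follows the deferred-measurement circuit in the figure below).\n# Wire 0 holds the state to teleport, wires 1 and 2 share a Bell pair; wire 2 ends up with the state.\ndev_teleport = qml.device('default.qubit', wires=3)\n\[email protected](dev_teleport)\ndef teleport(angle=0.5):\n    qml.RY(angle, wires=0)                # arbitrary test state on wire 0\n    create_entangled_state(wires=[1, 2])  # shared Bell pair (helper from the superdense coding demo)\n    qml.CNOT(wires=[0, 1])                # Bell-basis rotation on the sender's qubits\n    qml.Hadamard(wires=0)\n    qml.CNOT(wires=[1, 2])                # deferred X correction\n    qml.CZ(wires=[0, 2])                  # deferred Z correction\n    return qml.state()                    # wire 2 now carries the test state, unentangled from wires 0 and 1\n\nteleport(0.5)",
"_____no_output_____"
],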
[
"<img src=\"fig/teleportation_deferred.png\" width=\"800px\">",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e7b662e11b976837a85261f9f4589c41aeba02f2 | 220,960 | ipynb | Jupyter Notebook | docs/tutorials/01_getting_started.ipynb | rhugonnet/scikit-gstat | a142ca13aa3b048c46613d1247396c80c32a8145 | [
"MIT"
] | 141 | 2017-10-13T10:25:19.000Z | 2022-03-31T00:41:32.000Z | docs/tutorials/01_getting_started.ipynb | rhugonnet/scikit-gstat | a142ca13aa3b048c46613d1247396c80c32a8145 | [
"MIT"
] | 109 | 2017-09-23T12:42:44.000Z | 2022-03-28T06:53:44.000Z | docs/tutorials/01_getting_started.ipynb | rhugonnet/scikit-gstat | a142ca13aa3b048c46613d1247396c80c32a8145 | [
"MIT"
] | 42 | 2017-11-28T18:44:22.000Z | 2022-03-24T02:27:44.000Z | 342.043344 | 123,848 | 0.931042 | [
[
[
"# 1 - Getting Started",
"_____no_output_____"
],
[
"The main application for `scikit-gstat` is variogram analysis and [Kriging](https://en.wikipedia.org/wiki/Kriging). This Tutorial will guide you through the most basic functionality of `scikit-gstat`. There are other tutorials that will explain specific methods or attributes in `scikit-gstat` in more detail.\n\n#### What you will learn in this tutorial\n\n* How to instantiate `Variogram` and `OrdinaryKriging`\n* How to read a variogram\n* Perform an interpolation\n* Most basic plotting\n",
"_____no_output_____"
]
],
[
[
"import numpy as np \nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom pprint import pprint\nplt.style.use('ggplot')",
"_____no_output_____"
]
],
[
[
"The `Variogram` and `OrdinaryKriging` classes can be loaded directly from `skgstat`. This is the name of the Python module.",
"_____no_output_____"
]
],
[
[
"from skgstat import Variogram, OrdinaryKriging",
"_____no_output_____"
]
],
[
[
"At the current version, there are some deprecated attributes and method in the `Variogram` class. They do not raise `DeprecationWarning`s, but rather print a warning message to the screen. You can suppress this warning by adding an `SKG_SUPPRESS` environment variable",
"_____no_output_____"
]
],
[
[
"%set_env SKG_SUPPRESS=true",
"env: SKG_SUPPRESS=true\n"
]
],
[
[
"## 1.1 Load data",
"_____no_output_____"
],
[
"You can find a prepared example data set in the `./data` subdirectory. This example is extracted from a generated Gaussian random field. We can expect the field to be stationary and show a nice spatial dependence, because it was created that way.\nWe can load one of the examples and have a look at the data:",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('./data/sample_sr.csv')\nprint(\"Loaded %d rows and %d columns\" % data.shape)\ndata.head()",
"Loaded 200 rows and 3 columns\n"
]
],
[
[
"Get a first overview of your data by plotting the `x` and `y` coordinates and visually inspect how the `z` spread out.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(1, 1, figsize=(9, 9))\nart = ax.scatter(data.x,data.y, s=50, c=data.z, cmap='plasma')\nplt.colorbar(art);",
"_____no_output_____"
]
],
[
[
"We can already see a lot from here: \n\n* The small values seem to concentrate on the upper left and lower right corner\n* Larger values are arranged like a band from lower left to upper right corner\n* To me, each of these blobs seem to have a diameter of something like 30 or 40 units.\n* The distance between the minimum and maximum seems to be not more than 60 or 70 units.\n\nThese are already very important insights.",
"_____no_output_____"
],
[
"## 1.2 Build a Variogram",
"_____no_output_____"
],
[
"As a quick reminder, the variogram relates pair-wise separating distances of `coordinates` and relates them to the *semi-variance* of the corresponding `values` pairs. The default estimator used is the Matheron estimator:\n\n$$ \\gamma (h) = \\frac{1}{2N(h)} * \\sum_{i=1}^{N(h)}(Z(x_i) - Z(x_{i + h}))^2 $$\n\nFor more details, please refer to the [User Guide](https://mmaelicke.github.io/scikit-gstat/userguide/variogram.html#experimental-variograms)",
"_____no_output_____"
],
[
"The `Variogram` class takes at least two arguments. The `coordinates` and the `values` observed at these locations. \nYou should also at least set the `normalize` parameter to explicitly, as it changes it's default value in version `0.2.8` to `False`. This attribute affects only the plotting, not the variogram values.\nAdditionally, the number of bins is set to 15, because we have fairly many observations and the default value of 10 is unnecessarily small. The `maxlag` set the maximum distance for the last bin. We know from the plot above, that more than 60 units is not really meaningful",
"_____no_output_____"
]
],
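[
[
"To make the estimator above concrete, here is a purely illustrative, hand-rolled version for a single lag class; the pair differences below are invented toy numbers (scikit-gstat computes this for every lag class internally).",
"_____no_output_____"
]
],
[
[
"# Illustrative only: a hand-rolled Matheron estimator for a single lag class.\n# pair_diffs stands for the differences Z(x_i) - Z(x_i+h) of all point pairs in that lag class (toy values).\npair_diffs = np.array([0.3, -0.8, 1.1, 0.4, -0.2])\ngamma_h = np.sum(pair_diffs**2) / (2 * len(pair_diffs))\ngamma_h",
"_____no_output_____"
]
],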
[
[
"V = Variogram(data[['x', 'y']].values, data.z.values, normalize=False, maxlag=60, n_lags=15)\nfig = V.plot(show=False)",
"_____no_output_____"
]
],
[
[
"The upper subplot show the histogram for the count of point-pairs in each lag class. You can see various things here:\n\n* As expected, there is a clear spatial dependency, because semi-variance increases with distance (blue dots)\n* The default `spherical` variogram model is well fitted to the experimental data\n* The shape of the dependency is **not** captured quite well, but fair enough for this example\n\nThe sill of the variogram should correspond with the field variance. The field is unknown, but we can compare the sill to the *sample* variance:",
"_____no_output_____"
]
],
[
[
"print('Sample variance: %.2f Variogram sill: %.2f' % (data.z.var(), V.describe()['sill']))",
"Sample variance: 1.10 Variogram sill: 1.26\n"
]
],
[
[
"The `describe` method will return the most important parameters as a dictionary. And we can simply print the variogram ob,ect to the screen, to see all parameters.",
"_____no_output_____"
]
],
[
[
"pprint(V.describe())",
"{'effective_range': 39.50027313170537,\n 'estimator': 'matheron',\n 'name': 'spherical',\n 'nugget': 0,\n 'sill': 1.2553698556802062}\n"
],
[
"print(V)",
"spherical Variogram\n-------------------\nEstimator: matheron\n \rEffective Range: 39.50\n \rSill: 1.26\n \rNugget: 0.00\n \n"
]
],
[
[
"## 1.3 Kriging",
"_____no_output_____"
],
[
"The Kriging class will now use the Variogram from above to estimate the Kriging weights for each grid cell. This is done by solving a linear equation system. For an unobserved location $s_0$, we can use the distances to 5 observation points and build the system like:\n\n$$\n\\begin{pmatrix}\n\\gamma(s_1, s_1) & \\gamma(s_1, s_2) & \\gamma(s_1, s_3) & \\gamma(s_1, s_4) & \\gamma(s_1, s_5) & 1\\\\\n\\gamma(s_2, s_1) & \\gamma(s_2, s_2) & \\gamma(s_2, s_3) & \\gamma(s_2, s_4) & \\gamma(s_2, s_5) & 1\\\\\n\\gamma(s_3, s_1) & \\gamma(s_3, s_2) & \\gamma(s_3, s_3) & \\gamma(s_3, s_4) & \\gamma(s_3, s_5) & 1\\\\\n\\gamma(s_4, s_1) & \\gamma(s_4, s_2) & \\gamma(s_4, s_3) & \\gamma(s_4, s_4) & \\gamma(s_4, s_5) & 1\\\\\n\\gamma(s_5, s_1) & \\gamma(s_5, s_2) & \\gamma(s_5, s_3) & \\gamma(s_5, s_4) & \\gamma(s_5, s_5) & 1\\\\\n1 & 1 & 1 & 1 & 1 & 0 \\\\\n\\end{pmatrix} *\n\\begin{bmatrix}\n\\lambda_1 \\\\\n\\lambda_2 \\\\\n\\lambda_3 \\\\\n\\lambda_4 \\\\\n\\lambda_5 \\\\\n\\mu \\\\\n\\end{bmatrix} =\n\\begin{pmatrix}\n\\gamma(s_0, s_1) \\\\\n\\gamma(s_0, s_2) \\\\\n\\gamma(s_0, s_3) \\\\\n\\gamma(s_0, s_4) \\\\\n\\gamma(s_0, s_5) \\\\\n1 \\\\\n\\end{pmatrix}\n$$\n\nFor more information, please refer to the [User Guide](https://mmaelicke.github.io/scikit-gstat/userguide/kriging.html#kriging-equation-system)",
"_____no_output_____"
],
[
"Consequently, the `OrdinaryKriging` class needs a `Variogram` object as a mandatory attribute. Two very important optional attributes are `min_points` and `max_points`. They will limit the size of the Kriging equation system. As we have 200 observations, we can require at least 5 neighbors within the range. More than 15 will only unnecessarily slow down the computation. The `mode='exact'` attribute will advise the class to build and solve the system above for each location.",
"_____no_output_____"
]
],
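[
[
"To make the equation system above concrete, here is a toy version solved by hand with NumPy. All semi-variance values and observations are invented; `OrdinaryKriging` assembles and solves this system internally for every target location.",
"_____no_output_____"
]
],
[
[
"# Illustrative only: the ordinary kriging system A.w = b for one target location and 5 toy neighbours.\n# All semi-variance values and observations below are invented numbers.\ngamma_between = np.array([\n    [0.0, 0.4, 0.7, 0.9, 1.0],\n    [0.4, 0.0, 0.4, 0.7, 0.9],\n    [0.7, 0.4, 0.0, 0.4, 0.7],\n    [0.9, 0.7, 0.4, 0.0, 0.4],\n    [1.0, 0.9, 0.7, 0.4, 0.0],\n])                                                     # gamma(s_i, s_j)\ngamma_to_target = np.array([0.3, 0.25, 0.4, 0.7, 0.9]) # gamma(s_0, s_i)\nobservations = np.array([1.2, 0.9, 1.1, 0.8, 1.0])     # toy Z(s_i)\n\n# border the matrix with ones and a zero for the Lagrange multiplier\nA = np.ones((6, 6))\nA[:5, :5] = gamma_between\nA[5, 5] = 0.0\nb = np.append(gamma_to_target, 1.0)\n\nweights = np.linalg.solve(A, b)\nlambdas, mu = weights[:5], weights[5]\nprint('estimate:', lambdas @ observations, ' weight sum:', lambdas.sum())",
"_____no_output_____"
]
],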
[
[
"ok = OrdinaryKriging(V, min_points=5, max_points=15, mode='exact')",
"_____no_output_____"
]
],
[
[
"The `transform` method will apply the interpolation for passed arrays of coordinates. It requires each dimension as a single 1D array. We can easily build a meshgrid of 100x100 coordinates and pass them to the interpolator. To recieve a 2D result, we can simply reshape the result. The Kriging error will be available as the `sigma` attribute of the interpolator.",
"_____no_output_____"
]
],
[
[
"# build the target grid\nxx, yy = np.mgrid[0:99:100j, 0:99:100j]\nfield = ok.transform(xx.flatten(), yy.flatten()).reshape(xx.shape)\ns2 = ok.sigma.reshape(xx.shape)",
"_____no_output_____"
]
],
[
[
"And finally, we can plot the result.",
"_____no_output_____"
]
],
[
[
"fig, axes = plt.subplots(1, 2, figsize=(16, 8))\n\nart = axes[0].matshow(field.T, origin='lower', cmap='plasma')\naxes[0].set_title('Interpolation')\naxes[0].plot(data.x, data.y, '+k')\naxes[0].set_xlim((0,100))\naxes[0].set_ylim((0,100))\nplt.colorbar(art, ax=axes[0])\nart = axes[1].matshow(s2.T, origin='lower', cmap='YlGn_r')\naxes[1].set_title('Kriging Error')\nplt.colorbar(art, ax=axes[1])\naxes[1].plot(data.x, data.y, '+w')\naxes[1].set_xlim((0,100))\naxes[1].set_ylim((0,100));",
"_____no_output_____"
]
],
[
[
"From the Kriging error map, you can see how the interpolation is very certain close to the observation points, but rather high in areas with only little coverage (like the upper left corner).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7b668773cc6a406ef549a854f1c8fe315f0769f | 7,467 | ipynb | Jupyter Notebook | MLGame/games/snake/ml/train.py.ipynb | Liuian/1092_INTRODUCTION-TO-MACHINE-LEARNING-AND-ITS-APPLICATION-TO-GAMING | f4a58d0d9f5832a77a4a86352e084065dc7bae50 | [
"MIT"
] | null | null | null | MLGame/games/snake/ml/train.py.ipynb | Liuian/1092_INTRODUCTION-TO-MACHINE-LEARNING-AND-ITS-APPLICATION-TO-GAMING | f4a58d0d9f5832a77a4a86352e084065dc7bae50 | [
"MIT"
] | null | null | null | MLGame/games/snake/ml/train.py.ipynb | Liuian/1092_INTRODUCTION-TO-MACHINE-LEARNING-AND-ITS-APPLICATION-TO-GAMING | f4a58d0d9f5832a77a4a86352e084065dc7bae50 | [
"MIT"
] | null | null | null | 27.966292 | 126 | 0.50837 | [
[
[
"import pickle\nimport numpy as np\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn.metrics import classification_report, confusion_matrix\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\nfrom sklearn import metrics\nfrom sklearn.ensemble import RandomForestClassifier\n\n#試取資料\nfile = open(\"/Users/peggy/Documents/109-2(2-2)/machine_learning/MLGame/games/snake/log/snake1.pickle\", \"rb\")\ndata = pickle.load(file)\nfile.close()\ntype(data['ml'])",
"_____no_output_____"
],
[
"game_info = data[\"ml\"][\"scene_info\"]\ngame_command = data[\"ml\"][\"command\"]\n#print(game_info)\n#print(game_command)",
"_____no_output_____"
],
[
"\nfor i in range(2,3): \n path = \"/Users/peggy/Documents/109-2(2-2)/machine_learning/MLGame/games/snake/log/snake\" + str(i) + \".pickle\"\n file = open(path, \"rb\")\n data = pickle.load(file)\n game_info = game_info + data['ml']['scene_info']\n game_command = game_command + data['ml']['command']\n file.close() \n\nprint(len(game_info))\nprint(len(game_command))\n",
"158034\n158034\n"
],
[
"g = game_info[1]\n\nfeature = np.array([g[\"snake_head\"][0], g[\"snake_head\"][1]])\nprint(feature)\nprint(game_command[1])\nif game_command[1] == \"UP\": game_command[1] = 0\nelif game_command[1] == \"DOWN\": game_command[1] = 1\nelif game_command[1] == \"LEFT\": game_command[1] = 2\nelif game_command[1] == \"RIGHT\": game_command[1] = 3\nelse: game_command[1] = 4 ",
"[40 50]\nLEFT\n"
],
[
"for i in range(2, len(game_info) - 1):\n g = game_info[i]\n \n feature = np.vstack((feature, [g[\"snake_head\"][0], g[\"snake_head\"][1]]))\n if game_command[i] == \"UP\": game_command[i] = 0\n elif game_command[i] == \"DOWN\": game_command[i] = 1\n elif game_command[i] == \"LEFT\": game_command[i] = 2\n elif game_command[i] == \"RIGHT\": game_command[i] = 3\n else: game_command[i] = 4 \n \nanswer = np.array(game_command[1:-1])\nprint(feature)\nprint(feature.shape)\nprint(answer)\nprint(answer.shape)",
"[[ 40 50]\n [ 30 50]\n [ 20 50]\n ...\n [270 0]\n [280 0]\n [290 0]]\n(158032, 2)\n[2 2 2 ... 3 3 3]\n(158032,)\n"
]
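,
[
"# (Added sketch, not in the original notebook) A vectorized alternative to the row-by-row\n# np.vstack loop above; repeatedly stacking ~158k rows grows quadratically in cost. Run it on the\n# raw string commands (i.e. instead of the conversion loop above, not after it). It assumes the\n# game_info / game_command lists loaded earlier and reproduces the same feature / answer arrays.\ncmd_to_int = {'UP': 0, 'DOWN': 1, 'LEFT': 2, 'RIGHT': 3}\nfeature_fast = np.array([[g['snake_head'][0], g['snake_head'][1]] for g in game_info[1:-1]])\nanswer_fast = np.array([cmd_to_int.get(c, 4) for c in game_command[1:-1]])\nprint(feature_fast.shape)\nprint(answer_fast.shape)",
"_____no_output_____"
]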
],
[
[
"# KNN\n",
"_____no_output_____"
]
],
[
[
"import pickle\nimport numpy as np\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn.metrics import classification_report, confusion_matrix\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\n#資料劃分\nx_train, x_test, y_train, y_test = train_test_split(feature, answer, test_size=0.3, random_state=9)\n#參數區間\nparam_grid = {'n_neighbors':[1, 2, 3, 4, 5]}\n#交叉驗證 \ncv = StratifiedShuffleSplit(n_splits=2, test_size=0.3, random_state=12)\ngrid = GridSearchCV(KNeighborsClassifier(n_neighbors = 5), param_grid, cv=cv, verbose=10, n_jobs=-1) #n_jobs為平行運算的數量\ngrid.fit(x_train, y_train)\ngrid_predictions = grid.predict(x_test)\n\n#儲存\nfile = open('model_KNN.pickle', 'wb')\npickle.dump(grid, file)\nfile.close()\n",
"Fitting 2 folds for each of 5 candidates, totalling 10 fits\n"
],
[
"#最佳參數\nprint(grid.best_params_)\n#預測結果\n#print(grid_predictions)\n#混淆矩陣\nprint(confusion_matrix(y_test, grid_predictions))\n#分類結果\nprint(classification_report(y_test, grid_predictions))",
"{'n_neighbors': 5}\n[[6164 75 373 345 0]\n [ 71 6402 356 352 0]\n [ 351 360 8975 105 0]\n [ 368 397 123 8683 0]\n [ 5 5 7 25 1]]\n precision recall f1-score support\n\n 0 0.89 0.89 0.89 6957\n 1 0.88 0.89 0.89 7181\n 2 0.91 0.92 0.91 9791\n 3 0.91 0.91 0.91 9571\n 4 1.00 0.02 0.05 43\n\n accuracy 0.90 33543\n macro avg 0.92 0.72 0.73 33543\nweighted avg 0.90 0.90 0.90 33543\n\n"
]
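,
[
"# (Added sketch, not in the original notebook) A minimal example of how the saved classifier\n# might be used at play time: load model_KNN.pickle, build the same [x, y] snake-head feature\n# used for training, and map the predicted class back to a command string. The mapping of class 4\n# to 'NONE' and the exact MLGame callback interface are assumptions, not taken from this notebook.\nwith open('model_KNN.pickle', 'rb') as f:\n    knn_model = pickle.load(f)\n\nint_to_cmd = {0: 'UP', 1: 'DOWN', 2: 'LEFT', 3: 'RIGHT', 4: 'NONE'}\n\ndef predict_command(scene_info):\n    feat = [[scene_info['snake_head'][0], scene_info['snake_head'][1]]]\n    return int_to_cmd[int(knn_model.predict(feat)[0])]\n\nprint(predict_command(game_info[0]))",
"_____no_output_____"
]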
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7b6697c846b6eba8f65359c746e7dae75733455 | 39,598 | ipynb | Jupyter Notebook | Review Scraping.ipynb | smrutirpanigrahi/smartacus-backend | 9f06d13349e14692983bb8626645215686f0d89f | [
"MIT"
] | null | null | null | Review Scraping.ipynb | smrutirpanigrahi/smartacus-backend | 9f06d13349e14692983bb8626645215686f0d89f | [
"MIT"
] | 3 | 2021-03-30T10:56:56.000Z | 2021-04-19T03:29:01.000Z | Review Scraping.ipynb | smrutirpanigrahi/smartacus-backend | 9f06d13349e14692983bb8626645215686f0d89f | [
"MIT"
] | 2 | 2021-03-27T04:20:43.000Z | 2021-03-27T04:21:46.000Z | 69.106457 | 1,273 | 0.643517 | [
[
[
"import requests \nfrom bs4 import BeautifulSoup \nimport re",
"_____no_output_____"
],
[
"# start_url = 'https://www.tripadvisor.com.au/Hotel_Review-g255100-d255744-Reviews-Atlantis_Hotel-Melbourne_Victoria.html'\n#start_url = 'https://www.tripadvisor.com.au/Restaurant_Review-g255100-d9554485-Reviews-or10-Hochi_Mama-Melbourne_Victoria.html#REVIEWS'\n# start_url = 'https://www.yelp.com/biz/the-meat-and-wine-co-melbourne-2'\nstart_url = 'https://www.yelp.com/biz/the-meat-and-wine-co-melbourne-2?start=30'",
"_____no_output_____"
],
[
"# split the url to different parts\nurl_parts = start_url.split('-Reviews-')\nurls = []\nreviewArr = []\nif \"Hotel_Review\" in start_url:\n if \"-or\" not in start_url:\n for page in range(0, 5):\n url = url_parts[0]+'-Reviews-'+'or{}-'.format(5*page)+url_parts[1]\n print(url)\n urls.append(url)\n else:\n # find all URLS of reviews (define how many pages are needed, here we set it for 5 page) for this hotel\n for page in range(int(int(url_parts[1][2:4])/5), int(int(url_parts[1][2:4])/5 + 5)):\n url = url_parts[0]+'-Reviews-'+'or{}-'.format(5*page)+url_parts[1][5:]\n print(url)\n urls.append(url)\n # extract all reviews from all urls and store in reviewArr(json objects)\n for url in urls:\n response = requests.get(url,timeout=10)\n try:\n status = response.status_code\n print(status)\n except Exception as e:\n print(e)\n content = BeautifulSoup(response.content,\"html.parser\")\n for review in content.findAll('div', attrs={\"class\": \"_2wrUUKlw _3hFEdNs8\"}):\n reviewObject = {\n \"review_title\": review.find('div', attrs={\"class\": \"glasR4aX\"}).text,\n \"review\": review.find('q', attrs={\"class\": \"IRsGHoPm\"}).get_text(separator='\\n'),\n \"review_rating\":str(review.find('div', attrs={\"class\": \"nf9vGX55\"}).find('span'))[-11:-10],\n \"date_of_stay\":review.find('span', attrs={\"class\": \"_34Xs-BQm\"}).text[14:],\n \"review_date\": review.find('div', attrs={\"class\": \"_2fxQ4TOx\"}).text}\n print(reviewObject)\n reviewArr.append(reviewObject)\nelif \"Restaurant_Review\" in start_url:\n if \"-or\" not in start_url:\n for page in range(0,5):\n url = url_parts[0]+'-Reviews-'+'or{}-'.format(10*page)+url_parts[1]\n print(url)\n urls.append(url)\n else: \n for page in range(int(int(url_parts[1][2:4])/10), int(int(url_parts[1][2:4])/10 + 5)):\n url = url_parts[0]+'-Reviews-'+'or{}-'.format(10*page)+url_parts[1][5:]\n print(url)\n urls.append(url)\n for url in urls:\n response = requests.get(url,timeout=10)\n try:\n status = response.status_code\n print(status)\n except Exception as e:\n print(e)\n content = BeautifulSoup(response.content,\"html.parser\")\n for review in content.findAll('div', attrs={\"class\": \"reviewSelector\"}):\n reviewObject = {\n \"review_title\": review.find('span', attrs={\"class\": \"noQuotes\"}).text,\n \"review\": review.find('p', attrs={\"class\": \"partial_entry\"}).text.replace(\"\\n\", \"\"),\n \"review_rating\":str(review.find('div',attrs={\"class\": \"ui_column is-9\"}).find('span'))[-11:-10],\n \"date_of_visit\":review.find('div', attrs={\"class\": \"prw_rup prw_reviews_stay_date_hsx\"}).text[15:],\n \"review_date\": review.find('span', attrs={\"class\": \"ratingDate\"}).text.strip()\n }\n print(reviewObject)\n reviewArr.append(reviewObject)\nelif \"www.yelp.com\" in start_url:\n if \"?start=\" not in start_url:\n for page in range(0, 5):\n url = start_url + '?start={}'.format(10*page)\n print(url)\n urls.append(url)\n else:\n for page in range(int(int(start_url[-2:])/10), int(int(start_url[-2:])/10 + 5)):\n url = start_url[:-9] + '?start={}'.format(10*page)\n print(url)\n urls.append(url)\n for url in urls:\n response = requests.get(url,timeout=10)\n try:\n status = response.status_code\n print(status)\n except Exception as e:\n print(e)\n content = BeautifulSoup(response.content,\"html.parser\")\n for review in content.findAll('div',{\"class\":\"review__373c0__13kpL border-color--default__373c0__2oFDT\"}):\n reviewObject = {\n \"review\": review.find('p', attrs={\"class\": \"comment__373c0__1M-px css-n6i4z7\"}).text.replace(\"\\xa0\", \"\"),\n 
\"review_rating\": review.select('[aria-label*=rating]')[0]['aria-label'][:1],\n \"review_date\": review.find('span', attrs={\"class\": \"css-e81eai\"}).text\n }\n print(reviewObject)\n reviewArr.append(reviewObject)\nelse: \n print(\"Please only paste Valid URL of hotel or restaurant review from Trip Advisor and Yelp!\")",
"https://www.yelp.com/biz/the-meat-and-wine-co-melbourne-2?start=30\nhttps://www.yelp.com/biz/the-meat-and-wine-co-melbourne-2?start=40\nhttps://www.yelp.com/biz/the-meat-and-wine-co-melbourne-2?start=50\nhttps://www.yelp.com/biz/the-meat-and-wine-co-melbourne-2?start=60\nhttps://www.yelp.com/biz/the-meat-and-wine-co-melbourne-2?start=70\n200\n{'review': \"Came here for lunch on a Friday with a friend who works close by and boyyyyy was this place packed even at 1:45pm. The service was great! Daniel our wait staff was friendly and everyone was just uber nice, from the guy who took us to our tables, to the guy who brought out our food to the one who kept asking if we wanted another drink. Definitely 5 stars for service. Food was amazing... I got the pulled pork burger with chips and my friend got battered fish and chips - delightful! This is definitely my go to place whenever I'm in southgate.\", 'review_rating': '5', 'review_date': '3/6/2016'}\n{'review': 'Warm beer and tough steak cooked wrong... Epic fail... Steak ordered medium, served blue, sent back, same piece returned almost well done, shriveled up and not nice at all.. My boss was paying and said I should order the wagyu rib-eye as it would be the \"best\". It was tiny, (maybe half the 300g on the menu). It was also very sinewy. I cook better steak from the supermarket at home than this. I couldn\\'t wait to get back to my hotel room so I could go and get some food. Bad, bad service; we never got the veges we ordered. Other people in our party had ribs and said they were good...Really noisy and hard to hear the waiters too.', 'review_rating': '1', 'review_date': '2/16/2017'}\n{'review': 'Really liked this place.Had a nice ambiance and friendly staff.Had the rump which came with a salad.The meat was well presented and had an excellent taste at medium rare.The salad was impressively fresh with a nice blend of lettuce and vegetables, had a nice vinaigrette dressing and was bigger than the condiment-sized salads that seem to be a trademark in that area of town.All and all I thought it was a very good value considering its location.', 'review_rating': '4', 'review_date': '5/16/2016'}\n{'review': \"Fascinating decor and atmosphere. Just loved it! Came here with a large group of hungries (12). Although we didn't have reservation and they were BUSY we got a table in less than a half hour. And in the mean time we were well tended at the upstairs bar. Nice!I had the beef kabob with chips and a peri peri dip. Just outstanding. Cooked perfectly and A LOT of food. A number of folks had the Lamb shank and the feedback was that it was nicely executed as well. It looked gooood too!Great wine selection and oh if you are a vegetarian don't be scared away, they have an assortment of options that folks In our group found very good too. Service was good. Host staff took great care of us. Give this place a try and let me know what YOU think.\", 'review_rating': '4', 'review_date': '5/10/2014'}\n{'review': 'Visited this establishment on Thursday night and confirmed why this is my favourite restaurant. Service was excellent and the food was sensational. I have never had a bad experience here!', 'review_rating': '5', 'review_date': '11/14/2015'}\n{'review': \"Specialising in steaks i find that they do a decent job on cooking the steaks. I've had my luck with over cooked steak and some steaks cooked perfectly. The price is decent for a steak if you're having a date night with your partner. Interior design is quiet fancy and is very romantic. 
They also have private rooms for functions and parties. Love the wine list as they have so much to offer. Great service\", 'review_rating': '3', 'review_date': '10/14/2015'}\n{'review': 'Great Steak Restaurant on the Yarra Valley river front. Great food, and wine list. Large portions, totally delish. Highly recommend', 'review_rating': '5', 'review_date': '5/29/2018'}\n{'review': \"The kangaroo was fantastic! It comes with onion rings, baked potato and a delicious sweety sauce. The best dish I had ever! The environment is nice and The service is fast. The prices aren't such high. It's worth!\", 'review_rating': '5', 'review_date': '5/24/2016'}\n{'review': \"One of my favourite fine dining restaurants! Food is absolutely amazing every time I go, waiters are absolutely kind and understanding and the atmosphere is 'just right'! Can't complain!\", 'review_rating': '5', 'review_date': '10/30/2017'}\n{'review': \"Travel to Melbourne 3 or 4 times a year and come here most trips - have been pre and post renovation and really enjoyed the atmosphere and setup both before and after.What surprises me is that coming from Adelaide where we do have some excellent steak venues in the form of Argentinian (Sosta, Gaucho, Buenos Aires etc), is that the steaks are priced on par if not cheaper then the Adelaide equivalent. Considering the Southbank location and excellent quality of food I was expecting far higher prices. Anyway on to the food -Tasting platter is always a excellent entre. I always stick with either the rib eye, pork ribs or the dish which combines both. Always great and on par with the best ribs and steak's i've had anywhere. Only minor complaint last visit is that my side of mash was extremely rich / creamy - too much butter/ cheese I assume. Wasn't like this before so probably a one off. Normally when I visit Southbank its at the start of a night out and I'm with a group having a few drinks. This is where the extensive spirit and wine menu comes in handy - one of the most impressive lists I've come across in a resturant. Plenty of premium scotch to choose from!I'll be back next visit\", 'review_rating': '5', 'review_date': '8/17/2014'}\n200\n{'review': \"Second time I have been here, first time was in Sydney.We tried to book a table with very short notice and couldn't get one, however the young lady on the phone told us to just pop in and we could go up to the bar and wait for a table as people always cancel or we could just eat and order in the bar. We chose to just eat and order in the bar as we don't particularly like waiting around for any food for too long :)Me and my partner ordered ribs and just as I had suspected they were ridiculously good ribs, I mean they are ridiculous.Wish this place was in Adelaide so we can taste all the magnificent items on the menu!Do yourself a favour and stop in!!!!\", 'review_rating': '5', 'review_date': '10/1/2014'}\n{'review': \"I went there with my girlfriend on her birthday. For entree, I ordered the scallop and prawn. The prawn was a bit spicy. For main, I ordered a rib eye and lamb rib. The rib eye was a bit disappointed with considering the price, however, the lamb rib was stunning with excellent seasoning. Also, I ordered a creme bulee and semiferddo. The desserts are all fantastic. What's more, the waiting staff were friendly and nice.\", 'review_rating': '4', 'review_date': '9/11/2016'}\n{'review': \"Been here several times and it never disappoints! If you're after a great steak fix this is one of the best places to go! 
Staff were great, friendly and knowledgeable about the food and wine. Have used the private dinning room upstairs for birthday dinners which has a great view of Southbank and the service was excellent. All the food came out at around the same time for the whole group and everyone was happy!\", 'review_rating': '5', 'review_date': '5/25/2017'}\n{'review': 'Excellent as always, Staff are friendly, attentive and know the menu well and are helpful with suggestions we where served by (Shaz, she was great) . The food is excellent would happily return here again and again. Prices are surprisingly good for the quality of food.', 'review_rating': '4', 'review_date': '8/13/2017'}\n{'review': 'This places ambience was incredible, quite mellow and the meat basically melted in my mouth. The only downfall is that I almost tripped over their carpet upon entering and their hot mustard is way too intense.', 'review_rating': '4', 'review_date': '6/9/2017'}\n{'review': 'I feel like I missed the boat here, as I tried the kangaroo steak for novelty reasons and was unimpressed compared to the delicious beef steaks all of my friends got. Definitely a good meal, but I had steak envy.', 'review_rating': '4', 'review_date': '2/15/2014'}\n{'review': \"The Meat and Wine Co is a steakhouse of the highest calibre. If you want aged beef that has been basting in house and then cooked to perfection, there are few places I know better than the Meat and Wine Co (perhaps The Italian does a better steak).The Meat & Wine Co, Melbourne is located at Freshwater Place, Queensbridge Square in Southbank and has stunning views of the Yarra River and the city skyline. The steaks are mammoth and delicious. They also specialize in ribs - lamb, beef and pork. Although you could be mistaken for confusing them for Flintstonesesque dinosaur ribs (they are HUGE). For $49, you can give go all out and have a steak and ribs. You even get a massive bib for the massive mess you are about to make.The Meat and Wine Co isn't cheap. But you do get a quality feed that is well worth it. Enjoy pigging out!\", 'review_rating': '4', 'review_date': '10/15/2011'}\n{'review': \"Next to and with views of the Yarra River this hip looking steak house is great. Good wine selection although nothing from CA or OR. The meats are very good quality but they overdue the orders a little but not enough to complain at the restaurant. The meats are complimented with a variety of sauces, tried two of them, one was great and the other slightly bland. The portugués suace was amazing and the garlic sauce was bland and tasteless. Service was good and attentive although multiple servers would come and top up our glasses which I don't like much, I rather our main server attend to us more. It was good just good. Maybe I'll try it again for AO 2018.\", 'review_rating': '3', 'review_date': '1/27/2017'}\n{'review': 'Love this place. It is casual yet trendy and comfortable. Steaks amazing and hubby went for Wagu rib eye. He is in heaven. I had the sumptuous Salmon on a hanging skewer cooked medium. Drink list well composed and happy hour drinks upstairs.', 'review_rating': '4', 'review_date': '11/23/2015'}\n{'review': 'I went to this restaurant 3 days in a row, from Sydney to Melbourne you cud c how much I lv this restaurant. Unfortunately I had an embarrassing dinning experience tonite. I saw a cockroach climbing slowly on the wall while I was dinning happily w my brother. I almost shouted when I saw it. It was really disgusting to cover the good taste of half rack of the beef rib. 
I guess no more four days in a row.', 'review_rating': '3', 'review_date': '8/6/2014'}\n200\n{'review': 'Food was great, but the wait staff pulled the rating down. I stopped by on a Tuesday night, and as a singleton, they were available to seat me immediately. Unfortunately my seat was all the way at the back of the restaurant, but hey - \"Party of One\", I didn\\'t expect much better. I waited a longish time after being seated to see a waiter and I was seriously considering walking out. My waiter did finally appear and I ordered a smallish steak with a side order of onion strings and macaroni and cheese. The steak was very tasty as were the side dishes. Overall I was happy with my experience, but the service could have been better - at least speeding up the first contact with the client. I understand that this is a very busy place and a single diner may not be the highest priority but it did pull down my rating a bit.', 'review_rating': '4', 'review_date': '8/30/2013'}\n{'review': \"I have been to this brand over a few years but in Darling Harbour not the South Bank one and it was awesome every time. This one just doesn't hit the mark at all! It now makes perfect sense that it has McDonalds to the left and TGI to the right! It's in great company. Very disappointing.\", 'review_rating': '2', 'review_date': '3/9/2018'}\n{'review': 'Was here for a 40th dinner with friends. The service is amazing and the food was so lovely!! My suggestion is that if you like your steak medium ask for medium rare. My steak was a little over cooked but still yummy!!!', 'review_rating': '4', 'review_date': '12/12/2016'}\n{'review': 'fantastic experience and extremely enjoy the food and wine there. the service is great and the view is good too.', 'review_rating': '5', 'review_date': '10/2/2016'}\n{'review': 'So far, our favorite food spot on our trip to Melbourne. Our group of 11 from the USA, all raved about how well seasoned and flavorful their dishes were. I had the Portuguese Chicken and chick pea salad and both were awesome. Food was served quickly to our large group. Atmosphere was industrial modern and very clean.', 'review_rating': '5', 'review_date': '8/17/2015'}\n{'review': \"-Should have been 4 stars as steaks were really good - but the wait staff ... although polite, they messed up a few times- Few errors such as sauces on steaks when not requested, wrong entrees etc, but the big one; 2 waitresses did not know what steaks (as in literally not even the cut) they were serving us. We have a pretty good knowledge of cuts so we could sort it out, but for people who don't, that could mean the difference between eating a $34 Monte rib-eye when you've ordered a $59 wagyu rib-eye-Chewy steak sandwich; steak was well-done-TL;DR: Great steaks, staff need more training\", 'review_rating': '3', 'review_date': '12/1/2012'}\n{'review': \"It's a chain restaurant, I know. But the steak is bloody good as all the rest, wine selection really interesting and lots of different food that you can choose from. I loved it!\", 'review_rating': '4', 'review_date': '4/30/2015'}\n{'review': 'I would give it more of a 4.5, but it was excellent! Started off with the pork belly appetizer, which was just fantastic. We then got the kangaroo steak, medium rare. It was pretty perfect. My buddy said the kangaroo steak was \"AMAZING\". Great atmosphere and very relaxed. 
I\\'d just say to make sure to make reservations well in advance, especially on weekend nights.', 'review_rating': '5', 'review_date': '1/3/2015'}\n{'review': 'Had a fantastic experience here on Tuesday night. Took some clients for dinner and we were all super impressed, the service was fantastic and all of our steaks were cooked to perfection.....recommended!!!', 'review_rating': '5', 'review_date': '9/21/2016'}\n{'review': \"Well, I think this is quite a decent place to have a steak. I ordered Medium Rare Rib Eye, but the waiter recommended a Medium instead. So I accepted his recommendation. But when he came it looked Medium Rare. So was it an error. or was it a wrong order. or is that their Medium . Not sure. but it still tasted good, no complains. Loved the African Hot Sauce, suits you only if you can have spicy food. cos' i love spicy :) definitely will go once in awhile.\", 'review_rating': '4', 'review_date': '6/9/2014'}\n200\n{'review': \"Definitely still a fan of this place..but I guess it's just my meat loving instincts taking over.I'm not much of a salad eater but, from the salads I've had in my lifetimes, this place makes the best caesar salad around (minus the anchovies).\", 'review_rating': '4', 'review_date': '6/15/2012'}\n{'review': 'Finally had the chance to experience the hypeWell I am pleased to say Meat & Wine Co. lived up to it Although only being a in Melb a short time I have had the chance to try out a number of steak houses and usually stick with 2 staples .... Steak and ShirazThe Rib-eye selected had been wonderfully aged and seasoned, then perfectly cooked resulting in the best steak I have experienced in Melbourne thus farThe chips were crispy and hot (which is a plus compared to some places) and the peppercorn sauce was good also The wine list was of reasonable length and provided for a decent priced Shiraz to accompany the mealDespite the warning of prices going in I was pleasantly surprised to find them similar to comparable Steak HousesHighly recommend if your after a steak worth remembering P.S. No salad was harmed in the consumption of this meal', 'review_rating': '4', 'review_date': '9/12/2012'}\n{'review': 'Food www okay and service was mediocre. For the price had hoped for much better. Also, no a/c so definitely not a hot day spot.', 'review_rating': '2', 'review_date': '3/1/2017'}\n{'review': 'Great staff and service we had a red balloon voucher I asked if they could change one of the mains and they did. Also we did not like the table we were seated in so we asked to move and we did. One of my favourite restaurants to go to. Fantastic tasty food!', 'review_rating': '4', 'review_date': '2/8/2014'}\n{'review': 'Great steak and a romantic/bromantic atmosphere. Never a disappointment. One of the best steakhouses around!', 'review_rating': '5', 'review_date': '8/13/2013'}\n{'review': \"Not being the biggest meat eater of all time, I was a little worried about what to order at the Meat & Wine Co. I ordered the kangaroo fillet which was cooked how it should be, a little rare but not too bloody and it was marinated beautifully. I haven't sampled much else on the menu but I am eager to return. This place gets busy so its recommended to book. Also a great venue for groups and work functions.\", 'review_rating': '3', 'review_date': '11/7/2011'}\n{'review': 'Great food, pity about the crappy service, our waitress was no where to be seen the entire night; had to keep asking other servers when we wanted more bottles of wine or to top up water. 
Plus their inability to cater for even the most basic needs of a disabled customer we were entertaining was flabbergasting!!!', 'review_rating': '1', 'review_date': '4/14/2014'}\n{'review': \"The only place I've found in Melbourne that cooks full sized beef ribs. WHich is about the only thing that can tide me over between trips to Texas.Steak and ribs are superb here, and a reservation is definitely recommended.The burger, however, is a travesty. Stick with the basic cuts of meat, devour the chunky, crispy fries and enjoy the extra iron hit from the red meat.\", 'review_rating': '3', 'review_date': '11/5/2011'}\n{'review': 'Our first night in Melbourne we asked the concierge of our hotel if they knew of a place that served kangaroo meat. He called around and after about 15 minutes found this place. He even called and got us reservations for that night. We got there and the place was packed. So we knew that this place would be good. I had kangaroo steak and my wife had a regular cow steak. The food was really good. The atmosphere was a little loud and they tried to maximize the amount of people in the restaurant. But over all the experience was fantastic. Would love to go back here again.', 'review_rating': '4', 'review_date': '4/11/2012'}\n{'review': 'I have been here about a year ago and today visited again. I ordered garlic bread which was like rock with no flavour. I also ordered my usual favourite.... Pork ribs. Very much lacked flavour. The quality of food has certainly dropped, I would normal give a five star.', 'review_rating': '3', 'review_date': '9/20/2014'}\n200\n{'review': 'As a South African living in Melbourne I am very proud of this restaurant showcasing good quality South African wines to the rest of Aus and the world and also giving them a small taste of what a South African restaurant is all about. A little bit of home far from home. Doen so voort manne!', 'review_rating': '5', 'review_date': '6/7/2012'}\n{'review': 'Each time I have eaten here, trully enjoyed it. ABSOLUTELY love the food. RIBS were perfect! Pricing as you would expect , though worth it.', 'review_rating': '5', 'review_date': '7/18/2013'}\n"
],
[
"response = requests.get(start_url,timeout=10)\ncontent = BeautifulSoup(response.content,\"html.parser\")",
"_____no_output_____"
],
[
"divs = content.findAll('div',attrs={\"class\":\"ui_pagination\"})",
"_____no_output_____"
],
[
"len(divs)",
"_____no_output_____"
],
[
"divs[0]",
"_____no_output_____"
],
[
"for c in divs[0].children:\n print(c)\n print()",
"<span class=\"ui_button nav previous secondary disabled\">Previous</span>\n\n<a class=\"ui_button nav next primary\" href=\"/Hotel_Review-g255100-d255744-Reviews-or5-Atlantis_Hotel-Melbourne_Victoria.html\">Next</a>\n\n<div class=\"pageNumbers\"><span class=\"pageNum current disabled\">1</span><a class=\"pageNum\" href=\"/Hotel_Review-g255100-d255744-Reviews-or5-Atlantis_Hotel-Melbourne_Victoria.html\">2</a><a class=\"pageNum\" href=\"/Hotel_Review-g255100-d255744-Reviews-or10-Atlantis_Hotel-Melbourne_Victoria.html\">3</a><a class=\"pageNum\" href=\"/Hotel_Review-g255100-d255744-Reviews-or15-Atlantis_Hotel-Melbourne_Victoria.html\">4</a><a class=\"pageNum\" href=\"/Hotel_Review-g255100-d255744-Reviews-or20-Atlantis_Hotel-Melbourne_Victoria.html\">5</a><a class=\"pageNum\" href=\"/Hotel_Review-g255100-d255744-Reviews-or25-Atlantis_Hotel-Melbourne_Victoria.html\">6</a><span class=\"separator\">…</span><a class=\"pageNum\" href=\"/Hotel_Review-g255100-d255744-Reviews-or2120-Atlantis_Hotel-Melbourne_Victoria.html\">425</a></div>\n\n"
],
[
"for c in divs[0].children:\n if 'pageNumbers' in c['class']:\n for cc in c.children:\n # print(cc)\n p = {}\n p['name'] = cc.string\n if 'disabled' in cc['class']:\n p['status'] = 'disable'\n else:\n p['status'] = None\n if 'href' in cc.attrs:\n p['url'] = cc['href']\n else:\n p['url'] = None\n print(p)\n else:\n p = {}\n p['name'] = c.string\n if 'disabled' in c['class']:\n p['status'] = 'disable'\n else:\n p['status'] = None\n if 'href' in c.attrs:\n p['url'] = c['href']\n else:\n p['url'] = None\n print(p)",
"{'name': 'Previous', 'status': 'disable', 'url': None}\n{'name': 'Next', 'status': None, 'url': '/Hotel_Review-g255100-d255744-Reviews-or5-Atlantis_Hotel-Melbourne_Victoria.html'}\n{'name': '1', 'status': 'disable', 'url': None}\n{'name': '2', 'status': None, 'url': '/Hotel_Review-g255100-d255744-Reviews-or5-Atlantis_Hotel-Melbourne_Victoria.html'}\n{'name': '3', 'status': None, 'url': '/Hotel_Review-g255100-d255744-Reviews-or10-Atlantis_Hotel-Melbourne_Victoria.html'}\n{'name': '4', 'status': None, 'url': '/Hotel_Review-g255100-d255744-Reviews-or15-Atlantis_Hotel-Melbourne_Victoria.html'}\n{'name': '5', 'status': None, 'url': '/Hotel_Review-g255100-d255744-Reviews-or20-Atlantis_Hotel-Melbourne_Victoria.html'}\n{'name': '6', 'status': None, 'url': '/Hotel_Review-g255100-d255744-Reviews-or25-Atlantis_Hotel-Melbourne_Victoria.html'}\n{'name': '…', 'status': None, 'url': None}\n{'name': '425', 'status': None, 'url': '/Hotel_Review-g255100-d255744-Reviews-or2120-Atlantis_Hotel-Melbourne_Victoria.html'}\n"
],
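[
"# (Added sketch, not in the original notebook) Consolidates the exploratory pagination parsing\n# above into a helper that returns a list of dicts like the ones printed in the previous cell.\n# It assumes the same TripAdvisor markup ('ui_pagination' / 'pageNumbers') inspected above.\ndef get_pagination_links(page_content):\n    links = []\n    pagination = page_content.find('div', attrs={'class': 'ui_pagination'})\n    if pagination is None:\n        return links\n    for node in pagination.find_all(['a', 'span']):\n        links.append({\n            'name': node.string,\n            'status': 'disable' if 'disabled' in node.get('class', []) else None,\n            'url': node.get('href')\n        })\n    return links\n\nget_pagination_links(content)",
"_____no_output_____"
],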
[
"divs[0].contents[0]['class']",
"_____no_output_____"
],
[
"'href' in divs[0].contents[1].attrs",
"_____no_output_____"
],
[
"len(divs[0].contents[1])",
"_____no_output_____"
],
[
"divs[0].contents[0].string",
"_____no_output_____"
],
[
"'next' in divs[0].contents[1]['class']",
"_____no_output_____"
],
[
"divs[0].contents[2].contents",
"_____no_output_____"
],
[
"divs = content.find_all(name=\"div\",attrs={\"class\":re.compile(r\"navigation-button-container(\\s\\w+)?\")})",
"_____no_output_____"
],
[
"divs[0]",
"_____no_output_____"
],
[
"divs[1]",
"_____no_output_____"
],
[
"len(divs)",
"_____no_output_____"
],
[
"divs[0].find(\"a\",attrs={\"class\":\"prev-link\"})",
"_____no_output_____"
],
[
"divs[1].find(\"a\",attrs={\"class\":\"next-link\"})",
"_____no_output_____"
],
[
"pagination-link-container",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b66e62effb43aadbf60987a05131d3c8648559 | 668,386 | ipynb | Jupyter Notebook | notebooks/source/bayesian_regression.ipynb | Anthonymcqueen21/numpyro | 94efe3a35491465eba66465b4dd1d4fb870d6c8c | [
"MIT"
] | 1 | 2019-06-24T04:27:18.000Z | 2019-06-24T04:27:18.000Z | notebooks/source/bayesian_regression.ipynb | Anthonymcqueen21/numpyro | 94efe3a35491465eba66465b4dd1d4fb870d6c8c | [
"MIT"
] | null | null | null | notebooks/source/bayesian_regression.ipynb | Anthonymcqueen21/numpyro | 94efe3a35491465eba66465b4dd1d4fb870d6c8c | [
"MIT"
] | null | null | null | 330.884158 | 168,928 | 0.89934 | [
[
[
"# Bayesian Regression Using NumPyro\n\nIn this tutorial, we will explore how to do bayesian regression in NumPyro, using a simple example adapted from Statistical Rethinking [[1](#References)]. In particular, we would like to explore the following:\n\n - Write a simple model using the `sample` NumPyro primitive.\n - Run inference using MCMC in NumPyro, in particular, using the No U-Turn Sampler (NUTS) to get a posterior distribution over our regression parameters of interest.\n - Learn about utilities such as `initialize_model` that are useful for running HMC.\n - Learn how we can use effect-handlers in NumPyro to generate execution traces, condition on sample sites, seed models with RNG seeds, etc., and use this to implement various utilities that will be useful for MCMC. e.g. computing model log likelihood, generating empirical distribution over the posterior predictive, etc.\n\n## Tutorial Outline:\n1. [Dataset](#Dataset)\n2. [Regression Model to Predict Divorce Rate](#Regression-Model-to-Predict-Divorce-Rate)\n - [Model-1: Predictor-Marriage Rate](#Model-1:-Predictor---Marriage-Rate)\n - [Posterior Distribution over the Regression Parameters](#Posterior-Distribution-over-the-Regression-Parameters)\n - [Posterior Predictive Distribution](#Posterior-Predictive-Distribution)\n - [Model Log Likelihood](#Model-Log-Likelihood)\n - [Model-2: Predictor-Median Age of Marriage](#Model-2:-Predictor---Median-Age-of-Marriage)\n - [Model-3: Predictor-Marriage Rate and Median Age of Marriage](Model-3:-Predictor---Marriage-Rate-and-Median-Age-of-Marriage)\n - [Divorce Rate Residuals by State](#Divorce-Rate-Residuals-by-State)\n3. [Regression Model with Measurement Error](#Regression-Model-with-Measurement-Error)\n - [Effect of Incorporating Measurement Noise on Residuals](#Effect-of-Incorporating-Measurement-Noise-on-Residuals)\n4. [References](#References)",
"_____no_output_____"
]
],
[
[
"%reset -s -f",
"_____no_output_____"
],
[
"import jax\nimport jax.numpy as np\nfrom jax import random, vmap\nfrom jax.config import config; config.update(\"jax_platform_name\", \"cpu\")\nfrom jax.scipy.special import logsumexp\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as onp\nimport pandas as pd\nimport seaborn as sns\n\nfrom numpyro.diagnostics import hpdi\nimport numpyro.distributions as dist\nfrom numpyro.handlers import sample, seed, substitute, trace\nfrom numpyro.hmc_util import initialize_model\nfrom numpyro.mcmc import mcmc\n\n%matplotlib inline\nplt.style.use('bmh')\nplt.rcParams.update({'font.size': 16,\n 'xtick.labelsize': 14,\n 'ytick.labelsize': 14,\n 'axes.titlesize': 'large', \n 'axes.labelsize': 'medium'})",
"_____no_output_____"
]
],
[
[
"## Dataset\n\nFor this example, we will use the `WaffleDivorce` dataset from Chapter 05, Statistical Rethinking [[1](#References)]. The dataset contains divorce rates in each of the 50 states in the USA, along with predictors such as population, median age of marriage, whether it is a Southern state and, curiously, number of Waffle Houses.",
"_____no_output_____"
]
],
[
[
"DATASET_URL = 'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/WaffleDivorce.csv'\ndset = pd.read_csv(DATASET_URL, sep=';')\ndset",
"_____no_output_____"
]
],
[
[
"Let us plot the pair-wise relationship amongst the main variables in the dataset, using `seaborn.pairplot`. ",
"_____no_output_____"
]
],
[
[
"vars = ['Population', 'MedianAgeMarriage', 'Marriage', 'WaffleHouses', 'South', 'Divorce']\nsns.pairplot(dset, x_vars=vars, y_vars=vars, palette='husl');",
"_____no_output_____"
]
],
[
[
"From the plots above, we can clearly observe that there is a relationship between divorce rates and marriage rates in a state (as might be expected), and also between divorce rates and median age of marriage. \n\nThere is also a weak relationship between number of Waffle Houses and divorce rates, which is not obvious from the plot above, but will be clearer if we regress `Divorce` against `WaffleHouse` and plot the results. This is an example of a spurious association. We do not expect the number of Waffle Houses in a state to affect the divorce rate, but it is likely correlated with other factors that have an effect on the divorce rate. We will not delve into this spurious association in this tutorial, but the interested reader is encouraged to read Chapters 5 and 6 of [[1](#References)] which explores the problem of causal association in the presence of multiple predictors. \n\nFor simplicity, we will primarily focus on marriage rate and the median age of marriage as our predictors for divorce rate throughout the remaining tutorial.",
"_____no_output_____"
]
],
[
[
"sns.regplot('WaffleHouses', 'Divorce', dset);",
"_____no_output_____"
]
],
[
[
"## Regression Model to Predict Divorce Rate\n\nLet us now write a regressionn model in *NumPyro* to predict the divorce rate as a linear function of marriage rate and median age of marriage in each of the states. \n\nFirst, note that our predictor variables have somewhat different scales. It is a good practice to standardize our predictors and response variables to mean `0` and standard deviation `1`, which should result in faster inference. Refer to this [note](https://mc-stan.org/docs/2_19/stan-users-guide/standardizing-predictors-and-outputs.html) in the Stan manual for more details.",
"_____no_output_____"
]
],
[
[
"dset['AgeScaled'] = (dset.MedianAgeMarriage - onp.mean(dset.MedianAgeMarriage)) / onp.std(dset.MedianAgeMarriage)\ndset['MarriageScaled'] = (dset.Marriage - onp.mean(dset.Marriage)) / onp.std(dset.Marriage)\ndset['DivorceScaled'] = (dset.Divorce - onp.mean(dset.Divorce)) / onp.std(dset.Divorce)",
"_____no_output_____"
]
],
[
[
"We write the NumPyro model as follows. While the code should largely be self-explanatory, take note of the following:\n\n - In NumPyro, model code is any Python callable that can accept arguments and keywords. For HMC which we will be using for this tutorial, these arguments and keywords cannot change during model execution. This is convenient for passing in numpy arrays, or boolean arguments that might affect the execution path.\n - In addition to regular Python statements, the model code also contains primitives like `sample`. These primitives can be interpreted with various side-effects by effect handlers used by inference algorithms in NumPyro. For more on effect handlers, refer to [[3](#References)], [[4](#References)]. For now, just remember that a `sample` statement makes this a stochastic function by sampling from some distribution of interest.\n - The reason why we have kept our predictors as optional keyword arguments is to be able to reuse the same model as we vary the set of predictors. Likewise, the reason why the response variable is optional is that we would like to reuse this model to sample from the posterior predictive distribution. See the [section](#Posterior-Predictive-Distribution) on plotting the posterior predictive distribution, as an example.",
"_____no_output_____"
]
],
[
[
"def model(marriage=None, age=None, divorce=None):\n a = sample('a', dist.Normal(0., 0.2))\n M, A = 0., 0.\n if marriage is not None:\n bM = sample('bM', dist.Normal(0., 0.5))\n M = bM * marriage\n if age is not None:\n bA = sample('bA', dist.Normal(0., 0.5))\n A = bA * age\n sigma = sample('sigma', dist.Exponential(1.))\n mu = a + M + A\n sample('obs', dist.Normal(mu, sigma), obs=divorce)",
"_____no_output_____"
]
],
[
[
"### Model 1: Predictor - Marriage Rate\n\nWe first try to model the divorce rate as depending on a single variable, marriage rate. As mentioned above, we can use the same `model` code as earlier, but only pass values for `marriage` and `divorce` keyword arguments. We will use the No U-Turn Sampler (see [[5](#References)] for more details on the NUTS algorithm) to run inference on this simple model.\n\n\nNote the following requirements for running HMC and NUTS in NumPyro:\n\n - The Hamiltonian Monte Carlo (or, the NUTS) implementation in Pyro takes in a potential energy function. This is the negative log joint density for the model. \n - The verlet integrator in HMC (or, NUTS) returns sample values simulated using Hamiltonian dynamics in the unconstrained space. As such, continuous variables with bounded support need to be transformed into unconstrained space using bijective transforms. We also need to transform these samples back to their constrained support before returning these values to the user.\n \nThankfully, all of this is handled on the backend for us. Let us go through the steps one by one.\n\n - JAX uses functional PRNGs. Unlike other languages / frameworks which maintain a global random state, in JAX, every call to a sampler requires an [explicit PRNGKey](https://github.com/google/jax#random-numbers-are-different). We will split our initial random seed for subsequent operations, so that we do not accidentally reuse the same seed.\n - The function [initialize_model](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.hmc_util.initialize_model) takes a model along with model arguments (and keyword arguments), and returns a tuple of initial parameters, potential energy function, and constrain function. The initial parameters are used to initiate the MCMC chain, the potential energy function is a callable that when given unconstrained sample values returns the potential energy at these sample values. This is used by the verlet integrator in HMC. Lastly, `constrain_fn` is a callable that transforms the unconstrained samples returned by HMC/NUTS to sample values that lie within the constrained support.\n - Finally, we use the [mcmc](https://numpyro.readthedocs.io/en/latest/mcmc.html#numpyro.mcmc.mcmc) function to run inference using the default `NUTS` sampler. Note that to run vanilla HMC, all you need to do is to pass `algo='HMC'` as argument to `mcmc` instead. This is a convenience utility that does all of the following:\n\n - Runs warmup - adapts steps size and mass matrix.\n - Uses the sample from the warmup phase to start MCMC.\n - Return samples from the posterior distribution and print diagnostic information.",
"_____no_output_____"
]
],
[
[
"# Start from this source of randomness. We will split keys for subsequent operations.\nrng = random.PRNGKey(0)\nrng_, rng = random.split(rng)\n\n# Initialize the model.\ninit_params, potential_fn, constrain_fn = initialize_model(rng_, model, \n marriage=dset.MarriageScaled.values, \n divorce=dset.DivorceScaled.values)\nnum_warmup, num_samples = 1000, 2000\n\n# Run NUTS.\nsamples_1 = mcmc(num_warmup, num_samples, init_params,\n potential_fn=potential_fn, \n trajectory_length=10, \n constrain_fn=constrain_fn)",
"warmup: 100%|██████████| 1000/1000 [00:12<00:00, 78.24it/s, 1 steps of size 6.99e-01. acc. prob=0.79] \nsample: 100%|██████████| 2000/2000 [00:03<00:00, 515.37it/s, 3 steps of size 6.99e-01. acc. prob=0.88]\n"
]
],
[
[
"#### Posterior Distribution over the Regression Parameters\n\nWe notice that the progress bar gives us online statistics on the acceptance probability, step size and number of steps taken per sample while running NUTS. In particular, during warmup, we adapt the step size and mass matrix to achieve a certain target acceptance probability (0.8, by default). We were able to successfully adapt our step size to achieve this target in the warmup phase.\n\nDuring warmup, the aim is to adapt or learn values for hyper-parameters such as step size and mass matrix (the HMC algorithm is very sensitive to these hyper-parameters), and to reach the typical set (see [[6](#References)] for more details). If there are any issues in the model specification, it might be reflected in low acceptance probabilities or very high number of steps. We use the sample from the end of the warmup phase to seed the MCMC chain (denoted by the second `sample` progress bar) from which we generate the desired number of samples from our target distribution.\n\nAt the end of inference, NumPyro prints the mean, std and 90% CI values for each of the latent parameters. Note that since we standardized our predictors and response variable, we would expect the intercept to have mean 0, as can be seen here. It also prints other convergence diagnostics on the latent parameters in the model, including [effective sample size](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.effective_sample_size) and the [gelman rubin diagnostic](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.gelman_rubin) ($\\hat{R}$). The value for these diagnostics indicates that the chain has converged to the target distribution. In our case, the \"target distribution\" is the posterior distribution over the latent parameters that we are interested in. Note that this is often worth verifying with multiple chains on more complicated models. In the end, `samples_1` is a collection (in our case, a `dict` since `init_samples` was a `dict`) containing samples from the posterior distribution for each of the latent parameters in the model.\n\nTo look at our regression fit, let us plot the regression line using our posterior estimates for the regression parameters, along with the 90% Credibility Interval (CI). Note that the [hpdi](https://numpyro.readthedocs.io/en/latest/diagnostics.html#numpyro.diagnostics.hpdi) function in NumPyro's diagnostics module can be used to compute CI. In the functions below, note that the collected samples from the posterior are all along the leading axis.\n\nWe can see from the plot, that the CI broadens towards the tails where values of the predictor variables are sparse, as can be expected.",
"_____no_output_____"
]
],
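[
[
"# (Added sketch, not part of the original tutorial) Summarize the marginal posterior for the\n# marriage-rate coefficient directly from the collected samples, using np.mean / np.std and the\n# hpdi utility imported at the top of this notebook.\nbM_post = samples_1['bM']\nprint('bM mean: {:.3f}'.format(float(np.mean(bM_post))))\nprint('bM std: {:.3f}'.format(float(np.std(bM_post))))\nprint('bM 90% CI:', hpdi(bM_post, 0.9))",
"_____no_output_____"
]
],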
[
[
"def plot_regression(x, y_mean, y_hpdi):\n # Sort values for plotting by x axis\n idx = np.argsort(x)\n marriage = x[idx]\n mean = y_mean[idx]\n hpdi = y_hpdi[:, idx]\n divorce = dset.DivorceScaled.values[idx]\n\n # Plot\n fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))\n ax.plot(marriage, mean)\n ax.plot(marriage, divorce, 'o')\n ax.fill_between(marriage, hpdi[0], hpdi[1], alpha=0.3, interpolate=True)\n return ax\n\n# Compute empirical posterior distribution over mu\nposterior_mu = np.expand_dims(samples_1['a'], -1) + \\\n np.expand_dims(samples_1['bM'], -1) * dset.MarriageScaled.values\n\nmean_mu = np.mean(posterior_mu, axis=0)\nhpdi_mu = hpdi(posterior_mu, 0.9)\nax = plot_regression(dset.MarriageScaled.values, mean_mu, hpdi_mu)\nax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Regression line with 90% CI');",
"_____no_output_____"
]
],
[
[
"#### Posterior Predictive Distribution\n\nLet us now look at the posterior predictive distribution to see how our predictive distribution looks with respect to the observed divorce rates. To get samples from the posterior predictive distribution, we need to run the model by substituting the latent parameters with samples from the posterior. This sounds complicated, but this can be easily achieved by using effect handlers from the [handlers module](https://numpyro.readthedocs.io/en/latest/handlers.html).\n\nIn particular, note the use of the `substitute`, `seed` and `trace` effect handlers in the `predict` function.\n\n - The `seed` effect-handler is used to wrap a stochastic function with an initial `PRNGKey` seed. When a sample statement inside the model is called, it uses the existing seed to sample from a distribution but this effect-handler also splits the existing key to ensure that future `sample` calls in the model use the newly split key instead. This is to prevent us from having to explicitly pass in a `PRNGKey` to each `sample` statement.\n - The `substitute` effect handler simply substitutes the value for the site name present in the `post_samples` dict instead of sampling from the distribution, which can be useful for conditioning sample sites to certain values.\n - The `trace` effect handler runs the model and records the execution trace within an `OrderedDict`. This trace object contains execution metadata that is useful for computing quantities such as the log joint density.\n \nIt should be clear now that the `predict` function simply runs the model by substituting the latent parameters with samples from the posterior (generated by the `mcmc` function) to generate predictions, which are samples from the posterior predictive distribution. Note the use of JAX's auto-vectorization transform called [vmap](https://github.com/google/jax#auto-vectorization-with-vmap) to vectorize predictions. If we didn't use `vmap`, we would have to use a native for loop which for each sample which is much slower. Each draw from the posterior can be used to get predictions over all the 50 states. When we vectorize this over all the samples from the posterior using `vmap`, we will get a `predictions_1` array of shape `(num_samples, 50)`. We can then compute the mean and 90% CI of these samples to plot the posterior predictive distribution.",
"_____no_output_____"
]
],
[
[
"def predict(rng, post_samples, model, *args, **kwargs):\n model = substitute(seed(model, rng), post_samples)\n model_trace = trace(model).get_trace(*args, **kwargs)\n return model_trace['obs']['value']",
"_____no_output_____"
],
[
"# vectorize predictions via vmap\npredict_fn = vmap(lambda rng, samples: predict(rng, samples, model, marriage=dset.MarriageScaled.values))\nrng, rng_ = random.split(rng)\npredictions_1 = predict_fn(random.split(rng_, num_samples), samples_1)\n\nmean_pred = np.mean(predictions_1, axis=0)\nhpdi_pred = hpdi(predictions_1, 0.9)\n\nax = plot_regression(dset.MarriageScaled.values, mean_pred, hpdi_pred)\nax.set(xlabel='Marriage rate', ylabel='Divorce rate', title='Predictions with 90% CI');",
"_____no_output_____"
]
],
[
[
"We will use the same `plot_regression` function as earlier. We notice that our CI for the predictive distribution is much broader as compared to the last plot due to the additional noise introduced by the `sigma` parameter. Note that most data points lie well within the 90% CI, which indicates a good fit.\n\n#### Model Log Likelihood\n\nLikewise, making use of effect-handlers and `vmap`, we can also compute the log likelihood for this model given the dataset.",
"_____no_output_____"
]
],
[
[
"def log_lk(rng, params, model, *args, **kwargs):\n model = substitute(seed(model, rng), params)\n model_trace = trace(model).get_trace(*args, **kwargs)\n obs_node = model_trace['obs']\n return np.sum(obs_node['fn'].log_prob(obs_node['value']))\n \ndef expected_log_likelihood(rng, params, model, *args, **kwargs):\n n = list(params.values())[0].shape[0]\n log_lk_fn = vmap(lambda rng, params: log_lk(rng, params, model, *args, **kwargs))\n log_lk_vals = log_lk_fn(random.split(rng, n), params)\n return logsumexp(log_lk_vals) - np.log(n)",
"_____no_output_____"
],
[
"rng, rng_ = random.split(rng)\nprint('Log likelihood: {}'.format(expected_log_likelihood(rng_,\n samples_1, \n model,\n marriage=dset.MarriageScaled.values,\n divorce=dset.DivorceScaled.values)))",
"Log likelihood: -68.14618682861328\n"
]
],
[
[
"### Model 2: Predictor - Median Age of Marriage\n\nWe will now model the divorce rate as a function of the median age of marriage. The computations are mostly a reproduction of what we did for Model 1. Notice the following:\n\n - Divorce rate is inversely related to the age of marriage. Hence states where the median age of marriage is low will likely have a higher divorce rate.\n - We get a higher log likelihood of -60.92 as compared to -68.15 with Model 2, indicating that median age of marriage is likely a much better predictor of divorce rate. ",
"_____no_output_____"
]
],
[
[
"rng, rng_ = random.split(rng)\ninit_params, potential_fn, constrain_fn = initialize_model(rng_, model, \n age=dset.AgeScaled.values, \n divorce=dset.DivorceScaled.values)\n\nsamples_2 = mcmc(num_warmup, num_samples, init_params,\n potential_fn=potential_fn, \n trajectory_length=10, \n constrain_fn=constrain_fn)",
"warmup: 100%|██████████| 1000/1000 [00:12<00:00, 79.17it/s, 3 steps of size 6.96e-01. acc. prob=0.79] \nsample: 100%|██████████| 2000/2000 [00:04<00:00, 470.51it/s, 3 steps of size 6.96e-01. acc. prob=0.88]\n"
],
[
"posterior_mu = np.expand_dims(samples_2['a'], -1) + \\\n np.expand_dims(samples_2['bA'], -1) * dset.AgeScaled.values\nmean_mu = np.mean(posterior_mu, axis=0)\nhpdi_mu = hpdi(posterior_mu, 0.9)\nax = plot_regression(dset.AgeScaled.values, mean_mu, hpdi_mu)\nax.set(xlabel='Median marriage age', ylabel='Divorce rate', title='Regression line with 90% CI');",
"_____no_output_____"
],
[
"rng, rng_ = random.split(rng)\npredict_fn = vmap(lambda rng, samples: predict(rng, samples, model, age=dset.AgeScaled.values))\npredictions_2 = predict_fn(random.split(rng_, num_samples), samples_2)\n\nmean_pred = np.mean(predictions_2, axis=0)\nhpdi_pred = hpdi(predictions_2, 0.9)\n\nax = plot_regression(dset.AgeScaled.values, mean_pred, hpdi_pred)\nax.set(xlabel='Median Age', ylabel='Divorce rate', title='Predictions with 90% CI');",
"_____no_output_____"
],
[
"rng, rng_ = random.split(rng)\nprint('Log likelihood: {}'.format(expected_log_likelihood(rng_,\n samples_2, \n model,\n age=dset.AgeScaled.values,\n divorce=dset.DivorceScaled.values)))",
"Log likelihood: -60.926387786865234\n"
]
],
[
[
"### Model 3: Predictor - Marriage Rate and Median Age of Marriage\n\nFinally, we will also model divorce rate as depending on both marriage rate as well as the median age of marriage. Note that there is no increase in the model's log likelihood over Model 2 which likely indicates that the marginal information from marriage rate in predicting divorce rate is low when the median age of marriage is already known.",
"_____no_output_____"
]
],
[
[
"rng, rng_ = random.split(rng)\ninit_params, potential_fn, constrain_fn = initialize_model(rng_, model, \n marriage=dset.MarriageScaled.values,\n age=dset.AgeScaled.values,\n divorce=dset.DivorceScaled.values)\n\nsamples_3 = mcmc(num_warmup, num_samples, init_params,\n potential_fn=potential_fn, \n trajectory_length=10, \n constrain_fn=constrain_fn)",
"warmup: 100%|██████████| 1000/1000 [00:10<00:00, 93.74it/s, 3 steps of size 6.48e-01. acc. prob=0.79] \nsample: 100%|██████████| 2000/2000 [00:04<00:00, 474.30it/s, 3 steps of size 6.48e-01. acc. prob=0.86]\n"
],
[
"rng, rng_ = random.split(rng)\nprint('Log likelihood: {}'.format(expected_log_likelihood(rng_,\n samples_3,\n model,\n marriage=dset.MarriageScaled.values,\n age=dset.AgeScaled.values,\n divorce=dset.DivorceScaled.values)))",
"Log likelihood: -61.04328918457031\n"
]
],
[
[
"### Divorce Rate Residuals by State\n\nThe regression plots above shows that the observed divorce rates for many states differs considerably from the mean regression line. To dig deeper into how the last model (Model 3) under-predicts or over-predicts for each of the states, we will plot the posterior predictive and residuals (`Observed divorce rate - Predicted divorce rate`) for each of the states.",
"_____no_output_____"
]
],
[
[
"# Predictions for Model 3.\nrng, rng_ = random.split(rng)\npredict_fn = vmap(lambda rng, samples: predict(rng, samples, model,\n marriage=dset.MarriageScaled.values,\n age=dset.AgeScaled.values))\npredictions_3 = predict_fn(random.split(rng_, num_samples), samples_3)\ny = np.arange(50)\n\n\nfig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 16))\npred_mean = np.mean(predictions_3, axis=0)\npred_hpdi = hpdi(predictions_3, 0.9)\nresiduals_3 = dset.DivorceScaled.values - predictions_3\nresiduals_mean = np.mean(residuals_3, axis=0)\nresiduals_hpdi = hpdi(residuals_3, 0.9)\nidx = np.argsort(residuals_mean)\n\n# Plot posterior predictive\nax[0].plot(np.zeros(50), y, '--')\nax[0].errorbar(pred_mean[idx], y, xerr=pred_hpdi[1, idx] - pred_mean[idx], \n marker='o', ms=5, mew=4, ls='none', alpha=0.8)\nax[0].plot(dset.DivorceScaled.values[idx], y, marker='o', \n ls='none', color='gray', alpha=0.5)\nax[0].set(xlabel='Posterior Predictive', ylabel='State', title='Posterior Predictive with 90% CI')\nax[0].set_yticks(y)\nax[0].set_yticklabels(dset.Loc.values[idx], fontsize=10);\n\n# Plot residuals\nresiduals_3 = dset.DivorceScaled.values - predictions_3\nresiduals_mean = np.mean(residuals_3, axis=0)\nresiduals_hpdi = hpdi(residuals_3, 0.9)\nerr = residuals_hpdi[1] - residuals_mean\n\nax[1].plot(np.zeros(50), y, '--')\nax[1].errorbar(residuals_mean[idx], y, xerr=err[idx], \n marker='o', ms=5, mew=4, ls='none', alpha=0.8)\nax[1].set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')\nax[1].set_yticks(y)\nax[1].set_yticklabels(dset.Loc.values[idx], fontsize=10);",
"_____no_output_____"
]
],
[
[
"The plot on the left shows the mean predictions with 90% CI for each of the states using Model 3. The gray markers indicate the actual observed divorce rates. The right plot shows the residuals for each of the states, and both these plots are sorted by the residuals, i.e. at the bottom, we are looking at states where the model predictions are higher than the observed rates, whereas at the top, the reverse is true.\n\nOverall, the model fit seems good because most observed data points like within a 90% CI around the mean predictions. However, notice how the model over-predicts by a large margin for states like Idaho (bottom left), and on the other end under-predicts for states like Maine (top right). This is likely indicative of other factors that we are missing out in our model that affect divorce rate across different states. Even ignoring other socio-political variables, one such factor that we have not yet modeled is the measurement noise given by `Divorce SE` in the dataset. We will explore this in the next section.",
"_____no_output_____"
],
[
"## Regression Model with Measurement Error\n\nNote that in our previous models, each data point influences the regression line equally. Is this well justified? We will build on the previous model to incorporate measurement error given by `Divorce SE` variable in the dataset. Incorporating measurement noise will be useful in ensuring that observations that have higher confidence (i.e. lower measurement noise) have a greater impact on the regression line. On the other hand, this will also help us better model outliers with high measurement errors. For more details on modeling errors due to measurement noise, refer to Chapter 15 of [[1](#References)].\n\nTo do this, we will reuse Model 3, with the only change that the final observed value has a measurement error given by `divorce_sd` (notice that this has to be standardized since the `divorce` variable itself has been standardized to mean 0 and std 1).",
"_____no_output_____"
]
],
[
[
"def model_se(marriage, age, divorce_sd, divorce=None):\n a = sample('a', dist.Normal(0., 0.2))\n bM = sample('bM', dist.Normal(0., 0.5))\n M = bM * marriage\n bA = sample('bA', dist.Normal(0., 0.5))\n A = bA * age\n sigma = sample('sigma', dist.Exponential(1.))\n mu = a + M + A\n divorce_rate = sample('divorce_rate', dist.Normal(mu, sigma))\n sample('obs', dist.Normal(divorce_rate, divorce_sd), obs=divorce)",
"_____no_output_____"
],
[
"rng, rng_ = random.split(rng)\n# Standardize\ndset['DivorceScaledSD'] = dset['Divorce SE'] / np.std(dset.Divorce.values)\ninit_params, potential_fn, constrain_fn = initialize_model(rng_, model_se, \n marriage=dset.MarriageScaled.values,\n age=dset.AgeScaled.values,\n divorce_sd=dset.DivorceScaledSD.values,\n divorce=dset.DivorceScaled.values)",
"_____no_output_____"
],
[
"samples_4 = mcmc(num_warmup=1000, \n num_samples=3000, \n init_params=init_params,\n potential_fn=potential_fn,\n trajectory_length=10,\n target_accept_prob=0.9,\n constrain_fn=constrain_fn)",
"warmup: 100%|██████████| 1000/1000 [00:19<00:00, 50.19it/s, 15 steps of size 2.16e-01. acc. prob=0.89] \nsample: 100%|██████████| 3000/3000 [00:06<00:00, 442.19it/s, 15 steps of size 2.16e-01. acc. prob=0.94]\n"
]
],
[
[
"### Effect of Incorporating Measurement Noise on Residuals\n\nNotice that our values for the regression coefficients is very similar to Model 3. However, introducing measurement noise allows us to more closely match our predictive distributions to the observed values. We can see this if we plot the residuals as earlier. ",
"_____no_output_____"
]
],
[
[
"rng, rng_ = random.split(rng)\npredict_fn = vmap(lambda rng, samples: predict(rng, samples, model_se, \n marriage=dset.MarriageScaled.values,\n age=dset.AgeScaled.values,\n divorce_sd=dset.DivorceScaledSD.values))\npredictions_4 = predict_fn(random.split(rng_, 3000), samples_4)",
"_____no_output_____"
],
[
"sd = dset.DivorceScaledSD.values\nresiduals_4 = dset.DivorceScaled.values - predictions_4\nresiduals_mean = np.mean(residuals_4, axis=0)\nresiduals_hpdi = hpdi(residuals_4, 0.9)\nerr = residuals_hpdi[1] - residuals_mean\nidx = np.argsort(residuals_mean)\ny = np.arange(50)\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 16))\n\n\n# Plot Residuals\nax.plot(np.zeros(50), y, '--')\nax.errorbar(residuals_mean[idx], y, xerr=err[idx], \n marker='o', ms=5, mew=4, ls='none', alpha=0.4)\n\n# Plot SD \nax.errorbar(residuals_mean[idx], y, xerr=sd[idx], \n ls='none')\n\n# Plot earlier mean residual\nax.plot(np.mean(dset.DivorceScaled.values - predictions_3, 0)[idx], y,\n ls='none', marker='o', ms=5, color='gray', alpha=0.8)\n\nax.set(xlabel='Residuals', ylabel='State', title='Residuals with 90% CI')\nax.set_yticks(y)\nax.set_yticklabels(dset.Loc.values[idx], fontsize=10);",
"_____no_output_____"
]
],
[
[
"The plot above shows the residuals for each of the states, along with the measurement noise given by inner error bar. The gray dots are the mean residuals from our earlier Model 3. Notice how having an additional degree of freedom to model the measurement noise has shrunk the residuals. In particular, for Idaho and Maine, our predictions are now much closer to the observed values after incorporating measurement noise in the model.\n\nTo better see how measurement noise affects the movement of the regression line, let us plot the residuals with respect to the measurement noise.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))\nx = dset.DivorceScaledSD.values\ny1 = np.mean(residuals_3, 0)\ny2 = np.mean(residuals_4, 0)\nax.plot(x, y1, ls='none', marker='o')\nax.plot(x, y2, ls='none', marker='o')\nfor i, (j, k) in enumerate(zip(y1, y2)):\n ax.plot([x[i], x[i]], [j, k], '--', color='gray');\n\nax.set(xlabel='Measurement Noise', ylabel='Residual', title='Mean residuals (Model 4: red, Model 3: blue)');",
"_____no_output_____"
]
],
[
[
"The plot above shows what has happend in more detail - the regression line itself has moved to ensure a better fit for observations with low measurement noise (left of the plot) where the residuals have shrunk very close to 0. That is to say that data points with low measurement error have a concomitantly higher contribution in determining the regression line. On the other hand, for states with high measurement error (right of the plot), incorporating measurement noise allows us to move our posterior distribution mass closer to the observations resulting in a shrinkage of residuals as well.",
"_____no_output_____"
],
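[
"*(Aside, not part of the original tutorial.)* The shrinkage behavior described above follows from standard Normal-Normal conjugacy: conditional on the regression mean $\\mu_i$, the scale $\\sigma$ and an observed value $d_i$ with standard error $s_i$, the latent rate $D_i$ in `model_se` has the conjugate posterior\n\n$$D_i \\mid d_i \\;\\sim\\; \\mathcal{N}\\!\\left(\\frac{\\mu_i/\\sigma^2 + d_i/s_i^2}{1/\\sigma^2 + 1/s_i^2},\\; \\Big(\\frac{1}{\\sigma^2} + \\frac{1}{s_i^2}\\Big)^{-1}\\right),$$\n\nso states with small $s_i$ keep $D_i$ close to the observation (small residuals), while states with large $s_i$ have $D_i$ pulled toward the regression mean, which is exactly the pattern seen in the plot above.",
"_____no_output_____"
],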
[
"## References\n\n1. McElreath, R. (2016). Statistical Rethinking: A Bayesian Course with Examples in R and Stan CRC Press.\n2. Stan Development Team. [Stan User's Guide](https://mc-stan.org/docs/2_19/stan-users-guide/index.html)\n3. Goodman, N.D., and StuhlMueller, A. (2014). [The Design and Implementation of Probabilistic Programming Languages](http://dippl.org/)\n4. Pyro Development Team. [Poutine: A Guide to Programming with Effect Handlers in Pyro](http://pyro.ai/examples/effect_handlers.html)\n5. Hoffman, M.D., Gelman, A. (2011). The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.\n6. Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo.\n7. JAX Development Team (2018). [Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more](https://github.com/google/jax)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7b67fe0e617f95ee278cdb0fe44cfe601a3d3f0 | 4,507 | ipynb | Jupyter Notebook | analysis/Mutation_analysis.ipynb | chuducthang77/coronavirus | 723c9d61db980eb565e0329b62cf87c4f0608e7e | [
"MIT"
] | null | null | null | analysis/Mutation_analysis.ipynb | chuducthang77/coronavirus | 723c9d61db980eb565e0329b62cf87c4f0608e7e | [
"MIT"
] | null | null | null | analysis/Mutation_analysis.ipynb | chuducthang77/coronavirus | 723c9d61db980eb565e0329b62cf87c4f0608e7e | [
"MIT"
] | null | null | null | 28.891026 | 239 | 0.473929 | [
[
[
"<a href=\"https://colab.research.google.com/github/chuducthang77/coronavirus/blob/main/Mutation_analysis.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Mutation analysis\n1. Calculate the mutation rate (number of mutations/number of unique sequences) \n2. Consider only the sequences between 2000 and 2020\n3. Write the different files (Year : Mutation rate)",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/gdrive')",
"Mounted at /content/gdrive\n"
],
[
"%cd 'gdrive/MyDrive/Machine Learning/coronavirus/analysis'\n!ls",
"/content/gdrive/MyDrive/Machine Learning/coronavirus/analysis\nH1N_H9N Mutation_analysis.ipynb\n"
],
[
"import os\nimport pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"#List all the directory names\ndir_name = os.listdir('./H1N_H9N')\n\nfor name in dir_name:\n df = pd.read_csv('./H1N_H9N/' + name)\n\n #Select only sequence between 2000 and 2020\n start_date = '2000-01-01'\n end_date = '2020-12-31'\n df = df[(df['Date'] >= start_date)]\n df = df[(df['Date'] < end_date)]\n\n #Create another column called year to select the number of unique sequence and number of mutations\n df['year'] = pd.to_datetime(df['Date']).dt.year\n\n start = 2000\n length = 20\n i = 0\n\n with open('output_{}.txt'.format(name[:-4]), 'a') as output:\n output.write('Year Mutation Rate\\n')\n\n while i <= length:\n specific_year = df[(df['year'] == start + i)]\n\n #Calculate the number of mutations\n num_mutations = np.float64(specific_year['year'].count())\n\n #Calculate the number of unique sequence\n num_unique = np.float64(specific_year['Accession ID'].nunique())\n\n #Calculate the mutation rate\n if num_unique != 0:\n mutation_rate = num_mutations/num_unique\n else:\n mutation_rate = 0\n \n output.write('{} {}\\n'.format(start + i, mutation_rate))\n\n i += 1\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7b68c67cbd5f91e74af65ef16ad557bc5cd2b5d | 26,056 | ipynb | Jupyter Notebook | reports/Presentations/Presentation_241117/timing_presentation.ipynb | EstevaoVieira/spikelearn | 060206558cc37c31493f1c9f01412d90375403cb | [
"MIT"
] | null | null | null | reports/Presentations/Presentation_241117/timing_presentation.ipynb | EstevaoVieira/spikelearn | 060206558cc37c31493f1c9f01412d90375403cb | [
"MIT"
] | null | null | null | reports/Presentations/Presentation_241117/timing_presentation.ipynb | EstevaoVieira/spikelearn | 060206558cc37c31493f1c9f01412d90375403cb | [
"MIT"
] | null | null | null | 46.116814 | 11,182 | 0.711314 | [
[
[
"# Table of Contents\n <p><div class=\"lev3 toc-item\"><a href=\"#Cover-Slide-1\" data-toc-modified-id=\"Cover-Slide-1-001\"><span class=\"toc-item-num\">0.0.1 </span>Cover Slide 1</a></div><div class=\"lev3 toc-item\"><a href=\"#Cover-Slide-2\" data-toc-modified-id=\"Cover-Slide-2-002\"><span class=\"toc-item-num\">0.0.2 </span>Cover Slide 2</a></div><div class=\"lev1 toc-item\"><a href=\"#Headline-Slide\" data-toc-modified-id=\"Headline-Slide-1\"><span class=\"toc-item-num\">1 </span>Headline Slide</a></div><div class=\"lev1 toc-item\"><a href=\"#Preprocessing\" data-toc-modified-id=\"Preprocessing-2\"><span class=\"toc-item-num\">2 </span>Preprocessing</a></div><div class=\"lev1 toc-item\"><a href=\"#Headline-Subslide\" data-toc-modified-id=\"Headline-Subslide-3\"><span class=\"toc-item-num\">3 </span>Headline Subslide</a></div><div class=\"lev1 toc-item\"><a href=\"#Fragment\" data-toc-modified-id=\"Fragment-4\"><span class=\"toc-item-num\">4 </span>Fragment</a></div><div class=\"lev3 toc-item\"><a href=\"#Divider\" data-toc-modified-id=\"Divider-401\"><span class=\"toc-item-num\">4.0.1 </span>Divider</a></div><div class=\"lev1 toc-item\"><a href=\"#Markdown-Examples\" data-toc-modified-id=\"Markdown-Examples-5\"><span class=\"toc-item-num\">5 </span>Markdown Examples</a></div><div class=\"lev4 toc-item\"><a href=\"#Text\" data-toc-modified-id=\"Text-5001\"><span class=\"toc-item-num\">5.0.0.1 </span>Text</a></div><div class=\"lev1 toc-item\"><a href=\"#Headline-Subslide\" data-toc-modified-id=\"Headline-Subslide-6\"><span class=\"toc-item-num\">6 </span>Headline Subslide</a></div><div class=\"lev4 toc-item\"><a href=\"#Code\" data-toc-modified-id=\"Code-6001\"><span class=\"toc-item-num\">6.0.0.1 </span>Code</a></div><div class=\"lev1 toc-item\"><a href=\"#Python-example\" data-toc-modified-id=\"Python-example-7\"><span class=\"toc-item-num\">7 </span>Python example</a></div><div class=\"lev4 toc-item\"><a href=\"#Code\" data-toc-modified-id=\"Code-7001\"><span class=\"toc-item-num\">7.0.0.1 </span>Code</a></div><div class=\"lev1 toc-item\"><a href=\"#Headline-Subslide\" data-toc-modified-id=\"Headline-Subslide-8\"><span class=\"toc-item-num\">8 </span>Headline Subslide</a></div><div class=\"lev4 toc-item\"><a href=\"#Lists\" data-toc-modified-id=\"Lists-8001\"><span class=\"toc-item-num\">8.0.0.1 </span>Lists</a></div><div class=\"lev1 toc-item\"><a href=\"#Headline-Subslide\" data-toc-modified-id=\"Headline-Subslide-9\"><span class=\"toc-item-num\">9 </span>Headline Subslide</a></div><div class=\"lev4 toc-item\"><a href=\"#Lists\" data-toc-modified-id=\"Lists-9001\"><span class=\"toc-item-num\">9.0.0.1 </span>Lists</a></div><div class=\"lev1 toc-item\"><a href=\"#Headline-Subslide\" data-toc-modified-id=\"Headline-Subslide-10\"><span class=\"toc-item-num\">10 </span>Headline Subslide</a></div><div class=\"lev4 toc-item\"><a href=\"#Blockquotes\" data-toc-modified-id=\"Blockquotes-10001\"><span class=\"toc-item-num\">10.0.0.1 </span>Blockquotes</a></div><div class=\"lev4 toc-item\"><a href=\"#inline-code\" data-toc-modified-id=\"inline-code-10002\"><span class=\"toc-item-num\">10.0.0.2 </span>inline code</a></div><div class=\"lev1 toc-item\"><a href=\"#Table\" data-toc-modified-id=\"Table-11\"><span class=\"toc-item-num\">11 </span>Table</a></div><div class=\"lev1 toc-item\"><a href=\"#Images\" data-toc-modified-id=\"Images-12\"><span class=\"toc-item-num\">12 </span>Images</a></div><div class=\"lev1 toc-item\"><a href=\"#Links\" data-toc-modified-id=\"Links-13\"><span 
class=\"toc-item-num\">13 </span>Links</a></div><div class=\"lev3 toc-item\"><a href=\"#Q&A-Slide\" data-toc-modified-id=\"Q&A-Slide-1301\"><span class=\"toc-item-num\">13.0.1 </span>Q&A Slide</a></div>",
"_____no_output_____"
]
],
[
[
"# Add all necessary imports here\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.reload_library()\nplt.style.use(\"ggplot\")",
"_____no_output_____"
]
],
[
[
"### Cover Slide 1",
"_____no_output_____"
]
],
[
[
"<image>\n<section data-background=\"img/cover.jpg\" data-state=\"img-transparent no-title-footer\">\n<div class=\"intro-body\">\n<div class=\"intro_h1\"><h1>Title</h1></div>\n<h3>Subtitle of the Presentation</h3>\n<p><strong><span class=\"a\">Speaker 1</span></strong> <span class=\"b\"></span> <span>Job Title</span></p>\n<p><strong><span class=\"a\">Speaker 2</span></strong> <span class=\"b\"></span> <span>Job Title</span></p>\n<p> </p>\n<p> </p>\n</div>\n</section>\n</image>",
"_____no_output_____"
]
],
[
[
"### Cover Slide 2",
"_____no_output_____"
]
],
[
[
"<image>\n<section data-state=\"no-title-footer\">\n<div class=\"intro_h1\"><h1>Title</h1></div>\n<h3>Subtitle of the Presentation</h3>\n<p><strong><span class=\"a\">Speaker 1</span></strong> <span class=\"b\"></span> <span>Job Title</span></p>\n<p><strong><span class=\"a\">Speaker 2</span></strong> <span class=\"b\"></span> <span>Job Title</span></p>\n<p> </p>\n<p> </p>\n</section>\n</image>",
"_____no_output_____"
]
],
[
[
"# Headline Slide",
"_____no_output_____"
],
[
"# Preprocessing",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"def f(x):\n \"\"\"a docstring\"\"\"\n return x**2",
"_____no_output_____"
]
],
[
[
"# Headline Subslide",
"_____no_output_____"
]
],
[
[
"plt.plot([1,2,3,4])\nplt.ylabel('some numbers')\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Fragment\n\nPress the right arrow.",
"_____no_output_____"
],
[
"- I am a Fragement",
"_____no_output_____"
],
[
"- I am another one",
"_____no_output_____"
],
[
"### Divider",
"_____no_output_____"
]
],
[
[
"<image>\n</section>\n<section data-background=\"#F27C3A\" data-state=\"no-title-footer\">\n <div class=\"divider_h1\">\n <h1>Divider</h1>\n </div>\n</section>\n</image>",
"_____no_output_____"
]
],
[
[
"# Markdown Examples\n\n#### Text\n\nIt's very easy to make some words **bold** and other words *italic* with Markdown. You can even [link to Google!](http://google.com)",
"_____no_output_____"
],
[
"# Headline Subslide\n#### Code\n\n```javascript\nvar s = \"JavaScript syntax highlighting\";\nalert(s);\n```\n \n```python\ns = \"Python syntax highlighting\"\nprint s\n```\n \n```\nNo language indicated, so no syntax highlighting. \nBut let's throw in a <b>tag</b>.\n```",
"_____no_output_____"
],
[
"# Python example\n\n#### Code\n```python\n# This program adds up integers in the command line\nimport sys\ntry:\n total = sum(int(arg) for arg in sys.argv[1:])\n print 'sum =', total\nexcept ValueError:\n print 'Please supply integer arguments'\n```",
"_____no_output_____"
],
[
"# Headline Subslide\n\n#### Lists\n\nSometimes you want numbered lists:\n\n1. Item 1\n2. Item 2\n3. Item 3\n * Item 3a\n * Item 3b\n",
"_____no_output_____"
],
[
"# Headline Subslide\n\n#### Lists\n\nSometimes you want bullet points:\n\n* Item 1\n* Item 2\n * Item 2a\n * Item 2b\n* This is a long long long long long long long long long long long long long long long long long long long long long long long long long list",
"_____no_output_____"
],
[
"# Headline Subslide\n#### Blockquotes\n\nAs Kanye West said:\n\n> We're living the future so\n> the present is our past.\n\n#### inline code\nI think you should use an\n`<addr>` element here instead.",
"_____no_output_____"
],
[
"# Table\n\n| Tables | Are | Cool |\n| ------------- |:-------------:| -----:|\n| col 3 is | right-aligned | $1600 |\n| col 2 is | centered | $12 |\n| zebra stripes | are neat | $1 |",
"_____no_output_____"
],
[
"# Images\nIf you want to embed images, this is how you do it:\n\n",
"_____no_output_____"
],
[
"# Links\n\n- https://guides.github.com/features/mastering-markdown/\n\n- https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet",
"_____no_output_____"
],
[
"### Q&A Slide",
"_____no_output_____"
]
],
[
[
"<image>\n</section>\n<section data-background=\"#0093C9\" data-state=\"no-title-footer\">\n <div class=\"divider_h1\">\n <h1>Questions???</h1>\n </div>\n</section>\n</image>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"raw",
"markdown",
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"raw",
"markdown",
"raw"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"raw"
],
[
"markdown"
],
[
"raw"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"raw"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"raw"
]
] |
e7b698316a92af22508b3224040b871267b82786 | 49,231 | ipynb | Jupyter Notebook | examples_artin/applied_to_other_datasets.ipynb | artinmajdi/crowd-kit | 174e15f256a4929ed71699ffc1797ea87e0e8a99 | [
"Apache-2.0"
] | null | null | null | examples_artin/applied_to_other_datasets.ipynb | artinmajdi/crowd-kit | 174e15f256a4929ed71699ffc1797ea87e0e8a99 | [
"Apache-2.0"
] | null | null | null | examples_artin/applied_to_other_datasets.ipynb | artinmajdi/crowd-kit | 174e15f256a4929ed71699ffc1797ea87e0e8a99 | [
"Apache-2.0"
] | 1 | 2021-12-24T02:26:57.000Z | 2021-12-24T02:26:57.000Z | 31.218136 | 1,333 | 0.354573 | [
[
[
"test: 30% \ntrain-val 70%\n\nTo simulate noisy label acquisition, assume a labeler quality p. then use a random \n\nwe first hide the labels of all examples for each dataset. At the point in an experiment when a label is acquired, we generate a label according to the labeler quality p: we assign the example's original label with the probability p and the opposite value with the probability 1−p. \n\nFor each worker, the original true label was assigned to each instance with the probability p, and the opposite value was assigned with the probability 1−𝑝. In our experiments, the labeling quality p of each worker was generated randomly from a uniform distribution on the interval (0.3, 0.9)\n\n\nAfter obtaining the labels, we construct the training set to induce a classifier. The classifier is evaluated on the test set (with the true labels). Each experiment is repeated 10 times with a different random data partition, and average results are reported. \n\n## procedure\n1. generate a random p for each labeler\n2. generate a random number from a random variable for each instance and if the value is bigger than p, then assign the true label, but if its smaller than p then assign the opposite of true label for that worker \n3. train a network for these noisy labels for each labeler \n4. repeate this 10 times to be able to measure ethe uncertainty and average accuracy.\n\n\n## When it is necessary, we convert the target to binary\n\n- for thyroid we keep the negative class and integrate the other three classes into positive; \n- for splice, we integrate classes IE and EI; \n- for waveform, we integrate classes 1 and 2.)",
"_____no_output_____"
]
],
[
[
"%reload_ext autoreload\n%autoreload 2\n\nimport sys, os, wget\nsys.path.append('../../')\n\nimport pandas as pd\nimport numpy as np\nimport load_data \nimport ipywidgets \nfrom IPython.display import display",
"_____no_output_____"
],
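[
"# Hypothetical sketch (not part of the original notebook) of the noisy-label simulation\n# described in the markdown above: each simulated labeler gets a quality p drawn from\n# U(0.3, 0.9) and assigns the true label with probability p, the flipped label otherwise.\n# The function and argument names are illustrative only, not part of the crowd-kit API.\ndef simulate_noisy_labels(y_true, n_labelers=5, p_low=0.3, p_high=0.9, seed=0):\n    rng = np.random.RandomState(seed)\n    y_true = np.asarray(y_true).astype(int)      # binary labels in {0, 1} assumed\n    qualities = rng.uniform(p_low, p_high, size=n_labelers)\n    noisy = []\n    for p in qualities:\n        keep = rng.rand(len(y_true)) < p         # True with probability p\n        noisy.append(np.where(keep, y_true, 1 - y_true))\n    return np.vstack(noisy), qualities           # (n_labelers, n_samples) labels and per-labeler p",
"_____no_output_____"
],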
[
"args = {'options': ['read', 'download'], \n 'label': 'read', \n 'description': 'mode:' , \n 'orientation': 'vertical', \n 'disabled': False, \n 'button_style': '', \n 'layout': {'width': 'max-content'},\n 'tooltips': ['reading the dataset', 'downloading the dataset'], \n }\n\ndataset_mode = ipywidgets.ToggleButtons( **args )\n\ndisplay(dataset_mode)",
"_____no_output_____"
],
[
"dataset = ipywidgets.Dropdown( options = [ ('1. kr-vs-kp' ,'kr-vs-kp'), \n ('2. mushroom' ,'mushroom'),\n ('3. sick' ,'sick'),\n ('4. spambase' ,'spambase'),\n ('5. tic-tac-toe' ,'tic-tac-toe'),\n ('6. splice' ,'splice'),\n ('7. thyroid' ,'thyroid'),\n ('8. waveform' ,'waveform'),\n ('9. biodeg' ,'biodeg'),\n ('10. horse-colic','horse-colic'),\n ('11. ionosphere' ,'ionosphere'),\n ('12. vote' ,'vote')], \n value = 'horse-colic')\n\n\[email protected](WHICH_DATASET = dataset)\ndef read_data(WHICH_DATASET):\n\n if WHICH_DATASET in ['splice','vote','thyroid']:\n\n print('dataset does not exist')\n\n else:\n\n global data, feature_columns\n data, feature_columns = load_data.aim1_3_read_download_UCI_database( WHICH_DATASET = WHICH_DATASET, \n mode = dataset_mode.value)\n print(data['train'].head(3) , '\\ntrain shape:',data['train'].shape , '\\ntest shape:',data['test'].shape)",
"_____no_output_____"
],
[
"data, feature_columns = load_data.aim1_3_read_download_UCI_database(WHICH_DATASET='biodeg', mode='read')\n",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data, feature_columns = load_data.aim1_3_read_download_UCI_database(WHICH_DATASET='horse-colic', mode='read_raw')",
"_____no_output_____"
],
[
"data['train']",
"_____no_output_____"
],
[
"pd.read_csv('/groups/jjrodrig/projects/datasets/uci_multilabeler_aim1_3/UCI_horse-colic/horse-colic.data',delimiter=' ', index_col=None)",
"_____no_output_____"
]
],
[
[
"## guide for ipywidgets\nsource: <https://coderzcolumn.com/tutorials/python/interactive-widgets-in-jupyter-notebook-using-ipywidgets>",
"_____no_output_____"
]
],
[
[
"widgets.FloatRangeSlider(\n value=[5, 7.5],\n min=0,\n max=10.0,\n step=0.1,\n description='Test:',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.1f',\n)",
"_____no_output_____"
],
[
"b = widgets.BoundedFloatText(\n value=7.5,\n min=0,\n max=10.0,\n step=0.1,\n description='Text:',\n disabled=False\n)\nb",
"_____no_output_____"
],
[
"b.value ",
"_____no_output_____"
],
[
"a = widgets.ToggleButton(\n value=False,\n description='Run',\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltip='Description',\n icon='check')\na",
"_____no_output_____"
],
[
"a.value",
"_____no_output_____"
],
[
"c = widgets.Checkbox(\n value=False,\n description='this',\n disabled=False,\n indent=False\n)\n\nc",
"_____no_output_____"
],
[
"c.value ",
"_____no_output_____"
],
[
"widgets.Valid(\n value=True,\n description='Status:',\n)",
"_____no_output_____"
],
[
"d = widgets.Dropdown(\n options=[('One', 1), ('Two', 2), ('Three', 3)],\n value=2,\n description='Number:')",
"_____no_output_____"
],
[
"d.value ",
"_____no_output_____"
],
[
"widgets.RadioButtons(\n options=['pepperoni', 'pineapple', 'anchovies'],\n# value='pineapple', # Defaults to 'pineapple'\n# layout={'width': 'max-content'}, # If the items' names are long\n description='Pizza topping:',\n disabled=False\n)",
"_____no_output_____"
],
[
"widgets.Box(\n [\n widgets.Label(value='Pizza topping with a very long label:'),\n widgets.RadioButtons(\n options=[\n 'pepperoni',\n 'pineapple',\n 'anchovies',\n 'and the long name tha'\n ],\n layout={'width': 'max-content'}\n )\n ]\n)",
"_____no_output_____"
],
[
"widgets.Select(\n options=['Linux', 'Windows', 'OSX'],\n value='OSX',\n # rows=10,\n description='OS:',\n disabled=False\n)",
"_____no_output_____"
],
[
"widgets.ToggleButtons(\n options=['Slow', 'Regular', 'Fast'],\n description='Speed:',\n disabled=False,\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltips=['Description of slow', 'Description of regular', 'Description of fast'],\n icons=['check'] * 3\n)",
"_____no_output_____"
],
[
"a = widgets.ColorPicker(\n concise=False,\n description='Pick a color',\n value='blue',\n disabled=False\n)\na",
"_____no_output_____"
],
[
"widgets.FileUpload(\n accept='', # Accepted file extension e.g. '.txt', '.pdf', 'image/*', 'image/*,.pdf'\n multiple=False # True to accept multiple files upload else False\n)",
"_____no_output_____"
],
[
"from IPython.display import display\n\ndef func3(a,b,c):\n display((a+b)^c)\n\nw = interactive(func3, a=ipywidgets.IntSlider(min=10, max=50, value=25, step=2),\n b=ipywidgets.IntSlider(min=10, max=50, value=25, step=2),\n c=ipywidgets.IntSlider(min=10, max=50, value=25, step=2) ) \ndisplay(w)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b6a087329e31352f087efbd546c81a75855527 | 5,839 | ipynb | Jupyter Notebook | 5 Xgboost/IrisClassification/xgboost_cloud_prediction_template.ipynb | jaypeeml/AWSSagemaker | 9ab931065e9f2af0b1c102476781a63c917e7c47 | [
"Apache-2.0"
] | 167 | 2019-04-07T16:33:56.000Z | 2022-03-24T12:13:13.000Z | 5 Xgboost/IrisClassification/xgboost_cloud_prediction_template.ipynb | jaypeeml/AWSSagemaker | 9ab931065e9f2af0b1c102476781a63c917e7c47 | [
"Apache-2.0"
] | 5 | 2019-04-13T06:39:43.000Z | 2019-11-09T06:09:56.000Z | 5 Xgboost/IrisClassification/xgboost_cloud_prediction_template.ipynb | jaypeeml/AWSSagemaker | 9ab931065e9f2af0b1c102476781a63c917e7c47 | [
"Apache-2.0"
] | 317 | 2019-04-07T16:34:00.000Z | 2022-03-31T11:20:32.000Z | 21.388278 | 109 | 0.546669 | [
[
[
"import sys\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport math\nimport os\n\nimport boto3\nimport re\nfrom sagemaker import get_execution_role\nimport sagemaker\n\n# SDK 2 serializers and deserializers\nfrom sagemaker.serializers import CSVSerializer\nfrom sagemaker.deserializers import JSONDeserializer",
"_____no_output_____"
]
],
[
[
"<h1>XGBoost Cloud Prediction - Iris Classification</h1>\n<h4>Invoke SageMaker Prediction Service</h4>",
"_____no_output_____"
]
],
[
[
"# Acquire a realtime endpoint\nendpoint_name = 'xgboost-iris-v1'\npredictor = sagemaker.predictor.Predictor (endpoint_name=endpoint_name)",
"_____no_output_____"
],
[
"predictor.serializer = CSVSerializer()",
"_____no_output_____"
],
[
"# Test predictive quality against data in validation file\ndf_all = pd.read_csv('iris_validation.csv',\n names=['encoded_class','sepal_length','sepal_width','petal_length','petal_width'])",
"_____no_output_____"
],
[
"df_all.head()",
"_____no_output_____"
],
[
"df_all.columns",
"_____no_output_____"
],
[
"# Need to pass an array to the prediction\n# can pass a numpy array or a list of values [[19,1],[20,1]]\n# arr_test = df_all.as_matrix(['sepal_length', 'sepal_width', 'petal_length','petal_width'])\narr_test = df_all[['sepal_length', 'sepal_width', 'petal_length','petal_width']].values",
"_____no_output_____"
],
[
"type(arr_test)",
"_____no_output_____"
],
[
"arr_test.shape",
"_____no_output_____"
],
[
"arr_test[:5]",
"_____no_output_____"
],
[
"result = predictor.predict(arr_test[:2])",
"_____no_output_____"
],
[
"arr_test.shape",
"_____no_output_____"
],
[
"result",
"_____no_output_____"
],
[
"# For large number of predictions, we can split the input data and\n# Query the prediction service.\n# array_split is convenient to specify how many splits are needed\npredictions = []\nfor arr in np.array_split(arr_test,10):\n result = predictor.predict(arr)\n result = result.decode(\"utf-8\")\n result = result.split(',')\n print (arr.shape)\n predictions += [int(float(r)) for r in result]",
"_____no_output_____"
],
[
"len(predictions)",
"_____no_output_____"
],
[
"predictions[:5]",
"_____no_output_____"
],
[
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\nle.fit(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])",
"_____no_output_____"
],
[
"df_all['class'] = le.inverse_transform(df_all.encoded_class)",
"_____no_output_____"
],
[
"df_all['predicted_class']=le.inverse_transform(predictions)",
"_____no_output_____"
],
[
"df_all.head()",
"_____no_output_____"
],
[
"print('Confusion matrix - Actual versus Predicted')\npd.crosstab(df_all['class'], df_all['predicted_class'])",
"_____no_output_____"
],
[
"import sklearn.metrics as metrics\nprint(metrics.classification_report(df_all['class'], df_all['predicted_class']))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b6c56f60683a344d2fbd139ddf72f1317fca44 | 3,598 | ipynb | Jupyter Notebook | Deep Learning with Tensorflow/.ipynb_checkpoints/Logistic_Regression-checkpoint.ipynb | mmaleki/Deep-Learning-with-Tensorfow | 3e2ece806d43432959c6b3585cfea6aceaf49b52 | [
"MIT"
] | null | null | null | Deep Learning with Tensorflow/.ipynb_checkpoints/Logistic_Regression-checkpoint.ipynb | mmaleki/Deep-Learning-with-Tensorfow | 3e2ece806d43432959c6b3585cfea6aceaf49b52 | [
"MIT"
] | null | null | null | Deep Learning with Tensorflow/.ipynb_checkpoints/Logistic_Regression-checkpoint.ipynb | mmaleki/Deep-Learning-with-Tensorfow | 3e2ece806d43432959c6b3585cfea6aceaf49b52 | [
"MIT"
] | null | null | null | 23.986667 | 73 | 0.483602 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport random\n",
"_____no_output_____"
],
[
"def sigmoid(z):\n return (1/(1+np.exp(-z)))\n ",
"_____no_output_____"
],
[
"def propagation(b,X_hat,y_hat):\n N=X_hat.shape[0]\n z=np.dot(X_hat,b)\n s=sigmoid(z)\n temp=-(y_hat*np.log(s+0.001)+(1-y_hat)*np.log(1-s+0.001))\n L= np.mean(temp,axis=0,keepdims=True)\n x1=sigmoid(z)-y_hat\n dL= 1/N*np.dot(x1.T,X_hat)\n propagate={\"L\":L,\"dL\":dL}\n return propagate",
"_____no_output_____"
],
[
"def update(b,X_hat,y_hat,eta):\n dl=propagation(b,X_hat,y_hat)[\"dL\"].T\n return (b-eta*dl)",
"_____no_output_____"
],
[
"def init(m,n,method):\n if method=='zero':\n b=np.zeros((m,n))\n elif method=='random':\n b=np.random.randn(m,n)\n else:\n raise Exception('Choose correct method: zero or random')\n return b",
"_____no_output_____"
],
[
"def b_opt(X_hat,y_hat,eta,steps,initialization):\n N=X_hat.shape[0]\n p=X_hat.shape[1]\n b=init(p,1,initialization)\n for i in range(steps): \n b1=update(b,X_hat,y_hat,eta)\n b=update(b1,X_hat,y_hat,eta)\n loss=propagation(b,X_hat,y_hat)[\"L\"]\n if i%10 == 0:\n print (\"Loss after iteration %i= %f\" %(i, loss))\n \n return b\n",
"_____no_output_____"
],
[
"def predict(x,X_hat,y_hat,eta,steps=20):\n b_optimal=b_opt(X_hat,y_hat,eta,steps,'random')\n z=np.dot(x.T,b_optimal)\n prob=sigmoid(z)\n if prob>0.5:\n return 1\n else:\n return 0",
"_____no_output_____"
],
[
"def predict2(x,X_train,y_train,eta,steps,initialization):\n N=X_train.shape[0]\n one=np.ones((N,1))\n X_hat=np.concatenate((one,X_train),axis=1)\n b_optimal=b_opt(X_hat,y_train,eta,steps,initialization)\n xnew=np.append(1,x).reshape(-1,1)\n z=np.dot(xnew.T,b_optimal)\n prob=sigmoid(z)\n if prob>0.5:\n return 1\n else:\n return 0",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b6d4d0f25c35bcfc36e71158a642b66df31e3c | 243,257 | ipynb | Jupyter Notebook | paper/02_OutofTheBox_benchmark_comparison_DMPNN/run_02_regression_FreeSolv.ipynb | riversdark/bidd-molmap | 7e3325433e2f29c189161859c63398574af6572b | [
"MIT"
] | 75 | 2020-07-07T01:18:30.000Z | 2022-03-25T13:40:19.000Z | paper/02_OutofTheBox_benchmark_comparison_DMPNN/run_02_regression_FreeSolv.ipynb | riversdark/bidd-molmap | 7e3325433e2f29c189161859c63398574af6572b | [
"MIT"
] | 12 | 2020-09-28T14:11:17.000Z | 2022-02-10T04:33:25.000Z | paper/02_OutofTheBox_benchmark_comparison_DMPNN/run_02_regression_FreeSolv.ipynb | riversdark/bidd-molmap | 7e3325433e2f29c189161859c63398574af6572b | [
"MIT"
] | 24 | 2020-07-22T08:52:59.000Z | 2022-03-14T09:59:44.000Z | 94.578927 | 22,980 | 0.434446 | [
[
[
"from molmap import model as molmodel\nimport molmap\n\nimport matplotlib.pyplot as plt\n\nimport pandas as pd\nfrom tqdm import tqdm\nfrom joblib import load, dump\ntqdm.pandas(ascii=True)\nimport numpy as np\n\nimport tensorflow as tf\nimport os\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"1\"\nnp.random.seed(123)\ntf.compat.v1.set_random_seed(123)\n\n\ntmp_feature_dir = './tmpignore'\nif not os.path.exists(tmp_feature_dir):\n os.makedirs(tmp_feature_dir)",
"_____no_output_____"
],
[
"\nmp1 = molmap.loadmap('../descriptor.mp')\nmp2 = molmap.loadmap('../fingerprint.mp')\n",
"_____no_output_____"
],
[
"task_name = 'FreeSolv'\n\nfrom chembench import load_data\n\ndf, induces = load_data(task_name)",
"loading dataset: FreeSolv number of split times: 3\n"
],
[
"smiles_col = df.columns[0]\nvalues_col = df.columns[1:]\nY = df[values_col].astype('float').values\nY = Y.reshape(-1, 1)\n\n\nX1_name = os.path.join(tmp_feature_dir, 'X1_%s.data' % task_name)\nX2_name = os.path.join(tmp_feature_dir, 'X2_%s.data' % task_name)\nif not os.path.exists(X1_name):\n X1 = mp1.batch_transform(df.smiles, n_jobs = 8)\n dump(X1, X1_name)\nelse:\n X1 = load(X1_name)\n\nif not os.path.exists(X2_name): \n X2 = mp2.batch_transform(df.smiles, n_jobs = 8)\n dump(X2, X2_name)\nelse:\n X2 = load(X2_name)\n\nmolmap1_size = X1.shape[1:]\nmolmap2_size = X2.shape[1:]\n",
"_____no_output_____"
],
[
"epochs = 800\npatience = 50 #early stopping\n\ndense_layers = [256, 128, 32]\nbatch_size = 128\nlr = 1e-4\nweight_decay = 0\n\nloss = 'mse'\nmonitor = 'val_loss'\ndense_avf = 'relu'\nlast_avf = 'linear'\n",
"_____no_output_____"
],
[
"results = []\n\nfor i, split_idxs in enumerate(induces):\n\n train_idx, valid_idx, test_idx = split_idxs\n \n train_idx = [i for i in train_idx if i < len(df)]\n valid_idx = [i for i in valid_idx if i < len(df)] \n test_idx = [i for i in test_idx if i < len(df)]\n \n print(len(train_idx), len(valid_idx), len(test_idx))\n\n trainX = (X1[train_idx], X2[train_idx])\n trainY = Y[train_idx]\n\n validX = (X1[valid_idx], X2[valid_idx])\n validY = Y[valid_idx]\n\n testX = (X1[test_idx], X2[test_idx])\n testY = Y[test_idx] \n\n\n model = molmodel.net.DoublePathNet(molmap1_size, molmap2_size, \n n_outputs=Y.shape[-1], \n dense_layers=dense_layers, \n dense_avf = dense_avf, \n last_avf=last_avf)\n\n opt = tf.keras.optimizers.Adam(lr=lr, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0) #\n #import tensorflow_addons as tfa\n #opt = tfa.optimizers.AdamW(weight_decay = 0.1,learning_rate=0.001,beta1=0.9,beta2=0.999, epsilon=1e-08)\n model.compile(optimizer = opt, loss = loss)\n \n\n if i == 0:\n performance = molmodel.cbks.Reg_EarlyStoppingAndPerformance((trainX, trainY), \n (validX, validY), \n patience = patience, \n criteria = monitor)\n model.fit(trainX, trainY, batch_size=batch_size, \n epochs=epochs, verbose= 0, shuffle = True, \n validation_data = (validX, validY), \n callbacks=[performance]) \n else:\n model.fit(trainX, trainY, batch_size=batch_size, \n epochs = performance.best_epoch + 1, verbose = 1, shuffle = True, \n validation_data = (validX, validY)) \n \n performance.model.set_weights(model.get_weights())\n \n best_epoch = performance.best_epoch\n trainable_params = model.count_params()\n\n train_rmses, train_r2s = performance.evaluate(trainX, trainY) \n valid_rmses, valid_r2s = performance.evaluate(validX, validY) \n test_rmses, test_r2s = performance.evaluate(testX, testY)\n\n\n final_res = {\n 'task_name':task_name, \n 'train_rmse':np.nanmean(train_rmses), \n 'valid_rmse':np.nanmean(valid_rmses), \n 'test_rmse':np.nanmean(test_rmses), \n\n 'train_r2':np.nanmean(train_r2s), \n 'valid_r2':np.nanmean(valid_r2s), \n 'test_r2':np.nanmean(test_r2s), \n\n '# trainable params': trainable_params,\n 'best_epoch': best_epoch,\n 'batch_size':batch_size,\n 'lr': lr,\n 'weight_decay':weight_decay\n }\n results.append(final_res)\n ",
"513 64 65\nWARNING:tensorflow:From /home/shenwanxiang/anaconda3/envs/deepchem23/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nCall initializer instance with the dtype argument instead of passing it to the constructor\nepoch: 0001, loss: 29.6670 - val_loss: 23.5951; rmse: 5.3053 - rmse_val: 4.8575; r2: 0.3325 - r2_val: 0.2927 \nepoch: 0002, loss: 27.5512 - val_loss: 21.5351; rmse: 5.0819 - rmse_val: 4.6406; r2: 0.3354 - r2_val: 0.2789 \nepoch: 0003, loss: 24.7973 - val_loss: 18.3580; rmse: 4.7148 - rmse_val: 4.2846; r2: 0.3202 - r2_val: 0.2779 \nepoch: 0004, loss: 21.0463 - val_loss: 15.0559; rmse: 4.2862 - rmse_val: 3.8802; r2: 0.3143 - r2_val: 0.2804 \nepoch: 0005, loss: 17.2007 - val_loss: 12.1800; rmse: 3.8364 - rmse_val: 3.4900; r2: 0.3106 - r2_val: 0.2823 \nepoch: 0006, loss: 14.0271 - val_loss: 11.4324; rmse: 3.6135 - rmse_val: 3.3812; r2: 0.3101 - r2_val: 0.2837 \nepoch: 0007, loss: 13.2463 - val_loss: 13.4001; rmse: 3.7659 - rmse_val: 3.6606; r2: 0.3125 - r2_val: 0.2872 \nepoch: 0008, loss: 14.2445 - val_loss: 12.6583; rmse: 3.6744 - rmse_val: 3.5579; r2: 0.3283 - r2_val: 0.3030 \nepoch: 0009, loss: 12.9629 - val_loss: 10.8150; rmse: 3.4952 - rmse_val: 3.2886; r2: 0.3499 - r2_val: 0.3222 \nepoch: 0010, loss: 12.2600 - val_loss: 10.3659; rmse: 3.4445 - rmse_val: 3.2196; r2: 0.3661 - r2_val: 0.3337 \nepoch: 0011, loss: 12.0544 - val_loss: 11.4695; rmse: 3.5034 - rmse_val: 3.3867; r2: 0.3665 - r2_val: 0.3310 \nepoch: 0012, loss: 12.3642 - val_loss: 11.3267; rmse: 3.4738 - rmse_val: 3.3655; r2: 0.3825 - r2_val: 0.3430 \nepoch: 0013, loss: 11.8248 - val_loss: 10.1754; rmse: 3.3544 - rmse_val: 3.1899; r2: 0.4077 - r2_val: 0.3631 \nepoch: 0014, loss: 11.2094 - val_loss: 9.7088; rmse: 3.3208 - rmse_val: 3.1159; r2: 0.4263 - r2_val: 0.3781 \nepoch: 0015, loss: 10.9747 - val_loss: 9.5169; rmse: 3.2915 - rmse_val: 3.0849; r2: 0.4338 - r2_val: 0.3847 \nepoch: 0016, loss: 10.7849 - val_loss: 9.3422; rmse: 3.2612 - rmse_val: 3.0565; r2: 0.4409 - r2_val: 0.3909 \nepoch: 0017, loss: 10.5731 - val_loss: 9.2086; rmse: 3.2215 - rmse_val: 3.0346; r2: 0.4443 - r2_val: 0.3942 \nepoch: 0018, loss: 10.3300 - val_loss: 9.0408; rmse: 3.1906 - rmse_val: 3.0068; r2: 0.4515 - r2_val: 0.4005 \nepoch: 0019, loss: 10.1125 - val_loss: 8.9600; rmse: 3.1518 - rmse_val: 2.9933; r2: 0.4525 - r2_val: 0.4017 \nepoch: 0020, loss: 10.0366 - val_loss: 9.2976; rmse: 3.1567 - rmse_val: 3.0492; r2: 0.4459 - r2_val: 0.3960 \nepoch: 0021, loss: 9.7729 - val_loss: 8.5532; rmse: 3.0720 - rmse_val: 2.9246; r2: 0.4566 - r2_val: 0.4062 \nepoch: 0022, loss: 9.4082 - val_loss: 8.3358; rmse: 3.0767 - rmse_val: 2.8872; r2: 0.4702 - r2_val: 0.4179 \nepoch: 0023, loss: 9.4791 - val_loss: 8.1921; rmse: 3.0489 - rmse_val: 2.8622; r2: 0.4779 - r2_val: 0.4243 \nepoch: 0024, loss: 9.1807 - val_loss: 8.0388; rmse: 2.9649 - rmse_val: 2.8353; r2: 0.4762 - r2_val: 0.4245 \nepoch: 0025, loss: 9.0011 - val_loss: 8.9613; rmse: 3.0501 - rmse_val: 2.9935; r2: 0.4701 - r2_val: 0.4201 \nepoch: 0026, loss: 9.1870 - val_loss: 8.1576; rmse: 2.9396 - rmse_val: 2.8561; r2: 0.4853 - r2_val: 0.4328 \nepoch: 0027, loss: 8.4751 - val_loss: 7.4947; rmse: 2.8726 - rmse_val: 2.7376; r2: 0.5048 - r2_val: 0.4496 \nepoch: 0028, loss: 8.3225 - val_loss: 7.3798; rmse: 2.8612 - rmse_val: 2.7166; r2: 0.5180 - r2_val: 0.4600 \nepoch: 0029, loss: 8.0546 - val_loss: 7.3392; rmse: 2.8053 - 
rmse_val: 2.7091; r2: 0.5223 - r2_val: 0.4643 \nepoch: 0030, loss: 7.8578 - val_loss: 7.3389; rmse: 2.7843 - rmse_val: 2.7090; r2: 0.5307 - r2_val: 0.4715 \nepoch: 0031, loss: 7.6511 - val_loss: 6.9600; rmse: 2.7371 - rmse_val: 2.6382; r2: 0.5459 - r2_val: 0.4833 \nepoch: 0032, loss: 7.5067 - val_loss: 6.8845; rmse: 2.7332 - rmse_val: 2.6238; r2: 0.5587 - r2_val: 0.4927 \nepoch: 0033, loss: 7.4334 - val_loss: 6.6542; rmse: 2.6684 - rmse_val: 2.5796; r2: 0.5678 - r2_val: 0.5018 \nepoch: 0034, loss: 6.9726 - val_loss: 6.7284; rmse: 2.6312 - rmse_val: 2.5939; r2: 0.5697 - r2_val: 0.5078 \nepoch: 0035, loss: 7.1427 - val_loss: 7.2002; rmse: 2.6917 - rmse_val: 2.6833; r2: 0.5720 - r2_val: 0.5128 \nepoch: 0036, loss: 7.0145 - val_loss: 6.1715; rmse: 2.5357 - rmse_val: 2.4843; r2: 0.5933 - r2_val: 0.5284 \nepoch: 0037, loss: 6.7074 - val_loss: 6.1117; rmse: 2.5259 - rmse_val: 2.4722; r2: 0.6046 - r2_val: 0.5374 \nepoch: 0038, loss: 6.5727 - val_loss: 7.3823; rmse: 2.7054 - rmse_val: 2.7170; r2: 0.5951 - r2_val: 0.5362 \nepoch: 0039, loss: 7.2085 - val_loss: 6.3068; rmse: 2.5262 - rmse_val: 2.5113; r2: 0.6098 - r2_val: 0.5502 \nepoch: 0040, loss: 6.1817 - val_loss: 5.7476; rmse: 2.4435 - rmse_val: 2.3974; r2: 0.6277 - r2_val: 0.5634 \nepoch: 0041, loss: 6.0029 - val_loss: 5.7766; rmse: 2.4438 - rmse_val: 2.4035; r2: 0.6368 - r2_val: 0.5704 \nepoch: 0042, loss: 5.9117 - val_loss: 5.5003; rmse: 2.3616 - rmse_val: 2.3453; r2: 0.6423 - r2_val: 0.5779 \nepoch: 0043, loss: 5.5117 - val_loss: 5.4255; rmse: 2.3383 - rmse_val: 2.3293; r2: 0.6470 - r2_val: 0.5857 \nepoch: 0044, loss: 5.4365 - val_loss: 5.2476; rmse: 2.3317 - rmse_val: 2.2908; r2: 0.6597 - r2_val: 0.6001 \nepoch: 0045, loss: 5.4849 - val_loss: 5.3195; rmse: 2.3563 - rmse_val: 2.3064; r2: 0.6653 - r2_val: 0.6096 \nepoch: 0046, loss: 5.4873 - val_loss: 5.0766; rmse: 2.3080 - rmse_val: 2.2531; r2: 0.6760 - r2_val: 0.6205 \nepoch: 0047, loss: 5.3252 - val_loss: 5.0966; rmse: 2.2991 - rmse_val: 2.2576; r2: 0.6814 - r2_val: 0.6258 \nepoch: 0048, loss: 5.2424 - val_loss: 5.0121; rmse: 2.2650 - rmse_val: 2.2388; r2: 0.6863 - r2_val: 0.6293 \nepoch: 0049, loss: 5.0542 - val_loss: 4.8665; rmse: 2.2085 - rmse_val: 2.2060; r2: 0.6896 - r2_val: 0.6278 \nepoch: 0050, loss: 4.8668 - val_loss: 4.8362; rmse: 2.1824 - rmse_val: 2.1991; r2: 0.6930 - r2_val: 0.6284 \nepoch: 0051, loss: 4.8131 - val_loss: 5.0188; rmse: 2.2219 - rmse_val: 2.2403; r2: 0.6978 - r2_val: 0.6379 \nepoch: 0052, loss: 4.8516 - val_loss: 4.6718; rmse: 2.1396 - rmse_val: 2.1614; r2: 0.7047 - r2_val: 0.6421 \nepoch: 0053, loss: 4.5713 - val_loss: 4.6620; rmse: 2.1307 - rmse_val: 2.1592; r2: 0.7110 - r2_val: 0.6450 \nepoch: 0054, loss: 4.4801 - val_loss: 4.5242; rmse: 2.0932 - rmse_val: 2.1270; r2: 0.7160 - r2_val: 0.6528 \nepoch: 0055, loss: 4.3661 - val_loss: 4.4713; rmse: 2.0755 - rmse_val: 2.1145; r2: 0.7213 - r2_val: 0.6581 \nepoch: 0056, loss: 4.2816 - val_loss: 4.4740; rmse: 2.0650 - rmse_val: 2.1152; r2: 0.7275 - r2_val: 0.6612 \nepoch: 0057, loss: 4.5733 - val_loss: 4.5595; rmse: 2.0714 - rmse_val: 2.1353; r2: 0.7304 - r2_val: 0.6633 \nepoch: 0058, loss: 4.2131 - val_loss: 5.0002; rmse: 2.1872 - rmse_val: 2.2361; r2: 0.7271 - r2_val: 0.6678 \nepoch: 0059, loss: 4.7702 - val_loss: 4.3178; rmse: 2.0227 - rmse_val: 2.0779; r2: 0.7378 - r2_val: 0.6744 \nepoch: 0060, loss: 4.0791 - val_loss: 4.6276; rmse: 2.1037 - rmse_val: 2.1512; r2: 0.7470 - r2_val: 0.6771 \nepoch: 0061, loss: 4.4901 - val_loss: 4.5017; rmse: 2.0685 - rmse_val: 2.1217; r2: 0.7529 - r2_val: 0.6821 \nepoch: 0062, loss: 
4.2978 - val_loss: 4.1363; rmse: 1.9545 - rmse_val: 2.0338; r2: 0.7549 - r2_val: 0.6852 \nepoch: 0063, loss: 3.8613 - val_loss: 4.1200; rmse: 1.9375 - rmse_val: 2.0298; r2: 0.7557 - r2_val: 0.6865 \nepoch: 0064, loss: 3.7739 - val_loss: 4.3667; rmse: 1.9943 - rmse_val: 2.0897; r2: 0.7607 - r2_val: 0.6888 \nepoch: 0065, loss: 3.8527 - val_loss: 4.0396; rmse: 1.9095 - rmse_val: 2.0099; r2: 0.7630 - r2_val: 0.6924 \nepoch: 0066, loss: 3.7287 - val_loss: 4.0439; rmse: 1.9098 - rmse_val: 2.0109; r2: 0.7676 - r2_val: 0.6967 \nepoch: 0067, loss: 3.5466 - val_loss: 3.9569; rmse: 1.8853 - rmse_val: 1.9892; r2: 0.7760 - r2_val: 0.7039 \nepoch: 0068, loss: 3.5412 - val_loss: 3.8153; rmse: 1.8398 - rmse_val: 1.9533; r2: 0.7801 - r2_val: 0.7074 \nepoch: 0069, loss: 3.4764 - val_loss: 3.8340; rmse: 1.8399 - rmse_val: 1.9581; r2: 0.7833 - r2_val: 0.7101 \nepoch: 0070, loss: 3.3608 - val_loss: 3.7365; rmse: 1.8093 - rmse_val: 1.9330; r2: 0.7893 - r2_val: 0.7157 \nepoch: 0071, loss: 3.2623 - val_loss: 3.6684; rmse: 1.7852 - rmse_val: 1.9153; r2: 0.7934 - r2_val: 0.7191 \nepoch: 0072, loss: 3.2819 - val_loss: 3.7425; rmse: 1.7840 - rmse_val: 1.9346; r2: 0.7945 - r2_val: 0.7201 \nepoch: 0073, loss: 3.2695 - val_loss: 4.1395; rmse: 1.8974 - rmse_val: 2.0346; r2: 0.7922 - r2_val: 0.7177 \nepoch: 0074, loss: 3.5258 - val_loss: 3.5701; rmse: 1.7390 - rmse_val: 1.8895; r2: 0.8025 - r2_val: 0.7273 \nepoch: 0075, loss: 3.0620 - val_loss: 4.5990; rmse: 2.0340 - rmse_val: 2.1445; r2: 0.8074 - r2_val: 0.7337 \nepoch: 0076, loss: 5.3185 - val_loss: 4.9014; rmse: 2.1724 - rmse_val: 2.2139; r2: 0.8180 - r2_val: 0.7471 \nepoch: 0077, loss: 3.9516 - val_loss: 4.0714; rmse: 1.9340 - rmse_val: 2.0178; r2: 0.8173 - r2_val: 0.7353 \nepoch: 0078, loss: 3.8604 - val_loss: 3.8416; rmse: 1.8709 - rmse_val: 1.9600; r2: 0.8205 - r2_val: 0.7390 \nepoch: 0079, loss: 3.3056 - val_loss: 3.6604; rmse: 1.8276 - rmse_val: 1.9132; r2: 0.8266 - r2_val: 0.7500 \nepoch: 0080, loss: 3.4320 - val_loss: 3.4375; rmse: 1.7252 - rmse_val: 1.8540; r2: 0.8241 - r2_val: 0.7461 \nepoch: 0081, loss: 3.0514 - val_loss: 3.6712; rmse: 1.7651 - rmse_val: 1.9160; r2: 0.8149 - r2_val: 0.7361 \nepoch: 0082, loss: 2.9551 - val_loss: 3.4654; rmse: 1.6743 - rmse_val: 1.8616; r2: 0.8161 - r2_val: 0.7374 \nepoch: 0083, loss: 2.8657 - val_loss: 3.6246; rmse: 1.7123 - rmse_val: 1.9038; r2: 0.8142 - r2_val: 0.7360 \nepoch: 0084, loss: 2.9223 - val_loss: 3.4392; rmse: 1.6499 - rmse_val: 1.8545; r2: 0.8214 - r2_val: 0.7420 \nepoch: 0085, loss: 2.6853 - val_loss: 3.3287; rmse: 1.6193 - rmse_val: 1.8245; r2: 0.8278 - r2_val: 0.7475 \nepoch: 0086, loss: 2.7484 - val_loss: 3.3792; rmse: 1.6422 - rmse_val: 1.8383; r2: 0.8309 - r2_val: 0.7496 \nepoch: 0087, loss: 2.6479 - val_loss: 4.2355; rmse: 1.8706 - rmse_val: 2.0580; r2: 0.8389 - r2_val: 0.7567 \nepoch: 0088, loss: 4.3542 - val_loss: 4.3409; rmse: 1.9324 - rmse_val: 2.0835; r2: 0.8469 - r2_val: 0.7644 \nepoch: 0089, loss: 3.1043 - val_loss: 3.2788; rmse: 1.6342 - rmse_val: 1.8107; r2: 0.8504 - r2_val: 0.7630 \nepoch: 0090, loss: 2.8566 - val_loss: 3.3728; rmse: 1.6491 - rmse_val: 1.8365; r2: 0.8512 - r2_val: 0.7619 \nepoch: 0091, loss: 2.5631 - val_loss: 3.0596; rmse: 1.5342 - rmse_val: 1.7492; r2: 0.8564 - r2_val: 0.7678 \nepoch: 0092, loss: 2.3386 - val_loss: 3.0213; rmse: 1.5022 - rmse_val: 1.7382; r2: 0.8570 - r2_val: 0.7679 \nepoch: 0093, loss: 2.2908 - val_loss: 3.0455; rmse: 1.4928 - rmse_val: 1.7451; r2: 0.8569 - r2_val: 0.7672 \nepoch: 0094, loss: 2.1912 - val_loss: 3.1985; rmse: 1.5222 - rmse_val: 1.7884; r2: 
0.8594 - r2_val: 0.7707 \nepoch: 0095, loss: 2.3929 - val_loss: 3.1973; rmse: 1.5212 - rmse_val: 1.7881; r2: 0.8624 - r2_val: 0.7741 \nepoch: 0096, loss: 2.2639 - val_loss: 2.9300; rmse: 1.4430 - rmse_val: 1.7117; r2: 0.8662 - r2_val: 0.7772 \nepoch: 0097, loss: 2.0631 - val_loss: 2.8963; rmse: 1.4366 - rmse_val: 1.7019; r2: 0.8689 - r2_val: 0.7783 \nepoch: 0098, loss: 2.1162 - val_loss: 4.0463; rmse: 1.7931 - rmse_val: 2.0115; r2: 0.8652 - r2_val: 0.7705 \nepoch: 0099, loss: 5.1869 - val_loss: 4.5848; rmse: 1.9034 - rmse_val: 2.1412; r2: 0.8575 - r2_val: 0.7564 \nepoch: 0100, loss: 3.3655 - val_loss: 5.5404; rmse: 2.0016 - rmse_val: 2.3538; r2: 0.8495 - r2_val: 0.7451 \nepoch: 0101, loss: 3.0376 - val_loss: 3.7595; rmse: 1.6420 - rmse_val: 1.9389; r2: 0.8582 - r2_val: 0.7525 \nepoch: 0102, loss: 2.4643 - val_loss: 3.2874; rmse: 1.4679 - rmse_val: 1.8131; r2: 0.8601 - r2_val: 0.7554 \nepoch: 0103, loss: 2.2148 - val_loss: 3.2553; rmse: 1.4748 - rmse_val: 1.8042; r2: 0.8637 - r2_val: 0.7595 \nepoch: 0104, loss: 2.1517 - val_loss: 3.0741; rmse: 1.4241 - rmse_val: 1.7533; r2: 0.8708 - r2_val: 0.7646 \nepoch: 0105, loss: 2.0813 - val_loss: 2.9879; rmse: 1.3819 - rmse_val: 1.7285; r2: 0.8769 - r2_val: 0.7704 \nepoch: 0106, loss: 1.9690 - val_loss: 3.0598; rmse: 1.3882 - rmse_val: 1.7492; r2: 0.8825 - r2_val: 0.7767 \nepoch: 0107, loss: 1.8415 - val_loss: 2.9877; rmse: 1.3662 - rmse_val: 1.7285; r2: 0.8866 - r2_val: 0.7768 \nepoch: 0108, loss: 1.8103 - val_loss: 2.9600; rmse: 1.3406 - rmse_val: 1.7205; r2: 0.8908 - r2_val: 0.7829 \nepoch: 0109, loss: 1.8919 - val_loss: 3.0264; rmse: 1.3729 - rmse_val: 1.7396; r2: 0.8913 - r2_val: 0.7809 \nepoch: 0110, loss: 2.9643 - val_loss: 3.2558; rmse: 1.4793 - rmse_val: 1.8044; r2: 0.8952 - r2_val: 0.7874 \nepoch: 0111, loss: 1.8211 - val_loss: 3.8305; rmse: 1.6506 - rmse_val: 1.9572; r2: 0.9010 - r2_val: 0.8062 \nepoch: 0112, loss: 2.3713 - val_loss: 2.6306; rmse: 1.2583 - rmse_val: 1.6219; r2: 0.9016 - r2_val: 0.8001 \nepoch: 0113, loss: 1.8482 - val_loss: 2.5405; rmse: 1.2346 - rmse_val: 1.5939; r2: 0.9059 - r2_val: 0.8093 \nepoch: 0114, loss: 2.7175 - val_loss: 2.6895; rmse: 1.3261 - rmse_val: 1.6400; r2: 0.9091 - r2_val: 0.8217 \nepoch: 0115, loss: 2.8133 - val_loss: 6.2839; rmse: 2.3680 - rmse_val: 2.5068; r2: 0.8919 - r2_val: 0.7896 \nepoch: 0116, loss: 3.9446 - val_loss: 2.8803; rmse: 1.3643 - rmse_val: 1.6971; r2: 0.8954 - r2_val: 0.8098 \nepoch: 0117, loss: 2.6693 - val_loss: 3.3348; rmse: 1.5061 - rmse_val: 1.8262; r2: 0.8931 - r2_val: 0.8088 \nepoch: 0118, loss: 1.9137 - val_loss: 3.0221; rmse: 1.4728 - rmse_val: 1.7384; r2: 0.8923 - r2_val: 0.7972 \nepoch: 0119, loss: 2.0586 - val_loss: 2.6265; rmse: 1.2832 - rmse_val: 1.6207; r2: 0.8963 - r2_val: 0.8077 \nepoch: 0120, loss: 2.3230 - val_loss: 4.0262; rmse: 1.7292 - rmse_val: 2.0065; r2: 0.8973 - r2_val: 0.8159 \nepoch: 0121, loss: 2.5002 - val_loss: 2.5481; rmse: 1.2492 - rmse_val: 1.5963; r2: 0.9000 - r2_val: 0.8068 \nepoch: 0122, loss: 1.6950 - val_loss: 2.6969; rmse: 1.3075 - rmse_val: 1.6422; r2: 0.9003 - r2_val: 0.8023 \nepoch: 0123, loss: 1.5852 - val_loss: 2.7601; rmse: 1.3042 - rmse_val: 1.6613; r2: 0.9050 - r2_val: 0.8128 \nepoch: 0124, loss: 1.8610 - val_loss: 2.6606; rmse: 1.2612 - rmse_val: 1.6311; r2: 0.9070 - r2_val: 0.8139 \nepoch: 0125, loss: 1.5458 - val_loss: 2.5524; rmse: 1.2349 - rmse_val: 1.5976; r2: 0.9072 - r2_val: 0.8091 \nepoch: 0126, loss: 1.4636 - val_loss: 2.4365; rmse: 1.1694 - rmse_val: 1.5609; r2: 0.9102 - r2_val: 0.8149 \nepoch: 0127, loss: 1.3812 - val_loss: 
2.6050; rmse: 1.2227 - rmse_val: 1.6140; r2: 0.9123 - r2_val: 0.8192
[Per-epoch training log condensed: epochs 0128-0460 repeat the same format (loss/val_loss, rmse/rmse_val, r2/r2_val). Training loss falls from ~1.58 to roughly 0.1-0.3, validation loss from ~2.56 to roughly 1.4-1.5 (best ~1.35 near epoch 0411), and validation R² rises from ~0.82 to ~0.89-0.90.]
epoch: 0461, loss: 0.2729 - val_loss: 1.4599; rmse: 0.5597 - rmse_val: 1.2083; r2: 0.9927 - r2_val: 0.8960

Restoring model weights from the end of the best epoch.

Epoch 00461: early stopping
513 64 65
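The "Restoring model weights from the end of the best epoch." and "Epoch 00461: early stopping" messages above are the standard console output of a Keras EarlyStopping callback configured with restore_best_weights=True and verbose=1, and the "Train on 513 samples, validate on 64 samples" progress lines that follow come from model.fit with verbose=1. The sketch below is a minimal reconstruction of such a setup under stated assumptions: the network architecture, patience, feature count, and random data are placeholders rather than this notebook's actual code, and the exact progress-bar text varies with the TensorFlow version.

```python
# Minimal sketch (assumptions, not this notebook's code) of a tf.keras regression fit
# whose console output resembles the surrounding logs: MSE loss, a validation split,
# and early stopping on val_loss with the best weights restored.
import numpy as np
import tensorflow as tf
from sklearn.metrics import r2_score

# Assumed shapes only: 513 train / 64 validation samples, 13 features.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(513, 13)), rng.normal(size=513)
X_val, y_val = rng.normal(size=(64, 13)), rng.normal(size=64)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(13,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # same quantity tracked in the logs above
    patience=100,                # assumed patience
    restore_best_weights=True,   # prints "Restoring model weights from the end of the best epoch."
    verbose=1,                   # prints "Epoch NNNNN: early stopping"
)

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=411,
    callbacks=[early_stop],
    verbose=1,                   # per-epoch "513/513 [=====] - loss - val_loss" lines
)

# RMSE and R^2 figures like those in the first log can be derived from predictions:
val_pred = model.predict(X_val).ravel()
rmse_val = float(np.sqrt(np.mean((val_pred - y_val) ** 2)))
r2_val = r2_score(y_val, val_pred)
```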
Train on 513 samples, validate on 64 samples
Epoch 1/411
513/513 [==============================] - 1s 2ms/sample - loss: 28.5087 - val_loss: 25.8998
[Keras progress log condensed: epochs 2-347 of 411 repeat the same per-epoch line; training loss falls from ~26.4 to ~0.27 and validation loss from ~24.0 to roughly 1.5-1.6, bottoming out near 1.43 around epoch 339.]
Epoch 348/411
513/513 [==============================] - 0s 275us/sample - loss: 0.2232 - val_loss: 1.6310
Epoch
349/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1717 - val_loss: 1.5499\nEpoch 350/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.1950 - val_loss: 1.6718\nEpoch 351/411\n513/513 [==============================] - 0s 268us/sample - loss: 0.1809 - val_loss: 1.5898\nEpoch 352/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.1757 - val_loss: 1.7601\nEpoch 353/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.1880 - val_loss: 1.5708\nEpoch 354/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.3429 - val_loss: 1.6002\nEpoch 355/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.1926 - val_loss: 1.5919\nEpoch 356/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.1720 - val_loss: 1.5375\nEpoch 357/411\n513/513 [==============================] - 0s 269us/sample - loss: 0.1509 - val_loss: 1.6052\nEpoch 358/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.1450 - val_loss: 1.5490\nEpoch 359/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.1406 - val_loss: 1.5798\nEpoch 360/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1360 - val_loss: 1.4568\nEpoch 361/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1961 - val_loss: 1.3790\nEpoch 362/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.1640 - val_loss: 1.2894\nEpoch 363/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.1469 - val_loss: 1.2267\nEpoch 364/411\n513/513 [==============================] - 0s 285us/sample - loss: 0.1521 - val_loss: 1.3282\nEpoch 365/411\n513/513 [==============================] - 0s 292us/sample - loss: 0.1472 - val_loss: 1.3170\nEpoch 366/411\n513/513 [==============================] - 0s 284us/sample - loss: 0.1229 - val_loss: 1.3865\nEpoch 367/411\n513/513 [==============================] - 0s 287us/sample - loss: 0.1709 - val_loss: 1.2887\nEpoch 368/411\n513/513 [==============================] - 0s 298us/sample - loss: 0.1678 - val_loss: 1.4202\nEpoch 369/411\n513/513 [==============================] - 0s 284us/sample - loss: 0.2213 - val_loss: 1.2991\nEpoch 370/411\n513/513 [==============================] - 0s 289us/sample - loss: 0.1888 - val_loss: 1.3833\nEpoch 371/411\n513/513 [==============================] - 0s 289us/sample - loss: 0.1602 - val_loss: 1.4072\nEpoch 372/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.1291 - val_loss: 1.3767\nEpoch 373/411\n513/513 [==============================] - 0s 290us/sample - loss: 0.1179 - val_loss: 1.4661\nEpoch 374/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.1197 - val_loss: 1.3718\nEpoch 375/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.1781 - val_loss: 1.5062\nEpoch 376/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.1700 - val_loss: 1.3953\nEpoch 377/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.1231 - val_loss: 1.4122\nEpoch 378/411\n513/513 [==============================] - 0s 281us/sample - loss: 0.1180 - val_loss: 1.4354\nEpoch 379/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1060 - val_loss: 1.4751\nEpoch 380/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.1668 - val_loss: 1.3651\nEpoch 381/411\n513/513 [==============================] - 0s 
270us/sample - loss: 0.2338 - val_loss: 1.5903\nEpoch 382/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.3081 - val_loss: 1.4265\nEpoch 383/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.2612 - val_loss: 1.5308\nEpoch 384/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.1896 - val_loss: 1.4611\nEpoch 385/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.2752 - val_loss: 1.5003\nEpoch 386/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1892 - val_loss: 1.4558\nEpoch 387/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.1642 - val_loss: 1.4503\nEpoch 388/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.1719 - val_loss: 1.4331\nEpoch 389/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.2133 - val_loss: 1.4663\nEpoch 390/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.1477 - val_loss: 1.5886\nEpoch 391/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.1117 - val_loss: 1.4590\nEpoch 392/411\n513/513 [==============================] - 0s 284us/sample - loss: 0.1159 - val_loss: 1.5898\nEpoch 393/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.1093 - val_loss: 1.4561\nEpoch 394/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1095 - val_loss: 1.5878\nEpoch 395/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.1374 - val_loss: 1.5113\nEpoch 396/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1072 - val_loss: 1.4725\nEpoch 397/411\n513/513 [==============================] - 0s 256us/sample - loss: 0.1030 - val_loss: 1.5089\nEpoch 398/411\n513/513 [==============================] - 0s 256us/sample - loss: 0.1013 - val_loss: 1.4826\nEpoch 399/411\n513/513 [==============================] - 0s 249us/sample - loss: 0.0983 - val_loss: 1.4881\nEpoch 400/411\n513/513 [==============================] - 0s 253us/sample - loss: 0.0937 - val_loss: 1.5271\nEpoch 401/411\n513/513 [==============================] - 0s 257us/sample - loss: 0.0979 - val_loss: 1.4478\nEpoch 402/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.1516 - val_loss: 1.5605\nEpoch 403/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.1144 - val_loss: 1.4738\nEpoch 404/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.1494 - val_loss: 1.5985\nEpoch 405/411\n513/513 [==============================] - 0s 258us/sample - loss: 0.1342 - val_loss: 1.4664\nEpoch 406/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.1146 - val_loss: 1.5750\nEpoch 407/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.1379 - val_loss: 1.4447\nEpoch 408/411\n513/513 [==============================] - 0s 253us/sample - loss: 0.1146 - val_loss: 1.4712\nEpoch 409/411\n513/513 [==============================] - 0s 252us/sample - loss: 0.0849 - val_loss: 1.5690\nEpoch 410/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.1083 - val_loss: 1.4910\nEpoch 411/411\n513/513 [==============================] - 0s 253us/sample - loss: 0.0868 - val_loss: 1.5205\n513 64 65\nTrain on 513 samples, validate on 64 samples\nEpoch 1/411\n513/513 [==============================] - 1s 2ms/sample - loss: 30.2616 - val_loss: 21.5831\nEpoch 2/411\n513/513 [==============================] - 0s 291us/sample 
- loss: 26.9306 - val_loss: 18.1900\nEpoch 3/411\n513/513 [==============================] - 0s 268us/sample - loss: 23.3852 - val_loss: 14.7608\nEpoch 4/411\n513/513 [==============================] - 0s 258us/sample - loss: 19.8146 - val_loss: 11.2163\nEpoch 5/411\n513/513 [==============================] - 0s 272us/sample - loss: 16.0113 - val_loss: 8.4580\nEpoch 6/411\n513/513 [==============================] - 0s 279us/sample - loss: 13.4150 - val_loss: 8.0293\nEpoch 7/411\n513/513 [==============================] - 0s 270us/sample - loss: 12.7260 - val_loss: 8.3795\nEpoch 8/411\n513/513 [==============================] - 0s 278us/sample - loss: 12.4665 - val_loss: 7.7814\nEpoch 9/411\n513/513 [==============================] - 0s 276us/sample - loss: 11.9315 - val_loss: 7.3916\nEpoch 10/411\n513/513 [==============================] - 0s 278us/sample - loss: 12.2094 - val_loss: 7.5750\nEpoch 11/411\n513/513 [==============================] - 0s 271us/sample - loss: 12.5180 - val_loss: 7.4946\nEpoch 12/411\n513/513 [==============================] - 0s 288us/sample - loss: 12.1214 - val_loss: 7.0285\nEpoch 13/411\n513/513 [==============================] - 0s 286us/sample - loss: 11.2378 - val_loss: 6.9363\nEpoch 14/411\n513/513 [==============================] - 0s 287us/sample - loss: 10.6471 - val_loss: 7.3571\nEpoch 15/411\n513/513 [==============================] - 0s 292us/sample - loss: 10.3246 - val_loss: 6.9150\nEpoch 16/411\n513/513 [==============================] - 0s 289us/sample - loss: 9.9595 - val_loss: 7.8553\nEpoch 17/411\n513/513 [==============================] - 0s 274us/sample - loss: 10.0252 - val_loss: 8.3993\nEpoch 18/411\n513/513 [==============================] - 0s 280us/sample - loss: 10.1146 - val_loss: 8.3287\nEpoch 19/411\n513/513 [==============================] - 0s 280us/sample - loss: 9.4299 - val_loss: 6.7022\nEpoch 20/411\n513/513 [==============================] - 0s 277us/sample - loss: 8.7757 - val_loss: 6.1971\nEpoch 21/411\n513/513 [==============================] - 0s 286us/sample - loss: 8.6316 - val_loss: 6.0063\nEpoch 22/411\n513/513 [==============================] - 0s 293us/sample - loss: 8.3530 - val_loss: 6.2974\nEpoch 23/411\n513/513 [==============================] - 0s 295us/sample - loss: 8.2022 - val_loss: 7.4423\nEpoch 24/411\n513/513 [==============================] - 0s 289us/sample - loss: 8.5277 - val_loss: 6.6018\nEpoch 25/411\n513/513 [==============================] - 0s 290us/sample - loss: 7.8091 - val_loss: 5.5206\nEpoch 26/411\n513/513 [==============================] - 0s 291us/sample - loss: 7.4532 - val_loss: 5.5652\nEpoch 27/411\n513/513 [==============================] - 0s 292us/sample - loss: 7.2745 - val_loss: 5.6019\nEpoch 28/411\n513/513 [==============================] - 0s 278us/sample - loss: 7.4198 - val_loss: 6.8050\nEpoch 29/411\n513/513 [==============================] - 0s 282us/sample - loss: 7.5316 - val_loss: 5.1607\nEpoch 30/411\n513/513 [==============================] - 0s 281us/sample - loss: 6.7533 - val_loss: 4.7258\nEpoch 31/411\n513/513 [==============================] - 0s 278us/sample - loss: 6.6964 - val_loss: 4.8751\nEpoch 32/411\n513/513 [==============================] - 0s 279us/sample - loss: 6.7742 - val_loss: 6.4770\nEpoch 33/411\n513/513 [==============================] - 0s 280us/sample - loss: 6.6144 - val_loss: 4.4763\nEpoch 34/411\n513/513 [==============================] - 0s 287us/sample - loss: 6.0991 - val_loss: 4.3652\nEpoch 35/411\n513/513 
[==============================] - 0s 274us/sample - loss: 5.9684 - val_loss: 4.2671\nEpoch 36/411\n513/513 [==============================] - 0s 285us/sample - loss: 5.6665 - val_loss: 4.3611\nEpoch 37/411\n513/513 [==============================] - 0s 288us/sample - loss: 5.7190 - val_loss: 4.5672\nEpoch 38/411\n513/513 [==============================] - 0s 277us/sample - loss: 5.3321 - val_loss: 4.3060\nEpoch 39/411\n513/513 [==============================] - 0s 282us/sample - loss: 5.8642 - val_loss: 4.0338\nEpoch 40/411\n513/513 [==============================] - 0s 289us/sample - loss: 5.3407 - val_loss: 4.7078\nEpoch 41/411\n513/513 [==============================] - 0s 290us/sample - loss: 5.2602 - val_loss: 3.8689\nEpoch 42/411\n513/513 [==============================] - 0s 290us/sample - loss: 4.9225 - val_loss: 3.8622\nEpoch 43/411\n513/513 [==============================] - 0s 289us/sample - loss: 4.9971 - val_loss: 4.0028\nEpoch 44/411\n513/513 [==============================] - 0s 287us/sample - loss: 4.8439 - val_loss: 3.5951\nEpoch 45/411\n513/513 [==============================] - 0s 263us/sample - loss: 4.6893 - val_loss: 3.6095\nEpoch 46/411\n513/513 [==============================] - 0s 289us/sample - loss: 4.6143 - val_loss: 3.9017\nEpoch 47/411\n513/513 [==============================] - 0s 290us/sample - loss: 5.1546 - val_loss: 3.7482\nEpoch 48/411\n513/513 [==============================] - 0s 273us/sample - loss: 4.4633 - val_loss: 3.7328\nEpoch 49/411\n513/513 [==============================] - 0s 273us/sample - loss: 4.4176 - val_loss: 3.5592\nEpoch 50/411\n513/513 [==============================] - 0s 297us/sample - loss: 4.2409 - val_loss: 3.6936\nEpoch 51/411\n513/513 [==============================] - 0s 284us/sample - loss: 4.3959 - val_loss: 3.2423\nEpoch 52/411\n513/513 [==============================] - 0s 300us/sample - loss: 4.3248 - val_loss: 3.3316\nEpoch 53/411\n513/513 [==============================] - 0s 284us/sample - loss: 4.1984 - val_loss: 3.8705\nEpoch 54/411\n513/513 [==============================] - 0s 285us/sample - loss: 4.5931 - val_loss: 3.2099\nEpoch 55/411\n513/513 [==============================] - 0s 291us/sample - loss: 4.0335 - val_loss: 3.0244\nEpoch 56/411\n513/513 [==============================] - 0s 293us/sample - loss: 3.9435 - val_loss: 2.9766\nEpoch 57/411\n513/513 [==============================] - 0s 278us/sample - loss: 3.9370 - val_loss: 3.0515\nEpoch 58/411\n513/513 [==============================] - 0s 305us/sample - loss: 3.7541 - val_loss: 3.0380\nEpoch 59/411\n513/513 [==============================] - 0s 305us/sample - loss: 3.6079 - val_loss: 3.0453\nEpoch 60/411\n513/513 [==============================] - 0s 297us/sample - loss: 3.5957 - val_loss: 3.0513\nEpoch 61/411\n513/513 [==============================] - 0s 319us/sample - loss: 3.4916 - val_loss: 3.0835\nEpoch 62/411\n513/513 [==============================] - 0s 289us/sample - loss: 3.5465 - val_loss: 3.1035\nEpoch 63/411\n513/513 [==============================] - 0s 323us/sample - loss: 3.2969 - val_loss: 2.8826\nEpoch 64/411\n513/513 [==============================] - 0s 313us/sample - loss: 3.3023 - val_loss: 2.8218\nEpoch 65/411\n513/513 [==============================] - 0s 287us/sample - loss: 3.1881 - val_loss: 2.8117\nEpoch 66/411\n513/513 [==============================] - 0s 289us/sample - loss: 3.1313 - val_loss: 2.8647\nEpoch 67/411\n513/513 [==============================] - 0s 296us/sample - loss: 3.6532 - val_loss: 3.2918\nEpoch 
68/411\n513/513 [==============================] - 0s 288us/sample - loss: 3.3561 - val_loss: 3.0481\nEpoch 69/411\n513/513 [==============================] - 0s 313us/sample - loss: 3.6354 - val_loss: 2.9844\nEpoch 70/411\n513/513 [==============================] - 0s 294us/sample - loss: 2.9968 - val_loss: 3.2417\nEpoch 71/411\n513/513 [==============================] - 0s 275us/sample - loss: 3.6403 - val_loss: 2.6568\nEpoch 72/411\n513/513 [==============================] - 0s 273us/sample - loss: 3.0338 - val_loss: 3.3609\nEpoch 73/411\n513/513 [==============================] - 0s 277us/sample - loss: 3.2054 - val_loss: 2.5660\nEpoch 74/411\n513/513 [==============================] - 0s 282us/sample - loss: 2.8726 - val_loss: 2.5816\nEpoch 75/411\n513/513 [==============================] - 0s 274us/sample - loss: 2.7379 - val_loss: 2.4927\nEpoch 76/411\n513/513 [==============================] - 0s 279us/sample - loss: 2.6828 - val_loss: 2.4872\nEpoch 77/411\n513/513 [==============================] - 0s 280us/sample - loss: 2.7612 - val_loss: 2.3183\nEpoch 78/411\n513/513 [==============================] - 0s 283us/sample - loss: 2.5963 - val_loss: 2.2925\nEpoch 79/411\n513/513 [==============================] - 0s 279us/sample - loss: 2.5171 - val_loss: 2.2862\nEpoch 80/411\n513/513 [==============================] - 0s 344us/sample - loss: 2.5216 - val_loss: 2.6007\nEpoch 81/411\n513/513 [==============================] - 0s 323us/sample - loss: 3.0599 - val_loss: 2.5653\nEpoch 82/411\n513/513 [==============================] - 0s 289us/sample - loss: 2.6251 - val_loss: 2.3613\nEpoch 83/411\n513/513 [==============================] - 0s 285us/sample - loss: 2.4955 - val_loss: 2.2877\nEpoch 84/411\n513/513 [==============================] - 0s 281us/sample - loss: 2.5285 - val_loss: 2.4036\nEpoch 85/411\n513/513 [==============================] - 0s 270us/sample - loss: 2.4538 - val_loss: 2.7122\nEpoch 86/411\n513/513 [==============================] - 0s 262us/sample - loss: 2.3632 - val_loss: 2.7335\nEpoch 87/411\n513/513 [==============================] - 0s 288us/sample - loss: 2.6646 - val_loss: 2.3465\nEpoch 88/411\n513/513 [==============================] - 0s 295us/sample - loss: 2.3281 - val_loss: 2.7508\nEpoch 89/411\n513/513 [==============================] - 0s 262us/sample - loss: 2.3064 - val_loss: 2.2052\nEpoch 90/411\n513/513 [==============================] - 0s 268us/sample - loss: 2.1055 - val_loss: 2.1769\nEpoch 91/411\n513/513 [==============================] - 0s 281us/sample - loss: 2.0438 - val_loss: 2.1501\nEpoch 92/411\n513/513 [==============================] - 0s 291us/sample - loss: 1.9590 - val_loss: 2.2384\nEpoch 93/411\n513/513 [==============================] - 0s 298us/sample - loss: 2.0956 - val_loss: 2.1808\nEpoch 94/411\n513/513 [==============================] - 0s 274us/sample - loss: 2.5015 - val_loss: 2.8175\nEpoch 95/411\n513/513 [==============================] - 0s 282us/sample - loss: 2.1420 - val_loss: 2.4579\nEpoch 96/411\n513/513 [==============================] - 0s 262us/sample - loss: 2.0479 - val_loss: 2.1511\nEpoch 97/411\n513/513 [==============================] - 0s 261us/sample - loss: 2.0384 - val_loss: 2.0480\nEpoch 98/411\n513/513 [==============================] - 0s 274us/sample - loss: 1.8779 - val_loss: 1.9230\nEpoch 99/411\n513/513 [==============================] - 0s 281us/sample - loss: 2.3272 - val_loss: 2.8968\nEpoch 100/411\n513/513 [==============================] - 0s 276us/sample - loss: 2.4743 - val_loss: 
2.0180\nEpoch 101/411\n513/513 [==============================] - 0s 288us/sample - loss: 2.3034 - val_loss: 1.9330\nEpoch 102/411\n513/513 [==============================] - 0s 256us/sample - loss: 1.7838 - val_loss: 2.1678\nEpoch 103/411\n513/513 [==============================] - 0s 263us/sample - loss: 1.8691 - val_loss: 1.8682\nEpoch 104/411\n513/513 [==============================] - 0s 261us/sample - loss: 1.6577 - val_loss: 1.8987\nEpoch 105/411\n513/513 [==============================] - 0s 275us/sample - loss: 1.6355 - val_loss: 1.9586\nEpoch 106/411\n513/513 [==============================] - 0s 301us/sample - loss: 1.6837 - val_loss: 1.8772\nEpoch 107/411\n513/513 [==============================] - 0s 288us/sample - loss: 1.6537 - val_loss: 1.9006\nEpoch 108/411\n513/513 [==============================] - 0s 279us/sample - loss: 1.7467 - val_loss: 2.1047\nEpoch 109/411\n513/513 [==============================] - 0s 274us/sample - loss: 1.5691 - val_loss: 2.2132\nEpoch 110/411\n513/513 [==============================] - 0s 271us/sample - loss: 1.6719 - val_loss: 1.9532\nEpoch 111/411\n513/513 [==============================] - 0s 272us/sample - loss: 1.4628 - val_loss: 1.9483\nEpoch 112/411\n513/513 [==============================] - 0s 267us/sample - loss: 1.4506 - val_loss: 1.9440\nEpoch 113/411\n513/513 [==============================] - 0s 262us/sample - loss: 1.3961 - val_loss: 1.9203\nEpoch 114/411\n513/513 [==============================] - 0s 276us/sample - loss: 1.3607 - val_loss: 1.8627\nEpoch 115/411\n513/513 [==============================] - 0s 288us/sample - loss: 1.3327 - val_loss: 1.9067\nEpoch 116/411\n513/513 [==============================] - 0s 262us/sample - loss: 1.8045 - val_loss: 1.7840\nEpoch 117/411\n513/513 [==============================] - 0s 269us/sample - loss: 1.5835 - val_loss: 1.7505\nEpoch 118/411\n513/513 [==============================] - 0s 282us/sample - loss: 2.4387 - val_loss: 3.2056\nEpoch 119/411\n513/513 [==============================] - 0s 260us/sample - loss: 1.8186 - val_loss: 3.1490\nEpoch 120/411\n513/513 [==============================] - 0s 255us/sample - loss: 2.9627 - val_loss: 1.7760\nEpoch 121/411\n513/513 [==============================] - 0s 265us/sample - loss: 2.4336 - val_loss: 2.8541\nEpoch 122/411\n513/513 [==============================] - 0s 271us/sample - loss: 1.6486 - val_loss: 2.2175\nEpoch 123/411\n513/513 [==============================] - 0s 259us/sample - loss: 1.5903 - val_loss: 1.9887\nEpoch 124/411\n513/513 [==============================] - 0s 273us/sample - loss: 2.0385 - val_loss: 2.2970\nEpoch 125/411\n513/513 [==============================] - 0s 257us/sample - loss: 1.3929 - val_loss: 2.5456\nEpoch 126/411\n513/513 [==============================] - 0s 268us/sample - loss: 2.1890 - val_loss: 1.8377\nEpoch 127/411\n513/513 [==============================] - 0s 285us/sample - loss: 1.4069 - val_loss: 2.3557\nEpoch 128/411\n513/513 [==============================] - 0s 262us/sample - loss: 1.4426 - val_loss: 1.9982\nEpoch 129/411\n513/513 [==============================] - 0s 275us/sample - loss: 1.8102 - val_loss: 1.9986\nEpoch 130/411\n513/513 [==============================] - 0s 257us/sample - loss: 1.2337 - val_loss: 2.1656\nEpoch 131/411\n513/513 [==============================] - 0s 275us/sample - loss: 1.4916 - val_loss: 1.7917\nEpoch 132/411\n513/513 [==============================] - 0s 278us/sample - loss: 1.1294 - val_loss: 1.7330\nEpoch 133/411\n513/513 [==============================] - 
0s 267us/sample - loss: 1.1087 - val_loss: 1.8407\nEpoch 134/411\n513/513 [==============================] - 0s 271us/sample - loss: 1.3821 - val_loss: 1.7302\nEpoch 135/411\n513/513 [==============================] - 0s 256us/sample - loss: 1.2885 - val_loss: 2.2633\nEpoch 136/411\n513/513 [==============================] - 0s 257us/sample - loss: 1.2599 - val_loss: 1.8557\nEpoch 137/411\n513/513 [==============================] - 0s 260us/sample - loss: 1.3686 - val_loss: 1.7792\nEpoch 138/411\n513/513 [==============================] - 0s 260us/sample - loss: 1.0777 - val_loss: 1.7238\nEpoch 139/411\n513/513 [==============================] - 0s 258us/sample - loss: 0.9996 - val_loss: 1.7185\nEpoch 140/411\n513/513 [==============================] - 0s 272us/sample - loss: 1.0104 - val_loss: 1.7618\nEpoch 141/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.9738 - val_loss: 1.7205\nEpoch 142/411\n513/513 [==============================] - 0s 267us/sample - loss: 0.9448 - val_loss: 1.6933\nEpoch 143/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.9765 - val_loss: 1.6674\nEpoch 144/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.9693 - val_loss: 1.6788\nEpoch 145/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.9229 - val_loss: 1.6352\nEpoch 146/411\n513/513 [==============================] - 0s 253us/sample - loss: 0.9254 - val_loss: 1.7358\nEpoch 147/411\n513/513 [==============================] - 0s 261us/sample - loss: 1.0753 - val_loss: 1.6019\nEpoch 148/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.9591 - val_loss: 1.5996\nEpoch 149/411\n513/513 [==============================] - 0s 258us/sample - loss: 1.0143 - val_loss: 1.6393\nEpoch 150/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.9586 - val_loss: 1.9121\nEpoch 151/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.9881 - val_loss: 1.8517\nEpoch 152/411\n513/513 [==============================] - 0s 282us/sample - loss: 1.8756 - val_loss: 1.7992\nEpoch 153/411\n513/513 [==============================] - 0s 267us/sample - loss: 1.2945 - val_loss: 2.7346\nEpoch 154/411\n513/513 [==============================] - 0s 279us/sample - loss: 1.2877 - val_loss: 2.2714\nEpoch 155/411\n513/513 [==============================] - 0s 271us/sample - loss: 1.3611 - val_loss: 1.6385\nEpoch 156/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.8728 - val_loss: 1.5872\nEpoch 157/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.8612 - val_loss: 1.6678\nEpoch 158/411\n513/513 [==============================] - 0s 281us/sample - loss: 0.9037 - val_loss: 1.5735\nEpoch 159/411\n513/513 [==============================] - 0s 284us/sample - loss: 0.8409 - val_loss: 1.6172\nEpoch 160/411\n513/513 [==============================] - 0s 292us/sample - loss: 0.8093 - val_loss: 1.6658\nEpoch 161/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.7880 - val_loss: 1.7016\nEpoch 162/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.7726 - val_loss: 1.7461\nEpoch 163/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.8456 - val_loss: 1.7604\nEpoch 164/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.8132 - val_loss: 1.7095\nEpoch 165/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.8419 - val_loss: 1.7400\nEpoch 
166/411\n513/513 [==============================] - 0s 290us/sample - loss: 0.7785 - val_loss: 1.5964\nEpoch 167/411\n513/513 [==============================] - 0s 290us/sample - loss: 0.6929 - val_loss: 1.6030\nEpoch 168/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.7097 - val_loss: 1.5904\nEpoch 169/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.6814 - val_loss: 1.5804\nEpoch 170/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.6602 - val_loss: 1.5810\nEpoch 171/411\n513/513 [==============================] - 0s 249us/sample - loss: 0.6606 - val_loss: 1.6272\nEpoch 172/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.6499 - val_loss: 1.7137\nEpoch 173/411\n513/513 [==============================] - 0s 286us/sample - loss: 0.7155 - val_loss: 1.6421\nEpoch 174/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.8012 - val_loss: 1.5852\nEpoch 175/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.7112 - val_loss: 1.5823\nEpoch 176/411\n513/513 [==============================] - 0s 280us/sample - loss: 0.8135 - val_loss: 1.7505\nEpoch 177/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.6989 - val_loss: 1.8170\nEpoch 178/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.7283 - val_loss: 1.6903\nEpoch 179/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.7950 - val_loss: 1.5881\nEpoch 180/411\n513/513 [==============================] - 0s 279us/sample - loss: 0.6052 - val_loss: 1.6672\nEpoch 181/411\n513/513 [==============================] - 0s 282us/sample - loss: 0.6510 - val_loss: 1.5792\nEpoch 182/411\n513/513 [==============================] - 0s 287us/sample - loss: 0.6154 - val_loss: 1.5597\nEpoch 183/411\n513/513 [==============================] - 0s 281us/sample - loss: 0.5707 - val_loss: 1.5488\nEpoch 184/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.5755 - val_loss: 1.5536\nEpoch 185/411\n513/513 [==============================] - 0s 269us/sample - loss: 0.5489 - val_loss: 1.5408\nEpoch 186/411\n513/513 [==============================] - 0s 279us/sample - loss: 0.5991 - val_loss: 1.6084\nEpoch 187/411\n513/513 [==============================] - 0s 279us/sample - loss: 0.5629 - val_loss: 1.6499\nEpoch 188/411\n513/513 [==============================] - 0s 278us/sample - loss: 0.5716 - val_loss: 1.6708\nEpoch 189/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.8116 - val_loss: 1.5536\nEpoch 190/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.5497 - val_loss: 1.6647\nEpoch 191/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.6586 - val_loss: 1.5790\nEpoch 192/411\n513/513 [==============================] - 0s 247us/sample - loss: 0.8016 - val_loss: 1.5077\nEpoch 193/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.8594 - val_loss: 1.6759\nEpoch 194/411\n513/513 [==============================] - 0s 253us/sample - loss: 0.7477 - val_loss: 1.4862\nEpoch 195/411\n513/513 [==============================] - 0s 280us/sample - loss: 0.6680 - val_loss: 1.5061\nEpoch 196/411\n513/513 [==============================] - 0s 280us/sample - loss: 0.6282 - val_loss: 1.5639\nEpoch 197/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.5490 - val_loss: 1.6578\nEpoch 198/411\n513/513 [==============================] - 0s 
267us/sample - loss: 0.5883 - val_loss: 1.8168\nEpoch 199/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.9002 - val_loss: 1.8893\nEpoch 200/411\n513/513 [==============================] - 0s 269us/sample - loss: 1.8365 - val_loss: 1.7832\nEpoch 201/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.8327 - val_loss: 1.7891\nEpoch 202/411\n513/513 [==============================] - 0s 286us/sample - loss: 1.5835 - val_loss: 2.3125\nEpoch 203/411\n513/513 [==============================] - 0s 287us/sample - loss: 0.8698 - val_loss: 2.2383\nEpoch 204/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.9492 - val_loss: 1.8194\nEpoch 205/411\n513/513 [==============================] - 0s 279us/sample - loss: 1.1122 - val_loss: 1.5700\nEpoch 206/411\n513/513 [==============================] - 0s 269us/sample - loss: 0.5987 - val_loss: 1.5680\nEpoch 207/411\n513/513 [==============================] - 0s 269us/sample - loss: 0.6118 - val_loss: 1.5307\nEpoch 208/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.5899 - val_loss: 1.4616\nEpoch 209/411\n513/513 [==============================] - 0s 258us/sample - loss: 0.5229 - val_loss: 1.4523\nEpoch 210/411\n513/513 [==============================] - 0s 289us/sample - loss: 0.4990 - val_loss: 1.5168\nEpoch 211/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.5386 - val_loss: 1.5287\nEpoch 212/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.5423 - val_loss: 1.5984\nEpoch 213/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.4792 - val_loss: 1.6097\nEpoch 214/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.7106 - val_loss: 1.5257\nEpoch 215/411\n513/513 [==============================] - 0s 255us/sample - loss: 0.5709 - val_loss: 1.6091\nEpoch 216/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.4878 - val_loss: 1.4988\nEpoch 217/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.4875 - val_loss: 1.4700\nEpoch 218/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.4618 - val_loss: 1.4623\nEpoch 219/411\n513/513 [==============================] - 0s 286us/sample - loss: 0.4542 - val_loss: 1.4980\nEpoch 220/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.4816 - val_loss: 1.4687\nEpoch 221/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.4469 - val_loss: 1.4404\nEpoch 222/411\n513/513 [==============================] - 0s 298us/sample - loss: 0.4077 - val_loss: 1.4650\nEpoch 223/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.4114 - val_loss: 1.4670\nEpoch 224/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.4216 - val_loss: 1.4623\nEpoch 225/411\n513/513 [==============================] - 0s 301us/sample - loss: 0.4235 - val_loss: 1.5088\nEpoch 226/411\n513/513 [==============================] - 0s 267us/sample - loss: 0.4224 - val_loss: 1.4714\nEpoch 227/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.3852 - val_loss: 1.4616\nEpoch 228/411\n513/513 [==============================] - 0s 268us/sample - loss: 0.3666 - val_loss: 1.7853\nEpoch 229/411\n513/513 [==============================] - 0s 279us/sample - loss: 1.0151 - val_loss: 1.4483\nEpoch 230/411\n513/513 [==============================] - 0s 301us/sample - loss: 0.4468 - val_loss: 1.4554\nEpoch 
231/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.6525 - val_loss: 1.4693\nEpoch 232/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.4662 - val_loss: 1.5115\nEpoch 233/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.4035 - val_loss: 1.5162\nEpoch 234/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.4107 - val_loss: 1.4994\nEpoch 235/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.4749 - val_loss: 1.3904\nEpoch 236/411\n513/513 [==============================] - 0s 257us/sample - loss: 0.3749 - val_loss: 1.3624\nEpoch 237/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.3417 - val_loss: 1.3547\nEpoch 238/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.3399 - val_loss: 1.3982\nEpoch 239/411\n513/513 [==============================] - 0s 287us/sample - loss: 0.3542 - val_loss: 1.3652\nEpoch 240/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.3313 - val_loss: 1.3745\nEpoch 241/411\n513/513 [==============================] - 0s 282us/sample - loss: 0.3165 - val_loss: 1.3864\nEpoch 242/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.3264 - val_loss: 1.3939\nEpoch 243/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.3052 - val_loss: 1.3966\nEpoch 244/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.3153 - val_loss: 1.4146\nEpoch 245/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.3414 - val_loss: 1.4180\nEpoch 246/411\n513/513 [==============================] - 0s 281us/sample - loss: 0.3034 - val_loss: 1.4099\nEpoch 247/411\n513/513 [==============================] - 0s 269us/sample - loss: 0.3375 - val_loss: 1.3800\nEpoch 248/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.3228 - val_loss: 1.3713\nEpoch 249/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.3299 - val_loss: 1.3825\nEpoch 250/411\n513/513 [==============================] - 0s 253us/sample - loss: 0.2989 - val_loss: 1.3707\nEpoch 251/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.2942 - val_loss: 1.3874\nEpoch 252/411\n513/513 [==============================] - 0s 280us/sample - loss: 0.2914 - val_loss: 1.3970\nEpoch 253/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.3078 - val_loss: 1.4554\nEpoch 254/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.3188 - val_loss: 1.4613\nEpoch 255/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.3022 - val_loss: 1.4221\nEpoch 256/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.2837 - val_loss: 1.3841\nEpoch 257/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.2922 - val_loss: 1.3945\nEpoch 258/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.3977 - val_loss: 1.3504\nEpoch 259/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.2872 - val_loss: 1.3604\nEpoch 260/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.2878 - val_loss: 1.3965\nEpoch 261/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.2917 - val_loss: 1.5669\nEpoch 262/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.5503 - val_loss: 1.5557\nEpoch 263/411\n513/513 [==============================] - 0s 
293us/sample - loss: 0.6637 - val_loss: 1.2574\nEpoch 264/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.3381 - val_loss: 1.2285\nEpoch 265/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.2791 - val_loss: 1.3890\nEpoch 266/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.7710 - val_loss: 1.2560\nEpoch 267/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.5504 - val_loss: 1.2285\nEpoch 268/411\n513/513 [==============================] - 0s 280us/sample - loss: 0.3547 - val_loss: 1.2216\nEpoch 269/411\n513/513 [==============================] - 0s 284us/sample - loss: 0.5552 - val_loss: 1.1322\nEpoch 270/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.3804 - val_loss: 1.1781\nEpoch 271/411\n513/513 [==============================] - 0s 244us/sample - loss: 0.3402 - val_loss: 1.1394\nEpoch 272/411\n513/513 [==============================] - 0s 254us/sample - loss: 0.3120 - val_loss: 1.1365\nEpoch 273/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.2989 - val_loss: 1.1564\nEpoch 274/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.3115 - val_loss: 1.2872\nEpoch 275/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.3171 - val_loss: 1.2248\nEpoch 276/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.2743 - val_loss: 1.2064\nEpoch 277/411\n513/513 [==============================] - 0s 280us/sample - loss: 0.2408 - val_loss: 1.2085\nEpoch 278/411\n513/513 [==============================] - 0s 258us/sample - loss: 0.2416 - val_loss: 1.2233\nEpoch 279/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.2665 - val_loss: 1.2156\nEpoch 280/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.2243 - val_loss: 1.2322\nEpoch 281/411\n513/513 [==============================] - 0s 245us/sample - loss: 0.2152 - val_loss: 1.2464\nEpoch 282/411\n513/513 [==============================] - 0s 255us/sample - loss: 0.2379 - val_loss: 1.2574\nEpoch 283/411\n513/513 [==============================] - 0s 247us/sample - loss: 0.2199 - val_loss: 1.2958\nEpoch 284/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.2865 - val_loss: 1.2411\nEpoch 285/411\n513/513 [==============================] - 0s 297us/sample - loss: 0.2289 - val_loss: 1.2341\nEpoch 286/411\n513/513 [==============================] - 0s 283us/sample - loss: 0.2195 - val_loss: 1.2282\nEpoch 287/411\n513/513 [==============================] - 0s 268us/sample - loss: 0.2093 - val_loss: 1.2277\nEpoch 288/411\n513/513 [==============================] - 0s 257us/sample - loss: 0.2080 - val_loss: 1.2329\nEpoch 289/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.2185 - val_loss: 1.2508\nEpoch 290/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.2095 - val_loss: 1.2411\nEpoch 291/411\n513/513 [==============================] - 0s 257us/sample - loss: 0.1893 - val_loss: 1.3119\nEpoch 292/411\n513/513 [==============================] - 0s 279us/sample - loss: 0.2918 - val_loss: 1.2512\nEpoch 293/411\n513/513 [==============================] - 0s 288us/sample - loss: 0.1992 - val_loss: 1.2495\nEpoch 294/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.1861 - val_loss: 1.2412\nEpoch 295/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.1830 - val_loss: 1.2452\nEpoch 
296/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.2022 - val_loss: 1.2523\nEpoch 297/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.2375 - val_loss: 1.2503\nEpoch 298/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.2016 - val_loss: 1.2985\nEpoch 299/411\n513/513 [==============================] - 0s 297us/sample - loss: 0.2561 - val_loss: 1.2930\nEpoch 300/411\n513/513 [==============================] - 0s 279us/sample - loss: 0.2042 - val_loss: 1.2745\nEpoch 301/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1717 - val_loss: 1.2866\nEpoch 302/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.1861 - val_loss: 1.2883\nEpoch 303/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.1931 - val_loss: 1.2669\nEpoch 304/411\n513/513 [==============================] - 0s 269us/sample - loss: 0.1750 - val_loss: 1.2855\nEpoch 305/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.1976 - val_loss: 1.2512\nEpoch 306/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.1652 - val_loss: 1.2653\nEpoch 307/411\n513/513 [==============================] - 0s 291us/sample - loss: 0.1819 - val_loss: 1.3355\nEpoch 308/411\n513/513 [==============================] - 0s 279us/sample - loss: 0.2558 - val_loss: 1.2931\nEpoch 309/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.2027 - val_loss: 1.4528\nEpoch 310/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.3000 - val_loss: 1.3095\nEpoch 311/411\n513/513 [==============================] - 0s 251us/sample - loss: 0.1910 - val_loss: 1.2697\nEpoch 312/411\n513/513 [==============================] - 0s 265us/sample - loss: 0.1536 - val_loss: 1.2501\nEpoch 313/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.1572 - val_loss: 1.2464\nEpoch 314/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.1560 - val_loss: 1.2417\nEpoch 315/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.1497 - val_loss: 1.2538\nEpoch 316/411\n513/513 [==============================] - 0s 257us/sample - loss: 0.1517 - val_loss: 1.2471\nEpoch 317/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.1491 - val_loss: 1.2502\nEpoch 318/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.1806 - val_loss: 1.3334\nEpoch 319/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.2931 - val_loss: 1.2905\nEpoch 320/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.1947 - val_loss: 1.3530\nEpoch 321/411\n513/513 [==============================] - 0s 277us/sample - loss: 0.2418 - val_loss: 1.2878\nEpoch 322/411\n513/513 [==============================] - 0s 267us/sample - loss: 0.1672 - val_loss: 1.5081\nEpoch 323/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.3458 - val_loss: 1.4090\nEpoch 324/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.3135 - val_loss: 1.2753\nEpoch 325/411\n513/513 [==============================] - 0s 269us/sample - loss: 0.1659 - val_loss: 1.2548\nEpoch 326/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.1634 - val_loss: 1.2434\nEpoch 327/411\n513/513 [==============================] - 0s 268us/sample - loss: 0.1898 - val_loss: 1.4463\nEpoch 328/411\n513/513 [==============================] - 0s 
273us/sample - loss: 0.4421 - val_loss: 1.2810\nEpoch 329/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.1666 - val_loss: 1.3252\nEpoch 330/411\n513/513 [==============================] - 0s 267us/sample - loss: 0.1821 - val_loss: 1.3287\nEpoch 331/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.1797 - val_loss: 1.3645\nEpoch 332/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.2446 - val_loss: 1.3203\nEpoch 333/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.1740 - val_loss: 1.3405\nEpoch 334/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.2106 - val_loss: 1.3369\nEpoch 335/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.1955 - val_loss: 1.2873\nEpoch 336/411\n513/513 [==============================] - 0s 252us/sample - loss: 0.1527 - val_loss: 1.2925\nEpoch 337/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.2075 - val_loss: 1.2573\nEpoch 338/411\n513/513 [==============================] - 0s 274us/sample - loss: 0.1488 - val_loss: 1.2778\nEpoch 339/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.1512 - val_loss: 1.2619\nEpoch 340/411\n513/513 [==============================] - 0s 258us/sample - loss: 0.1424 - val_loss: 1.2545\nEpoch 341/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.1460 - val_loss: 1.2719\nEpoch 342/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.1474 - val_loss: 1.2596\nEpoch 343/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.1318 - val_loss: 1.2805\nEpoch 344/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.1244 - val_loss: 1.2904\nEpoch 345/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.1220 - val_loss: 1.2974\nEpoch 346/411\n513/513 [==============================] - 0s 286us/sample - loss: 0.1259 - val_loss: 1.2946\nEpoch 347/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.1355 - val_loss: 1.2688\nEpoch 348/411\n513/513 [==============================] - 0s 257us/sample - loss: 0.1134 - val_loss: 1.2533\nEpoch 349/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.1192 - val_loss: 1.2818\nEpoch 350/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.1296 - val_loss: 1.2465\nEpoch 351/411\n513/513 [==============================] - 0s 302us/sample - loss: 0.1226 - val_loss: 1.2336\nEpoch 352/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.1198 - val_loss: 1.2871\nEpoch 353/411\n513/513 [==============================] - 0s 263us/sample - loss: 0.1617 - val_loss: 1.2488\nEpoch 354/411\n513/513 [==============================] - 0s 252us/sample - loss: 0.1160 - val_loss: 1.2947\nEpoch 355/411\n513/513 [==============================] - 0s 258us/sample - loss: 0.1368 - val_loss: 1.2543\nEpoch 356/411\n513/513 [==============================] - 0s 245us/sample - loss: 0.1127 - val_loss: 1.2368\nEpoch 357/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.1012 - val_loss: 1.2310\nEpoch 358/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.1112 - val_loss: 1.2584\nEpoch 359/411\n513/513 [==============================] - 0s 287us/sample - loss: 0.1174 - val_loss: 1.2385\nEpoch 360/411\n513/513 [==============================] - 0s 268us/sample - loss: 0.1176 - val_loss: 1.2729\nEpoch 
361/411\n513/513 [==============================] - 0s 288us/sample - loss: 0.1102 - val_loss: 1.2620\nEpoch 362/411\n513/513 [==============================] - 0s 276us/sample - loss: 0.1275 - val_loss: 1.2474\nEpoch 363/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.1005 - val_loss: 1.2995\nEpoch 364/411\n513/513 [==============================] - 0s 289us/sample - loss: 0.1127 - val_loss: 1.4067\nEpoch 365/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.1608 - val_loss: 1.2878\nEpoch 366/411\n513/513 [==============================] - 0s 287us/sample - loss: 0.1153 - val_loss: 1.2737\nEpoch 367/411\n513/513 [==============================] - 0s 300us/sample - loss: 0.1003 - val_loss: 1.2905\nEpoch 368/411\n513/513 [==============================] - 0s 282us/sample - loss: 0.1424 - val_loss: 1.2588\nEpoch 369/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.1037 - val_loss: 1.2585\nEpoch 370/411\n513/513 [==============================] - 0s 282us/sample - loss: 0.1011 - val_loss: 1.2648\nEpoch 371/411\n513/513 [==============================] - 0s 292us/sample - loss: 0.0932 - val_loss: 1.2625\nEpoch 372/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.0930 - val_loss: 1.2635\nEpoch 373/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.0928 - val_loss: 1.6155\nEpoch 374/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.5106 - val_loss: 1.7354\nEpoch 375/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.4424 - val_loss: 1.2909\nEpoch 376/411\n513/513 [==============================] - 0s 267us/sample - loss: 0.1567 - val_loss: 1.3008\nEpoch 377/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.1546 - val_loss: 1.4002\nEpoch 378/411\n513/513 [==============================] - 0s 268us/sample - loss: 0.2509 - val_loss: 1.4588\nEpoch 379/411\n513/513 [==============================] - 0s 281us/sample - loss: 0.2038 - val_loss: 1.3092\nEpoch 380/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.1842 - val_loss: 1.3533\nEpoch 381/411\n513/513 [==============================] - 0s 264us/sample - loss: 0.1573 - val_loss: 1.2217\nEpoch 382/411\n513/513 [==============================] - 0s 257us/sample - loss: 0.2884 - val_loss: 1.1941\nEpoch 383/411\n513/513 [==============================] - 0s 254us/sample - loss: 0.3080 - val_loss: 1.1670\nEpoch 384/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.3224 - val_loss: 1.1630\nEpoch 385/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.1828 - val_loss: 1.1975\nEpoch 386/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.1472 - val_loss: 1.2453\nEpoch 387/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.1336 - val_loss: 1.2759\nEpoch 388/411\n513/513 [==============================] - 0s 278us/sample - loss: 0.1523 - val_loss: 1.2919\nEpoch 389/411\n513/513 [==============================] - 0s 278us/sample - loss: 0.1374 - val_loss: 1.2725\nEpoch 390/411\n513/513 [==============================] - 0s 259us/sample - loss: 0.1460 - val_loss: 1.3193\nEpoch 391/411\n513/513 [==============================] - 0s 253us/sample - loss: 0.1498 - val_loss: 1.2214\nEpoch 392/411\n513/513 [==============================] - 0s 262us/sample - loss: 0.0947 - val_loss: 1.2127\nEpoch 393/411\n513/513 [==============================] - 0s 
250us/sample - loss: 0.1193 - val_loss: 1.2146\nEpoch 394/411\n513/513 [==============================] - 0s 275us/sample - loss: 0.0893 - val_loss: 1.2255\nEpoch 395/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.1069 - val_loss: 1.3011\nEpoch 396/411\n513/513 [==============================] - 0s 282us/sample - loss: 0.1524 - val_loss: 1.3059\nEpoch 397/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.1382 - val_loss: 1.2976\nEpoch 398/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.1073 - val_loss: 1.2257\nEpoch 399/411\n513/513 [==============================] - 0s 268us/sample - loss: 0.0887 - val_loss: 1.2133\nEpoch 400/411\n513/513 [==============================] - 0s 273us/sample - loss: 0.0842 - val_loss: 1.2021\nEpoch 401/411\n513/513 [==============================] - 0s 270us/sample - loss: 0.0804 - val_loss: 1.1954\nEpoch 402/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.0782 - val_loss: 1.2003\nEpoch 403/411\n513/513 [==============================] - 0s 272us/sample - loss: 0.1085 - val_loss: 1.2227\nEpoch 404/411\n513/513 [==============================] - 0s 284us/sample - loss: 0.1277 - val_loss: 1.1989\nEpoch 405/411\n513/513 [==============================] - 0s 255us/sample - loss: 0.0954 - val_loss: 1.2379\nEpoch 406/411\n513/513 [==============================] - 0s 261us/sample - loss: 0.1182 - val_loss: 1.2174\nEpoch 407/411\n513/513 [==============================] - 0s 278us/sample - loss: 0.0916 - val_loss: 1.2690\nEpoch 408/411\n513/513 [==============================] - 0s 266us/sample - loss: 0.1063 - val_loss: 1.2744\nEpoch 409/411\n513/513 [==============================] - 0s 260us/sample - loss: 0.1160 - val_loss: 1.2561\nEpoch 410/411\n513/513 [==============================] - 0s 271us/sample - loss: 0.0926 - val_loss: 1.5014\nEpoch 411/411\n513/513 [==============================] - 0s 267us/sample - loss: 0.3110 - val_loss: 1.6808\n"
],
[
"pd.DataFrame(performance.history)[['rmse', 'val_rmse']].plot()",
"_____no_output_____"
],
[
"pd.DataFrame(results).test_rmse.mean()",
"_____no_output_____"
],
[
"pd.DataFrame(results).test_rmse.std()",
"_____no_output_____"
],
[
"pd.DataFrame(results).to_csv('./results/%s.csv' % task_name)",
"_____no_output_____"
],
[
"pd.DataFrame(results)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b6de6b9fefab9b0e67bbddf60f9e93e6c299bb | 2,827 | ipynb | Jupyter Notebook | Python/[Python] Set.ipynb | ZongSingHuang/Data-Scientist-Tokyo | 1744bda70044e8a204f5ff09a3dd5dd3d3c3a443 | [
"MIT"
] | null | null | null | Python/[Python] Set.ipynb | ZongSingHuang/Data-Scientist-Tokyo | 1744bda70044e8a204f5ff09a3dd5dd3d3c3a443 | [
"MIT"
] | null | null | null | Python/[Python] Set.ipynb | ZongSingHuang/Data-Scientist-Tokyo | 1744bda70044e8a204f5ff09a3dd5dd3d3c3a443 | [
"MIT"
] | null | null | null | 17.559006 | 61 | 0.411744 | [
[
[
"### 宣告\n- 只能存不重複的元素\n- 因為是無序的,所以每次編譯後的順序都會不同",
"_____no_output_____"
]
],
[
[
"a = {1, 2, 3, 2, 4, 5, 2}\nb = set([1, 2, 3, 2, 4, 5, 2])\nprint(a, type(a))\nprint(b, type(b))",
"{1, 2, 3, 4, 5} <class 'set'>\n{1, 2, 3, 4, 5} <class 'set'>\n"
],
[
"s1.add(5)\nprint(s1)\n\ns1.add(5)\nprint(s1)",
"{1, 2, 3, 4, 5}\n{1, 2, 3, 4, 5}\n"
],
[
"s1.remove(5)\nprint(s1)\n\n# 因為找不到,所以會報錯\n#s1.remove(8)\n#print(s1)",
"{1, 2, 3, 4}\n"
],
[
"a = '1234512'\nprint(set(a))",
"{'1', '5', '4', '2', '3'}\n"
],
[
"s1 = {1, 2, 3, 4}\ns2 = {3, 4, 5, 6}\n\nprint('交集:', s1 & s2)\nprint('聯集:', s1 | s2)\nprint('對稱差集:', s1 ^ s2)\nprint('差集1:', s1 - s2)\nprint('差集2:', s2 - s1)",
"交集: {3, 4}\n聯集: {1, 2, 3, 4, 5, 6}\n對稱差集: {1, 2, 5, 6}\n差集1: {1, 2}\n差集2: {5, 6}\n"
]
],
[
[
"### 參考資料\nhttp://www.cxyzjd.com/article/weixin_43657383/109552381",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7b6f2775b28f9c7ece2a6412dc7f50a5cbdfbbf | 190,880 | ipynb | Jupyter Notebook | notebooks/review_results/image_annotations.ipynb | deflaux/ml4h | b1de99143b943c78cd8a1c86fcac36523c502ee5 | [
"BSD-3-Clause"
] | null | null | null | notebooks/review_results/image_annotations.ipynb | deflaux/ml4h | b1de99143b943c78cd8a1c86fcac36523c502ee5 | [
"BSD-3-Clause"
] | null | null | null | notebooks/review_results/image_annotations.ipynb | deflaux/ml4h | b1de99143b943c78cd8a1c86fcac36523c502ee5 | [
"BSD-3-Clause"
] | null | null | null | 574.939759 | 181,560 | 0.947061 | [
[
[
"# Image annotations for a batch of samples\n\nUsing this notebook, cardiologists are able to quickly view and annotate MRI images for a batch of samples. These annotated images become the training data for the next round of modeling.",
"_____no_output_____"
],
[
"# Setup\n\n<div class=\"alert alert-block alert-warning\">\n This notebook assumes\n <ul>\n <li><b>Terra</b> is running custom Docker image <kbd>ghcr.io/broadinstitute/ml4h/ml4h_terra:20211101_143643</kbd>.</li>\n <li><b>ml4h</b> is running custom Docker image <kbd>gcr.io/broad-ml4cvd/deeplearning:tf2-latest-gpu</kbd>.</li>\n </ul>\n</div>",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# TODO(deflaux): remove this cell after gcr.io/broad-ml4cvd/deeplearning:tf2-latest-gpu has this preinstalled.\nfrom ml4h.runtime_data_defines import determine_runtime\nfrom ml4h.runtime_data_defines import Runtime\n\nif Runtime.ML4H_VM == determine_runtime():\n !pip3 install --user ipycanvas==0.7.0 ipyannotations==0.2.1\n !jupyter nbextension install --user --py ipycanvas\n !jupyter nbextension enable --user --py ipycanvas\n # Be sure to restart the kernel if pip installs anything.\n # Also, shift-reload the browser page after the notebook extension installation.",
"_____no_output_____"
],
[
"from ipyannotations import BoxAnnotator, PointAnnotator, PolygonAnnotator\nfrom ml4h.visualization_tools.annotation_storage import BigQueryAnnotationStorage\nfrom ml4h.visualization_tools.batch_image_annotations import BatchImageAnnotator\nimport pandas as pd\nimport tensorflow as tf",
"_____no_output_____"
],
[
"%%javascript\n// Display cell outputs to full height (no vertical scroll bar)\nIPython.OutputArea.auto_scroll_threshold = 9999;",
"_____no_output_____"
],
[
"pd.set_option('display.max_colwidth', -1)",
"_____no_output_____"
],
[
"BIG_QUERY_ANNOTATIONS_STORAGE = BigQueryAnnotationStorage('uk-biobank-sek-data.ml_results.annotations')",
"_____no_output_____"
]
],
[
[
"# Define the batch of samples to annotate\n\n<div class=\"alert alert-block alert-info\">\n Edit the CSV file path below, if needed, to either a local file or one in Cloud Storage.\n</div>",
"_____no_output_____"
]
],
[
[
"#---[ EDIT AND RUN THIS CELL TO READ FROM A LOCAL FILE OR A FILE IN CLOUD STORAGE ]---\nSAMPLE_BATCH_FILE = None",
"_____no_output_____"
],
[
"if SAMPLE_BATCH_FILE:\n samples_df = pd.read_csv(tf.io.gfile.GFile(SAMPLE_BATCH_FILE))\n\nelse:\n # Normally these would all be the same or similar TMAP. We are using different ones here just to make it\n # more obvious in this demo that we are processing different samples.\n samples_df = pd.DataFrame(\n columns=BatchImageAnnotator.EXPECTED_COLUMN_NAMES,\n data=[\n [1655349, 'cine_lax_3ch_192', 25, 'gs://ml4cvd/deflaux/ukbb_tensors/'],\n [1655349, 't2_flair_sag_p2_1mm_fs_ellip_pf78_1', 50, 'gs://ml4cvd/deflaux/ukbb_tensors/'],\n [1655349, 'cine_lax_4ch_192', 25, 'gs://ml4cvd/deflaux/ukbb_tensors/'],\n [1655349, 't2_flair_sag_p2_1mm_fs_ellip_pf78_2', 50, 'gs://ml4cvd/deflaux/ukbb_tensors/'],\n [2403657, 'cine_lax_3ch_192', 25, 'gs://ml4cvd/deflaux/ukbb_tensors/'],\n ])\n\nsamples_df.shape",
"_____no_output_____"
],
[
"samples_df.head(n = 10)",
"_____no_output_____"
]
],
[
[
"# Annotate the batch! ",
"_____no_output_____"
],
[
"## Annotate with points\n\nUse points to annotate landmarks within the images.",
"_____no_output_____"
]
],
[
[
"# Note: a zoom level of 1.0 displays the tensor as-is. For higher zoom levels, this code currently\n# use the PIL library to scale the image.\n\nannotator = BatchImageAnnotator(samples=samples_df,\n zoom=2.0,\n annotation_categories=['region_of_interest'],\n annotation_storage=BIG_QUERY_ANNOTATIONS_STORAGE,\n annotator=PointAnnotator)\nannotator.annotate_images()",
"_____no_output_____"
]
],
[
[
"## Annotate with polygons\n\nUse polygons to annotate arbitrarily shaped regions within the images.",
"_____no_output_____"
]
],
[
[
"# Note: a zoom level of 1.0 displays the tensor as-is. For higher zoom levels, this code currently\n# use the PIL library to scale the image.\n\nannotator = BatchImageAnnotator(samples=samples_df,\n zoom=2.0,\n annotation_categories=['region_of_interest'],\n annotation_storage=BIG_QUERY_ANNOTATIONS_STORAGE,\n annotator=PolygonAnnotator)\nannotator.annotate_images()",
"_____no_output_____"
]
],
[
[
"## Annotate with rectangles\n\nUse rectangles to annotate rectangular regions within the image.",
"_____no_output_____"
]
],
[
[
"# Note: a zoom level of 1.0 displays the tensor as-is. For higher zoom levels, this code currently\n# use the PIL library to scale the image.\n\nannotator = BatchImageAnnotator(samples=samples_df,\n zoom=2.0,\n annotation_categories=['region_of_interest'],\n annotation_storage=BIG_QUERY_ANNOTATIONS_STORAGE,\n annotator=BoxAnnotator)\nannotator.annotate_images()",
"_____no_output_____"
]
],
[
[
"# View the stored annotations ",
"_____no_output_____"
]
],
[
[
"annotator.view_recent_submissions(count=10)",
"_____no_output_____"
]
],
[
[
"# Provenance",
"_____no_output_____"
]
],
[
[
"import datetime\nprint(datetime.datetime.now())",
"_____no_output_____"
],
[
"%%bash\npip3 freeze",
"_____no_output_____"
]
],
[
[
"Questions about these particular notebooks? Join the discussion https://github.com/broadinstitute/ml4h/discussions.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
e7b6f61cbfa828b765b124157001496ec2d61d17 | 173,526 | ipynb | Jupyter Notebook | DCGAN.ipynb | ruman-shaikh/DCGAN_TensorFlow | cfe5f7e26db7f42f352461ee1e20ad02042a4ae6 | [
"MIT"
] | 1 | 2020-09-04T07:40:00.000Z | 2020-09-04T07:40:00.000Z | DCGAN.ipynb | ruman-shaikh/DCGAN_TensorFlow | cfe5f7e26db7f42f352461ee1e20ad02042a4ae6 | [
"MIT"
] | null | null | null | DCGAN.ipynb | ruman-shaikh/DCGAN_TensorFlow | cfe5f7e26db7f42f352461ee1e20ad02042a4ae6 | [
"MIT"
] | null | null | null | 370.782051 | 74,248 | 0.92944 | [
[
[
"import tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\nimport plot_utils\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm\nfrom IPython import display\n%matplotlib inline",
"_____no_output_____"
],
[
"(train_x, train_y), (test_x, test_y) = tf.keras.datasets.fashion_mnist.load_data()\n\ntrain_x = train_x.astype(np.float32) /255.0\ntest_x = test_x.astype(np.float32) /255.0",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.imshow(train_x[i])\nplt.show()",
"_____no_output_____"
],
[
"batch_size = 32\ndataset = tf.data.Dataset.from_tensor_slices(train_x).shuffle(1000)\ndataset.batch(batch_size, drop_remainder=True).prefetch(1)",
"_____no_output_____"
],
[
"num_features = 100\n\ngenerator = keras.models.Sequential([\n keras.layers.Dense(7*7*128, input_shape=[num_features]),\n keras.layers.Reshape([7, 7, 128]),\n keras.layers.BatchNormalization(),\n keras.layers.Conv2DTranspose(64, (5,5), (2,2), padding='same', activation='selu'),\n keras.layers.BatchNormalization(),\n keras.layers.Conv2DTranspose(1, (5,5), (2,2), padding='same', activation='tanh')\n])\n\ngenerator.summary()",
"Model: \"sequential_3\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_1 (Dense) (None, 6272) 633472 \n_________________________________________________________________\nreshape_1 (Reshape) (None, 7, 7, 128) 0 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 7, 7, 128) 512 \n_________________________________________________________________\nconv2d_transpose_6 (Conv2DTr (None, 14, 14, 64) 204864 \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, 14, 14, 64) 256 \n_________________________________________________________________\nconv2d_transpose_7 (Conv2DTr (None, 28, 28, 1) 1601 \n=================================================================\nTotal params: 840,705\nTrainable params: 840,321\nNon-trainable params: 384\n_________________________________________________________________\n"
],
[
"noise = tf.random.normal(shape=[1, num_features])\ngenerated_img = generator(noise, training=False)\nplt.imshow((tf.reshape(generated_img[0], (28,28))), cmap=plt.cm.binary)\nplt.show()",
"_____no_output_____"
],
[
"discriminator = keras.models.Sequential([\n keras.layers.Conv2D(64, (5,5), (2,2), padding='same', input_shape=[28, 28, 1]),\n keras.layers.LeakyReLU(0.2),\n keras.layers.Dropout(0.3),\n keras.layers.Conv2D(128, (5, 5), (2, 2), padding='same'),\n keras.layers.LeakyReLU(0.2),\n keras.layers.Dropout(0.3),\n keras.layers.Flatten(),\n keras.layers.Dense(1, activation='sigmoid')\n])\n\ndiscriminator.summary()",
"Model: \"sequential_4\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 14, 14, 64) 1664 \n_________________________________________________________________\nleaky_re_lu (LeakyReLU) (None, 14, 14, 64) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 14, 14, 64) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 7, 7, 128) 204928 \n_________________________________________________________________\nleaky_re_lu_1 (LeakyReLU) (None, 7, 7, 128) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 7, 7, 128) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 6272) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 1) 6273 \n=================================================================\nTotal params: 212,865\nTrainable params: 212,865\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"decision = discriminator(generated_img)\nprint(decision)",
"tf.Tensor([[0.49800315]], shape=(1, 1), dtype=float32)\n"
],
[
"discriminator.compile(loss='binary_crossentropy', optimizer='rmsprop')\ndiscriminator.trainable = False",
"_____no_output_____"
],
[
"gan = keras.models.Sequential([\n generator,\n discriminator\n])\n\ngan.compile(loss='binary_crossentropy', optimizer='rmsprop')\ngan.summary()",
"Model: \"sequential_5\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nsequential_3 (Sequential) (None, 28, 28, 1) 840705 \n_________________________________________________________________\nsequential_4 (Sequential) (None, 1) 212865 \n=================================================================\nTotal params: 1,053,570\nTrainable params: 840,321\nNon-trainable params: 213,249\n_________________________________________________________________\n"
],
[
"seed = tf.random.normal(shape=[batch_size, num_features])\nplt.imshow(seed)",
"_____no_output_____"
],
[
"def train_dcgan(gan, dataset, batch_size, num_features, epochs=5):\n generator, discriminator = gan.layers\n for epoch in tqdm(range(epochs)):\n print(\"Epoch: {}/{}\".format(epoch + 1, epochs))\n for X_batch in dataset:\n noise = tf.random.normal(shape=[batch_size, num_features])\n generated_img = generator(noise)\n X_fake_n_real = tf.concat([generated_img, X_batch], axis=0)\n y1 = tf.concat([[[0.]] * batch_size + [[1.]] * batch_size], axis=0)\n discriminator.trainable = True\n discriminator.train_on_batch(X_fake_n_real, y1)\n y2 = tf.constant([[1.]] * batch_size)\n discriminator.trainable = False\n gan.train_on_batch(noise, y2)\n display.clear_output(wait=True)\n generate_and_save_images(generator, epoch + 1, seed)\n display.clear_output(wait=True)\n generate_and_save_images(generator, epochs, seed)",
"_____no_output_____"
],
[
"def generate_and_save_images(model, epoch, test_input):\n # Notice `training` is set to False.\n # This is so all layers run in inference mode (batchnorm).\n predictions = model(test_input, training=False)\n\n fig = plt.figure(figsize=(10,10))\n\n for i in range(25):\n plt.subplot(5, 5, i+1)\n plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')\n plt.axis('off')\n\n plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))\n plt.show()",
"_____no_output_____"
],
[
"x_train_dcgan = train_x.reshape(-1, 28, 28, 1) * 2. -1.",
"_____no_output_____"
],
[
"batch_size = 100\ndataset = tf.data.Dataset.from_tensor_slices(x_train_dcgan).shuffle(1000)\ndataset = dataset.batch(batch_size, drop_remainder=True).prefetch(1)",
"_____no_output_____"
],
[
"%%time\ntrain_dcgan(gan, dataset, batch_size, num_features, epochs=50)",
"_____no_output_____"
],
[
"noise = tf.random.normal(shape=[1, num_features])\ngenerated_img = generator(noise, training=False)\nplt.imshow((tf.reshape(generated_img[0], (28,28))), cmap=plt.cm.binary)\nplt.show()",
"_____no_output_____"
],
[
"decision = discriminator(generated_img)\nprint(decision)",
"tf.Tensor([[0.6308384]], shape=(1, 1), dtype=float32)\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b6f75d6d190394d8ee704946a6f164828463f8 | 658,805 | ipynb | Jupyter Notebook | Gun Violence.ipynb | itsmepiyush2/Effects-of-Gun-Violence-forecast | 96d6d1f76f4c521744ddadd61fbc96ea07f84aff | [
"MIT"
] | 1 | 2018-09-15T11:30:19.000Z | 2018-09-15T11:30:19.000Z | Gun Violence.ipynb | itsmepiyush2/Effects-of-Gun-Violence-forecast | 96d6d1f76f4c521744ddadd61fbc96ea07f84aff | [
"MIT"
] | null | null | null | Gun Violence.ipynb | itsmepiyush2/Effects-of-Gun-Violence-forecast | 96d6d1f76f4c521744ddadd61fbc96ea07f84aff | [
"MIT"
] | null | null | null | 436.873342 | 88,588 | 0.925064 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.pylab import rcParams\nrcParams['figure.figsize'] = 15, 8",
"_____no_output_____"
],
[
"dataset = pd.read_csv('GunViolence.csv')\ndataset.head()",
"_____no_output_____"
],
[
"dataset = pd.read_csv('GunViolence.csv', index_col = 'date')\ndataset.index = pd.to_datetime(dataset.index)\ndataset = dataset.sort_index()\ndataset.head()",
"_____no_output_____"
],
[
"violence = dataset['n_killed'] + dataset['n_injured']\nviolence.head()",
"_____no_output_____"
],
[
"violence = violence.resample('D').sum()\nviolence.head()",
"_____no_output_____"
],
[
"plt.plot(violence)\nplt.title(\"Daily Trends\")",
"_____no_output_____"
],
[
"violence = violence.resample('M').sum()\nplt.plot(violence)\nplt.title(\"Monthly Trends\")",
"_____no_output_____"
],
[
"from statsmodels.tsa.stattools import adfuller\ndef testStationarity(timeseries):\n rolmean = pd.rolling_mean(timeseries, window = 12)\n rolstd = pd.rolling_std(timeseries, window = 12)\n \n original = plt.plot(timeseries, label = 'Original')\n mean = plt.plot(rolmean, label = 'Rolling mean')\n std = plt.plot(rolstd, label = 'Rolling standard deviation')\n plt.legend(loc = 'best')\n plt.title(\"Rolling Mean and Standard Deviation\")\n \n #Dickey Fuller Test\n print(\"Results of Dickey Fuller test on time series\")\n dftest = adfuller(timeseries)\n dfoutput = pd.Series(dftest[0:4], index = ['Test Statistic', 'p-value', 'lags used', 'observations used'])\n for key, value in dftest[4].items():\n dfoutput['Critical value (%s)' %key] = value\n print(dfoutput)",
"_____no_output_____"
],
[
"violence.dropna(inplace = True)\ntestStationarity(violence)",
"Results of Dickey Fuller test on time series\nTest Statistic -8.266589e+00\np-value 4.919565e-13\nlags used 1.100000e+01\nobservations used 5.100000e+01\nCritical value (1%) -3.565624e+00\nCritical value (5%) -2.920142e+00\nCritical value (10%) -2.598015e+00\ndtype: float64\n"
],
[
"logviolence = np.log(violence)\nlogviolence.dropna(inplace = True)\ntestStationarity(logviolence)",
"/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: FutureWarning: pd.rolling_mean is deprecated for Series and will be removed in a future version, replace with \n\tSeries.rolling(window=12,center=False).mean()\n This is separate from the ipykernel package so we can avoid doing imports until\n/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: FutureWarning: pd.rolling_std is deprecated for Series and will be removed in a future version, replace with \n\tSeries.rolling(window=12,center=False).std()\n after removing the cwd from sys.path.\n"
],
[
"ewm = logviolence.ewm(halflife = 7).mean()\ntestStationarity(ewm)",
"/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: FutureWarning: pd.rolling_mean is deprecated for Series and will be removed in a future version, replace with \n\tSeries.rolling(window=12,center=False).mean()\n This is separate from the ipykernel package so we can avoid doing imports until\n/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: FutureWarning: pd.rolling_std is deprecated for Series and will be removed in a future version, replace with \n\tSeries.rolling(window=12,center=False).std()\n after removing the cwd from sys.path.\n"
],
[
"diff = logviolence - ewm\ntestStationarity(diff)",
"Results of Dickey Fuller test on time series\nTest Statistic -2.716998\np-value 0.071143\nlags used 0.000000\nobservations used 62.000000\nCritical value (1%) -3.540523\nCritical value (5%) -2.909427\nCritical value (10%) -2.592314\ndtype: float64\n"
]
],
[
[
"'ewm' series is the most stationary out of all the series. Hence we model on 'ewm'",
"_____no_output_____"
]
],
[
[
"from pandas.tools.plotting import autocorrelation_plot\nautocorrelation_plot(ewm)",
"/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: FutureWarning: 'pandas.tools.plotting.autocorrelation_plot' is deprecated, import 'pandas.plotting.autocorrelation_plot' instead.\n \n"
],
[
"from statsmodels.tsa.arima_model import ARIMA\nmodel = ARIMA(ewm, order = (15, 1, 0))\nmodel_fit = model.fit(disp = 0)\nplt.plot(model_fit.fittedvalues)",
"/anaconda3/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:646: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n if issubdtype(paramsdtype, float):\n/anaconda3/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:650: FutureWarning: Conversion of the second argument of issubdtype from `complex` to `np.complexfloating` is deprecated. In future, it will be treated as `np.complex128 == np.dtype(complex).type`.\n elif issubdtype(paramsdtype, complex):\n/anaconda3/lib/python3.6/site-packages/statsmodels/base/model.py:496: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals\n \"Check mle_retvals\", ConvergenceWarning)\n/anaconda3/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:577: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n if issubdtype(paramsdtype, float):\n"
],
[
"logpred_diff = pd.Series(model_fit.fittedvalues, index = ewm.index)\nlogpred_cumsum = logpred_diff.cumsum()\nlogpred = pd.Series(ewm.iloc[0], index = ewm.index)\nlogpred = logpred.add(logpred_cumsum, fill_value = 0)\nlogpred.head()",
"_____no_output_____"
],
[
"ewm.head()",
"_____no_output_____"
],
[
"pred = np.exp(logpred)\npred.head()",
"_____no_output_____"
],
[
"violence.head()",
"_____no_output_____"
],
[
"plt.plot(pred, label = 'Prediction')\nplt.plot(violence, label = 'Expected')\nplt.legend(loc = 'best')",
"_____no_output_____"
],
[
"violence.tail()",
"_____no_output_____"
],
[
"dates = [pd.Timestamp('2018-05-31'), pd.Timestamp('2018-06-30'), pd.Timestamp('2018-07-31'), pd.Timestamp('2018-08-31'), pd.Timestamp('2018-09-30')]",
"_____no_output_____"
],
[
"forecast = pd.Series(model_fit.forecast(steps = 5)[0], dates)\nforecast = np.exp(forecast)\nforecast.head()",
"_____no_output_____"
],
[
"plt.plot(forecast, label = 'Forecast')\nplt.title(\"Forecast for the next Five Months\")\nplt.legend(loc = 'best')",
"_____no_output_____"
],
[
"autocorrelation_plot(violence)",
"/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: FutureWarning: 'pandas.tools.plotting.autocorrelation_plot' is deprecated, import 'pandas.plotting.autocorrelation_plot' instead.\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"model = ARIMA(violence.astype(float), order = (10,1,0))\nmodel_fit = model.fit(disp = 0)\nplt.plot(model_fit.fittedvalues)",
"/anaconda3/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:646: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n if issubdtype(paramsdtype, float):\n/anaconda3/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:650: FutureWarning: Conversion of the second argument of issubdtype from `complex` to `np.complexfloating` is deprecated. In future, it will be treated as `np.complex128 == np.dtype(complex).type`.\n elif issubdtype(paramsdtype, complex):\n/anaconda3/lib/python3.6/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.py:577: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n if issubdtype(paramsdtype, float):\n"
],
[
"model_fit.fittedvalues.head()",
"_____no_output_____"
],
[
"violence.head()",
"_____no_output_____"
],
[
"forecast = pd.Series(model_fit.forecast(steps = 5)[0], dates)\nforecast",
"_____no_output_____"
],
[
"plt.plot(forecast)\nplt.title(\"Forecasted\")",
"_____no_output_____"
]
]
] | [
"code",
"raw",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b707c8b8dff5f4bfe3968b741fedb041135309 | 10,084 | ipynb | Jupyter Notebook | benchmarking/MetaTuner_Examples/MetaTuner-on-bcancer.ipynb | jashanmeet-collab/mango | ed1fb80fda35d00f6cdfc06e71f55b1a0a9cf4b3 | [
"Apache-2.0"
] | 123 | 2019-10-19T16:55:38.000Z | 2022-03-03T02:34:17.000Z | benchmarking/MetaTuner_Examples/MetaTuner-on-bcancer.ipynb | jashanmeet-collab/mango | ed1fb80fda35d00f6cdfc06e71f55b1a0a9cf4b3 | [
"Apache-2.0"
] | 33 | 2019-10-24T21:10:53.000Z | 2022-03-31T00:14:47.000Z | benchmarking/MetaTuner_Examples/MetaTuner-on-bcancer.ipynb | jashanmeet-collab/mango | ed1fb80fda35d00f6cdfc06e71f55b1a0a9cf4b3 | [
"Apache-2.0"
] | 29 | 2019-10-24T19:08:50.000Z | 2022-02-10T11:06:04.000Z | 25.084577 | 563 | 0.507834 | [
[
[
"# Testing the functionalities of MetaTuner on bcancer dataset",
"_____no_output_____"
]
],
[
[
"from mango import MetaTuner",
"_____no_output_____"
],
[
"# Define different classifiers\n\nfrom scipy.stats import uniform\n\nfrom sklearn import datasets\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import cross_val_score\n\ndata = datasets.load_breast_cancer()\nX = data.data\nY = data.target",
"_____no_output_____"
]
],
[
[
"# XGBoost",
"_____no_output_____"
]
],
[
[
"from xgboost import XGBClassifier\n\nparam_dict_xgboost = {\"learning_rate\": uniform(0, 1),\n \"gamma\": uniform(0, 5),\n \"max_depth\": range(1, 16),\n \"n_estimators\": range(1, 4),\n \"booster\":['gbtree','gblinear','dart']\n }\n\n\nX_xgboost = X \nY_xgboost = Y\n\n# import warnings\n# warnings.filterwarnings('ignore')\n\ndef objective_xgboost(args_list):\n global X_xgboost, Y_xgboost\n\n results = []\n for hyper_par in args_list:\n #clf = XGBClassifier(**hyper_par)\n clf = XGBClassifier(verbosity = 0, random_state = 0)\n \n #clf = XGBClassifier()\n clf.set_params(**hyper_par)\n\n result = cross_val_score(clf, X_xgboost, Y_xgboost, scoring='accuracy', cv=3).mean()\n results.append(result)\n return results",
"_____no_output_____"
]
],
[
[
"# KNN",
"_____no_output_____"
]
],
[
[
"param_dict_knn = {\"n_neighbors\": range(1, 101),\n 'algorithm' : ['auto', 'ball_tree', 'kd_tree', 'brute']\n }\nX_knn = X\nY_knn = Y\n\ndef objective_knn(args_list):\n global X_knn,Y_knn\n \n results = []\n for hyper_par in args_list:\n clf = KNeighborsClassifier()\n \n clf.set_params(**hyper_par)\n \n result = cross_val_score(clf, X_knn, Y_knn, scoring='accuracy', cv=3).mean()\n results.append(result)\n return results",
"_____no_output_____"
]
],
[
[
"# SVM",
"_____no_output_____"
]
],
[
[
"from mango.domain.distribution import loguniform\nfrom sklearn import svm\n\nparam_dict_svm = {\"gamma\": uniform(0.1, 4),\n \"C\": loguniform(-7, 10)}\n\nX_svm = X \nY_svm = Y\n\n\ndef objective_svm(args_list):\n global X_svm,Y_svm\n \n #print('SVM:',args_list)\n results = []\n for hyper_par in args_list:\n clf = svm.SVC(random_state = 0)\n \n clf.set_params(**hyper_par)\n \n result = cross_val_score(clf, X_svm, Y_svm, scoring='accuracy', cv= 3).mean()\n results.append(result)\n return results",
"_____no_output_____"
]
],
[
[
"# Decision Tree",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\n\nparam_dict_dtree = {\n \"max_features\": ['auto', 'sqrt', 'log2'],\n \"max_depth\": range(1,21), \n \"splitter\":['best','random'],\n \"criterion\":['gini','entropy']\n }\n\n\nX_dtree = X \nY_dtree = Y\n\nprint(X_dtree.shape, Y_dtree.shape)\n\ndef objective_dtree(args_list):\n global X_dtree,Y_dtree\n \n results = []\n for hyper_par in args_list:\n clf = DecisionTreeClassifier(random_state = 0)\n \n clf.set_params(**hyper_par)\n result = cross_val_score(clf, X_dtree, Y_dtree, scoring='accuracy', cv=3).mean()\n results.append(result)\n return results\n",
"(569, 30) (569,)\n"
],
[
"param_space_list = [param_dict_knn, param_dict_svm, param_dict_dtree, param_dict_xgboost]\nobjective_list = [objective_knn, objective_svm, objective_dtree, objective_xgboost]",
"_____no_output_____"
],
[
"metatuner = MetaTuner(param_space_list, objective_list)",
"_____no_output_____"
],
[
"results = metatuner.run()",
"_____no_output_____"
],
[
"# see the keys results of evaluations\nfor k in results:\n print(k)",
"random_params\nrandom_params_objective\nrandom_objective_fid\nparams_tried\nobjective_values\nobjective_fid\nbest_objective\nbest_params\nbest_objective_fid\n"
],
[
"print('best_objective:',results['best_objective'])\nprint('best_params:',results['best_params'])\nprint('best_objective_fid:',results['best_objective_fid'])",
"best_objective: 0.9420124385036667\nbest_params: {'booster': 'gbtree', 'gamma': 0.10353367759089294, 'learning_rate': 0.9651837278385165, 'max_depth': 9, 'n_estimators': 3}\nbest_objective_fid: 3\n"
],
[
"#order of function evaluation, initial order is random\nprint(results['objective_fid'])",
"[0, 0, 1, 1, 2, 2, 3, 3, 0, 3, 3, 3, 2, 3, 2, 2, 3, 2, 3, 2, 2, 3, 2, 2, 2, 1, 0, 2]\n"
],
[
"# See the evaluation order of function values\nprint(results['objective_values'])",
"[0.9016522788452613, 0.9016244314489928, 0.6264204028589994, 0.6274204028589994, 0.924412884062007, 0.924403601596584, 0.9208948296667595, 0.9402394876079088, 0.91914972616727, 0.8700083542188805, 0.9208948296667595, 0.8893158822983386, 0.9384665367121507, 0.8752900770444629, 0.7431913116123643, 0.924403601596584, 0.8840248770073332, 0.9173767752715122, 0.9208948296667595, 0.9156595191682911, 0.924412884062007, 0.9420124385036667, 0.9261579875614964, 0.9103777963427087, 0.9261951174231876, 0.6274204028589994, 0.9050960735171261, 0.933194096351991]\n"
]
],
[
[
"# A simple chart of function evaluations",
"_____no_output_____"
]
],
[
[
"def count_elements(seq):\n \"\"\"Tally elements from `seq`.\"\"\"\n hist = {}\n for i in seq:\n hist[i] = hist.get(i, 0) + 1\n return hist\n\ndef ascii_histogram(seq):\n \"\"\"A horizontal frequency-table/histogram plot.\"\"\"\n counted = count_elements(seq)\n for k in sorted(counted):\n print('{0:5d} {1}'.format(k, '+' * counted[k]))\n \nascii_histogram(results['objective_fid'])",
" 0 ++++\n 1 +++\n 2 ++++++++++++\n 3 +++++++++\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b70c8cb9adc059c537effd84e024125671b20b | 25,243 | ipynb | Jupyter Notebook | Global beam test.ipynb | jlfly12/qrsim | 1f5340cdc4f6cc0ca7ecbebd49ba8c6f78afdb8c | [
"Apache-2.0"
] | null | null | null | Global beam test.ipynb | jlfly12/qrsim | 1f5340cdc4f6cc0ca7ecbebd49ba8c6f78afdb8c | [
"Apache-2.0"
] | null | null | null | Global beam test.ipynb | jlfly12/qrsim | 1f5340cdc4f6cc0ca7ecbebd49ba8c6f78afdb8c | [
"Apache-2.0"
] | 1 | 2019-06-21T17:55:00.000Z | 2019-06-21T17:55:00.000Z | 124.349754 | 10,264 | 0.868558 | [
[
[
"### May be possible to entangle all ions with global pulse with multiple tones.",
"_____no_output_____"
]
],
[
[
"from numpy import *\nfrom scipy.optimize import curve_fit\nimport matplotlib.pyplot as plt\nfrom Error_dist import func_str",
"_____no_output_____"
],
[
"# 10-ion test\n\nN = 11\n\nx = arange(1, N)\ny = x\n\ndef func(x, a, b, c, d, e):\n return (a / x ** 0.5 + b / x ** 0.7 + c / x ** 1 + d / x ** 1.5 + e / x ** 2)\n\n# Assume entangling strengt\\ scales as 1 / d ^ r where 0.5 < r < 3\ndef func_log(x, a, b, c, d, e):\n return - log(a / x ** 0.5 + b / x ** 0.7 + c / x ** 1 + d / x ** 1.5 + e / x ** 2)\n\npopt, pcov = curve_fit(func_log, x, y)\n\nx_cont = linspace(1, N, 1000)\ny_cont = func_log(x_cont, *popt)\n\nprint(f'Function: {func_str(func_log)}')\n\nplt.plot(x, y, 'bo')\nplt.plot(x_cont, y_cont, 'r')\nplt.show()\n\nprint(f\"Parameters: {popt}\")",
"Function: - log(a / x ** 0.5 + b / x ** 0.7 + c / x ** 1 + d / x ** 1.5 + e / x ** 2)\n"
],
[
"def func(x, a, b, c, d, e):\n return (a / x ** 0.5 + b / x ** 0.7 + c / x ** 1 + d / x ** 1.5 + e / x ** 2)\n\n\n# Assume entangling strengt\\ scales as 1 / d ^ r where 0.5 < r < 3\ndef func_log(x, a, b, c, d, e):\n return - log(func(x, a, b, c, d, e))\n\nN_start = 21\nN = 30\n\nx = arange(N_start, N)\ny = exp(-x)\n\n\npopt, pcov = curve_fit(func, x, y)\n\n\nx_disc = arange(25, N+2)\ny_disc = exp(-x_disc)\nL = len(x_disc)\n\nx_cont = linspace(25, N+3, 1000)\ny_cont = func(x_cont, *popt)\n\nx = arange(1, N)\ny = exp(-x)\n\nprint(f'Function: {func_str(func)}')\n\nplt.plot(x_disc, y_disc, 'bo')\nplt.plot(x_cont, y_cont, 'r')\n# plt.ylim([y_disc[L-1] * 0.9, y_disc[0] * 1.1])\n# plt.ylim([y_disc[L-1] * 0.9, y_disc[0] * 0.01])\n\nplt.show()\n\nprint(f\"Parameters (normalized by {popt[0]}): {popt / popt[0]}\")\n# print(f\"Covariance: {pcov}\")\n\n",
"Function: (a / x ** 0.5 + b / x ** 0.7 + c / x ** 1 + d / x ** 1.5 + e / x ** 2)\n"
],
[
"N = 20\nx = linspace(1, N, 300)\nplt.plot(x, func1(x, *popt))\nx_dots = arange(1, N)\nplt.plot(x_dots, exp(-x_dots), 'ro')\nplt.ylim([-0.01, 0.5])\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e7b7122d01d775a212576ca277047e97ce64a098 | 787,384 | ipynb | Jupyter Notebook | 04_training_linear_models.ipynb | probationer070/handson-ml2 | abc971c632696443bdaea77dbd86729e5e9dab8a | [
"Apache-2.0"
] | null | null | null | 04_training_linear_models.ipynb | probationer070/handson-ml2 | abc971c632696443bdaea77dbd86729e5e9dab8a | [
"Apache-2.0"
] | null | null | null | 04_training_linear_models.ipynb | probationer070/handson-ml2 | abc971c632696443bdaea77dbd86729e5e9dab8a | [
"Apache-2.0"
] | null | null | null | 223.815804 | 79,934 | 0.909839 | [
[
[
"**4장 – 모델 훈련**",
"_____no_output_____"
],
[
"_이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._",
"_____no_output_____"
],
[
"<table align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/rickiepark/handson-ml2/blob/master/04_training_linear_models.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />구글 코랩에서 실행하기</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"# 설정",
"_____no_output_____"
],
[
"먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다.",
"_____no_output_____"
]
],
[
[
"# 파이썬 ≥3.5 필수\nimport sys\nassert sys.version_info >= (3, 5)\n\n# 사이킷런 ≥0.20 필수\nimport sklearn\nassert sklearn.__version__ >= \"0.20\"\n\n# 공통 모듈 임포트\nimport numpy as np\nimport os\n\n# 노트북 실행 결과를 동일하게 유지하기 위해\nnp.random.seed(42)\n\n# 깔끔한 그래프 출력을 위해\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nmpl.rc('axes', labelsize=14)\nmpl.rc('xtick', labelsize=12)\nmpl.rc('ytick', labelsize=12)\n\n# 그림을 저장할 위치\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"training_linear_models\"\nIMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID)\nos.makedirs(IMAGES_PATH, exist_ok=True)\n\ndef save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n path = os.path.join(IMAGES_PATH, fig_id + \".\" + fig_extension)\n print(\"그림 저장:\", fig_id)\n if tight_layout:\n plt.tight_layout()\n plt.savefig(path, format=fig_extension, dpi=resolution)",
"_____no_output_____"
]
],
[
[
"# 선형 회귀",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nX = 2 * np.random.rand(100, 1)\ny = 4 + 3 * X + np.random.randn(100, 1)",
"_____no_output_____"
],
[
"plt.plot(X, y, \"b.\")\nplt.xlabel(\"$x_1$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.axis([0, 2, 0, 15])\nsave_fig(\"generated_data_plot\")\nplt.show()",
"그림 저장: generated_data_plot\n"
]
],
[
[
"**식 4-4: 정규 방정식**\n\n$\\hat{\\boldsymbol{\\theta}} = (\\mathbf{X}^T \\mathbf{X})^{-1} \\mathbf{X}^T \\mathbf{y}$",
"_____no_output_____"
]
],
[
[
"X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다.\ntheta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)",
"_____no_output_____"
],
[
"theta_best",
"_____no_output_____"
]
],
[
[
"$\\hat{y} = \\mathbf{X} \\boldsymbol{\\hat{\\theta}}$",
"_____no_output_____"
]
],
[
[
"X_new = np.array([[0], [2]])\nX_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다.\ny_predict = X_new_b.dot(theta_best)\ny_predict",
"_____no_output_____"
],
[
"plt.plot(X_new, y_predict, \"r-\")\nplt.plot(X, y, \"b.\")\nplt.axis([0, 2, 0, 15])\nplt.show()",
"_____no_output_____"
]
],
[
[
"책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다:",
"_____no_output_____"
]
],
[
[
"plt.plot(X_new, y_predict, \"r-\", linewidth=2, label=\"Predictions\")\nplt.plot(X, y, \"b.\")\nplt.xlabel(\"$x_1$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.legend(loc=\"upper left\", fontsize=14)\nplt.axis([0, 2, 0, 15])\nsave_fig(\"linear_model_predictions_plot\")\nplt.show()",
"그림 저장: linear_model_predictions_plot\n"
],
[
"from sklearn.linear_model import LinearRegression\n\nlin_reg = LinearRegression()\nlin_reg.fit(X, y)\nlin_reg.intercept_, lin_reg.coef_",
"_____no_output_____"
],
[
"lin_reg.predict(X_new)",
"_____no_output_____"
]
],
[
[
"`LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수(\"least squares\"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다:",
"_____no_output_____"
]
],
[
[
"# 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다.\ntheta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)\ntheta_best_svd",
"_____no_output_____"
]
],
[
[
"이 함수는 $\\mathbf{X}^+\\mathbf{y}$을 계산합니다. $\\mathbf{X}^{+}$는 $\\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다:",
"_____no_output_____"
],
[
"$\\boldsymbol{\\hat{\\theta}} = \\mathbf{X}^{-1}\\hat{y}$",
"_____no_output_____"
]
],
[
[
"np.linalg.pinv(X_b).dot(y)",
"_____no_output_____"
]
],
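(Editor's addition, not part of the original notebook.) A brief, hedged sketch of how the Moore–Penrose pseudoinverse used above can be obtained from the Singular Value Decomposition: with X_b = U Σ Vᵀ, the pseudoinverse is X_b⁺ = V Σ⁺ Uᵀ, where Σ⁺ inverts only the non-zero singular values. The tolerance below is illustrative, and `X_b`/`y` are assumed to be the variables defined in the cells above.

```python
# Hedged sketch: pseudoinverse via SVD, compared against np.linalg.pinv.
import numpy as np

U, s, Vt = np.linalg.svd(X_b, full_matrices=False)  # X_b = U @ diag(s) @ Vt
s_inv = np.where(s > 1e-10, 1 / s, 0.0)             # invert only the non-zero singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)        # X_b^+ = V @ diag(s_inv) @ U^T
theta_pinv = X_b_pinv.dot(y)                        # should match np.linalg.pinv(X_b).dot(y)
```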
[
[
"# 경사 하강법\n## 배치 경사 하강법",
"_____no_output_____"
],
[
"**식 4-6: 비용 함수의 그레이디언트 벡터**\n\n$\n\\dfrac{\\partial}{\\partial \\boldsymbol{\\theta}} \\text{MSE}(\\boldsymbol{\\theta})\n = \\dfrac{2}{m} \\mathbf{X}^T (\\mathbf{X} \\boldsymbol{\\theta} - \\mathbf{y})\n$\n\n**식 4-7: 경사 하강법의 스텝**\n\n$\n\\boldsymbol{\\theta}^{(\\text{next step})} = \\boldsymbol{\\theta} - \\eta \\dfrac{\\partial}{\\partial \\boldsymbol{\\theta}} \\text{MSE}(\\boldsymbol{\\theta})\n$\n",
"_____no_output_____"
]
],
[
[
"eta = 0.1 # 학습률\nn_iterations = 1000\nm = 100\n\ntheta = np.random.randn(2,1) # 랜덤 초기화\n\nfor iteration in range(n_iterations):\n gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)\n theta = theta - eta * gradients",
"_____no_output_____"
],
[
"theta",
"_____no_output_____"
],
[
"X_new_b.dot(theta)",
"_____no_output_____"
],
[
"theta_path_bgd = []\n\ndef plot_gradient_descent(theta, eta, theta_path=None):\n m = len(X_b)\n plt.plot(X, y, \"b.\")\n n_iterations = 1000\n for iteration in range(n_iterations):\n if iteration < 10:\n y_predict = X_new_b.dot(theta)\n style = \"b-\" if iteration > 0 else \"r--\"\n plt.plot(X_new, y_predict, style)\n gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)\n theta = theta - eta * gradients\n if theta_path is not None:\n theta_path.append(theta)\n plt.xlabel(\"$x_1$\", fontsize=18)\n plt.axis([0, 2, 0, 15])\n plt.title(r\"$\\eta = {}$\".format(eta), fontsize=16)",
"_____no_output_____"
],
[
"np.random.seed(42)\ntheta = np.random.randn(2,1) # random initialization\n\nplt.figure(figsize=(10,4))\nplt.subplot(131); plot_gradient_descent(theta, eta=0.02)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd)\nplt.subplot(133); plot_gradient_descent(theta, eta=0.5)\n\nsave_fig(\"gradient_descent_plot\")\nplt.show()",
"그림 저장: gradient_descent_plot\n"
]
],
[
[
"## 확률적 경사 하강법",
"_____no_output_____"
]
],
[
[
"theta_path_sgd = []\nm = len(X_b)\nnp.random.seed(42)",
"_____no_output_____"
],
[
"n_epochs = 50\nt0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터\n\ndef learning_schedule(t):\n return t0 / (t + t1)\n\ntheta = np.random.randn(2,1) # 랜덤 초기화\n\nfor epoch in range(n_epochs):\n for i in range(m):\n if epoch == 0 and i < 20: # 책에는 없음\n y_predict = X_new_b.dot(theta) # 책에는 없음\n style = \"b-\" if i > 0 else \"r--\" # 책에는 없음\n plt.plot(X_new, y_predict, style) # 책에는 없음\n random_index = np.random.randint(m)\n xi = X_b[random_index:random_index+1]\n yi = y[random_index:random_index+1]\n gradients = 2 * xi.T.dot(xi.dot(theta) - yi)\n eta = learning_schedule(epoch * m + i)\n theta = theta - eta * gradients\n theta_path_sgd.append(theta) # 책에는 없음\n\nplt.plot(X, y, \"b.\") # 책에는 없음\nplt.xlabel(\"$x_1$\", fontsize=18) # 책에는 없음\nplt.ylabel(\"$y$\", rotation=0, fontsize=18) # 책에는 없음\nplt.axis([0, 2, 0, 15]) # 책에는 없음\nsave_fig(\"sgd_plot\") # 책에는 없음\nplt.show() # 책에는 없음",
"그림 저장: sgd_plot\n"
],
[
"theta",
"_____no_output_____"
],
[
"from sklearn.linear_model import SGDRegressor\n\nsgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42)\nsgd_reg.fit(X, y.ravel())",
"_____no_output_____"
],
[
"sgd_reg.intercept_, sgd_reg.coef_",
"_____no_output_____"
]
],
[
[
"## 미니배치 경사 하강법",
"_____no_output_____"
]
],
[
[
"theta_path_mgd = []\n\nn_iterations = 50\nminibatch_size = 20\n\nnp.random.seed(42)\ntheta = np.random.randn(2,1) # 랜덤 초기화\n\nt0, t1 = 200, 1000\ndef learning_schedule(t):\n return t0 / (t + t1)\n\nt = 0\nfor epoch in range(n_iterations):\n shuffled_indices = np.random.permutation(m)\n X_b_shuffled = X_b[shuffled_indices]\n y_shuffled = y[shuffled_indices]\n for i in range(0, m, minibatch_size):\n t += 1\n xi = X_b_shuffled[i:i+minibatch_size]\n yi = y_shuffled[i:i+minibatch_size]\n gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi)\n eta = learning_schedule(t)\n theta = theta - eta * gradients\n theta_path_mgd.append(theta)",
"_____no_output_____"
],
[
"theta",
"_____no_output_____"
],
[
"theta_path_bgd = np.array(theta_path_bgd)\ntheta_path_sgd = np.array(theta_path_sgd)\ntheta_path_mgd = np.array(theta_path_mgd)",
"_____no_output_____"
],
[
"plt.figure(figsize=(7,4))\nplt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], \"r-s\", linewidth=1, label=\"Stochastic\")\nplt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], \"g-+\", linewidth=2, label=\"Mini-batch\")\nplt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], \"b-o\", linewidth=3, label=\"Batch\")\nplt.legend(loc=\"upper left\", fontsize=16)\nplt.xlabel(r\"$\\theta_0$\", fontsize=20)\nplt.ylabel(r\"$\\theta_1$ \", fontsize=20, rotation=0)\nplt.axis([2.5, 4.5, 2.3, 3.9])\nsave_fig(\"gradient_descent_paths_plot\")\nplt.show()",
"그림 저장: gradient_descent_paths_plot\n"
]
],
[
[
"# 다항 회귀",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport numpy.random as rnd\n\nnp.random.seed(42)",
"_____no_output_____"
],
[
"m = 100\nX = 6 * np.random.rand(m, 1) - 3\ny = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)",
"_____no_output_____"
],
[
"plt.plot(X, y, \"b.\")\nplt.xlabel(\"$x_1$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.axis([-3, 3, 0, 10])\nsave_fig(\"quadratic_data_plot\")\nplt.show()",
"그림 저장: quadratic_data_plot\n"
],
[
"from sklearn.preprocessing import PolynomialFeatures\npoly_features = PolynomialFeatures(degree=2, include_bias=False)\nX_poly = poly_features.fit_transform(X)\nX[0]",
"_____no_output_____"
],
[
"X_poly[0]",
"_____no_output_____"
],
[
"lin_reg = LinearRegression()\nlin_reg.fit(X_poly, y)\nlin_reg.intercept_, lin_reg.coef_",
"_____no_output_____"
],
[
"X_new=np.linspace(-3, 3, 100).reshape(100, 1)\nX_new_poly = poly_features.transform(X_new)\ny_new = lin_reg.predict(X_new_poly)\nplt.plot(X, y, \"b.\")\nplt.plot(X_new, y_new, \"r-\", linewidth=2, label=\"Predictions\")\nplt.xlabel(\"$x_1$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.legend(loc=\"upper left\", fontsize=14)\nplt.axis([-3, 3, 0, 10])\nsave_fig(\"quadratic_predictions_plot\")\nplt.show()",
"그림 저장: quadratic_predictions_plot\n"
],
[
"from sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import Pipeline\n\nfor style, width, degree in ((\"g-\", 1, 300), (\"b--\", 2, 2), (\"r-+\", 2, 1)):\n polybig_features = PolynomialFeatures(degree=degree, include_bias=False)\n std_scaler = StandardScaler()\n lin_reg = LinearRegression()\n polynomial_regression = Pipeline([\n (\"poly_features\", polybig_features),\n (\"std_scaler\", std_scaler),\n (\"lin_reg\", lin_reg),\n ])\n polynomial_regression.fit(X, y)\n y_newbig = polynomial_regression.predict(X_new)\n plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width)\n\nplt.plot(X, y, \"b.\", linewidth=3)\nplt.legend(loc=\"upper left\")\nplt.xlabel(\"$x_1$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.axis([-3, 3, 0, 10])\nsave_fig(\"high_degree_polynomials_plot\")\nplt.show()",
"그림 저장: high_degree_polynomials_plot\n"
]
],
[
[
"# 학습 곡선",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split\n\ndef plot_learning_curves(model, X, y):\n X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10)\n train_errors, val_errors = [], []\n for m in range(1, len(X_train) + 1):\n model.fit(X_train[:m], y_train[:m])\n y_train_predict = model.predict(X_train[:m])\n y_val_predict = model.predict(X_val)\n train_errors.append(mean_squared_error(y_train[:m], y_train_predict))\n val_errors.append(mean_squared_error(y_val, y_val_predict))\n\n plt.plot(np.sqrt(train_errors), \"r-+\", linewidth=2, label=\"train\")\n plt.plot(np.sqrt(val_errors), \"b-\", linewidth=3, label=\"val\")\n plt.legend(loc=\"upper right\", fontsize=14) # 책에는 없음\n plt.xlabel(\"Training set size\", fontsize=14) # 책에는 없음\n plt.ylabel(\"RMSE\", fontsize=14) # 책에는 없음",
"_____no_output_____"
],
[
"lin_reg = LinearRegression()\nplot_learning_curves(lin_reg, X, y)\nplt.axis([0, 80, 0, 3]) # 책에는 없음\nsave_fig(\"underfitting_learning_curves_plot\") # 책에는 없음\nplt.show() # 책에는 없음",
"그림 저장: underfitting_learning_curves_plot\n"
],
[
"from sklearn.pipeline import Pipeline\n\npolynomial_regression = Pipeline([\n (\"poly_features\", PolynomialFeatures(degree=10, include_bias=False)),\n (\"lin_reg\", LinearRegression()),\n ])\n\nplot_learning_curves(polynomial_regression, X, y)\nplt.axis([0, 80, 0, 3]) # 책에는 없음\nsave_fig(\"learning_curves_plot\") # 책에는 없음\nplt.show() # 책에는 없음",
"그림 저장: learning_curves_plot\n"
]
],
[
[
"# 규제가 있는 선형 모델",
"_____no_output_____"
],
[
"## 릿지 회귀",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\nm = 20\nX = 3 * np.random.rand(m, 1)\ny = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5\nX_new = np.linspace(0, 3, 100).reshape(100, 1)",
"_____no_output_____"
]
],
[
[
"**식 4-8: 릿지 회귀의 비용 함수**\n\n$\nJ(\\boldsymbol{\\theta}) = \\text{MSE}(\\boldsymbol{\\theta}) + \\alpha \\dfrac{1}{2}\\sum\\limits_{i=1}^{n}{\\theta_i}^2\n$",
"_____no_output_____"
]
],
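(Editor's addition, not part of the original notebook.) Minimizing the ridge cost function above also admits a closed-form solution, θ̂ = (XᵀX + αA)⁻¹ Xᵀ y, where A is the identity matrix with a 0 in the top-left cell so that the bias term is not regularized. A minimal NumPy sketch follows, assuming the `X` and `y` generated two cells above; its result should be close to the `Ridge(alpha=1, solver="cholesky")` model fitted below.

```python
# Hedged sketch: closed-form ridge solution (alpha = 1 mirrors the sklearn example below).
import numpy as np

alpha = 1.0
X_b_ridge = np.c_[np.ones((len(X), 1)), X]   # add the bias input x0 = 1
A = np.identity(X_b_ridge.shape[1])
A[0, 0] = 0                                  # do not regularize the bias term
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + alpha * A).dot(X_b_ridge.T).dot(y)
```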
[
[
"from sklearn.linear_model import Ridge\nridge_reg = Ridge(alpha=1, solver=\"cholesky\", random_state=42)\nridge_reg.fit(X, y)\nridge_reg.predict([[1.5]])",
"_____no_output_____"
],
[
"ridge_reg = Ridge(alpha=1, solver=\"sag\", random_state=42)\nridge_reg.fit(X, y)\nridge_reg.predict([[1.5]])",
"_____no_output_____"
],
[
"from sklearn.linear_model import Ridge\n\ndef plot_model(model_class, polynomial, alphas, **model_kargs):\n for alpha, style in zip(alphas, (\"b-\", \"g--\", \"r:\")):\n model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression()\n if polynomial:\n model = Pipeline([\n (\"poly_features\", PolynomialFeatures(degree=10, include_bias=False)),\n (\"std_scaler\", StandardScaler()),\n (\"regul_reg\", model),\n ])\n model.fit(X, y)\n y_new_regul = model.predict(X_new)\n lw = 2 if alpha > 0 else 1\n plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r\"$\\alpha = {}$\".format(alpha))\n plt.plot(X, y, \"b.\", linewidth=3)\n plt.legend(loc=\"upper left\", fontsize=15)\n plt.xlabel(\"$x_1$\", fontsize=18)\n plt.axis([0, 3, 0, 4])\n\nplt.figure(figsize=(8,4))\nplt.subplot(121)\nplot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.subplot(122)\nplot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42)\n\nsave_fig(\"ridge_regression_plot\")\nplt.show()",
"그림 저장: ridge_regression_plot\n"
]
],
[
[
"**노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다.",
"_____no_output_____"
]
],
[
[
"sgd_reg = SGDRegressor(penalty=\"l2\", max_iter=1000, tol=1e-3, random_state=42)\nsgd_reg.fit(X, y.ravel())\nsgd_reg.predict([[1.5]])",
"_____no_output_____"
]
],
[
[
"**식 4-10: 라쏘 회귀의 비용 함수**\n\n$\nJ(\\boldsymbol{\\theta}) = \\text{MSE}(\\boldsymbol{\\theta}) + \\alpha \\sum\\limits_{i=1}^{n}\\left| \\theta_i \\right|\n$",
"_____no_output_____"
],
[
"## 라쏘 회귀",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import Lasso\n\nplt.figure(figsize=(8,4))\nplt.subplot(121)\nplot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.subplot(122)\nplot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42)\n\nsave_fig(\"lasso_regression_plot\")\nplt.show()",
"/home/haesun/handson-ml2/.env/lib/python3.7/site-packages/sklearn/linear_model/_coordinate_descent.py:646: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 2.803e+00, tolerance: 9.295e-04\n coef_, l1_reg, l2_reg, X, y, max_iter, tol, rng, random, positive\n"
],
[
"from sklearn.linear_model import Lasso\nlasso_reg = Lasso(alpha=0.1)\nlasso_reg.fit(X, y)\nlasso_reg.predict([[1.5]])",
"_____no_output_____"
]
],
[
[
"## 엘라스틱넷",
"_____no_output_____"
],
[
"**식 4-12: 엘라스틱넷 비용 함수**\n\n$\nJ(\\boldsymbol{\\theta}) = \\text{MSE}(\\boldsymbol{\\theta}) + r \\alpha \\sum\\limits_{i=1}^{n}\\left| \\theta_i \\right| + \\dfrac{1 - r}{2} \\alpha \\sum\\limits_{i=1}^{n}{{\\theta_i}^2}\n$",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import ElasticNet\nelastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42)\nelastic_net.fit(X, y)\nelastic_net.predict([[1.5]])",
"_____no_output_____"
]
],
[
[
"## 조기 종료",
"_____no_output_____"
]
],
[
[
"np.random.seed(42)\nm = 100\nX = 6 * np.random.rand(m, 1) - 3\ny = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)\n\nX_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10)",
"_____no_output_____"
],
[
"from copy import deepcopy\n\npoly_scaler = Pipeline([\n (\"poly_features\", PolynomialFeatures(degree=90, include_bias=False)),\n (\"std_scaler\", StandardScaler())\n ])\n\nX_train_poly_scaled = poly_scaler.fit_transform(X_train)\nX_val_poly_scaled = poly_scaler.transform(X_val)\n\nsgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True,\n penalty=None, learning_rate=\"constant\", eta0=0.0005, random_state=42)\n\nminimum_val_error = float(\"inf\")\nbest_epoch = None\nbest_model = None\nfor epoch in range(1000):\n sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다\n y_val_predict = sgd_reg.predict(X_val_poly_scaled)\n val_error = mean_squared_error(y_val, y_val_predict)\n if val_error < minimum_val_error:\n minimum_val_error = val_error\n best_epoch = epoch\n best_model = deepcopy(sgd_reg)",
"_____no_output_____"
]
],
[
[
"그래프를 그립니다:",
"_____no_output_____"
]
],
[
[
"sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True,\n penalty=None, learning_rate=\"constant\", eta0=0.0005, random_state=42)\n\nn_epochs = 500\ntrain_errors, val_errors = [], []\nfor epoch in range(n_epochs):\n sgd_reg.fit(X_train_poly_scaled, y_train)\n y_train_predict = sgd_reg.predict(X_train_poly_scaled)\n y_val_predict = sgd_reg.predict(X_val_poly_scaled)\n train_errors.append(mean_squared_error(y_train, y_train_predict))\n val_errors.append(mean_squared_error(y_val, y_val_predict))\n\nbest_epoch = np.argmin(val_errors)\nbest_val_rmse = np.sqrt(val_errors[best_epoch])\n\nplt.annotate('Best model',\n xy=(best_epoch, best_val_rmse),\n xytext=(best_epoch, best_val_rmse + 1),\n ha=\"center\",\n arrowprops=dict(facecolor='black', shrink=0.05),\n fontsize=16,\n )\n\nbest_val_rmse -= 0.03 # just to make the graph look better\nplt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], \"k:\", linewidth=2)\nplt.plot(np.sqrt(val_errors), \"b-\", linewidth=3, label=\"Validation set\")\nplt.plot(np.sqrt(train_errors), \"r--\", linewidth=2, label=\"Training set\")\nplt.legend(loc=\"upper right\", fontsize=14)\nplt.xlabel(\"Epoch\", fontsize=14)\nplt.ylabel(\"RMSE\", fontsize=14)\nsave_fig(\"early_stopping_plot\")\nplt.show()",
"그림 저장: early_stopping_plot\n"
],
[
"best_epoch, best_model",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5\n\nt1s = np.linspace(t1a, t1b, 500)\nt2s = np.linspace(t2a, t2b, 500)\nt1, t2 = np.meshgrid(t1s, t2s)\nT = np.c_[t1.ravel(), t2.ravel()]\nXr = np.array([[1, 1], [1, -1], [1, 0.5]])\nyr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]\n\nJ = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape)\n\nN1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)\nN2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)\n\nt_min_idx = np.unravel_index(np.argmin(J), J.shape)\nt1_min, t2_min = t1[t_min_idx], t2[t_min_idx]\n\nt_init = np.array([[0.25], [-1]])",
"_____no_output_____"
],
[
"def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200):\n path = [theta]\n for iteration in range(n_iterations):\n gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta\n theta = theta - eta * gradients\n path.append(theta)\n return np.array(path)\n\nfig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8))\nfor i, N, l1, l2, title in ((0, N1, 2., 0, \"Lasso\"), (1, N2, 0, 2., \"Ridge\")):\n JR = J + l1 * N1 + l2 * 0.5 * N2**2\n \n tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape)\n t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]\n\n levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J)\n levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR)\n levelsN=np.linspace(0, np.max(N), 10)\n \n path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)\n path_JR = bgd_path(t_init, Xr, yr, l1, l2)\n path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0)\n\n ax = axes[i, 0]\n ax.grid(True)\n ax.axhline(y=0, color='k')\n ax.axvline(x=0, color='k')\n ax.contourf(t1, t2, N / 2., levels=levelsN)\n ax.plot(path_N[:, 0], path_N[:, 1], \"y--\")\n ax.plot(0, 0, \"ys\")\n ax.plot(t1_min, t2_min, \"ys\")\n ax.set_title(r\"$\\ell_{}$ penalty\".format(i + 1), fontsize=16)\n ax.axis([t1a, t1b, t2a, t2b])\n if i == 1:\n ax.set_xlabel(r\"$\\theta_1$\", fontsize=16)\n ax.set_ylabel(r\"$\\theta_2$\", fontsize=16, rotation=0)\n\n ax = axes[i, 1]\n ax.grid(True)\n ax.axhline(y=0, color='k')\n ax.axvline(x=0, color='k')\n ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)\n ax.plot(path_JR[:, 0], path_JR[:, 1], \"w-o\")\n ax.plot(path_N[:, 0], path_N[:, 1], \"y--\")\n ax.plot(0, 0, \"ys\")\n ax.plot(t1_min, t2_min, \"ys\")\n ax.plot(t1r_min, t2r_min, \"rs\")\n ax.set_title(title, fontsize=16)\n ax.axis([t1a, t1b, t2a, t2b])\n if i == 1:\n ax.set_xlabel(r\"$\\theta_1$\", fontsize=16)\n\nsave_fig(\"lasso_vs_ridge_plot\")\nplt.show()",
"그림 저장: lasso_vs_ridge_plot\n"
]
],
[
[
"# 로지스틱 회귀",
"_____no_output_____"
],
[
"## 결정 경계",
"_____no_output_____"
]
],
[
[
"t = np.linspace(-10, 10, 100)\nsig = 1 / (1 + np.exp(-t))\nplt.figure(figsize=(9, 3))\nplt.plot([-10, 10], [0, 0], \"k-\")\nplt.plot([-10, 10], [0.5, 0.5], \"k:\")\nplt.plot([-10, 10], [1, 1], \"k:\")\nplt.plot([0, 0], [-1.1, 1.1], \"k-\")\nplt.plot(t, sig, \"b-\", linewidth=2, label=r\"$\\sigma(t) = \\frac{1}{1 + e^{-t}}$\")\nplt.xlabel(\"t\")\nplt.legend(loc=\"upper left\", fontsize=20)\nplt.axis([-10, 10, -0.1, 1.1])\nsave_fig(\"logistic_function_plot\")\nplt.show()",
"그림 저장: logistic_function_plot\n"
]
],
[
[
"**식 4-16: 하나의 훈련 샘플에 대한 비용 함수**\n\n$\nc(\\boldsymbol{\\theta}) =\n\\begin{cases}\n -\\log(\\hat{p}) & \\text{if } y = 1, \\\\\n -\\log(1 - \\hat{p}) & \\text{if } y = 0.\n\\end{cases}\n$\n\n\n**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**\n\n$\nJ(\\boldsymbol{\\theta}) = -\\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{\\left[ y^{(i)} log\\left(\\hat{p}^{(i)}\\right) + (1 - y^{(i)}) log\\left(1 - \\hat{p}^{(i)}\\right)\\right]}\n$\n\n\n**식 4-18: 로지스틱 비용 함수의 편도 함수**\n\n$\n\\dfrac{\\partial}{\\partial \\theta_j} \\text{J}(\\boldsymbol{\\theta}) = \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\left(\\mathbf{\\sigma(\\boldsymbol{\\theta}}^T \\mathbf{x}^{(i)}) - y^{(i)}\\right)\\, x_j^{(i)}\n$",
"_____no_output_____"
]
],
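[
[
"*(Editor's addition, not part of the original notebook.)* Equations 4-17 and 4-18 translate almost line for line into NumPy. A minimal sketch on a tiny made-up dataset (the names `X_b`, `y_toy` and `theta` are illustrative only):\n\n```python\nimport numpy as np\n\ndef sigmoid(t):\n    return 1 / (1 + np.exp(-t))\n\n# 4 instances: a bias column plus one feature (made-up values)\nX_b = np.array([[1., 0.5], [1., 1.5], [1., 2.5], [1., 3.5]])\ny_toy = np.array([0., 0., 1., 1.])\ntheta = np.zeros(2)\n\np_hat = sigmoid(X_b.dot(theta))   # estimated probabilities\nlog_loss = -np.mean(y_toy * np.log(p_hat) + (1 - y_toy) * np.log(1 - p_hat))  # Eq. 4-17\ngradient = X_b.T.dot(p_hat - y_toy) / len(X_b)                                # Eq. 4-18\nprint(log_loss, gradient)\n```",
"_____no_output_____"
]
],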
[
[
"from sklearn import datasets\niris = datasets.load_iris()\nlist(iris.keys())",
"_____no_output_____"
],
[
"print(iris.DESCR)",
".. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n :Attribute Information:\n - sepal length in cm\n - sepal width in cm\n - petal length in cm\n - petal width in cm\n - class:\n - Iris-Setosa\n - Iris-Versicolour\n - Iris-Virginica\n \n :Summary Statistics:\n\n ============== ==== ==== ======= ===== ====================\n Min Max Mean SD Class Correlation\n ============== ==== ==== ======= ===== ====================\n sepal length: 4.3 7.9 5.84 0.83 0.7826\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n ============== ==== ==== ======= ===== ====================\n\n :Missing Attribute Values: None\n :Class Distribution: 33.3% for each of 3 classes.\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%[email protected])\n :Date: July, 1988\n\nThe famous Iris database, first used by Sir R.A. Fisher. The dataset is taken\nfrom Fisher's paper. Note that it's the same as in R, but not as in the UCI\nMachine Learning Repository, which has two wrong data points.\n\nThis is perhaps the best known database to be found in the\npattern recognition literature. Fisher's paper is a classic in the field and\nis referenced frequently to this day. (See Duda & Hart, for example.) The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant. One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\n.. topic:: References\n\n - Fisher, R.A. \"The use of multiple measurements in taxonomic problems\"\n Annual Eugenics, 7, Part II, 179-188 (1936); also in \"Contributions to\n Mathematical Statistics\" (John Wiley, NY, 1950).\n - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n - Dasarathy, B.V. (1980) \"Nosing Around the Neighborhood: A New System\n Structure and Classification Rule for Recognition in Partially Exposed\n Environments\". IEEE Transactions on Pattern Analysis and Machine\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\n - Gates, G.W. (1972) \"The Reduced Nearest Neighbor Rule\". IEEE Transactions\n on Information Theory, May 1972, 431-433.\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al\"s AUTOCLASS II\n conceptual clustering system finds 3 classes in the data.\n - Many, many more ...\n"
],
[
"X = iris[\"data\"][:, 3:] # 꽃잎 너비\ny = (iris[\"target\"] == 2).astype(int) # Iris virginica이면 1 아니면 0",
"_____no_output_____"
]
],
[
[
"**노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver=\"lbfgs\"`로 지정합니다.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\nlog_reg = LogisticRegression(solver=\"lbfgs\", random_state=42)\nlog_reg.fit(X, y)",
"_____no_output_____"
],
[
"X_new = np.linspace(0, 3, 1000).reshape(-1, 1)\ny_proba = log_reg.predict_proba(X_new)\n\nplt.plot(X_new, y_proba[:, 1], \"g-\", linewidth=2, label=\"Iris virginica\")\nplt.plot(X_new, y_proba[:, 0], \"b--\", linewidth=2, label=\"Not Iris virginica\")",
"_____no_output_____"
]
],
[
[
"책에 실린 그림은 조금 더 예쁘게 꾸몄습니다:",
"_____no_output_____"
]
],
[
[
"X_new = np.linspace(0, 3, 1000).reshape(-1, 1)\ny_proba = log_reg.predict_proba(X_new)\ndecision_boundary = X_new[y_proba[:, 1] >= 0.5][0]\n\nplt.figure(figsize=(8, 3))\nplt.plot(X[y==0], y[y==0], \"bs\")\nplt.plot(X[y==1], y[y==1], \"g^\")\nplt.plot([decision_boundary, decision_boundary], [-1, 2], \"k:\", linewidth=2)\nplt.plot(X_new, y_proba[:, 1], \"g-\", linewidth=2, label=\"Iris virginica\")\nplt.plot(X_new, y_proba[:, 0], \"b--\", linewidth=2, label=\"Not Iris virginica\")\nplt.text(decision_boundary+0.02, 0.15, \"Decision boundary\", fontsize=14, color=\"k\", ha=\"center\")\nplt.arrow(decision_boundary[0], 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b')\nplt.arrow(decision_boundary[0], 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g')\nplt.xlabel(\"Petal width (cm)\", fontsize=14)\nplt.ylabel(\"Probability\", fontsize=14)\nplt.legend(loc=\"center left\", fontsize=14)\nplt.axis([0, 3, -0.02, 1.02])\nsave_fig(\"logistic_regression_plot\")\nplt.show()",
"그림 저장: logistic_regression_plot\n"
],
[
"decision_boundary",
"_____no_output_____"
],
[
"log_reg.predict([[1.7], [1.5]])",
"_____no_output_____"
]
],
[
[
"## 소프트맥스 회귀",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\n\nX = iris[\"data\"][:, (2, 3)] # petal length, petal width\ny = (iris[\"target\"] == 2).astype(int)\n\nlog_reg = LogisticRegression(solver=\"lbfgs\", C=10**10, random_state=42)\nlog_reg.fit(X, y)\n\nx0, x1 = np.meshgrid(\n np.linspace(2.9, 7, 500).reshape(-1, 1),\n np.linspace(0.8, 2.7, 200).reshape(-1, 1),\n )\nX_new = np.c_[x0.ravel(), x1.ravel()]\n\ny_proba = log_reg.predict_proba(X_new)\n\nplt.figure(figsize=(10, 4))\nplt.plot(X[y==0, 0], X[y==0, 1], \"bs\")\nplt.plot(X[y==1, 0], X[y==1, 1], \"g^\")\n\nzz = y_proba[:, 1].reshape(x0.shape)\ncontour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)\n\n\nleft_right = np.array([2.9, 7])\nboundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1]\n\nplt.clabel(contour, inline=1, fontsize=12)\nplt.plot(left_right, boundary, \"k--\", linewidth=3)\nplt.text(3.5, 1.5, \"Not Iris virginica\", fontsize=14, color=\"b\", ha=\"center\")\nplt.text(6.5, 2.3, \"Iris virginica\", fontsize=14, color=\"g\", ha=\"center\")\nplt.xlabel(\"Petal length\", fontsize=14)\nplt.ylabel(\"Petal width\", fontsize=14)\nplt.axis([2.9, 7, 0.8, 2.7])\nsave_fig(\"logistic_regression_contour_plot\")\nplt.show()",
"그림 저장: logistic_regression_contour_plot\n"
]
],
[
[
"**식 4-20: 소프트맥스 함수**\n\n$\n\\hat{p}_k = \\sigma\\left(\\mathbf{s}(\\mathbf{x})\\right)_k = \\dfrac{\\exp\\left(s_k(\\mathbf{x})\\right)}{\\sum\\limits_{j=1}^{K}{\\exp\\left(s_j(\\mathbf{x})\\right)}}\n$\n\n**식 4-22: 크로스 엔트로피 비용 함수**\n\n$\nJ(\\boldsymbol{\\Theta}) = - \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\sum\\limits_{k=1}^{K}{y_k^{(i)}\\log\\left(\\hat{p}_k^{(i)}\\right)}\n$\n\n**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**\n\n$\n\\nabla_{\\boldsymbol{\\theta}^{(k)}} \\, J(\\boldsymbol{\\Theta}) = \\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{ \\left ( \\hat{p}^{(i)}_k - y_k^{(i)} \\right ) \\mathbf{x}^{(i)}}\n$",
"_____no_output_____"
]
],
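[
[
"*(Editor's addition, not in the original notebook.)* Equations 4-20 and 4-22 can be sanity-checked numerically for a single instance; the scores in `s` below are made-up values:\n\n```python\nimport numpy as np\n\ns = np.array([2.0, 1.0, 0.1])            # class scores s_k(x), made-up\np_hat = np.exp(s) / np.exp(s).sum()      # Equation 4-20 (softmax)\ny_one_hot = np.array([1.0, 0.0, 0.0])    # the true class is class 0\nxentropy = -np.sum(y_one_hot * np.log(p_hat))   # Equation 4-22 with m = 1\nprint(p_hat, p_hat.sum(), xentropy)\n```",
"_____no_output_____"
]
],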
[
[
"X = iris[\"data\"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비\ny = iris[\"target\"]\n\nsoftmax_reg = LogisticRegression(multi_class=\"multinomial\",solver=\"lbfgs\", C=10, random_state=42)\nsoftmax_reg.fit(X, y)",
"_____no_output_____"
],
[
"x0, x1 = np.meshgrid(\n np.linspace(0, 8, 500).reshape(-1, 1),\n np.linspace(0, 3.5, 200).reshape(-1, 1),\n )\nX_new = np.c_[x0.ravel(), x1.ravel()]\n\n\ny_proba = softmax_reg.predict_proba(X_new)\ny_predict = softmax_reg.predict(X_new)\n\nzz1 = y_proba[:, 1].reshape(x0.shape)\nzz = y_predict.reshape(x0.shape)\n\nplt.figure(figsize=(10, 4))\nplt.plot(X[y==2, 0], X[y==2, 1], \"g^\", label=\"Iris virginica\")\nplt.plot(X[y==1, 0], X[y==1, 1], \"bs\", label=\"Iris versicolor\")\nplt.plot(X[y==0, 0], X[y==0, 1], \"yo\", label=\"Iris setosa\")\n\nfrom matplotlib.colors import ListedColormap\ncustom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])\n\nplt.contourf(x0, x1, zz, cmap=custom_cmap)\ncontour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)\nplt.clabel(contour, inline=1, fontsize=12)\nplt.xlabel(\"Petal length\", fontsize=14)\nplt.ylabel(\"Petal width\", fontsize=14)\nplt.legend(loc=\"center left\", fontsize=14)\nplt.axis([0, 7, 0, 3.5])\nsave_fig(\"softmax_regression_contour_plot\")\nplt.show()",
"그림 저장: softmax_regression_contour_plot\n"
],
[
"softmax_reg.predict([[5, 2]])",
"_____no_output_____"
],
[
"softmax_reg.predict_proba([[5, 2]])",
"_____no_output_____"
]
],
[
[
"# 연습문제 해답",
"_____no_output_____"
],
[
"## 1. to 11.",
"_____no_output_____"
],
[
"부록 A를 참고하세요.",
"_____no_output_____"
],
[
"## 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기\n(사이킷런을 사용하지 않고)",
"_____no_output_____"
],
[
"먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다.",
"_____no_output_____"
]
],
[
[
"X = iris[\"data\"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이\ny = iris[\"target\"]",
"_____no_output_____"
]
],
[
[
"모든 샘플에 편향을 추가합니다 ($x_0 = 1$):",
"_____no_output_____"
]
],
[
[
"X_with_bias = np.c_[np.ones([len(X), 1]), X]",
"_____no_output_____"
]
],
[
[
"결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다:",
"_____no_output_____"
]
],
[
[
"np.random.seed(2042)",
"_____no_output_____"
]
],
[
[
"데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다:",
"_____no_output_____"
]
],
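[
[
"*(Editor's addition.)* For reference, the Scikit-Learn shortcut mentioned above would look roughly like the sketch below (the `_alt` names are illustrative only); it shuffles differently, so its indices would not match the manual split implemented next:\n\n```python\nfrom sklearn.model_selection import train_test_split\n\n# 60% train, 20% validation, 20% test, done in two steps\nX_tmp, X_test_alt, y_tmp, y_test_alt = train_test_split(\n    X_with_bias, y, test_size=0.2, random_state=42)\nX_train_alt, X_valid_alt, y_train_alt, y_valid_alt = train_test_split(\n    X_tmp, y_tmp, test_size=0.25, random_state=42)   # 0.25 of the remaining 80% = 20%\n```",
"_____no_output_____"
]
],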
[
[
"test_ratio = 0.2\nvalidation_ratio = 0.2\ntotal_size = len(X_with_bias)\n\ntest_size = int(total_size * test_ratio)\nvalidation_size = int(total_size * validation_ratio)\ntrain_size = total_size - test_size - validation_size\n\nrnd_indices = np.random.permutation(total_size)\n\nX_train = X_with_bias[rnd_indices[:train_size]]\ny_train = y[rnd_indices[:train_size]]\nX_valid = X_with_bias[rnd_indices[train_size:-test_size]]\ny_valid = y[rnd_indices[train_size:-test_size]]\nX_test = X_with_bias[rnd_indices[-test_size:]]\ny_test = y[rnd_indices[-test_size:]]",
"_____no_output_____"
]
],
[
[
"타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다:",
"_____no_output_____"
]
],
[
[
"def to_one_hot(y):\n n_classes = y.max() + 1\n m = len(y)\n Y_one_hot = np.zeros((m, n_classes))\n Y_one_hot[np.arange(m), y] = 1\n return Y_one_hot",
"_____no_output_____"
]
],
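[
[
"*(Editor's addition.)* An equivalent one-liner indexes the identity matrix with the class labels; this is only an alternative to `to_one_hot()` above, and the rest of the notebook keeps using the original function:\n\n```python\ndef to_one_hot_eye(y):\n    return np.eye(y.max() + 1)[y]   # rows of the identity matrix picked by label\n```",
"_____no_output_____"
]
],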
[
[
"10개 샘플만 넣어 이 함수를 테스트해 보죠:",
"_____no_output_____"
]
],
[
[
"y_train[:10]",
"_____no_output_____"
],
[
"to_one_hot(y_train[:10])",
"_____no_output_____"
]
],
[
[
"잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다:",
"_____no_output_____"
]
],
[
[
"Y_train_one_hot = to_one_hot(y_train)\nY_valid_one_hot = to_one_hot(y_valid)\nY_test_one_hot = to_one_hot(y_test)",
"_____no_output_____"
]
],
[
[
"이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:\n\n$\\sigma\\left(\\mathbf{s}(\\mathbf{x})\\right)_k = \\dfrac{\\exp\\left(s_k(\\mathbf{x})\\right)}{\\sum\\limits_{j=1}^{K}{\\exp\\left(s_j(\\mathbf{x})\\right)}}$",
"_____no_output_____"
]
],
[
[
"def softmax(logits):\n exps = np.exp(logits)\n exp_sums = np.sum(exps, axis=1, keepdims=True)\n return exps / exp_sums",
"_____no_output_____"
]
],
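[
[
"*(Editor's note.)* The `softmax()` above is fine for this exercise, but `np.exp` can overflow for large logits. A common, mathematically equivalent variant subtracts the row-wise maximum first:\n\n```python\ndef softmax_stable(logits):\n    shifted = logits - logits.max(axis=1, keepdims=True)   # same probabilities, smaller exponents\n    exps = np.exp(shifted)\n    return exps / exps.sum(axis=1, keepdims=True)\n```",
"_____no_output_____"
]
],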
[
[
"훈련을 위한 준비를 거의 마쳤습니다. 입력과 출력의 개수를 정의합니다:",
"_____no_output_____"
]
],
[
[
"n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향)\nn_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스)",
"_____no_output_____"
]
],
[
[
"이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.\n\n구현할 공식은 비용함수입니다:\n\n$J(\\mathbf{\\Theta}) =\n- \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\sum\\limits_{k=1}^{K}{y_k^{(i)}\\log\\left(\\hat{p}_k^{(i)}\\right)}$\n\n그리고 그레이디언트 공식입니다:\n\n$\\nabla_{\\mathbf{\\theta}^{(k)}} \\, J(\\mathbf{\\Theta}) = \\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{ \\left ( \\hat{p}^{(i)}_k - y_k^{(i)} \\right ) \\mathbf{x}^{(i)}}$\n\n$\\hat{p}_k^{(i)} = 0$이면 $\\log\\left(\\hat{p}_k^{(i)}\\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\\log\\left(\\hat{p}_k^{(i)}\\right)$에 아주 작은 값 $\\epsilon$을 추가하겠습니다.",
"_____no_output_____"
]
],
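[
[
"*(Editor's addition.)* The shape bookkeeping recommended above can be made explicit with a few assertions before running the training loop that follows (`Theta_check` is a throwaway name so the real `Theta` is not clobbered):\n\n```python\nm = len(X_train)\nTheta_check = np.random.randn(n_inputs, n_outputs)    # (3, 3)\n\nlogits = X_train.dot(Theta_check)                     # (m, 3)\nY_proba = softmax(logits)                             # (m, 3)\nerror = Y_proba - Y_train_one_hot                     # (m, 3)\ngradients = 1/m * X_train.T.dot(error)                # (3, 3), same shape as Theta_check\n\nassert Y_proba.shape == (m, n_outputs)\nassert gradients.shape == Theta_check.shape\n```",
"_____no_output_____"
]
],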
[
[
"eta = 0.01\nn_iterations = 5001\nm = len(X_train)\nepsilon = 1e-7\n\nTheta = np.random.randn(n_inputs, n_outputs)\n\nfor iteration in range(n_iterations):\n logits = X_train.dot(Theta)\n Y_proba = softmax(logits)\n if iteration % 500 == 0:\n loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))\n print(iteration, loss)\n error = Y_proba - Y_train_one_hot\n gradients = 1/m * X_train.T.dot(error)\n Theta = Theta - eta * gradients",
"0 5.446205811872683\n500 0.8350062641405651\n1000 0.6878801447192402\n1500 0.6012379137693314\n2000 0.5444496861981872\n2500 0.5038530181431525\n3000 0.47292289721922487\n3500 0.44824244188957774\n4000 0.4278651093928793\n4500 0.41060071429187134\n5000 0.3956780375390374\n"
]
],
[
[
"바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다:",
"_____no_output_____"
]
],
[
[
"Theta",
"_____no_output_____"
]
],
[
[
"검증 세트에 대한 예측과 정확도를 확인해 보겠습니다:",
"_____no_output_____"
]
],
[
[
"logits = X_valid.dot(Theta)\nY_proba = softmax(logits)\ny_predict = np.argmax(Y_proba, axis=1)\n\naccuracy_score = np.mean(y_predict == y_valid)\naccuracy_score",
"_____no_output_____"
]
],
[
[
"와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다.",
"_____no_output_____"
]
],
[
[
"eta = 0.1\nn_iterations = 5001\nm = len(X_train)\nepsilon = 1e-7\nalpha = 0.1 # 규제 하이퍼파라미터\n\nTheta = np.random.randn(n_inputs, n_outputs)\n\nfor iteration in range(n_iterations):\n logits = X_train.dot(Theta)\n Y_proba = softmax(logits)\n if iteration % 500 == 0:\n xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))\n l2_loss = 1/2 * np.sum(np.square(Theta[1:]))\n loss = xentropy_loss + alpha * l2_loss\n print(iteration, loss)\n error = Y_proba - Y_train_one_hot\n gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]\n Theta = Theta - eta * gradients",
"0 6.629842469083912\n500 0.5339667976629505\n1000 0.503640075014894\n1500 0.4946891059460321\n2000 0.4912968418075477\n2500 0.48989924700933296\n3000 0.4892990598451198\n3500 0.489035124439786\n4000 0.4889173621830818\n4500 0.4888643337449303\n5000 0.48884031207388184\n"
]
],
[
[
"추가된 $\\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠:",
"_____no_output_____"
]
],
[
[
"logits = X_valid.dot(Theta)\nY_proba = softmax(logits)\ny_predict = np.argmax(Y_proba, axis=1)\n\naccuracy_score = np.mean(y_predict == y_valid)\naccuracy_score",
"_____no_output_____"
]
],
[
[
"와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다.",
"_____no_output_____"
],
[
"이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다.",
"_____no_output_____"
]
],
[
[
"eta = 0.1 \nn_iterations = 5001\nm = len(X_train)\nepsilon = 1e-7\nalpha = 0.1 # 규제 하이퍼파라미터\nbest_loss = np.infty\n\nTheta = np.random.randn(n_inputs, n_outputs)\n\nfor iteration in range(n_iterations):\n logits = X_train.dot(Theta)\n Y_proba = softmax(logits)\n error = Y_proba - Y_train_one_hot\n gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]\n Theta = Theta - eta * gradients\n\n logits = X_valid.dot(Theta)\n Y_proba = softmax(logits)\n xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))\n l2_loss = 1/2 * np.sum(np.square(Theta[1:]))\n loss = xentropy_loss + alpha * l2_loss\n if iteration % 500 == 0:\n print(iteration, loss)\n if loss < best_loss:\n best_loss = loss\n else:\n print(iteration - 1, best_loss)\n print(iteration, loss, \"조기 종료!\")\n break",
"0 4.7096017363419875\n500 0.5739711987633518\n1000 0.5435638529109127\n1500 0.5355752782580262\n2000 0.5331959249285544\n2500 0.5325946767399383\n2765 0.5325460966791898\n2766 0.5325460971327977 조기 종료!\n"
],
[
"logits = X_valid.dot(Theta)\nY_proba = softmax(logits)\ny_predict = np.argmax(Y_proba, axis=1)\n\naccuracy_score = np.mean(y_predict == y_valid)\naccuracy_score",
"_____no_output_____"
]
],
[
[
"여전히 완벽하지만 더 빠릅니다.",
"_____no_output_____"
],
[
"이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다:",
"_____no_output_____"
]
],
[
[
"x0, x1 = np.meshgrid(\n np.linspace(0, 8, 500).reshape(-1, 1),\n np.linspace(0, 3.5, 200).reshape(-1, 1),\n )\nX_new = np.c_[x0.ravel(), x1.ravel()]\nX_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new]\n\nlogits = X_new_with_bias.dot(Theta)\nY_proba = softmax(logits)\ny_predict = np.argmax(Y_proba, axis=1)\n\nzz1 = Y_proba[:, 1].reshape(x0.shape)\nzz = y_predict.reshape(x0.shape)\n\nplt.figure(figsize=(10, 4))\nplt.plot(X[y==2, 0], X[y==2, 1], \"g^\", label=\"Iris virginica\")\nplt.plot(X[y==1, 0], X[y==1, 1], \"bs\", label=\"Iris versicolor\")\nplt.plot(X[y==0, 0], X[y==0, 1], \"yo\", label=\"Iris setosa\")\n\nfrom matplotlib.colors import ListedColormap\ncustom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])\n\nplt.contourf(x0, x1, zz, cmap=custom_cmap)\ncontour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)\nplt.clabel(contour, inline=1, fontsize=12)\nplt.xlabel(\"Petal length\", fontsize=14)\nplt.ylabel(\"Petal width\", fontsize=14)\nplt.legend(loc=\"upper left\", fontsize=14)\nplt.axis([0, 7, 0, 3.5])\nplt.show()",
"_____no_output_____"
]
],
[
[
"이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다:",
"_____no_output_____"
]
],
[
[
"logits = X_test.dot(Theta)\nY_proba = softmax(logits)\ny_predict = np.argmax(Y_proba, axis=1)\n\naccuracy_score = np.mean(y_predict == y_test)\naccuracy_score",
"_____no_output_____"
]
],
[
[
"완벽했던 최종 모델의 성능이 조금 떨어졌습니다. 이런 차이는 데이터셋이 작기 때문일 것입니다. 훈련 세트와 검증 세트, 테스트 세트를 어떻게 샘플링했는지에 따라 매우 다른 결과를 얻을 수 있습니다. 몇 번 랜덤 시드를 바꾸고 이 코드를 다시 실행해 보면 결과가 달라지는 것을 확인할 수 있습니다.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7b729c8ca16b82f1716be11dd7aaa8f300223c3 | 8,026 | ipynb | Jupyter Notebook | ui/editing.ipynb | JingfengRong/editnerf | 736875d79cf2417fbdbaed58e50a30d7364de02a | [
"MIT"
] | 176 | 2021-05-14T00:32:34.000Z | 2022-03-23T12:45:41.000Z | ui/editing.ipynb | JingfengRong/editnerf | 736875d79cf2417fbdbaed58e50a30d7364de02a | [
"MIT"
] | 5 | 2021-06-04T08:55:09.000Z | 2022-01-17T12:16:20.000Z | ui/editing.ipynb | JingfengRong/editnerf | 736875d79cf2417fbdbaed58e50a30d7364de02a | [
"MIT"
] | 20 | 2021-05-14T00:58:11.000Z | 2022-03-30T09:25:42.000Z | 68.016949 | 1,576 | 0.688886 | [
[
[
"import show, editingapp\nfrom importlib import reload\nreload(editingapp)\ninterface = editingapp.NeRFEditingApp(mask_no='erase', size=128, num_canvases=9, real_dir='chairs_whitened/shape09173_rank00')\ninterface.brushsize_textbox.value = '3'\ninterface.change_brushsize()\nshow(interface)",
"_____no_output_____"
],
[
"ls chairs_whitened/shape09173_rank00",
"shape09173_rank00_0.png shape09173_rank00_5erase.pt\r\nshape09173_rank00_11.png shape09173_rank00_5neg.pt\r\nshape09173_rank00_111.pt shape09173_rank00_65.png\r\nshape09173_rank00_11erase.pt shape09173_rank00_68.png\r\nshape09173_rank00_11neg.pt shape09173_rank00_681.pt\r\nshape09173_rank00_12.png shape09173_rank00_68erase.pt\r\nshape09173_rank00_19.png shape09173_rank00_68neg.pt\r\nshape09173_rank00_20.png shape09173_rank00_72.png\r\nshape09173_rank00_29.png shape09173_rank00_73.png\r\nshape09173_rank00_30.png shape09173_rank00_731.pt\r\nshape09173_rank00_32.png shape09173_rank00_73erase.pt\r\nshape09173_rank00_36.png shape09173_rank00_73neg.pt\r\nshape09173_rank00_39.png shape09173_rank00_74.png\r\nshape09173_rank00_41.png shape09173_rank00_76.png\r\nshape09173_rank00_43.png shape09173_rank00_78.png\r\nshape09173_rank00_431.pt shape09173_rank00_83.png\r\nshape09173_rank00_43erase.pt shape09173_rank00_84.png\r\nshape09173_rank00_43neg.pt shape09173_rank00_87.png\r\nshape09173_rank00_45.png shape09173_rank00_871.pt\r\nshape09173_rank00_451.pt shape09173_rank00_87erase.pt\r\nshape09173_rank00_45erase.pt shape09173_rank00_87neg.pt\r\nshape09173_rank00_45neg.pt shape09173_rank00_89.png\r\nshape09173_rank00_47.png shape09173_rank00_9.png\r\nshape09173_rank00_48.png shape09173_rank00_90.png\r\nshape09173_rank00_481.pt shape09173_rank00_901.pt\r\nshape09173_rank00_48erase.pt shape09173_rank00_90erase.pt\r\nshape09173_rank00_48neg.pt shape09173_rank00_90neg.pt\r\nshape09173_rank00_5.png shape09173_rank00_91.pt\r\nshape09173_rank00_50.png shape09173_rank00_92.png\r\nshape09173_rank00_51.pt shape09173_rank00_93.png\r\nshape09173_rank00_52.png shape09173_rank00_94.png\r\nshape09173_rank00_55.png shape09173_rank00_95.png\r\nshape09173_rank00_56.png shape09173_rank00_99.png\r\nshape09173_rank00_58.png shape09173_rank00_9erase.pt\r\nshape09173_rank00_59.png shape09173_rank00_9neg.pt\r\n"
],
[
"from renormalize import from_url\nimport torch\nfrom torchvision import utils\npos = from_url(torch.load('chairs_whitened/shape09173_rank00/shape09173_rank00_45erase.pt'))\nutils.save_image(pos.unsqueeze(dim=0), 'pos_mask.png')\npos = from_url(torch.load('chairs_whitened/shape09173_rank00/shape09173_rank00_48erase.pt'))\nutils.save_image(pos.unsqueeze(dim=0), 'neg_mask.png')\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e7b73b50498bcb1195b7c90f49910880312985c2 | 38,109 | ipynb | Jupyter Notebook | BERNARDO_Activity_1_Python_Fundamentals.ipynb | Xeyah0214/Linear-Algebra-2nd-Sem | fbc8b6e2f2e9673fb60f8587fdc7642291abf5c0 | [
"Apache-2.0"
] | null | null | null | BERNARDO_Activity_1_Python_Fundamentals.ipynb | Xeyah0214/Linear-Algebra-2nd-Sem | fbc8b6e2f2e9673fb60f8587fdc7642291abf5c0 | [
"Apache-2.0"
] | null | null | null | BERNARDO_Activity_1_Python_Fundamentals.ipynb | Xeyah0214/Linear-Algebra-2nd-Sem | fbc8b6e2f2e9673fb60f8587fdc7642291abf5c0 | [
"Apache-2.0"
] | null | null | null | 24.104364 | 519 | 0.418064 | [
[
[
"<a href=\"https://colab.research.google.com/github/Xeyah0214/Linear-Algebra-2nd-Sem/blob/main/Activity_1_Python_Fundamentals.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Welcome to Python Fundamentals\nIn this module, we are going to establish or review our skills in Python programming. In this notebook we are going to cover:\n* Variables and Data Types \n* Operations\n* Input and Output Operations\n* Logic Control\n* Iterables\n* Functions",
"_____no_output_____"
],
[
"## Variable and Data Types\nVariables and data types are the values that change in Python Fundamentals, as the name implies. A variable is a memory location where you store a value in a programming language, and it is created as soon as you assigned a value to it. Meanwhile, the classification or categorizing of data elements is known as data types. It denotes the kind of value that specifies which operations can be performed on a given set of data. Data types are technically classes, and variables are the objects of these classes.",
"_____no_output_____"
]
],
[
[
"x = 0\ne,l,a = 2,1,4\na\n",
"_____no_output_____"
],
[
"type(x)",
"_____no_output_____"
],
[
"h = 1.0\ntype(h)",
"_____no_output_____"
],
[
"x = float(x)\ntype (x)",
"_____no_output_____"
],
[
"a,d,u = \"0\",'1','valorant'\ntype(a)",
"_____no_output_____"
],
[
"a_int = int(a)\na_int",
"_____no_output_____"
]
],
[
[
"##Operations\n",
"_____no_output_____"
],
[
"###Arithmetic\nMathematical operations such as addition, subtraction, multiplication, division, floor division, exponentiation and modulo are performed using arithmetic operators.",
"_____no_output_____"
]
],
[
[
"s,k,y,e = 2.0,-0.5,0,-32",
"_____no_output_____"
]
],
[
[
"####Addition\nThis operation denotes (+) which adds values on either side of the operator.",
"_____no_output_____"
]
],
[
[
"### Addition\na = s+k\na",
"_____no_output_____"
]
],
[
[
"####Subtraction\nRepresented by (-), this operation subtracts right hand operand from left hand operand.",
"_____no_output_____"
]
],
[
[
"### Subtraction\nu = k-e\nu",
"_____no_output_____"
]
],
[
[
"####Multiplication\nUsing the symbol (*), this operation multiplies values on both sides of the operator.",
"_____no_output_____"
]
],
[
[
"### Multiplication\nm = s*e\nm",
"_____no_output_____"
]
],
[
[
"####Division\nThe division operator in Python is (/). When the first operand is divided by the second, it is utilized to find the quotient.",
"_____no_output_____"
]
],
[
[
"### Divison\nd = y/s \nd",
"_____no_output_____"
]
],
[
[
"####Floor Division\nWhen the first operand is divided by the second, floor division is applied to calculate the floor quotient. This is denoted by the operator (//).",
"_____no_output_____"
]
],
[
[
"### Floor Division\nfd = s//k\nfd",
"_____no_output_____"
]
],
[
[
"####Exponentiation\nTo raise the first operand to the second power, exponentiation is performed through the symbol (**).",
"_____no_output_____"
]
],
[
[
"### Exponentiation\nex = s**k\nex",
"_____no_output_____"
]
],
[
[
"####Modulo\nModulo or Modulus uses the operator (%) that is responsible for dividing left hand operand by right hand operand and returns the remainder.",
"_____no_output_____"
]
],
[
[
"### Modulo\nmod = e%s \nmod",
"_____no_output_____"
]
],
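[
[
"Floor division and modulo are linked: for any two numbers, (a // b) * b + (a % b) gives back a. A small illustrative check (added example, made-up values):\n\n```python\na, b = 17, 5\nquotient = a // b    # 3\nremainder = a % b    # 2\nprint(quotient * b + remainder == a)   # True\nprint(divmod(a, b))                    # (3, 2) - both results at once\n```",
"_____no_output_____"
]
],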
[
[
"### Assignment Operations\nAssignment operators are used in Python programming to assign values to variables.",
"_____no_output_____"
],
[
"####= \nThe equal operation's role is to \"Assign\" the value of the right side of the expression to the operand on the left side.",
"_____no_output_____"
]
],
[
[
"w,x,y,z = 0,100,2,2",
"_____no_output_____"
]
],
[
[
"####+= \n\"Add and Assign\" is designed to add both sides of the operand and the sum will be placed on the left operand. ",
"_____no_output_____"
]
],
[
[
"w += s \nw",
"_____no_output_____"
]
],
[
[
"####-=\n\"Subtract And\" conducts subtraction to right operand from left operand. The difference would then be assigned to the left operand.",
"_____no_output_____"
]
],
[
[
"x -= e\nx",
"_____no_output_____"
]
],
[
[
"####*=\n\"Multiply And\" operates multiplication to both operands and then the product would be assigned to the left operand.",
"_____no_output_____"
]
],
[
[
"y *= 2\ny",
"_____no_output_____"
]
],
[
[
"####**=\n\"Exponent And\" calculates the exponent value of the operands which would then be assigned also to the left operand.",
"_____no_output_____"
]
],
[
[
"z **= 2\nz",
"_____no_output_____"
]
],
[
[
"###Comparators\nValues are compared using comparison operators. Depending on the condition, it results to either True or False.",
"_____no_output_____"
]
],
[
[
"trial_1, trial_2, trial_3 = 1, 2.0, \"1\"\ntrue_val = 1.0",
"_____no_output_____"
],
[
"## Equality\ntrial_1 == true_val",
"_____no_output_____"
],
[
"## Non-equality\ntrial_2 != true_val",
"_____no_output_____"
],
[
"## Inequality\nt1 = trial_1 > trial_2\nt2 = trial_1 < trial_2/2\nt3 = trial_1 >= trial_1/2\nt4 = trial_1 <= trial_2\nt4\n",
"_____no_output_____"
]
],
[
[
"### Logical\nLogical operators are comprised of:\n\n\"AND\" - True if both operands are true\n\n\"OR\" - True if either of the operands is true\n\n\"NOT\" - True if the operand is false",
"_____no_output_____"
]
],
[
[
"trial_1 == true_val",
"_____no_output_____"
],
[
"trial_1 is true_val",
"_____no_output_____"
],
[
"trial_1 is not true_val",
"_____no_output_____"
],
[
"p,q = True, True\nconj = p and q\nconj",
"_____no_output_____"
],
[
"p,q = False, False\ndisj = p or q\ndisj",
"_____no_output_____"
],
[
"p,q = True, False\ne = not (p and q)\ne",
"_____no_output_____"
],
[
"p,q = True, False\nxor = (not p and q) or (p and not q)\nxor",
"_____no_output_____"
]
],
[
[
"###I/O\nInput and Output operators allow you to insert a value into your program that can be printed.",
"_____no_output_____"
]
],
[
[
"print(\"Welcome to my world\")",
"Welcome to my world\n"
],
[
"cnt = 1",
"_____no_output_____"
],
[
"string = \"Welcome to my world\"\nprint(string,\", your current run count is:\", cnt)\ncnt += 1",
"Welcome to my world , your current run count is: 1\n"
],
[
"print(f\"{string}, your current count is: {cnt}\")",
"Welcome to my world, your current count is: 2\n"
],
[
"sem_grade = 93.0124\nname = \"Viper\"\nprint(\"Wazzup, {}!, your semestral grade is: {}\".format(name, sem_grade))",
"Wazzup, Viper!, your semestral grade is: 93.0124\n"
],
[
"w_pg, w_mg, w_fg = 0.3, 0.3, 0.4\nprint(\"Ang weight ng iyong semestral grades are: \\\n\\n\\t{:.2%} for Prelims\\\n\\n\\t{:.2%} for Midterms, and\\\n\\n\\t{:.2%} for Finals.\".format(w_pg, w_mg, w_fg))",
"Ang weight ng iyong semestral grades are: \n\t30.00% for Prelims\n\t30.00% for Midterms, and\n\t40.00% for Finals.\n"
],
[
"x = input(\"enter a number, bhie: \")\nx",
"enter a number, bhie: 14\n"
],
[
"name = input(\"Hoy! Ano pangalan mo boi?: \")\npg = input(\"Enter prelim grade: \")\nmg = input(\"Enter midterm grade: \")\nfg = input(\"Enter finals grade: \")\nsem_grade = None\nprint(\"Hello {}, ang iyong semestral grade ay: {}\".format(name, sem_grade))",
"Hoy! Ano pangalan mo boi?: Rue\nEnter prelim grade: 90\nEnter midterm grade: 98\nEnter finals grade: 96\nHello Rue, ang iyong semestral grade ay: None\n"
]
],
[
[
"# Looping Statements\n",
"_____no_output_____"
],
[
"## While\nThe while loop is used to repeatedly execute a set of statements until a condition is met.",
"_____no_output_____"
]
],
[
[
"## while loops\ni, j = 0, 14\nwhile(i<=j):\n print(f\"{i}\\t|\\t{j}\")\n i+=1",
"0\t|\t14\n1\t|\t14\n2\t|\t14\n3\t|\t14\n4\t|\t14\n5\t|\t14\n6\t|\t14\n7\t|\t14\n8\t|\t14\n9\t|\t14\n10\t|\t14\n11\t|\t14\n12\t|\t14\n13\t|\t14\n14\t|\t14\n"
]
],
[
[
"##For\nFor Loop is used to perform sequential traversal. It can be used to iterate throughout a set of iterators and a range of values.",
"_____no_output_____"
]
],
[
[
"# for(int i=0; i<10; i++){\n# printf(i)\n# }\n\ni=0\nfor i in range(15):\n print(i)",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n"
],
[
"playlist = [\"Lagi\", \"Tenerife Sea\", \"Is There Someone Else\"]\nprint(\"Favorite Songs:\\n\")\nfor song in playlist:\n print(song)",
"Favorite Songs:\n\nLagi\nTenerife Sea\nIs There Someone Else\n"
]
],
[
[
"#Flow Control\nFlow Control is a conditional statement that handles the presentation and execution of codes in a given program. This allows the software to continue running until specific conditions are met.",
"_____no_output_____"
],
[
"##Condition Statements\nIf and Else If statements are used in flow control to perform conditional operation.",
"_____no_output_____"
]
],
[
[
"numeral1, numeral2 = 14, 12\nif(numeral1 == numeral2):\n print(\"Yie\")\nelif(numeral1>numeral2):\n print(\"Uwu\")\nelse:\n print(\"Whoa\")\n# print(\"Hep hep\")",
"Uwu\n"
]
],
[
[
"#Functions\nFuctions are blocks of code which only runs when it is called. In python, \"def\" is used to designate a function.",
"_____no_output_____"
]
],
[
[
"# void DeleteUser(int userid){\n# delete(userid);\n# }\ndef delete_user (userid):\n print(\"Successfully deleted user: {}\".format(userid))\n\ndef delete_all_users ():\n print(\"Valorant Pro-Player\")\n ",
"_____no_output_____"
],
[
"userid = \"Xeyah\"\ndelete_user(\"Xeyah\")\ndelete_all_users()",
"Successfully deleted user: Xeyah\nValorant Pro-Player\n"
],
[
"def add(addend1, addend2):\n return addend1 + addend2\n\ndef power_of_base2(exponent):\n return 2**exponent",
"_____no_output_____"
],
[
"addend1, addend2 = 36, 22\nadd(addend1, addend2)\n\nexponent = 4\npower_of_base2(exponent)",
"_____no_output_____"
]
],
[
[
"#Grade Calculator",
"_____no_output_____"
]
],
[
[
"print()\nname = input('\\tOla! What is your name? ');\ncourse = input('\\tWhat is your course? ');\nprelim = float(input('\\tGive Prelim Grade : '));\nmidterm = float(input('\\tGive Midterm Grade : '));\nfinal = float(input('\\tGive Final Grade : '));\ngrade= (prelim) + (midterm) + (final);\navg= grade/3;\n\nprint();\nprint(\"\\t===== DISPLAYING RESULTS =====\");\nprint();\nprint(\"\\tHi,\", name, \"from the course\", course, \"!\");\nprint();\nif avg > 70:\n print(\"\\tYour grade is \\U0001F600\");\nelif avg == 70:\n print(\"\\tYour grade is \\U0001F606\");\nelif avg < 70:\n print(\"\\tYour grade is \\U0001F62D\");\nprint();",
"\n\tOla! What is your name? Astra\n\tWhat is your course? Bachelor of Science in Chemical Engineering\n\tGive Prelim Grade : 96\n\tGive Midterm Grade : 90\n\tGive Final Grade : 94\n\n\t===== DISPLAYING RESULTS =====\n\n\tHi, Astra from the course Bachelor of Science in Chemical Engineering !\n\n\tYour grade is 😀\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b73f8b80ee0c930d13864356728004b2b170d9 | 7,856 | ipynb | Jupyter Notebook | 4_spark_ml/1_spark_ml_intro.ipynb | Lorenzo-Giardi/spark-repo | 66910d70814f846cddd31b9e0f7e3791eae7d5ee | [
"Apache-2.0"
] | null | null | null | 4_spark_ml/1_spark_ml_intro.ipynb | Lorenzo-Giardi/spark-repo | 66910d70814f846cddd31b9e0f7e3791eae7d5ee | [
"Apache-2.0"
] | null | null | null | 4_spark_ml/1_spark_ml_intro.ipynb | Lorenzo-Giardi/spark-repo | 66910d70814f846cddd31b9e0f7e3791eae7d5ee | [
"Apache-2.0"
] | null | null | null | 24.704403 | 209 | 0.448192 | [
[
[
"# Spark ML",
"_____no_output_____"
]
],
[
[
"from pyspark.sql import SparkSession\nfrom pyspark.ml.linalg import Vectors\nfrom pyspark.ml.feature import VectorAssembler\nfrom pyspark.ml.clustering import KMeans\nfrom pyspark.ml.regression import LinearRegression",
"_____no_output_____"
],
[
"spark = SparkSession.builder.getOrCreate()",
"_____no_output_____"
],
[
"data_path = '/home/lorenzo/Desktop/utilization.csv'\n\ndf = spark.read.option('header', 'False') \\\n .option('inferSchema', 'True') \\\n .csv(data_path)",
"_____no_output_____"
],
[
"df = df.withColumnRenamed(\"_c0\", \"event_datetime\") \\\n .withColumnRenamed (\"_c1\", \"server_id\") \\\n .withColumnRenamed(\"_c2\", \"cpu_utilization\") \\\n .withColumnRenamed(\"_c3\", \"free_memory\") \\\n .withColumnRenamed(\"_c4\", \"session_count\")\n\ndf.createOrReplaceTempView('utilization')",
"_____no_output_____"
]
],
[
[
"### Vectorize data",
"_____no_output_____"
]
],
[
[
"va = VectorAssembler(inputCols=['cpu_utilization', 'free_memory', 'session_count'], \n outputCol = 'features')",
"_____no_output_____"
],
[
"vcluster_df = va.transform(df)",
"_____no_output_____"
],
[
"vcluster_df.show(5)",
"+-------------------+---------+---------------+-----------+-------------+----------------+\n| event_datetime|server_id|cpu_utilization|free_memory|session_count| features|\n+-------------------+---------+---------------+-----------+-------------+----------------+\n|03/05/2019 08:06:14| 100| 0.57| 0.51| 47|[0.57,0.51,47.0]|\n|03/05/2019 08:11:14| 100| 0.47| 0.62| 43|[0.47,0.62,43.0]|\n|03/05/2019 08:16:14| 100| 0.56| 0.57| 62|[0.56,0.57,62.0]|\n|03/05/2019 08:21:14| 100| 0.57| 0.56| 50|[0.57,0.56,50.0]|\n|03/05/2019 08:26:14| 100| 0.35| 0.46| 43|[0.35,0.46,43.0]|\n+-------------------+---------+---------------+-----------+-------------+----------------+\nonly showing top 5 rows\n\n"
]
],
[
[
"### K-Means clustering\nSpark ML library implementation of Kmeans expects to find a **features** column in the dataset that is provided to the fit function. This column should be the result of a vector assembler transformation.",
"_____no_output_____"
]
],
[
[
"km = KMeans().setK(3).setSeed(1)",
"_____no_output_____"
],
[
"km_output = km.fit(vcluster_df)",
"_____no_output_____"
],
[
"km_output.clusterCenters()",
"_____no_output_____"
]
],
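[
[
"Besides the cluster centers, the fitted model can assign a cluster to every row: `transform()` appends a `prediction` column holding the cluster index. A minimal sketch (editor's addition), reusing the `vcluster_df` assembled above:\n\n```python\nassignments = km_output.transform(vcluster_df)\nassignments.select('features', 'prediction').show(5)\n```",
"_____no_output_____"
]
],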
[
[
"### Linear Regression",
"_____no_output_____"
]
],
[
[
"va = VectorAssembler(inputCols=['cpu_utilization', 'free_memory'], \n outputCol = 'features')\nreg_df = va.transform(df)\nreg_df.show(5)",
"+-------------------+---------+---------------+-----------+-------------+-----------+\n| event_datetime|server_id|cpu_utilization|free_memory|session_count| features|\n+-------------------+---------+---------------+-----------+-------------+-----------+\n|03/05/2019 08:06:14| 100| 0.57| 0.51| 47|[0.57,0.51]|\n|03/05/2019 08:11:14| 100| 0.47| 0.62| 43|[0.47,0.62]|\n|03/05/2019 08:16:14| 100| 0.56| 0.57| 62|[0.56,0.57]|\n|03/05/2019 08:21:14| 100| 0.57| 0.56| 50|[0.57,0.56]|\n|03/05/2019 08:26:14| 100| 0.35| 0.46| 43|[0.35,0.46]|\n+-------------------+---------+---------------+-----------+-------------+-----------+\nonly showing top 5 rows\n\n"
],
[
"lr = LinearRegression(featuresCol='features', labelCol='session_count')\nlr_output = lr.fit(reg_df)",
"_____no_output_____"
],
[
"lr_output.coefficients",
"_____no_output_____"
],
[
"lr_output.intercept",
"_____no_output_____"
],
[
"lr_output.summary.r2",
"_____no_output_____"
],
[
"lr_output.summary.rootMeanSquaredError",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b741c3c86cb5a6f65c38a031efa130b82703cf | 7,996 | ipynb | Jupyter Notebook | multitaper/examples/.ipynb_checkpoints/mtspec_src-checkpoint.ipynb | paudetseis/multitaper | 121f4d05d7e83d7ef86170a237413fc25257ebc7 | [
"MIT"
] | null | null | null | multitaper/examples/.ipynb_checkpoints/mtspec_src-checkpoint.ipynb | paudetseis/multitaper | 121f4d05d7e83d7ef86170a237413fc25257ebc7 | [
"MIT"
] | null | null | null | multitaper/examples/.ipynb_checkpoints/mtspec_src-checkpoint.ipynb | paudetseis/multitaper | 121f4d05d7e83d7ef86170a237413fc25257ebc7 | [
"MIT"
] | null | null | null | 29.397059 | 77 | 0.517884 | [
[
[
"## Load libraries",
"_____no_output_____"
]
],
[
[
"import multitaper.mtspec as mtspec\nimport multitaper.utils as utils\nimport multitaper.mtcross as mtcross\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.signal as signal",
"_____no_output_____"
]
],
[
[
"## Load Mesetas network data",
"_____no_output_____"
]
],
[
[
"data = utils.get_data('mesetas_src.dat')\ndt = 1/100.\nnpts,ntr = np.shape(data)\n\nptime = np.ones(ntr)\nptime[0:ntr+1:4] = 14.\nptime[1:ntr+1:4] = 24.\nptime[2:ntr+1:4] = 5.5\nptime[3:ntr+1:4] = 20.5\nptime[11*4-1:11*4+4] = ptime[11*4-1:11*4+4]-2. \nptime[20] = 13.4\n\nprint('npts, # of traces, dt ',npts, ntr, dt)",
"_____no_output_____"
],
[
"# Select traces to work on\nista = 0\nitr1 = 0+ista # Mainshock\nitr2 = 16+ista\nitr3 = 40+ista\nitr4 = 68+ista # 4 68\n\n# Filter parameters for STF \nfmin = 0.2\nfmax = 3.\nfnyq = 0.5/dt\nwn = [fmin/fnyq,fmax/fnyq]\nb, a = signal.butter(4, wn,'bandpass')\n\n# Extract traces from data matrix\nz1 = data[:,itr1]\nz2 = data[:,itr2]\nz3 = data[:,itr3]\nz4 = data[:,itr4]\n\n# MTSPEC parameters\nnw = 4.0\nkspec = 6\n\n# P-wave window length\nwlen = 10.0 # window length, seconds\nnlen = int(round(wlen/dt))\n\n# Arrival times (-2 sec pre-P)\nt_p1 = 12.2\nt_p2 = 11.9\nt_p3 = 12.1\nt_p4 = 12.4\n\n# Select to samples for each trace\nib1 = int(round((t_p1)/dt))\nib2 = int(round((t_p2)/dt))\nib3 = int(round((t_p3)/dt))\nib4 = int(round((t_p4)/dt)) # 12.6 12.4\nib5 = ib3 - nlen \nib6 = ib4 - nlen \nie1 = ib1 + nlen\nie2 = ib2 + nlen\nie3 = ib3 + nlen\nie4 = ib4 + nlen\nie5 = ib5 + nlen\nie6 = ib6 + nlen\n\n# Select window around P-wave\ny1 = z1[ib1:ie1]\ny2 = z2[ib2:ie2]\ny3 = z3[ib3:ie3]\ny4 = z4[ib4:ie4]\ny5 = z3[ib5:ie5]\ny6 = z4[ib6:ie6]\n\n# Get MTSPEC class\nPy1 = mtspec.MTSpec(y1,nw,kspec,dt)\nPy2 = mtspec.MTSpec(y2,nw,kspec,dt)\nPy3 = mtspec.MTSpec(y3,nw,kspec,dt)\nPy4 = mtspec.MTSpec(y4,nw,kspec,dt)\nPy5 = mtspec.MTSpec(y5,nw,kspec,dt)\nPy6 = mtspec.MTSpec(y6,nw,kspec,dt)\n\nPspec = [Py1, Py2, Py3, Py4, Py5, Py6]\n\n# Get positive frequencies\nfreq ,spec1 = Py1.rspec()\nfreq ,spec2 = Py2.rspec()\nfreq ,spec3 = Py3.rspec()\nfreq ,spec4 = Py4.rspec()\nfreq ,spec5 = Py5.rspec()\nfreq ,spec6 = Py6.rspec()\n\n# Get spectral ratio\nsratio1 = np.sqrt(spec1/spec3)\nsratio2 = np.sqrt(spec2/spec4)\n\n\nP13 = mtcross.MTCross(Py1,Py3,wl=0.001)\nxcorr, dcohe, dconv = P13.mt_corr()\ndconv13 = signal.filtfilt(b, a, dconv[:,0])\nP24 = mtcross.MTCross(Py2,Py4,wl=0.001)\nxcorr, dcohe, dconv2 = P24.mt_corr()\ndconv24 = signal.filtfilt(b, a, dconv2[:,0])\nnstf = (len(dconv24)-1)/2\ntstf = np.arange(-nstf,nstf+1)*dt\n\n",
"_____no_output_____"
]
],
[
[
"## Display Figures",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(1,figsize=(6,8))\nt = np.arange(len(z1))*dt\nax = fig.add_subplot(2,2,1)\nax.plot(t,z1/np.max(z1)+4.7,'k')\nax.plot(t,z3/(2*np.max(z3))+3.5,color=\"0.75\")\nax.plot(t,z2/np.max(z1)+1.2,color='0.25')\nax.plot(t,z4/(2*np.max(z4)),color=\"0.75\")\nax.set_xlabel('Time (s)')\nax.set_ylabel('Amplitude (a.u.)')\nax.set_yticks([])\nax.text(65,5.2,'M6.0 2019/12/24',color='0.5')\nax.text(65,3.8,'M4.0 EGF',color='0.5')\nax.text(65,1.7,'M5.8 2019/12/24')\nax.text(65,0.3,'M4.1 EGF',color='0.5')\nax.plot([t_p1,t_p1+wlen],[5.2,5.2],color='0.5',linewidth=2.0)\nax.plot([t_p3,t_p3+wlen],[3.8,3.8],color='0.5',linewidth=2.0)\nax.plot([t_p2,t_p2+wlen],[1.7,1.7],color='0.5',linewidth=2.0)\nax.plot([t_p4,t_p4+wlen],[0.3,0.3],color='0.5',linewidth=2.0)\nax.plot([t_p3,t_p3-wlen],[3.3,3.3],'--',color='0.7',linewidth=2.0)\nax.plot([t_p4,t_p4-wlen],[-0.2,-0.2],'--',color='0.7',linewidth=2.0)\nbox = ax.get_position()\nbox.x1 = 0.89999\nax.set_position(box)\n\nax = fig.add_subplot(2,2,3)\nax.loglog(freq,np.sqrt(spec1*wlen),'k')\nax.loglog(freq,np.sqrt(spec3*wlen),color='0.75')\nax.loglog(freq,np.sqrt(spec5*wlen),'--',color='0.75')\nax.grid()\nax.set_ylim(1e-1,1e7)\nax.set_xlabel('Frequency (Hz)')\nax.set_ylabel('Amplitude Spectrum')\nax2 = fig.add_subplot(2,2,4)\nax2.loglog(freq,np.sqrt(spec2*wlen),color='0.25')\nax2.loglog(freq,np.sqrt(spec4*wlen),color='0.75')\nax2.loglog(freq,np.sqrt(spec6*wlen),'--',color='0.75')\nax2.grid()\nax2.set_ylim(1e-1,1e7)\nax2.set_xlabel('Frequency (Hz)')\nax2.set_ylabel('Amplitude Spectrum')\nax2.yaxis.tick_right()\nax2.yaxis.set_label_position('right')\nax.text(0.11,3.1e6,'M6.0 Mainshock')\nax.text(0.11,4e3,'M4.0 EGF',color='0.75')\nax.text(0.11,4e1,'Noise',color='0.75')\nax2.text(0.11,2.1e6,'M5.8 Mainshock')\nax2.text(0.11,3e4,'M4.1 EGF',color='0.75')\nax2.text(0.11,4e1,'Noise',color='0.75')\nplt.savefig('figures/src_waveforms.jpg')\n\nfig = plt.figure(figsize=(4,5))\nax = fig.add_subplot(2,1,1)\nax.plot(tstf,dconv13/np.max(np.abs(dconv13))+1,'k')\nax.plot(tstf,dconv24/np.max(np.abs(dconv24)),color='0.25')\nax.set_ylabel('STF Amp (normalized)')\nax.text(5,1.2,'M6.0 STF')\nax.text(5,0.2,'M5.8 STF',color='0.25')\nax.set_xlabel('Time (s)')\nax.xaxis.tick_top()\nax.xaxis.set_label_position('top')\n\nax2 = fig.add_subplot(2,1,2)\nax2.loglog(freq,sratio1,'k')\nax2.loglog(freq,sratio2,color='0.25')\nax2.set_ylim(1e0,1e4)\nax2.set_xlabel('Frequncy (Hz)')\nax2.set_ylabel('Spectral Ratio')\nax2.text(1.1,1.2e3,'M6.0')\nax2.text(0.12,2.1e2,'M5.8',color='0.25')\nax2.grid()\nplt.savefig('figures/src_stf.jpg')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b742be334847603b87bdb4bb99c7c85ca0c03b | 32,258 | ipynb | Jupyter Notebook | example/example.ipynb | shicks-seismo/PhaseLink_work | d1716bdc4f02582dbae45e404cfbdfbca09f3871 | [
"MIT"
] | 2 | 2021-09-30T12:18:53.000Z | 2021-10-01T02:32:40.000Z | example/example.ipynb | shicks-seismo/PhaseLink_work | d1716bdc4f02582dbae45e404cfbdfbca09f3871 | [
"MIT"
] | null | null | null | example/example.ipynb | shicks-seismo/PhaseLink_work | d1716bdc4f02582dbae45e404cfbdfbca09f3871 | [
"MIT"
] | 1 | 2021-09-30T12:28:45.000Z | 2021-09-30T12:28:45.000Z | 55.236301 | 2,286 | 0.643685 | [
[
[
"# Example of running PhaseLink",
"_____no_output_____"
],
[
"<a href=\"https://colab.research.google.com/github/TomSHudson/PhaseLink/blob/master/example/example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n\nNote that you need to change the run instance to GPU if using colab.",
"_____no_output_____"
]
],
[
[
"# Specify if running in google colab:\nuse_google_colab = False\n\n# Install/add neccessary paths if using colab:\nif use_google_colab:\n !pip install obspy\n # Install nvidia-apex:\n !git clone https://github.com/NVIDIA/apex\n !pip install -v --disable-pip-version-check --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./apex\n ",
"_____no_output_____"
],
[
"# Import neccessary modules:\nimport sys, os, shutil\nimport json\nimport multiprocessing as mp\nimport pickle\nimport numpy as np \nimport torch\nimport gc \nimport matplotlib.pyplot as plt\nimport glob\nfrom obspy.geodetics.base import gps2dist_azimuth\n# if not use_google_colab:\n%load_ext autoreload\n%autoreload 2\n# And import PhaseLink:\nif use_google_colab:\n shutil.rmtree('./PhaseLink', ignore_errors=True)\n !git clone https://github.com/TomSHudson/PhaseLink.git\n sys.path.append('./PhaseLink/')\nelse:\n sys.path.append('..')\nimport phaselink_dataset\nimport phaselink_train\nimport phaselink_eval",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"# Copy over example files into pwd if using colab:\nif use_google_colab:\n !cp PhaseLink/example/params.json .\n !cp PhaseLink/example/station_list.txt .\n !cp PhaseLink/example/tt.pg .\n !cp PhaseLink/example/tt.sg .",
"_____no_output_____"
]
],
[
[
"## 0. Load in param file and key info:",
"_____no_output_____"
]
],
[
[
"# Import param json file:\nparams_fname = \"params.json\"\nwith open(params_fname, \"r\") as f:\n params = json.load(f)\n \n# Get GPU info if using colab:\nif use_google_colab:\n print(\"GPU:\", torch.cuda.get_device_name(0))\n params['device'] = \"cuda:0\" # Use first GPU",
"_____no_output_____"
]
],
[
[
"## 1. Create a synthetic training dataset:",
"_____no_output_____"
]
],
[
[
"# Set key parameters from param file:\nmax_picks = params['n_max_picks']\nt_max = params['t_win']\nn_threads = params['n_threads']\n\nprint(\"Starting up...\")\n# Setup grid:\nphase_idx = {'P': 0, 'S': 1}\nlat0, lon0 = phaselink_dataset.get_network_centroid(params)\nstlo, stla, phasemap, sncl_idx, stations, sncl_map = \\\n phaselink_dataset.build_station_map(params, lat0, lon0, phase_idx)\nx_min = np.min(stlo)\nx_max = np.max(stlo)\ny_min = np.min(stla)\ny_max = np.max(stla)\nfor key in sncl_map:\n X0, Y0 = stations[key]\n X0 = (X0 - x_min) / (x_max - x_min)\n Y0 = (Y0 - y_min) / (y_max - y_min)\n stations[key] = (X0, Y0)\n\n# Save station maps for detect mode\npickle.dump(stations, open(params['station_map_file'], 'wb'))\npickle.dump(sncl_map, open(params['sncl_map_file'], 'wb'))\n\n# # Pwaves\n# pTT = tt_interp(params['tt_table']['P'], params['datum'])\n# print('Read pTT')\n# print('(dep,dist) = (0,0), (10,0), (0,10), (10,0):')\n# print(' {:.3f} {:.3f} {:.3f} {:.3f}'.format(\n# pTT.interp(0,0), pTT.interp(10,0),pTT.interp(0,10),\n# pTT.interp(10,10)))\n\n# #Swaves\n# sTT = tt_interp(params['tt_table']['S'], params['datum'])\n# print('Read sTT')\n# print('(dep,dist) = (0,0), (10,0), (0,10), (10,0):')\n# print(' {:.3f} {:.3f} {:.3f} {:.3f}'.format(\n# sTT.interp(0,0), sTT.interp(10,0),sTT.interp(0,10),\n# sTT.interp(10,10)))\n\n# Get travel-time tables for P and S waves:\npTT = phaselink_dataset.tt_interp(params['trav_time_p'], params['datum'])\nsTT = phaselink_dataset.tt_interp(params['trav_time_s'], params['datum'])\n\n# Generate synthetic training dataset for given param file:\nin_q = mp.Queue() ###1000000)\nout_q = mp.Queue() ###1000000)\nproc = mp.Process(target=phaselink_dataset.output_thread, args=(out_q, params))\nproc.start()\nprocs = []\nfor i in range(n_threads):\n print(\"Starting thread %d\" % i)\n p = mp.Process(target=phaselink_dataset.generate_phases, \\\n args=(in_q, out_q, x_min, x_max, y_min, y_max, \\\n sncl_idx, stla, stlo, phasemap, pTT, sTT, params))\n p.start()\n procs.append(p)\n\nfor i in range(params['n_train_samp']):\n in_q.put(i)\n\nfor i in range(n_threads):\n in_q.put(None)\n#for p in procs:\n# p.join()\n#proc.join()\n\nprint(\"Creating the following files for the PhaseLink synthetic training dataset:\")\nprint(params['station_map_file'])\nprint(params['sncl_map_file'])\nprint(params['training_dset_X'])\nprint(params['training_dset_Y'])",
"Starting up...\nStarting thread 0\nStarting thread 1\nStarting thread 2\nStarting thread 3\nStarting thread 4\nStarting thread 5\nStarting thread 6\nStarting thread 7\nCreating the following files for the PhaseLink synthetic training dataset:\nstation_map.pkl\nsncl_map.pkl\nshimane_train_X.npy\nshimane_train_Y.npy\nP-phases (zeros): 11973180 ( 98.0604422604 %)\nS-phases (ones): 236820 ( 1.93955773956 %)\nSaved the synthetic training dataset.\n"
]
],
[
[
"## 2. Train the model using the syntehetic dataset:",
"_____no_output_____"
]
],
[
[
"# Get device (cpu vs gpu) specified:\ndevice = torch.device(params[\"device\"])\nif params[\"device\"][0:4] == \"cuda\":\n torch.cuda.empty_cache()\n enable_amp = True\nelse:\n enable_amp = False\nif enable_amp:\n import apex.amp as amp\n\n# Get training info from param file:\nn_epochs = params[\"n_epochs\"] #100\n\n# Load in training dataset:\nX = np.load(params[\"training_dset_X\"])\nY = np.load(params[\"training_dset_Y\"])\nprint(\"Training dataset info:\")\nprint(\"Shape of X:\", X.shape, \"Shape of Y\", Y.shape)\ndataset = phaselink_train.MyDataset(X, Y, device)\n\n# Get dataset info:\nn_samples = len(dataset)\nindices = list(range(n_samples))\n\n# Set size of training and validation subset:\nn_test = int(0.1*X.shape[0])\nvalidation_idx = np.random.choice(indices, size=n_test, replace=False)\ntrain_idx = list(set(indices) - set(validation_idx))\n\n# Specify samplers:\ntrain_sampler = phaselink_train.SubsetRandomSampler(train_idx)\nvalidation_sampler = phaselink_train.SubsetRandomSampler(validation_idx)\n\n# Load training data:\ntrain_loader = torch.utils.data.DataLoader(\n dataset,\n batch_size=256,\n shuffle=False,\n sampler=train_sampler\n)\nval_loader = torch.utils.data.DataLoader(\n dataset,\n batch_size=1024,\n shuffle=False,\n sampler=validation_sampler\n)\n\nstackedgru = phaselink_train.StackedGRU()\nstackedgru = stackedgru.to(device)\n#stackedgru = torch.nn.DataParallel(stackedgru,\n# device_ids=['cuda:2', 'cuda:3', 'cuda:4', 'cuda:5'])\n\nif enable_amp:\n #amp.register_float_function(torch, 'sigmoid')\n from apex.optimizers import FusedAdam\n optimizer = FusedAdam(stackedgru.parameters())\n stackedgru, optimizer = amp.initialize(\n stackedgru, optimizer, opt_level='O2')\nelse:\n optimizer = torch.optim.Adam(stackedgru.parameters())\n\nmodel = phaselink_train.Model(stackedgru, optimizer, \\\n model_path='./phaselink_model')\nprint(\"Begin training process.\")\nmodel.train(train_loader, val_loader, n_epochs, enable_amp=enable_amp)\n",
"Training dataset info:\nShape of X: (8140, 1500, 5) Shape of Y (8140, 1500)\nBegin training process.\nEpoch 1, 10% \t train_loss: 8.1183e-01 train_acc: 98.24% took: 38.19s\nEpoch 1, 20% \t train_loss: 4.4219e-01 train_acc: 98.07% took: 35.93s\nEpoch 1, 31% \t train_loss: 1.7653e-01 train_acc: 98.08% took: 35.90s\nEpoch 1, 41% \t train_loss: 1.5708e-01 train_acc: 98.06% took: 37.58s\nEpoch 1, 51% \t train_loss: 1.7691e-01 train_acc: 98.07% took: 36.36s\nEpoch 1, 62% \t train_loss: 1.9457e-01 train_acc: 98.06% took: 35.61s\nEpoch 1, 72% \t train_loss: 1.6042e-01 train_acc: 98.08% took: 35.39s\nEpoch 1, 82% \t train_loss: 1.6453e-01 train_acc: 98.07% took: 35.34s\nEpoch 1, 93% \t train_loss: 1.4670e-01 train_acc: 98.06% took: 36.86s\nPrecision (Class 0): 0.980%\nRecall (Class 0): 1.000%\n"
],
[
"# For emptying memory on GPU:\n# torch.cuda.empty_cache()\n# del(model)\n# gc.collect()\n# torch.cuda.empty_cache()",
"_____no_output_____"
],
[
"# Download trained model, if using colab:\nif use_google_colab:\n from google.colab import files\n !zip -r ./phaselink_model/phaselink_model.zip ./phaselink_model\n files.download('phaselink_model/phaselink_model.zip') ",
"_____no_output_____"
],
[
"# Plot model training and validation loss to select best model:\n# (Note: This must currently be done on the same machine/machine architecture as \n# the training was undertaken on).\n\n# Write the models loss function values to file:\nmodels_dir = \"phaselink_model\"\nmodels_fnames = list(glob.glob(os.path.join(models_dir, \"model_???_*.pt\")))\nmodels_fnames.sort()\nval_losses = []\nf_out = open(os.path.join(models_dir, 'val_losses.txt'), 'w')\nfor model_fname in models_fnames:\n model_curr = torch.load(model_fname)\n val_losses.append(model_curr['loss'])\n f_out.write(' '.join((model_fname, str(model_curr['loss']), '\\n')))\n del(model_curr)\n gc.collect()\nf_out.close()\nval_losses = np.array(val_losses)\nprint(\"Written losses to file: \", os.path.join(models_dir, 'val_losses.txt'))\n\n# And select approximate best model (approx corner of loss curve):\napprox_corner_idx = np.argwhere(val_losses < np.average(val_losses))[0][0]\nprint(\"Model to use:\", models_fnames[approx_corner_idx])\n\n# And plot:\nplt.figure()\nplt.plot(np.arange(len(val_losses)), val_losses)\nplt.hlines(val_losses[approx_corner_idx], 0, len(val_losses), color='r', ls=\"--\")\nplt.ylabel(\"Val loss\")\nplt.xlabel(\"Epoch\")\nplt.show()\n\n# And convert model to use to universally usable format (GPU or CPU):\nmodel = phaselink_train.StackedGRU().cuda(device)\ncheckpoint = torch.load(models_fnames[approx_corner_idx], map_location=device)\nmodel.load_state_dict(checkpoint['model_state_dict'])\ntorch.save(model, os.path.join(models_dir, 'model_to_use.gpu.pt'), _use_new_zipfile_serialization=False)\nnew_device = \"cpu\"\nmodel = model.to(new_device)\ntorch.save(model, os.path.join(models_dir, 'model_to_use.cpu.pt'), _use_new_zipfile_serialization=False)\ndel model\ngc.collect()\nif use_google_colab:\n files.download(os.path.join('phaselink_model','model_to_use.gpu.pt'))\n files.download(os.path.join('phaselink_model','model_to_use.cpu.pt'))\n",
"_____no_output_____"
]
],
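[
[
"As an aside (not part of the original workflow): the cell above keeps the first epoch whose validation loss falls below the average loss, as a rough corner of the loss curve. A simpler, common alternative, sketched here with the same variables, is to take the epoch with the lowest validation loss:\n\n```python\nbest_idx = int(np.argmin(val_losses))  # epoch with the lowest validation loss\nprint('Lowest-validation-loss model:', models_fnames[best_idx])\n```",
"_____no_output_____"
]
],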
[
[
"## 3. Perform phase association on some real earthquakes:",
"_____no_output_____"
]
],
[
[
"# Load correct model:\nif use_google_colab:\n params[\"model_file\"] = \"phaselink_model/model_to_use.gpu.pt\"\n model = torch.load(params[\"model_file\"])\n \nelse:\n params[\"model_file\"] = \"phaselink_model/model_to_use.cpu.pt\"\n model = torch.load(params[\"model_file\"])\n\n# And evaluate model:\nmodel.eval()",
"/Users/eart0504/opt/anaconda3/envs/phaselink/lib/python3.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.\n warnings.warn(msg, SourceChangeWarning)\n/Users/eart0504/opt/anaconda3/envs/phaselink/lib/python3.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.rnn.GRU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.\n warnings.warn(msg, SourceChangeWarning)\n"
],
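[
"Illustrative aside (not from the original notebook): keeping separate GPU and CPU copies of the model is one option; a single saved file can also be loaded onto whichever device is available by passing `map_location`, which is the standard PyTorch approach.\n\n```python\nimport torch\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel = torch.load('phaselink_model/model_to_use.gpu.pt', map_location=device)\nmodel.eval()\n```",
"_____no_output_____"
],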
[
"# Detect events:\nX, labels = phaselink_eval.read_gpd_output(params)\nphaselink_eval.detect_events(X, labels, model, params)\n\nprint(\"Events output to .nlloc file.\")",
"Reading GPD file\nFinished reading GPD file, 100000 triggers found\nDay 001: 67693 gpd picks, 0 cumulative events detected\nPermuting sequence for all lags...\nFinished permuting sequence\nPredicting labels for all phases\nFinished label prediction\nLinking phases\n20 events detected initially\nRemoving duplicates\n13 events detected after duplicate removal\n13 events left after applying n_min_det threshold\nDay 002: 32307 gpd picks, 13 cumulative events detected\nPermuting sequence for all lags...\nFinished permuting sequence\nPredicting labels for all phases\nFinished label prediction\nLinking phases\n36 events detected initially\nRemoving duplicates\n35 events detected after duplicate removal\n35 events left after applying n_min_det threshold\n48 detections total\nEvents output to .nlloc file.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e7b74e9839aa69d294d048623da67d45d821c802 | 300,411 | ipynb | Jupyter Notebook | Spotify_project.ipynb | shirleymbeyu/Spotify | f00b41c83e9ee7dbdcc155cfbeacd09bf3356e83 | [
"MIT"
] | 1 | 2020-08-15T17:33:04.000Z | 2020-08-15T17:33:04.000Z | Spotify_project.ipynb | shirleymbeyu/Spotify | f00b41c83e9ee7dbdcc155cfbeacd09bf3356e83 | [
"MIT"
] | null | null | null | Spotify_project.ipynb | shirleymbeyu/Spotify | f00b41c83e9ee7dbdcc155cfbeacd09bf3356e83 | [
"MIT"
] | null | null | null | 119.258039 | 77,010 | 0.795337 | [
[
[
"<a href=\"https://colab.research.google.com/github/shirleymbeyu/Spotify/blob/master/Spotify_project.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Business Understanding",
"_____no_output_____"
],
[
"# Problem Definition",
"_____no_output_____"
],
[
"\nConduct an EDA on the dataset and come up with some data visualisations.\n\nIdentify popular songs by building a machine learning model that predicts track popularity. Then present the results to the senior management of Spotify. → to increase their revenue\n\nSegment tracks on the platform by building a model that segments the tracks. Then present the results to the senior management of Spotify. → To identify a new genre of music.\n",
"_____no_output_____"
],
[
"# Objectives and Goals",
"_____no_output_____"
],
[
"To have a data visualization for the data to be used in the project.\n\nTo identify the most popular song using a machine learning model.\n\nTo have a track segmentation and to be able to identify a new genre of music.\n",
"_____no_output_____"
],
[
"# Data Sourcing",
"_____no_output_____"
]
],
[
[
"#importing the libraries to be used in the project\nimport numpy as np\nimport pandas as pd\n\n#libraries to be used for visualization\nimport matplotlib.pyplot as plt \n% matplotlib inline \nimport seaborn as sb\n",
"_____no_output_____"
],
[
"#Importing the raw data set \nlink = 'https://bit.ly/SpotifySongsDs' \ndata = pd.read_csv(link)\n#Reviewing first 5 rows of the data set\ndata[:5]",
"_____no_output_____"
],
[
"#Importing the glossary data \nglossary = pd.read_csv('spotify_glossary.csv')\nglossary",
"_____no_output_____"
]
],
[
[
" # Data Understanding",
"_____no_output_____"
],
[
"# Data Prepraration",
"_____no_output_____"
]
],
[
[
"#Getting the shape of the initial data set\ndata.shape",
"_____no_output_____"
],
[
"#getting the information on the data set\ndata.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 32833 entries, 0 to 32832\nData columns (total 23 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 track_id 32833 non-null object \n 1 track_name 32828 non-null object \n 2 track_artist 32828 non-null object \n 3 track_popularity 32833 non-null int64 \n 4 track_album_id 32833 non-null object \n 5 track_album_name 32828 non-null object \n 6 track_album_release_date 32833 non-null object \n 7 playlist_name 32833 non-null object \n 8 playlist_id 32833 non-null object \n 9 playlist_genre 32833 non-null object \n 10 playlist_subgenre 32833 non-null object \n 11 danceability 32833 non-null float64\n 12 energy 32833 non-null float64\n 13 key 32833 non-null int64 \n 14 loudness 32833 non-null float64\n 15 mode 32833 non-null int64 \n 16 speechiness 32833 non-null float64\n 17 acousticness 32833 non-null float64\n 18 instrumentalness 32833 non-null float64\n 19 liveness 32833 non-null float64\n 20 valence 32833 non-null float64\n 21 tempo 32833 non-null float64\n 22 duration_ms 32833 non-null int64 \ndtypes: float64(9), int64(4), object(10)\nmemory usage: 5.8+ MB\n"
]
],
[
[
"# Data cleaning",
"_____no_output_____"
]
],
[
[
"#removing column attributes we won't be needing for our analysisD\ndroped =data.drop(['track_id', 'track_album_id', 'track_album_name', 'playlist_name', 'playlist_id', 'playlist_genre', 'playlist_subgenre'], axis = 1)\ndroped[:5]",
"_____no_output_____"
],
[
"# Extract the month from the track release date. \n\n#first changing the format of the release date column to year-month-date\ndroped['track_album_release_date'] = pd.to_datetime(droped['track_album_release_date'], format='%Y-%m-%d')\n\n#creating a new column to hold only the months of release\ndroped['year'] = pd.DatetimeIndex(droped['track_album_release_date']).year\ndroped['month'] = pd.DatetimeIndex(droped['track_album_release_date']).month\ndroped\n\n#Checking the data types of the year and month column\nprint(droped.year.dtypes)\nprint(droped.month.dtypes)\n",
"int64\nint64\n"
],
[
"#Converting duration to minutes\ndef function_2(row):\n return row['duration_ms'] / 60000\n\ndroped['duration_min'] = droped.apply(lambda row: function_2(row), axis=1)\n\n#dropping the column with the duration in miliseconds\nspotify = droped.drop('duration_ms', axis=1)",
"_____no_output_____"
],
[
"#Checking the number of duplicate observations in our data set\nspotify.duplicated().sum()",
"_____no_output_____"
]
],
[
[
"There seems to be 4,484 duplicate values in our data set which adds up to 13.66% of our data set. This is a relatively huge number to drop but keeping it will also reduce the accurcay of our analysis. \n\nTherefore I will drop them and work with the remaining 86.34%",
"_____no_output_____"
]
],
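[
[
"As a quick check (illustrative only, not in the original run), the share of duplicated rows can be computed directly before dropping them:\n\n```python\nn_dups = spotify.duplicated().sum()\nprint(f'{n_dups} duplicates = {spotify.duplicated().mean():.2%} of the rows')\n```",
"_____no_output_____"
]
],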
[
[
"spotify.drop_duplicates(inplace = True)\nspotify.shape",
"_____no_output_____"
],
[
"spotify.isna().sum()",
"_____no_output_____"
]
],
[
[
"There were 4 rows with missing track_name and and artist_name. These wer not dropped since they ahd no huge impact on our analysis.\n\n",
"_____no_output_____"
]
],
[
[
"#Finally I will export my cleaned data set that is ready for analysis\nspotify.to_csv('spotify_df.csv', index=False)",
"_____no_output_____"
]
],
[
[
"# Data analysis",
"_____no_output_____"
],
[
"In my analysis I will work with my cleaned data set",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('spotify_df.csv')",
"_____no_output_____"
]
],
[
[
"First I will make a data frame of my continuous variables that range from 0.0-1.0 separate so as to analyze them together",
"_____no_output_____"
]
],
[
[
"cont = df[['danceability', 'energy', 'speechiness', 'acousticness', 'instrumentalness', 'valence']]\ncont",
"_____no_output_____"
],
[
"cont_bplot = cont.boxplot(figsize = (10,5), grid = False)\ncont_bplot.axes.set_title(\"continuous variable analysis\", fontsize=14)\n\n",
"_____no_output_____"
]
],
[
[
"From the above analysis there seems to be numerous outliers in the continuous variables. Though these can't really be termed as outliers because thy all affect the songs differently.\n\nValence is the only observation with no outliers",
"_____no_output_____"
]
],
[
[
"#Checking for outliers in the loudness continuous variable:\ndf.boxplot(column =['loudness'], grid = False)",
"_____no_output_____"
],
[
"#Checking for outliers in the track duration:\ndf.boxplot(column =['duration_min'], grid = False)",
"_____no_output_____"
]
],
[
[
"The duration has outliers from both ends, but rather more on the right end.",
"_____no_output_____"
]
],
[
[
"#Finding otliers in track popularity:\ndf.boxplot(column =['track_popularity'], grid = False)",
"_____no_output_____"
]
],
[
[
"There were no outliers in the song popularity indicating that there were no instances of a song being extremely popular or extremely inpopular or rather not listened to.",
"_____no_output_____"
],
[
"#Questions",
"_____no_output_____"
],
[
"1. How are the track observations distributed over the years?",
"_____no_output_____"
]
],
[
[
"#finding the distribution of the observations in the data set\ndf.hist(figsize=(15,15))",
"_____no_output_____"
]
],
[
[
"These distribution graphs go ahead to prove the observations in from the boxplots.",
"_____no_output_____"
],
[
"2. How are these variables related to track popularity?",
"_____no_output_____"
]
],
[
[
"#Checked for the relationship between the various varaibles.\n#This was done with a correlation co-efficient \nspotify.corr()",
"_____no_output_____"
]
],
[
[
"None of the variables had a corelation co-efficients above or even close to +/-0.5 to the track_popularity. Therefor none are worth mentioning\n\nThis shows that none of the variables are significantly proportional to track_popularity, that is; track popularity cannot be defined solely on a particular variable but a number or rather a combination of them.\n\nFrom here on the analysis will focus on those with a co-efficient of 0.1 or close to that:\n\n\n* acousticness with a corelation co-efficient of 0.091759\n* months with 0.080462\n* energy with -0.103579\n* instrumentalness with -0.124229\n* duration_min with -0.139600\n\n\n\n\n\n",
"_____no_output_____"
],
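[
"As an illustrative helper (not part of the original run), the same short-list can be pulled out programmatically; the 0.08 cut-off below is an assumption that mirrors the \"0.1 or close to that\" rule used above:\n\n```python\ncorr = spotify.corr()['track_popularity'].drop('track_popularity')\nprint(corr[corr.abs() >= 0.08].sort_values())\n```",
"_____no_output_____"
],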
[
"2. Virtually, how does acousticness affect the track_popularity?",
"_____no_output_____"
]
],
[
[
"#Done with scatter plot\nspotify.plot(x = 'track_popularity', y = 'acousticness', style = 'o')\nplt.xlabel('track_popularity')\nplt.ylabel('acousticness')\nplt.show",
"_____no_output_____"
]
],
[
[
"3. Virtually, how does energy of a track affect its popularity?",
"_____no_output_____"
]
],
[
[
"#Plotting energy against popularity\nspotify.plot(x = 'track_popularity', y = 'energy', style = 'o')\nplt.xlabel('track_popularity')\nplt.ylabel('energy')\nplt.show",
"_____no_output_____"
]
],
[
[
"4. Virtually, how does instrumentalness affect track popularity?",
"_____no_output_____"
]
],
[
[
"spotify.plot(x = 'track_popularity', y = 'instrumentalness', style = 'o')\nplt.xlabel('track_popularity')\nplt.ylabel('instrumentalness')\nplt.show",
"_____no_output_____"
]
],
[
[
"From the scatter plot it is clear that tracks with really low instrumentalness dominate the top most popular positions with only a few being popular with high instrumentalness",
"_____no_output_____"
],
[
" 4 (a) Which month had the most track releases over the years?",
"_____no_output_____"
]
],
[
[
"monthly_tracks = spotify['track_name'].groupby([spotify['month']]).count().sort_values(ascending=False)\nmonthly_tracks[:3]",
"_____no_output_____"
]
],
[
[
" (b) Which month had the most popular (above 50) track releases over the years?",
"_____no_output_____"
]
],
[
[
"month_of_pops = spotify[(spotify.track_popularity>50)].groupby('month')['track_name'].count().sort_values(ascending = False)\nmonth_of_pops[:3]",
"_____no_output_____"
]
],
[
[
"The months are in the same order as those with many track releases over the years",
"_____no_output_____"
],
[
" (C) Virtually, how does the month of track release affect the track_popularity?",
"_____no_output_____"
]
],
[
[
"#Finding whether and how month of track release affects the track_popularity\nspotify.plot(x = 'track_popularity', y = 'month', style = 'o')\nplt.xlabel('track_popularity')\nplt.ylabel('month')\nplt.show",
"_____no_output_____"
]
],
[
[
"From the scatter plot above, the month of track release doesn't seem to affect its popularity. \n\nThough it would be nice to note that the month of October(10) has a continuous number of popular track releases(having a popularity of above 90) while March(3) has only one and April(4) none.",
"_____no_output_____"
],
[
"5. Virtually, how does the duration of a track affect its popularity?",
"_____no_output_____"
]
],
[
[
"#Finding out whether and how track duration affects tack popularity\nspotify.plot(x = 'track_popularity', y = 'duration_min', style = 'o')\nplt.xlabel('track_popularity')\nplt.ylabel('track_duration')\nplt.show",
"_____no_output_____"
]
],
[
[
"From the scatter plot above it is safe to say that the duration of the track affects it's popularity to some extent:\n\nThis is because towards high popularity index the scatter plots tend to come together around the 3 minutes mark.Though it is not a guarantee that a track at around 3 minutes will have high popularity, it is a good starting point.\n\nIt is also nice to note that the tracks highly close to the zero mark are more likely to be less popular.\n\n",
"_____no_output_____"
],
[
"(b) What is the average duration of most popular tracks?",
"_____no_output_____"
]
],
[
[
"avg_duration_of_pops = spotify[(spotify.track_popularity>50)].groupby('duration_min')['track_name'].count().sort_values(ascending = False)\navg_duration_of_pops",
"_____no_output_____"
]
],
[
[
"with most popular tracks having a duration of 2.7 minutes followed by 4.0, 3.5, 3.3 and 3.1 The observation made earlier stands",
"_____no_output_____"
],
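[
"As an illustrative cross-check (not in the original run): the groupby above counts tracks per exact duration value, so the mean and median duration of popular tracks can also be read off directly:\n\n```python\npopular = spotify[spotify.track_popularity > 50]\nprint(popular['duration_min'].mean(), popular['duration_min'].median())\n```",
"_____no_output_____"
],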
[
"# Conclusion",
"_____no_output_____"
],
[
"\n\n1. Track popularity does not rely on one variable but rather it is affected by a number of factors.\n2. The factors that affect track popularity are not straight forward but do contain a number of outliers.\n\n",
"_____no_output_____"
],
[
"# Reccomendations and Next step",
"_____no_output_____"
],
[
"\n\n1. The model to predict popular tracks should incorporate the varaibles that showed meaningful relationship with track popularity\n\n",
"_____no_output_____"
],
[
"The next step would be to build a model that predicts track popularity.\n\nBefore then it would be nice to go through the analysis using other visualizaton methods to probably get a better understanding of the varaibles.\n\nLater another one to segment track to identify whether there is a new genre.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7b74ffaaedfc04b91062733e8ec102831ee4566 | 4,811 | ipynb | Jupyter Notebook | notebook/fillblank.ipynb | sagorbrur/fillblank | 7682e346b886d43dcab82bd5536267e8531783d5 | [
"MIT"
] | null | null | null | notebook/fillblank.ipynb | sagorbrur/fillblank | 7682e346b886d43dcab82bd5536267e8531783d5 | [
"MIT"
] | null | null | null | notebook/fillblank.ipynb | sagorbrur/fillblank | 7682e346b886d43dcab82bd5536267e8531783d5 | [
"MIT"
] | null | null | null | 32.506757 | 250 | 0.532945 | [
[
[
"<a href=\"https://colab.research.google.com/github/sagorbrur/fillblank/blob/master/notebook/fillblank.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"!pip install fillblank",
"_____no_output_____"
],
[
"import torch\ntorch.__version__",
"_____no_output_____"
],
[
"from fillblank.fillblank import FillBlank\ntext = \"\"\"Man is a rational being <blank> wisdom, intellect and sense of self-respect. He had immense <blank> in himself. \n It keeps him aloof from all sorts of evil <blank>. To become an ideal man he should <blank> the feelings of others.\"\"\"\nfilltheblank = FillBlank()\noutput, output_dictionary = filltheblank.fill(text)\n",
"Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']\n- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\n- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\nSome weights of BertForMaskedLM were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
],
[
"print(output)\nprint(output_dictionary['predict_words'])",
"man is a rational being <with> wisdom, intellect and sense of self - respect. he had immense <faith> in himself. it keeps him aloof from all sorts of evil <things>. to become an ideal man he should <respect> the feelings of others. \n['with', 'faith', 'things', 'respect']\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e7b7884231ea1afe591e91a9c45659e60df0411b | 88,759 | ipynb | Jupyter Notebook | Russia vs Ukraine Prediction.ipynb | SubhajitPal555/Russia-vs-Ukraine-Prediction-Analysis | aa8a16594ca2ed99e032dc6b9d49624937700403 | [
"MIT"
] | null | null | null | Russia vs Ukraine Prediction.ipynb | SubhajitPal555/Russia-vs-Ukraine-Prediction-Analysis | aa8a16594ca2ed99e032dc6b9d49624937700403 | [
"MIT"
] | null | null | null | Russia vs Ukraine Prediction.ipynb | SubhajitPal555/Russia-vs-Ukraine-Prediction-Analysis | aa8a16594ca2ed99e032dc6b9d49624937700403 | [
"MIT"
] | null | null | null | 77.248912 | 7,856 | 0.760531 | [
[
[
"TOPIC : RUSSIAN TROOPS AND EQUIPMENT LOSS PREDICTION ANALYSIS\n\n//....THIS TOPIC IS COVERED ON THE BASIS OF THE DATASET CREATED WITH REGARDS OF THE ONGOING RUSSIA-UKRAINE WAR....//\n\nMODEL USED HERE - RANDOM FOREST REGRESSOR\n\nDATABASE TAKEN FROM KAGGLE.....",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
],
[
"data1 =pd.read_csv(\"/Users/subhajitpal/Desktop/Data Analysis using Python/equipment.csv\")\ndata2 =pd.read_csv(\"/Users/subhajitpal/Desktop/Data Analysis using Python/troop.csv\") ",
"_____no_output_____"
],
[
"data1.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 5 entries, 0 to 4\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 date 5 non-null object\n 1 day 5 non-null object\n 2 aircraft 5 non-null int64 \n 3 helicopter 5 non-null int64 \n 4 tank 5 non-null int64 \n 5 APC 5 non-null int64 \n 6 field artillery 5 non-null int64 \n 7 BUK 5 non-null int64 \n 8 MRL 5 non-null int64 \n 9 military auto 5 non-null int64 \n 10 fuel tank 5 non-null int64 \n 11 drone 5 non-null int64 \n 12 naval ship 5 non-null int64 \n 13 anti-aircraft warfare 5 non-null int64 \ndtypes: int64(12), object(2)\nmemory usage: 688.0+ bytes\n"
],
[
"data1.describe()",
"_____no_output_____"
],
[
"data1.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 5 entries, 0 to 4\nData columns (total 14 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 date 5 non-null object\n 1 day 5 non-null object\n 2 aircraft 5 non-null int64 \n 3 helicopter 5 non-null int64 \n 4 tank 5 non-null int64 \n 5 APC 5 non-null int64 \n 6 field artillery 5 non-null int64 \n 7 BUK 5 non-null int64 \n 8 MRL 5 non-null int64 \n 9 military auto 5 non-null int64 \n 10 fuel tank 5 non-null int64 \n 11 drone 5 non-null int64 \n 12 naval ship 5 non-null int64 \n 13 anti-aircraft warfare 5 non-null int64 \ndtypes: int64(12), object(2)\nmemory usage: 688.0+ bytes\n"
],
[
"data1.describe().columns",
"_____no_output_____"
],
[
"data1.describe().rows",
"_____no_output_____"
],
[
"data2.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 5 entries, 0 to 4\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 date 5 non-null object \n 1 day 5 non-null object \n 2 personnel 5 non-null int64 \n 3 POW 1 non-null float64\ndtypes: float64(1), int64(1), object(2)\nmemory usage: 288.0+ bytes\n"
],
[
"data2.describe()",
"_____no_output_____"
],
[
"data2.describe().columns",
"_____no_output_____"
],
[
"data1.isnull().sum()",
"_____no_output_____"
],
[
"data2.isnull().sum()",
"_____no_output_____"
],
[
"data2['POW'].fillna(data2['POW'].mode()[0],inplace=True)\ndata2.isnull().any().any()",
"_____no_output_____"
],
[
"data1",
"_____no_output_____"
],
[
"data2",
"_____no_output_____"
],
[
"plt.hist(data1['aircraft'], bins=10)\nplt.title(\"aircraft\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(data1['helicopter'],bins=10)\nplt.title(\"helicopter\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(data1['tank'],bins=10)\nplt.title(\"tank\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(data1['APC'],bins=10)\nplt.title(\"APC\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(data1['field artillery'],bins=10)\nplt.title(\"field artillery\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(data1['MRL'],bins=10)\nplt.title(\"MRL\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(data1['military auto'],bins=10)\nplt.title(\"military auto\")\nplt.show()",
"_____no_output_____"
],
[
"plt.hist(data1['anti-aircraft warfare'],bins=10)\nplt.title(\"anti-aircraft warfare\")\nplt.show()",
"_____no_output_____"
],
[
"x=data1.drop(['date','day'],axis=1)",
"_____no_output_____"
],
[
"y=data1.drop(['day'],axis=1)",
"_____no_output_____"
],
[
"x.shape",
"_____no_output_____"
],
[
"y.shape",
"_____no_output_____"
],
[
"x = data1.iloc[:, 0:5].values\ny = data1.iloc[:, 0:5].values",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\n\nx_train, x_test = train_test_split(x,test_size=0.2, random_state=0)\ny_train,y_test=train_test_split(y,test_size=0.2,random_state=0)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b78cae5a5e5465d59ea3b2e10750427c3c7488 | 73,910 | ipynb | Jupyter Notebook | notebooks/5.iris-hadoop.ipynb | Jeansding/Pyspark-ML | 32d9b47f9966668e0002ae6b1994b21dd7f6c975 | [
"MIT"
] | 267 | 2018-10-24T05:28:04.000Z | 2019-12-24T19:06:14.000Z | notebooks/5.iris-hadoop.ipynb | Jeansding/Pyspark-ML | 32d9b47f9966668e0002ae6b1994b21dd7f6c975 | [
"MIT"
] | 4 | 2019-01-03T14:54:02.000Z | 2019-12-29T14:31:37.000Z | notebooks/5.iris-hadoop.ipynb | Jeansding/Pyspark-ML | 32d9b47f9966668e0002ae6b1994b21dd7f6c975 | [
"MIT"
] | 83 | 2018-10-23T15:37:54.000Z | 2019-12-24T19:06:21.000Z | 310.546218 | 66,492 | 0.904519 | [
[
[
"from pyspark.sql import SparkSession",
"_____no_output_____"
],
[
"sparkSession = SparkSession.builder.appName(\"csv\").getOrCreate()",
"_____no_output_____"
],
[
"df_load = sparkSession.read.csv('hdfs://localhost:9000/user/Iris.csv')\ndf_load.show()",
"+---+-------------+------------+-------------+------------+-----------+\n|_c0| _c1| _c2| _c3| _c4| _c5|\n+---+-------------+------------+-------------+------------+-----------+\n| Id|SepalLengthCm|SepalWidthCm|PetalLengthCm|PetalWidthCm| Species|\n| 1| 5.1| 3.5| 1.4| 0.2|Iris-setosa|\n| 2| 4.9| 3.0| 1.4| 0.2|Iris-setosa|\n| 3| 4.7| 3.2| 1.3| 0.2|Iris-setosa|\n| 4| 4.6| 3.1| 1.5| 0.2|Iris-setosa|\n| 5| 5.0| 3.6| 1.4| 0.2|Iris-setosa|\n| 6| 5.4| 3.9| 1.7| 0.4|Iris-setosa|\n| 7| 4.6| 3.4| 1.4| 0.3|Iris-setosa|\n| 8| 5.0| 3.4| 1.5| 0.2|Iris-setosa|\n| 9| 4.4| 2.9| 1.4| 0.2|Iris-setosa|\n| 10| 4.9| 3.1| 1.5| 0.1|Iris-setosa|\n| 11| 5.4| 3.7| 1.5| 0.2|Iris-setosa|\n| 12| 4.8| 3.4| 1.6| 0.2|Iris-setosa|\n| 13| 4.8| 3.0| 1.4| 0.1|Iris-setosa|\n| 14| 4.3| 3.0| 1.1| 0.1|Iris-setosa|\n| 15| 5.8| 4.0| 1.2| 0.2|Iris-setosa|\n| 16| 5.7| 4.4| 1.5| 0.4|Iris-setosa|\n| 17| 5.4| 3.9| 1.3| 0.4|Iris-setosa|\n| 18| 5.1| 3.5| 1.4| 0.3|Iris-setosa|\n| 19| 5.7| 3.8| 1.7| 0.3|Iris-setosa|\n+---+-------------+------------+-------------+------------+-----------+\nonly showing top 20 rows\n\n"
],
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns",
"_____no_output_____"
],
[
"df_pandas = df_load.toPandas()\ndf_pandas = df_pandas.rename(columns={'_c0': 'Id', '_c1': 'SepalLengthCm','_c2':'SepalWidthCm','_c3':'PetalLengthCm',\n '_c4':'PetalWidthCm','_c5':'Species'})\ndf_pandas = df_pandas.iloc[1:,:]\ndf_pandas.head()",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,10))\niris = pd.melt(df_pandas.iloc[:,1:], \"Species\", var_name=\"measurement\")\n\nsns.swarmplot(x=\"measurement\", y=\"value\", hue=\"Species\",palette=[\"r\", \"c\", \"y\"], data=iris)\nplt.yticks([])\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b7979e90d88a23d13ed44e5b82f84e3b2481c2 | 33,540 | ipynb | Jupyter Notebook | Laplace approximation for dynamic model.ipynb | arayabrain/feptutorial | e52769c7317f16185e3d305177a59d9aed6350b2 | [
"MIT"
] | 4 | 2016-11-14T07:24:16.000Z | 2019-04-19T02:12:03.000Z | Laplace approximation for dynamic model.ipynb | arayabrain/feptutorial | e52769c7317f16185e3d305177a59d9aed6350b2 | [
"MIT"
] | null | null | null | Laplace approximation for dynamic model.ipynb | arayabrain/feptutorial | e52769c7317f16185e3d305177a59d9aed6350b2 | [
"MIT"
] | 1 | 2020-02-21T10:44:48.000Z | 2020-02-21T10:44:48.000Z | 28.715753 | 187 | 0.425224 | [
[
[
"# Laplace approxmation & free-action",
"_____no_output_____"
],
[
"$\\DeclareMathOperator{\\vec}{vec}$",
"_____no_output_____"
],
[
"### § Free-action",
"_____no_output_____"
],
[
"Free-action is the time-integral of free-energy",
"_____no_output_____"
],
[
"$$\n\\begin{eqnarray*}\n \\overline F &=& \\overline U - \\overline H\\\\\n \\overline U &=& \\int dt \\langle L(\\psi)\\rangle_{q_\\psi} = \\int dt \\langle \\ln p(y, \\psi)\\rangle_{q_\\psi}\\\\\n \\overline H &=& \\int dt \\langle q(\\psi)\\rangle_{q_\\psi}\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"### § Mean-field assumption",
"_____no_output_____"
],
[
"Now let our parameters factorise into states, parameters, and hyperparameters, $ \\psi \\to \\{u(t), \\theta, \\lambda\\}$, \n\n\nand let the approximate densities over these variables follow mean-field assumption $q(\\psi) = q(u, t)q(\\theta)q(\\lambda)$.\n\n$\\theta$ parameterises the first moment of the states, and is independent of $\\lambda$, which parameterises the second moment.\n\nWrite",
"_____no_output_____"
],
[
"$$\n\\begin{eqnarray*}\n \\overline U &=& \\int dt \\langle L(u, t, \\theta, \\lambda)\\rangle_{q_uq_\\theta q_\\lambda}\\\\\n \\overline H &=& \\int dt \\langle q(u, t)\\rangle +\n \\langle q(\\theta)\\rangle +\n \\langle q(\\lambda)\\rangle\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"### § Laplace approximation",
"_____no_output_____"
],
[
"Let ",
"_____no_output_____"
],
[
"$$\n\\begin{eqnarray*}\n q(u, t) &=& N(\\mu_u(t), \\Sigma_u(t))\\\\\n q(\\theta) &=& N(\\mu_\\theta, \\Sigma_\\theta)\\\\\n q(\\lambda) &=& N(\\mu_\\lambda, \\Sigma_\\lambda)\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"Naturally,",
"_____no_output_____"
],
[
"$$\n\\overline H = \\frac 1 2\\int dt \\ln|\\Sigma_u| + \n \\frac 1 2 \\ln|\\Sigma_\\theta| + \\frac 1 2 \\ln|\\Sigma_\\lambda| +\n \\frac 1 2 \\left( ND_u + D_\\theta + D_\\lambda\\right)\\ln 2\\pi e\n$$",
"_____no_output_____"
],
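[
"A numerical sketch of the entropy term for a single Gaussian factor (illustrative only; `slogdet` is used for numerical stability):\n\n```python\nimport numpy as np\n\ndef gaussian_entropy(Sigma):\n    # 0.5*ln|Sigma| + 0.5*n*ln(2*pi*e)\n    n = Sigma.shape[0]\n    _, logdet = np.linalg.slogdet(Sigma)\n    return 0.5 * logdet + 0.5 * n * np.log(2 * np.pi * np.e)\n```",
"_____no_output_____"
],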
[
"For the internal action, $\\overline U$, find its second-order truncation around its mode $\\mu = \\left[\\mu_u, \\mu_\\theta, \\mu_\\lambda\\right]^T$ (ignoring bilinear terms).\n\n$$\n\\DeclareMathOperator{\\tr}{tr}\n\\begin{eqnarray*}\n \\overline U &=& \n \\int L(\\mu, t) \\\\\n &&+ \n \\left\\langle\n \\frac 1 2\\left[\n (u - \\mu_u)^T L^{(uu)} (u - \\mu_u) \\right.\\right.\\\\\n && +\\left.\\left.\n (\\theta - \\mu_\\theta)^T L^{(\\theta\\theta)} (\\theta - \\mu_\\theta) \\right.\\right.\\\\\n && +\\left.\\left.\n (\\lambda - \\mu_\\lambda)^T L^{(\\lambda\\lambda)} (\\lambda - \\mu_\\lambda)\n \\right]\n \\right\\rangle_{q_u q_\\theta q_\\lambda}dt\\\\\n &=& \\int L(\\mu, t) + \\tr\\left(\\Sigma_u L^{(uu)}\\right) +\n \\tr\\left(\\Sigma_\\theta L^{(\\theta\\theta)}\\right) +\n \\tr\\left(\\Sigma_\\lambda L^{(\\lambda\\lambda)}\\right)dt\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"#### Finding conditional precision",
"_____no_output_____"
],
[
"Solve for $\\partial\\overline F/\\partial{\\Sigma_u(t)}=0$\n\n$$\n\\begin{eqnarray*}\n &&\\frac 1 2 \\int dt L^{(uu)} - \\frac 1 2 \\int dt\\Sigma^{-1}_u = 0\\\\\n \\Rightarrow && \\Sigma^{-1}_u(t) = - L^{(uu)}(\\mu, t)\n \\end{eqnarray*}\n$$\n\nSimilarly,\n\n$$\n\\begin{eqnarray*}\n \\Sigma^{-1}_\\theta &=& -\\int dt L^{(\\theta\\theta)}(\\mu, t)\\\\\n \\Sigma^{-1}_\\lambda &=& -\\int dt L^{(\\lambda\\lambda)}(\\mu, t)\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"Note that\n\n$$\n\\begin{eqnarray*}\n L(u, t, \\theta, \\lambda) &=& \n L(u, t|\\theta, \\lambda) + L(\\theta) + L(\\lambda)\\\\\n L^{(uu)} &=&\n L^{(uu)}(u, t| \\theta, \\lambda)\\\\\n L^{(\\theta\\theta)} &=&\n L^{(\\theta\\theta)}(u, t|\\theta, \\lambda) + \n L^{(\\theta\\theta)}(\\theta)\\\\\n L^{(\\lambda\\lambda)} &=& \n L^{(\\lambda\\lambda)}(u, t|\\theta, \\lambda) + \n L^{(\\lambda\\lambda)}(\\lambda)\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"### § Variational action under Laplace approximation",
"_____no_output_____"
],
[
"With the following notation, one can write down the variational action, which is the internal action expected under their resepctive Markov Blanket.",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n L_u &= L(u, t, \\mu_\\theta, \\mu_\\lambda)\\\\\n L_\\theta &= L(\\mu_u, t, \\theta, \\mu_\\lambda)\\\\\n L_\\lambda &= L(\\mu_u, t, \\mu_\\theta, \\lambda)\n \\end{align}\n$$",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n {V}_u = \n V(u, t) &= \\langle L(u, t, \\theta, \\lambda)\\rangle_{q_\\theta q_\\lambda}\\\\\n &= L_u + \n \\frac 1 2\n \\left(\n \\tr[\\Sigma_\\theta L_\\theta^{(\\theta\\theta)}] +\n \\tr[\\Sigma_\\lambda L_\\lambda^{(\\lambda\\lambda)}]\n \\right)\\\\\n \\overline{V}_\\theta = \n \\overline V(\\theta) &= \\int dt \\langle L(u, t, \\theta, \\lambda)\\rangle_{q_u q_\\lambda}\\\\\n &= \\int L_\\theta + \n \\frac 1 2\n \\left(\n \\tr[\\Sigma_u L_u^{(uu)}] +\n \\tr[\\Sigma_\\lambda L_\\lambda^{(\\lambda\\lambda)}]\n \\right)dt\\\\\n \\overline{V}_\\lambda = \n \\overline V(\\lambda) &= \\int dt \\langle L(u, t, \\theta, \\lambda)\\rangle_{q_u q_\\theta}\\\\\n &= \\int L_\\lambda + \n \\frac 1 2\n \\left(\n \\tr[\\Sigma_u L_u^{(uu)}] +\n \\tr[\\Sigma_\\theta L_\\theta^{(\\theta\\theta)}]\n \\right)dt\\\\\n \\end{align}\n$$",
"_____no_output_____"
],
[
"And the following differentials on variational actions will become useful later.\n\nNote that the notation, $A\\!\\!:$, stands for matrix vectorisation, e.g., $L\\!\\!:_\\theta^{(\\theta\\theta)}$ is a vectorisation of $L_\\theta^{(\\theta\\theta)}$",
"_____no_output_____"
],
[
"$$\n\\left(\n\\begin{array}{ccc}\n L_\\theta^{(\\theta_{:1}\\theta_{:1})} &\n L_\\theta^{(\\theta_{:1}\\theta_{:2})} &\n \\\\\n L_\\theta^{(\\theta_{:2}\\theta_{:1})} &\n L_\\theta^{(\\theta_{:2}\\theta_{:2})} &\n \\\\\n & &\n \\ddots\n \\end{array}\n \\right)\n$$",
"_____no_output_____"
],
[
"which becomes",
"_____no_output_____"
],
[
"$$\\{\nL_u^{(\\theta_{:1}\\theta_{:1})},\nL_u^{(\\theta_{:2}\\theta_{:1})},\n\\cdots,\nL_u^{(\\theta_{:1}\\theta_{:2})},\nL_u^{(\\theta_{:2}\\theta_{:2})},\n\\cdots\n\\}^T$$",
"_____no_output_____"
],
[
"This is to be contrasted with, say, $L\\!\\!:_\\theta^{(\\theta\\theta)(u)} and L\\!\\!:_\\theta^{(\\theta\\theta)(uu)}$, which read",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n &\\left(\n \\begin{array}{c}\n L\\!\\!:_\\theta^{(\\theta\\theta)(u_{:1})}\\\\\n L\\!\\!:_\\theta^{(\\theta\\theta)(u_{:2})}\\\\\n \\vdots\n \\end{array}\n \\right), \\;\\;\\;\\text{ and}\\\\\n &\\left(\n \\begin{array}{ccc}\n L\\!\\!:_\\theta^{(\\theta\\theta)(u_{:1}u_{:1})} &\n L\\!\\!:_\\theta^{(\\theta\\theta)(u_{:1}u_{:2})} &\n \\\\\n L\\!\\!:_\\theta^{(\\theta\\theta)(u_{:2}u_{:1})} &\n L\\!\\!:_\\theta^{(\\theta\\theta)(u_{:2}u_{:2})} &\n \\\\\n & \\ddots &\n \\end{array}\n \\right),\n \\end{align}\n$$",
"_____no_output_____"
],
[
"respectively.",
"_____no_output_____"
],
[
"Adopting these notations, the differentials of variational actions are",
"_____no_output_____"
],
[
"$$\n\\begin{eqnarray*}\n V_u^{(u)} &= L_u^{(u)}& +\\frac 1 2\n \\left(I\\otimes\\Sigma^T\\!\\!\\!:_\\theta\\right)^T\n L\\!\\!:_\\theta^{(\\theta\\theta)(u)} +\n \\frac 1 2\n \\left(I\\otimes\\Sigma^T\\!\\!\\!:_\\lambda\\right)^T\n L\\!\\!:_\\lambda^{(\\lambda\\lambda)(u)}\\\\\n V_u^{(uu)} &= L_u^{(uu)}& +\\frac 1 2\n \\left(I\\otimes\\Sigma^T\\!\\!\\!:_\\theta\\right)^T\n L\\!\\!:_\\theta^{(\\theta\\theta)(uu)} +\n \\frac 1 2\n \\left(I\\otimes\\Sigma^T\\!\\!\\!:_\\lambda\\right)^T\n L\\!\\!:_\\lambda^{(\\lambda\\lambda)(uu)}\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"$$\n\\begin{eqnarray*}\n \\overline V_\\theta^{(\\theta)} &= \\int L_\\theta^{(\\theta)}& +\n \\frac 1 2 \\left(\n I\\otimes \\Sigma^T\\!\\!\\!:_u\\right)^T\n L\\!\\!:_u^{(uu)(\\theta)} +\n \\frac 1 2 \\left(\n I\\otimes \\Sigma^T\\!\\!\\!:_\\lambda\\right)^T\n L\\!\\!:_\\lambda^{(\\lambda\\lambda)(\\theta)}dt\\\\\n \\overline V_\\theta^{(\\theta\\theta)} &= \\int L_\\theta^{(\\theta\\theta)}& +\n \\frac 1 2 \\left(\n I\\otimes \\Sigma^T\\!\\!\\!:_u\\right)^T\n L\\!\\!:_u^{(uu)(\\theta\\theta)} +\n \\frac 1 2 \\left(\n I\\otimes \\Sigma^T\\!\\!\\!:_\\lambda\\right)^T\n L\\!\\!:_\\lambda^{(\\lambda\\lambda)(\\theta\\theta)}dt\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"$$\n\\begin{eqnarray*}\n \\overline V_\\lambda^{(\\lambda)} &= \\int L_\\lambda^{(\\lambda)}& +\n \\frac 1 2 \\left(\n I\\otimes \\Sigma^T\\!\\!\\!:_u\\right)^T\n L\\!\\!:_u^{(uu)(\\lambda)} +\n \\frac 1 2 \\left(\n I\\otimes \\Sigma^T\\!\\!\\!:_\\theta\\right)^T\n L\\!\\!:_\\theta^{(\\theta\\theta)(\\lambda)}dt\\\\\n \\overline V_\\lambda^{(\\lambda\\lambda)} &= \\int L_\\lambda^{(\\lambda\\lambda)}& +\n \\frac 1 2 \\left(\n I\\otimes \\Sigma^T\\!\\!\\!:_u\\right)^T\n L\\!\\!:_u^{(uu)(\\lambda\\lambda)} +\n \\frac 1 2 \\left(\n I\\otimes \\Sigma^T\\!\\!\\!:_\\theta\\right)^T\n L\\!\\!:_\\theta^{(\\theta\\theta)(\\lambda\\lambda)}dt\n \\end{eqnarray*}\n$$",
"_____no_output_____"
],
[
"### § Optimisation: embedding and mode following",
"_____no_output_____"
],
[
"Suppose the time-dependent state, $u$, subsumes its motion up to arbitrary high order, one may unpack this and write $\\tilde u = (u, u', u'', \\dots)^T$.",
"_____no_output_____"
],
[
"Let this generalised state move along the gradient of variational energy/action, hoping to catch up the motion one level above when the gradient vanishes:",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\dot{\\tilde u} &= V_u^{(u)} + \\mathcal D\\tilde u\n \\end{align}\n$$",
"_____no_output_____"
],
[
"This way, when $V_u^{(u)} = 0$ (this happens at the mode where $\\tilde u = \\tilde\\mu$), one has",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\dot u &= u'\\\\\n \\dot u' &= u''\\\\\n \\dot u'' &= u'''\\\\\n &\\vdots\n \\end{align}\n$$",
"_____no_output_____"
],
[
"Thus, motion of the modes becomes modes of the motion. Here, $\\mathcal D$ is a differential operator, or simply a delay matrix.",
"_____no_output_____"
],
[
"Let us find the linearisation of this state motion around its mode, $\\tilde\\mu$, which follows that",
"_____no_output_____"
],
[
"$$\nV_u^{(u)} = V(\\tilde\\mu)_u^{(u)} + V(\\tilde\\mu_u)^{(uu)}(\\tilde u - \\tilde\\mu) = V(\\tilde\\mu_u)^{(uu)}(\\tilde u - \\tilde\\mu)\n$$",
"_____no_output_____"
],
[
"And have $\\tilde\\varepsilon = \\tilde u - \\tilde\\mu$, so that $\\dot{\\tilde\\varepsilon} = \n\\dot{\\tilde u} - \\dot{\\tilde\\mu} = \\dot{\\tilde u} - \\mathcal D\\tilde\\mu$.",
"_____no_output_____"
],
[
"With substitution, write",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\dot{\\tilde\\varepsilon} &=\n V_u^{(uu)}\\tilde\\varepsilon + \\mathcal D\\tilde u - \\mathcal D\\tilde\\mu\\\\\n &= \\left(V_u^{(uu)} +\\mathcal D\\right)\\tilde\\varepsilon\n \\end{align}\n$$",
"_____no_output_____"
],
[
"and note $\\mathcal J = \\left( V_u^{(uu)} + \\mathcal D\\right) = \\partial\\dot{\\tilde u}/\\partial\\tilde u$.",
"_____no_output_____"
],
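[
"A concrete construction of $\\mathcal D$ (an illustrative sketch): for $d$-dimensional states embedded to order $p$, the Kronecker product of a shift matrix with the identity gives the block-delay operator, so that $\\mathcal D\\tilde u$ maps $(u, u', u'', \\dots)$ to $(u', u'', \\dots, 0)$.\n\n```python\nimport numpy as np\n\ndef delay_operator(d, p):\n    # block super-diagonal of identities: shifts each order of motion up by one\n    return np.kron(np.eye(p, k=1), np.eye(d))\n\nD = delay_operator(d=2, p=4)  # e.g. 2 states, 4 orders of generalised motion\n```",
"_____no_output_____"
],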
[
"#### Finding conditional expectation (updating scheme)",
"_____no_output_____"
],
[
"The updating scheme is again derived from Ozaki's local linearisation:",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\Delta \\tilde u &= \\left(\n \\exp(\\mathcal J) - I\\right)\n \\mathcal J^{-1}\n \\dot{\\tilde u}\\\\\n &= \\left(\n \\exp(\\mathcal J) - I\\right)\n \\mathcal J^{-1}\n \\left(V_u^{(u)} + \\mathcal D\\tilde u\\right)\n \\end{align}\n$$",
"_____no_output_____"
],
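[
"Numerically, one update of this scheme is just a matrix exponential and a linear solve (a sketch, assuming the Jacobian `J` and the flow `du` have already been evaluated at the current expansion point):\n\n```python\nimport numpy as np\nfrom scipy.linalg import expm\n\ndef ozaki_step(J, du):\n    # Delta u = (expm(J) - I) J^{-1} du, i.e. local linearisation over a unit step\n    n = J.shape[0]\n    return (expm(J) - np.eye(n)) @ np.linalg.solve(J, du)\n```",
"_____no_output_____"
],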
[
"For parameters and hyperparameters, this reduces to",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\Delta \\theta &= {\\overline V_\\theta^{(\\theta\\theta)}}^{-1}\n \\overline V_\\theta^{(\\theta)}\\\\\n \\Delta \\lambda &= {\\overline V_\\lambda^{(\\lambda\\lambda)}}^{-1}\n \\overline V_\\lambda^{(\\lambda)}\n \\end{align}\n$$",
"_____no_output_____"
],
[
"### § Example 1: Dynamic causal model",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n y &= g(x, v; \\theta) + z,\\;\\;\\; z\\sim N(0, \\Pi(\\lambda)^{-1}_z)\\\\\n \\dot x &= f(x, v; \\theta) + w,\\;\\;\\; w\\sim N(0, \\Pi(\\lambda)^{-1}_w)\n \\end{align}\n$$",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\ln p(y, u, t, \\theta, \\lambda) &= L(y|x, v, \\theta, \\lambda) + L(x|v, \\theta,\\lambda) + L(\\theta) + L(\\lambda)\n \\end{align}\n$$",
"_____no_output_____"
],
[
"where $u=(v, x)^T$ and let $p(v)$ be uninformative for now. And let the parameter, $\\theta$, and hyperparameter, $\\lambda$, be independent and take the following form",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n p(\\theta) &= N(\\theta|\\eta_\\theta, P^{-1}_\\theta)\\\\\n p(\\lambda) &= N(\\lambda|\\eta_\\lambda, P^{-1}_\\lambda)\n \\end{align}\n$$",
"_____no_output_____"
],
[
"which altogether lend the generative density to an analytical form",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n L(t) &= -\n \\frac 1 2 \\varepsilon_v^T\\Pi_z\\varepsilon_v -\n \\frac 1 2 \\varepsilon_x^T\\Pi_w\\varepsilon_x -\n \\frac 1 2 \\varepsilon_\\theta^T P_\\theta \\varepsilon_\\theta -\n \\frac 1 2 \\varepsilon_\\lambda^T P_\\lambda \\varepsilon_\\lambda -\n \\frac 1 2 \\ln|\\Pi_z| -\n \\frac 1 2 \\ln|\\Pi_w| - \n \\frac 1 2 \\ln|P_\\theta| - \n \\frac 1 2 \\ln|P_\\lambda|\n \\end{align}\n$$",
"_____no_output_____"
],
[
"where",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\varepsilon_v &= y - g(x, v)\\\\\n \\varepsilon_x &= \\dot x - f(x, v)\\\\\n \\varepsilon_\\theta &= \\theta - \\eta_\\theta\\\\\n \\varepsilon_\\lambda &= \\lambda - \\eta_\\lambda\n \\end{align}\n$$",
"_____no_output_____"
],
[
"If one lumps the time-dependent terms together:",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\varepsilon_u &= \\left( \\varepsilon_v, \\varepsilon_x \\right)^T\\\\\n \\Pi &= \\left(\\begin{array}{cc}\n \\Pi_z & \\\\\n & \\Pi_w\n \\end{array}\\right) =\n \\left(\\begin{array}{cc}\n C_z^{-1} &\\\\\n & C_w^{-1}\n \\end{array}\n \\right) = C^{-1}\n \\end{align}\n$$",
"_____no_output_____"
],
[
"one writes",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n L(t) &= -\n \\frac 1 2 \\varepsilon_u^T\\Pi\\varepsilon_u -\n \\frac 1 2 \\varepsilon_\\theta^T P_\\theta \\varepsilon_\\theta -\n \\frac 1 2 \\varepsilon_\\lambda^T P_\\lambda \\varepsilon_\\lambda -\n \\frac 1 2 \\ln|\\Pi| -\n \\frac 1 2 \\ln|P_\\theta| - \n \\frac 1 2 \\ln|P_\\lambda|\n \\end{align}\n$$",
"_____no_output_____"
],
[
"And let this expression prescribe generalised motion over its states, write",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n L(t) &= -\n \\frac 1 2 \\tilde{\\varepsilon_u}^T \\tilde\\Pi \\tilde{\\varepsilon_u} -\n \\frac 1 2 \\varepsilon_\\theta^T P_\\theta \\varepsilon_\\theta -\n \\frac 1 2 \\varepsilon_\\lambda^T P_\\lambda \\varepsilon_\\lambda -\n \\frac 1 2 \\ln|\\tilde\\Pi| -\n \\frac 1 2 \\ln|P_\\theta| - \n \\frac 1 2 \\ln|P_\\lambda|\n \\end{align}\n$$",
"_____no_output_____"
],
[
"#### Conditional precision $\\pmb\\Lambda_u$",
"_____no_output_____"
],
[
"For state $u$, the conditional precision, $\\Lambda_u=-L(t)^{(uu)}$, is",
"_____no_output_____"
],
[
"$$\n\\tilde{\\varepsilon_u^{(u)}}^T \\tilde\\Pi \\tilde{\\varepsilon_u^{(u)}}\n$$",
"_____no_output_____"
],
[
"where",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\tilde{\\varepsilon_u^{(u)}} &= \\left(\n \\begin{array}{cc}\n \\tilde{\\varepsilon_v^{(v)}} &\n \\tilde{\\varepsilon_v^{(x)}} \\\\\n \\tilde{\\varepsilon_x^{(v)}} &\n \\tilde{\\varepsilon_x^{(x)}}\n \\end{array}\n \\right)\\\\\n \\tilde{\\varepsilon_v^{(v)}} &= \\left(\n \\begin{array}{c}\n \\varepsilon_v^{(\\tilde v)}\\\\\n \\varepsilon_{v'}^{(\\tilde v)}\\\\\n \\varepsilon_{v''}^{(\\tilde v)}\\\\\n \\vdots\n \\end{array}\n \\right) = \\left(\n \\begin{array}{cccc}\n \\varepsilon_v^{(v)} &\n \\varepsilon_v^{(v')} &\n \\varepsilon_v^{(v'')} &\n \\\\\n \\varepsilon_{v'}^{(v)} &\n \\varepsilon_{v'}^{(v')} &\n \\varepsilon_{v'}^{(v'')} &\n \\cdots\n \\\\\n \\varepsilon_{v''}^{(v)} &\n \\varepsilon_{v''}^{(v')} &\n \\varepsilon_{v''}^{(v'')} &\n \\\\\n & \\vdots & & \\ddots\n \\end{array}\n \\right) =\n -I \\otimes g^{(v)}\\\\\n \\tilde{\\varepsilon_x^{(x)}} &= \\left[\\mathcal D\\tilde x - \\tilde f\\right]^{(\\tilde x)} =\n \\mathcal D - I \\otimes f^{(x)}\n \\end{align}\n$$",
"_____no_output_____"
],
[
"See [A.4 generalised motion](?).",
"_____no_output_____"
],
[
"Thus,",
"_____no_output_____"
],
[
"$$\n\\tilde{\\varepsilon_u^{(u)}} = -\\left(\n \\begin{array}{cc}\n I\\otimes g^{(v)} &\n I\\otimes g^{(x)} \\\\\n I\\otimes f^{(v)} &\n I\\otimes f^{(x)} - \\mathcal D\n \\end{array}\n \\right)\n$$",
"_____no_output_____"
],
[
"#### Conditional precision $\\pmb\\Lambda_\\theta$",
"_____no_output_____"
],
[
"Conditional precision over parameter, $\\Lambda_\\theta = -L(t)^{(\\theta\\theta)}$ is",
"_____no_output_____"
],
[
"$$\n\\tilde{\\varepsilon_u^{(\\theta)}}^T \\tilde\\Pi \\tilde{\\varepsilon_u^{(\\theta)}} + P_\\theta\n$$",
"_____no_output_____"
],
[
"Let $\\theta = (\\theta_{:1}, \\theta_{:2}, \\dots, \\theta_{:k}, \\dots, \\theta_{:K})^T$.",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\tilde{\\varepsilon_u^{(\\theta_{:k})}} &= \\left(\n \\begin{array}{c}\n \\tilde y - \\tilde g\\\\\n \\mathcal D\\tilde x - \\tilde f\n \\end{array}\n \\right)^{(\\theta_{:k})} = -\\left(\n \\begin{array}{c}\n \\tilde{g^{(\\theta_{:k})}}\\\\\n \\tilde{f^{(\\theta_{:k})}}\n \\end{array}\n \\right)\\\\\n &= \\left(\n \\begin{array}{cc}\n I\\otimes g^{(v\\theta_{:k})} &\n I\\otimes g^{(x\\theta_{:k})} \\\\\n I\\otimes f^{(v\\theta_{:k})} &\n I\\otimes f^{(x\\theta_{:k})}\n \\end{array}\n \\right) \\left(\n \\begin{array}{c}\n \\tilde v\\\\\n \\tilde x\n \\end{array}\n \\right)\n \\end{align}\n$$",
"_____no_output_____"
],
[
"#### Conditional precision $\\pmb\\Lambda_\\lambda$",
"_____no_output_____"
],
[
"Conditional precision over hyperparameter, $\\Lambda_\\lambda = -L(t)^{(\\lambda\\lambda)}$, is, assuming $\\lambda_i, \\lambda_j\\in\\lambda$",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n &\\frac{\\partial^2}{\\partial\\lambda_j\\partial\\lambda_i}\\left(\n -\\frac 1 2 \\tilde{\\varepsilon_u}^T\\tilde\\Pi\\tilde{\\varepsilon_u} -\n \\frac 1 2 \\ln|\\Pi|\\right)\\\\\n =& -\\frac 1 2\n \\frac{\\partial}{\\partial\\lambda_j}\\left[\n \\tilde{\\varepsilon_u}^T \\frac{\\partial}{\\partial\\lambda_i}\\tilde\\Pi \\tilde{\\varepsilon_u} +\n \\frac{\\partial}{\\partial\\lambda_i}\\ln|\\tilde\\Pi|\n \\right]\\\\\n =& -\\frac 1 2\n \\tilde\\Pi^{(\\lambda_i)}\n \\tr\\left[\n \\tilde C \\tilde\\Pi^{(\\lambda_j)} \\tilde C \\tilde\\Pi^{(\\lambda_i)}\n \\right]\n \\end{align}\n$$",
"_____no_output_____"
],
[
"where",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n & \\tilde{\\varepsilon_u}^T \\frac{\\partial}{\\partial\\lambda_i}\\tilde\\Pi \\tilde{\\varepsilon_u}\\\\\n &= \\vec(\\tilde{\\varepsilon_u}\\tilde{\\varepsilon_u}^T)^T\n \\vec(\\tilde{\\Pi}^{(\\lambda_i)})\\\\\n &= \\tr\\left[\n \\tilde{\\varepsilon_u} \\tilde{\\varepsilon_u}^T \\tilde{\\Pi}^{(\\lambda_i)}\n \\right]\n \\end{align}\n$$",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\frac{\\partial}{\\partial\\lambda_i}\\ln|\\tilde\\Pi| &=\n \\frac{\\partial\\tilde\\Pi}{\\partial\\lambda_i}\n \\frac{\\partial|\\tilde\\Pi|}{\\partial\\tilde\\Pi}\n \\frac{\\partial}{\\partial|\\tilde\\Pi|}\\ln|\\tilde\\Pi|\\\\\n &= \\tilde\\Pi^{(\\lambda_i)}\n |\\tilde\\Pi|\\tr\\left[\\tilde C \\tilde\\Pi^{(\\lambda_i)}\\right]\n \\frac{1}{|\\tilde\\Pi|}\\\\\n &= \\tilde\\Pi^{(\\lambda_i)}\n \\tr\\left[\\tilde C \\tilde\\Pi^{(\\lambda_i)}\\right]\n \\end{align}\n$$",
"_____no_output_____"
],
[
"\\begin{align}\n \\frac{\\partial^2}{\\partial\\lambda_j\\partial\\lambda_i}\\ln|\\tilde\\Pi| &= \n \\frac{\\partial}{\\partial\\lambda_j}\n \\tilde\\Pi^{(\\lambda_i)}\n \\tr\\left[\\tilde C \\tilde\\Pi^{(\\lambda_i)}\\right]\\\\\n &=\\tilde\\Pi^{(\\lambda_i\\lambda_j)}\n \\tr\\left[\n \\tilde C \\tilde\\Pi^{(\\lambda_i)}\n \\right] +\n \\tilde\\Pi^{(\\lambda_i)}\n \\tr\\left[\n \\tilde C \\tilde\\Pi^{(\\lambda_j)} \\tilde C \\tilde\\Pi^{(\\lambda_i)} +\n \\tilde C \\tilde\\Pi^{(\\lambda_i\\lambda_j)}\n \\right]\\\\\n &= \\tilde\\Pi^{(\\lambda_i)}\n \\tr\\left[\n \\tilde C \\tilde\\Pi^{(\\lambda_j)} \\tilde C \\tilde\\Pi^{(\\lambda_i)}\n \\right]\\;\\;\\;\\;(\n \\text{if $\\partial^2 \\tilde\\Pi/\\partial\\lambda\\partial\\lambda = 0$})\n \\end{align}",
"_____no_output_____"
],
[
"See [Differentials of determinant](http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html#deriv_det)\n, [Differentials of inverses and trace](http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html#deriv_inv) from Matrix Reference Manual.",
"_____no_output_____"
],
[
"#### Conditional expectation over state (updating scheme)",
"_____no_output_____"
],
[
"Before calling upon Ozaki's scheme, one recalls that the observation, which affects the variational energy as well, has to be considered:",
"_____no_output_____"
],
[
"$$\n\\left(\\begin{array}{c}\n \\dot{\\tilde y}\\\\\n \\dot{\\tilde u}\n \\end{array}\\right) =\n\\left(\\begin{array}{c}\n \\mathcal D\\tilde y\\\\\n V_u^{(u)} + \\mathcal D \\tilde u\n \\end{array}\\right) \\implies\n\\mathcal J =\n\\left(\\begin{array}{cc}\n \\partial\\dot{\\tilde y}/\\partial\\tilde y &\n \\partial\\dot{\\tilde y}/\\partial\\tilde u \\\\\n \\partial\\dot{\\tilde u}/\\partial\\tilde y &\n \\partial\\dot{\\tilde u}/\\partial\\tilde u\n \\end{array}\\right) = \n\\left(\\begin{array}{cc}\n \\mathcal D & 0\\\\\n V_u^{(uy)} &\n V_u^{(uu)} + \\mathcal D\n \\end{array}\\right)\n$$",
"_____no_output_____"
],
[
"Thus,",
"_____no_output_____"
],
[
"$$\n\\left(\\begin{array}{c}\n \\Delta\\tilde y\\\\\n \\Delta\\tilde u\n \\end{array}\n \\right) = \n\\left(\\exp(\\mathcal J) - I\\right)\n\\mathcal J^{-1}\n\\left(\\begin{array}{c}\n \\mathcal D\\tilde y\\\\\n V_u^{(u)} + \\mathcal D\n \\end{array}\n \\right)\n$$",
"_____no_output_____"
],
[
"*****",
"_____no_output_____"
],
[
"$$\n\\begin{align}\n \\dot u &= g(u)\\\\\n \\ddot u &= \\dot u g^{(u)}\\\\\n \\dddot u &= \\frac{\\partial}{\\partial t} \\ddot u\\\\\n &= \\dot u \\frac{\\partial}{\\partial u} (\\dot u g^{(u)})\\\\\n &= \\dot u \\frac{\\partial}{\\partial u} (g(u) g^{(u)})\\\\\n &= \\underbrace{\\dot u g^{(u)}}_{=\\ddot u} g^{(u)} + \n \\underbrace{\\dot u g(u)g^{(uu)}}_{\\text{ignored under local linearisation}}\n \\end{align}\n$$",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7b7a766d8cd7757b1c4e4797e1183def611caa7 | 40,163 | ipynb | Jupyter Notebook | Project1.ipynb | mrr28/cs675_midterm | aa30ef46028e985cb74fba0b801863a4b9f3f93a | [
"MIT"
] | null | null | null | Project1.ipynb | mrr28/cs675_midterm | aa30ef46028e985cb74fba0b801863a4b9f3f93a | [
"MIT"
] | null | null | null | Project1.ipynb | mrr28/cs675_midterm | aa30ef46028e985cb74fba0b801863a4b9f3f93a | [
"MIT"
] | null | null | null | 61.599693 | 20,444 | 0.70916 | [
[
[
"## Import Packages\nimport pandas as pd\nimport numpy as np \nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\nimport seaborn as sns\nfrom sklearn import metrics\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.metrics import accuracy_score\nfrom sklearn import preprocessing\nfrom sklearn.preprocessing import StandardScaler",
"_____no_output_____"
],
[
"df = pd.read_csv('train.csv')",
"_____no_output_____"
],
[
"df.head(7)",
"_____no_output_____"
]
],
[
[
"## Extracting 'Features' and 'target'",
"_____no_output_____"
]
],
[
[
"features =df[['MSSubClass','LotArea','SalePrice', 'PoolArea', 'MSSubClass']]",
"_____no_output_____"
]
],
[
[
"### The features of only those columns are used which are providing rich information to get the target variable. Also, all the columns which have the same values for all rows are not included in feature set as they can't help to make predictions.",
"_____no_output_____"
]
],
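[
[
"Note that the feature list above also contains `SalePrice`, which is the prediction target itself, and `MSSubClass` appears twice; the stored outputs below were produced with that selection. A leakage-free version of the same step (illustrative sketch) would keep only genuine predictors:\n\n```python\nfeatures = df[['MSSubClass', 'LotArea', 'PoolArea']]\ntarget = df['SalePrice']\n```",
"_____no_output_____"
]
],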
[
[
"print(features.shape)",
"(1460, 5)\n"
],
[
"target= df['SalePrice']",
"_____no_output_____"
],
[
"print(target.shape)",
"(1460,)\n"
]
],
[
[
"## Splitting Dataset",
"_____no_output_____"
]
],
[
[
"# drop any rows with missing values (dropna must be called and assigned to take effect)\nfeatures = features.dropna()",
"_____no_output_____"
],
[
"x_train, x_test, y_train, y_test =train_test_split(features,target,test_size=0.25,random_state=0)",
"_____no_output_____"
]
],
[
[
"## Applying Naiive Bayes Algorithm",
"_____no_output_____"
]
],
[
[
"classifier = GaussianNB()\nclassifier.fit(x_train, y_train)\ny_pred = classifier.predict(x_test)\nprint(y_pred)",
"[201000 133000 110000 192000 88000 85000 283463 141000 755000 149000\n 209500 137000 225000 123000 119000 145000 190000 124000 149500 155000\n 165500 145000 110000 174000 185000 168000 177000 84500 320000 118500\n 110000 213000 156000 250000 372402 175000 278000 113000 262500 325000\n 244000 130000 165000 280000 402861 119000 125000 128000 172500 85000\n 410000 156000 168000 100000 275000 123000 132000 240000 139000 115000\n 137500 135000 134000 180500 193000 156932 132000 224900 139000 225000\n 189000 118000 81000 392500 112000 248000 134900 79000 320000 158000\n 140000 136500 107500 145000 200000 185000 105000 202500 186500 136500\n 201000 190000 187500 200000 172500 157000 213500 185000 124500 163000\n 260000 197900 120000 159500 106000 260000 143000 106500 179000 127000\n 90000 118500 190000 120000 184000 155000 383970 133000 193000 270000\n 141000 146000 128500 176000 214000 222000 410000 188000 200000 180000\n 206000 194500 143000 184000 116000 213500 139400 179000 108000 176000\n 158000 145000 215000 150000 109000 165500 201000 145000 320000 215000\n 180500 369900 239000 146000 161500 250000 89500 230000 147000 164500\n 96500 142000 197000 129000 232000 115000 175000 265900 207500 181000\n 176000 171000 196000 176000 113000 139000 135000 240000 112000 134000\n 315750 170000 116000 305900 83000 175000 106000 194500 194500 156000\n 138000 177000 214000 148000 127000 142500 80000 145000 171000 122000\n 139000 189000 120500 124000 160000 200000 160000 311872 275000 67000\n 159000 250000 93000 109900 402000 129000 83000 302000 250000 81000\n 187500 110000 117000 128500 213500 284000 230000 190000 135000 152000\n 88000 155000 115000 144000 250000 132500 136500 117000 83000 157900\n 110000 181000 192000 222500 181000 170000 187500 186500 160000 192000\n 181000 265979 100000 440000 230000 217000 110000 176000 556581 160000\n 172500 108000 131500 106000 381000 369900 345000 68400 250000 245000\n 125000 235000 145000 181000 103200 233170 164500 219500 195000 108000\n 149900 315000 178000 140000 194500 138000 118000 325000 556581 135750\n 83000 100000 315000 109900 163000 270000 205000 185000 160000 155000\n 91000 131000 165500 194500 155000 140000 147000 194500 179200 173000\n 109900 174000 129900 119000 125500 149500 305900 102000 179000 129500\n 80000 280000 118500 197000 140000 226000 132500 315000 224900 132500\n 119500 215000 210000 200000 185000 149900 129000 184000 135000 128000\n 372402 164500 157000 215000 165000 144000 125500 98000 91500 135500\n 227000 335000 115000 96500 181000 466500 290000 175000 235000 275000\n 325000 178000 235000 239000 85000]\n"
]
],
[
[
"## Evaluating Performance ",
"_____no_output_____"
]
],
[
[
"cm = metrics.confusion_matrix(y_test, y_pred)\nac = accuracy_score(y_test,y_pred)",
"_____no_output_____"
],
[
"print(ac)",
"0.5671232876712329\n"
],
[
"nb_score = classifier.score(x_test, y_test)\nprint(nb_score)",
"0.5671232876712329\n"
],
[
"m = metrics.confusion_matrix(y_test, y_pred)\nprint(cm)",
"[[1 0 0 ... 0 0 0]\n [0 0 0 ... 0 0 0]\n [0 1 0 ... 0 0 0]\n ...\n [0 0 0 ... 0 0 0]\n [0 0 0 ... 0 0 1]\n [0 0 0 ... 0 0 0]]\n"
]
],
[
[
"## Plotting Confusion Matrix",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nplt.figure(figsize=(10,10))\nplt.scatter(x_train.iloc[:,0:1], x_train.iloc[:,3:4], c=y_train[:], s=350, cmap='viridis')\nplt.title('Training data')\nplt.show()",
"_____no_output_____"
]
],
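[
[
"Note that the scatter plot above visualizes two of the training features against the target rather than the confusion matrix itself. A minimal sketch for actually plotting the confusion matrix computed in the evaluation step (assuming `cm` is still in scope) could look like:\n\n```python\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(8, 8))\nplt.imshow(cm, interpolation='nearest', cmap='viridis')\nplt.title('Confusion matrix')\nplt.xlabel('Predicted label')\nplt.ylabel('True label')\nplt.colorbar()\nplt.show()\n```\n\nBecause the target here takes many distinct numeric values, the confusion matrix is extremely sparse, so this plot mainly serves as a sanity check.",
"_____no_output_____"
]
],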
[
[
"# Summary",
"_____no_output_____"
],
[
"### To prepare the dataset, I extract the features matrix and target array from the pandas DataFrame , Affter Splitting dataset, I applied Gaussian Naive Bayes Classifier to predict on a new data. After evaulating the performance to find the accuracy score which is approximately 0.5671. Plotting the Confusion Matrix in a Viridis Map, shows that the when data is abundant, other more complicated models tend to outperform Naive Bayes.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7b7ccb756caac0ae1b0d82c29dfaaee949eaa49 | 47,795 | ipynb | Jupyter Notebook | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity | 4750d7907a8686749dce3f554dc714a4d9dc1c03 | [
"MIT"
] | null | null | null | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity | 4750d7907a8686749dce3f554dc714a4d9dc1c03 | [
"MIT"
] | 43 | 2020-09-26T01:26:27.000Z | 2022-03-12T00:46:10.000Z | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity | 4750d7907a8686749dce3f554dc714a4d9dc1c03 | [
"MIT"
] | null | null | null | 42.750447 | 2,015 | 0.517418 | [
[
[
"# Sentiment Analysis with an RNN\n\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the *sequence* of words. Here we'll use a dataset of movie reviews, accompanied by labels.\n\nThe architecture for this network is shown below.\n\n<img src=\"assets/network_diagram.png\" width=400px>\n\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.\n\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\n\nWe don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.",
"_____no_output_____"
]
],
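[
[
"The rest of this notebook builds this network with low-level TensorFlow ops. Purely as an illustrative sketch of the layer stack described above (not the code used below, and with placeholder sizes), the same architecture could be written with `tf.keras` as:\n\n```python\nimport tensorflow as tf\n\nvocab_size = 74001   # placeholder; the real vocabulary size is computed later\nembed_size = 300     # matches the embedding size used later in this notebook\nlstm_size = 256      # matches the LSTM size used later in this notebook\n\nmodel = tf.keras.Sequential([\n    tf.keras.layers.Embedding(vocab_size, embed_size),\n    tf.keras.layers.LSTM(lstm_size),\n    tf.keras.layers.Dense(1, activation='sigmoid')\n])\n```",
"_____no_output_____"
]
],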
[
[
"import numpy as np\nimport tensorflow as tf",
"_____no_output_____"
],
[
"with open('../sentiment-network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment-network/labels.txt', 'r') as f:\n labels = f.read()",
"_____no_output_____"
],
[
"reviews[:2000]",
"_____no_output_____"
]
],
[
[
"## Data preprocessing\n\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.\n\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines `\\n`. To deal with those, I'm going to split the text into each review using `\\n` as the delimiter. Then I can combined all the reviews back together into one big string.\n\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.",
"_____no_output_____"
]
],
[
[
"from string import punctuation\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()",
"_____no_output_____"
],
[
"all_text[:2000]",
"_____no_output_____"
],
[
"words[:100]",
"_____no_output_____"
]
],
[
[
"### Encoding the words\n\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\n> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.\n> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`. ",
"_____no_output_____"
]
],
[
[
"# Create your dictionary that maps vocab words to integers here\nfrom collections import Counter\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\n# Convert the reviews to integers, same shape as reviews list, but with integers\nreviews_ints = []\nfor each in reviews:\n reviews_ints.append([vocab_to_int[word] for word in each.split()]) ",
"_____no_output_____"
]
],
[
[
"### Encoding the labels\n\nOur labels are \"positive\" or \"negative\". To use these labels in our network, we need to convert them to 0 and 1.\n\n> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively.",
"_____no_output_____"
]
],
[
[
"# Convert labels to 1s and 0s for 'positive' and 'negative'\nlabels = labels.split('\\n')\nlabels = np.array([1 if each == 'positive' else 0 for each in labels])",
"_____no_output_____"
]
],
[
[
"If you built `labels` correctly, you should see the next output.",
"_____no_output_____"
]
],
[
[
"from collections import Counter\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))",
"Zero-length reviews: 1\nMaximum review length: 2514\n"
]
],
[
[
"Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.\n\n> **Exercise:** First, remove the review with zero length from the `reviews_ints` list.",
"_____no_output_____"
]
],
[
[
"# Filter out that review with 0 length\nnon_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]\nlen(non_zero_idx)",
"_____no_output_____"
],
[
"reviews_ints[-1]",
"_____no_output_____"
],
[
"reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]\nlabels = np.array([labels[ii] for ii in non_zero_idx])",
"_____no_output_____"
]
],
[
[
"> **Exercise:** Now, create an array `features` that contains the data we'll pass to the network. The data should come from `review_ints`, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. For reviews longer than 200, use on the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.\n\n",
"_____no_output_____"
]
],
[
[
"seq_len = 200\n\nfeatures = np.zeros((len(reviews_ints), seq_len), dtype=int)\nfor i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_len]",
"_____no_output_____"
]
],
[
[
"If you build features correctly, it should look like that cell output below.",
"_____no_output_____"
]
],
[
[
"features[:10,:100]",
"_____no_output_____"
]
],
[
[
"## Training, Validation, Test\n\n",
"_____no_output_____"
],
[
"With our data in nice shape, we'll split it into training, validation, and test sets.\n\n> **Exercise:** Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, `train_x` and `train_y` for example. Define a split fraction, `split_frac` as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.",
"_____no_output_____"
]
],
[
[
"split_frac = 0.8\n\nsplit_idx = int(len(features)*0.8)\ntrain_x, val_x = features[:split_idx], features[split_idx:]\ntrain_y, val_y = labels[:split_idx], labels[split_idx:]\n\ntest_idx = int(len(val_x)*0.5)\nval_x, test_x = val_x[:test_idx], val_x[test_idx:]\nval_y, test_y = val_y[:test_idx], val_y[test_idx:]\n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))",
"\t\t\tFeature Shapes:\nTrain set: \t\t(20000, 200) \nValidation set: \t(2500, 200) \nTest set: \t\t(2500, 200)\n"
]
],
[
[
"With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:\n```\n Feature Shapes:\nTrain set: \t\t (20000, 200) \nValidation set: \t(2500, 200) \nTest set: \t\t (2500, 200)\n```",
"_____no_output_____"
],
[
"## Build the graph\n\nHere, we'll build the graph. First up, defining the hyperparameters.\n\n* `lstm_size`: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\n* `lstm_layers`: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\n* `batch_size`: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.\n* `learning_rate`: Learning rate",
"_____no_output_____"
]
],
[
[
"lstm_size = 256\nlstm_layers = 1\nbatch_size = 500\nlearning_rate = 0.001",
"_____no_output_____"
]
],
[
[
"For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be `batch_size` vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.",
"_____no_output_____"
],
[
"> **Exercise:** Create the `inputs_`, `labels_`, and drop out `keep_prob` placeholders using `tf.placeholder`. `labels_` needs to be two-dimensional to work with some functions later. Since `keep_prob` is a scalar (a 0-dimensional tensor), you shouldn't provide a size to `tf.placeholder`.",
"_____no_output_____"
]
],
[
[
"n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1\n\n# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder(tf.int32, (None, None), name='inputs')\n labels_ = tf.placeholder(tf.int32, (None, None), name='labels')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')",
"_____no_output_____"
]
],
[
[
"### Embedding\n\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\n> **Exercise:** Create the embedding lookup matrix as a `tf.Variable`. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup). This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].\n\n",
"_____no_output_____"
]
],
[
[
"# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, inputs_)",
"_____no_output_____"
]
],
[
[
"### LSTM cell\n\n<img src=\"assets/network_diagram.png\" width=400px>\n\nNext, we'll create our LSTM cells to use in the recurrent network ([TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn)). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\n\nTo create a basic LSTM cell for the graph, you'll want to use `tf.contrib.rnn.BasicLSTMCell`. Looking at the function documentation:\n\n```\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)\n```\n\nyou can see it takes a parameter called `num_units`, the number of units in the cell, called `lstm_size` in this code. So then, you can write something like \n\n```\nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\n```\n\nto create an LSTM cell with `num_units`. Next, you can add dropout to the cell with `tf.contrib.rnn.DropoutWrapper`. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\n\n```\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\n```\n\nMost of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with `tf.contrib.rnn.MultiRNNCell`:\n\n```\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\n```\n\nHere, `[drop] * lstm_layers` creates a list of cells (`drop`) that is `lstm_layers` long. The `MultiRNNCell` wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\n\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.\n\n> **Exercise:** Below, use `tf.contrib.rnn.BasicLSTMCell` to create an LSTM cell. Then, add drop out to it with `tf.contrib.rnn.DropoutWrapper`. Finally, create multiple LSTM layers with `tf.contrib.rnn.MultiRNNCell`.\n\nHere is [a tutorial on building RNNs](https://www.tensorflow.org/tutorials/recurrent) that will help you out.\n",
"_____no_output_____"
]
],
[
[
"with graph.as_default():\n # Your basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)",
"_____no_output_____"
]
],
[
[
"### RNN forward pass\n\n<img src=\"assets/network_diagram.png\" width=400px>\n\nNow we need to actually run the data through the RNN nodes. You can use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) to do this. You'd pass in the RNN cell you created (our multiple layered LSTM `cell` for instance), and the inputs to the network.\n\n```\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\n```\n\nAbove I created an initial state, `initial_state`, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. `tf.nn.dynamic_rnn` takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\n> **Exercise:** Use `tf.nn.dynamic_rnn` to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, `embed`.\n\n",
"_____no_output_____"
]
],
[
[
"with graph.as_default():\n outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)",
"_____no_output_____"
]
],
[
[
"### Output\n\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with `outputs[:, -1]`, the calculate the cost from that and `labels_`.",
"_____no_output_____"
]
],
[
[
"with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)",
"_____no_output_____"
]
],
[
[
"### Validation accuracy\n\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.",
"_____no_output_____"
]
],
[
[
"with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"_____no_output_____"
]
],
[
[
"### Batching\n\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the `x` and `y` arrays and returns slices out of those arrays with size `[batch_size]`.",
"_____no_output_____"
]
],
[
[
"def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]",
"_____no_output_____"
]
],
[
[
"## Training\n\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the `checkpoints` directory exists.",
"_____no_output_____"
]
],
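[
[
"One simple way to make sure the `checkpoints` directory exists (a small convenience, not part of the original code) is:\n\n```python\nimport os\nos.makedirs('checkpoints', exist_ok=True)\n```",
"_____no_output_____"
]
],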
[
[
"epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")",
"Epoch: 0/10 Iteration: 5 Train loss: 0.240\nEpoch: 0/10 Iteration: 10 Train loss: 0.240\nEpoch: 0/10 Iteration: 15 Train loss: 0.221\nEpoch: 0/10 Iteration: 20 Train loss: 0.184\nEpoch: 0/10 Iteration: 25 Train loss: 0.230\nVal acc: 0.629\nEpoch: 0/10 Iteration: 30 Train loss: 0.235\nEpoch: 0/10 Iteration: 35 Train loss: 0.238\nEpoch: 0/10 Iteration: 40 Train loss: 0.231\nEpoch: 1/10 Iteration: 45 Train loss: 0.201\nEpoch: 1/10 Iteration: 50 Train loss: 0.205\nVal acc: 0.673\nEpoch: 1/10 Iteration: 55 Train loss: 0.171\nEpoch: 1/10 Iteration: 60 Train loss: 0.185\nEpoch: 1/10 Iteration: 65 Train loss: 0.211\nEpoch: 1/10 Iteration: 70 Train loss: 0.213\nEpoch: 1/10 Iteration: 75 Train loss: 0.214\nVal acc: 0.640\nEpoch: 1/10 Iteration: 80 Train loss: 0.210\nEpoch: 2/10 Iteration: 85 Train loss: 0.174\nEpoch: 2/10 Iteration: 90 Train loss: 0.164\nEpoch: 2/10 Iteration: 95 Train loss: 0.133\nEpoch: 2/10 Iteration: 100 Train loss: 0.129\nVal acc: 0.745\nEpoch: 2/10 Iteration: 105 Train loss: 0.119\nEpoch: 2/10 Iteration: 110 Train loss: 0.197\nEpoch: 2/10 Iteration: 115 Train loss: 0.174\nEpoch: 2/10 Iteration: 120 Train loss: 0.174\nEpoch: 3/10 Iteration: 125 Train loss: 0.119\nVal acc: 0.786\nEpoch: 3/10 Iteration: 130 Train loss: 0.132\nEpoch: 3/10 Iteration: 135 Train loss: 0.120\nEpoch: 3/10 Iteration: 140 Train loss: 0.102\nEpoch: 3/10 Iteration: 145 Train loss: 0.121\nEpoch: 3/10 Iteration: 150 Train loss: 0.119\nVal acc: 0.790\nEpoch: 3/10 Iteration: 155 Train loss: 0.133\nEpoch: 3/10 Iteration: 160 Train loss: 0.158\nEpoch: 4/10 Iteration: 165 Train loss: 0.116\nEpoch: 4/10 Iteration: 170 Train loss: 0.119\nEpoch: 4/10 Iteration: 175 Train loss: 0.119\nVal acc: 0.781\nEpoch: 4/10 Iteration: 180 Train loss: 0.104\nEpoch: 4/10 Iteration: 185 Train loss: 0.114\nEpoch: 4/10 Iteration: 190 Train loss: 0.103\nEpoch: 4/10 Iteration: 195 Train loss: 0.138\nEpoch: 4/10 Iteration: 200 Train loss: 0.126\nVal acc: 0.800\nEpoch: 5/10 Iteration: 205 Train loss: 0.087\nEpoch: 5/10 Iteration: 210 Train loss: 0.108\nEpoch: 5/10 Iteration: 215 Train loss: 0.119\nEpoch: 5/10 Iteration: 220 Train loss: 0.093\nEpoch: 5/10 Iteration: 225 Train loss: 0.088\nVal acc: 0.749\nEpoch: 5/10 Iteration: 230 Train loss: 0.086\nEpoch: 5/10 Iteration: 235 Train loss: 0.092\nEpoch: 5/10 Iteration: 240 Train loss: 0.098\nEpoch: 6/10 Iteration: 245 Train loss: 0.076\nEpoch: 6/10 Iteration: 250 Train loss: 0.092\nVal acc: 0.812\nEpoch: 6/10 Iteration: 255 Train loss: 0.067\nEpoch: 6/10 Iteration: 260 Train loss: 0.132\nEpoch: 6/10 Iteration: 265 Train loss: 0.119\nEpoch: 6/10 Iteration: 270 Train loss: 0.074\nEpoch: 6/10 Iteration: 275 Train loss: 0.099\nVal acc: 0.796\nEpoch: 6/10 Iteration: 280 Train loss: 0.434\nEpoch: 7/10 Iteration: 285 Train loss: 0.494\nEpoch: 7/10 Iteration: 290 Train loss: 0.497\nEpoch: 7/10 Iteration: 295 Train loss: 0.491\nEpoch: 7/10 Iteration: 300 Train loss: 0.353\nVal acc: 0.526\nEpoch: 7/10 Iteration: 305 Train loss: 0.317\nEpoch: 7/10 Iteration: 310 Train loss: 0.285\nEpoch: 7/10 Iteration: 315 Train loss: 0.266\nEpoch: 7/10 Iteration: 320 Train loss: 0.254\nEpoch: 8/10 Iteration: 325 Train loss: 0.244\nVal acc: 0.594\nEpoch: 8/10 Iteration: 330 Train loss: 0.229\nEpoch: 8/10 Iteration: 335 Train loss: 0.242\nEpoch: 8/10 Iteration: 340 Train loss: 0.203\nEpoch: 8/10 Iteration: 345 Train loss: 0.170\nEpoch: 8/10 Iteration: 350 Train loss: 0.246\nVal acc: 0.675\nEpoch: 8/10 Iteration: 355 Train loss: 0.184\nEpoch: 8/10 Iteration: 360 Train loss: 0.176\nEpoch: 9/10 Iteration: 365 Train 
loss: 0.293\nEpoch: 9/10 Iteration: 370 Train loss: 0.165\nEpoch: 9/10 Iteration: 375 Train loss: 0.160\nVal acc: 0.748\nEpoch: 9/10 Iteration: 380 Train loss: 0.117\nEpoch: 9/10 Iteration: 385 Train loss: 0.092\nEpoch: 9/10 Iteration: 390 Train loss: 0.101\nEpoch: 9/10 Iteration: 395 Train loss: 0.121\nEpoch: 9/10 Iteration: 400 Train loss: 0.116\nVal acc: 0.768\n"
]
],
[
[
"## Testing",
"_____no_output_____"
]
],
[
[
"test_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))",
"INFO:tensorflow:Restoring parameters from checkpoints/sentiment.ckpt\nTest accuracy: 0.776\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"raw",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"raw"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b7cf1364f39e1c2c9d185d6a973895279f882d | 87,186 | ipynb | Jupyter Notebook | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu | c1106f70c74dafdb7d3096e8c9d90c5a0daa1cc8 | [
"Apache-2.0"
] | null | null | null | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu | c1106f70c74dafdb7d3096e8c9d90c5a0daa1cc8 | [
"Apache-2.0"
] | null | null | null | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu | c1106f70c74dafdb7d3096e8c9d90c5a0daa1cc8 | [
"Apache-2.0"
] | null | null | null | 36.958881 | 607 | 0.580747 | [
[
[
"#### Copyright 2018 Google LLC.",
"_____no_output_____"
]
],
[
[
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Recommendation Systems with TensorFlow\n\nThis Colab notebook complements the course on [Recommendation Systems](https://developers.google.com/machine-learning/recommendation/). Specifically, we'll be using matrix factorization to learn user and movie embeddings.\n\n",
"_____no_output_____"
],
[
"# Introduction\n\nWe will create a movie recommendation system based on the [MovieLens](https://movielens.org/) dataset available [here](http://grouplens.org/datasets/movielens/). The data consists of movies ratings (on a scale of 1 to 5).\n\n## Outline\n 1. Exploring the MovieLens Data (10 minutes)\n 1. Preliminaries (25 minutes)\n 1. Training a matrix factorization model (15 minutes)\n 1. Inspecting the Embeddings (15 minutes)\n 1. Regularization in matrix factorization (15 minutes)\n 1. Softmax model training (30 minutes)\n\n## Setup\n\nLet's get started by importing the required packages.",
"_____no_output_____"
]
],
[
[
"# @title Imports (run this cell)\nfrom __future__ import print_function\n\nimport numpy as np\nimport pandas as pd\nimport collections\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom IPython import display\nfrom matplotlib import pyplot as plt\nimport sklearn\nimport sklearn.manifold\nimport tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\ntf.logging.set_verbosity(tf.logging.ERROR)\n\n# Add some convenience functions to Pandas DataFrame.\npd.options.display.max_rows = 10\npd.options.display.float_format = '{:.3f}'.format\ndef mask(df, key, function):\n \"\"\"Returns a filtered dataframe, by applying function to key\"\"\"\n return df[function(df[key])]\n\ndef flatten_cols(df):\n df.columns = [' '.join(col).strip() for col in df.columns.values]\n return df\n\npd.DataFrame.mask = mask\npd.DataFrame.flatten_cols = flatten_cols\n\n# Install Altair and activate its colab renderer.\nprint(\"Installing Altair...\")\n!pip install git+git://github.com/altair-viz/altair.git\nimport altair as alt\nalt.data_transformers.enable('default', max_rows=None)\nalt.renderers.enable('colab')\nprint(\"Done installing Altair.\")\n\n# Install spreadsheets and import authentication module.\nUSER_RATINGS = False\n!pip install --upgrade -q gspread\nfrom google.colab import auth\nimport gspread\nfrom oauth2client.client import GoogleCredentials",
"_____no_output_____"
]
],
[
[
"We then download the MovieLens Data, and create DataFrames containing movies, users, and ratings.",
"_____no_output_____"
]
],
[
[
"# @title Load the MovieLens data (run this cell).\n\n# Download MovieLens data.\nprint(\"Downloading movielens data...\")\nfrom urllib.request import urlretrieve\nimport zipfile\n\nurlretrieve(\"http://files.grouplens.org/datasets/movielens/ml-100k.zip\", \"movielens.zip\")\nzip_ref = zipfile.ZipFile('movielens.zip', \"r\")\nzip_ref.extractall()\nprint(\"Done. Dataset contains:\")\nprint(zip_ref.read('ml-100k/u.info'))\n\n# Load each data set (users, movies, and ratings).\nusers_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']\nusers = pd.read_csv(\n 'ml-100k/u.user', sep='|', names=users_cols, encoding='latin-1')\n\nratings_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']\nratings = pd.read_csv(\n 'ml-100k/u.data', sep='\\t', names=ratings_cols, encoding='latin-1')\n\n# The movies file contains a binary feature for each genre.\ngenre_cols = [\n \"genre_unknown\", \"Action\", \"Adventure\", \"Animation\", \"Children\", \"Comedy\",\n \"Crime\", \"Documentary\", \"Drama\", \"Fantasy\", \"Film-Noir\", \"Horror\",\n \"Musical\", \"Mystery\", \"Romance\", \"Sci-Fi\", \"Thriller\", \"War\", \"Western\"\n]\nmovies_cols = [\n 'movie_id', 'title', 'release_date', \"video_release_date\", \"imdb_url\"\n] + genre_cols\nmovies = pd.read_csv(\n 'ml-100k/u.item', sep='|', names=movies_cols, encoding='latin-1')\n\n# Since the ids start at 1, we shift them to start at 0.\nusers[\"user_id\"] = users[\"user_id\"].apply(lambda x: str(x-1))\nmovies[\"movie_id\"] = movies[\"movie_id\"].apply(lambda x: str(x-1))\nmovies[\"year\"] = movies['release_date'].apply(lambda x: str(x).split('-')[-1])\nratings[\"movie_id\"] = ratings[\"movie_id\"].apply(lambda x: str(x-1))\nratings[\"user_id\"] = ratings[\"user_id\"].apply(lambda x: str(x-1))\nratings[\"rating\"] = ratings[\"rating\"].apply(lambda x: float(x))\n\n# Compute the number of movies to which a genre is assigned.\ngenre_occurences = movies[genre_cols].sum().to_dict()\n\n# Since some movies can belong to more than one genre, we create different\n# 'genre' columns as follows:\n# - all_genres: all the active genres of the movie.\n# - genre: randomly sampled from the active genres.\ndef mark_genres(movies, genres):\n def get_random_genre(gs):\n active = [genre for genre, g in zip(genres, gs) if g==1]\n if len(active) == 0:\n return 'Other'\n return np.random.choice(active)\n def get_all_genres(gs):\n active = [genre for genre, g in zip(genres, gs) if g==1]\n if len(active) == 0:\n return 'Other'\n return '-'.join(active)\n movies['genre'] = [\n get_random_genre(gs) for gs in zip(*[movies[genre] for genre in genres])]\n movies['all_genres'] = [\n get_all_genres(gs) for gs in zip(*[movies[genre] for genre in genres])]\n\nmark_genres(movies, genre_cols)\n\n# Create one merged DataFrame containing all the movielens data.\nmovielens = ratings.merge(movies, on='movie_id').merge(users, on='user_id')\n\n# Utility to split the data into training and test sets.\ndef split_dataframe(df, holdout_fraction=0.1):\n \"\"\"Splits a DataFrame into training and test sets.\n Args:\n df: a dataframe.\n holdout_fraction: fraction of dataframe rows to use in the test set.\n Returns:\n train: dataframe for training\n test: dataframe for testing\n \"\"\"\n test = df.sample(frac=holdout_fraction, replace=False)\n train = df[~df.index.isin(test.index)]\n return train, test",
"_____no_output_____"
]
],
[
[
"# I. Exploring the Movielens Data\nBefore we dive into model building, let's inspect our MovieLens dataset. It is usually helpful to understand the statistics of the dataset.",
"_____no_output_____"
],
[
"### Users\nWe start by printing some basic statistics describing the numeric user features.",
"_____no_output_____"
]
],
[
[
"users.describe()",
"_____no_output_____"
]
],
[
[
"We can also print some basic statistics describing the categorical user features",
"_____no_output_____"
]
],
[
[
"users.describe(include=[np.object])",
"_____no_output_____"
]
],
[
[
"We can also create histograms to further understand the distribution of the users. We use Altair to create an interactive chart.",
"_____no_output_____"
]
],
[
[
"# @title Altair visualization code (run this cell)\n# The following functions are used to generate interactive Altair charts.\n# We will display histograms of the data, sliced by a given attribute.\n\n# Create filters to be used to slice the data.\noccupation_filter = alt.selection_multi(fields=[\"occupation\"])\noccupation_chart = alt.Chart().mark_bar().encode(\n x=\"count()\",\n y=alt.Y(\"occupation:N\"),\n color=alt.condition(\n occupation_filter,\n alt.Color(\"occupation:N\", scale=alt.Scale(scheme='category20')),\n alt.value(\"lightgray\")),\n).properties(width=300, height=300, selection=occupation_filter)\n\n# A function that generates a histogram of filtered data.\ndef filtered_hist(field, label, filter):\n \"\"\"Creates a layered chart of histograms.\n The first layer (light gray) contains the histogram of the full data, and the\n second contains the histogram of the filtered data.\n Args:\n field: the field for which to generate the histogram.\n label: String label of the histogram.\n filter: an alt.Selection object to be used to filter the data.\n \"\"\"\n base = alt.Chart().mark_bar().encode(\n x=alt.X(field, bin=alt.Bin(maxbins=10), title=label),\n y=\"count()\",\n ).properties(\n width=300,\n )\n return alt.layer(\n base.transform_filter(filter),\n base.encode(color=alt.value('lightgray'), opacity=alt.value(.7)),\n ).resolve_scale(y='independent')\n",
"_____no_output_____"
]
],
[
[
"Next, we look at the distribution of ratings per user. Clicking on an occupation in the right chart will filter the data by that occupation. The corresponding histogram is shown in blue, and superimposed with the histogram for the whole data (in light gray). You can use SHIFT+click to select multiple subsets.\n\nWhat do you observe, and how might this affect the recommendations?",
"_____no_output_____"
]
],
[
[
"users_ratings = (\n ratings\n .groupby('user_id', as_index=False)\n .agg({'rating': ['count', 'mean']})\n .flatten_cols()\n .merge(users, on='user_id')\n)\n\n# Create a chart for the count, and one for the mean.\nalt.hconcat(\n filtered_hist('rating count', '# ratings / user', occupation_filter),\n filtered_hist('rating mean', 'mean user rating', occupation_filter),\n occupation_chart,\n data=users_ratings)",
"_____no_output_____"
]
],
[
[
"### Movies\n\nIt is also useful to look at information about the movies and their ratings.",
"_____no_output_____"
]
],
[
[
"movies_ratings = movies.merge(\n ratings\n .groupby('movie_id', as_index=False)\n .agg({'rating': ['count', 'mean']})\n .flatten_cols(),\n on='movie_id')\n\ngenre_filter = alt.selection_multi(fields=['genre'])\ngenre_chart = alt.Chart().mark_bar().encode(\n x=\"count()\",\n y=alt.Y('genre'),\n color=alt.condition(\n genre_filter,\n alt.Color(\"genre:N\"),\n alt.value('lightgray'))\n).properties(height=300, selection=genre_filter)",
"_____no_output_____"
],
[
"(movies_ratings[['title', 'rating count', 'rating mean']]\n .sort_values('rating count', ascending=False)\n .head(10))",
"_____no_output_____"
],
[
"(movies_ratings[['title', 'rating count', 'rating mean']]\n .mask('rating count', lambda x: x > 20)\n .sort_values('rating mean', ascending=False)\n .head(10))",
"_____no_output_____"
]
],
[
[
"Finally, the last chart shows the distribution of the number of ratings and average rating.",
"_____no_output_____"
]
],
[
[
"# Display the number of ratings and average rating per movie.\nalt.hconcat(\n filtered_hist('rating count', '# ratings / movie', genre_filter),\n filtered_hist('rating mean', 'mean movie rating', genre_filter),\n genre_chart,\n data=movies_ratings)",
"_____no_output_____"
]
],
[
[
"# II. Preliminaries\n\nOur goal is to factorize the ratings matrix $A$ into the product of a user embedding matrix $U$ and movie embedding matrix $V$, such that $A \\approx UV^\\top$ with\n$U = \\begin{bmatrix} u_{1} \\\\ \\hline \\vdots \\\\ \\hline u_{N} \\end{bmatrix}$ and\n$V = \\begin{bmatrix} v_{1} \\\\ \\hline \\vdots \\\\ \\hline v_{M} \\end{bmatrix}$.\n\nHere\n- $N$ is the number of users,\n- $M$ is the number of movies,\n- $A_{ij}$ is the rating of the $j$th movies by the $i$th user,\n- each row $U_i$ is a $d$-dimensional vector (embedding) representing user $i$,\n- each row $V_j$ is a $d$-dimensional vector (embedding) representing movie $j$,\n- the prediction of the model for the $(i, j)$ pair is the dot product $\\langle U_i, V_j \\rangle$.\n\n",
"_____no_output_____"
],
[
"## Sparse Representation of the Rating Matrix\n\nThe rating matrix could be very large and, in general, most of the entries are unobserved, since a given user will only rate a small subset of movies. For effcient representation, we will use a [tf.SparseTensor](https://www.tensorflow.org/api_docs/python/tf/SparseTensor). A `SparseTensor` uses three tensors to represent the matrix: `tf.SparseTensor(indices, values, dense_shape)` represents a tensor, where a value $A_{ij} = a$ is encoded by setting `indices[k] = [i, j]` and `values[k] = a`. The last tensor `dense_shape` is used to specify the shape of the full underlying matrix.\n\n#### Toy example\nAssume we have $2$ users and $4$ movies. Our toy ratings dataframe has three ratings,\n\nuser\\_id | movie\\_id | rating\n--:|--:|--:\n0 | 0 | 5.0\n0 | 1 | 3.0\n1 | 3 | 1.0\n\nThe corresponding rating matrix is\n\n$$\nA =\n\\begin{bmatrix}\n5.0 & 3.0 & 0 & 0 \\\\\n0 & 0 & 0 & 1.0\n\\end{bmatrix}\n$$\n\nAnd the SparseTensor representation is,\n```python\nSparseTensor(\n indices=[[0, 0], [0, 1], [1,3]],\n values=[5.0, 3.0, 1.0],\n dense_shape=[2, 4])\n```\n\n",
"_____no_output_____"
],
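[
"As a quick sanity check, here is a minimal sketch (using the toy example above and the `tf` import from the setup cell) that builds this `SparseTensor` and converts it back to a dense matrix:\n\n```python\nA_toy = tf.SparseTensor(\n    indices=[[0, 0], [0, 1], [1, 3]],\n    values=[5.0, 3.0, 1.0],\n    dense_shape=[2, 4])\n\nwith tf.Session() as sess:\n  print(sess.run(tf.sparse.to_dense(A_toy)))\n  # [[5. 3. 0. 0.]\n  #  [0. 0. 0. 1.]]\n```",
"_____no_output_____"
],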
[
"### Exercise 1: Build a tf.SparseTensor representation of the Rating Matrix.\n\nIn this exercise, we'll write a function that maps from our `ratings` DataFrame to a `tf.SparseTensor`.\n\nHint: you can select the values of a given column of a Dataframe `df` using `df['column_name'].values`.",
"_____no_output_____"
]
],
[
[
"def build_rating_sparse_tensor(ratings_df):\n \"\"\"\n Args:\n ratings_df: a pd.DataFrame with `user_id`, `movie_id` and `rating` columns.\n Returns:\n A tf.SparseTensor representing the ratings matrix.\n \"\"\"\n # ========================= Complete this section ============================\n # indices =\n # values =\n # ============================================================================\n\n return tf.SparseTensor(\n indices=indices,\n values=values,\n dense_shape=[users.shape[0], movies.shape[0]])",
"_____no_output_____"
],
[
"#@title Solution\ndef build_rating_sparse_tensor(ratings_df):\n \"\"\"\n Args:\n ratings_df: a pd.DataFrame with `user_id`, `movie_id` and `rating` columns.\n Returns:\n a tf.SparseTensor representing the ratings matrix.\n \"\"\"\n indices = ratings_df[['user_id', 'movie_id']].values\n values = ratings_df['rating'].values\n return tf.SparseTensor(\n indices=indices,\n values=values,\n dense_shape=[users.shape[0], movies.shape[0]])",
"_____no_output_____"
]
],
[
[
"## Calculating the error\n\nThe model approximates the ratings matrix $A$ by a low-rank product $UV^\\top$. We need a way to measure the approximation error. We'll start by using the Mean Squared Error of observed entries only (we will revisit this later). It is defined as\n\n$$\n\\begin{align*}\n\\text{MSE}(A, UV^\\top)\n&= \\frac{1}{|\\Omega|}\\sum_{(i, j) \\in\\Omega}{( A_{ij} - (UV^\\top)_{ij})^2} \\\\\n&= \\frac{1}{|\\Omega|}\\sum_{(i, j) \\in\\Omega}{( A_{ij} - \\langle U_i, V_j\\rangle)^2}\n\\end{align*}\n$$\nwhere $\\Omega$ is the set of observed ratings, and $|\\Omega|$ is the cardinality of $\\Omega$.\n\n",
"_____no_output_____"
],
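[
"Before writing the TensorFlow version in the exercise below, it can help to see the same computation in plain NumPy on the toy example (an illustrative sketch; the variable names here are made up):\n\n```python\nimport numpy as np\n\n# Observed entries (i, j, rating) of the toy rating matrix from the previous section.\nobserved = [(0, 0, 5.0), (0, 1, 3.0), (1, 3, 1.0)]\nU = np.random.randn(2, 3)  # user embeddings, d=3\nV = np.random.randn(4, 3)  # movie embeddings, d=3\n\nmse = np.mean([(a - U[i].dot(V[j])) ** 2 for i, j, a in observed])\nprint(mse)\n```",
"_____no_output_____"
],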
[
"### Exercise 2: Mean Squared Error\n\nWrite a TensorFlow function that takes a sparse rating matrix $A$ and the two embedding matrices $U, V$ and returns the mean squared error $\\text{MSE}(A, UV^\\top)$.\n\nHints:\n * in this section, we only consider observed entries when calculating the loss.\n * a `SparseTensor` `sp_x` is a tuple of three Tensors: `sp_x.indices`, `sp_x.values` and `sp_x.dense_shape`.\n * you may find [`tf.gather_nd`](https://www.tensorflow.org/api_docs/python/tf/gather_nd) and [`tf.losses.mean_squared_error`](https://www.tensorflow.org/api_docs/python/tf/losses/mean_squared_error) helpful.",
"_____no_output_____"
]
],
[
[
"def sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):\n \"\"\"\n Args:\n sparse_ratings: A SparseTensor rating matrix, of dense_shape [N, M]\n user_embeddings: A dense Tensor U of shape [N, k] where k is the embedding\n dimension, such that U_i is the embedding of user i.\n movie_embeddings: A dense Tensor V of shape [M, k] where k is the embedding\n dimension, such that V_j is the embedding of movie j.\n Returns:\n A scalar Tensor representing the MSE between the true ratings and the\n model's predictions.\n \"\"\"\n # ========================= Complete this section ============================\n # loss =\n # ============================================================================\n return loss",
"_____no_output_____"
],
[
"#@title Solution\ndef sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):\n \"\"\"\n Args:\n sparse_ratings: A SparseTensor rating matrix, of dense_shape [N, M]\n user_embeddings: A dense Tensor U of shape [N, k] where k is the embedding\n dimension, such that U_i is the embedding of user i.\n movie_embeddings: A dense Tensor V of shape [M, k] where k is the embedding\n dimension, such that V_j is the embedding of movie j.\n Returns:\n A scalar Tensor representing the MSE between the true ratings and the\n model's predictions.\n \"\"\"\n predictions = tf.gather_nd(\n tf.matmul(user_embeddings, movie_embeddings, transpose_b=True),\n sparse_ratings.indices)\n loss = tf.losses.mean_squared_error(sparse_ratings.values, predictions)\n return loss",
"_____no_output_____"
]
],
[
[
"Note: One approach is to compute the full prediction matrix $UV^\\top$, then gather the entries corresponding to the observed pairs. The memory cost of this approach is $O(NM)$. For the MovieLens dataset, this is fine, as the dense $N \\times M$ matrix is small enough to fit in memory ($N = 943$, $M = 1682$).\n\nAnother approach (given in the alternate solution below) is to only gather the embeddings of the observed pairs, then compute their dot products. The memory cost is $O(|\\Omega| d)$ where $d$ is the embedding dimension. In our case, $|\\Omega| = 10^5$, and the embedding dimension is on the order of $10$, so the memory cost of both methods is comparable. But when the number of users or movies is much larger, the first approach becomes infeasible.",
"_____no_output_____"
]
],
[
[
"#@title Alternate Solution\ndef sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):\n \"\"\"\n Args:\n sparse_ratings: A SparseTensor rating matrix, of dense_shape [N, M]\n user_embeddings: A dense Tensor U of shape [N, k] where k is the embedding\n dimension, such that U_i is the embedding of user i.\n movie_embeddings: A dense Tensor V of shape [M, k] where k is the embedding\n dimension, such that V_j is the embedding of movie j.\n Returns:\n A scalar Tensor representing the MSE between the true ratings and the\n model's predictions.\n \"\"\"\n predictions = tf.reduce_sum(\n tf.gather(user_embeddings, sparse_ratings.indices[:, 0]) *\n tf.gather(movie_embeddings, sparse_ratings.indices[:, 1]),\n axis=1)\n loss = tf.losses.mean_squared_error(sparse_ratings.values, predictions)\n return loss",
"_____no_output_____"
]
],
[
[
"### Exercise 3 (Optional): adding your own ratings to the data set",
"_____no_output_____"
],
[
"You have the option to add your own ratings to the data set. If you choose to do so, you will be able to see recommendations for yourself.\n\nStart by checking the box below. Running the next cell will authenticate you to your google Drive account, and create a spreadsheet, that contains all movie titles in column 'A'. Follow the link to the spreadsheet and take 3 minutes to rate some of the movies. Your ratings should be entered in column 'B'.",
"_____no_output_____"
]
],
[
[
"USER_RATINGS = True #@param {type:\"boolean\"}",
"_____no_output_____"
],
[
"# @title Run to create a spreadsheet, then use it to enter your ratings.\n# Authenticate user.\nif USER_RATINGS:\n auth.authenticate_user()\n gc = gspread.authorize(GoogleCredentials.get_application_default())\n # Create the spreadsheet and print a link to it.\n try:\n sh = gc.open('MovieLens-test')\n except(gspread.SpreadsheetNotFound):\n sh = gc.create('MovieLens-test')\n\n worksheet = sh.sheet1\n titles = movies['title'].values\n cell_list = worksheet.range(1, 1, len(titles), 1)\n for cell, title in zip(cell_list, titles):\n cell.value = title\n worksheet.update_cells(cell_list)\n print(\"Link to the spreadsheet: \"\n \"https://docs.google.com/spreadsheets/d/{}/edit\".format(sh.id))",
"_____no_output_____"
]
],
[
[
"Run the next cell to load your ratings and add them to the main `ratings` DataFrame.",
"_____no_output_____"
]
],
[
[
"# @title Run to load your ratings.\n# Load the ratings from the spreadsheet and create a DataFrame.\nif USER_RATINGS:\n my_ratings = pd.DataFrame.from_records(worksheet.get_all_values()).reset_index()\n my_ratings = my_ratings[my_ratings[1] != '']\n my_ratings = pd.DataFrame({\n 'user_id': \"943\",\n 'movie_id': list(map(str, my_ratings['index'])),\n 'rating': list(map(float, my_ratings[1])),\n })\n # Remove previous ratings.\n ratings = ratings[ratings.user_id != \"943\"]\n # Add new ratings.\n ratings = ratings.append(my_ratings, ignore_index=True)\n # Add new user to the users DataFrame.\n if users.shape[0] == 943:\n users = users.append(users.iloc[942], ignore_index=True)\n users[\"user_id\"][943] = \"943\"\n print(\"Added your %d ratings; you have great taste!\" % len(my_ratings))\n ratings[ratings.user_id==\"943\"].merge(movies[['movie_id', 'title']])",
"_____no_output_____"
]
],
[
[
"# III. Training a Matrix Factorization model\n\n## CFModel (Collaborative Filtering Model) helper class\nThis is a simple class to train a matrix factorization model using stochastic gradient descent.\n\nThe class constructor takes\n- the user embeddings U (a `tf.Variable`).\n- the movie embeddings V, (a `tf.Variable`).\n- a loss to optimize (a `tf.Tensor`).\n- an optional list of metrics dictionaries, each mapping a string (the name of the metric) to a tensor. These are evaluated and plotted during training (e.g. training error and test error).\n\nAfter training, one can access the trained embeddings using the `model.embeddings` dictionary.\n\nExample usage:\n```\nU_var = ...\nV_var = ...\nloss = ...\nmodel = CFModel(U_var, V_var, loss)\nmodel.train(iterations=100, learning_rate=1.0)\nuser_embeddings = model.embeddings['user_id']\nmovie_embeddings = model.embeddings['movie_id']\n```\n",
"_____no_output_____"
]
],
[
[
"# @title CFModel helper class (run this cell)\nclass CFModel(object):\n \"\"\"Simple class that represents a collaborative filtering model\"\"\"\n def __init__(self, embedding_vars, loss, metrics=None):\n \"\"\"Initializes a CFModel.\n Args:\n embedding_vars: A dictionary of tf.Variables.\n loss: A float Tensor. The loss to optimize.\n metrics: optional list of dictionaries of Tensors. The metrics in each\n dictionary will be plotted in a separate figure during training.\n \"\"\"\n self._embedding_vars = embedding_vars\n self._loss = loss\n self._metrics = metrics\n self._embeddings = {k: None for k in embedding_vars}\n self._session = None\n\n @property\n def embeddings(self):\n \"\"\"The embeddings dictionary.\"\"\"\n return self._embeddings\n\n def train(self, num_iterations=100, learning_rate=1.0, plot_results=True,\n optimizer=tf.train.GradientDescentOptimizer):\n \"\"\"Trains the model.\n Args:\n iterations: number of iterations to run.\n learning_rate: optimizer learning rate.\n plot_results: whether to plot the results at the end of training.\n optimizer: the optimizer to use. Default to GradientDescentOptimizer.\n Returns:\n The metrics dictionary evaluated at the last iteration.\n \"\"\"\n with self._loss.graph.as_default():\n opt = optimizer(learning_rate)\n train_op = opt.minimize(self._loss)\n local_init_op = tf.group(\n tf.variables_initializer(opt.variables()),\n tf.local_variables_initializer())\n if self._session is None:\n self._session = tf.Session()\n with self._session.as_default():\n self._session.run(tf.global_variables_initializer())\n self._session.run(tf.tables_initializer())\n tf.train.start_queue_runners()\n\n with self._session.as_default():\n local_init_op.run()\n iterations = []\n metrics = self._metrics or ({},)\n metrics_vals = [collections.defaultdict(list) for _ in self._metrics]\n\n # Train and append results.\n for i in range(num_iterations + 1):\n _, results = self._session.run((train_op, metrics))\n if (i % 10 == 0) or i == num_iterations:\n print(\"\\r iteration %d: \" % i + \", \".join(\n [\"%s=%f\" % (k, v) for r in results for k, v in r.items()]),\n end='')\n iterations.append(i)\n for metric_val, result in zip(metrics_vals, results):\n for k, v in result.items():\n metric_val[k].append(v)\n\n for k, v in self._embedding_vars.items():\n self._embeddings[k] = v.eval()\n\n if plot_results:\n # Plot the metrics.\n num_subplots = len(metrics)+1\n fig = plt.figure()\n fig.set_size_inches(num_subplots*10, 8)\n for i, metric_vals in enumerate(metrics_vals):\n ax = fig.add_subplot(1, num_subplots, i+1)\n for k, v in metric_vals.items():\n ax.plot(iterations, v, label=k)\n ax.set_xlim([1, num_iterations])\n ax.legend()\n return results",
"_____no_output_____"
]
],
[
[
"### Exercise 4: Build a Matrix Factorization model and train it\n\nUsing your `sparse_mean_square_error` function, write a function that builds a `CFModel` by creating the embedding variables and the train and test losses.",
"_____no_output_____"
]
],
[
[
"def build_model(ratings, embedding_dim=3, init_stddev=1.):\n \"\"\"\n Args:\n ratings: a DataFrame of the ratings\n embedding_dim: the dimension of the embedding vectors.\n init_stddev: float, the standard deviation of the random initial embeddings.\n Returns:\n model: a CFModel.\n \"\"\"\n # Split the ratings DataFrame into train and test.\n train_ratings, test_ratings = split_dataframe(ratings)\n # SparseTensor representation of the train and test datasets.\n # ========================= Complete this section ============================\n # A_train =\n # A_test =\n # ============================================================================\n # Initialize the embeddings using a normal distribution.\n U = tf.Variable(tf.random_normal(\n [A_train.dense_shape[0], embedding_dim], stddev=init_stddev))\n V = tf.Variable(tf.random_normal(\n [A_train.dense_shape[1], embedding_dim], stddev=init_stddev))\n # ========================= Complete this section ============================\n # train_loss =\n # test_loss =\n # ============================================================================\n metrics = {\n 'train_error': train_loss,\n 'test_error': test_loss\n }\n embeddings = {\n \"user_id\": U,\n \"movie_id\": V\n }\n return CFModel(embeddings, train_loss, [metrics])",
"_____no_output_____"
],
[
"#@title Solution\ndef build_model(ratings, embedding_dim=3, init_stddev=1.):\n \"\"\"\n Args:\n ratings: a DataFrame of the ratings\n embedding_dim: the dimension of the embedding vectors.\n init_stddev: float, the standard deviation of the random initial embeddings.\n Returns:\n model: a CFModel.\n \"\"\"\n # Split the ratings DataFrame into train and test.\n train_ratings, test_ratings = split_dataframe(ratings)\n # SparseTensor representation of the train and test datasets.\n A_train = build_rating_sparse_tensor(train_ratings)\n A_test = build_rating_sparse_tensor(test_ratings)\n # Initialize the embeddings using a normal distribution.\n U = tf.Variable(tf.random_normal(\n [A_train.dense_shape[0], embedding_dim], stddev=init_stddev))\n V = tf.Variable(tf.random_normal(\n [A_train.dense_shape[1], embedding_dim], stddev=init_stddev))\n train_loss = sparse_mean_square_error(A_train, U, V)\n test_loss = sparse_mean_square_error(A_test, U, V)\n metrics = {\n 'train_error': train_loss,\n 'test_error': test_loss\n }\n embeddings = {\n \"user_id\": U,\n \"movie_id\": V\n }\n return CFModel(embeddings, train_loss, [metrics])",
"_____no_output_____"
]
],
[
[
"Great, now it's time to train the model!\n\nGo ahead and run the next cell, trying different parameters (embedding dimension, learning rate, iterations). The training and test errors are plotted at the end of training. You can inspect these values to validate the hyper-parameters.\n\nNote: by calling `model.train` again, the model will continue training starting from the current values of the embeddings.",
"_____no_output_____"
]
],
[
[
"# Build the CF model and train it.\nmodel = build_model(ratings, embedding_dim=30, init_stddev=0.5)\nmodel.train(num_iterations=1000, learning_rate=10.)",
"_____no_output_____"
]
],
[
[
"The movie and user embeddings are also displayed in the right figure. When the embedding dimension is greater than 3, the embeddings are projected on the first 3 dimensions. The next section will have a more detailed look at the embeddings.",
"_____no_output_____"
],
[
"# IV. Inspecting the Embeddings\n\nIn this section, we take a closer look at the learned embeddings, by\n- computing your recommendations\n- looking at the nearest neighbors of some movies,\n- looking at the norms of the movie embeddings,\n- visualizing the embedding in a projected embedding space.",
"_____no_output_____"
],
[
"### Exercise 5: Write a function that computes the scores of the candidates\nWe start by writing a function that, given a query embedding $u \\in \\mathbb R^d$ and item embeddings $V \\in \\mathbb R^{N \\times d}$, computes the item scores.\n\nAs discussed in the lecture, there are different similarity measures we can use, and these can yield different results. We will compare the following:\n- dot product: the score of item j is $\\langle u, V_j \\rangle$.\n- cosine: the score of item j is $\\frac{\\langle u, V_j \\rangle}{\\|u\\|\\|V_j\\|}$.\n\nHints:\n- you can use [`np.dot`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) to compute the product of two np.Arrays.\n- you can use [`np.linalg.norm`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.norm.html) to compute the norm of a np.Array.",
"_____no_output_____"
]
],
[
[
"DOT = 'dot'\nCOSINE = 'cosine'\ndef compute_scores(query_embedding, item_embeddings, measure=DOT):\n \"\"\"Computes the scores of the candidates given a query.\n Args:\n query_embedding: a vector of shape [k], representing the query embedding.\n item_embeddings: a matrix of shape [N, k], such that row i is the embedding\n of item i.\n measure: a string specifying the similarity measure to be used. Can be\n either DOT or COSINE.\n Returns:\n scores: a vector of shape [N], such that scores[i] is the score of item i.\n \"\"\"\n # ========================= Complete this section ============================\n # scores =\n # ============================================================================\n return scores",
"_____no_output_____"
],
[
"#@title Solution\nDOT = 'dot'\nCOSINE = 'cosine'\ndef compute_scores(query_embedding, item_embeddings, measure=DOT):\n \"\"\"Computes the scores of the candidates given a query.\n Args:\n query_embedding: a vector of shape [k], representing the query embedding.\n item_embeddings: a matrix of shape [N, k], such that row i is the embedding\n of item i.\n measure: a string specifying the similarity measure to be used. Can be\n either DOT or COSINE.\n Returns:\n scores: a vector of shape [N], such that scores[i] is the score of item i.\n \"\"\"\n u = query_embedding\n V = item_embeddings\n if measure == COSINE:\n V = V / np.linalg.norm(V, axis=1, keepdims=True)\n u = u / np.linalg.norm(u)\n scores = u.dot(V.T)\n return scores",
"_____no_output_____"
]
],
[
[
"Equipped with this function, we can compute recommendations, where the query embedding can be either a user embedding or a movie embedding.",
"_____no_output_____"
]
],
[
[
"# @title User recommendations and nearest neighbors (run this cell)\ndef user_recommendations(model, measure=DOT, exclude_rated=False, k=6):\n if USER_RATINGS:\n scores = compute_scores(\n model.embeddings[\"user_id\"][943], model.embeddings[\"movie_id\"], measure)\n score_key = measure + ' score'\n df = pd.DataFrame({\n score_key: list(scores),\n 'movie_id': movies['movie_id'],\n 'titles': movies['title'],\n 'genres': movies['all_genres'],\n })\n if exclude_rated:\n # remove movies that are already rated\n rated_movies = ratings[ratings.user_id == \"943\"][\"movie_id\"].values\n df = df[df.movie_id.apply(lambda movie_id: movie_id not in rated_movies)]\n display.display(df.sort_values([score_key], ascending=False).head(k)) \n\ndef movie_neighbors(model, title_substring, measure=DOT, k=6):\n # Search for movie ids that match the given substring.\n ids = movies[movies['title'].str.contains(title_substring)].index.values\n titles = movies.iloc[ids]['title'].values\n if len(titles) == 0:\n raise ValueError(\"Found no movies with title %s\" % title_substring)\n print(\"Nearest neighbors of : %s.\" % titles[0])\n if len(titles) > 1:\n print(\"[Found more than one matching movie. Other candidates: {}]\".format(\n \", \".join(titles[1:])))\n movie_id = ids[0]\n scores = compute_scores(\n model.embeddings[\"movie_id\"][movie_id], model.embeddings[\"movie_id\"],\n measure)\n score_key = measure + ' score'\n df = pd.DataFrame({\n score_key: list(scores),\n 'titles': movies['title'],\n 'genres': movies['all_genres']\n })\n display.display(df.sort_values([score_key], ascending=False).head(k))",
"_____no_output_____"
]
],
[
[
"### Your recommendations\n\nIf you chose to input your recommendations, you can run the next cell to generate recommendations for you.",
"_____no_output_____"
]
],
[
[
"user_recommendations(model, measure=COSINE, k=5)",
"_____no_output_____"
]
],
[
[
"How do the recommendations look?",
"_____no_output_____"
],
[
"### Movie Nearest neighbors\n\nLet's look at the neareast neighbors for some of the movies.",
"_____no_output_____"
]
],
[
[
"movie_neighbors(model, \"Aladdin\", DOT)\nmovie_neighbors(model, \"Aladdin\", COSINE)",
"_____no_output_____"
]
],
[
[
"It seems that the quality of learned embeddings may not be very good. This will be addressed in Section V by adding several regularization techniques. First, we will further inspect the embeddings.",
"_____no_output_____"
],
[
"## Movie Embedding Norm\n\nWe can also observe that the recommendations with dot-product and cosine are different: with dot-product, the model tends to recommend popular movies. This can be explained by the fact that in matrix factorization models, the norm of the embedding is often correlated with popularity (popular movies have a larger norm), which makes it more likely to recommend more popular items. We can confirm this hypothesis by sorting the movies by their embedding norm, as done in the next cell.",
"_____no_output_____"
]
],
[
[
"# @title Embedding Visualization code (run this cell)\n\ndef movie_embedding_norm(models):\n \"\"\"Visualizes the norm and number of ratings of the movie embeddings.\n Args:\n model: A MFModel object.\n \"\"\"\n if not isinstance(models, list):\n models = [models]\n df = pd.DataFrame({\n 'title': movies['title'],\n 'genre': movies['genre'],\n 'num_ratings': movies_ratings['rating count'],\n })\n charts = []\n brush = alt.selection_interval()\n for i, model in enumerate(models):\n norm_key = 'norm'+str(i)\n df[norm_key] = np.linalg.norm(model.embeddings[\"movie_id\"], axis=1)\n nearest = alt.selection(\n type='single', encodings=['x', 'y'], on='mouseover', nearest=True,\n empty='none')\n base = alt.Chart().mark_circle().encode(\n x='num_ratings',\n y=norm_key,\n color=alt.condition(brush, alt.value('#4c78a8'), alt.value('lightgray'))\n ).properties(\n selection=nearest).add_selection(brush)\n text = alt.Chart().mark_text(align='center', dx=5, dy=-5).encode(\n x='num_ratings', y=norm_key,\n text=alt.condition(nearest, 'title', alt.value('')))\n charts.append(alt.layer(base, text))\n return alt.hconcat(*charts, data=df)\n\ndef visualize_movie_embeddings(data, x, y):\n nearest = alt.selection(\n type='single', encodings=['x', 'y'], on='mouseover', nearest=True,\n empty='none')\n base = alt.Chart().mark_circle().encode(\n x=x,\n y=y,\n color=alt.condition(genre_filter, \"genre\", alt.value(\"whitesmoke\")),\n ).properties(\n width=600,\n height=600,\n selection=nearest)\n text = alt.Chart().mark_text(align='left', dx=5, dy=-5).encode(\n x=x,\n y=y,\n text=alt.condition(nearest, 'title', alt.value('')))\n return alt.hconcat(alt.layer(base, text), genre_chart, data=data)\n\ndef tsne_movie_embeddings(model):\n \"\"\"Visualizes the movie embeddings, projected using t-SNE with Cosine measure.\n Args:\n model: A MFModel object.\n \"\"\"\n tsne = sklearn.manifold.TSNE(\n n_components=2, perplexity=40, metric='cosine', early_exaggeration=10.0,\n init='pca', verbose=True, n_iter=400)\n\n print('Running t-SNE...')\n V_proj = tsne.fit_transform(model.embeddings[\"movie_id\"])\n movies.loc[:,'x'] = V_proj[:, 0]\n movies.loc[:,'y'] = V_proj[:, 1]\n return visualize_movie_embeddings(movies, 'x', 'y')",
"_____no_output_____"
],
[
"movie_embedding_norm(model)",
"_____no_output_____"
]
],
[
[
"Note: Depending on how the model is initialized, you may observe that some niche movies (ones with few ratings) have a high norm, leading to spurious recommendations. This can happen if the embedding of that movie happens to be initialized with a high norm. Then, because the movie has few ratings, it is infrequently updated, and can keep its high norm. This will be alleviated by using regularization.\n\nTry changing the value of the hyper-parameter `init_stddev`. One quantity that can be helpful is that the expected norm of a $d$-dimensional vector with entries $\\sim \\mathcal N(0, \\sigma^2)$ is approximatley $\\sigma \\sqrt d$.\n\nHow does this affect the embedding norm distribution, and the ranking of the top-norm movies?",
"_____no_output_____"
]
],
[
[
"#@title Solution\nmodel_lowinit = build_model(ratings, embedding_dim=30, init_stddev=0.05)\nmodel_lowinit.train(num_iterations=1000, learning_rate=10.)\nmovie_neighbors(model_lowinit, \"Aladdin\", DOT)\nmovie_neighbors(model_lowinit, \"Aladdin\", COSINE)\nmovie_embedding_norm([model, model_lowinit])",
"_____no_output_____"
]
],
[
[
"## Embedding visualization\nSince it is hard to visualize embeddings in a higher-dimensional space (when the embedding dimension $k > 3$), one approach is to project the embeddings to a lower dimensional space. T-SNE (T-distributed Stochastic Neighbor Embedding) is an algorithm that projects the embeddings while attempting to preserve their pariwise distances. It can be useful for visualization, but one should use it with care. For more information on using t-SNE, see [How to Use t-SNE Effectively](https://distill.pub/2016/misread-tsne/).",
"_____no_output_____"
]
],
[
[
"tsne_movie_embeddings(model_lowinit)",
"_____no_output_____"
]
],
[
[
"You can highlight the embeddings of a given genre by clicking on the genres panel (SHIFT+click to select multiple genres).\n\nWe can observe that the embeddings do not seem to have any notable structure, and the embeddings of a given genre are located all over the embedding space. This confirms the poor quality of the learned embeddings. One of the main reasons, which we will address in the next section, is that we only trained the model on observed pairs, and without regularization.",
"_____no_output_____"
],
[
"# V. Regularization In Matrix Factorization\n\nIn the previous section, our loss was defined as the mean squared error on the observed part of the rating matrix. As discussed in the lecture, this can be problematic as the model does not learn how to place the embeddings of irrelevant movies. This phenomenon is known as *folding*.\n\nWe will add regularization terms that will address this issue. We will use two types of regularization:\n- Regularization of the model parameters. This is a common $\\ell_2$ regularization term on the embedding matrices, given by $r(U, V) = \\frac{1}{N} \\sum_i \\|U_i\\|^2 + \\frac{1}{M}\\sum_j \\|V_j\\|^2$.\n- A global prior that pushes the prediction of any pair towards zero, called the *gravity* term. This is given by $g(U, V) = \\frac{1}{MN} \\sum_{i = 1}^N \\sum_{j = 1}^M \\langle U_i, V_j \\rangle^2$.\n\nThe total loss is then given by\n$$\n\\frac{1}{|\\Omega|}\\sum_{(i, j) \\in \\Omega} (A_{ij} - \\langle U_i, V_j\\rangle)^2 + \\lambda _r r(U, V) + \\lambda_g g(U, V)\n$$\nwhere $\\lambda_r$ and $\\lambda_g$ are two regularization coefficients (hyper-parameters).",
"_____no_output_____"
],
[
"### Exercise 6: Build a regularized Matrix Factorization model and train it\nWrite a function that builds a regularized model. You are given a function `gravity(U, V)` that computes the gravity term given the two embedding matrices $U$ and $V$.\n",
"_____no_output_____"
]
],
[
[
"def gravity(U, V):\n \"\"\"Creates a gravity loss given two embedding matrices.\"\"\"\n return 1. / (U.shape[0].value*V.shape[0].value) * tf.reduce_sum(\n tf.matmul(U, U, transpose_a=True) * tf.matmul(V, V, transpose_a=True))\n\ndef build_regularized_model(\n ratings, embedding_dim=3, regularization_coeff=.1, gravity_coeff=1.,\n init_stddev=0.1):\n \"\"\"\n Args:\n ratings: the DataFrame of movie ratings.\n embedding_dim: The dimension of the embedding space.\n regularization_coeff: The regularization coefficient lambda.\n gravity_coeff: The gravity regularization coefficient lambda_g.\n Returns:\n A CFModel object that uses a regularized loss.\n \"\"\"\n # Split the ratings DataFrame into train and test.\n train_ratings, test_ratings = split_dataframe(ratings)\n # SparseTensor representation of the train and test datasets.\n A_train = build_rating_sparse_tensor(train_ratings)\n A_test = build_rating_sparse_tensor(test_ratings)\n U = tf.Variable(tf.random_normal(\n [A_train.dense_shape[0], embedding_dim], stddev=init_stddev))\n V = tf.Variable(tf.random_normal(\n [A_train.dense_shape[1], embedding_dim], stddev=init_stddev))\n\n # ========================= Complete this section ============================\n # error_train =\n # error_test =\n # gravity_loss =\n # regularization_loss =\n # ============================================================================\n total_loss = error_train + regularization_loss + gravity_loss\n losses = {\n 'train_error': error_train,\n 'test_error': error_test,\n }\n loss_components = {\n 'observed_loss': error_train,\n 'regularization_loss': regularization_loss,\n 'gravity_loss': gravity_loss,\n }\n embeddings = {\"user_id\": U, \"movie_id\": V}\n\n return CFModel(embeddings, total_loss, [losses, loss_components])",
"_____no_output_____"
],
[
"# @title Solution\ndef gravity(U, V):\n \"\"\"Creates a gravity loss given two embedding matrices.\"\"\"\n return 1. / (U.shape[0].value*V.shape[0].value) * tf.reduce_sum(\n tf.matmul(U, U, transpose_a=True) * tf.matmul(V, V, transpose_a=True))\n\ndef build_regularized_model(\n ratings, embedding_dim=3, regularization_coeff=.1, gravity_coeff=1.,\n init_stddev=0.1):\n \"\"\"\n Args:\n ratings: the DataFrame of movie ratings.\n embedding_dim: The dimension of the embedding space.\n regularization_coeff: The regularization coefficient lambda.\n gravity_coeff: The gravity regularization coefficient lambda_g.\n Returns:\n A CFModel object that uses a regularized loss.\n \"\"\"\n # Split the ratings DataFrame into train and test.\n train_ratings, test_ratings = split_dataframe(ratings)\n # SparseTensor representation of the train and test datasets.\n A_train = build_rating_sparse_tensor(train_ratings)\n A_test = build_rating_sparse_tensor(test_ratings)\n U = tf.Variable(tf.random_normal(\n [A_train.dense_shape[0], embedding_dim], stddev=init_stddev))\n V = tf.Variable(tf.random_normal(\n [A_train.dense_shape[1], embedding_dim], stddev=init_stddev))\n\n error_train = sparse_mean_square_error(A_train, U, V)\n error_test = sparse_mean_square_error(A_test, U, V)\n gravity_loss = gravity_coeff * gravity(U, V)\n regularization_loss = regularization_coeff * (\n tf.reduce_sum(U*U)/U.shape[0].value + tf.reduce_sum(V*V)/V.shape[0].value)\n total_loss = error_train + regularization_loss + gravity_loss\n losses = {\n 'train_error_observed': error_train,\n 'test_error_observed': error_test,\n }\n loss_components = {\n 'observed_loss': error_train,\n 'regularization_loss': regularization_loss,\n 'gravity_loss': gravity_loss,\n }\n embeddings = {\"user_id\": U, \"movie_id\": V}\n\n return CFModel(embeddings, total_loss, [losses, loss_components])",
"_____no_output_____"
]
],
[
[
"It is now time to train the regularized model! You can try different values of the regularization coefficients, and different embedding dimensions.",
"_____no_output_____"
]
],
[
[
"reg_model = build_regularized_model(\n ratings, regularization_coeff=0.1, gravity_coeff=1.0, embedding_dim=35,\n init_stddev=.05)\nreg_model.train(num_iterations=2000, learning_rate=20.)",
"_____no_output_____"
]
],
[
[
"Observe that adding the regularization terms results in a higher MSE, both on the training and test set. However, as we will see, the quality of the recommendations improves. This highlights a tension between fitting the observed data and minimizing the regularization terms. Fitting the observed data often emphasizes learning high similarity (between items with many interactions), but a good embedding representation also requires learning low similarity (between items with few or no interactions).",
"_____no_output_____"
],
[
"### Inspect the results\nLet's see if the results with regularization look better.",
"_____no_output_____"
]
],
[
[
"user_recommendations(reg_model, DOT, exclude_rated=True, k=10)",
"_____no_output_____"
]
],
[
[
"Hopefully, these recommendations look better. You can change the similarity measure from COSINE to DOT and observe how this affects the recommendations.\n\nSince the model is likely to recommend items that you rated highly, you have the option to exclude the items you rated, using `exclude_rated=True`.\n\nIn the following cells, we display the nearest neighbors, the embedding norms, and the t-SNE projection of the movie embeddings.",
"_____no_output_____"
]
],
[
[
"movie_neighbors(reg_model, \"Aladdin\", DOT)\nmovie_neighbors(reg_model, \"Aladdin\", COSINE)",
"_____no_output_____"
]
],
[
[
"Here we compare the embedding norms for `model` and `reg_model`. Selecting a subset of the embeddings will highlight them on both charts simultaneously.",
"_____no_output_____"
]
],
[
[
"movie_embedding_norm([model, model_lowinit, reg_model])",
"_____no_output_____"
],
[
"# Visualize the embeddings\ntsne_movie_embeddings(reg_model)",
"_____no_output_____"
]
],
[
[
"We should observe that the embeddings have a lot more structure than the unregularized case. Try selecting different genres and observe how they tend to form clusters (for example Horror, Animation and Children).\n\n### Conclusion\nThis concludes this section on matrix factorization models. Note that while the scale of the problem is small enough to allow efficient training using SGD, many practical problems need to be trained using more specialized algorithms such as Alternating Least Squares (see [tf.contrib.factorization.WALSMatrixFactorization](https://www.tensorflow.org/api_docs/python/tf/contrib/factorization/WALSMatrixFactorization) for a TF implementation).",
"_____no_output_____"
],
[
"# VI. Softmax model\n\nIn this section, we will train a simple softmax model that predicts whether a given user has rated a movie.\n\n**Note**: if you are taking the self-study version of the class, make sure to read through the part of the class covering Softmax training before working on this part.\n\nThe model will take as input a feature vector $x$ representing the list of movies the user has rated. We start from the ratings DataFrame, which we group by user_id.",
"_____no_output_____"
]
],
[
[
"rated_movies = (ratings[[\"user_id\", \"movie_id\"]]\n .groupby(\"user_id\", as_index=False)\n .aggregate(lambda x: list(x)))\nrated_movies.head()",
"_____no_output_____"
]
],
[
[
"We then create a function that generates an example batch, such that each example contains the following features:\n- movie_id: A tensor of strings of the movie ids that the user rated.\n- genre: A tensor of strings of the genres of those movies\n- year: A tensor of strings of the release year.",
"_____no_output_____"
]
],
[
[
"#@title Batch generation code (run this cell)\nyears_dict = {\n movie: year for movie, year in zip(movies[\"movie_id\"], movies[\"year\"])\n}\ngenres_dict = {\n movie: genres.split('-')\n for movie, genres in zip(movies[\"movie_id\"], movies[\"all_genres\"])\n}\n\ndef make_batch(ratings, batch_size):\n \"\"\"Creates a batch of examples.\n Args:\n ratings: A DataFrame of ratings such that examples[\"movie_id\"] is a list of\n movies rated by a user.\n batch_size: The batch size.\n \"\"\"\n def pad(x, fill):\n return pd.DataFrame.from_dict(x).fillna(fill).values\n\n movie = []\n year = []\n genre = []\n label = []\n for movie_ids in ratings[\"movie_id\"].values:\n movie.append(movie_ids)\n genre.append([x for movie_id in movie_ids for x in genres_dict[movie_id]])\n year.append([years_dict[movie_id] for movie_id in movie_ids])\n label.append([int(movie_id) for movie_id in movie_ids])\n features = {\n \"movie_id\": pad(movie, \"\"),\n \"year\": pad(year, \"\"),\n \"genre\": pad(genre, \"\"),\n \"label\": pad(label, -1)\n }\n batch = (\n tf.data.Dataset.from_tensor_slices(features)\n .shuffle(1000)\n .repeat()\n .batch(batch_size)\n .make_one_shot_iterator()\n .get_next())\n return batch\n\ndef select_random(x):\n \"\"\"Selectes a random elements from each row of x.\"\"\"\n def to_float(x):\n return tf.cast(x, tf.float32)\n def to_int(x):\n return tf.cast(x, tf.int64)\n batch_size = tf.shape(x)[0]\n rn = tf.range(batch_size)\n nnz = to_float(tf.count_nonzero(x >= 0, axis=1))\n rnd = tf.random_uniform([batch_size])\n ids = tf.stack([to_int(rn), to_int(nnz * rnd)], axis=1)\n return to_int(tf.gather_nd(x, ids))\n",
"_____no_output_____"
]
],
[
[
"### Loss function\nRecall that the softmax model maps the input features $x$ to a user embedding $\\psi(x) \\in \\mathbb R^d$, where $d$ is the embedding dimension. This vector is then multiplied by a movie embedding matrix $V \\in \\mathbb R^{m \\times d}$ (where $m$ is the number of movies), and the final output of the model is the softmax of the product\n$$\n\\hat p(x) = \\text{softmax}(\\psi(x) V^\\top).\n$$\nGiven a target label $y$, if we denote by $p = 1_y$ a one-hot encoding of this target label, then the loss is the cross-entropy between $\\hat p(x)$ and $p$.",
"_____no_output_____"
],
[
"### Exercise 7: Write a loss function for the softmax model.\n\nIn this exercise, we will write a function that takes tensors representing the user embeddings $\\psi(x)$, movie embeddings $V$, target label $y$, and return the cross-entropy loss.\n\nHint: You can use the function [`tf.nn.sparse_softmax_cross_entropy_with_logits`](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits), which takes `logits` as input, where `logits` refers to the product $\\psi(x) V^\\top$.",
"_____no_output_____"
]
],
[
[
"def softmax_loss(user_embeddings, movie_embeddings, labels):\n \"\"\"Returns the cross-entropy loss of the softmax model.\n Args:\n user_embeddings: A tensor of shape [batch_size, embedding_dim].\n movie_embeddings: A tensor of shape [num_movies, embedding_dim].\n labels: A sparse tensor of dense_shape [batch_size, 1], such that\n labels[i] is the target label for example i.\n Returns:\n The mean cross-entropy loss.\n \"\"\"\n # ========================= Complete this section ============================\n # logits =\n # loss =\n # ============================================================================\n return loss",
"_____no_output_____"
],
[
"# @title Solution\ndef softmax_loss(user_embeddings, movie_embeddings, labels):\n \"\"\"Returns the cross-entropy loss of the softmax model.\n Args:\n user_embeddings: A tensor of shape [batch_size, embedding_dim].\n movie_embeddings: A tensor of shape [num_movies, embedding_dim].\n labels: A tensor of [batch_size], such that labels[i] is the target label\n for example i.\n Returns:\n The mean cross-entropy loss.\n \"\"\"\n # Verify that the embddings have compatible dimensions\n user_emb_dim = user_embeddings.shape[1].value\n movie_emb_dim = movie_embeddings.shape[1].value\n if user_emb_dim != movie_emb_dim:\n raise ValueError(\n \"The user embedding dimension %d should match the movie embedding \"\n \"dimension % d\" % (user_emb_dim, movie_emb_dim))\n\n logits = tf.matmul(user_embeddings, movie_embeddings, transpose_b=True)\n loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(\n logits=logits, labels=labels))\n return loss",
"_____no_output_____"
]
],
[
[
"### Exercise 8: Build a softmax model, train it, and inspect its embeddings.\n\nWe are now ready to build a softmax CFModel. Complete the `build_softmax_model` function in the next cell. The architecture of the model is defined in the function `create_user_embeddings` and illustrated in the figure below. The input embeddings (movie_id, genre and year) are concatenated to form the input layer, then we have hidden layers with dimensions specified by the `hidden_dims` argument. Finally, the last hidden layer is multiplied by the movie embeddings to obtain the logits layer. For the target label, we will use a randomly-sampled movie_id from the list of movies the user rated.\n\n\n\nComplete the function below by creating the feature columns and embedding columns, then creating the loss tensors both for the train and test sets (using the `softmax_loss` function of the previous exercise).\n",
"_____no_output_____"
]
],
[
[
"def build_softmax_model(rated_movies, embedding_cols, hidden_dims):\n \"\"\"Builds a Softmax model for MovieLens.\n Args:\n rated_movies: DataFrame of traing examples.\n embedding_cols: A dictionary mapping feature names (string) to embedding\n column objects. This will be used in tf.feature_column.input_layer() to\n create the input layer.\n hidden_dims: int list of the dimensions of the hidden layers.\n Returns:\n A CFModel object.\n \"\"\"\n def create_network(features):\n \"\"\"Maps input features dictionary to user embeddings.\n Args:\n features: A dictionary of input string tensors.\n Returns:\n outputs: A tensor of shape [batch_size, embedding_dim].\n \"\"\"\n # Create a bag-of-words embedding for each sparse feature.\n inputs = tf.feature_column.input_layer(features, embedding_cols)\n # Hidden layers.\n input_dim = inputs.shape[1].value\n for i, output_dim in enumerate(hidden_dims):\n w = tf.get_variable(\n \"hidden%d_w_\" % i, shape=[input_dim, output_dim],\n initializer=tf.truncated_normal_initializer(\n stddev=1./np.sqrt(output_dim))) / 10.\n outputs = tf.matmul(inputs, w)\n input_dim = output_dim\n inputs = outputs\n return outputs\n\n train_rated_movies, test_rated_movies = split_dataframe(rated_movies)\n train_batch = make_batch(train_rated_movies, 200)\n test_batch = make_batch(test_rated_movies, 100)\n\n with tf.variable_scope(\"model\", reuse=False):\n # Train\n train_user_embeddings = create_network(train_batch)\n train_labels = select_random(train_batch[\"label\"])\n with tf.variable_scope(\"model\", reuse=True):\n # Test\n test_user_embeddings = create_network(test_batch)\n test_labels = select_random(test_batch[\"label\"])\n movie_embeddings = tf.get_variable(\n \"input_layer/movie_id_embedding/embedding_weights\")\n\n # ========================= Complete this section ============================\n # train_loss =\n # test_loss =\n # test_precision_at_10 =\n # ============================================================================\n\n metrics = (\n {\"train_loss\": train_loss, \"test_loss\": test_loss},\n {\"test_precision_at_10\": test_precision_at_10}\n )\n embeddings = {\"movie_id\": movie_embeddings}\n return CFModel(embeddings, train_loss, metrics)",
"_____no_output_____"
],
[
"# @title Solution\n\ndef build_softmax_model(rated_movies, embedding_cols, hidden_dims):\n \"\"\"Builds a Softmax model for MovieLens.\n Args:\n rated_movies: DataFrame of traing examples.\n embedding_cols: A dictionary mapping feature names (string) to embedding\n column objects. This will be used in tf.feature_column.input_layer() to\n create the input layer.\n hidden_dims: int list of the dimensions of the hidden layers.\n Returns:\n A CFModel object.\n \"\"\"\n def create_network(features):\n \"\"\"Maps input features dictionary to user embeddings.\n Args:\n features: A dictionary of input string tensors.\n Returns:\n outputs: A tensor of shape [batch_size, embedding_dim].\n \"\"\"\n # Create a bag-of-words embedding for each sparse feature.\n inputs = tf.feature_column.input_layer(features, embedding_cols)\n # Hidden layers.\n input_dim = inputs.shape[1].value\n for i, output_dim in enumerate(hidden_dims):\n w = tf.get_variable(\n \"hidden%d_w_\" % i, shape=[input_dim, output_dim],\n initializer=tf.truncated_normal_initializer(\n stddev=1./np.sqrt(output_dim))) / 10.\n outputs = tf.matmul(inputs, w)\n input_dim = output_dim\n inputs = outputs\n return outputs\n\n train_rated_movies, test_rated_movies = split_dataframe(rated_movies)\n train_batch = make_batch(train_rated_movies, 200)\n test_batch = make_batch(test_rated_movies, 100)\n\n with tf.variable_scope(\"model\", reuse=False):\n # Train\n train_user_embeddings = create_network(train_batch)\n train_labels = select_random(train_batch[\"label\"])\n with tf.variable_scope(\"model\", reuse=True):\n # Test\n test_user_embeddings = create_network(test_batch)\n test_labels = select_random(test_batch[\"label\"])\n movie_embeddings = tf.get_variable(\n \"input_layer/movie_id_embedding/embedding_weights\")\n\n test_loss = softmax_loss(\n test_user_embeddings, movie_embeddings, test_labels)\n train_loss = softmax_loss(\n train_user_embeddings, movie_embeddings, train_labels)\n _, test_precision_at_10 = tf.metrics.precision_at_k(\n labels=test_labels,\n predictions=tf.matmul(test_user_embeddings, movie_embeddings, transpose_b=True),\n k=10)\n\n metrics = (\n {\"train_loss\": train_loss, \"test_loss\": test_loss},\n {\"test_precision_at_10\": test_precision_at_10}\n )\n embeddings = {\"movie_id\": movie_embeddings}\n return CFModel(embeddings, train_loss, metrics)",
"_____no_output_____"
]
],
[
[
"### Train the Softmax model\n\nWe are now ready to train the softmax model. You can set the following hyperparameters:\n- learning rate\n- number of iterations. Note: you can run `softmax_model.train()` again to continue training the model from its current state.\n- input embedding dimensions (the `input_dims` argument)\n- number of hidden layers and size of each layer (the `hidden_dims` argument)\n\nNote: since our input features are string-valued (movie_id, genre, and year), we need to map them to integer ids. This is done using [`tf.feature_column.categorical_column_with_vocabulary_list`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), which takes a vocabulary list specifying all the values the feature can take. Then each id is mapped to an embedding vector using [`tf.feature_column.embedding_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column).\n",
"_____no_output_____"
]
],
[
[
"# Create feature embedding columns\ndef make_embedding_col(key, embedding_dim):\n categorical_col = tf.feature_column.categorical_column_with_vocabulary_list(\n key=key, vocabulary_list=list(set(movies[key].values)), num_oov_buckets=0)\n return tf.feature_column.embedding_column(\n categorical_column=categorical_col, dimension=embedding_dim,\n # default initializer: trancated normal with stddev=1/sqrt(dimension)\n combiner='mean')\n\nwith tf.Graph().as_default():\n softmax_model = build_softmax_model(\n rated_movies,\n embedding_cols=[\n make_embedding_col(\"movie_id\", 35),\n make_embedding_col(\"genre\", 3),\n make_embedding_col(\"year\", 2),\n ],\n hidden_dims=[35])\n\nsoftmax_model.train(\n learning_rate=8., num_iterations=3000, optimizer=tf.train.AdagradOptimizer)",
"_____no_output_____"
]
],
[
[
"### Inspect the embeddings\n\nWe can inspect the movie embeddings as we did for the previous models. Note that in this case, the movie embeddings are used at the same time as input embeddings (for the bag of words representation of the user history), and as softmax weights.",
"_____no_output_____"
]
],
[
[
"movie_neighbors(softmax_model, \"Aladdin\", DOT)\nmovie_neighbors(softmax_model, \"Aladdin\", COSINE)",
"_____no_output_____"
],
[
"movie_embedding_norm([reg_model, softmax_model])",
"_____no_output_____"
],
[
"tsne_movie_embeddings(softmax_model)",
"_____no_output_____"
]
],
[
[
"## Congratulations!\n\nYou have completed this Colab notebook.\n\nIf you would like to further explore these models, we encourage you to try different hyperparameters and observe how this affects the quality of the model and the structure of the embedding space. Here are some suggestions:\n- Change the embedding dimension.\n- In the softmax model: change the number of hidden layers, and the input features. For example, you can try a model with no hidden layers, and only the movie ids as inputs.\n- Using other similarity measures: In this Colab notebook, we used dot product $d(u, V_j) = \\langle u, V_j \\rangle$ and cosine $d(u, V_j) = \\frac{\\langle u, V_j \\rangle}{\\|u\\|\\|V_j\\|}$, and discussed how the norms of the embeddings affect the recommendations. You can also try other variants which apply a transformation to the norm, for example $d(u, V_j) = \\frac{\\langle u, V_j \\rangle}{\\|V_j\\|^\\alpha}$.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7b7e277a3bafdf2a8384eebb4b217498b3575f3 | 9,442 | ipynb | Jupyter Notebook | learntools/machine_learning/nbs/tut3.ipynb | rosbo/learntools | 8ee404ecf29c029fa14d17eb8bf5b3d2aad62b8a | [
"Apache-2.0"
] | 1 | 2020-06-24T18:25:31.000Z | 2020-06-24T18:25:31.000Z | learntools/machine_learning/nbs/tut3.ipynb | kylehiroyasu/learntools | fdef51d9abbfadf57195ddb5b918077f8f93a14f | [
"Apache-2.0"
] | null | null | null | learntools/machine_learning/nbs/tut3.ipynb | kylehiroyasu/learntools | fdef51d9abbfadf57195ddb5b918077f8f93a14f | [
"Apache-2.0"
] | null | null | null | 34.713235 | 311 | 0.642343 | [
[
[
"**[Machine Learning Course Home Page](kaggle.com/learn/machine-learning).**\n\n---\n\n# Selecting Data for Modeling\nYour dataset had too many variables to wrap your head around, or even to print out nicely. How can you pare down this overwhelming amount of data to something you can understand?\n\nWe'll start by picking a few variables using our intuition. Later courses will show you statistical techniques to automatically prioritize variables.\n\nTo choose variables/columns, we'll need to see a list of all columns in the dataset. That is done with the **columns** property of the DataFrame (the bottom line of code below).\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nmelbourne_file_path = '../input/melbourne-housing-snapshot/melb_data.csv'\nmelbourne_data = pd.read_csv(melbourne_file_path) \nmelbourne_data.columns",
"_____no_output_____"
],
[
"# The Melbourne data has some missing values (some houses for which some variables weren't recorded.)\n# We'll learn to handle missing values in a later tutorial. \n# Your Iowa data doesn't have missing values in the columns you use. \n# So we will take the simplest option for now, and drop houses from our data. \n# Don't worry about this much for now, though the code is:\n\n# dropna drops missing values (think of na as \"not available\")\nmelbourne_data = melbourne_data.dropna(axis=0)",
"_____no_output_____"
]
],
[
[
"There are many ways to select a subset of your data. The [Pandas Course](https://www.kaggle.com/learn/pandas) covers these in more depth, but we will focus on two approaches for now.\n\n1. Dot notation, which we use to select the \"prediction target\"\n2. Selecting with a column list, which we use to select the \n\n## Selecting The Prediction Target \nYou can pull out a variable with **dot-notation**. This single column is stored in a **Series**, which is broadly like a DataFrame with only a single column of data. \n\nWe'll use the dot notation to select the column we want to predict, which is called the **prediction target**. By convention, the prediction target is called **y**. So the code we need to save the house prices in the Melbourne data is",
"_____no_output_____"
]
],
[
[
"y = melbourne_data.Price",
"_____no_output_____"
]
],
[
[
"# Choosing \"Features\"\nThe columns that are inputted into our model (and later used to make predictions) are called \"features.\" In our case, those would be the columns used to determine the home price. Sometimes, you will use all columns except the target as features. Other times you'll be better off with fewer features. \n\nFor now, we'll build a model with only a few features. Later on you'll see how to iterate and compare models built with different features.\n\nWe select multiple features by providing a list of column names inside brackets. Each item in that list should be a string (with quotes).\n\nHere is an example:",
"_____no_output_____"
]
],
[
[
"melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'Lattitude', 'Longtitude']",
"_____no_output_____"
]
],
[
[
"By convention, this data is called **X**.",
"_____no_output_____"
]
],
[
[
"X = melbourne_data[melbourne_features]",
"_____no_output_____"
]
],
[
[
"Let's quickly review the data we'll be using to predict house prices using the `describe` method and the `head` method, which shows the top few rows.",
"_____no_output_____"
]
],
[
[
"X.describe()",
"_____no_output_____"
],
[
"X.head()",
"_____no_output_____"
]
],
[
[
"Visually checking your data with these commands is an important part of a data scientist's job. You'll frequently find surprises in the dataset that deserve further inspection.",
"_____no_output_____"
],
[
"---\n# Building Your Model\n\nYou will use the **scikit-learn** library to create your models. When coding, this library is written as **sklearn**, as you will see in the sample code. Scikit-learn is easily the most popular library for modeling the types of data typically stored in DataFrames. \n\nThe steps to building and using a model are:\n* **Define:** What type of model will it be? A decision tree? Some other type of model? Some other parameters of the model type are specified too.\n* **Fit:** Capture patterns from provided data. This is the heart of modeling.\n* **Predict:** Just what it sounds like\n* **Evaluate**: Determine how accurate the model's predictions are.\n\nHere is an example of defining a decision tree model with scikit-learn and fitting it with the features and target variable.",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeRegressor\n\n# Define model. Specify a number for random_state to ensure same results each run\nmelbourne_model = DecisionTreeRegressor(random_state=1)\n\n# Fit model\nmelbourne_model.fit(X, y)",
"_____no_output_____"
]
],
[
[
"Many machine learning models allow some randomness in model training. Specifying a number for `random_state` ensures you get the same results in each run. This is considered a good practice. You use any number, and model quality won't depend meaningfully on exactly what value you choose.\n\nWe now have a fitted model that we can use to make predictions.\n\nIn practice, you'll want to make predictions for new houses coming on the market rather than the houses we already have prices for. But we'll make predictions for the first few rows of the training data to see how the predict function works.\n",
"_____no_output_____"
]
],
[
[
"print(\"Making predictions for the following 5 houses:\")\nprint(X.head())\nprint(\"The predictions are\")\nprint(melbourne_model.predict(X.head()))",
"_____no_output_____"
]
],
[
[
"# Your Turn\nTry it out yourself in the **[Model Building Exercise](https://www.kaggle.com/kernels/fork/400771)**\n\n\n---\n**[Course Home Page](https://www.kaggle.com/learn/machine-learning)**\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7b7ef4752af8ce91b0d104bda4e7b445185f2c3 | 782 | ipynb | Jupyter Notebook | In-class/Week_13/in_class_nptes_week_13.ipynb | zachary-trozenski/dat129_ccac | 4740595c9976e944c1be569d246d3d49ce561609 | [
"MIT"
] | null | null | null | In-class/Week_13/in_class_nptes_week_13.ipynb | zachary-trozenski/dat129_ccac | 4740595c9976e944c1be569d246d3d49ce561609 | [
"MIT"
] | null | null | null | In-class/Week_13/in_class_nptes_week_13.ipynb | zachary-trozenski/dat129_ccac | 4740595c9976e944c1be569d246d3d49ce561609 | [
"MIT"
] | null | null | null | 17.772727 | 55 | 0.505115 | [
[
[
"\"\"\"\nWhat's this?\nNotes from the SQLite3 tutorial done in class.\n\"\"\"\n\nimport sqlite3\n\n## make connection and test connectivity\n\n\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
e7b7f24a4c1333a5370839feec4db7a03966d6ae | 25,683 | ipynb | Jupyter Notebook | 13-Assignement_Gian.ipynb | rahuliitb/udemy-feml-challenge | 140cba9d7d76f6485c6a3e3312635884dc4f8728 | [
"BSD-3-Clause"
] | 10 | 2019-10-16T22:27:52.000Z | 2021-09-12T21:42:10.000Z | 13-Assignement_Gian.ipynb | rahuliitb/udemy-feml-challenge | 140cba9d7d76f6485c6a3e3312635884dc4f8728 | [
"BSD-3-Clause"
] | 24 | 2019-10-24T10:41:37.000Z | 2022-01-27T01:28:01.000Z | 13-Assignement_Gian.ipynb | rahuliitb/udemy-feml-challenge | 140cba9d7d76f6485c6a3e3312635884dc4f8728 | [
"BSD-3-Clause"
] | 84 | 2019-10-06T14:43:42.000Z | 2022-03-18T18:28:38.000Z | 65.185279 | 13,972 | 0.790562 | [
[
[
"## Assignment:\n\nBeat the performance of my Lasso regression by **using different feature engineering steps ONLY!!**.\n\nThe performance of my current model, as shown in this notebook is:\n- test rmse: 44798.497576784845\n- test r2: 0.7079639526659389\n\nTo beat my model you will need a test r2 bigger than 0.71 and a rmse smaller than 44798.\n\n\n### Conditions:\n\n- You MUST NOT change the hyperparameters of the Lasso.\n- You MUST use the same seeds in Lasso and train_test_split as I show in this notebook (random_state)\n- You MUST use all the features of the dataset (except Id) - you MUST NOT select features\n\n\n### If you beat my model:\n\nMake a pull request with your notebook to this github repo:\nhttps://github.com/solegalli/udemy-feml-challenge\n\nRemember that you need to fork this repo first, upload your winning notebook to your repo, and then make a PR (pull request) to my repo. I will then revise and accept the PR, which will appear in my repo and be available to all the students in the course. This way, other students can learn from your creativity when transforming the variables in your dataset. ",
"_____no_output_____"
],
[
"# Summary of my results\n\nMain changes:\n- calculate `elapsed_years` with respect to `YearBuilt` instead of `YrSold`\n- OneHot encoding of categorical variables\n- do not discretize continuous numerical variables\n- used ScikitLearn instead of Feature-Engine\n\nResults on the test set:\n- rmse = 38063.04673161993\n- r2 = 0.7891776453011499 \n\n",
"_____no_output_____"
],
[
"## House Prices dataset",
"_____no_output_____"
]
],
[
[
"from math import sqrt\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# for the model\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import Lasso\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.metrics import mean_squared_error, r2_score\n\n# for feature engineering\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler, OneHotEncoder, PowerTransformer\n# from feature_engine import missing_data_imputers as mdi\n# from feature_engine import discretisers as dsc\n# from feature_engine import categorical_encoders as ce",
"_____no_output_____"
]
],
[
[
"### Load Datasets",
"_____no_output_____"
]
],
[
[
"# load dataset\n\ndata = pd.read_csv('../houseprice.csv')",
"_____no_output_____"
],
[
"# make lists of variable types\n\ncategorical_vars = [var for var in data.columns if data[var].dtype == 'O']\n\nyear_vars = [var for var in data.columns if 'Yr' in var or 'Year' in var]\n\ndiscrete_vars = [\n var for var in data.columns if data[var].dtype != 'O'\n and len(data[var].unique()) < 15 and var not in year_vars\n]\n\nnumerical_vars = [\n var for var in data.columns if data[var].dtype != 'O'\n if var not in discrete_vars and var not in ['Id', 'SalePrice']\n and var not in year_vars\n]\n\nprint('There are {} continuous variables'.format(len(numerical_vars)))\nprint('There are {} discrete variables'.format(len(discrete_vars)))\nprint('There are {} temporal variables'.format(len(year_vars)))\nprint('There are {} categorical variables'.format(len(categorical_vars)))",
"There are 19 continuous variables\nThere are 13 discrete variables\nThere are 4 temporal variables\nThere are 43 categorical variables\n"
]
],
[
[
"### Separate train and test set",
"_____no_output_____"
]
],
[
[
"# IMPORTANT: keep the random_state to zero for reproducibility\n# Let's separate into train and test set\n\nX_train, X_test, y_train, y_test = train_test_split(data.drop(\n ['Id', 'SalePrice'], axis=1),\n data['SalePrice'],\n test_size=0.1,\n random_state=0)",
"_____no_output_____"
],
[
"# calculate elapsed time\n\ndef elapsed_years(df, var):\n # capture difference between year variable and year the house was *built*\n \n df[var] = df[var] - df['YearBuilt']\n return df\n\nfor var in ['YrSold', 'YearRemodAdd', 'GarageYrBlt']:\n X_train = elapsed_years(X_train, var)\n X_test = elapsed_years(X_test, var)\n \n# drop YrSold\nX_train.drop('YearBuilt', axis=1, inplace=True)\nX_test.drop('YearBuilt', axis=1, inplace=True)\n\n# capture the column names for use later in the notebook\nfinal_columns = X_train.columns",
"_____no_output_____"
]
],
[
[
"## Feature Engineering Pipeline",
"_____no_output_____"
]
],
[
[
"## functions to encode rare categories\ndef find_non_rare_labels(df, variable, tolerance):\n \n temp = df.groupby([variable])[variable].count()/len(df)\n non_rare = [x for x in temp.loc[temp>tolerance].index.values]\n \n return non_rare\n\ndef rare_encoding(X_train, X_test, variable, tolerance):\n\n X_train = X_train.copy()\n X_test = X_test.copy()\n\n # find the most frequent category\n frequent_cat = find_non_rare_labels(X_train, variable, tolerance)\n\n # re-group rare labels\n X_train[variable] = np.where(X_train[variable].isin(\n frequent_cat), X_train[variable], 'Rare')\n \n X_test[variable] = np.where(X_test[variable].isin(\n frequent_cat), X_test[variable], 'Rare')\n\n return X_train, X_test\n\n## encoding rare categories\nfor var in categorical_vars+discrete_vars:\n X_train, X_test = rare_encoding(X_train, X_test, var, 0.05)",
"_____no_output_____"
],
[
"## building our pipeline using scikitlearn\n\nnumeric_transformer = Pipeline(steps=[\n ('imputer_num', SimpleImputer(strategy='median')),\n ('scaler', StandardScaler())\n])\n\ncategorical_transformer = Pipeline(steps=[\n ('imputer_cat', SimpleImputer(strategy='constant', fill_value='missing')),\n ('onehot_enc', OneHotEncoder(drop='first'))])\n\ndiscrete_transformer = Pipeline(steps=[\n ('imputer_disc', SimpleImputer(strategy='most_frequent')),\n ('onehot_enc', OneHotEncoder(drop='first'))\n])\n\npreprocessor = ColumnTransformer(transformers=[\n ('num', numeric_transformer, numerical_vars),\n ('cat', categorical_transformer, categorical_vars),\n ('disc', discrete_transformer, discrete_vars)\n])\n\nhouse_pipe = Pipeline(steps=[('preprocessor', preprocessor),\n ('lasso', Lasso(random_state=0))])",
"_____no_output_____"
],
[
"# let's fit the pipeline\nhouse_pipe.fit(X_train, y_train)\n\n# let's get the predictions\nX_train_preds = house_pipe.predict(X_train)\nX_test_preds = house_pipe.predict(X_test)\n\n\n# check model performance:\n\nprint('train mse: {}'.format(mean_squared_error(y_train, X_train_preds)))\nprint('train rmse: {}'.format(sqrt(mean_squared_error(y_train, X_train_preds))))\nprint('train r2: {}'.format(r2_score(y_train, X_train_preds)))\nprint()\nprint('test mse: {}'.format(mean_squared_error(y_test, X_test_preds)))\nprint('test rmse: {}'.format(sqrt(mean_squared_error(y_test, X_test_preds))))\nprint('test r2: {}'.format(r2_score(y_test, X_test_preds)))",
"train mse: 684649908.3271698\ntrain rmse: 26165.815644217357\ntrain r2: 0.8903477989380937\n\ntest mse: 1448795526.4934826\ntest rmse: 38063.04673161993\ntest r2: 0.7891776453011499\n"
]
],
[
[
"We see an improvement on both rmse and r2 score with respect to the baseline as desired :)",
"_____no_output_____"
]
],
[
[
"# plot predictions vs real value\n\nplt.scatter(y_test,X_test_preds)\nplt.xlabel('True Price')\nplt.ylabel('Predicted Price')\nplt.xlim(0,800000)\nplt.ylim(0,800000);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b802ff7958d37af5688140a8f78d9dd2e9dbe4 | 119,324 | ipynb | Jupyter Notebook | notebooks/nb_03_swarm_plots.ipynb | mariolovric/air_poll_lockdown_modelling | 5dfdd218dffd99c3ff8172c07a6f231c00bfcbf3 | [
"MIT"
] | 1 | 2021-07-26T13:03:06.000Z | 2021-07-26T13:03:06.000Z | notebooks/nb_03_swarm_plots.ipynb | mariolovric/air_poll_lockdown_modelling | 5dfdd218dffd99c3ff8172c07a6f231c00bfcbf3 | [
"MIT"
] | null | null | null | notebooks/nb_03_swarm_plots.ipynb | mariolovric/air_poll_lockdown_modelling | 5dfdd218dffd99c3ff8172c07a6f231c00bfcbf3 | [
"MIT"
] | 1 | 2022-01-11T14:30:22.000Z | 2022-01-11T14:30:22.000Z | 183.858243 | 85,520 | 0.868358 | [
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\ngases = ['D_NO', 'D_NO2', 'D_PM10', 'N_NO', 'N_NO2', 'N_Ox', 'N_O3', 'N_PM10',\n 'O_NO', 'O_NO2', 'O_PM10', 'S_NO', 'S_NO2', 'S_Ox', 'S_O3', 'S_PM10',\n 'W_NO', 'W_NO2', 'W_PM10']",
"_____no_output_____"
],
[
"graz_all = pd.read_csv('../data/graz_all_p3.csv', index_col=0)\ngraz_all.index = pd.DatetimeIndex(graz_all.index)\ndymo = graz_all.index.to_series().apply(lambda x: x.strftime('%m-%d'))",
"_____no_output_____"
],
[
"# means = {\n# 'historical' : graz_all[j][(dymo > '03-20')&(dymo < '04-14')].groupby([graz_all.year]).median().values,\n# 'mean-hld-true': 80,\n# 'mean-hld-pred': 70\n# }\nj = 'N_Ox'\ndfff = pd.DataFrame({\n 'Concentration': graz_all[j][(dymo > '03-20')&(dymo < '04-14')].groupby([graz_all.year]).median().values, \n 'Pollutant': 'S_Ox', \n 'Type': 'MEAN'})\n\npredicted_df = pd.DataFrame({\n 'Concentration': [75,76], \n 'Pollutant': 'S_Ox', \n 'Type': ['MEAN-HLD-TRUE', 'MEAN-HLD-PRED']})\n\ndfff = dfff.append(predicted_df, ignore_index = True) \n\nax = sns.swarmplot(x='Pollutant', y='Concentration', data=dfff, hue='Type', orient='v')\n\ndisplay(dfff)",
"_____no_output_____"
],
[
"# samo da stane sve u jedan red\n# vidi tablicu ispod, moze i iz nje",
"_____no_output_____"
],
[
"histmedian = graz_all[gases][(dymo > '03-20')&(dymo < '04-14')].groupby([graz_all.year]).median()\nhistmedian.round(1)",
"_____no_output_____"
],
[
"best_models = pd.read_csv('../results/all_results.csv', index_col=0)\nbest_models.index = pd.DatetimeIndex(best_models.index)\ndisplay(best_models.columns)\n\n# D_NO = N_NO = W_NO = S_NO = O_NO = best_models['no_PRED'][(best_models.index > '2020-03-20')&(best_models.index < '2020-04-14')].median()\n# D_NO2 = N_NO2 = W_NO2 = S_NO2 = O_NO2 = best_models['no2_PRED'][(best_models.index > '2020-03-20')&(best_models.index < '2020-04-14')].median()\n# D_PM10 = N_PM10 = W_PM10 = S_PM10 = O_PM10 = best_models['pm10_PRED'][(best_models.index > '2020-03-20')&(best_models.index < '2020-04-14')].median()\n# N_O3 = S_O3 = best_models['o3_PRED'][(best_models.index > '2020-03-20')&(best_models.index < '2020-04-14')].median()\n# N_Ox = S_Ox = np.nan",
"_____no_output_____"
],
[
"swarmplot_data = pd.DataFrame({'Concentration': [],'Pollutant': [], 'Type': []})\nfor gas in gases:\n\n swarmplot_data = swarmplot_data.append(pd.DataFrame({\n 'Concentration': graz_all[gas][(dymo > '03-20')&(dymo < '04-14')].groupby([graz_all.year]).median().values, \n 'Pollutant': gas, \n 'Type': 'MEAN'}), ignore_index=True)\n\n swarmplot_data = swarmplot_data.append(pd.DataFrame({\n 'Concentration': graz_all[gas][(graz_all.index > '2020-03-20')&(graz_all.index < '2020-04-14')].groupby([graz_all.year]).median().values, \n 'Pollutant': gas, \n 'Type': 'MEAN-HLD-TRUE'}), ignore_index=True)\n\n swarmplot_data = swarmplot_data.append(pd.DataFrame({\n 'Concentration': [best_models['{}_PRED'.format(gas)][(best_models.index > '2020-03-20')&(best_models.index < '2020-04-14')].median()], \n 'Pollutant': gas, \n 'Type': 'MEAN-HLD-PRED'}), ignore_index=True)\n\nswarmplot_data['Category'] = 'Category'\n\n# sns.set_style(\"whitegrid\"),\nsns.set(style='whitegrid', font_scale=1.3)\n# markers = {\"MEAN\": \"s\", \"MEAN-HLD-PRED\": \"X\", \"MEAN-HLD-TRUE\": '<'}\n\ng = sns.catplot(x='Category', y='Concentration', hue='Type', col=\"Pollutant\", \n data=swarmplot_data, kind=\"swarm\", height=4, aspect=.3, col_wrap=10, legend=False, s=8, dodge=False);\n\ng.set(xlabel = '')\ng.set_xticklabels(ax.get_xticklabels(), fontsize=0)\n# g.set_yticklabels(ax.get_yticklabels(), fontsize=12)\n# plt.tight_layout()\ng.set_titles(\"{col_name}\")\ng.set_ylabels(\"Concentration - [mju]g/m³\")\n \n\nplt.legend(bbox_to_anchor=(1.1, .9), loc=2, borderaxespad=0.) #, fontsize=12\nplt.subplots_adjust(hspace = 0.2, wspace = 0.1)\n\n#g.savefig('../results/201x-mean-swarmplot.tiff')\n\n\n",
"C:\\Users\\mlovric\\Anaconda3\\envs\\envphd\\lib\\site-packages\\seaborn\\axisgrid.py:861: UserWarning: Tight layout not applied. tight_layout cannot make axes width small enough to accommodate all axes decorations\n self.fig.tight_layout()\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b8059c3b1706b284b0b83920a3ddfa93933584 | 15,957 | ipynb | Jupyter Notebook | 01_Spark_DataFrames/.ipynb_checkpoints/02_DataFrame_Basic_Operations-checkpoint.ipynb | zubie7a/Spark_And_Python_For_Big_Data | 6e5e56e439ea87851ee6720bb46d6f70400ad7f5 | [
"MIT"
] | 1 | 2020-09-14T20:26:20.000Z | 2020-09-14T20:26:20.000Z | 01_Spark_DataFrames/.ipynb_checkpoints/02_DataFrame_Basic_Operations-checkpoint.ipynb | zubie7a/Spark_And_Python_For_Big_Data | 6e5e56e439ea87851ee6720bb46d6f70400ad7f5 | [
"MIT"
] | null | null | null | 01_Spark_DataFrames/.ipynb_checkpoints/02_DataFrame_Basic_Operations-checkpoint.ipynb | zubie7a/Spark_And_Python_For_Big_Data | 6e5e56e439ea87851ee6720bb46d6f70400ad7f5 | [
"MIT"
] | 3 | 2021-05-17T04:46:33.000Z | 2022-03-02T12:31:41.000Z | 50.179245 | 1,622 | 0.592342 | [
[
[
"# Spark imports.\nfrom pyspark.sql import SparkSession\nimport pyspark.sql.functions as f",
"_____no_output_____"
],
[
"# Start the Spark session.\nspark = SparkSession.builder.appName('ops').getOrCreate()",
"_____no_output_____"
],
[
"# We can infer the schema/types (only in CSV), and header tells us\n# that the first row are the names of the columns.\ndf = spark.read.csv('Data/DataFrames/appl_stock.csv', inferSchema=True, header=True)",
"_____no_output_____"
],
[
"# Number of rows we read.\ndf.count()",
"_____no_output_____"
],
[
"# See what schema was inferred (together with column names from row).\ndf.printSchema()",
"root\n |-- Date: timestamp (nullable = true)\n |-- Open: double (nullable = true)\n |-- High: double (nullable = true)\n |-- Low: double (nullable = true)\n |-- Close: double (nullable = true)\n |-- Volume: integer (nullable = true)\n |-- Adj Close: double (nullable = true)\n\n"
],
[
"# We can see more detailed what a row object contains, for example the\n# 'Date' field is a datetime object with its parameters, when printed\n# we'll see it converted. Head 3 gets the first 3 elements and then\n# we extract the first element of that list.\ndf.head(3)[0]",
"_____no_output_____"
],
[
"# We'll see the element at the top, notice that the datetime object\n# has been converted into a readable string representation.\ndf.show(1)",
"+-------------------+----------+----------+------------------+----------+---------+---------+\n| Date| Open| High| Low| Close| Volume|Adj Close|\n+-------------------+----------+----------+------------------+----------+---------+---------+\n|2010-01-04 00:00:00|213.429998|214.499996|212.38000099999996|214.009998|123432400|27.727039|\n+-------------------+----------+----------+------------------+----------+---------+---------+\nonly showing top 1 row\n\n"
],
[
"# The best part of working with dataframes is being able to filter\n# data based on certain conditions, which can be similar to operations\n# we do when operating data storages.\n\n# filter function call with a SQL like syntax. But instead lets use\n# data frames operators for the rest of the course!\ndf.filter(\"Close < 500\").select(['Open', 'Close']).show()",
"+------------------+------------------+\n| Open| Close|\n+------------------+------------------+\n| 213.429998| 214.009998|\n| 214.599998| 214.379993|\n| 214.379993| 210.969995|\n| 211.75| 210.58|\n| 210.299994|211.98000499999998|\n|212.79999700000002|210.11000299999998|\n|209.18999499999998| 207.720001|\n| 207.870005| 210.650002|\n|210.11000299999998| 209.43|\n|210.92999500000002| 205.93|\n| 208.330002| 215.039995|\n| 214.910006| 211.73|\n| 212.079994| 208.069996|\n|206.78000600000001| 197.75|\n|202.51000200000001| 203.070002|\n|205.95000100000001| 205.940001|\n| 206.849995| 207.880005|\n| 204.930004| 199.289995|\n| 201.079996| 192.060003|\n|192.36999699999998| 194.729998|\n+------------------+------------------+\nonly showing top 20 rows\n\n"
],
[
"# filter function call with dataframe operations.\ndf.filter(df['Close'] < 500).select('Volume').count()",
"_____no_output_____"
],
[
"# For multiple conditions, you can't use regular 'and' 'or'.\n# ValueError: Cannot convert column into bool: please use \n# '&' for 'and', '|' for 'or', '~' for 'not' when building\n# DataFrame boolean expressions.\n# Also remember to put all conditions in parentheses.\ndf.filter( (f.col('Close') < 200) & ~(f.col('Open') > 200) ).count()",
"_____no_output_____"
],
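[
"# Added sketch (not from the original notebook): two other condition helpers that\n# combine with '&', '|' and '~' in exactly the same way.\n# between() keeps rows whose value lies in a range; isin() matches a set of values.\ndf.filter(f.col('Close').between(190, 200)).count()",
"_____no_output_____"
],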
[
"# Collect will put results into a list of Row objects.\nresults = df.filter( f.col('Low') == 197.16 ).collect()\nprint(results[0])",
"Row(Date=datetime.datetime(2010, 1, 22, 0, 0), Open=206.78000600000001, High=207.499996, Low=197.16, Close=197.75, Volume=220441900, Adj Close=25.620401)\n"
],
[
"# This will convert the Row object into a dictionary!\nresults[0].asDict()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b808a4460616da344d45d8ed5355b5fd4e3fa1 | 304,878 | ipynb | Jupyter Notebook | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 | c25de7d0c654c352c63dc072ac172e069e0e0799 | [
"CC-BY-4.0"
] | 1 | 2021-04-25T14:39:33.000Z | 2021-04-25T14:39:33.000Z | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 | c25de7d0c654c352c63dc072ac172e069e0e0799 | [
"CC-BY-4.0"
] | 1 | 2021-05-04T14:22:17.000Z | 2021-05-04T14:22:17.000Z | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 | c25de7d0c654c352c63dc072ac172e069e0e0799 | [
"CC-BY-4.0"
] | 1 | 2021-08-13T02:38:28.000Z | 2021-08-13T02:38:28.000Z | 304,878 | 304,878 | 0.835295 | [
[
[
"# Combining and merging dataframes",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"# Connect my Google Drive to Google Colab\nfrom google.colab import drive\ndrive.mount ('/content/gdrive')",
"Mounted at /content/gdrive\n"
],
[
"# Change the working directory -Alex's version as he has a shared folder\n# %cd /content/gdrive/MyDrive/swd1a-python-2021-10",
"/content/gdrive/.shortcut-targets-by-id/1c-UzOKbLCZjy_P7oCGtvGKn-RoOvpDyC/swd1a-python-2021-10\n"
],
[
"# Change the working directory - Martin's version\n%cd /content/gdrive/MyDrive/Colab Notebooks/arc_training/swd1a-python-2021-10",
"/content/gdrive/MyDrive/Colab Notebooks/arc_training/swd1a-python-2021-10\n"
],
[
"# Check the working directory\n!pwd",
"/content/gdrive/MyDrive/Colab Notebooks/arc_training/swd1a-python-2021-10\n"
],
[
"# Check the contents\n! ls -l",
"total 394\n-rw------- 1 root root 35824 Oct 18 13:50 010_starting_with_python.ipynb\n-rw------- 1 root root 96573 Oct 25 14:07 020_starting_with_data.ipynb\n-rw------- 1 root root 216480 Oct 25 15:00 030_indexing_and_types.ipynb\n-rw------- 1 root root 49257 Nov 1 13:56 040_dataframes.ipynb\ndrwx------ 2 root root 4096 Oct 18 14:12 data\n"
]
],
[
[
"## Combining dataframes",
"_____no_output_____"
]
],
[
[
"# import pandas\n\nimport pandas as pd \n\nsurveys_df = pd.read_csv(\"data/surveys.csv\", keep_default_na=False, na_values=[\"\"])",
"_____no_output_____"
],
[
"surveys_df.head()",
"_____no_output_____"
],
[
"species_df = pd.read_csv(\"data/species.csv\", keep_default_na=False, na_values=[\"\"])",
"_____no_output_____"
],
[
"species_df.head()",
"_____no_output_____"
],
[
"# make some fragments of surveys_df\n\nsurveys_sub = surveys_df.head(10)\n\nsurveys_sub_last10 = surveys_df.tail(10)",
"_____no_output_____"
],
[
"surveys_sub_last10",
"_____no_output_____"
],
[
"surveys_sub",
"_____no_output_____"
],
[
"surveys_sub_last10.reset_index(drop=True)",
"_____no_output_____"
],
[
"surveys_sub_last10 = surveys_sub_last10.reset_index(drop=True)",
"_____no_output_____"
],
[
"pd.concat([surveys_sub, surveys_sub_last10], axis=0)",
"_____no_output_____"
],
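[
"# Added note (not from the original lesson): passing ignore_index=True makes concat\n# build a fresh 0..n-1 index directly, avoiding the separate reset_index step used\n# elsewhere in this notebook.\npd.concat([surveys_sub, surveys_sub_last10], axis=0, ignore_index=True)",
"_____no_output_____"
],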
[
"pd.concat([surveys_sub, surveys_sub_last10], axis=1)",
"_____no_output_____"
],
[
"concat_df = pd.concat([surveys_sub, surveys_sub_last10], axis=0)",
"_____no_output_____"
],
[
"concat_df.iloc[0, 1:5]",
"_____no_output_____"
],
[
"concat_df.loc[0, \"hindfoot_length\"]",
"_____no_output_____"
],
[
"concat_df.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"concat_df",
"_____no_output_____"
],
[
"concat_df.loc[0, \"hindfoot_length\"]",
"_____no_output_____"
],
[
"concat_df.to_csv(\"data/master_surveys.csv\", index=False)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"## Joining dataframes together",
"_____no_output_____"
],
[
"Combining DataFrames using a common field is called “joining”. The columns containing the common values are called “join key(s)”. Joining DataFrames in this way is often useful when one DataFrame is a “lookup table” containing additional data that we want to include in the other.\n\nNOTE: This process of joining tables is similar to what we do with tables in an SQL database.\n\nFor example, the `species.csv` file that we’ve been working with is a lookup table. This table contains the genus, species and taxa code for 55 species. The species code is unique for each line. \n\nThese species are identified in our survey data as well using the unique species code. Rather than adding 3 more columns for the genus, species and taxa to each of the 35,549 line Survey data table, we can maintain the shorter table with the species information. When we want to access that information, we can create a query that joins the additional columns of information to the Survey data.\n\nStoring data in this way has many benefits including:\n\n* It ensures consistency in the spelling of species attributes (genus, species and taxa) given each species is only entered once. Imagine the possibilities for spelling errors when entering the genus and species thousands of times!\n* It also makes it easy for us to make changes to the species information once without having to find each instance of it in the larger survey data.\n* It optimises the size of our data, we can reduce duplication and by doing so reduce the opportunity for some types of error to appear.",
"_____no_output_____"
]
],
[
[
"# Lets get some data\n# Read in 10 lines of the surveys table\n\n# import pandas first\n\nimport pandas as pd \n\nsurveys_df = pd.read_csv(\"data/surveys.csv\", keep_default_na=False, na_values=[\"\"])\nsurvey_sub = surveys_df.head(10)\n",
"_____no_output_____"
],
[
"# Grab a small subset of the species data\nspecies_sub = pd.read_csv('data/speciesSubset.csv', keep_default_na=False, na_values=[\"\"])",
"_____no_output_____"
],
[
"# Identify join keys\nspecies_sub.columns",
"_____no_output_____"
],
[
"survey_sub.columns",
"_____no_output_____"
]
],
[
[
"### Inner Join\n\nAn inner join combines two DataFrames based on a join key and returns a new DataFrame that contains only those rows that have matching values in both of the original DataFrames.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# We do an inner join with the pandas 'merge' function\n\nmerged_inner = pd.merge(left=survey_sub, right=species_sub, left_on='species_id', right_on='species_id')",
"_____no_output_____"
],
[
"# Take a look at the data\nmerged_inner.shape",
"_____no_output_____"
],
[
"merged_inner",
"_____no_output_____"
]
],
[
[
"### Left joins\n\nWhat if we want to add information from species_sub to survey_sub without losing any of the information from survey_sub? In this case, we use a different type of join called a “left outer join”, or a “left join”.\n\nLike an inner join, a left join uses join keys to combine two DataFrames. Unlike an inner join, a left join will return all of the rows from the left DataFrame, even those rows whose join key(s) do not have values in the right DataFrame. Rows in the left DataFrame that are missing values for the join key(s) in the right DataFrame will simply have null (i.e., NaN or None) values for those columns in the resulting joined DataFrame.\n\nNote: a left join will still discard rows from the right DataFrame that do not have values for the join key(s) in the left DataFrame.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"merged_left = pd.merge(left=survey_sub, right=species_sub, how='left', left_on='species_id',\n right_on='species_id')",
"_____no_output_____"
],
[
"merged_left",
"_____no_output_____"
],
[
"# If we wanted to find the rows with missing species data:\nmerged_left [pd.isnull(merged_left.genus)]",
"_____no_output_____"
]
],
[
[
"The pandas merge function supports two other join types:\n\nRight (outer) join: Invoked by passing how='right' as an argument. Similar to a left join, except all rows from the right DataFrame are kept, while rows from the left DataFrame without matching join key(s) values are discarded.\n\nFull (outer) join: Invoked by passing how='outer' as an argument. This join type returns the all pairwise combinations of rows from both DataFrames; i.e., the result DataFrame will NaN where data is missing in one of the dataframes. This join type is very rarely used.",
"_____no_output_____"
]
],
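[
[
"# Added sketch (not part of the original lesson): the right and full outer joins\n# described above, using the same survey_sub / species_sub tables.\n# indicator=True adds a '_merge' column showing which side each row came from.\nmerged_right = pd.merge(left=survey_sub, right=species_sub, how='right', on='species_id')\nmerged_outer = pd.merge(left=survey_sub, right=species_sub, how='outer', on='species_id', indicator=True)\nmerged_outer",
"_____no_output_____"
]
],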
[
[
"# Challenge 1: Distributions",
"_____no_output_____"
],
[
"# Create a new dataframe\n# by joining the contents of the surveys.csv and species.csv tables\nmerged_left = pd.merge (left = surveys_df, right=species_df, how=\"left\",\n on=\"species_id\")",
"_____no_output_____"
],
[
"merged_left.shape",
"_____no_output_____"
],
[
"# Calculate and plot distribution of:\n# 1. taxa per plot (number of species of each taxa per plot)\nmerged_left.groupby([\"plot_id\"])[\"taxa\"].nunique().plot(kind=\"bar\");",
"_____no_output_____"
],
[
"# 2. taxa by sex by plot\n# Replace an NaN values of sex with a more meaningful indeterminate value\nmerged_left.loc[merged_left[\"sex\"].isnull(), \"sex\"] = 'M|F'",
"_____no_output_____"
],
[
"# Number of taxa for each plot/sex combination\nntaxa_sex_site = merged_left.groupby(['plot_id', 'sex'])['taxa'].nunique().reset_index(level=1)",
"_____no_output_____"
],
[
"ntaxa_sex_site = ntaxa_sex_site.pivot_table (values = 'taxa', columns='sex', index=ntaxa_sex_site.index)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"ntaxa_sex_site.plot(kind='bar', legend=False)\nplt.legend(loc='upper center', ncol=3, bbox_to_anchor=(0.5, 1.08), fontsize='small', frameon=False);",
"_____no_output_____"
],
[
"# Challenge 2: Diversity Index",
"_____no_output_____"
]
],
[
[
"You can calculate a biodiversity index as:\nthe number of species in the plot / the total number of individuals in the plot = Biodiversity index",
"_____no_output_____"
]
],
[
[
"plot_info = pd.read_csv(\"data/plots.csv\")\nplot_info.groupby(\"plot_type\").count()",
"_____no_output_____"
],
[
"# Diversity index\nmerged_site_type = pd.merge(merged_left, plot_info, on='plot_id')\n# For each plot, get the number of species for each plot\nnspecies_site = merged_site_type.groupby([\"plot_id\"])[\"species\"].nunique().rename(\"nspecies\")\n# For each plot, get the number of individuals\nnindividuals_site = merged_site_type.groupby([\"plot_id\"]).count()['record_id'].rename(\"nindiv\")\n# combine the two series\ndiversity_index = pd.concat([nspecies_site, nindividuals_site], axis=1)",
"_____no_output_____"
],
[
"diversity_index",
"_____no_output_____"
],
[
"# calculate the diversity index\ndiversity_index['diversity'] = diversity_index['nspecies']/diversity_index['nindiv']",
"_____no_output_____"
],
[
"# Bar chart\ndiversity_index['diversity'].plot(kind=\"barh\");",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b812d9959fff5f566dd3092d81f6a7f59396a2 | 293,616 | ipynb | Jupyter Notebook | Exploration_Py.ipynb | gregunz/TimeSeries2018 | 7e07e8ed6e8abb1b7376f03f85756f3533eb5c84 | [
"MIT"
] | null | null | null | Exploration_Py.ipynb | gregunz/TimeSeries2018 | 7e07e8ed6e8abb1b7376f03f85756f3533eb5c84 | [
"MIT"
] | null | null | null | Exploration_Py.ipynb | gregunz/TimeSeries2018 | 7e07e8ed6e8abb1b7376f03f85756f3533eb5c84 | [
"MIT"
] | null | null | null | 1,183.935484 | 88,274 | 0.947108 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\" style=\"margin-top: 1em;\"><ul class=\"toc-item\"><li><ul class=\"toc-item\"><li><span><a href=\"#->-time-serie\" data-toc-modified-id=\"->-time-serie-0.1\">-> time serie</a></span></li><li><span><a href=\"#outlier-detection\" data-toc-modified-id=\"outlier-detection-0.2\">outlier detection</a></span></li></ul></li></ul></div>",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\nfrom statsmodels.graphics import tsaplots\n\n%matplotlib inline",
"C:\\Users\\cleme\\Anaconda3\\lib\\site-packages\\statsmodels\\compat\\pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.\n from pandas.core import datetools\n"
],
[
"sns.set(rc={\"figure.figsize\": (15, 6)})\nsns.set_palette(sns.color_palette(\"Set2\", 10))\nsns.set_style(\"whitegrid\")",
"_____no_output_____"
],
[
"CO2_measurement = pd.read_csv('data/CO2_sensor_measurements.csv', sep='\\t')\nCO2_measurement['timestamp'] = pd.to_datetime(CO2_measurement['timestamp'])\nCO2_measurement.set_index(['LocationName', 'SensorUnit_ID', 'timestamp'], inplace=True);",
"_____no_output_____"
]
],
[
[
"## -> time serie",
"_____no_output_____"
]
],
[
[
"choosen_location = 'AJGR'\nchoosen_id = 1122\n\ntime_serie_dirty = CO2_measurement.loc[choosen_location].loc[choosen_id] ",
"_____no_output_____"
]
],
[
[
"## outlier detection ",
"_____no_output_____"
]
],
[
[
"time_serie = time_serie_dirty.copy()\ntime_serie[time_serie_dirty['CO2'] > 380] = np.nan\ntime_serie = time_serie.interpolate().resample('1H').mean()\n\ntime_serie_dif = time_serie.pct_change().dropna()",
"_____no_output_____"
],
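[
"# Added sketch (not from the original analysis): an alternative way to flag outliers,\n# using a rolling median instead of the hard CO2 > 380 cut above. The window size (12\n# samples) and the 20 ppm threshold are illustrative assumptions, not tuned values.\nrolling_median = time_serie_dirty['CO2'].rolling(12, center=True, min_periods=1).median()\noutlier_mask = (time_serie_dirty['CO2'] - rolling_median).abs() > 20\noutlier_mask.sum()",
"_____no_output_____"
],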
[
"time_serie_dirty.plot()\nplt.savefig('plots/raw_data.eps')\n\ntime_serie.plot()\nplt.savefig('plots/raw_data_no_outliers.eps')\n\ntsaplots.plot_acf(time_serie, lags=100)\nplt.savefig('plots/raw_acf.eps')\n\ntime_serie_dif.plot()\nplt.savefig('plots/dif_data.eps')\n\ntsaplots.plot_acf(time_serie_dif, lags=100)\nplt.savefig('plots/dif_acf.eps')\n\ntsaplots.plot_pacf(time_serie_dif, lags=100)\nplt.savefig('plots/dif_pacf.eps')\n\nplt.show()",
"_____no_output_____"
],
[
"time_serie.to_csv('data/co2_ajgr.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7b815340ff1c3f4586726dfabbf288f19c53c57 | 463,414 | ipynb | Jupyter Notebook | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects | 5fec5c08af87af5511f18f64e5022b2864736124 | [
"MIT"
] | null | null | null | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects | 5fec5c08af87af5511f18f64e5022b2864736124 | [
"MIT"
] | null | null | null | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects | 5fec5c08af87af5511f18f64e5022b2864736124 | [
"MIT"
] | null | null | null | 422.822993 | 45,792 | 0.928448 | [
[
[
"# Homework 7: Kernel K-Means and EM\nThis homework is due on Thursday April 1,2021\n",
"_____no_output_____"
],
[
"## Problem 1: Kernel K-Means\n\nIn this exercise, we will consider how one may go about performing non-linear machine learning by adapting machine learning algorithms that we have discussed in class. We will discuss one particular approach that has been widely used throughout machine learning. Recall the discussion from lecture: we take our feature vectors $\\boldsymbol{x}_1, ..., \\boldsymbol{x}_n$ and apply a non-linear function $\\phi$ to each point to yield $\\phi(\\boldsymbol{x}_1), ..., \\phi(\\boldsymbol{x}_n)$. Then, if we apply a linear machine learning algorithm (e.g., k-means or SVM) on the mapped data, the linear boundary in the mapped space will correspond to a non-linear boundary in the input space.\n\nWe looked at one particular mapping in class. Consider a two-dimensional feature vector $\\boldsymbol{x} = (x_1, x_2)^T$, and define the function $\\phi$ as \n\n\\begin{equation*}\n\\phi(\\boldsymbol{x}) = \\left(\n\\begin{array}{c}\n1 \\\\\n\\sqrt{2} x_1 \\\\\n\\sqrt{2} x_2 \\\\\n\\sqrt{2} x_1 x_2\\\\\nx_1^2\\\\\nx_2^2\n\\end{array} \\right).\n\\end{equation*}\n\nAs discussed in class, the inner product $\\phi(\\boldsymbol{x}_i)^T \\phi(\\boldsymbol{x}_j)$ between two mapped vectors is equal to $(\\boldsymbol{x}_i^T \\boldsymbol{x}_j + 1)^2$; that is, one can compute the inner product between data points in the mapped space without explicitly forming the 6-dimensional mapped vectors for the data. Because applying such a mapping may be computationally expensive, this trick can allow us to run machine learning algorithms in the mapped space without explicitly forming the mappings. For instance, in a k-NN classifier, one must compute the (squared) Euclidean distance between a test point $\\boldsymbol{x}_t$ and a training point $\\boldsymbol{x}_i$. Expanding this distance out yields\n\n\\begin{equation*}\n\\|\\boldsymbol{x}_t - \\boldsymbol{x}_i\\|^2_2 = (\\boldsymbol{x}_t - \\boldsymbol{x}_i)^T (\\boldsymbol{x}_t - \\boldsymbol{x}_i) = \\boldsymbol{x}_t^T \\boldsymbol{x}_t - 2 \\boldsymbol{x}_t^T \\boldsymbol{x}_i + \\boldsymbol{x}_i^T \\boldsymbol{x}_i.\n\\end{equation*}\n\nThen, computing this distance after applying the mapping $\\phi$ would be easy:\n\n\\begin{equation*}\n\\|\\phi(\\boldsymbol{x}_t) - \\phi(\\boldsymbol{x}_i)\\|^2_2 = (\\boldsymbol{x}_t^T \\boldsymbol{x}_t + 1)^2 - 2 (\\boldsymbol{x}_t^T \\boldsymbol{x}_i + 1)^2 + (\\boldsymbol{x}_i^T \\boldsymbol{x}_i + 1)^2.\n\\end{equation*}\n\n**a.** In the example above, the original feature vector was 2-dimensional. Show how to generalize the $\\phi$ mapping to $d$-dimensional vector inputs such that the inner product between mapped vectors is $(\\boldsymbol{x}_i^T \\boldsymbol{x}_j + 1)^2$. Explicitly describe the embedding $\\phi$; what dimensions does it have, and what values will it represent?\n\n**b.** Consider extending the k-means algorithm to discover non-linear boundaries using the above mapping. In the k-means algorithm, the assignment step involves computing $\\|\\boldsymbol{x}_i - \\boldsymbol{\\mu}_j\\|_2^2$ for each point $\\boldsymbol{x}_i$ and each cluster mean $\\boldsymbol{\\mu}_j$. Suppose we map the data via $\\phi$. How would one compute the distance $\\|\\phi(\\boldsymbol{x}_i) - \\boldsymbol{\\mu}_j\\|^2_2$, where now $\\boldsymbol{\\mu}_j$ is the mean of the mapped data points? 
Be careful: one cannot simply compute\n\n\\begin{equation*}\n (\\boldsymbol{x}_i^T \\boldsymbol{x}_i + 1)^2 - 2 (\\boldsymbol{x}_i^T \\boldsymbol{\\mu}_j + 1)^2 + (\\boldsymbol{\\mu}_j^T \\boldsymbol{\\mu}_j + 1)^2.\n\\end{equation*}\n\n**c.** Write out pseudocode for the extension of k-means where this mapping is applied to the data. In your algorithm, be careful not to ever explicitly compute $\\phi(\\boldsymbol{x}_i)$ for any data vector; *only work with inner products in the algorithm.*\n\n**d.** With this new mapping, what properties will the decision surface have (i.e, what could it look like)? Why is this?",
"_____no_output_____"
],
[
"A. \n\n- $\\phi(Xi)^T*\\phi(Xj) = (xi^Txj+1)^2 $\n- xi is a nX1 vector and so is xj \n- doing xi^Txj yeilds a 1x1 scalar \n- doing $\\phi^T\\phi$ yeilds a 1x1 scalar \n- $(xi^Txj+1)^2 =$\n-$ [|1|*|1| = 1 + $\n- $| \\sqrt{2}xi1|*|\\sqrt{2}xj1| = 2 xi1*xj1+$\n- $| \\sqrt{2}xi2|*|\\sqrt{2}xj2| = 2 xi2*xj2+$\n- $| \\sqrt{2}xi1*xi2|*|\\sqrt{2}xj1*xj2| = 2 xi1*xj1*xi2*xj2+$\n- $| xi1^2|*|xj1^2| = xi1^2*xj1^2+$\n- $| xi2^2|*|xj2^2| = xi2^2*xj2^2]$\n- $1+ 2x_{i1}x_{j1}+ 2x_{i2}x_{j2}+2 x_{i1}x_{j1}x_{i2}x_{j2}+x_{i1}^2x_{j1}^2+x_{i2}^2x_{j2}^2$ =\n- $(x_{i1}x_{j1}+x_{i2}x_{j2}+1)^2$ \n- $(x_{i1}x_{j1}+x_{i2}x_{j2}+1)(x_{i1}x_{j1}+x_{i2}x_{j2}+1)$ \n- $(x_{i1}x_{j1}+x_{i2}x_{j2}+1)(x_{i1}x_{j1}+x_{i2}x_{j2})(x_{i1}x_{j1}+x_{i2}x_{j2})$ \n- $(x_{i1}x_{j1}+x_{i2}x_{j2}+1)(x_{i1}^2x_{j1}^2+x_{i2}x_{j2}x_{i1}x_{j1})(x_{i2}x_{j2})(x_{i2}x_{j2})$ \n- $(x_{i1}x_{j1}+x_{i2}x_{j2}+1+x_{i1}^2x_{j1}^2+x_{i2}x_{j2}x_{i1}x_{j1}+x_{i2}^2x_{j2}^2$ foil error but \n-$1+ 2x_{i1}x_{j1}+ 2x_{i2}x_{j2}+2 x_{i1}x_{j1}x_{i2}x_{j2}+x_{i1}^2x_{j1}^2+x_{i2}^2x_{j2}^2$ \n- the key here is to make the mapping of $\\phi =$ the inner terms of a foil \n- so $\\phi_d = $ \\begin{equation*}\n\\phi(\\boldsymbol{x}) = \\left(\n\\begin{array}{c}\n1 \\\\\n\\sqrt{d} x_1 \\\\\n\\sqrt{d} x_2 \\\\\n...\\\\\n\\sqrt{d} x_d \\\\\n\\sqrt{d*(1)} x_1 x_2\\\\\n\\sqrt{d*(1)} x_1 x_3\\\\\n...\\\\\n\\sqrt{d*(1)} x_1 x_d\\\\\n\\sqrt{d*(2)} x_1 x_2 x_3\\\\\n\\sqrt{d*(2)} x_1 x_2 x_4\\\\\n...\\\\\n\\sqrt{d*(2)} x_1 x_2 x_d\\\\\n...\\\\\n\\sqrt{d*(d-1)} x_1 x_2 x_3...x_d\\\\\nx_1^d\\\\\nx_2^d\\\\\n...\\\\\nx_d^d \\\\\n\\end{array} \\right).\n\\end{equation*}\n\n\n\nB.\n - so let $\\mu_r$ be the average in the non-$\\phi$(?) domain\n - $\\mu^T\\mu =$ scalar but also $ = ( \\mu_r^T\\mu_r+1)^2$\n - $\\sqrt{\\mu^T\\mu} = ( \\mu_r^T\\mu_r +1)$\n - $\\sqrt{\\mu^T\\mu}-1 = \\mu_r$^T$\\mu_r$\n - $ \\sqrt{\\mu^T\\mu}-1 $\n - that was dumb \n - $||\\phi(xi) - \\mu||_2^2=(\\phi(xi)^T\\phi(xi)) -2(\\phi(xi)^T\\mu)+(\\mu^t\\mu) $\n - *$ (xi^Txi+1)^2$ is quicker probably \n\nC.\n- set k random means in var M = which is kxd where d is the number of features \n\n- obj = -1000// kmeans objective function\n-current = 0 \n-thresholdval = 5 \n- labels =zeros(n) - 1\n-while abs(current- obj) =>thresholdval\n- - obj = KmeansObjectiveF(clusters, M, Data)\n- -for I in k: \n x = X(of indexs labels == I)\n M(I) = mean(x)\n- - for i in n data points: \n - - A = (Data(i).T@Data(i) +1)**2//scallar\n - - B = -2(Data(i).T*M.T+1)**2 //1xk\n - - C = (diag([email protected])+1)**2 // 1xk// a diagonal of the kxk matrix\n - - norms = A + B + C // 1Xk matrix \n - - minlabel = mina(A+B+C)\n - - labels(i) = minlabel\n- - current = KmeansObjectiveF(clusters, M, Data)\n\nD.\n\nThe new decsion surface will be a hyper(?)-perabala of degree d. For instance if there were 3 different features it could be a parabolic decsion surface \n\n",
"_____no_output_____"
],
[
"## Problem 2: Expectation-Maximization (E-M) on Gaussian Mixtrue Model\n\nAs you saw in lecture, the expectation-maximization algorithm is an iterative method to find maximum likelihood (ML) estimates of parameters in statistical models. The E-M algorithm alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. This alternation repeats until convergence.\n",
"_____no_output_____"
],
[
"The EM algorithm attempts to find maximum likelihood estimates for models with latent variables. Let X be the entire set of observed variables and Z the entire set of latent variables. Usually we can avoid a compicated expression for MLE when we introduce the latent variable $Z$. \n\n\nIn this problem we will implement E-M algorithm for 2-d Gaussian Mixture. Let's first review the process from 1-d case. Assume we observe $x_1,...,x_n$ from one of $K$ mixture components. Each random variable $x_i$ is associated with a latent variable $z_i$, where $z_{i} \\in\\{1, \\ldots, K\\}$. The mixture weights are defined as $P\\left(x_i\\mid z_{i}=k\\right) = \\pi_k$, where $\\sum_{k=1}^{K} \\pi_{k}=1$. Take 1-d Gaussian Mixtrue Model as an example. We have the conditional distribution $x_{i} \\mid z_{i}=k \\sim N\\left(\\mu_{k}, \\sigma_{k}^{2}\\right)$. $N\\left(\\mu, \\sigma^{2}\\right)$ is the 1-d Gaussian distritbution with pdf $\\frac{1}{\\sqrt{2 \\pi \\sigma^{2}}} \\exp -\\frac{\\left(x_{i}-\\mu\\right)^{2}}{2 \\sigma^{2}}$.\n\n In this 1-d Gaussian case, the unknown parameter $\\Theta$ includes $\\pi, \\mu, \\sigma$. Then the expression of likelihood in termss of $\\pi_k$, $\\mu_k$ and $\\sigma_k$ can be written as:\n $L\\left(x_{1}, \\ldots, x_{n}\\mid\\theta \\right)=\\prod_{i=1}^{n} \\sum_{k=1}^{K} \\pi_{k} N\\left(x_{i} ; \\mu_{k}, \\sigma_{k}^{2}\\right)$\n\nso the log-likelihood is :\n\n$\\ell(\\theta)=\\sum_{i=1}^{n} \\log \\left(\\sum_{k=1}^{K} \\pi_{k} N\\left(x_{i} ; \\mu_{k}, \\sigma_{k}^{2}\\right)\\right)$\n\nThen we can set the partial derivatives of the log-likelihood function over $\\pi_k$, $\\mu_k$ and $\\sigma_k^2$ and set them to zero. Then solve the value of $\\hat{\\pi_k}$, $\\hat{\\mu_k}$ and $\\hat{\\sigma_{k}^{2}}$. When solving it, we set $P\\left(z_{i}=k \\mid x_{i}\\right)=\\frac{P\\left(x_{i} \\mid z_{i}=k\\right) P\\left(z_{i}=k\\right)}{P\\left(x_{i}\\right)}=\\frac{\\pi_{k} N\\left(\\mu_{k}, \\sigma_{k}^{2}\\right)}{\\sum_{k=1}^{K} \\pi_{k} N\\left(\\mu_{k}, \\sigma_{k}\\right)}=\\gamma_{z_{i}}(k)$ as a constant value. Set $N_{k}=\\sum_{i=1}^{n} \\gamma_{z_{i}}(k)$, we have the final expression:\n$$\n\\hat{\\mu_{k}}=\\frac{\\sum_{i=1}^{n} \\gamma_{z_{i}}(k) x_{i}}{\\sum_{i=1}^{n} \\gamma_{z_{i}}(k)}=\\frac{1}{N_{k}} \\sum_{i=1}^{n} \\gamma_{z_{i}}(k) x_{i}\n$$\n$$\n\\hat{\\sigma_{k}^{2}}=\\frac{1}{N_{k}} \\sum_{i=1}^{n} \\gamma_{z_{i}}(k)\\left(x_{i}-\\mu_{k}\\right)^{2}\n$$\n$$\\hat{\\pi}_{k}=\\frac{N_{k}}{n}$$",
"_____no_output_____"
],
[
"Conclusion: we compute the one iteration of EM algorithm.\n1. E-step: Evaluate the posterior probabilities using the current values of the μk’s and σk’s with equation $P\\left(z_{i}=k \\mid x_{i}\\right)=\\frac{P\\left(x_{i} \\mid z_{i}=k\\right) P\\left(z_{i}=k\\right)}{P\\left(x_{i}\\right)}=\\frac{\\pi_{k} N\\left(\\mu_{k}, \\sigma_{k}^{2}\\right)}{\\sum_{k=1}^{K} \\pi_{k} N\\left(\\mu_{k}, \\sigma_{k}\\right)}=\\gamma_{z_{i}}(k)$\n2. M-step: Estimate new parameters $\\hat{\\pi_k}$, $\\hat{\\mu_k}$ and $\\hat{\\sigma_{k}^{2}}$.",
"_____no_output_____"
],
[
"We would like you to perform E-M on a sample 2-d Gaussian mixture model (GMM). Doing this will allow you to prove that your algorithm works, since you already know the parameters of the model. And you will get an intuition from visualizations. Follow the instructions step by step below.",
"_____no_output_____"
]
],
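[
[
"# Added sketch (not part of the original assignment hand-in): a compact NumPy version of\n# the kernel k-means assignment step from Problem 1(c), using the polynomial kernel\n# K(xi, xj) = (xi^T xj + 1)^2 so that phi(x) is never formed explicitly. It could be run\n# on the 2-d data created in the Problem 2 cells below, e.g. kernel_kmeans(data, 3).\nimport numpy as np\n\ndef kernel_kmeans(X, k, n_iter=20, seed=0):\n    n = X.shape[0]\n    K = (X @ X.T + 1) ** 2                  # Gram matrix of the mapped data\n    rng = np.random.default_rng(seed)\n    labels = rng.integers(0, k, size=n)     # random initial assignment\n    for _ in range(n_iter):\n        dists = np.zeros((n, k))\n        for j in range(k):\n            idx = np.where(labels == j)[0]\n            if idx.size == 0:\n                dists[:, j] = np.inf        # empty cluster: never the argmin\n                continue\n            # ||phi(x_i) - mu_j||^2 expressed purely through kernel entries\n            dists[:, j] = (np.diag(K)\n                           - 2 * K[:, idx].mean(axis=1)\n                           + K[np.ix_(idx, idx)].mean())\n        labels = np.argmin(dists, axis=1)\n    return labels",
"_____no_output_____"
]
],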
[
[
"from matplotlib.patches import Ellipse\nfrom scipy.special import logsumexp\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport math",
"_____no_output_____"
]
],
[
[
"**Data creation.** Create 3 2D Gaussian clusters of data, with the following means and covariances:\n\n$\\boldsymbol{\\mu}_1 = [2,2]^T, \\boldsymbol{\\mu}_2 = [-2,0]^T, \\boldsymbol{\\mu}_3 = [0,-2]^T$,\n\n$\\Sigma_1 = [[0.1,0];[0,0.1]]$, $\\Sigma_2 = [[0.2,0];[0,0.2]]$, $\\Sigma_3 = [[1,0.7];[0.7,1]]$ \n\nCreate 50 points in each cluster and plot the data. The combination of these will serve as your Gaussian mixture model. This part is already given to you.",
"_____no_output_____"
]
],
[
[
"# Part a - data creation. This code is from the previous homework. You do not have to edit it.\nnum_pts = 50\nnp.random.seed(10)\nXa = np.random.multivariate_normal([2,2], [[0.1,0],[0,0.1]], num_pts)\nXb = np.random.multivariate_normal([-2,0], [[0.2,0],[0,0.2]], num_pts)\nXc = np.random.multivariate_normal([0,-2], [[1,0.7],[0.7,1]], num_pts)\n\n# Concatenate clusters into one dataset\ndata = np.concatenate((Xa,Xb,Xc),axis=0)\n\nprint(data.shape)\n\n# Plotting\nplt.scatter(data[:,0], data[:,1], s=40, cmap='viridis');\nax = plt.gca()\nax.set_xlim([-5,5])\nax.set_ylim([-5,5])\nplt.title('Multivariate Gaussian - 3 Variables')\nplt.show()",
"(150, 2)\n"
]
],
[
[
"**Fill in the code to complete the EM algorithm given below.** Remember, the EM algorithm is given by a process similar to k-means/DP-means in nature, since it is iterative. However, the actual calculations done are very different. For a Gaussian mixture model, they are described by:\n\n*E-Step (Compute probabilities with given Gaussian parameters.* **This has already been completed for you.**)\n\n\n*M-Step (Update parameters. The subscript k denotes the parameter for a given cluster k, so this is calculated for each cluster.):*\nSimilar from 1-d case\n\\begin{equation*}\nn\\_per\\_cluster = \\sum_{i=1}^{n\\_points} \\gamma_{z_{i}}(k)\n\\end{equation*}\n\n\\begin{equation*}\n\\pi_k = \\frac{n\\_per\\_cluster}{n\\_points}\n\\end{equation*}\n\n\\begin{equation*}\n\\mu_k = \\frac{1}{n\\_per\\_cluster} * \\sum_{i=1}^{n\\_points} \\gamma_{z_{i}}(k) * x_i \n\\end{equation*}\n\n\\begin{equation*}\n\\Sigma_k = \\frac{1}{n\\_per\\_cluster} * \\sum_{i=1}^{n\\_points} \\gamma_{z_{i}}(k) * (x_i - \\mu_k)(x_i - \\mu_k)^T \n\\end{equation*}\n\n\n*Repeat until convergence. To check for convergence, we check if the log-likelihood estimate is close enough to the previous estimate to stop the algorithm. To compute the log-likelihood estimate:*\n\\begin{equation*}\nLL(\\theta) = \\sum_{i=1}^{n\\_points} log \\sum_{k=1}^{K} \\pi_k * \\frac{1}{2\\pi|\\Sigma_k|^\\frac{1}{2}} exp(-0.5*(x_i - \\mu_k)^T\\Sigma_k^{-1}(x_i - \\mu_k))\n\\end{equation*}\n\n*Note that the \"absolute value\" signs around $\\Sigma_j$ are actually indicative of the determinant of the covariance matrix. \n\n**In completing the algorithm below, you will complete the M-Step and the log-likelihood estimate. To compute the log-likelihood, we strongly recommend using `scipy.special.logsumexp`, as it is more numerically stable than manually computing.**",
"_____no_output_____"
]
],
[
[
"\ndef EStep(data, n_points, k, pi, mu, cov):\n ## Performs the expectation (E) step ##\n ## You do not need to edit this function (actually, please do not edit it..)\n # The end result is an n_points x k matrix, where each element is the probability that\n # the ith point will be in the jth cluster.\n \n expectations = np.zeros((n_points, k)) # n_points x k np.array, where each row adds to 1\n denominators = []\n \n for i in np.arange(n_points):\n denominator = 0\n for j in np.arange(k):\n # Calculate denominator, which is a sum over k\n denominator_scale = pi[j] * 1/(2 * math.pi * np.sqrt(np.linalg.det(cov[j])))\n\n denom = denominator_scale * np.exp(-0.5 * (data[i].reshape(2,1) - mu[j]).T @ np.linalg.inv(cov[j]) @ (data[i].reshape(2,1) - mu[j]))\n denominator = np.add(denominator, denom)\n \n denominator = np.asscalar(denominator)\n denominators.append(denominator)\n \n for i in np.arange(n_points):\n numerator = 0\n for j in np.arange(k):\n # Calculate the numerator\n numerator_scale = pi[j] * 1/(2 * math.pi * np.sqrt(np.linalg.det(cov[j])))\n numer = np.exp(-0.5 * (data[i].reshape(2,1) - mu[j]).T @ np.linalg.inv(cov[j]) @ (data[i].reshape(2,1) - mu[j]))\n numerator = numerator_scale * numer\n \n # Set the probability of the ith point for the jth cluster\n expectations[i][j] = numerator/denominators[i]\n \n return expectations\n\n\ndef ExpectationMaximization_GMM(data, n_per_cluster, n_points, k, pi, mu, cov):\n ## Performs expectation-maximization iteratively until convergence is reached ##\n # You do not need to edit this function.\n converged = False\n ML_estimate = 0\n iteration = 0\n \n while not converged:\n iteration +=1\n # E-Step: find probabilities\n expectations = EStep(data, n_points, k, pi, mu, cov)\n \n # M-Step: recompute parameters\n n_per_cluster, pi, mu, cov = MStep(data, n_points, k, expectations)\n\n # Plot the current parameters against the data\n # Ignore this, it just makes it look nice using some cool properties of eigenvectors!\n ## PLOT CODE ##\n lambda_1, v1 = np.linalg.eig(cov[0])\n lambda_1 = np.sqrt(lambda_1)\n lambda_2, v2 = np.linalg.eig(cov[1])\n lambda_2 = np.sqrt(lambda_2)\n lambda_3, v3 = np.linalg.eig(cov[2])\n lambda_3 = np.sqrt(lambda_3)\n\n # Plot data\n fig, ax = plt.subplots(subplot_kw={'aspect': 'equal'}) \n # plt.plot(x_total,y_total,'x')\n plt.scatter(data[:,0], data[:,1], s=40, cmap='viridis');\n \n # Plot ellipses\n ell1 = Ellipse(xy=(mu[0][0], mu[0][1]),\n width=lambda_1[0]*3, height=lambda_1[1]*3,\n angle=np.rad2deg(np.arccos(v1[0, 0])), linewidth=5, edgecolor='red', facecolor='none')\n ax.add_artist(ell1)\n \n ell2 = Ellipse(xy=(mu[1][0], mu[1][1]),\n width=lambda_2[0]*3, height=lambda_2[1]*3,\n angle=np.rad2deg(np.arccos(v2[0, 0])), linewidth=5, edgecolor='green', facecolor='none')\n ax.add_artist(ell2)\n \n ell3 = Ellipse(xy=(mu[2][0], mu[2][1]),\n width=lambda_3[0]*3, height=lambda_3[1]*3,\n angle=np.rad2deg(np.arccos(v3[0, 0])), linewidth=5, edgecolor='yellow', facecolor='none')\n ax.add_artist(ell3)\n \n axe = plt.gca()\n axe.set_xlim([-5,5])\n axe.set_ylim([-5,5])\n plt.title('Multivariate Gaussian - 3 Variables')\n plt.show()\n ## END PLOT CODE ##\n \n # Check for convergence via log likelihood\n old_ML_estimate = np.copy(ML_estimate)\n ML_estimate = loglikelihood(data, n_points, k, pi, mu, cov)\n \n if abs(old_ML_estimate - ML_estimate) < 0.01:\n converged = 1\n \n return mu, cov\n",
"_____no_output_____"
]
],
[
[
"**Perform EM on the GMM you created.** Put it all together! Run the completed EM function on your dataset. (This part is already done for you, just run it and see the output. The expected results are given to you)",
"_____no_output_____"
]
],
[
[
"\ndef MStep(data, n_points, k, expectations):\n ## Performs the maximization (M) step ##\n \n # We clear the parameters completely, since we recompute them each time\n mu = [np.zeros((2,1)) for _ in np.arange(k)] # 3 2x1 np.arrays in a list\n cov = [np.zeros((2,2)) for _ in np.arange(k)] # 3 2x2 np.arrays in a list\n n_per_cluster = [0, 0, 0]\n pi = [0, 0, 0]\n\n ## need step here where you compute yi(k) from \n \n ## YOUR CODE HERE ## \n # print(k,expectations.shape)\n\n n_per_cluster = np.sum(expectations,axis =0 )\n # print(k,n_per_cluster.shape)\n\n # Update number of points in each cluster\n \n # Update mixing weights\n pi = n_per_cluster/n_points\n # print(n_points,pi.shape)\n # Update means\n\n # out should be a 1xk * a 1*k where you want the output to be 1*k \n n,d = data.shape\n interVecSum = np.zeros((d,expectations.shape[1]))\n for i in range(n):\n y = expectations[i,:]\n x = data[i,:]\n y.shape = (y.shape[0],1)\n x.shape = (x.shape[0],1)\n Res = ([email protected])\n \n interVecSum = interVecSum + Res.T\n # print(\"innershape=\",interVecSum.shape)\n outer = (1/n_per_cluster)\n # print(\"innershape=\",outer.shape)\n muNpy = outer*interVecSum# before sum should be a 1xk, inside npsum is a nxk * a n*d where I want each element to multiply with its corisponding element \n # print(\"innershape=\",muNpy.shape) \n\n\n\n # Update covariances\n #covVecSum = np.zeros((d,expectations.shape[1]))\n for i in range(n):\n x = data[i,:]\n x.shape = (x.shape[0],1)\n mux = x - muNpy# should be 3x2 with the resulting diffs. \n for j in range(k):\n kmux = mux[:,j]\n kmux.shape = (kmux.shape[0],1)\n\n newCov = [email protected]\n cov[j] = cov[j]+ newCov\n\n \n cov1st_term =1/ n_per_cluster\n for j in range(k):\n cov[j] = cov[j]*cov1st_term[j]\n mterm = muNpy[:,j]\n mterm.shape = (mterm.shape[0],1)\n mu[j] = mterm\n \n # print(cov[0].shape)\n n_per_cluster = list(n_per_cluster)\n pi = list(pi)\n ## END YOUR CODE HERE ##\n \n return n_per_cluster, pi, mu, cov\n\ndef loglikelihood(data, n_points, k, pi, mu, cov):\n \n #where a is the exponenet and b is the weights \n ## Calculates ML estimate ##\n likelihood = 0\n scale = [] # When using logsumexp the scale is required to be in an array\n exponents = [] # When using logsumexp the exponent is required to be in an array\n\n ## YOUR CODE HERE ##\n logs = np.zeros((n_points,1))\n # firstpart = (1/(2*math.pi))*pi*np.linalg.det(cov)\n # eponentTerm = -0.5* (data-mu)[email protected](cov)@(data-mu)\n # InnerProdMat = firstpart*np.exp(eponentTerm)\n for i in range(n_points):\n constant = (1/(2*math.pi))\n b = np.zeros((k,1))\n a = np.zeros((k,1))\n x = data[i,:]\n x.shape = (x.shape[0],1)\n # print(\"x\",x.shape)\n for j in range(k):\n b[j] = constant * pi[j]*np.linalg.det(cov[j])\n invCov= np.linalg.inv(cov[j])\n # print(\"invCov\",invCov.shape)\n xmu = x - mu[j]\n # print(\"xmu\",xmu.shape)# should be a 2x1 \n toBeExped = -0.5*xmu.T@invCov@xmu\n # print(\"eX\",toBeExped.shape)# should be a 2x1 \n a[j] = np.exp(toBeExped) \n logsumvec = logsumexp(a, b=b)# all the individual points \n logs[i] = logsumvec\n # Compute the log-likelihood estimate\n \n ## END YOUR CODE HERE ##\n l = np.sum(logs)\n # log∑k=1Kπk∗12π|Σk|12exp(−0.5∗(xi−μk)TΣ−1k(xi−μk))\n likelihood = l; # should be a scalar \n return likelihood\n",
"_____no_output_____"
],
[
"# Initialize total number of points (n), number of clusters (k),\n# mixing weights (pi), means (mu) and covariance matrices (cov)\nn_points = 150 # 150 points total\nk = 3 # we know there are 3 clusters\nmu = [(3 - (-3)) * np.random.rand(2,1) + (-3) for _ in np.arange(k)]\ncov = [10 * np.identity(2) for _ in np.arange(k)]\nn_per_cluster = [n_points/k for _ in np.arange(k)] # even split for now\npi = n_per_cluster\n\nmu_estimate, cov_estimate = ExpectationMaximization_GMM(data, n_per_cluster, n_points, k, pi, mu, cov)\nprint(\"The estimates of the parameters of the Gaussians are: \")\nprint(\"Mu:\", mu_estimate)\nprint(\"Covariance:\", cov_estimate)",
"/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:20: DeprecationWarning: np.asscalar(a) is deprecated since NumPy v1.16, use a.item() instead\n"
]
],
[
[
"## Problem 3: Comparison of K-Means and Gaussian Mixture\nWe would like you to perform K-Means and GMM for clustering using sklearn. In this Problem, we can visualize the difference of these two algorithm.\n\nFirst, we can general some clustered data as follows.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\nimport numpy as np\nfrom sklearn.datasets.samples_generator import make_blobs\nX, y_true = make_blobs(n_samples=400, centers=4,\n cluster_std=0.60, random_state=0)\nX = X[:, ::-1] # flip axes for better plotting\nprint(X.shape)\nplt.scatter(X[:, 0], X[:, 1], c=y_true, s=40, cmap='viridis');",
"/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.datasets.samples_generator module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.datasets. Anything that cannot be imported from sklearn.datasets is now part of the private API.\n warnings.warn(message, FutureWarning)\n"
]
],
[
[
"**a. Perform Kmeans and GMM on data X using build-in sklearn functions.**\n\nYou can find the documentation for instantiating and fitting `sklearn`'s `Kmeans` [here](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html). Set `n_clusters=4` and `random_state=0`.",
"_____no_output_____"
]
],
[
[
"from sklearn.cluster import KMeans\n### ADD CODE HERE:\n# Instantiate KMeans instance.\n# Fit the Kmeans with the data X.\n# Use the Kmeans to predict on the labels of X, here the labels is unordered.\nnClust = 4 \nrandSate = 0 \nkmeans = KMeans(n_clusters=nClust, random_state=randSate).fit(X)\nlabels = kmeans.labels_\nplt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis')",
"_____no_output_____"
]
],
[
[
"You can find the documentation for instantiating and fitting `sklearn`'s `GMM` [here](https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html). Set `n_clusters=4` and `random_state=0`.",
"_____no_output_____"
]
],
[
[
"from sklearn.mixture import GaussianMixture as GMM\n### ADD CODE HERE:\n# Instantiate GMM instance.\n# Fit the GMM with the data X.\n# Use the GMM to predict on the labels of X, here the labels is unordered.\ngm = GMM(n_components=nClust, random_state=randSate).fit(X)\nlabels = gm.predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis')",
"_____no_output_____"
]
],
[
[
"**b. Perform Kmeans and GMM on data X_stretched using build-in sklearn functions.**\n\nFirst we stretch the data by a random matrix.",
"_____no_output_____"
]
],
[
[
"rng = np.random.RandomState(13)\nX_stretched = np.dot(X, rng.randn(2, 2))",
"_____no_output_____"
]
],
[
[
"Applying `Kmeans` on `X_stretched` and set `n_clusters=4` and `random_state=0`.",
"_____no_output_____"
]
],
[
[
"from sklearn.cluster import KMeans\n### ADD CODE HERE:\n# Instantiate KMeans instance.\n# Fit the Kmeans with the data X.\n# Use the Kmeans to predict on the labels of X, here the labels is unordered.\nkmeans = KMeans(n_clusters=nClust, random_state=randSate).fit(X_stretched)\nlabels = kmeans.labels_\nplt.scatter(X_stretched[:, 0], X_stretched[:, 1], c=labels, s=50, cmap='viridis')",
"_____no_output_____"
]
],
[
[
"Applying `GMM` on `X_stretched` and set `n_clusters=4` and `random_state=0`.",
"_____no_output_____"
]
],
[
[
"from sklearn.mixture import GaussianMixture as GMM\n### ADD CODE HERE:\n# Instantiate GMM instance.\n# Fit the GMM with the data X.\n# Use the GMM to predict on the labels of X, here the labels is unordered.\ngm = GMM(n_components=nClust, random_state=randSate).fit(X_stretched)\nlabels = gm.predict(X_stretched)\nplt.scatter(X_stretched[:, 0], X_stretched[:, 1], c=labels, s=50, cmap='viridis')",
"_____no_output_____"
]
],
[
[
"**c. Conclusion.** In both previous cases Would there be any reason to better use k-means over E-M , or vice versa? For what kinds of datasets would it make more sense to use E-M to cluster? Why?",
"_____no_output_____"
],
[
"Althought both are almost equivelent in the first case, the GMM algroithm does a better job when the data has a skew in it. This is because the underlying function is a guasian and GMM has a step for computing covariance which accounts for streching. However Knn does not adapt to the skew so many of the points het miss classified. Althouhg it might seem odvious from the names. It makes more sense to use GMM when the underline porbablity is close to a gaussian distrobution. However if you had a set of uniformly placed clustered data or a inverse gausian distrobution( idk what to call this, it would be sparce near the mean and dense by the ends) K-means would do a better job. The is also signifigantly less calculation when doing kmeans so I think it makes sense to default to it in a case where the underling distrobution class is unknown. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e7b823087b9ad4f08c8de58db6c52ccbfd0881e6 | 63,691 | ipynb | Jupyter Notebook | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course | e6c5ae313bba601c4aca0f334818b61cc0393118 | [
"MIT"
] | 5 | 2020-08-29T15:28:39.000Z | 2021-12-01T16:53:25.000Z | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course | e6c5ae313bba601c4aca0f334818b61cc0393118 | [
"MIT"
] | 16 | 2020-02-26T16:03:47.000Z | 2021-06-15T15:17:37.000Z | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course | e6c5ae313bba601c4aca0f334818b61cc0393118 | [
"MIT"
] | 3 | 2021-01-22T19:38:36.000Z | 2021-09-28T08:14:00.000Z | 87.009563 | 15,660 | 0.742805 | [
[
[
"# Retirement Model\n\nThis is a retirement model which models salary with both a constant growth rate for cost of living raises as well as regular salary increases for promotions. The model is broken up into the following sections:\n\n- [**Setup**](#Setup): Runs any imports and other setup\n- [**Inputs**](#Inputs): Defines the inputs for the model\n- [**Salaries**](#Salaries): Determining the salary in each year, considering cost of living raises and promotions\n- [**Wealths**](#Wealths): Determining the wealth in each year, considering a constant savings rate and investment rate\n- [**Retirement**](#Retirement): Determines years to retirement from the wealths over time, the main output from the model.\n- [**Results Summary**](#Results-Summary): Summarize the results with some visualizations",
"_____no_output_____"
],
[
"## Setup\n\nSetup for the later calculations are here. The necessary packages are imported.",
"_____no_output_____"
]
],
[
[
"from dataclasses import dataclass\nimport pandas as pd\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Inputs\n\nAll of the inputs for the model are defined here. A class is constructed to manage the data, and an instance of the class containing the default inputs is created.",
"_____no_output_____"
]
],
[
[
"@dataclass\nclass ModelInputs:\n starting_salary: int = 60000\n promos_every_n_years: int = 5\n cost_of_living_raise: float = 0.02\n promo_raise: float = 0.15\n savings_rate: float = 0.25\n interest_rate: float = 0.05\n desired_cash: int = 1500000\n \nmodel_data = ModelInputs()\nmodel_data",
"_____no_output_____"
]
],
[
[
"## Salaries\n\nHere the salary for each year is calculated. We assume that the salary grows at a constant rate each year for cost of living raises, and then also every number of years, the salary increases by a further percentage due to a promotion or switching jobs. Based on this assumption, the salary would evolve over time with the following equation:\n\n$$s_t = s_0 (1 + r_{cl})^n (1 + r_p)^p$$\n\nWhere:\n- $s_t$: Salary at year $t$\n- $s_0$: Starting salary (year 0)\n- $r_{cl}$: Annual cost of living raise\n- $r_p$: Promotion raise\n- $p$: Number of promotions\n\nAnd in Python format:",
"_____no_output_____"
]
],
[
[
"def salary_at_year(data: ModelInputs, year):\n \"\"\"\n Gets the salary at a given year from the start of the model based on cost of living raises and regular promotions.\n \"\"\"\n # Every n years we have a promotion, so dividing the years and taking out the decimals gets the number of promotions\n num_promos = int(year / data.promos_every_n_years)\n \n # This is the formula above implemented in Python\n salary_t = data.starting_salary * (1 + data.cost_of_living_raise) ** year * (1 + data.promo_raise) ** num_promos\n return salary_t",
"_____no_output_____"
]
],
[
[
"That function will get the salary at a given year, so to get all the salaries we just run it on each year. But we will not know how many years to run as we should run it until the individual is able to retire. So we are just showing the first few salaries for now and will later use this function in the [Wealths](#Wealths) section of the model.",
"_____no_output_____"
]
],
[
[
"for i in range(6):\n year = i + 1\n salary = salary_at_year(model_data, year)\n print(f'The salary at year {year} is ${salary:,.0f}.')",
"The salary at year 1 is $61,200.\nThe salary at year 2 is $62,424.\nThe salary at year 3 is $63,672.\nThe salary at year 4 is $64,946.\nThe salary at year 5 is $76,182.\nThe salary at year 6 is $77,705.\n"
]
],
[
[
"As expected, with the default inputs, the salary is increasing at 2% per year. Then at year 5, there is a promotion so there is a larger increase in salary.",
"_____no_output_____"
],
[
"## Wealths\n\nThe wealths portion of the model is concerned with applying the savings rate to the earned salary to calculate the cash saved, accumulating the cash saved over time, and applying the investment rate to the accumulated wealth.\n\nTo calculate cash saved, it is simply:\n\n$$c_t = s_t * r_s$$\n\nWhere:\n- $c_t$: Cash saved during year $t$\n- $r_s$: Savings rate",
"_____no_output_____"
]
],
[
[
"def cash_saved_during_year(data: ModelInputs, year):\n \"\"\"\n Calculated the cash saved within a given year, by first calculating the salary at that year then applying the \n savings rate.\n \"\"\"\n salary = salary_at_year(data, year)\n cash_saved = salary * data.savings_rate\n return cash_saved",
"_____no_output_____"
]
],
[
[
"To get the wealth at each year, it is just applying the investment return to last year's wealth, then adding this year's cash saved:\n\n$$w_t = w_{t-1} (1 + r_i) + c_t$$\nWhere:\n- $w_t$: Wealth at year $t$\n- $r_i$: Investment rate",
"_____no_output_____"
]
],
[
[
"def wealth_at_year(data: ModelInputs, year, prior_wealth):\n \"\"\"\n Calculate the accumulated wealth for a given year, based on previous wealth, the investment rate,\n and cash saved during the year.\n \"\"\"\n cash_saved = cash_saved_during_year(data, year)\n wealth = prior_wealth * (1 + data.interest_rate) + cash_saved\n return wealth",
"_____no_output_____"
]
],
[
[
"Again, just like in the [Salaries](#Salaries) section, we can now get the output for each year, but we don't know ultimately how many years we will have to run it. That will be determined in the [Retirement](#Retirement) section. So for now, just show the first few years of wealth accumulation:",
"_____no_output_____"
]
],
[
[
"prior_wealth = 0 # starting with no cash saved\nfor i in range(6):\n year = i + 1\n wealth = wealth_at_year(model_data, year, prior_wealth)\n print(f'The wealth at year {year} is ${wealth:,.0f}.')\n \n # Set next year's prior wealth to this year's wealth\n prior_wealth = wealth",
"The wealth at year 1 is $15,300.\nThe wealth at year 2 is $31,671.\nThe wealth at year 3 is $49,173.\nThe wealth at year 4 is $67,868.\nThe wealth at year 5 is $90,307.\nThe wealth at year 6 is $114,248.\n"
]
],
[
[
"With default inputs, the wealth is going up by approximately 25% of the salary each year, plus a bit more for investment. Then in year 6 we see a substantially larger increase because the salary is substantially larger due to the promotion. So everything is looking correct.",
"_____no_output_____"
],
[
"## Retirement\n\nThis section of the model puts everything together to produce the final output of years to retirement. It uses the logic to get the wealths at each year, which in turn uses the logic to the get salary at each year. The wealth at each year is tracked over time until it hits the desired cash. Once the wealth hits the desired cash, the individual is able to retire so that year is returned as the years to retirement.",
"_____no_output_____"
]
],
[
[
"def years_to_retirement(data: ModelInputs):\n \n # starting with no cash saved\n prior_wealth = 0 \n wealth = 0\n \n year = 0 # will become 1 on first loop\n \n print('Wealths over time:') # \\n makes a blank line in the output.\n while wealth < data.desired_cash:\n year = year + 1\n wealth = wealth_at_year(model_data, year, prior_wealth)\n print(f'The wealth at year {year} is ${wealth:,.0f}.')\n\n # Set next year's prior wealth to this year's wealth\n prior_wealth = wealth\n \n # Now we have exited the while loop, so wealth must be >= desired_cash. Whatever last year was set\n # is the years to retirement.\n print(f'\\nRetirement:\\nIt will take {year} years to retire.') # \\n makes a blank line in the output.\n return year",
"_____no_output_____"
]
],
[
[
"With the default inputs:",
"_____no_output_____"
]
],
[
[
"years = years_to_retirement(model_data)",
"Wealths over time:\nThe wealth at year 1 is $15,300.\nThe wealth at year 2 is $31,671.\nThe wealth at year 3 is $49,173.\nThe wealth at year 4 is $67,868.\nThe wealth at year 5 is $90,307.\nThe wealth at year 6 is $114,248.\nThe wealth at year 7 is $139,775.\nThe wealth at year 8 is $166,975.\nThe wealth at year 9 is $195,939.\nThe wealth at year 10 is $229,918.\nThe wealth at year 11 is $266,080.\nThe wealth at year 12 is $304,542.\nThe wealth at year 13 is $345,431.\nThe wealth at year 14 is $388,878.\nThe wealth at year 15 is $439,025.\nThe wealth at year 16 is $492,294.\nThe wealth at year 17 is $548,853.\nThe wealth at year 18 is $608,878.\nThe wealth at year 19 is $672,557.\nThe wealth at year 20 is $745,168.\nThe wealth at year 21 is $822,190.\nThe wealth at year 22 is $903,859.\nThe wealth at year 23 is $990,422.\nThe wealth at year 24 is $1,082,140.\nThe wealth at year 25 is $1,185,745.\nThe wealth at year 26 is $1,295,520.\nThe wealth at year 27 is $1,411,793.\nThe wealth at year 28 is $1,534,910.\n\nRetirement:\nIt will take 28 years to retire.\n"
]
],
[
[
"# Results Summary\n\n## Put Results in a Table\n\nNow I will visualize the salaries and wealths over time. First create a function which runs the model to put these results in a DataFrame.",
"_____no_output_____"
]
],
[
[
"def get_salaries_wealths_df(data):\n \"\"\"\n Runs the retirement model, collecting salary and wealth information year by year and storing\n into a DataFrame for further analysis.\n \"\"\"\n # starting with no cash saved\n prior_wealth = 0 \n wealth = 0\n \n year = 0 # will become 1 on first loop\n \n df_data_tups = []\n while wealth < data.desired_cash:\n year = year + 1\n salary = salary_at_year(data, year)\n wealth = wealth_at_year(model_data, year, prior_wealth)\n\n # Set next year's prior wealth to this year's wealth\n prior_wealth = wealth\n \n # Save the results in a tuple for later building the DataFrame\n df_data_tups.append((year, salary, wealth))\n \n # Now we have exited the while loop, so wealth must be >= desired_cash\n \n # Now create the DataFrame\n df = pd.DataFrame(df_data_tups, columns=['Year', 'Salary', 'Wealth'])\n \n return df",
"_____no_output_____"
]
],
[
[
"Also set up a function which formats the `DataFrame` for display.",
"_____no_output_____"
]
],
[
[
"def styled_salaries_wealths(df):\n return df.style.format({\n 'Salary': '${:,.2f}',\n 'Wealth': '${:,.2f}'\n })",
"_____no_output_____"
]
],
[
[
"Now call the function to save the results into the `DataFrame`.",
"_____no_output_____"
]
],
[
[
"df = get_salaries_wealths_df(model_data)\nstyled_salaries_wealths(df)",
"_____no_output_____"
]
],
[
[
"## Plot Results\n\nNow I will visualize the salaries and wealths over time.",
"_____no_output_____"
],
[
"### Salaries over Time",
"_____no_output_____"
]
],
[
[
"df.plot.line(x='Year', y='Salary')",
"_____no_output_____"
]
],
[
[
"### Wealths over Time",
"_____no_output_____"
]
],
[
[
"df.plot.line(x='Year', y='Wealth')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b83a3d5c707a8dfe5353f910354b802e027fb0 | 6,945 | ipynb | Jupyter Notebook | content/act1/hamiltonian_bifurcation/ham_bif-jekyll.ipynb | champsproject/chem_react_dyn | 53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598 | [
"CC-BY-4.0"
] | 11 | 2019-12-09T11:23:13.000Z | 2020-12-16T09:49:55.000Z | content/act1/hamiltonian_bifurcation/ham_bif-jekyll.ipynb | champsproject/chem_react_dyn | 53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598 | [
"CC-BY-4.0"
] | 40 | 2019-12-09T14:52:38.000Z | 2022-02-26T06:10:08.000Z | content/act1/hamiltonian_bifurcation/ham_bif-jekyll.ipynb | champsproject/chem_react_dyn | 53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598 | [
"CC-BY-4.0"
] | 3 | 2020-05-12T06:27:20.000Z | 2022-02-08T05:29:56.000Z | 29.679487 | 280 | 0.551188 | [
[
[
"# One Degree-of-Freedom (DoF) Hamiltonian Bifurcation of Equilibria",
"_____no_output_____"
],
[
"(ADD INTRODUCTORY LANGUAGE ABOUT PROBLEM DEVELOPMENT)\nWe will now consider two examples of bifurcation of equilibria in two dimensional Hamiltonian system; in particular, the Hamiltonian saddle-node and Hamiltonian pitchfork bifurcations. ",
"_____no_output_____"
],
[
"## Hamiltonian saddle-node bifurcation",
"_____no_output_____"
],
[
"We consider the Hamiltonian:\n\n\\begin{equation}\nH (q, p) = \\frac{p^2}{2} - \\lambda q + \\frac{q^3}{3}, \\quad (q, p) \\in \\mathbb{R}^2.\n\\label{eq:hamApp13}\n\\end{equation}\n\n\nwhere $\\lambda$ is considered to be a parameter that can be varied. From this Hamiltonian, we derive Hamilton's equations:\n\n\\begin{eqnarray}\n\\dot{q} & = & \\frac{\\partial H}{\\partial p} = p, \\nonumber \\\\\n\\dot{p} & = & -\\frac{\\partial H}{\\partial q} =\\lambda - q^2.\n\\label{eq:hamApp14}\n\\end{eqnarray}",
"_____no_output_____"
],
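[
"The derivation above can be checked symbolically. The following short sketch is an editorial addition; it assumes `sympy` is available, which is not otherwise used in this text. It simply differentiates the Hamiltonian to recover Hamilton's equations.",
"_____no_output_____"
],
[
"# Sketch (added): recover Hamilton's equations for the saddle-node Hamiltonian with sympy\nimport sympy as sp\n\nq, p, lam = sp.symbols('q p lambda', real=True)\nH = p**2 / 2 - lam * q + q**3 / 3\n\nq_dot = sp.diff(H, p)    # dq/dt =  dH/dp  ->  p\np_dot = -sp.diff(H, q)   # dp/dt = -dH/dq  ->  lambda - q**2\nprint(q_dot, p_dot)",
"_____no_output_____"
],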
[
"### Revealing the Phase Space Structures and their implications for Reaction Dynamics",
"_____no_output_____"
],
[
"The fixed points for \\eqref{eq:hamApp14} are:\n\n\\begin{equation}\n(q, p) = (\\pm\\sqrt{\\lambda}, 0),\n\\end{equation}\n\n\nfrom which it follows that there are no fixed points for $\\lambda <0$, one fixed point for $\\lambda =0$, and two fixed points for $\\lambda >0$. This is the scenario for a saddle-node bifurcation. \n\nNext we examine the stability of the fixed points. The Jacobian of \\eqref{eq:hamApp14} is given by:\n\n\\begin{equation}\nJ =\\left(\n\\begin{array}{cc} \n0 & 1\\\\\n-2 q & 0\n\\end{array}\n\\right).\n\\label{eq:hamApp15}\n\\end{equation}\n\n\nThe eigenvalues of this matrix are:\n\n\\begin{equation}\n\\Lambda_{1, 2} = \\pm \\sqrt{-2q}.\n\\end{equation}\n\n\nHence $(q, p) = (-\\sqrt{\\lambda}, 0)$ is a saddle, $(q, p) = (\\sqrt{\\lambda}, 0)$ is a center, and $(q, p) = (0, 0)$ has two zero eigenvalues. The phase portraits are shown in Fig. [fig:1](#fig:appC_fig3).",
"_____no_output_____"
],
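[
"Before turning to the phase portraits, here is a minimal numerical check (an editorial addition) of the classification above. For an assumed parameter value $\\lambda = 1$, the Jacobian eigenvalues at $(-\\sqrt{\\lambda}, 0)$ and $(\\sqrt{\\lambda}, 0)$ come out real (saddle) and purely imaginary (center), respectively.",
"_____no_output_____"
],
[
"# Sketch (added): eigenvalues of the saddle-node Jacobian at the two fixed points, for an assumed lambda = 1\nimport numpy as np\n\nlam = 1.0\nfor q_eq in (-np.sqrt(lam), np.sqrt(lam)):\n    J = np.array([[0.0, 1.0], [-2.0 * q_eq, 0.0]])\n    print(q_eq, np.linalg.eigvals(J))  # real pair -> saddle, imaginary pair -> center",
"_____no_output_____"
],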
[
"<img width=\"560\" height=\"315\" src=\"figures/ham_sn.png\">\n\n<a id=\"fig:appC_fig3\"></a>\n<figcaption style=\"text-align:center;font-size:14px\"><b>fig:1 </b><em> The phase portraits for the Hamiltonian saddle-node bifurcation.</em></figcaption><hr>",
"_____no_output_____"
],
[
"## Hamiltonian pitchfork bifurcation",
"_____no_output_____"
],
[
"We consider the Hamiltonian:\n\n\\begin{equation}\nH (q, p) = \\frac{p^2}{2} - \\lambda \\frac{q^2}{2} + \\frac{q^4}{4},\n\\label{eq:hamApp16}\n\\end{equation}\n\n\nwhere $\\lambda$ is considered to be a parameter that can be varied. From this Hamiltonian, we derive Hamilton's equations:\n\n\\begin{eqnarray}\n\\dot{q} & = & \\frac{\\partial H}{\\partial p} = p, \\nonumber \\\\\n\\dot{p} & = & -\\frac{\\partial H}{\\partial q} =\\lambda q - q^3.\n\\label{eq:hamApp17}\n\\end{eqnarray}",
"_____no_output_____"
],
[
"### Revealing the Phase Space Structures and their implications for Reaction Dynamics",
"_____no_output_____"
],
[
"The fixed points for \\eqref{eq:hamApp17} are:\n\n\\begin{equation}\n(q, p) = (0, 0), \\, (\\pm\\sqrt{\\lambda}, 0),\n\\end{equation}\n\n\nfrom which it follows that there is one fixed point for $\\lambda \\leq 0$, and three fixed points for $\\lambda >0$. This is the scenario for a pitchfork bifurcation.\n\nNext we examine the stability of the fixed points. The Jacobian of \\eqref{eq:hamApp17} is given by:\n\n\\begin{equation}\nJ = \\left(\n\\begin{array}{cc} \n0 & 1\\\\\n\\lambda-3q^2 & 0\n\\end{array}\n\\right).\n\\label{eq:hamApp18}\n\\end{equation}\n\n\nThe eigenvalues of this matrix are:\n\n\\begin{equation}\n\\Lambda_{1, 2} = \\pm \\sqrt{\\lambda - 3q^2 }.\n\\end{equation}\n\n\nHence $(q, p) = (0, 0)$ is a center for $\\lambda <0$, a saddle for $\\lambda >0$ and has two zero eigenvalues for $\\lambda =0$. The fixed points $(q, p) = (\\sqrt{\\lambda}, 0)$ are centers for $\\lambda >0$. The phase portraits are shown in Fig. [fig:2](#fig:appC_fig4).",
"_____no_output_____"
],
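[
"Again, before the phase portraits, a minimal numerical check (an editorial addition) for an assumed $\\lambda = 1$: the origin gives real eigenvalues (saddle), while $(\\pm\\sqrt{\\lambda}, 0)$ give purely imaginary ones (centers).",
"_____no_output_____"
],
[
"# Sketch (added): eigenvalues of the pitchfork Jacobian at the three fixed points, for an assumed lambda = 1\nimport numpy as np\n\nlam = 1.0\nfor q_eq in (0.0, -np.sqrt(lam), np.sqrt(lam)):\n    J = np.array([[0.0, 1.0], [lam - 3.0 * q_eq**2, 0.0]])\n    print(q_eq, np.linalg.eigvals(J))  # real pair -> saddle, imaginary pair -> center",
"_____no_output_____"
],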
[
"<img width=\"560\" height=\"315\" src=\"figures/ham_pitch.png\">\n\n<a id=\"fig:appC_fig4\"></a>\n<figcaption style=\"text-align:center;font-size:14px\"><b>fig:2 </b><em> The phase portraits for the Hamiltonian pitchfork bifurcation.</em></figcaption><hr>",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e7b86358b55bfff5961e0ca798cc1b7ba8b93d2e | 5,525 | ipynb | Jupyter Notebook | notebooks/trade_demo/smpc/bug_reproducing/Data Owner - Italy.ipynb | Noob-can-Compile/PySyft | 156cf93489b16dd0205b0058d4d23d56b3a91ab8 | [
"Apache-2.0"
] | 8,428 | 2017-08-10T09:17:49.000Z | 2022-03-31T08:20:14.000Z | notebooks/trade_demo/smpc/bug_reproducing/Data Owner - Italy.ipynb | Noob-can-Compile/PySyft | 156cf93489b16dd0205b0058d4d23d56b3a91ab8 | [
"Apache-2.0"
] | 4,779 | 2017-08-09T23:19:00.000Z | 2022-03-29T11:49:36.000Z | notebooks/trade_demo/smpc/bug_reproducing/Data Owner - Italy.ipynb | Noob-can-Compile/PySyft | 156cf93489b16dd0205b0058d4d23d56b3a91ab8 | [
"Apache-2.0"
] | 2,307 | 2017-08-10T08:52:12.000Z | 2022-03-30T05:36:07.000Z | 21.924603 | 215 | 0.532308 | [
[
[
"import pandas as pd\nimport syft as sy",
"_____no_output_____"
]
],
[
[
"### Loading the dataset",
"_____no_output_____"
]
],
[
[
"italy_dataset = pd.read_csv(\"../datasets/it - feb 2021.csv\")",
"/Users/atrask/opt/anaconda3/envs/syft/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3146: DtypeWarning: Columns (14) have mixed types.Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n"
],
[
"# italy_dataset.head()",
"_____no_output_____"
]
],
[
[
"### Logging into the domain",
"_____no_output_____"
]
],
[
[
"it = sy.login(email=\"[email protected]\", password=\"changethis\", port=8082)",
"Connecting to http://localhost:8082... done! \t Logging into italy... done!\n"
]
],
[
[
"### Upload the dataset to Domain node",
"_____no_output_____"
]
],
[
[
"# Selecting a subset of the dataset\nitaly_dataset = italy_dataset[:40000]",
"_____no_output_____"
],
[
"# We will upload only the first 40k rows and three columns\n# All these three columns are of `int` type\nsampled_italy_dataset = italy_dataset[[\"Trade Flow Code\", \"Partner Code\", \"Trade Value (US$)\"]].values\n\n# Convert the dataset to numpy array\nsampled_italy_datset_numpy = sampled_italy_dataset\n\n# Convert the numpy array to Tensor\nitaly_dataset_tensor = sy.Tensor(sampled_italy_datset_numpy).tag('data2')\n",
"_____no_output_____"
],
[
"italy_dataset_tensor.public_shape = italy_dataset_tensor.shape",
"_____no_output_____"
],
[
"ptr = italy_dataset_tensor.share(it)",
"_____no_output_____"
],
[
"# it.load_dataset(\n# assets={\"Italy-Numpy-feb2020-Tensor\": italy_dataset_tensor},\n# name=\"Italy Trade Data - First 40000 rows\",\n# description=\"\"\"A collection of reports from Italy's statistics \n# bureau about how much it thinks it imports and exports from other countries.\"\"\",\n# )",
"_____no_output_____"
],
[
"# it.datasets",
"_____no_output_____"
],
[
"# it_domain_node.store.pandas['object_type'].unique()",
"_____no_output_____"
],
[
"# it_domain_node.store.pandas[it_domain_node.store.pandas['object_type'] == \"<class 'syft.core.tensor.tensor.Tensor'>\"]",
"_____no_output_____"
]
],
[
[
"### Create a Data Scientist User",
"_____no_output_____"
]
],
[
[
"it.users.create(\n **{\n \"name\": \"Sheldon Cooper\",\n \"email\": \"[email protected]\",\n \"password\": \"bazinga\",\n \"budget\":10\n }\n)",
"_____no_output_____"
]
],
[
[
"### Accept/Deny Requests to the Domain",
"_____no_output_____"
]
],
[
[
"# it_domain_node.requests",
"_____no_output_____"
],
[
"# it_domain_node.requests[-1].accept()",
"_____no_output_____"
],
[
"\n# it_domain_node.store.pandas[it_domain_node.store.pandas[\"object_type\"] == \"<class 'syft.core.tensor.smpc.share_tensor.ShareTensor'>\"]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e7b869f818c2228a7c9936c4b566725c6c62f85b | 36,044 | ipynb | Jupyter Notebook | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents | 2cd5a61e1838b52012271f1fb8617c29a55279a9 | [
"Apache-2.0"
] | null | null | null | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents | 2cd5a61e1838b52012271f1fb8617c29a55279a9 | [
"Apache-2.0"
] | null | null | null | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents | 2cd5a61e1838b52012271f1fb8617c29a55279a9 | [
"Apache-2.0"
] | null | null | null | 31.534558 | 361 | 0.522972 | [
[
[
"##### Copyright 2018 The TF-Agents Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Train a Deep Q Network with TF-Agents\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/agents/tutorials/1_dqn_tutorial\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/1_dqn_tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/agents/blob/master/docs/tutorials/1_dqn_tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/1_dqn_tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"## Introduction\n",
"_____no_output_____"
],
[
"This example shows how to train a [DQN (Deep Q Networks)](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) agent on the Cartpole environment using the TF-Agents library.\n\n\n\nIt will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection.\n\n\nTo run this code live, click the 'Run in Google Colab' link above.\n",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
],
[
"If you haven't installed the following dependencies, run:",
"_____no_output_____"
]
],
[
[
"!sudo apt-get install -y xvfb ffmpeg\n!pip install 'gym==0.10.11'\n!pip install 'imageio==2.4.0'\n!pip install PILLOW\n!pip install 'pyglet==1.3.2'\n!pip install pyvirtualdisplay\n!pip install tf-agents",
"_____no_output_____"
],
[
"from __future__ import absolute_import, division, print_function\n\nimport base64\nimport imageio\nimport IPython\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport PIL.Image\nimport pyvirtualdisplay\n\nimport tensorflow as tf\n\nfrom tf_agents.agents.dqn import dqn_agent\nfrom tf_agents.drivers import dynamic_step_driver\nfrom tf_agents.environments import suite_gym\nfrom tf_agents.environments import tf_py_environment\nfrom tf_agents.eval import metric_utils\nfrom tf_agents.metrics import tf_metrics\nfrom tf_agents.networks import q_network\nfrom tf_agents.policies import random_tf_policy\nfrom tf_agents.replay_buffers import tf_uniform_replay_buffer\nfrom tf_agents.trajectories import trajectory\nfrom tf_agents.utils import common",
"_____no_output_____"
],
[
"tf.compat.v1.enable_v2_behavior()\n\n# Set up a virtual display for rendering OpenAI gym environments.\ndisplay = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()",
"_____no_output_____"
],
[
"tf.version.VERSION",
"_____no_output_____"
]
],
[
[
"## Hyperparameters",
"_____no_output_____"
]
],
[
[
"num_iterations = 20000 # @param {type:\"integer\"}\n\ninitial_collect_steps = 100 # @param {type:\"integer\"} \ncollect_steps_per_iteration = 1 # @param {type:\"integer\"}\nreplay_buffer_max_length = 100000 # @param {type:\"integer\"}\n\nbatch_size = 64 # @param {type:\"integer\"}\nlearning_rate = 1e-3 # @param {type:\"number\"}\nlog_interval = 200 # @param {type:\"integer\"}\n\nnum_eval_episodes = 10 # @param {type:\"integer\"}\neval_interval = 1000 # @param {type:\"integer\"}",
"_____no_output_____"
]
],
[
[
"## Environment\n\nIn Reinforcement Learning (RL), an environment represents the task or problem to be solved. Standard environments can be created in TF-Agents using `tf_agents.environments` suites. TF-Agents has suites for loading environments from sources such as the OpenAI Gym, Atari, and DM Control.\n\nLoad the CartPole environment from the OpenAI Gym suite. ",
"_____no_output_____"
]
],
[
[
"env_name = 'CartPole-v0'\nenv = suite_gym.load(env_name)",
"_____no_output_____"
]
],
[
[
"You can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up.",
"_____no_output_____"
]
],
[
[
"#@test {\"skip\": true}\nenv.reset()\nPIL.Image.fromarray(env.render())",
"_____no_output_____"
]
],
[
[
"The `environment.step` method takes an `action` in the environment and returns a `TimeStep` tuple containing the next observation of the environment and the reward for the action.\n\nThe `time_step_spec()` method returns the specification for the `TimeStep` tuple. Its `observation` attribute shows the shape of observations, the data types, and the ranges of allowed values. The `reward` attribute shows the same details for the reward.\n",
"_____no_output_____"
]
],
[
[
"print('Observation Spec:')\nprint(env.time_step_spec().observation)",
"_____no_output_____"
],
[
"print('Reward Spec:')\nprint(env.time_step_spec().reward)",
"_____no_output_____"
]
],
[
[
"The `action_spec()` method returns the shape, data types, and allowed values of valid actions.",
"_____no_output_____"
]
],
[
[
"print('Action Spec:')\nprint(env.action_spec())",
"_____no_output_____"
]
],
[
[
"In the Cartpole environment:\n\n- `observation` is an array of 4 floats: \n - the position and velocity of the cart\n - the angular position and velocity of the pole \n- `reward` is a scalar float value\n- `action` is a scalar integer with only two possible values:\n - `0` — \"move left\"\n - `1` — \"move right\"\n",
"_____no_output_____"
]
],
[
[
"time_step = env.reset()\nprint('Time step:')\nprint(time_step)\n\naction = np.array(1, dtype=np.int32)\n\nnext_time_step = env.step(action)\nprint('Next time step:')\nprint(next_time_step)",
"_____no_output_____"
]
],
[
[
"Usually two environments are instantiated: one for training and one for evaluation. ",
"_____no_output_____"
]
],
[
[
"train_py_env = suite_gym.load(env_name)\neval_py_env = suite_gym.load(env_name)",
"_____no_output_____"
]
],
[
[
"The Cartpole environment, like most environments, is written in pure Python. This is converted to TensorFlow using the `TFPyEnvironment` wrapper.\n\nThe original environment's API uses Numpy arrays. The `TFPyEnvironment` converts these to `Tensors` to make it compatible with Tensorflow agents and policies.\n",
"_____no_output_____"
]
],
[
[
"train_env = tf_py_environment.TFPyEnvironment(train_py_env)\neval_env = tf_py_environment.TFPyEnvironment(eval_py_env)",
"_____no_output_____"
]
],
[
[
"## Agent\n\nThe algorithm used to solve an RL problem is represented by an `Agent`. TF-Agents provides standard implementations of a variety of `Agents`, including:\n\n- [DQN](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) (used in this tutorial)\n- [REINFORCE](http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf)\n- [DDPG](https://arxiv.org/pdf/1509.02971.pdf)\n- [TD3](https://arxiv.org/pdf/1802.09477.pdf)\n- [PPO](https://arxiv.org/abs/1707.06347)\n- [SAC](https://arxiv.org/abs/1801.01290).\n\nThe DQN agent can be used in any environment which has a discrete action space.\n\nAt the heart of a DQN Agent is a `QNetwork`, a neural network model that can learn to predict `QValues` (expected returns) for all actions, given an observation from the environment.\n\nUse `tf_agents.networks.q_network` to create a `QNetwork`, passing in the `observation_spec`, `action_spec`, and a tuple describing the number and size of the model's hidden layers.\n",
"_____no_output_____"
]
],
[
[
"fc_layer_params = (100,)\n\nq_net = q_network.QNetwork(\n train_env.observation_spec(),\n train_env.action_spec(),\n fc_layer_params=fc_layer_params)",
"_____no_output_____"
]
],
[
[
"Now use `tf_agents.agents.dqn.dqn_agent` to instantiate a `DqnAgent`. In addition to the `time_step_spec`, `action_spec` and the QNetwork, the agent constructor also requires an optimizer (in this case, `AdamOptimizer`), a loss function, and an integer step counter.",
"_____no_output_____"
]
],
[
[
"optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)\n\ntrain_step_counter = tf.Variable(0)\n\nagent = dqn_agent.DqnAgent(\n train_env.time_step_spec(),\n train_env.action_spec(),\n q_network=q_net,\n optimizer=optimizer,\n td_errors_loss_fn=common.element_wise_squared_loss,\n train_step_counter=train_step_counter)\n\nagent.initialize()",
"_____no_output_____"
]
],
[
[
"## Policies\n\nA policy defines the way an agent acts in an environment. Typically, the goal of reinforcement learning is to train the underlying model until the policy produces the desired outcome.\n\nIn this tutorial:\n\n- The desired outcome is keeping the pole balanced upright over the cart.\n- The policy returns an action (left or right) for each `time_step` observation.\n\nAgents contain two policies: \n\n- `agent.policy` — The main policy that is used for evaluation and deployment.\n- `agent.collect_policy` — A second policy that is used for data collection.\n",
"_____no_output_____"
]
],
[
[
"eval_policy = agent.policy\ncollect_policy = agent.collect_policy",
"_____no_output_____"
]
],
[
[
"Policies can be created independently of agents. For example, use `tf_agents.policies.random_tf_policy` to create a policy which will randomly select an action for each `time_step`.",
"_____no_output_____"
]
],
[
[
"random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),\n train_env.action_spec())",
"_____no_output_____"
]
],
[
[
"To get an action from a policy, call the `policy.action(time_step)` method. The `time_step` contains the observation from the environment. This method returns a `PolicyStep`, which is a named tuple with three components:\n\n- `action` — the action to be taken (in this case, `0` or `1`)\n- `state` — used for stateful (that is, RNN-based) policies\n- `info` — auxiliary data, such as log probabilities of actions",
"_____no_output_____"
]
],
[
[
"example_environment = tf_py_environment.TFPyEnvironment(\n suite_gym.load('CartPole-v0'))",
"_____no_output_____"
],
[
"time_step = example_environment.reset()",
"_____no_output_____"
],
[
"random_policy.action(time_step)",
"_____no_output_____"
]
],
[
[
"## Metrics and Evaluation\n\nThe most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode. Several episodes are run, creating an average return.\n\nThe following function computes the average return of a policy, given the policy, environment, and a number of episodes.\n",
"_____no_output_____"
]
],
[
[
"#@test {\"skip\": true}\ndef compute_avg_return(environment, policy, num_episodes=10):\n\n total_return = 0.0\n for _ in range(num_episodes):\n\n time_step = environment.reset()\n episode_return = 0.0\n\n while not time_step.is_last():\n action_step = policy.action(time_step)\n time_step = environment.step(action_step.action)\n episode_return += time_step.reward\n total_return += episode_return\n\n avg_return = total_return / num_episodes\n return avg_return.numpy()[0]\n\n\n# See also the metrics module for standard implementations of different metrics.\n# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics",
"_____no_output_____"
]
],
[
[
"Running this computation on the `random_policy` shows a baseline performance in the environment.",
"_____no_output_____"
]
],
[
[
"compute_avg_return(eval_env, random_policy, num_eval_episodes)",
"_____no_output_____"
]
],
[
[
"## Replay Buffer\n\nThe replay buffer keeps track of data collected from the environment. This tutorial uses `tf_agents.replay_buffers.tf_uniform_replay_buffer.TFUniformReplayBuffer`, as it is the most common. \n\nThe constructor requires the specs for the data it will be collecting. This is available from the agent using the `collect_data_spec` method. The batch size and maximum buffer length are also required.\n",
"_____no_output_____"
]
],
[
[
"replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n data_spec=agent.collect_data_spec,\n batch_size=train_env.batch_size,\n max_length=replay_buffer_max_length)",
"_____no_output_____"
]
],
[
[
"For most agents, `collect_data_spec` is a named tuple called `Trajectory`, containing the specs for observations, actions, rewards, and other items.",
"_____no_output_____"
]
],
[
[
"agent.collect_data_spec",
"_____no_output_____"
],
[
"agent.collect_data_spec._fields",
"_____no_output_____"
]
],
[
[
"## Data Collection\n\nNow execute the random policy in the environment for a few steps, recording the data in the replay buffer.",
"_____no_output_____"
]
],
[
[
"#@test {\"skip\": true}\ndef collect_step(environment, policy, buffer):\n time_step = environment.current_time_step()\n action_step = policy.action(time_step)\n next_time_step = environment.step(action_step.action)\n traj = trajectory.from_transition(time_step, action_step, next_time_step)\n\n # Add trajectory to the replay buffer\n buffer.add_batch(traj)\n\ndef collect_data(env, policy, buffer, steps):\n for _ in range(steps):\n collect_step(env, policy, buffer)\n\ncollect_data(train_env, random_policy, replay_buffer, initial_collect_steps)\n\n# This loop is so common in RL, that we provide standard implementations. \n# For more details see the drivers module.\n# https://www.tensorflow.org/agents/api_docs/python/tf_agents/drivers",
"_____no_output_____"
]
],
[
[
"The replay buffer is now a collection of Trajectories.",
"_____no_output_____"
]
],
[
[
"# For the curious:\n# Uncomment to peel one of these off and inspect it.\n# iter(replay_buffer.as_dataset()).next()",
"_____no_output_____"
]
],
[
[
"The agent needs access to the replay buffer. This is provided by creating an iterable `tf.data.Dataset` pipeline which will feed data to the agent.\n\nEach row of the replay buffer only stores a single observation step. But since the DQN Agent needs both the current and next observation to compute the loss, the dataset pipeline will sample two adjacent rows for each item in the batch (`num_steps=2`).\n\nThis dataset is also optimized by running parallel calls and prefetching data.",
"_____no_output_____"
]
],
[
[
"# Dataset generates trajectories with shape [Bx2x...]\ndataset = replay_buffer.as_dataset(\n num_parallel_calls=3, \n sample_batch_size=batch_size, \n num_steps=2).prefetch(3)\n\n\ndataset",
"_____no_output_____"
],
[
"iterator = iter(dataset)\n\nprint(iterator)\n",
"_____no_output_____"
],
[
"# For the curious:\n# Uncomment to see what the dataset iterator is feeding to the agent.\n# Compare this representation of replay data \n# to the collection of individual trajectories shown earlier.\n\n# iterator.next()",
"_____no_output_____"
]
],
[
[
"## Training the agent\n\nTwo things must happen during the training loop:\n\n- collect data from the environment\n- use that data to train the agent's neural network(s)\n\nThis example also periodicially evaluates the policy and prints the current score.\n\nThe following will take ~5 minutes to run.",
"_____no_output_____"
]
],
[
[
"#@test {\"skip\": true}\ntry:\n %%time\nexcept:\n pass\n\n# (Optional) Optimize by wrapping some of the code in a graph using TF function.\nagent.train = common.function(agent.train)\n\n# Reset the train step\nagent.train_step_counter.assign(0)\n\n# Evaluate the agent's policy once before training.\navg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)\nreturns = [avg_return]\n\nfor _ in range(num_iterations):\n\n # Collect a few steps using collect_policy and save to the replay buffer.\n collect_data(train_env, agent.collect_policy, replay_buffer, collect_steps_per_iteration)\n\n # Sample a batch of data from the buffer and update the agent's network.\n experience, unused_info = next(iterator)\n train_loss = agent.train(experience).loss\n\n step = agent.train_step_counter.numpy()\n\n if step % log_interval == 0:\n print('step = {0}: loss = {1}'.format(step, train_loss))\n\n if step % eval_interval == 0:\n avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)\n print('step = {0}: Average Return = {1}'.format(step, avg_return))\n returns.append(avg_return)",
"_____no_output_____"
]
],
[
[
"## Visualization\n",
"_____no_output_____"
],
[
"### Plots\n\nUse `matplotlib.pyplot` to chart how the policy improved during training.\n\nOne iteration of `Cartpole-v0` consists of 200 time steps. The environment gives a reward of `+1` for each step the pole stays up, so the maximum return for one episode is 200. The charts shows the return increasing towards that maximum each time it is evaluated during training. (It may be a little unstable and not increase monotonically each time.)",
"_____no_output_____"
]
],
[
[
"#@test {\"skip\": true}\n\niterations = range(0, num_iterations + 1, eval_interval)\nplt.plot(iterations, returns)\nplt.ylabel('Average Return')\nplt.xlabel('Iterations')\nplt.ylim(top=250)",
"_____no_output_____"
]
],
[
[
"### Videos",
"_____no_output_____"
],
[
"Charts are nice. But more exciting is seeing an agent actually performing a task in an environment. \n\nFirst, create a function to embed videos in the notebook.",
"_____no_output_____"
]
],
[
[
"def embed_mp4(filename):\n \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n video = open(filename,'rb').read()\n b64 = base64.b64encode(video)\n tag = '''\n <video width=\"640\" height=\"480\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\">\n Your browser does not support the video tag.\n </video>'''.format(b64.decode())\n\n return IPython.display.HTML(tag)",
"_____no_output_____"
]
],
[
[
"Now iterate through a few episodes of the Cartpole game with the agent. The underlying Python environment (the one \"inside\" the TensorFlow environment wrapper) provides a `render()` method, which outputs an image of the environment state. These can be collected into a video.",
"_____no_output_____"
]
],
[
[
"def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):\n filename = filename + \".mp4\"\n with imageio.get_writer(filename, fps=fps) as video:\n for _ in range(num_episodes):\n time_step = eval_env.reset()\n video.append_data(eval_py_env.render())\n while not time_step.is_last():\n action_step = policy.action(time_step)\n time_step = eval_env.step(action_step.action)\n video.append_data(eval_py_env.render())\n return embed_mp4(filename)\n\n\n\n\ncreate_policy_eval_video(agent.policy, \"trained-agent\")",
"_____no_output_____"
]
],
[
[
"For fun, compare the trained agent (above) to an agent moving randomly. (It does not do as well.)",
"_____no_output_____"
]
],
[
[
"create_policy_eval_video(random_policy, \"random-agent\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b87efc4a4da9c219af116fe1d25d7e96dc0eb6 | 3,228 | ipynb | Jupyter Notebook | notebooks/dev/04_jmg_cb_lad_merge.ipynb | Juan-Mateos/cb_processing | 013995c6432486c96f6b206a9d7adfbc77323918 | [
"MIT"
] | 1 | 2020-10-09T11:07:13.000Z | 2020-10-09T11:07:13.000Z | notebooks/dev/04_jmg_cb_lad_merge.ipynb | Juan-Mateos/cb_processing | 013995c6432486c96f6b206a9d7adfbc77323918 | [
"MIT"
] | null | null | null | notebooks/dev/04_jmg_cb_lad_merge.ipynb | Juan-Mateos/cb_processing | 013995c6432486c96f6b206a9d7adfbc77323918 | [
"MIT"
] | null | null | null | 20.175 | 122 | 0.536865 | [
[
[
"### CB LAD match\n\n\nWe geocode CrunchBase with Local Authority District data. ",
"_____no_output_____"
],
[
"## 0. Preamble",
"_____no_output_____"
]
],
[
[
"%run ../notebook_preamble.ipy\n\nimport geopandas as gp\n\nfrom shapely.geometry import Point",
"_____no_output_____"
]
],
[
[
"### Load CB data",
"_____no_output_____"
]
],
[
[
"cb = pd.read_csv('../../data/processed/18_9_2019_cb_sector_labelled.csv')",
"_____no_output_____"
],
[
"shapes = gp.read_file('../../data/external/lad_shape/Local_Authority_Districts_December_2018_Boundaries_GB_BFC.shp')",
"_____no_output_____"
],
[
"#Create geodataframe\ncb_uk = cb.loc[cb['country_alpha_2']=='GB']\n\ncb_uk_geo = gp.GeoDataFrame(cb_uk, geometry=[Point(x, y) for x, y in zip(cb_uk['longitude'], cb_uk['latitude'])])",
"_____no_output_____"
],
[
"#Reproject the LADs and create spatial join\nshapes = shapes.to_crs({'init':'epsg:4326'})\ncb_joined = gp.sjoin(cb_uk_geo,shapes,how='left',op='within')",
"_____no_output_____"
]
],
[
[
"Names to keep",
"_____no_output_____"
]
],
[
[
"keep_cols = list(cb.columns) + ['lad18nm','lad18cd']",
"_____no_output_____"
],
[
"cb_joined_keep = cb_joined[keep_cols]",
"_____no_output_____"
],
[
"#Concatenate cb with the names above\ncb_all = pd.concat([cb.loc[cb['country_alpha_2']!='GB'],cb_joined_keep],axis=0)[keep_cols]",
"_____no_output_____"
],
[
"cb_all.to_csv('../../data/processed/18_9_2019_cb_sector_labelled_geo.csv')",
"_____no_output_____"
],
[
"#from data_getters.labs.core import upload_file",
"_____no_output_____"
],
[
"#upload_file('../../data/processed/18_9_2019_cb_sector_labelled_geo.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b884428e68c986162f51aa12618b926f6d8367 | 8,281 | ipynb | Jupyter Notebook | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 | 1dd366d64db20bc6fe750c3cc67eeae0fb7e5e4f | [
"Apache-2.0"
] | null | null | null | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 | 1dd366d64db20bc6fe750c3cc67eeae0fb7e5e4f | [
"Apache-2.0"
] | null | null | null | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 | 1dd366d64db20bc6fe750c3cc67eeae0fb7e5e4f | [
"Apache-2.0"
] | null | null | null | 26.886364 | 242 | 0.410216 | [
[
[
"<a href=\"https://colab.research.google.com/github/itsmecelyssa/Linear-Algebra-58020/blob/main/Midterm_Exam.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"#**Riego de Dios, Celyssa Chryse**",
"_____no_output_____"
],
[
"##**Question 1.** Create a Python code that displays a square matrix whose length is 5 (10 points)",
"_____no_output_____"
]
],
[
[
"import numpy as np #Import library\nA = np.array([[1,2,3,4,5],[2,3,4,5,1],[3,4,5,1,2],[4,5,1,2,3],[5,1,2,3,4]]) #SET OF 5X5 MATRIX\nprint(\"Square Matrix whose length is 5\")\nprint(A)",
"Square Matrix whose length is 5\n[[1 2 3 4 5]\n [2 3 4 5 1]\n [3 4 5 1 2]\n [4 5 1 2 3]\n [5 1 2 3 4]]\n"
]
],
[
[
"##**Question 2.** Create a Python code that displays a square matrix whose elements below the principal diagonal are zero (10 points)",
"_____no_output_____"
]
],
[
[
"import numpy as np \nB = np.triu([[1,2,3,4,5],[2,3,4,5,1],[3,4,5,1,2],[4,5,1,2,3],[5,1,2,3,4]])\nprint(\"Square Matrix whose elements below the principal diagonal are zero\")\nprint(B)",
"Square Matrix whose elements below the principal diagonal are zero\n[[1 2 3 4 5]\n [0 3 4 5 1]\n [0 0 5 1 2]\n [0 0 0 2 3]\n [0 0 0 0 4]]\n"
]
],
[
[
"##**Question 3.** Create a Python code that displays a square matrix which is symmetrical (10 points)",
"_____no_output_____"
]
],
[
[
"import numpy as np \nF = np.array([[1,2,3],[2,3,3],[3,4,-2]])\nprint(\"Symmetric form of Matrix\")\nprint(F)\nG = np.transpose(F)\nprint(\"Transpose of the Matrix\")\nprint(G)",
"Symmetric form of Matrix\n[[ 1 2 3]\n [ 2 3 3]\n [ 3 4 -2]]\nTranspose of the Matrix\n[[ 1 2 3]\n [ 2 3 4]\n [ 3 3 -2]]\n"
]
],
[
[
"##**Question 4.** What is the inverse of matrix C? Show your solution by python coding. (20 points)",
"_____no_output_____"
]
],
[
[
"#Python Program to Inverse a 3x3 Matrix C = ([[1,2,3],[2,3,3],[3,4,-2]])\nC = np.array([[1,2,3],[2,3,3],[3,4,-2]])\nprint(C,\"\\n\")\nD = np.linalg.inv(C)\nprint(D)",
"[[ 1 2 3]\n [ 2 3 3]\n [ 3 4 -2]] \n\n[[-3.6 3.2 -0.6]\n [ 2.6 -2.2 0.6]\n [-0.2 0.4 -0.2]]\n"
]
],
[
[
"##**Question 5.** What is the determinant of the given matrix in Question 4? Show your solution by python coding. (20 points)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nC = np.array([[1,2,3],[2,3,3],[3,4,-2]])\nprint(C,\"\\n\")\nH = np.linalg.det(C)\nprint(round(H))",
"[[ 1 2 3]\n [ 2 3 3]\n [ 3 4 -2]] \n\n5\n"
]
],
[
[
"##**Question 6.** Find the roots of the linear equations by showing its python codes (30 points)",
"_____no_output_____"
]
],
[
[
"import numpy as np\nA = np.array([[5,4,1],[10,9,4],[10,13,15]])\nprint(A,\"\\n\")\nA_ = np.linalg.inv(A)\nprint(A_,\"\\n\")\nB = np.array([[3.4],[8.8],[19.2]])\nprint(B,\"\\n\")\n\nAA_ = np.dot(A,A_)\nprint(AA_,\"\\n\")\nBA_ = np.dot(A_,B)\nprint(BA_)",
"[[ 5 4 1]\n [10 9 4]\n [10 13 15]] \n\n[[ 5.53333333 -3.13333333 0.46666667]\n [-7.33333333 4.33333333 -0.66666667]\n [ 2.66666667 -1.66666667 0.33333333]] \n\n[[ 3.4]\n [ 8.8]\n [19.2]] \n\n[[ 1.00000000e+00 -4.44089210e-16 -1.66533454e-16]\n [-1.77635684e-15 1.00000000e+00 -2.22044605e-16]\n [-2.22044605e-15 -1.33226763e-15 1.00000000e+00]] \n\n[[0.2]\n [0.4]\n [0.8]]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e7b888f428d98108cd3ab45af59aab04cd3cedf1 | 80,866 | ipynb | Jupyter Notebook | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification | f3631c7c8fc32b05894000ff0a0dafd189bfb8e7 | [
"MIT"
] | null | null | null | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification | f3631c7c8fc32b05894000ff0a0dafd189bfb8e7 | [
"MIT"
] | null | null | null | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification | f3631c7c8fc32b05894000ff0a0dafd189bfb8e7 | [
"MIT"
] | null | null | null | 62.252502 | 11,284 | 0.671642 | [
[
[
"\n# Race classification \n\nSarah Santiago and Carlos Ortiz initially wrote this notebook. Jae Yeon Kim reviwed the notebook, edited the markdown, and commented on the code.\n\nRacial demographic dialect predictions were made by the model developed by [Blodgett, S. L., Green, L., & O'Connor, B. (2016)](https://arxiv.org/pdf/1608.08868.pdf). We modified their predict function in [the public Git repository](https://github.com/slanglab/twitteraae) to work in the notebook environment. ",
"_____no_output_____"
]
],
[
[
"# Import libraries\n\nimport pandas as pd\nimport numpy as np\nimport re\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n## Language-demography model \n\nimport predict",
"_____no_output_____"
]
],
[
[
"### Import Tweets",
"_____no_output_____"
]
],
[
[
"\n# Import file \ntweets = pd.read_csv(\"tweet.csv\").drop(['Unnamed: 0'], axis=1)\n\n# Index variable \ntweets.index.name = 'ID'\n\n# First five rows\ntweets.head()",
"_____no_output_____"
]
],
[
[
"### Clean Tweets",
"_____no_output_____"
]
],
[
[
"url_re = r'http\\S+'\nat_re = r'@[\\w]*'\nrt_re = r'^[rt]{2}'\npunct_re = r'[^\\w\\s]'\n\ntweets_clean = tweets.copy()\ntweets_clean['Tweet'] = tweets_clean['Tweet'].str.lower() # Lower Case\ntweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(url_re, '') # Remove Links/URL\ntweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(at_re, '') # Remove @\ntweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(rt_re, '') # Remove rt\ntweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(punct_re, '') # Remove Punctation\n\ntweets_clean['Tweet'] = tweets_clean['Tweet'].apply(unicode) # Applied unicode for compatability with model\n\ntweets_clean.head()",
"_____no_output_____"
]
],
[
[
"### Apply Predictions",
"_____no_output_____"
]
],
[
[
"predict.load_model()\n\ndef prediction(string):\n return predict.predict(string.split())\n\npredictions = tweets_clean['Tweet'].apply(prediction)",
"_____no_output_____"
],
[
"tweets_clean['Predictions'] = predictions\n\n# Fill tweets that have no predictions with None\ntweets_clean = tweets_clean.fillna(\"NA\") \ntweets_clean.head() ",
"_____no_output_____"
],
[
"def first_last(item):\n if item is 'NA':\n return 'NA'\n\n return np.array([item[0], item[3]])\n\n# Add \"Predictions_AAE_WAE\" column which is predictions for AAE dialect and WAE dialect\ntweets_clean['Predictions_AAE_W'] = tweets_clean['Predictions'].apply(first_last)\ntweets_clean.head()",
"_____no_output_____"
],
[
"# Model 1\ndef detect_two(item):\n if item is 'NA':\n return None\n \n if item[0] >= item[1]:\n return 0\n else:\n return 1\n\n# Model 2\ndef detect_all(item):\n if item is \"NA\":\n return None\n\n if item[0] >= item[1] and item[0] >= item[2] and item[0] >= item[3]:\n return 0\n elif item[3] >= item[0] and item[3] >= item[1] and item[3] >= item[2]:\n return 1\n else:\n return 2\n\n# Add \"Racial Demographic\" column such that AAE is represented by 0 and WAE is represented by 1\ntweets_clean['Racial Demographic (Two)'] = tweets_clean['Predictions_AAE_W'].apply(detect_two)\ntweets_clean['Racial Demographic (All)'] = tweets_clean['Predictions'].apply(detect_all)",
"_____no_output_____"
]
],
[
[
"### Tweets with Predictions Based on Racial Demographics (AAE, WAE)",
"_____no_output_____"
]
],
[
[
"final_tweets = tweets_clean.drop(columns=[\"Predictions\", \"Predictions_AAE_W\"])\nfinal_tweets['Tweet'] = tweets['Tweet']\nfinal_tweets.head()",
"_____no_output_____"
]
],
[
[
"### Export Tweets to CSV",
"_____no_output_____"
]
],
[
[
"final_tweets.to_csv('r_d_tweets_3.csv')",
"_____no_output_____"
]
],
[
[
"## Analysis",
"_____no_output_____"
]
],
[
[
"sns.countplot(x=final_tweets['Racial Demographic (Two)'])\nplt.title(\"Racial Demographic (Two)\")",
"_____no_output_____"
],
[
"sns.countplot(x=final_tweets['Racial Demographic (All)'])\nplt.title(\"Racial Demographic (All)\")",
"_____no_output_____"
],
[
"aae = final_tweets[final_tweets['Racial Demographic (All)'] == 0]\naae.head()",
"_____no_output_____"
],
[
"counts = aae.groupby(\"Type\").count()\ncounts = counts.reset_index().rename(columns = {'Number of Votes': 'Count'})\ncounts",
"_____no_output_____"
],
[
"sns.barplot(x=\"Type\", y=\"Count\", data = counts)\nplt.title(\"Type Counts AAE\")",
"_____no_output_____"
],
[
"wae = final_tweets[final_tweets['Racial Demographic (All)'] == 1]\nwae.head()",
"_____no_output_____"
],
[
"counts_wae = wae.groupby(\"Type\").count()\ncounts_wae = counts_wae.reset_index().rename(columns = {'Number of Votes': 'Count'})\ncounts_wae",
"_____no_output_____"
],
[
"sns.barplot(x=\"Type\", y=\"Count\", data = counts_wae)\nplt.title(\"Type Counts WAE\")",
"_____no_output_____"
],
[
"other = final_tweets[(final_tweets['Racial Demographic (All)'] == 2)] #| (final_tweets['Racial Demographic (All)'] == 0)]\n\ncounts_other = other.groupby(\"Type\").count()\ncounts_other = counts_other.reset_index().rename(columns = {'Number of Votes': 'Count'})\n\nsns.barplot(x=\"Type\", y=\"Count\", data = counts_other)\nplt.title(\"Type Counts Other\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7b89e8aa23547df846a271f564a99f1b3cc56c2 | 24,168 | ipynb | Jupyter Notebook | AnimatedVisualizationAndFlatRent/notebooks/FlatOptionsAnalyzing.ipynb | SmirnovAlexander/InformationAnalysis | 2c74806127ae92b282f07f931a73044e0ae9d42c | [
"MIT"
] | null | null | null | AnimatedVisualizationAndFlatRent/notebooks/FlatOptionsAnalyzing.ipynb | SmirnovAlexander/InformationAnalysis | 2c74806127ae92b282f07f931a73044e0ae9d42c | [
"MIT"
] | 2 | 2019-10-21T20:21:20.000Z | 2019-12-14T10:00:14.000Z | AnimatedVisualizationAndFlatRent/notebooks/FlatOptionsAnalyzing.ipynb | SmirnovAlexander/InformationAnalysis | 2c74806127ae92b282f07f931a73044e0ae9d42c | [
"MIT"
] | null | null | null | 61.653061 | 14,324 | 0.719547 | [
[
[
"Semen want to rent a flat. You're given 3 equivalent params: distance to subway (minutes), number of subway station to get to work, rent price (thousands rubles). Way from flat to subway should not exceed 20 minutes. ",
"_____no_output_____"
],
[
"# Importing data ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"data = pd.read_excel(\"../data/flat_rent_info.xlsx\", 5, index_col=\"ID\")",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
]
],
[
[
"# Analyzing data",
"_____no_output_____"
],
[
"Normalizing data.",
"_____no_output_____"
]
],
[
[
"normalized_data = (data - data.min())/(data.max() - data.min()) ",
"_____no_output_____"
],
[
"normalized_data",
"_____no_output_____"
]
],
[
[
"As it told that params are equivalent, we should find top 3 minimum sums of params.",
"_____no_output_____"
]
],
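[
[
"A quick numerical version of that ranking (an editorial addition, using the `normalized_data` frame defined above, and assuming every flat in the table already satisfies the 20-minute constraint): since each parameter is a cost, the best options are the ones with the smallest total of normalized values.",
"_____no_output_____"
]
],
[
[
"# Sketch (added): rank the options by the sum of their normalized parameters (smaller is better)\ntotals = normalized_data.sum(axis=1)\nprint(totals.nsmallest(3))",
"_____no_output_____"
]
],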
[
[
"normalized_data.plot(stacked=True, kind='bar', colormap = 'Set2', figsize=(10, 8), fontsize=12)\nplt.xticks(rotation = 0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"So we see that last 3 options are the best.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e7b89f62bd3dc1be9162537a4b788af8cf69f779 | 952 | ipynb | Jupyter Notebook | HelloGithub.ipynb | wisienka91/ml-dw-matrix | 98cfcd365f3b7a8ed3dd63d62e243939d010a389 | [
"MIT"
] | null | null | null | HelloGithub.ipynb | wisienka91/ml-dw-matrix | 98cfcd365f3b7a8ed3dd63d62e243939d010a389 | [
"MIT"
] | null | null | null | HelloGithub.ipynb | wisienka91/ml-dw-matrix | 98cfcd365f3b7a8ed3dd63d62e243939d010a389 | [
"MIT"
] | null | null | null | 952 | 952 | 0.709034 | [
[
[
"print(\"Hello Github\")",
"Hello Github\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
e7b8bf52205a9e08b0fd0aea6538837524cb6007 | 110,355 | ipynb | Jupyter Notebook | Notebooks/star_war.ipynb | vertigo-yl/Projects | d55a7aa71b455199fdad67c364a2f8b9809969ac | [
"MIT-0"
] | null | null | null | Notebooks/star_war.ipynb | vertigo-yl/Projects | d55a7aa71b455199fdad67c364a2f8b9809969ac | [
"MIT-0"
] | null | null | null | Notebooks/star_war.ipynb | vertigo-yl/Projects | d55a7aa71b455199fdad67c364a2f8b9809969ac | [
"MIT-0"
] | null | null | null | 156.754261 | 33,504 | 0.803289 | [
[
[
"\n## Data analysis and visualization of knowledge graph for star war movies\n\n👉👉[**You can have a look at this Project first**](http://starwar-visualization.s3-website-us-west-1.amazonaws.com) 👈👈\n\n\nThis project collected data from online database [**SWAPI**](https://swapi.co), which is the world's first quantified and programmatically-accessible data source for all the data from the Star Wars canon universe!\n\nThe dataset include 6 APIs: Planets, Spaceships, Vehicles, People, Films and Species, from all SEVEN Star Wars films.\n\n### 1. Data collection\n\nWe can get the json file of all data from this website, then use urllib in python3 to download and save data. \n",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.simplefilter('ignore')\n\nimport urllib\nimport json\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport matplotlib\n%matplotlib inline\nmatplotlib.style.use('ggplot')",
"_____no_output_____"
],
[
"films = []\nfor x in range(1,8):\n films.append('httP://swapi.co/api/films/' + str(x) + '/')\n\nheaders = {}\nheaders[\"User-Agent\"] = \"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.3226.400 QQBrowser/9.6.11681.400\"\n\nfw = open('../csv/films.txt', 'w')\nfor item in films:\n print(item)\n request = urllib.request.Request(item, headers=headers)\n response = urllib.request.urlopen(request, timeout=20)\n result = response.read().decode('utf-8')\n print(result)\n fw.write(result + '\\n')\n\nfw.close()",
"httP://swapi.co/api/films/1/\n{\"title\":\"A New Hope\",\"episode_id\":4,\"opening_crawl\":\"It is a period of civil war.\\r\\nRebel spaceships, striking\\r\\nfrom a hidden base, have won\\r\\ntheir first victory against\\r\\nthe evil Galactic Empire.\\r\\n\\r\\nDuring the battle, Rebel\\r\\nspies managed to steal secret\\r\\nplans to the Empire's\\r\\nultimate weapon, the DEATH\\r\\nSTAR, an armored space\\r\\nstation with enough power\\r\\nto destroy an entire planet.\\r\\n\\r\\nPursued by the Empire's\\r\\nsinister agents, Princess\\r\\nLeia races home aboard her\\r\\nstarship, custodian of the\\r\\nstolen plans that can save her\\r\\npeople and restore\\r\\nfreedom to the galaxy....\",\"director\":\"George Lucas\",\"producer\":\"Gary Kurtz, Rick McCallum\",\"release_date\":\"1977-05-25\",\"characters\":[\"https://swapi.co/api/people/1/\",\"https://swapi.co/api/people/2/\",\"https://swapi.co/api/people/3/\",\"https://swapi.co/api/people/4/\",\"https://swapi.co/api/people/5/\",\"https://swapi.co/api/people/6/\",\"https://swapi.co/api/people/7/\",\"https://swapi.co/api/people/8/\",\"https://swapi.co/api/people/9/\",\"https://swapi.co/api/people/10/\",\"https://swapi.co/api/people/12/\",\"https://swapi.co/api/people/13/\",\"https://swapi.co/api/people/14/\",\"https://swapi.co/api/people/15/\",\"https://swapi.co/api/people/16/\",\"https://swapi.co/api/people/18/\",\"https://swapi.co/api/people/19/\",\"https://swapi.co/api/people/81/\"],\"planets\":[\"https://swapi.co/api/planets/2/\",\"https://swapi.co/api/planets/3/\",\"https://swapi.co/api/planets/1/\"],\"starships\":[\"https://swapi.co/api/starships/2/\",\"https://swapi.co/api/starships/3/\",\"https://swapi.co/api/starships/5/\",\"https://swapi.co/api/starships/9/\",\"https://swapi.co/api/starships/10/\",\"https://swapi.co/api/starships/11/\",\"https://swapi.co/api/starships/12/\",\"https://swapi.co/api/starships/13/\"],\"vehicles\":[\"https://swapi.co/api/vehicles/4/\",\"https://swapi.co/api/vehicles/6/\",\"https://swapi.co/api/vehicles/7/\",\"https://swapi.co/api/vehicles/8/\"],\"species\":[\"https://swapi.co/api/species/5/\",\"https://swapi.co/api/species/3/\",\"https://swapi.co/api/species/2/\",\"https://swapi.co/api/species/1/\",\"https://swapi.co/api/species/4/\"],\"created\":\"2014-12-10T14:23:31.880000Z\",\"edited\":\"2015-04-11T09:46:52.774897Z\",\"url\":\"https://swapi.co/api/films/1/\"}\nhttP://swapi.co/api/films/2/\n{\"title\":\"The Empire Strikes Back\",\"episode_id\":5,\"opening_crawl\":\"It is a dark time for the\\r\\nRebellion. 
Although the Death\\r\\nStar has been destroyed,\\r\\nImperial troops have driven the\\r\\nRebel forces from their hidden\\r\\nbase and pursued them across\\r\\nthe galaxy.\\r\\n\\r\\nEvading the dreaded Imperial\\r\\nStarfleet, a group of freedom\\r\\nfighters led by Luke Skywalker\\r\\nhas established a new secret\\r\\nbase on the remote ice world\\r\\nof Hoth.\\r\\n\\r\\nThe evil lord Darth Vader,\\r\\nobsessed with finding young\\r\\nSkywalker, has dispatched\\r\\nthousands of remote probes into\\r\\nthe far reaches of space....\",\"director\":\"Irvin Kershner\",\"producer\":\"Gary Kurtz, Rick McCallum\",\"release_date\":\"1980-05-17\",\"characters\":[\"https://swapi.co/api/people/1/\",\"https://swapi.co/api/people/2/\",\"https://swapi.co/api/people/3/\",\"https://swapi.co/api/people/4/\",\"https://swapi.co/api/people/5/\",\"https://swapi.co/api/people/10/\",\"https://swapi.co/api/people/13/\",\"https://swapi.co/api/people/14/\",\"https://swapi.co/api/people/18/\",\"https://swapi.co/api/people/20/\",\"https://swapi.co/api/people/21/\",\"https://swapi.co/api/people/22/\",\"https://swapi.co/api/people/23/\",\"https://swapi.co/api/people/24/\",\"https://swapi.co/api/people/25/\",\"https://swapi.co/api/people/26/\"],\"planets\":[\"https://swapi.co/api/planets/4/\",\"https://swapi.co/api/planets/5/\",\"https://swapi.co/api/planets/6/\",\"https://swapi.co/api/planets/27/\"],\"starships\":[\"https://swapi.co/api/starships/15/\",\"https://swapi.co/api/starships/10/\",\"https://swapi.co/api/starships/11/\",\"https://swapi.co/api/starships/12/\",\"https://swapi.co/api/starships/21/\",\"https://swapi.co/api/starships/22/\",\"https://swapi.co/api/starships/23/\",\"https://swapi.co/api/starships/3/\",\"https://swapi.co/api/starships/17/\"],\"vehicles\":[\"https://swapi.co/api/vehicles/8/\",\"https://swapi.co/api/vehicles/14/\",\"https://swapi.co/api/vehicles/16/\",\"https://swapi.co/api/vehicles/18/\",\"https://swapi.co/api/vehicles/19/\",\"https://swapi.co/api/vehicles/20/\"],\"species\":[\"https://swapi.co/api/species/6/\",\"https://swapi.co/api/species/7/\",\"https://swapi.co/api/species/3/\",\"https://swapi.co/api/species/2/\",\"https://swapi.co/api/species/1/\"],\"created\":\"2014-12-12T11:26:24.656000Z\",\"edited\":\"2017-04-19T10:57:29.544256Z\",\"url\":\"https://swapi.co/api/films/2/\"}\nhttP://swapi.co/api/films/3/\n{\"title\":\"Return of the Jedi\",\"episode_id\":6,\"opening_crawl\":\"Luke Skywalker has returned to\\r\\nhis home planet of Tatooine in\\r\\nan attempt to rescue his\\r\\nfriend Han Solo from the\\r\\nclutches of the vile gangster\\r\\nJabba the Hutt.\\r\\n\\r\\nLittle does Luke know that the\\r\\nGALACTIC EMPIRE has secretly\\r\\nbegun construction on a new\\r\\narmored space station even\\r\\nmore powerful than the first\\r\\ndreaded Death Star.\\r\\n\\r\\nWhen completed, this ultimate\\r\\nweapon will spell certain doom\\r\\nfor the small band of rebels\\r\\nstruggling to restore freedom\\r\\nto the galaxy...\",\"director\":\"Richard Marquand\",\"producer\":\"Howard G. 
Kazanjian, George Lucas, Rick McCallum\",\"release_date\":\"1983-05-25\",\"characters\":[\"https://swapi.co/api/people/1/\",\"https://swapi.co/api/people/2/\",\"https://swapi.co/api/people/3/\",\"https://swapi.co/api/people/4/\",\"https://swapi.co/api/people/5/\",\"https://swapi.co/api/people/10/\",\"https://swapi.co/api/people/13/\",\"https://swapi.co/api/people/14/\",\"https://swapi.co/api/people/16/\",\"https://swapi.co/api/people/18/\",\"https://swapi.co/api/people/20/\",\"https://swapi.co/api/people/21/\",\"https://swapi.co/api/people/22/\",\"https://swapi.co/api/people/25/\",\"https://swapi.co/api/people/27/\",\"https://swapi.co/api/people/28/\",\"https://swapi.co/api/people/29/\",\"https://swapi.co/api/people/30/\",\"https://swapi.co/api/people/31/\",\"https://swapi.co/api/people/45/\"],\"planets\":[\"https://swapi.co/api/planets/5/\",\"https://swapi.co/api/planets/7/\",\"https://swapi.co/api/planets/8/\",\"https://swapi.co/api/planets/9/\",\"https://swapi.co/api/planets/1/\"],\"starships\":[\"https://swapi.co/api/starships/15/\",\"https://swapi.co/api/starships/10/\",\"https://swapi.co/api/starships/11/\",\"https://swapi.co/api/starships/12/\",\"https://swapi.co/api/starships/22/\",\"https://swapi.co/api/starships/23/\",\"https://swapi.co/api/starships/27/\",\"https://swapi.co/api/starships/28/\",\"https://swapi.co/api/starships/29/\",\"https://swapi.co/api/starships/3/\",\"https://swapi.co/api/starships/17/\",\"https://swapi.co/api/starships/2/\"],\"vehicles\":[\"https://swapi.co/api/vehicles/8/\",\"https://swapi.co/api/vehicles/16/\",\"https://swapi.co/api/vehicles/18/\",\"https://swapi.co/api/vehicles/19/\",\"https://swapi.co/api/vehicles/24/\",\"https://swapi.co/api/vehicles/25/\",\"https://swapi.co/api/vehicles/26/\",\"https://swapi.co/api/vehicles/30/\"],\"species\":[\"https://swapi.co/api/species/1/\",\"https://swapi.co/api/species/2/\",\"https://swapi.co/api/species/3/\",\"https://swapi.co/api/species/5/\",\"https://swapi.co/api/species/6/\",\"https://swapi.co/api/species/8/\",\"https://swapi.co/api/species/9/\",\"https://swapi.co/api/species/10/\",\"https://swapi.co/api/species/15/\"],\"created\":\"2014-12-18T10:39:33.255000Z\",\"edited\":\"2015-04-11T09:46:05.220365Z\",\"url\":\"https://swapi.co/api/films/3/\"}\nhttP://swapi.co/api/films/4/\n{\"title\":\"The Phantom Menace\",\"episode_id\":1,\"opening_crawl\":\"Turmoil has engulfed the\\r\\nGalactic Republic. 
The taxation\\r\\nof trade routes to outlying star\\r\\nsystems is in dispute.\\r\\n\\r\\nHoping to resolve the matter\\r\\nwith a blockade of deadly\\r\\nbattleships, the greedy Trade\\r\\nFederation has stopped all\\r\\nshipping to the small planet\\r\\nof Naboo.\\r\\n\\r\\nWhile the Congress of the\\r\\nRepublic endlessly debates\\r\\nthis alarming chain of events,\\r\\nthe Supreme Chancellor has\\r\\nsecretly dispatched two Jedi\\r\\nKnights, the guardians of\\r\\npeace and justice in the\\r\\ngalaxy, to settle the conflict....\",\"director\":\"George Lucas\",\"producer\":\"Rick McCallum\",\"release_date\":\"1999-05-19\",\"characters\":[\"https://swapi.co/api/people/2/\",\"https://swapi.co/api/people/3/\",\"https://swapi.co/api/people/10/\",\"https://swapi.co/api/people/11/\",\"https://swapi.co/api/people/16/\",\"https://swapi.co/api/people/20/\",\"https://swapi.co/api/people/21/\",\"https://swapi.co/api/people/32/\",\"https://swapi.co/api/people/33/\",\"https://swapi.co/api/people/34/\",\"https://swapi.co/api/people/36/\",\"https://swapi.co/api/people/37/\",\"https://swapi.co/api/people/38/\",\"https://swapi.co/api/people/39/\",\"https://swapi.co/api/people/40/\",\"https://swapi.co/api/people/41/\",\"https://swapi.co/api/people/42/\",\"https://swapi.co/api/people/43/\",\"https://swapi.co/api/people/44/\",\"https://swapi.co/api/people/46/\",\"https://swapi.co/api/people/48/\",\"https://swapi.co/api/people/49/\",\"https://swapi.co/api/people/50/\",\"https://swapi.co/api/people/51/\",\"https://swapi.co/api/people/52/\",\"https://swapi.co/api/people/53/\",\"https://swapi.co/api/people/54/\",\"https://swapi.co/api/people/55/\",\"https://swapi.co/api/people/56/\",\"https://swapi.co/api/people/57/\",\"https://swapi.co/api/people/58/\",\"https://swapi.co/api/people/59/\",\"https://swapi.co/api/people/47/\",\"https://swapi.co/api/people/35/\"],\"planets\":[\"https://swapi.co/api/planets/8/\",\"https://swapi.co/api/planets/9/\",\"https://swapi.co/api/planets/1/\"],\"starships\":[\"https://swapi.co/api/starships/40/\",\"https://swapi.co/api/starships/41/\",\"https://swapi.co/api/starships/31/\",\"https://swapi.co/api/starships/32/\",\"https://swapi.co/api/starships/39/\"],\"vehicles\":[\"https://swapi.co/api/vehicles/33/\",\"https://swapi.co/api/vehicles/34/\",\"https://swapi.co/api/vehicles/35/\",\"https://swapi.co/api/vehicles/36/\",\"https://swapi.co/api/vehicles/37/\",\"https://swapi.co/api/vehicles/38/\",\"https://swapi.co/api/vehicles/42/\"],\"species\":[\"https://swapi.co/api/species/1/\",\"https://swapi.co/api/species/2/\",\"https://swapi.co/api/species/6/\",\"https://swapi.co/api/species/11/\",\"https://swapi.co/api/species/12/\",\"https://swapi.co/api/species/13/\",\"https://swapi.co/api/species/14/\",\"https://swapi.co/api/species/15/\",\"https://swapi.co/api/species/16/\",\"https://swapi.co/api/species/17/\",\"https://swapi.co/api/species/18/\",\"https://swapi.co/api/species/19/\",\"https://swapi.co/api/species/20/\",\"https://swapi.co/api/species/21/\",\"https://swapi.co/api/species/22/\",\"https://swapi.co/api/species/23/\",\"https://swapi.co/api/species/24/\",\"https://swapi.co/api/species/25/\",\"https://swapi.co/api/species/26/\",\"https://swapi.co/api/species/27/\"],\"created\":\"2014-12-19T16:52:55.740000Z\",\"edited\":\"2015-04-11T09:45:18.689301Z\",\"url\":\"https://swapi.co/api/films/4/\"}\nhttP://swapi.co/api/films/5/\n"
],
[
"fr = open('../csv/films.txt', 'r')\nfilms = []\nfor line in fr:\n    line = json.loads(line.strip('\\n'))\n    films.append(line)\nfr.close()\n\n# Fetch characters, planets, starships, vehicles, species\ntargets = ['characters', 'planets', 'starships', 'vehicles', 'species']\nfor target in targets:\n    fw = open('../csv/' + target + '.txt', 'w')\n    data = []\n    for item in films:\n        tmp = item[target]\n        for t in tmp:\n            if t in data:\n                continue\n            else:\n                data.append(t)\n            \n            while 1:\n                print(t)\n                try:\n                    request = urllib.request.Request(t, headers=headers)\n                    response = urllib.request.urlopen(request, timeout=20)\n                    result = response.read().decode('utf-8')\n                except Exception as e:\n                    continue\n                else:\n                    fw.write(result + '\\n')\n                    break\n                finally:\n                    pass\n\n    print (str(len(data)), target)\n    fw.close()",
"https://swapi.co/api/people/1/\nhttps://swapi.co/api/people/2/\nhttps://swapi.co/api/people/3/\nhttps://swapi.co/api/people/4/\nhttps://swapi.co/api/people/5/\nhttps://swapi.co/api/people/6/\nhttps://swapi.co/api/people/7/\nhttps://swapi.co/api/people/8/\nhttps://swapi.co/api/people/9/\nhttps://swapi.co/api/people/10/\nhttps://swapi.co/api/people/12/\nhttps://swapi.co/api/people/13/\nhttps://swapi.co/api/people/14/\nhttps://swapi.co/api/people/15/\nhttps://swapi.co/api/people/16/\nhttps://swapi.co/api/people/18/\nhttps://swapi.co/api/people/19/\nhttps://swapi.co/api/people/81/\nhttps://swapi.co/api/people/20/\nhttps://swapi.co/api/people/21/\nhttps://swapi.co/api/people/22/\nhttps://swapi.co/api/people/23/\nhttps://swapi.co/api/people/24/\nhttps://swapi.co/api/people/25/\nhttps://swapi.co/api/people/26/\nhttps://swapi.co/api/people/27/\nhttps://swapi.co/api/people/28/\nhttps://swapi.co/api/people/29/\nhttps://swapi.co/api/people/30/\nhttps://swapi.co/api/people/31/\nhttps://swapi.co/api/people/45/\nhttps://swapi.co/api/people/11/\nhttps://swapi.co/api/people/32/\nhttps://swapi.co/api/people/33/\nhttps://swapi.co/api/people/34/\nhttps://swapi.co/api/people/36/\nhttps://swapi.co/api/people/37/\nhttps://swapi.co/api/people/38/\nhttps://swapi.co/api/people/39/\nhttps://swapi.co/api/people/40/\nhttps://swapi.co/api/people/41/\nhttps://swapi.co/api/people/42/\nhttps://swapi.co/api/people/43/\nhttps://swapi.co/api/people/44/\nhttps://swapi.co/api/people/46/\nhttps://swapi.co/api/people/48/\nhttps://swapi.co/api/people/49/\nhttps://swapi.co/api/people/50/\nhttps://swapi.co/api/people/51/\nhttps://swapi.co/api/people/52/\nhttps://swapi.co/api/people/53/\nhttps://swapi.co/api/people/54/\nhttps://swapi.co/api/people/55/\nhttps://swapi.co/api/people/56/\nhttps://swapi.co/api/people/57/\nhttps://swapi.co/api/people/58/\nhttps://swapi.co/api/people/59/\nhttps://swapi.co/api/people/47/\nhttps://swapi.co/api/people/35/\nhttps://swapi.co/api/people/60/\nhttps://swapi.co/api/people/61/\nhttps://swapi.co/api/people/62/\nhttps://swapi.co/api/people/63/\nhttps://swapi.co/api/people/64/\nhttps://swapi.co/api/people/65/\nhttps://swapi.co/api/people/66/\nhttps://swapi.co/api/people/67/\nhttps://swapi.co/api/people/68/\nhttps://swapi.co/api/people/69/\nhttps://swapi.co/api/people/70/\nhttps://swapi.co/api/people/71/\nhttps://swapi.co/api/people/72/\nhttps://swapi.co/api/people/73/\nhttps://swapi.co/api/people/74/\nhttps://swapi.co/api/people/75/\nhttps://swapi.co/api/people/76/\nhttps://swapi.co/api/people/77/\nhttps://swapi.co/api/people/78/\nhttps://swapi.co/api/people/82/\nhttps://swapi.co/api/people/79/\nhttps://swapi.co/api/people/80/\nhttps://swapi.co/api/people/83/\nhttps://swapi.co/api/people/84/\nhttps://swapi.co/api/people/85/\nhttps://swapi.co/api/people/86/\nhttps://swapi.co/api/people/87/\nhttps://swapi.co/api/people/88/\n87 
characters\nhttps://swapi.co/api/planets/2/\nhttps://swapi.co/api/planets/3/\nhttps://swapi.co/api/planets/1/\nhttps://swapi.co/api/planets/4/\nhttps://swapi.co/api/planets/5/\nhttps://swapi.co/api/planets/6/\nhttps://swapi.co/api/planets/27/\nhttps://swapi.co/api/planets/7/\nhttps://swapi.co/api/planets/8/\nhttps://swapi.co/api/planets/9/\nhttps://swapi.co/api/planets/10/\nhttps://swapi.co/api/planets/11/\nhttps://swapi.co/api/planets/12/\nhttps://swapi.co/api/planets/13/\nhttps://swapi.co/api/planets/14/\nhttps://swapi.co/api/planets/15/\nhttps://swapi.co/api/planets/16/\nhttps://swapi.co/api/planets/17/\nhttps://swapi.co/api/planets/18/\nhttps://swapi.co/api/planets/19/\nhttps://swapi.co/api/planets/61/\n21 planets\nhttps://swapi.co/api/starships/2/\nhttps://swapi.co/api/starships/3/\nhttps://swapi.co/api/starships/5/\nhttps://swapi.co/api/starships/9/\nhttps://swapi.co/api/starships/10/\nhttps://swapi.co/api/starships/11/\nhttps://swapi.co/api/starships/12/\nhttps://swapi.co/api/starships/13/\nhttps://swapi.co/api/starships/15/\nhttps://swapi.co/api/starships/21/\nhttps://swapi.co/api/starships/22/\nhttps://swapi.co/api/starships/23/\nhttps://swapi.co/api/starships/17/\nhttps://swapi.co/api/starships/27/\nhttps://swapi.co/api/starships/28/\nhttps://swapi.co/api/starships/29/\nhttps://swapi.co/api/starships/40/\nhttps://swapi.co/api/starships/41/\nhttps://swapi.co/api/starships/31/\nhttps://swapi.co/api/starships/32/\nhttps://swapi.co/api/starships/39/\nhttps://swapi.co/api/starships/43/\nhttps://swapi.co/api/starships/47/\nhttps://swapi.co/api/starships/48/\nhttps://swapi.co/api/starships/49/\nhttps://swapi.co/api/starships/52/\nhttps://swapi.co/api/starships/58/\nhttps://swapi.co/api/starships/59/\nhttps://swapi.co/api/starships/61/\nhttps://swapi.co/api/starships/63/\nhttps://swapi.co/api/starships/64/\nhttps://swapi.co/api/starships/65/\nhttps://swapi.co/api/starships/66/\nhttps://swapi.co/api/starships/74/\nhttps://swapi.co/api/starships/75/\nhttps://swapi.co/api/starships/68/\nhttps://swapi.co/api/starships/77/\n37 starships\nhttps://swapi.co/api/vehicles/4/\nhttps://swapi.co/api/vehicles/6/\nhttps://swapi.co/api/vehicles/7/\nhttps://swapi.co/api/vehicles/8/\nhttps://swapi.co/api/vehicles/14/\nhttps://swapi.co/api/vehicles/16/\nhttps://swapi.co/api/vehicles/18/\nhttps://swapi.co/api/vehicles/19/\nhttps://swapi.co/api/vehicles/20/\nhttps://swapi.co/api/vehicles/24/\nhttps://swapi.co/api/vehicles/25/\nhttps://swapi.co/api/vehicles/26/\nhttps://swapi.co/api/vehicles/30/\nhttps://swapi.co/api/vehicles/33/\nhttps://swapi.co/api/vehicles/34/\nhttps://swapi.co/api/vehicles/35/\nhttps://swapi.co/api/vehicles/36/\nhttps://swapi.co/api/vehicles/37/\nhttps://swapi.co/api/vehicles/38/\nhttps://swapi.co/api/vehicles/42/\nhttps://swapi.co/api/vehicles/44/\nhttps://swapi.co/api/vehicles/45/\nhttps://swapi.co/api/vehicles/46/\nhttps://swapi.co/api/vehicles/50/\nhttps://swapi.co/api/vehicles/51/\nhttps://swapi.co/api/vehicles/53/\nhttps://swapi.co/api/vehicles/54/\nhttps://swapi.co/api/vehicles/55/\nhttps://swapi.co/api/vehicles/56/\nhttps://swapi.co/api/vehicles/57/\nhttps://swapi.co/api/vehicles/60/\nhttps://swapi.co/api/vehicles/62/\nhttps://swapi.co/api/vehicles/67/\nhttps://swapi.co/api/vehicles/69/\nhttps://swapi.co/api/vehicles/70/\nhttps://swapi.co/api/vehicles/71/\nhttps://swapi.co/api/vehicles/72/\nhttps://swapi.co/api/vehicles/73/\nhttps://swapi.co/api/vehicles/76/\n39 
vehicles\nhttps://swapi.co/api/species/5/\nhttps://swapi.co/api/species/3/\nhttps://swapi.co/api/species/2/\nhttps://swapi.co/api/species/1/\nhttps://swapi.co/api/species/4/\nhttps://swapi.co/api/species/6/\nhttps://swapi.co/api/species/7/\nhttps://swapi.co/api/species/8/\nhttps://swapi.co/api/species/9/\nhttps://swapi.co/api/species/10/\nhttps://swapi.co/api/species/15/\nhttps://swapi.co/api/species/11/\nhttps://swapi.co/api/species/12/\nhttps://swapi.co/api/species/13/\nhttps://swapi.co/api/species/14/\nhttps://swapi.co/api/species/16/\nhttps://swapi.co/api/species/17/\nhttps://swapi.co/api/species/18/\nhttps://swapi.co/api/species/19/\nhttps://swapi.co/api/species/20/\nhttps://swapi.co/api/species/21/\nhttps://swapi.co/api/species/22/\nhttps://swapi.co/api/species/23/\nhttps://swapi.co/api/species/24/\nhttps://swapi.co/api/species/25/\nhttps://swapi.co/api/species/26/\nhttps://swapi.co/api/species/27/\nhttps://swapi.co/api/species/32/\nhttps://swapi.co/api/species/33/\nhttps://swapi.co/api/species/35/\nhttps://swapi.co/api/species/34/\nhttps://swapi.co/api/species/28/\nhttps://swapi.co/api/species/29/\nhttps://swapi.co/api/species/30/\nhttps://swapi.co/api/species/31/\nhttps://swapi.co/api/species/36/\nhttps://swapi.co/api/species/37/\n37 species\n"
]
],
[
[
"### 2. Basic analysis",
"_____no_output_____"
]
],
[
[
"fr = open('../csv/films.txt','r')\nfw = open('../csv/stat_basic.csv','w')\nfw.write('title,key,value\\n')\n\nfor line in fr:\n tmp = json.loads(line.strip('\\n'))\n fw.write(tmp['title'] + ',' + 'characters,' + str(len(tmp['characters'])) + '\\n')\n fw.write(tmp['title'] + ',' + 'planets,' + str(len(tmp['planets'])) + '\\n')\n fw.write(tmp['title'] + ',' + 'starships,' + str(len(tmp['starships'])) + '\\n')\n fw.write(tmp['title'] + ',' + 'vehicles,' + str(len(tmp['vehicles'])) + '\\n')\n fw.write(tmp['title'] + ',' + 'species,' + str(len(tmp['species'])) + '\\n')\n\nfr.close()\nfw.close()\n",
"_____no_output_____"
],
[
"stats = pd.read_csv('../csv/stat_basic.csv')\nstats.head()",
"_____no_output_____"
],
[
"# Visualization of overall stats\n\nfig, ax = plt.subplots(figsize=(12, 6))\nsns.barplot(x='key', y ='value', hue='title', data=stats)\nax.set_title('Overview of all movies', fontsize=16)\nplt.xlabel('')\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### \"Attack of the Clones\" has the most characters",
"_____no_output_____"
]
],
[
[
"fr = open('../csv/characters.txt','r')\nfw = open('../csv/stat_characters.csv','w')\nfw.write('name,height,mass,gender,homeworld\\n')\nfor line in fr:\n tmp = json.loads(line.strip('\\n'))\n if tmp['height'] == 'unknown':\n tmp['height'] = '-1'\n if tmp['mass'] == 'unknown':\n tmp['mass'] = '-1'\n if tmp['gender'] == 'none':\n tmp['gender'] = 'n/a'\n fw.write(tmp['name'] + ',' + tmp['height'] + ',' + tmp['mass'] + ',' + tmp['gender'].strip() + ',' + tmp['homeworld'] + '\\n')\n\nfr.close()\nfw.close()\n",
"_____no_output_____"
],
[
"stat_characters = pd.read_csv('../csv/stat_characters.csv')\nstat_characters.head()",
"_____no_output_____"
],
[
"# Visualization of characters\n\nfig, ax = plt.subplots(figsize=(12, 6))\nsns.scatterplot(x='mass', y ='height', hue='gender', data=stat_characters)\nax.set_title('Visualization of characters', fontsize=16)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 3. Build relationship data files\n\nNext we use Python to create three JSON files that drive the interactive graph, which is built with **HTML, CSS, JavaScript, and D3**. A sketch of one possible way to assemble such a file is shown in the next cell.\n\nYou can open `index.html` in this directory; the website is based on it.\n\n### 4. Visualization\n\nThe core library is **D3 (Data-Driven Documents)**, one of the most popular JavaScript visualization libraries. Its central idea is to generate page elements from data: when the data changes, the appearance and attributes of the corresponding elements are updated to match.\n\nIn the HTML we prepare an `svg` and a `g` element as drawing containers. In the JavaScript we `select()` the container, `selectAll()` the SVG elements to be generated, bind data to them with `data()`, call `enter().append()` and `exit().remove()` depending on how the data and the existing elements correspond, and control element appearance with `attr()`, updating it dynamically in response to user interaction.\n\nTo draw the graph, circle and text elements are generated from the node data and line elements from the links; circle and text size and color are driven by the node attributes, and hover and drag events trigger changes to the elements' appearance and visibility. The timeline is implemented in the same way, except that rect elements are used.\n",
"_____no_output_____"
],
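[
"A minimal sketch of how one of these relationship files could be assembled from `films.txt` and `characters.txt`. The output name `graph.json`, the `nodes`/`links` layout, and the `group` field are illustrative assumptions here, not necessarily the schema that `index.html` expects.\n\n```python\nimport json\n\n# Load the raw records fetched earlier (one JSON object per line).\nwith open('../csv/films.txt') as fr:\n    films = [json.loads(line) for line in fr]\nwith open('../csv/characters.txt') as fr:\n    characters = [json.loads(line) for line in fr]\n\n# Map each character URL to its name so links can use readable ids.\nname_by_url = {c['url']: c['name'] for c in characters}\n\n# One node per film and per character, one link per film-character pair.\nnodes = [{'id': f['title'], 'group': 'film'} for f in films]\nnodes += [{'id': name, 'group': 'character'} for name in name_by_url.values()]\nlinks = [{'source': f['title'], 'target': name_by_url[url]}\n         for f in films\n         for url in f['characters']\n         if url in name_by_url]\n\nwith open('../csv/graph.json', 'w') as fw:\n    json.dump({'nodes': nodes, 'links': links}, fw, indent=2)\n```",
"_____no_output_____"
]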
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
e7b8d30d39f728f577da3cb6dd522bf3f84344b7 | 949 | ipynb | Jupyter Notebook | tfkt_vis.ipynb | kghite/tfkt | c731dbe8ea62d9a0f289443b153bd179018a72e8 | [
"MIT"
] | null | null | null | tfkt_vis.ipynb | kghite/tfkt | c731dbe8ea62d9a0f289443b153bd179018a72e8 | [
"MIT"
] | null | null | null | tfkt_vis.ipynb | kghite/tfkt | c731dbe8ea62d9a0f289443b153bd179018a72e8 | [
"MIT"
] | null | null | null | 22.595238 | 216 | 0.496312 | [
[
[
"<a href=\"https://colab.research.google.com/github/kghite/tfkt/blob/main/tfkt_vis.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |