feature                                      dtype           min      max
hexsha                                       stringlengths   40       40
size                                         int64           6        14.9M
ext                                          stringclasses   1 value
lang                                         stringclasses   1 value
max_stars_repo_path                          stringlengths   6        260
max_stars_repo_name                          stringlengths   6        119
max_stars_repo_head_hexsha                   stringlengths   40       41
max_stars_repo_licenses                      sequence
max_stars_count                              int64           1        191k
max_stars_repo_stars_event_min_datetime      stringlengths   24       24
max_stars_repo_stars_event_max_datetime      stringlengths   24       24
max_issues_repo_path                         stringlengths   6        260
max_issues_repo_name                         stringlengths   6        119
max_issues_repo_head_hexsha                  stringlengths   40       41
max_issues_repo_licenses                     sequence
max_issues_count                             int64           1        67k
max_issues_repo_issues_event_min_datetime    stringlengths   24       24
max_issues_repo_issues_event_max_datetime    stringlengths   24       24
max_forks_repo_path                          stringlengths   6        260
max_forks_repo_name                          stringlengths   6        119
max_forks_repo_head_hexsha                   stringlengths   40       41
max_forks_repo_licenses                      sequence
max_forks_count                              int64           1        105k
max_forks_repo_forks_event_min_datetime      stringlengths   24       24
max_forks_repo_forks_event_max_datetime      stringlengths   24       24
avg_line_length                              float64         2        1.04M
max_line_length                              int64           2        11.2M
alphanum_fraction                            float64         0        1
cells                                        sequence
cell_types                                   sequence
cell_type_groups                             sequence
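The records below are easier to query once loaded into a dataframe. A minimal pandas sketch (not part of the original dump; the file name notebooks.parquet is hypothetical, and only the column names come from the schema above):

import pandas as pd

# Hypothetical file name; this dump does not say how the records are stored.
df = pd.read_parquet("notebooks.parquet")

# Filter with columns from the schema: small notebooks from starred repositories.
popular = df[(df["size"] < 100_000) & (df["max_stars_count"] >= 1)]
print(popular[["max_stars_repo_name", "max_stars_repo_path", "size"]].head())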
e7d967c2855f139b4c46d6023a7ddeb30691dc0f
238,902
ipynb
Jupyter Notebook
2018_06_23_Stats_Scikit-Learn_In_Class_Work.ipynb
jaykim-asset/datascience_review
c55782f5d4226e179088346da399e299433c6ca6
[ "MIT" ]
4
2018-05-30T10:39:47.000Z
2018-11-10T15:39:53.000Z
2018_06_23_Stats_Scikit-Learn_In_Class_Work.ipynb
jaykim-asset/datascience_review
c55782f5d4226e179088346da399e299433c6ca6
[ "MIT" ]
null
null
null
2018_06_23_Stats_Scikit-Learn_In_Class_Work.ipynb
jaykim-asset/datascience_review
c55782f5d4226e179088346da399e299433c6ca6
[ "MIT" ]
null
null
null
286.453237
183,600
0.909
[ [ [ "from sklearn.datasets import load_boston\nboston = load_boston()\nprint(boston.DESCR)", "Boston House Prices dataset\n===========================\n\nNotes\n------\nData Set Characteristics: \n\n :Number of Instances: 506 \n\n :Number of Attributes: 13 numeric/categorical predictive\n \n :Median Value (attribute 14) is usually the target\n\n :Attribute Information (in order):\n - CRIM per capita crime rate by town\n - ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n - INDUS proportion of non-retail business acres per town\n - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n - NOX nitric oxides concentration (parts per 10 million)\n - RM average number of rooms per dwelling\n - AGE proportion of owner-occupied units built prior to 1940\n - DIS weighted distances to five Boston employment centres\n - RAD index of accessibility to radial highways\n - TAX full-value property-tax rate per $10,000\n - PTRATIO pupil-teacher ratio by town\n - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town\n - LSTAT % lower status of the population\n - MEDV Median value of owner-occupied homes in $1000's\n\n :Missing Attribute Values: None\n\n :Creator: Harrison, D. and Rubinfeld, D.L.\n\nThis is a copy of UCI ML housing dataset.\nhttp://archive.ics.uci.edu/ml/datasets/Housing\n\n\nThis dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.\n\nThe Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic\nprices and the demand for clean air', J. Environ. Economics & Management,\nvol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics\n...', Wiley, 1980. N.B. Various transformations are used in the table on\npages 244-261 of the latter.\n\nThe Boston house-price data has been used in many machine learning papers that address regression\nproblems. \n \n**References**\n\n - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.\n - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.\n - many more! (see http://archive.ics.uci.edu/ml/datasets/Housing)\n\n" ], [ "dfX = pd.DataFrame(boston.data, columns = boston.feature_names)\ndfy = pd.DataFrame(boston.target, columns= ['MEDV'])\ndf = pd.concat([dfX, dfy], axis = 1)\ndf.tail()", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "%matplotlib inline\ncols = ['LSTAT', \"NOX\", 'RM', 'MEDV']\nsns.pairplot(df[cols])\nplt.show()", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n" ], [ "def make_regression2(n_sample, bias, noise, random_state):\n from sklearn.datasets import make_regression\n np.random.seed(0)\n X = np.random.rand(n_sample) * 100\n W = np.random.rand(1) * 1\n t = np.random.randn(n_sample) * noise \n Y = X * W + bias + t \n return X, W, Y", "_____no_output_____" ], [ "a, b, c = make_regression2(10, 10, 10, 1)\nplt.scatter(a, c)\nplt.show()", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. 
Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n" ], [ "from sklearn.datasets import make_regression", "_____no_output_____" ], [ "X0, y, coef = make_regression(n_samples = 100, n_features = 2, \n bias = 100, noise = 10, coef = True, random_state= 1)", "_____no_output_____" ], [ "# 원례 데이터\nX0[:5]", "_____no_output_____" ], [ "# 바이어스 오그맨테이션\nX = np.hstack([np.ones((X0.shape[0], 1)), X0])\nX[:5]", "_____no_output_____" ], [ "## Stats Models 의 add_constant 사용\nimport statsmodels.api as sm\n\nX = sm.add_constant(X0)\nX[:5]", "_____no_output_____" ], [ "from sklearn.datasets import make_regression\n\nbias = 100\nX0, y, coef = make_regression(n_samples = 100, n_features=1, bias = bias, noise = 10, coef=True)\nX = sm.add_constant(X0)\ny = y.reshape(len(y), 1)", "_____no_output_____" ], [ "coef", "_____no_output_____" ], [ "from sklearn.datasets import make_regression\n\nbias = 100\nX0, y, coef = make_regression(n_samples=100, n_features=1, bias=bias, noise=10, coef=True, random_state=1)\nX = sm.add_constant(X0)\ny = y.reshape(len(y), 1)", "_____no_output_____" ], [ "w = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)", "_____no_output_____" ], [ "w", "_____no_output_____" ], [ "w = np.linalg.lstsq(X, y)[0]\nw", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.\nTo use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "x_new = np.linspace(np.min(X0), np.max(X0), 100)\nX_new = sm.add_constant(x_new)\ny_new = np.dot(X_new , w)\n\nplt.scatter(X0, y, label = 'data')\nplt.plot(x_new, y_new, 'r-', label='regression')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.title('Example of Linear Regression Anlysis')\nplt.legend()\nplt.show()", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n" ], [ "from sklearn.datasets import make_regression\n\nmodel = LinearRegression(fit_intercept=True)\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d97be12f891b13d090e35badcdb01182143853
29,965
ipynb
Jupyter Notebook
temperature/temperature.ipynb
Gayushka/data-prework-labs
54c7a166daaa0ab37730805400facfd70f21391e
[ "Unlicense" ]
null
null
null
temperature/temperature.ipynb
Gayushka/data-prework-labs
54c7a166daaa0ab37730805400facfd70f21391e
[ "Unlicense" ]
null
null
null
temperature/temperature.ipynb
Gayushka/data-prework-labs
54c7a166daaa0ab37730805400facfd70f21391e
[ "Unlicense" ]
1
2020-09-19T19:14:19.000Z
2020-09-19T19:14:19.000Z
78.442408
19,216
0.817487
[ [ [ "# Processor temperature\n\nWe have a temperature sensor in the processor of our company's server. We want to analyze the data provided to determinate whether we should change the cooling system for a better one. It is expensive and as a data analyst we cannot make decisions without a basis.\n\nWe provide the temperatures measured throughout the 24 hours of a day in a list-type data structure composed of 24 integers:\n```\ntemperatures_C = [33,66,65,0,59,60,62,64,70,76,80,69,80,83,68,79,61,53,50,49,53,48,45,39]\n```\n\n## Goals\n\n1. Treatment of lists\n2. Use of loop or list comprenhention\n3. Calculation of the mean, minimum and maximum.\n4. Filtering of lists.\n5. Interpolate an outlier.\n6. Logical operators.\n7. Print", "_____no_output_____" ], [ "## Temperature graph\nTo facilitate understanding, the temperature graph is shown below. You do not have to do anything in this section. The test starts in **Problem**.", "_____no_output_____" ] ], [ [ "# import\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# axis x, axis y\ny = [33,66,65,0,59,60,62,64,70,76,80,81,80,83,90,79,61,53,50,49,53,48,45,39]\nx = list(range(len(y)))\n\n# plot\nplt.plot(x, y)\nplt.axhline(y=70, linewidth=1, color='r')\nplt.xlabel('hours')\nplt.ylabel('Temperature ºC')\nplt.title('Temperatures of our server throughout the day')", "_____no_output_____" ] ], [ [ "## Problem\n\nIf the sensor detects more than 4 hours with temperatures greater than or equal to 70ºC or any temperature above 80ºC or the average exceeds 65ºC throughout the day, we must give the order to change the cooling system to avoid damaging the processor.\n\nWe will guide you step by step so you can make the decision by calculating some intermediate steps:\n\n1. Minimum temperature\n2. Maximum temperature\n3. Temperatures equal to or greater than 70ºC\n4. Average temperatures throughout the day.\n5. If there was a sensor failure at 03:00 and we did not capture the data, how would you estimate the value that we lack? Correct that value in the list of temperatures.\n6. Bonus: Our maintenance staff is from the United States and does not understand the international metric system. Pass temperatures to Degrees Fahrenheit.\n\nFormula: F = 1.8 * C + 32\n\nweb: https://en.wikipedia.org/wiki/Conversion_of_units_of_temperature\n", "_____no_output_____" ] ], [ [ "# assign a variable to the list of temperatures\n\n# 1. Calculate the minimum of the list and print the value using print()\nprint('minimum value: ',min(y))\n\n# 2. Calculate the maximum of the list and print the value using print()\nprint('maximum value: ', max(y))\n\n# 3. Items in the list that are greater than 70ºC and print the result\nprint('\\nTemps greater than 70C:')\nfor temp in y: \n if temp > 70: \n print(temp)\n\n# 4. 
Calculate the mean temperature throughout the day and print the result\nimport statistics as s\nprint('\\nMean temperature: ', s.mean(y))\n\n# 5.1 Solve the fault in the sensor by estimating a value\n\nprint('\\nEstimate #1: ', s.mean(y)) #mean of entire dataset\nprint('Estimate #2: ', s.mean(y[1:3] + y[4:6])) #mean of local/subdataset\n\n# 5.2 Update of the estimated value at 03:00 on the list\ne_two = s.mean(y[1:3] + y[4:6])\ny[3] = e_two\nprint('\\nUpdated list: ', y[:6])\n\n# Bonus: convert the list of ºC to ºFarenheit\nyF = []\nfor c_temp in y:\n yF.append(round(1.8*c_temp + 32, 1))\nprint('\\nFarenheight temps: ',yF)\n \n \n", "minimum value: 33\nmaximum value: 90\n\nTemps greater than 70C:\n76\n80\n81\n80\n83\n90\n79\n\nMean temperature: 62.854166666666664\n\nEstimate #1: 62.854166666666664\nEstimate #2: 62.5\n\nUpdated list: [33, 66, 65, 62.5, 59, 60]\n\nFarenheight temps: [91.4, 150.8, 149.0, 144.5, 138.2, 140.0, 143.6, 147.2, 158.0, 168.8, 176.0, 177.8, 176.0, 181.4, 194.0, 174.2, 141.8, 127.4, 122.0, 120.2, 127.4, 118.4, 113.0, 102.2]\n" ] ], [ [ "## Take the decision\nRemember that if the sensor detects more than 4 hours with temperatures greater than or equal to 70ºC or any temperature higher than 80ºC or the average was higher than 65ºC throughout the day, we must give the order to change the cooling system to avoid the danger of damaging the equipment:\n* more than 4 hours with temperatures greater than or equal to 70ºC\n* some temperature higher than 80ºC\n* average was higher than 65ºC throughout the day\nIf any of these three is met, the cooling system must be changed.\n", "_____no_output_____" ] ], [ [ "# Print True or False depending on whether you would change the cooling system or not\nhours_over = 0\n\nfor temp in y:\n if temp >= 70: \n hours_over += 1\n if hours_over > 4: \n print('Change Cooling System: ', True)\n break\n \nfor temp in y: \n if temp > 80: \n print('Change Cooling System: ', True)\n break\n\nif s.mean(y) > 65: \n print('Change Cooling System: ', True)\n \n ", "Change Cooling System: True\nChange Cooling System: True\n" ] ], [ [ "## Future improvements\n1. We want the hours (not the temperatures) whose temperature exceeds 70ºC\n2. Condition that those hours are more than 4 consecutive and consecutive, not simply the sum of the whole set. Is this condition met?\n3. Average of each of the lists (ºC and ºF). How they relate?\n4. Standard deviation of each of the lists. How they relate?\n", "_____no_output_____" ] ], [ [ "# 1. We want the hours (not the temperatures) whose temperature exceeds 70ºC\nhours = []\n\nfor i in range(len(y)): \n if y[i] > 70: \n hours.append(i)\nhours\n", "_____no_output_____" ], [ "# 2. Condition that those hours are more than 4 consecutive and consecutive, not simply the sum of the whole set. \n#Is this condition met?\nprevious = 0\nconsecutives = 0\n\nfor hour in hours: \n if hour == (previous + 1):\n consecutives += 1 \n \n previous = hour\n \n if consecutives > 4: \n print('Consecutive condition is met: ', True)\n break\n", "Consecutive condition is met: True\n" ], [ "# 3. Average of each of the lists (ºC and ºF). How they relate?\nprint(s.mean(y))\nprint(s.mean(yF))\n\n(62.85 * 1.8) + 32 #both means are == to each other \n\n#mean C and mean F is not exactly == in my example because i used round() function on list of F\n", "62.854166666666664\n145.1375\n" ], [ "# 4. Standard deviation of each of the lists. How they relate?\nprint(s.pstdev(y))\nprint(s.pstdev(yF))\n\n", "14.632639861130853\n26.338751750035534\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7d97dcfd7966c535c7988461acbb7f7c8df9909
56,825
ipynb
Jupyter Notebook
tutorial/westeros/westeros_emissions_bounds_LEDs.ipynb
luciecastella/Tuto_Westeros
bd994b6b778c61e7811fe7474db4c6eebec73d6d
[ "MIT" ]
null
null
null
tutorial/westeros/westeros_emissions_bounds_LEDs.ipynb
luciecastella/Tuto_Westeros
bd994b6b778c61e7811fe7474db4c6eebec73d6d
[ "MIT" ]
null
null
null
tutorial/westeros/westeros_emissions_bounds_LEDs.ipynb
luciecastella/Tuto_Westeros
bd994b6b778c61e7811fe7474db4c6eebec73d6d
[ "MIT" ]
null
null
null
126.841518
12,896
0.886758
[ [ [ "# Analysis regarding the emissions impact\n\n**(these are the informations on the model after running this notebook right after \"westeros_LEDS_baseline.ipynb\". if you're running it after \"westeros_LEDS_diffusion_baseline.ipynb\", please jump two cells further)**\n\nHere we add the emission bounds and I added a carbon footprint for the bulb and the LED.\n\n\nWe can see that if we take the carbon footprint calculated with the number on this website: [https://www.carbonfootprint.com/energyconsumption.html](https://www.carbonfootprint.com/energyconsumption.html) :\n\nbulb: 0.63 tCO2/KWa / led: 0.61 tCO2/kWa the model chooses only normal bulbs: \n<img src='_static/011.png' width='400'>\n\n\n\n\n\n___________________________________________________________________________________________________________________\n", "_____no_output_____" ], [ "\nIt is only when the carbon footprint goes down to 0.055 tCO2/kWa that the model starts using them. The fact that we are using LEDs in our everyday life in the real world is that we have to use less watts per lamp to have the same light if we use LEDs. This reality is not shown in this model since we are working with CO2 per kWa and not per \"light\" which would make more sense. This explains why the carbon footprint has to be lower down. \n\nHere is what we get with 0.055 tCO2/kWa: \n\n<img src='_static/0055.png' width='400'>\n\nAdding emissions for LEDs **and** for bulbs leads to a really high use of wind power plant at the end to stay below the emissions bound (see graphs at the end). \n\n<br/><br/><br/>\n__________________________________________________________________________________________________________________\n__________________________________________________________________________________________________________________", "_____no_output_____" ], [ "**(Here are the informations after linking this notebook to \"westeros_LEDS_diffusion_baseline.ipynb\")**\n\nAfter setting a diffusion rate for LEDs, we can see that LEDs are not beeing used, even with a small carbon footprint. This is because in this model, it has to be diffused rapidly in order to be efficient enough to replace the normal light bulb. 
\nHere is the result with only 0.0001 tCO2/kWa for LEDs:\n\n<img src='_static/00001.png' width='400'>", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport ixmp\nimport message_ix\n\nfrom message_ix.utils import make_df\n\n%matplotlib inline", "_____no_output_____" ], [ "mp = ixmp.Platform()", "_____no_output_____" ], [ "model = 'Westeros with LEDs'\n\nbase = message_ix.Scenario(mp, model=model, scenario='baseline')\nscen = base.clone(model, 'emission_bound','introducing an upper bound on emissions',\n keep_solution=False)\nscen.check_out()", "_____no_output_____" ], [ "year_df = scen.vintage_and_active_years()\nvintage_years, act_years = year_df['year_vtg'], year_df['year_act']\nmodel_horizon = scen.set('year')\ncountry = 'Westeros'", "_____no_output_____" ] ], [ [ "## Introducing Emissions", "_____no_output_____" ] ], [ [ "# first we introduce the emission of CO2 and the emission category GHG\nscen.add_set('emission', 'CO2')\nscen.add_cat('emission', 'GHG', 'CO2')\n\n# we now add CO2 emissions to the coal powerplant\nbase_emission_factor = {\n 'node_loc': country,\n 'year_vtg': vintage_years,\n 'year_act': act_years,\n 'mode': 'standard',\n 'unit': 'tCO2/kWa',\n}\n\n# adding new units to the model library (needed only once)\nmp.add_unit('tCO2/kWa')\nmp.add_unit('MtCO2')\n\nemission_factor = make_df(base_emission_factor, technology= 'coal_ppl', emission= 'CO2', value = 7.4)\nscen.add_par('emission_factor', emission_factor)", "INFO:root:unit `tCO2/kWa` is already defined in the platform instance\nINFO:root:unit `MtCO2` is already defined in the platform instance\n" ] ], [ [ "<span style=\"color: orange;\">Now we add emission factor for bulbs and LEDs as well </span>\n\n[https://www.carbonfootprint.com/energyconsumption.html](https://www.carbonfootprint.com/energyconsumption.html)", "_____no_output_____" ] ], [ [ "emission_factor = make_df(base_emission_factor, technology= 'bulb', emission= 'CO2', value = 0.63)\nscen.add_par('emission_factor', emission_factor)\n\nemission_factor = make_df(base_emission_factor, technology= 'led', emission= 'CO2', value = 0.055)\nscen.add_par('emission_factor', emission_factor)", "_____no_output_____" ] ], [ [ "## Define a Bound on Emissions\n\nThe `type_year: cumulative` assigns an upper bound on the *weighted average of emissions* over the entire time horizon.", "_____no_output_____" ] ], [ [ "scen.add_par('bound_emission', [country, 'GHG', 'all', 'cumulative'],\n value=500., unit='MtCO2')", "_____no_output_____" ] ], [ [ "## Time to Solve the Model", "_____no_output_____" ] ], [ [ "scen.commit(comment='introducing emissions and setting an upper bound')\nscen.set_as_default()", "_____no_output_____" ], [ "scen.solve()", "_____no_output_____" ] ], [ [ "<span style=\"color: orange;\">\n To compare: \n</span>", "_____no_output_____" ], [ "<span style=\"color: orange;\"> without emissions bounds: 238'193 </span>", "_____no_output_____" ], [ "<span style=\"color: orange;\">\n With emissions bounds but without any CO2 impact for the light: 336'222\n</span>", "_____no_output_____" ] ], [ [ "scen.var('OBJ')['lvl']", "_____no_output_____" ] ], [ [ "## Plotting Results", "_____no_output_____" ] ], [ [ "from tools import Plots\np = Plots(scen, country, firstyear=700)", "_____no_output_____" ] ], [ [ "### Activity\n\nHow much energy is generated in each time period from the different potential sources?", "_____no_output_____" ] ], [ [ "p.plot_activity(baseyear=True, subset=['coal_ppl', 'wind_ppl'])", "_____no_output_____" ] ], [ [ "<span style=\"color: 
orange;\">Here we can se the part of LEDs per year </span>", "_____no_output_____" ] ], [ [ "p.plot_activity(baseyear=True, subset=['bulb', 'led'])", "_____no_output_____" ] ], [ [ "### Capacity\n\nHow much capacity of each plant is installed in each period?", "_____no_output_____" ] ], [ [ "p.plot_capacity(baseyear=True, subset=['coal_ppl', 'wind_ppl'])", "_____no_output_____" ] ], [ [ "### Electricity Price\n\nAnd how much does the electricity cost? These prices are in fact **shadow prices** taken from the **dual variables** of the model solution. They reflect the marginal cost of electricity generation (i.e., the additional cost of the system for supplying one more unit of electricity), which is in fact the marginal cost of the most expensive generator. \n\nNote the price drop when the most expensive technology is no longer in the system.", "_____no_output_____" ] ], [ [ "p.plot_prices(subset=['light'], baseyear=True)", "INFO:numexpr.utils:NumExpr defaulting to 8 threads.\n" ] ], [ [ "## Close the connection to the database", "_____no_output_____" ] ], [ [ "mp.close_db()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d98efc13722cf681d828d9ef2ced380d2cdcdc
777,623
ipynb
Jupyter Notebook
VariationalAutoEncoder.ipynb
z-tufekci/DeepLearning
d4f8f91051c6fa4aa3ca89b47f6b48763e2a6f40
[ "Apache-2.0" ]
7
2021-12-30T08:05:50.000Z
2022-03-31T02:33:54.000Z
VariationalAutoEncoder.ipynb
z-tufekci/DeepLearning
d4f8f91051c6fa4aa3ca89b47f6b48763e2a6f40
[ "Apache-2.0" ]
null
null
null
VariationalAutoEncoder.ipynb
z-tufekci/DeepLearning
d4f8f91051c6fa4aa3ca89b47f6b48763e2a6f40
[ "Apache-2.0" ]
1
2022-03-10T08:47:11.000Z
2022-03-10T08:47:11.000Z
1,309.12963
456,638
0.950889
[ [ [ "<a href=\"https://colab.research.google.com/github/z-tufekci/DeepLearning/blob/main/VariationalAutoEncoder.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "We will use keras and tensorflow to implement VAE ⏭", "_____no_output_____" ] ], [ [ "import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom keras import backend as K", "_____no_output_____" ] ], [ [ "**REPARAMETERIZATION TRICK:** \nThis sampling uses mean and logarithmic variance and sample z by using random value from normal distribution. ⚓ Reparameterization sample was first introduced [Kingma and Welling, 2013](https://arxiv.org/pdf/1312.6114.pdf) The process also defined by [Gunderson](https://gregorygundersen.com/blog/2018/04/29/reparameterization/). ♋", "_____no_output_____" ] ], [ [ "class Sampling(layers.Layer):\n \"\"\"Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.\"\"\"\n def call(self, inputs):\n z_mean, z_log_var = inputs\n batch = tf.shape(z_mean)[0]\n dim = tf.shape(z_mean)[1]\n epsilon = tf.keras.backend.random_normal(shape=(batch, dim))\n return z_mean + tf.exp(0.5 * z_log_var) * epsilon", "_____no_output_____" ] ], [ [ " VAE Encoder ▶ ▶ ▶ \n\n ☕ Encoder create z_mean and z_variance, then sample z from this z_mean and z_variance using epsilon. ", "_____no_output_____" ] ], [ [ "latent_dim = 2 # because of z_mean and z_log_variance\nencoder_inputs = keras.Input(shape=(28, 28, 1))\nx = layers.Conv2D(32, 3, activation=\"relu\", strides=2, padding=\"same\")(encoder_inputs)\nx = layers.Conv2D(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\nconv_shape = K.int_shape(x) #Shape of conv to be provided to decoder\nprint(conv_shape)\nx = layers.Flatten()(x)\nx = layers.Dense(32, activation=\"relu\")(x)\nz_mean = layers.Dense(latent_dim, name=\"z_mean\")(x)\nz_log_var = layers.Dense(latent_dim, name=\"z_log_var\")(x)\nz = Sampling()([z_mean, z_log_var])\nencoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name=\"encoder\")\nencoder.summary()", "(None, 7, 7, 64)\nModel: \"encoder\"\n__________________________________________________________________________________________________\n Layer (type) Output Shape Param # Connected to \n==================================================================================================\n input_6 (InputLayer) [(None, 28, 28, 1)] 0 [] \n \n conv2d_6 (Conv2D) (None, 14, 14, 32) 320 ['input_6[0][0]'] \n \n conv2d_7 (Conv2D) (None, 7, 7, 64) 18496 ['conv2d_6[0][0]'] \n \n flatten_2 (Flatten) (None, 3136) 0 ['conv2d_7[0][0]'] \n \n dense_4 (Dense) (None, 32) 100384 ['flatten_2[0][0]'] \n \n z_mean (Dense) (None, 2) 66 ['dense_4[0][0]'] \n \n z_log_var (Dense) (None, 2) 66 ['dense_4[0][0]'] \n \n sampling_2 (Sampling) (None, 2) 0 ['z_mean[0][0]', \n 'z_log_var[0][0]'] \n \n==================================================================================================\nTotal params: 119,332\nTrainable params: 119,332\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ] ], [ [ "VAE Decoder ◀ ◀ ◀\n\n☁ The tied architecture (reverse architecture from encoder to decoder) is preferred in AE. 
There is an [explanation](https://https://stats.stackexchange.com/questions/419684/why-is-the-autoencoder-decoder-usually-the-reverse-architecture-as-the-encoder) about it.\n\n---\n\n", "_____no_output_____" ] ], [ [ "latent_inputs = keras.Input(shape=(latent_dim,))\nx = layers.Dense(conv_shape[1] * conv_shape[2] * conv_shape[3], activation=\"relu\")(latent_inputs) # 7x7x64 shape\nx = layers.Reshape((conv_shape[1],conv_shape[2], conv_shape[3]))(x)\nx = layers.Conv2DTranspose(64, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\nx = layers.Conv2DTranspose(32, 3, activation=\"relu\", strides=2, padding=\"same\")(x)\ndecoder_outputs = layers.Conv2DTranspose(1, 3, activation=\"sigmoid\", padding=\"same\")(x)\ndecoder = keras.Model(latent_inputs, decoder_outputs, name=\"decoder\")\ndecoder.summary()", "Model: \"decoder\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_7 (InputLayer) [(None, 2)] 0 \n \n dense_5 (Dense) (None, 3136) 9408 \n \n reshape_2 (Reshape) (None, 7, 7, 64) 0 \n \n conv2d_transpose_6 (Conv2DT (None, 14, 14, 64) 36928 \n ranspose) \n \n conv2d_transpose_7 (Conv2DT (None, 28, 28, 32) 18464 \n ranspose) \n \n conv2d_transpose_8 (Conv2DT (None, 28, 28, 1) 289 \n ranspose) \n \n=================================================================\nTotal params: 65,089\nTrainable params: 65,089\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "VAE MODEL ✅", "_____no_output_____" ] ], [ [ "class VAE(keras.Model):\n def __init__(self, encoder, decoder, **kwargs):\n super(VAE, self).__init__(**kwargs)\n self.encoder = encoder\n self.decoder = decoder\n self.total_loss_tracker = keras.metrics.Mean(name=\"total_loss\")\n self.reconstruction_loss_tracker = keras.metrics.Mean(\n name=\"reconstruction_loss\"\n )\n self.kl_loss_tracker = keras.metrics.Mean(name=\"kl_loss\")\n\n @property\n def metrics(self):\n return [\n self.total_loss_tracker,\n self.reconstruction_loss_tracker,\n self.kl_loss_tracker,\n ]\n\n def train_step(self, data):\n with tf.GradientTape() as tape:\n z_mean, z_log_var, z = self.encoder(data)\n reconstruction = self.decoder(z)\n reconstruction_loss = tf.reduce_mean(\n tf.reduce_sum(\n keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)\n )\n )\n kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))\n kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))\n total_loss = reconstruction_loss + kl_loss\n grads = tape.gradient(total_loss, self.trainable_weights)\n self.optimizer.apply_gradients(zip(grads, self.trainable_weights))\n self.total_loss_tracker.update_state(total_loss)\n self.reconstruction_loss_tracker.update_state(reconstruction_loss)\n self.kl_loss_tracker.update_state(kl_loss)\n return {\n \"loss\": self.total_loss_tracker.result(),\n \"reconstruction_loss\": self.reconstruction_loss_tracker.result(),\n \"kl_loss\": self.kl_loss_tracker.result(),\n }\n", "_____no_output_____" ], [ "from google.colab import drive\ndrive.mount('/content/gdrive')", "Mounted at /content/gdrive\n" ] ], [ [ "⛹ If you want to run it on your desktop, you can download [data](https://https://www.kaggle.com/nikbearbrown/tmnist-alphabet-94-characters) and read from same directory with this ipynb file ", "_____no_output_____" ] ], [ [ "import pandas as pd\ndf = pd.read_csv('gdrive/My Drive/DeepLearning/94_character_TMNIST.csv')\n#df = 
pd.read_csv('94_character_TMNIST.csv') ", "_____no_output_____" ], [ "print(df.shape)\nX = df.drop(columns={'names','labels'})", "(274093, 786)\n" ], [ "X_images = X.values.reshape(-1,28,28)\nX_images = np.expand_dims(X_images, -1).astype(\"float32\") / 255", "_____no_output_____" ] ], [ [ "⚡ I tried different batch size(32,64,128,256) to train VAE model, 128 gives better result than others. ", "_____no_output_____" ] ], [ [ "vae = VAE(encoder, decoder)\nvae.compile(optimizer=keras.optimizers.Adam())\nvae.fit(X_images, epochs=10, batch_size=128)", "_____no_output_____" ] ], [ [ "⛳ This plot latent space plot image between **[scale_x_left , scale_x_right]** and **[scale_y_bottom, scale_y_top]**", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\ndef plot_latent_space(vae, n=8, figsize=12):\n # display a n*n 2D manifold of digits\n digit_size = 28\n scale_x_left = 1 # If we change the range, t generate different image. \n scale_x_right = 4\n scale_y_bottom = 0\n scale_y_top = 1\n figure = np.zeros((digit_size * n, digit_size * n))\n # If we want to see different x and y range we can change values in grid_x and gird_y. I trid x= [-3,-2] and y = [-3,-1] values and m labeled imaged are generated.\n grid_x = np.linspace(scale_x_left, scale_x_right, n) # -3, -2\n grid_y = np.linspace(scale_y_bottom, scale_y_top, n)[::-1] # -3, -1 \n\n for i, yi in enumerate(grid_y):\n for j, xi in enumerate(grid_x):\n z_sample = np.array([[xi, yi]])\n x_decoded = vae.decoder.predict(z_sample)\n digit = x_decoded[0].reshape(digit_size, digit_size)\n figure[\n i * digit_size : (i + 1) * digit_size,\n j * digit_size : (j + 1) * digit_size,\n ] = digit\n\n plt.figure(figsize=(figsize, figsize))\n start_range = digit_size // 2\n end_range = n * digit_size + start_range\n pixel_range = np.arange(start_range, end_range, digit_size)\n sample_range_x = np.round(grid_x, 1)\n sample_range_y = np.round(grid_y, 1)\n plt.xticks(pixel_range, sample_range_x)\n plt.yticks(pixel_range, sample_range_y)\n plt.xlabel(\"z[0]\")\n plt.ylabel(\"z[1]\")\n plt.imshow(figure, cmap=\"Greys_r\")\n plt.show()\n\nplot_latent_space(vae)", "_____no_output_____" ] ], [ [ "♑ When we plot all the Training data with labels, we can see ***z_mean*** values of the data. ❎ If we sample with this ***z_mean*** value, we can acquire similar image from this latent space. ⭕ Because two points are close to each other in latent space means they are looking similar(variant of this label). ", "_____no_output_____" ] ], [ [ "def plot_label_clusters(vae, data, labels):\n # display a 2D plot of the digit classes in the latent space\n z_mean, _, _ = vae.encoder.predict(data)\n plt.figure(figsize=(12, 12))\n plt.scatter(z_mean[:, 0], z_mean[:, 1], c=labels)\n plt.colorbar()\n plt.xlabel(\"z[0]\")\n plt.ylabel(\"z[1]\")\n plt.show()\n\ny = df[['labels']]\nfrom sklearn import preprocessing\nle = preprocessing.LabelEncoder()\ny_label = le.fit_transform(y)\nplot_label_clusters(vae, X_images, y_label)", "/usr/local/lib/python3.7/dist-packages/sklearn/preprocessing/_label.py:115: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n" ] ], [ [ " ⛪ Visualize one image", "_____no_output_____" ] ], [ [ "\n#Single decoded image with random input latent vector (of size 1x2)\n#Latent space range is about -5 to 5 so pick random values within this range\nsample_vector = np.array([[3,0.5]])\ndecoded_example = decoder.predict(sample_vector)\ndecoded_example_reshaped = decoded_example.reshape(28, 28)\nplt.imshow(decoded_example_reshaped)", "_____no_output_____" ] ], [ [ "# REFERENCES\n\n1. [Variational AutoEncoder](https://https://keras.io/examples/generative/vae/)\n2. [Variational autoencoders using keras on MNIST data](https://https://www.youtube.com/watch?v=8wrLjnQ7EWQ) and GitHub [link](https://https://github.com/bnsreenu/python_for_microscopists/blob/master/178_179_variational_autoencoders_mnist.py) \n\n\n\n\n\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7d9bdbf58b0e70b81242399a55cbc59670264ec
12,367
ipynb
Jupyter Notebook
Bloque 1 - Ramp-Up/03_Markdown/RESU_Markdown en Jupyter.ipynb
JuanDG5/bootcamp_thebridge_PTSep20
5116098ff3f3d15753585e3ee967a8e7ddfccf31
[ "MIT" ]
1
2020-10-16T16:13:02.000Z
2020-10-16T16:13:02.000Z
Bloque 1 - Ramp-Up/03_Markdown/RESU_Markdown en Jupyter.ipynb
JuanDG5/bootcamp_thebridge_PTSep20
5116098ff3f3d15753585e3ee967a8e7ddfccf31
[ "MIT" ]
null
null
null
Bloque 1 - Ramp-Up/03_Markdown/RESU_Markdown en Jupyter.ipynb
JuanDG5/bootcamp_thebridge_PTSep20
5116098ff3f3d15753585e3ee967a8e7ddfccf31
[ "MIT" ]
3
2020-10-15T18:53:54.000Z
2020-10-16T17:25:28.000Z
41.223333
709
0.648662
[ [ [ "# Markdown\n\n### ¿Qué es?\nLenguaje de marcado que nos permite aplicar formato a nuestros textos mediante unos caracteres especiales. Muy útil cuando tenemos que documentar algo, escribir un artículo, o entregar un reporte. Este lenguaje está pensado para web, pero es muy común utilizarlo en cualquier tipo de texto, independientemente de su destino.\n\nLo bueno que tiene es que se edita en **texto plano y está integrado en muchísimas herramientas**, como Jupyter Notebook o RStudio.\n\n### Markdown vs HTML\nTenemos un viejo conocido en cuanto a programación web: HTML. Son lenguajes muy diferentes. Con HTML podemos construir un complejo **árbol de tags**, mientras que markdown se desarrolla en texto plano. Por supuesto, las finalidades también son distintas. HTML se aplica a todo tipo de webs, ya sean sencillas o complejas, mientras que markdown se suele usar para blogs o artículos. Su sencillez a la hora de desarrollar le penaliza en su versatilidad. Pero como el objetivo de este curso no es hacer páginas web, markdown cumple más que de sobra para acompañar y mejorar la comprensión de nuestro código. Además, ya verás a lo largo de este notebook que ambos lenguajes son perfectamente compatibles.\n\n### ¿Cómo funciona?\nContiene una serie de **caracteres especiales** que le dan forma a los textos. Por ejemplo, si queremos un texto en *cursiva*, simplemente lo rodearemos con asteriscos. Lo veremos en detalle en este Notebook.\n\n### ¿De qué nos va a servir?\nEn Jupyter lo normal será crear celdas con código, pero también tenemos la posibilidad de insertar celdas de markdown, donde podremos poner **imágenes, títulos, enumerar texto, listar, citar y mucho más!**", "_____no_output_____" ], [ "## 1. Primera celda\nHaz doble clik en esta celda y verás cómo cambia el texto. Significa que estás en el **modo edición** de Markdown.\n\nComo puedes observar, markdown se edita como si fuese texto plano, y en el caso concreto de los párrafos, no necesita de ningún caracter para que markdown sepa que es un párrafo. Sin embargo, fíjate que para la cabecera \"1.Primer celda\", hay dos hashtags delante que indican que es un encabezado. Veremos en el apartado 2 cómo crear cabeceras.\n\nHaz ctrl + enter para ejecuta la celda (o botón de play de arriba). Así abandonamos el modo edición y nuestro texto obtiene el formato que deseábamos.\n\n**¡Tu turno!** Crea una celda nueva en el menu de arriba y selecciona la opción Markdown", "_____no_output_____" ], [ "![imagen](../../imagenes/primer_celda.png)", "_____no_output_____" ] ], [ [ "# Esto es código de Python.\n# Va a ser muy habitual en el curso, acompañar el código de Python mediante celdas de markdown.", "_____no_output_____" ] ], [ [ "**TIP**: cuando estemos escribiendo markdown, un buen indicador de que lo estamos haciendo bien es que **la letra cambia de color o de forma**. Significa que markdown ha interpretado los simbolos que has puesto. Si estamos escribiendo en cursiva, verás que la letra cambia a cursiva si lo estas haciendo bien. Por supuesto, también podemos ejecutar y ver el resultado, pero si queremos comprobar que la sentencia que escribimos es correcta, tendrás esa opción en Jupyter.", "_____no_output_____" ], [ "## 2. Cabeceras\nYa has visto que en el apartado anterior usábamos dos hashtag para poner una cabecera. ¿Por qué dos? Cuantos más hashtags, menor es el tamaño del título.\n\n# Cabecera\n## Cabecera\n### Cabecera\n#### Cabecera\n##### Cabecera\n###### Cabecera\n\nEl tamaño mínimo lo obtenemos con 6 hashtags. 
Es decir, tenemos hasta 6 niveles de profundidad para aplicar a los apartados de nuestro notebook. Normalmente con 3 o 4 hay más que de sobra, pero también depende del tamaño que queramos darle a las cabeceras.", "_____no_output_____" ], [ "## 3. HTML\nComo te comentaba al principio, una cosa es utilizar markdown y otra HTML. No obstante, markdown nos ofrece la posibilidad de escribir código HTML, dentro de una celda markdown. Si te manejas bien con HTML y quieres insertar una porción de código de este lenguaje, markdown lo va a interpretar.\n\n<h3>Header 3</h3>\n<h4>Header 4</h4>\n<h5>Header 5</h5>", "_____no_output_____" ], [ "## 4. Negrita, cursiva\nPaara resaltar texto en negrita tenemos que rodearlo con asteriscos. En el caso en que queramos cursiva, será un único asterisco, y si deseamos combinar negrita con cursiva, son 3 asteriscos.\n\n**Texto en negrita**\n\n*Texto en cursiva*\n\n***Negrita y cursiva***\n\nCuidado con dejar espacios entre los asteriscos y el texto. Es decir, si queremos escribir en negrita, inmediatamente despues de los asteriscos tiene que ir el texto: ** No es negrita **", "_____no_output_____" ], [ "## 5. Citar\nEn ocasiones resulta útil poner una citación, o una nota, destacándola con un margen. Esto lo podemos hacer mediante el símbolo mayor que \">\"\n> Esto es una cita\n>> Podemos anidar citas en varios niveles", "_____no_output_____" ], [ "## 6. Listas\nHay dos opciones. **Listas ordenadas o sin ordenar**. Si queremos listas ordenadas, simplemente usamos números\n1. Primer elemento\n2. Segundo elemento\n\nPara listas se utiliza asteriscos, guiones o simbolos de suma\n- Primer elemento\n* Segundo elemento\n+ Tercer elemento\n - Para anidar elementos, hay que añadir 4 espacios\n- Vuelvo a lista anterior", "_____no_output_____" ], [ "## 7. Código de Python\nEs otra manera de enseñar código. Se suele usar cuando lo único que quieres es mostrar un fragmento de código, pero sin ejecutarlo\n```Python\nstr = \"Esto es un bloque de código Python\"\nprint(str)\n```", "_____no_output_____" ], [ "## 8. Líneas de separación\nPara separar secciones utilizamos líneas horizontales. Hay varias opciones en markdown para insertar una lína horizontal. En este ejemplo se usa o asteriscos o guiones.\n***\n\n---", "_____no_output_____" ], [ "## 9. Links y enlaces\nPara crear enlaces externos, a páginas web, se usa la sintaxis [ enlace ] (web)\n\n[enlace en línea](http://www.google.es)\n\nTambien podemos definir [un enlace][blog].\n\nA una [web][blog] a la que podemos referenciar mas adelante\n\n[blog]: http://www.google.es\n\nPor otro lado, podemos definir links que vayan a otras partes del Notebook, como por ejemplo a una cabecera concreta. Si haces clik en [este enlace](#Markdown), volverás al inicio del notebook.Con [este otro enlace](#1.-Primera-celda) vas al primer apartado.\n\n¿Cómo linkarlos? Copiamos el nombre de la cabecera, sustituimos espacios por guiones, le añadimos en hashtag al principio, y eso es lo que va dentro de los paréntesis.", "_____no_output_____" ], [ "## 10. Imágenes\nSi tenemos una imagen en el ordenador, tenemos que decirle a Markdown que apunte a esa imagen. NO se adjuntan imagenes. Lo normal es tener todas las imagenes agrupadas en una carpeta dentro de tu repositorio.\n\nUsamos la sintaxis ![nombre cualquiera](ruta de la imagen).", "_____no_output_____" ], [ "La imagen tiene que estar en la misma carpeta que este notebook, debido a la sintaxis `./imagen.png`. Con el `./` Jupyter entiende que tiene que buscar en la carpeta de este notebook. 
Si ponemos `../` le indicamos que la imagen está en el directorio anterior:\n\n1. `./imagen.png` si la imagen esta en el mismo directorio donde está este Notebook\n2. `./imagenes/imagen.png` si dentro del directorio donde se encuentra este Notebook, hay una carpeta llamada \"imagenes\", y dentro se encuentra la imagen.\n3. `../imagen.png` si la imagen que buscamos está en el directorio anterior a donde se encuentra este Notebook.", "_____no_output_____" ], [ "![imagen](../../imagenes/markdown_image.png)", "_____no_output_____" ], [ "Existe otra forma de cargar las imágenes, mediante sentencias de HTML: `<img src=\"../../imagenes/markdown_image.png\" alt=\"Drawing\"/>`\n\nAdemás, podremos añadirle más parámetros, como hacer un redimensionado de la misma con el parámetro style: `<img src=\"../../imagenes/markdown_image.png\" alt=\"Drawing\" style=\"width: 200px;\"/>`\n\n<img src=\"../../imagenes/markdown_image.png\" alt=\"Drawing\" style=\"width: 200px;\"/>", "_____no_output_____" ], [ "## 11. Documentación\nHay muchísimas guías para escribir markdown en Internet. Con lo visto en este notebook tienes más que de sobra para darle color y forma a tus notebooks de Python... De Python, de R, documentación para GitHub, tu blog de data scientist... Como te dije al principio, markdown es un lenguaje muy popular al que se le puede sacar mucho jugo.\nAun así, si quieres aprender más de este lenguaje, te dejo algunos enlaces interesantes.\n\nhttps://www.markdownguide.org/basic-syntax/\n\nhttps://daringfireball.net/projects/markdown/syntax\n\nhttps://medium.com/analytics-vidhya/the-ultimate-markdown-guide-for-jupyter-notebook-d5e5abf728fd", "_____no_output_____" ], [ "## 12. Just Markdown!\nEn Jupyter tienes la opción de editar archivos puros de Markdown. Estos archivos tienen una extensión `.md`\n\nPara ello, tienes que ir a File -> New -> Markdown File. Y ya en el archivo, botón derecho -> Show Markdown preview\n\n![imagen](../../imagenes/extra_markdown.png)", "_____no_output_____" ], [ "## 13. Ejercicios\n\n### Ejercicio 1\nVamos a aplicar los conocimientos adquiridos en este notebook, intentando reproducir la siguiente imagen en markdown.\n\n**TIP**: en el primer enlace de la documentación tienes más ejemplos, por si te atascas con algo :)", "_____no_output_____" ], [ "![imagen](../../imagenes/ejercicio_markdown.png)", "_____no_output_____" ], [ "### Ejercicio 2\nPrueba a crear tus propios apuntes de Markdown. Haz un resumen con lo visto en este Notebook + los enlaces de la documentación que te resulten interesantes. Se trata de crear un Notebook de consulta para cuando tengas alguna duda de sintaxis Markdown.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7d9c0c0551c86eda3b72f2937915b4e064d0d98
75,519
ipynb
Jupyter Notebook
day3/solutions/Hubble-Solutions.ipynb
ishanikulkarni/usrp-sciprog
eaf60e9fe1477dc2f53939a70eb18ac3e16e3dbc
[ "MIT" ]
null
null
null
day3/solutions/Hubble-Solutions.ipynb
ishanikulkarni/usrp-sciprog
eaf60e9fe1477dc2f53939a70eb18ac3e16e3dbc
[ "MIT" ]
null
null
null
day3/solutions/Hubble-Solutions.ipynb
ishanikulkarni/usrp-sciprog
eaf60e9fe1477dc2f53939a70eb18ac3e16e3dbc
[ "MIT" ]
null
null
null
148.367387
27,024
0.844953
[ [ [ "# Expansion velocity of the universe\n\nIn 1929, Edwin Hubble published a [paper](http://www.pnas.org/content/pnas/15/3/168.full.pdf) in which he compared the radial velocity of objects with their distance. The former can be done pretty precisely with spectroscopy, the latter is much more uncertain. His original data are [here](table1.txt).\n\nHe saw that the velocity increases with distance and speculated that this could be the sign of a cosmological expansion. Let's find out what he did.\n\nLoad the data into an array with `numpy.genfromtxt`, make use of its arguments `names` and `dtype` to read in the column names from the header and choosing the data type on its own as needed. You should get 6 columns\n * `CAT`, `NUMBER`: These two combined give you the name of the galaxy.\n * `R`: distance in Mpc\n * `V`: radial velocity in km/s\n * `RA`, `DEC`: equatorial coordinates of the galaxy\n \nMake a scatter plot of V vs R. Don't forget labels and units...", "_____no_output_____" ] ], [ [ "import numpy as np\ndata = np.genfromtxt('table1.txt', names=True, dtype=None)\nprint(\"Samples:\", len(data))\nprint(\"Data Types:\", data.dtype)", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.scatter(data['R'], data['V'])\nplt.xlabel('R [Mpc]')\nplt.ylabel('V [km/s]')", "_____no_output_____" ] ], [ [ "Use `np.linalg.lstsq` to fit a linear regression function and determine the slope $H_0$ of the line $V=H_0 R$. For that, reshape $R$ as a $N\\times1$ matrix (the design matrix) and solve for 1 unknown parameter. Add the best-fit line to the plot.", "_____no_output_____" ] ], [ [ "N = len(data)\nX = data['R'].reshape((N,1))\nparams, _, _, _ = np.linalg.lstsq(X, data['V'])\nprint(params)\nH0 = params[0]\n\nR = np.linspace(0,2.5,100)\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.scatter(data['R'], data['V'])\nax.plot(R, H0*R, 'k--')\nax.set_xlim(xmin=0, xmax=2.5)\nax.set_xlabel('Distance [Mpc]')\nax.set_ylabel('Velocity [km/s]')", "_____no_output_____" ] ], [ [ "Why is there scatter with respect to the best-fit curve? Is it fair to only fit for the slope and not also for the intercept? How would $H_0$ change if you include an intercept in the fit?", "_____no_output_____" ] ], [ [ "X = np.ones((N, 2))\nX[:,1] = data['R']\nparams, _, _, _ = np.linalg.lstsq(X, data['V'])\nprint(params)\ninter, H0 = params\n\nR = np.linspace(0,2.5,100)\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.scatter(data['R'], data['V'])\nax.plot(R, H0*R + inter, 'k--')\nax.set_xlim(xmin=0, xmax=2.5)\nax.set_xlabel('Distance [Mpc]')\nax.set_ylabel('Velocity [km/s]')", "_____no_output_____" ] ], [ [ "## Correcting for motion of the sun\n\n$V$ as given in the table is a combination of any assumed cosmic expansion and the motion of the sun with respect to that cosmic frame. So, we need to generalize the model to $V=H_0 R + V_s$, where the solar velocity is given by $V_s = X \\cos(RA)\\cos(DEC) + Y\\sin(RA)\\cos(DEC)+Z\\sin(DEC)$. 
We'll use `astropy` to read in the RA/DEC coordinate strings and properly convert them to degrees (and then radians):", "_____no_output_____" ] ], [ [ "import astropy.coordinates as coord\nimport astropy.units as u\n\npos = coord.SkyCoord(ra=data['RA'].astype('U8'), dec=data['DEC'].astype('U9'), unit=(u.hourangle,u.deg),frame='fk5')\nra_ = pos.ra.to(u.deg).value * np.pi/180\ndec_ = pos.dec.to(u.deg).value * np.pi/180", "_____no_output_____" ] ], [ [ "Construct a new $N\\times4$ design matrix for the four unknown parameters $H_0$, $X$, $Y$, $Z$ to account for the solar motion. The resulting $H_0$ is Hubble's own version of the \"Hubble constant\". What do you get?", "_____no_output_____" ] ], [ [ "Ah = np.empty((N,4))\nAh[:,0] = data['R']\nAh[:,1] = np.cos(ra_)*np.cos(dec_)\nAh[:,2] = np.sin(ra_)*np.cos(dec_)\nAh[:,3] = np.sin(dec_)\nparams_h, _, _, _ = np.linalg.lstsq(Ah, data['V'])\nprint(params_h)\nH0 = params_h[0] ", "[ 465.17797833 -67.84096674 236.14706994 -199.58892695]\n" ] ], [ [ "Make a scatter plot of $V-V_S$ vs $R$. How is it different from the previous one without the correction for solar velicity. Add the best-fit linear regression line.", "_____no_output_____" ] ], [ [ "VS = params_h[1]*Ah[:,1] + params_h[2]*Ah[:,2] + params_h[3]*Ah[:,3]\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.scatter(data['R'], data['V'] - VS)\nax.plot(R, H0*R, 'k-')\nax.set_xlim(xmin=0, xmax=2.5)\nax.set_xlabel('Distance [Mpc]')\nax.set_ylabel('Velocity [km/s]')", "_____no_output_____" ] ], [ [ "Using `astropy.units`, can you estimate the age of the universe from $H_0$? Does it make sense?", "_____no_output_____" ] ], [ [ "H0q = H0 * u.km / u.s / u.Mpc\n(1./H0q).to(u.Gyr)", "_____no_output_____" ] ], [ [ "## Deconstructing lstsq\n\nSo far we have not incorporated any measurement uncertainties. Can you guess or estimate them from the scatter with respect to the best-fit line? You may want to look at the residuals returned by `np.linalg.lstsq`...", "_____no_output_____" ] ], [ [ "scatter = data['V'] - VS - H0*data['R']\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.hist(scatter, 10)\nax.set_xlabel('$\\Delta$V [km/s]')", "_____no_output_____" ] ], [ [ "Let see how adopting a suitable value $\\sigma$ for those uncertainties would affect the estimate of $H_0$?\n\nThe problem you solved so far is $Ax=b$, and errors don't occur. With errors the respective equation is changed to $A^\\top \\Sigma^{-1} Ax=A^\\top \\Sigma^{-1}b$, where in this case the covariance matrix $\\Sigma=\\sigma^2\\mathbf{1}$. This problem can still be solved by `np.linalg.lstsq`.\n\nConstruct the modified design matrix and data vector and get a new estimate of $H_0$. Has it changed? Use `np.dot`, `np.transpose`, and `np.linalg.inv` (or their shorthands).", "_____no_output_____" ] ], [ [ "error = scatter.std()\nSigma = error**2*np.eye(N)\nAe = np.dot(Ah.T, np.dot(np.linalg.inv(Sigma), Ah))\nbe = np.dot(Ah.T, np.dot(np.linalg.inv(Sigma), data['V']))\nparams_e, _, _, _ = np.linalg.lstsq(Ae, be)\nprint(params_e)", "[ 465.17797833 -67.84096674 236.14706994 -199.58892695]\n" ] ], [ [ "Compute the parameter covariance matrix $S=(A^\\top \\Sigma^{-1} A)^{-1}$ and read off the variance of $H_0$. 
Update your plot to illustrate that uncertainty.", "_____no_output_____" ] ], [ [ "S = np.linalg.inv(Ae)\ndH0 = np.sqrt(S[0,0])\nprint(dH0)", "50.7640654387\n" ], [ "fig = plt.figure()\nax = fig.add_subplot(111)\nax.scatter(data['R'], data['V'] - VS)\nax.plot(R, H0*R, 'k-')\nax.plot(R, (H0-dH0)*R, 'k--')\nax.plot(R, (H0+dH0)*R, 'k--')\nax.set_xlim(xmin=0, xmax=2.5)\nax.set_xlabel('Distance [Mpc]')\nax.set_ylabel('Velocity [km/s]')", "_____no_output_____" ] ], [ [ "How large is the relative error? Would that help with the problematic age estimate above?", "_____no_output_____" ] ], [ [ "H0q = (H0-dH0) * u.km / u.s / u.Mpc\n(1./H0q).to(u.Gyr)", "_____no_output_____" ] ], [ [ "Compare the noise-free result from above (Hubble's result) with $SA^\\top \\Sigma^{-1}b$. Did adopting errors change the result?", "_____no_output_____" ] ], [ [ "params_h, _, _, _ = np.linalg.lstsq(Ah, data['V'])\nprint(params_h)\nprint (np.dot(S, be))", "[ 465.17797833 -67.84096674 236.14706994 -199.58892695]\n[ 465.17797833 -67.84096674 236.14706994 -199.58892695]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d9d3e00a5c2babebd7b575408677c59e920441
19,823
ipynb
Jupyter Notebook
.ipynb_checkpoints/readME-checkpoint.ipynb
lspatial/geographnet
83590f5e9da4fc2274c7590c076e7dc4edcea649
[ "MIT" ]
null
null
null
.ipynb_checkpoints/readME-checkpoint.ipynb
lspatial/geographnet
83590f5e9da4fc2274c7590c076e7dc4edcea649
[ "MIT" ]
null
null
null
.ipynb_checkpoints/readME-checkpoint.ipynb
lspatial/geographnet
83590f5e9da4fc2274c7590c076e7dc4edcea649
[ "MIT" ]
null
null
null
46.752358
1,816
0.573324
[ [ [ "import pandas as pd\nimport numpy as np\nimport shutil\nfrom sklearn import preprocessing\nfrom geographnet.geographnet.model.wdatasampling import DataSamplingDSited\nimport pandas as pd\nimport numpy as np\n# from torch_geometric.data import NeighborSampler\nfrom geographnet.geographnet.model.wsampler import WNeighborSampler\nimport torch\nfrom geographnet.geographnet.traintest_pm import train, test\nfrom geographnet.geographnet.model.geographpnet import GeoGraphPNet\nimport gc\nimport sys\nimport shutil\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\nimport pickle", "_____no_output_____" ], [ "def selectSites(datain):\n sitesDF = datain.drop_duplicates('id').copy()\n sgrp = sitesDF['stratified_flag'].value_counts()\n sitesDF['stratified_flag_cnt'] = sgrp.loc[sitesDF['stratified_flag']].values\n pos1_index = np.where(sitesDF['stratified_flag_cnt'] < 5)[0]\n posT_index = np.where(sitesDF['stratified_flag_cnt'] >= 5)[0]\n np.random.seed()\n trainsiteIndex, testsiteIndex = train_test_split(posT_index, stratify=sitesDF.iloc[posT_index]['stratified_flag'],\n test_size=0.15)\n selsites = sitesDF.iloc[testsiteIndex]['id']\n trainsitesIndex = np.where(~datain['id'].isin(selsites))[0]\n indTestsitesIndex = np.where(datain['id'].isin(selsites))[0]\n return trainsitesIndex,indTestsitesIndex", "_____no_output_____" ], [ "import urllib \nurl = 'https://github.com/lspatial/geographnet/raw/master/pmdatain.pkl.tar.gz'\ntarfl='/wkspace/pypackages/geographnetPub/data/test/pmsamples.tar.gz'\nurllib.request.urlretrieve(url, tarfl) ", "_____no_output_____" ], [ "import tarfile\nimport os\ndef untar(fname, dirs):\n t = tarfile.open(fname)\n t.extractall(path = dirs) ", "_____no_output_____" ], [ "target='/wkspace/pypackages/geographnetPub/data/test/'\nuntar(tarfl,target)", "_____no_output_____" ], [ "targetFl=target+'/pmdatain.pkl'", "_____no_output_____" ], [ "datatar=pd.read_pickle(targetFl)\nprint(datatar.shape)", "(950283, 72)\n" ], [ "datatar.columns ", "_____no_output_____" ], [ "print(datatar.columns,datatar.shape)\ncovs=['idate','lat', 'lon', 'latlon', 'DOY', 'dem', 'OVP10_TOTEXTTAU', 'OVP14_TOTEXTTAU',\n 'TOTEXTTAU', 'glnaswind', 'maiacaod', 'o3', 'pblh', 'prs', 'rhu', 'tem',\n 'win', 'GAE', 'NO2_BOT', 'NO_BOT', 'PM25_BOT', 'PM_BOT', 'OVP10_CO',\n 'OVP10_GOCART_SO2_VMR', 'OVP10_NO', 'OVP10_NO2', 'OVP10_O3', 'BCSMASS',\n 'DMSSMASS', 'DUSMASS25', 'HNO3SMASS', 'NISMASS25', 'OCSMASS', 'PM25',\n 'SO2SMASS', 'SSSMASS25', 'sdist_roads', 'sdist_poi', 'parea10km',\n 'rlen10km', 'wstag', 'wmix', 'CLOUD', 'MYD13C1.NDVI',\n 'MYD13C1.EVI', 'MOD13C1.NDVI', 'MOD13C1.EVI', 'is_workday', 'OMI-NO2']\ntarget=['PM10_24h', 'PM2.5_24h']\nX = datatar[covs].values\nscX = preprocessing.StandardScaler().fit(X)\nXn = scX.transform(X)\ny = datatar[['pm25_log','pm10_log']].values\nypm25 = datatar['PM2.5_24h'].values\nypm10 = datatar['PM10_24h'].values\nscy = preprocessing.StandardScaler().fit(y)\nyn = scy.transform(y)\ntarcols=[i for i in range(len(covs))]\ntrainsitesIndex=[i for i in range(datatar.shape[0])]\ntrainsitesIndex, indTestsitesIndex=selectSites(datatar)\nx, edge_index,edge_dist, y, train_index, test_index = DataSamplingDSited(Xn[:,tarcols], yn, [0,1,2], 12,\n trainsitesIndex ,datatar)\nXn = Xn[:, 1:]\nedge_weight=1.0/(edge_dist+0.00001)\nneighbors=[12,12,12,12]\ntrain_loader = WNeighborSampler(edge_index, edge_weight= edge_weight,node_idx=train_index,\n sizes=neighbors, batch_size=2048, shuffle=True,\n num_workers=20 )\nx_index = torch.LongTensor([i for i in 
range(Xn.shape[0])])\nx_loader = WNeighborSampler(edge_index, edge_weight= edge_weight,node_idx=x_index,\n sizes=neighbors, batch_size=2048, shuffle=False,\n num_workers=20 )\ngpu=0\nif gpu is None:\n device = torch.device('cpu')\nelse:\n device = torch.device('cuda:'+str(gpu))\nnout=2\nresnodes = [512, 320, 256, 128, 96, 64, 32, 16]\n# 0: original; 1: concated ; 2: dense; 3: only gcn\ngcnnhiddens = [128,64,32]\nmodel = GeoGraphPNet(x.shape[1], gcnnhiddens, nout, len(neighbors), resnodes, weightedmean=True,gcnout=nout,nattlayer=1)\nmodel = model.to(device)\nx = x.to(device)\nedge_index = edge_index.to(device)\ny = y.to(device)\ninit_lr=0.01\noptimizer = torch.optim.Adam(model.parameters(), lr=init_lr)\nbest_indtest_r2 = -9999\nbest_indtest_r2_pm10=-9999\nscheduler=torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer,mode='min')\n#scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.2, last_epoch=-1)\noldlr=newlr=init_lr\nepoch=0\nnepoch=3\ntrpath=\"/wkspace/pypackages/geographnetPub/data/test\"\nwhile epoch< nepoch :\n # adjust_lr(optimizer, epoch, init_lr)\n print('Conducting ',epoch, ' of ',nepoch,' for PM ... ...')\n loss,loss_pm25,loss_pm10,loss_rel = train(model, train_loader, device, optimizer, x, y)\n\n permetrics, lossinf, testdata = test(model, x_loader, device, x, y, scy, train_index,\n test_index, indtest_index=indTestsitesIndex,\n ypm25=ypm25, ypm10=ypm10)\n\n try:\n permetrics,lossinf,testdata= test(model, x_loader, device, x, y, scy,train_index,\n test_index, testout=True,indtest_index=indTestsitesIndex,\n ypm25=ypm25 ,ypm10=ypm10)\n lossall, lossall_pm25, lossall_pm10, lossall_rel = lossinf\n pmindtesting, pmtesting, pmtrain=testdata\n except:\n print(\"Wrong loop for ecpoch \"+str(epoch)+ \", continue ... ...\")\n epoch=epoch+1\n continue\n permetrics_pm25 = permetrics[permetrics['pol'] == 'pm2.5']\n permetrics_pm10 = permetrics[permetrics['pol'] == 'pm10']\n permetrics_pm25=permetrics_pm25.iloc[0]\n permetrics_pm10 = permetrics_pm10.iloc[0]\n if epoch>15 and permetrics_pm25['train_r2']<0 :\n print(\"Abnormal for ecpoch \" + str(epoch) + \", continue ... 
...\")\n epoch = epoch + 1\n continue\n if best_indtest_r2 < permetrics_pm25['indtest_r2']:\n best_indtest_r2 = permetrics_pm25['indtest_r2']\n saveDf = pd.DataFrame({'sid': datatar.iloc[test_index]['sid'].values, 'obs': pmtesting['pm25_obs'].values,\n 'pre': pmtesting['pm25_pre'].values})\n saveindDf = pd.DataFrame({'sid': datatar.iloc[indTestsitesIndex]['sid'].values, 'obs': pmindtesting['pm25_obs'].values,\n 'pre': pmindtesting['pm25_pre'].values})\n testfl = trpath + '/model_pm25_bestindtest_testdata.csv'\n saveDf.to_csv(testfl,index_label='index')\n indtestfl = trpath + '/model_pm25_bestindtest_indtestdata.csv'\n saveindDf.to_csv(indtestfl, index_label='index')\n modelFl = trpath + '/model_pm25_bestindtestr2.tor'\n torch.save(model, modelFl)\n modelMeFl = trpath + '/model_pm25_bestindtestr2.csv'\n pd.DataFrame([permetrics_pm25.to_dict()]).to_csv(modelMeFl, index_label='epoch')\n\n if best_indtest_r2_pm10 < permetrics_pm10['indtest_r2']:\n best_indtest_r2_pm10 = permetrics_pm10['indtest_r2']\n saveDf = pd.DataFrame({'sid': datatar.iloc[test_index]['sid'].values, 'obs': pmtesting['pm10_obs'].values,\n 'pre': pmtesting['pm10_pre'].values})\n saveindDf = pd.DataFrame(\n {'sid': datatar.iloc[indTestsitesIndex]['sid'].values, 'obs': pmindtesting['pm10_obs'].values,\n 'pre': pmindtesting['pm10_pre'].values})\n testfl = trpath + '/model_pm10_bestindtest_testdata.csv'\n saveDf.to_csv(testfl, index_label='index')\n indtestfl = trpath + '/model_pm10s_bestindtest_indtestdata.csv'\n saveindDf.to_csv(indtestfl, index_label='index')\n modelFl = trpath + '/model_pm10_bestindtestr2.tor'\n torch.save(model, modelFl)\n modelMeFl = trpath + '/model_pm10_bestindtestr2.csv'\n pd.DataFrame([permetrics_pm10.to_dict()]).to_csv(modelMeFl, index_label='epoch')\n scheduler.step(loss)\n newlr= optimizer.param_groups[0]['lr']\n if newlr!=oldlr:\n print('Learning rate is {} from {} '.format(newlr, oldlr))\n oldlr=newlr\n atrainDf=permetrics\n atrainDf['epoch']=epoch\n lossDf=pd.DataFrame({'epoch':epoch,'loss':loss, 'loss_pm25':loss_pm25,'loss_pm10':loss_pm10,\n 'loss_rel':loss_rel,'lossall':lossall,'lossall_pm25':lossall_pm25,\n 'lossall_pm10':lossall_pm10,'lossall_rel':lossall_rel},index=[epoch])\n print(permetrics)\n print(lossDf)\n if epoch==0:\n alltrainHist=atrainDf\n alllostinfo=lossDf\n else:\n alltrainHist=alltrainHist.append(atrainDf)\n alllostinfo = alllostinfo.append(lossDf)\n epoch=epoch+1\ntfl = trpath + '/trainHist.csv'\nalltrainHist.to_csv(tfl, header=True, index_label=\"row\")\ntfl = trpath + '/ftrain_loss.csv'\nalllostinfo.to_csv(tfl, header=True, index_label=\"row\")\ndel optimizer, x, edge_index, y, train_index, test_index, model, alltrainHist\ngc.collect()", "Index(['idate', 'id', 'lat', 'lon', 'CO_24h', 'NO2_24h', 'O3_24h', 'O3_8h_24h',\n 'PM10_24h', 'PM2.5_24h', 'SO2_24h', 'lat2', 'lon2', 'latlon', 'year',\n 'month', 'day', 'DOY', 'dem', 'OVP10_TOTEXTTAU', 'OVP14_TOTEXTTAU',\n 'TOTEXTTAU', 'glnaswind', 'maiacaod', 'o3', 'pblh', 'prs', 'rhu', 'tem',\n 'win', 'GAE', 'NO2_BOT', 'NO_BOT', 'PM25_BOT', 'PM_BOT', 'OVP10_CO',\n 'OVP10_GOCART_SO2_VMR', 'OVP10_NO', 'OVP10_NO2', 'OVP10_O3', 'BCSMASS',\n 'DMSSMASS', 'DUSMASS25', 'HNO3SMASS', 'NISMASS25', 'OCSMASS', 'PM25',\n 'SO2SMASS', 'SSSMASS25', 'sdist_roads', 'sdist_poi', 'parea10km',\n 'rlen10km', 'wstag', 'wmix', 'CLOUD', 'stratified_flag', 'MYD13C1.NDVI',\n 'MYD13C1.EVI', 'MOD13C1.NDVI', 'MOD13C1.EVI', 'is_workday', 'OMI-NO2',\n 'co_log', 'no2_log', 'o3_log', 'o3h24_log', 'pm10_log', 'pm25_log',\n 'so2_log', 'ratiolog_pm25_pm10', 'sid'],\n 
dtype='object') (950283, 72)\ntorch.Size([690702]) torch.Size([121889])\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d9e574d55c9dd5c43b3b723fc3a4f1e2519208
8,795
ipynb
Jupyter Notebook
Graphical Analysis.ipynb
ecervera/GA
f24610e32dcf562e1cc2f0883960c0ae07b5958f
[ "MIT" ]
null
null
null
Graphical Analysis.ipynb
ecervera/GA
f24610e32dcf562e1cc2f0883960c0ae07b5958f
[ "MIT" ]
null
null
null
Graphical Analysis.ipynb
ecervera/GA
f24610e32dcf562e1cc2f0883960c0ae07b5958f
[ "MIT" ]
null
null
null
32.334559
452
0.630244
[ [ [ "<img src=\"img/graph_3_ex1.png\" align=\"right\" width=320>\n# Graphical Analysis\n\nPyevolve comes with a Graphical Plotting Tool, based on the [Matplotlib plotting library](http://matplotlib.org/).\n\nTo use this graphical plotting tool, you need to use the [DBAdapters.DBSQLite](http://pyevolve.sourceforge.net/0_6rc1/module_dbadapters.html) adapter and create a database file, where the population of each generation is stored.\n\nWe are going to extend the first example with the database and graphical output.", "_____no_output_____" ] ], [ [ "from pyevolve import G1DList, GSimpleGA\nfrom pyevolve import DBAdapters", "_____no_output_____" ], [ "def eval_func(chromosome):\n score = 0.0\n for value in chromosome:\n if value==0:\n score += 1.0\n return score", "_____no_output_____" ], [ "genome = G1DList.G1DList(20)\ngenome.evaluator.set(eval_func)\ngenome.setParams(rangemin=0, rangemax=10)", "_____no_output_____" ] ], [ [ "The database adapter is defined in the following cell. The database is stored in a file, and the elements need a specific identifier. We will use always the same identifier, but you could change it if you want to save different evolutions in the same database. The parameter <tt>resetDB</tt> is set for deleting any existing data in the database.", "_____no_output_____" ] ], [ [ "sqlite_adapter = DBAdapters.DBSQLite(dbname='first_example.db', identify=\"ex1\", resetDB=True)", "_____no_output_____" ] ], [ [ "When you run your GA, all the statistics will be dumped to this database. When you use the graph tool, it will read the statistics from this database file.\n\nLet's evolve the example. Now, instead of evolving step by step, we will set a number of generations for completing the evolution with a single call to <tt>ga.evolve</tt>.", "_____no_output_____" ] ], [ [ "ga = GSimpleGA.GSimpleGA(genome)\nga.setDBAdapter(sqlite_adapter)\nga.setGenerations(20)\nga.evolve(freq_stats=5)\nprint(\"Generation: %d\" % ga.currentGeneration)\nbest = ga.bestIndividual()\nprint('\\tBest individual: %s' % str(best.genomeList))\nprint('\\tBest score: %.0f' % best.score)", "_____no_output_____" ] ], [ [ "## Plotting\n\nHere are described the main graph types. Usually you can choose to plot the **raw** or **fitness** score, which are defined as:\n* The raw score represents the score returned by the [Evaluation function](http://pyevolve.sourceforge.net/0_6rc1/intro.html#term-evaluation-function), this score is not scaled.\n* The fitness score is the scaled raw score, for example, if you use the Linear Scaling ([Scaling.LinearScaling()](http://pyevolve.sourceforge.net/0_6rc1/module_scaling.html?highlight=scaling#Scaling.LinearScaling)), the fitness score will be the raw score scaled with the Linear Scaling method. The fitness score represents how good is the individual relative to our population.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nfrom pyevolve_plot import plot_errorbars_raw, plot_errorbars_fitness, \\\n plot_maxmin_raw, plot_maxmin_fitness, \\\n plot_diff_raw, plot_pop_heatmap_raw", "_____no_output_____" ] ], [ [ "### Error bars graph (raw scores)\n\nIn this graph, you will find the generations on the x-axis and the raw scores on the y-axis. The green vertical bars represents the maximum and the minimum raw scores of the current population at generation indicated in the x-axis. 
The blue line between them is the average raw score of the population.", "_____no_output_____" ] ], [ [ "plot_errorbars_raw('first_example.db','ex1')", "_____no_output_____" ] ], [ [ "### Error bars graph (fitness scores)\n\nThe difference between this graph option and the previous one is that we are using the fitness scores instead of the raw scores.", "_____no_output_____" ] ], [ [ "plot_errorbars_fitness('first_example.db','ex1')", "_____no_output_____" ] ], [ [ "### Max/min/avg/std. dev. graph (raw scores)\n\nIn this graph we have the green line showing the maximum raw score at the generation in the x-axis, the red line shows the minimum raw score, and the blue line shows the average raw scores. The green shaded region represents the difference between our max. and min. raw scores. The black line shows the standard deviation of the average raw scores. We also have some annotations like the maximum raw score, maximum std. dev. and the min std. dev.", "_____no_output_____" ] ], [ [ "plot_maxmin_raw('first_example.db','ex1')", "_____no_output_____" ] ], [ [ "### Max/min/avg/std. dev. graph (fitness scores)\n\nThis graph shows the maximum fitness score from the population at the x-axis generation using the green line. The red line shows the minimum fitness score and the blue line shows the average fitness score from the population. The green shaded region between the green and red line shows the difference between the best and worst individual of the population.", "_____no_output_____" ] ], [ [ "plot_maxmin_fitness('first_example.db','ex1')", "_____no_output_____" ] ], [ [ "### Min/max difference graph, raw and fitness scores\n\nIn this graph, we have two subplots: the first is the difference between the best individual raw score and the worst individual raw score. The second graph shows the difference between the best individual fitness score and the worst individual fitness score. Both subplots show the generation on the x-axis and the score difference on the y-axis.", "_____no_output_____" ] ], [ [ "plot_diff_raw('first_example.db','ex1')", "_____no_output_____" ] ], [ [ "### Heat map of population raw score distribution\n\nThe heat map graph is a plot with the population individual plotted on the x-axis and the generation plotted on the y-axis. On the right side we have a legend with the color/score relation. As you can see, on the initial populations, the last individuals' scores are the worst (represented in this colormap with the dark blue). To create this graph, we use the Gaussian interpolation method.", "_____no_output_____" ] ], [ [ "plot_pop_heatmap_raw('first_example.db','ex1')", "_____no_output_____" ] ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7d9e92ae9a1ce9d134ae9eed09565a15b5f5eb8
278,428
ipynb
Jupyter Notebook
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
0a1ad0f07005f2c7aaebd2389e238d923ecfe01e
[ "CC-BY-4.0" ]
137
2016-08-13T13:29:51.000Z
2022-03-31T06:45:51.000Z
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
0a1ad0f07005f2c7aaebd2389e238d923ecfe01e
[ "CC-BY-4.0" ]
null
null
null
src/main/ipynb/pandas.ipynb
pperezgr/python-bigdata
0a1ad0f07005f2c7aaebd2389e238d923ecfe01e
[ "CC-BY-4.0" ]
190
2016-08-23T05:46:21.000Z
2022-01-17T15:08:54.000Z
115.674283
34,324
0.866019
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "# Analysing structured data with data frames \n\n(c) 2019 [Steve Phelps](mailto:[email protected])\n", "_____no_output_____" ], [ "## Data frames\n\n- The `pandas` module provides a powerful data-structure called a data frame.\n\n- It is similar, but not identical to:\n - a table in a relational database,\n - an Excel spreadsheet,\n - a dataframe in R.\n ", "_____no_output_____" ], [ "### Types of data\n\nData frames can be used to represent:\n\n- [Panel data](https://en.wikipedia.org/wiki/Panel_data)\n- [Time series](https://en.wikipedia.org/wiki/Time_series) data\n- [Relational data](https://en.wikipedia.org/wiki/Relational_model)\n ", "_____no_output_____" ], [ "### Loading data\n\n- Data frames can be read and written to/from:\n - financial web sites\n - database queries\n - database tables\n - CSV files\n - json files\n \n- Beware that data frames are memory resident;\n - If you read a large amount of data your PC might crash\n - With big data, typically you would read a subset or summary of the data via e.g. a select statement.", "_____no_output_____" ], [ "## Importing pandas\n\n- The pandas module is usually imported with the alias `pd`.\n", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ] ], [ [ "## Series\n\n- A Series contains a one-dimensional array of data, *and* an associated sequence of labels called the *index*.\n\n- The index can contain numeric, string, or date/time values.\n\n- When the index is a time value, the series is a [time series](https://en.wikipedia.org/wiki/Time_series).\n\n- The index must be the same length as the data.\n\n- If no index is supplied it is automatically generated as `range(len(data))`.", "_____no_output_____" ], [ "### Creating a series from an array\n\n", "_____no_output_____" ] ], [ [ "import numpy as np\ndata = np.random.randn(5)\ndata", "_____no_output_____" ], [ "my_series = pd.Series(data, index=['a', 'b', 'c', 'd', 'e'])\nmy_series", "_____no_output_____" ] ], [ [ "### Plotting a series\n\n- We can plot a series by invoking the `plot()` method on an instance of a `Series` object.\n\n- The x-axis will autimatically be labelled with the series index.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nmy_series.plot()\nplt.show()", "_____no_output_____" ] ], [ [ "### Creating a series with automatic index\n\n- In the following example the index is creating automatically:", "_____no_output_____" ] ], [ [ "pd.Series(data)", "_____no_output_____" ] ], [ [ "### Creating a Series from a `dict`\n\n", "_____no_output_____" ] ], [ [ "d = {'a' : 0., 'b' : 1., 'c' : 2.}\nmy_series = pd.Series(d)\nmy_series", "_____no_output_____" ] ], [ [ "### Indexing a series with `[]`\n\n- Series can be accessed using the same syntax as arrays and dicts.\n\n- We use the labels in the index to access each element.\n\n", "_____no_output_____" ] ], [ [ "my_series['b']", "_____no_output_____" ] ], [ [ "- We can also use the label like an attribute:", "_____no_output_____" ] ], [ [ "my_series.b", "_____no_output_____" ] ], [ [ "### Slicing a series\n\n\n- We can specify a range of labels to obtain a slice:", "_____no_output_____" ] ], [ [ "my_series[['b', 'c']]", "_____no_output_____" ] ], [ [ "## Arithmetic and vectorised functions\n\n- `numpy` vectorization works for series objects too.\n\n", "_____no_output_____" ] ], [ [ "d = {'a' : 0., 'b' : 1., 'c' : 2.}\nsquared_values = pd.Series(d) ** 2\nsquared_values", "_____no_output_____" ], [ "x = pd.Series({'a' : 0., 
'b' : 1., 'c' : 2.})\ny = pd.Series({'a' : 3., 'b' : 4., 'c' : 5.})\nx + y", "_____no_output_____" ] ], [ [ "## Time series", "_____no_output_____" ] ], [ [ "dates = pd.date_range('1/1/2000', periods=5)\ndates", "_____no_output_____" ], [ "time_series = pd.Series(data, index=dates)\ntime_series", "_____no_output_____" ] ], [ [ "### Plotting a time-series", "_____no_output_____" ] ], [ [ "ax = time_series.plot()", "_____no_output_____" ] ], [ [ "## Missing values\n\n- Pandas uses `nan` to represent missing data.\n\n- So `nan` is used to represent missing, invalid or unknown data values.\n\n- It is important to note that this only convention only applies within pandas.\n - Other frameworks have very different semantics for these values.\n", "_____no_output_____" ], [ "## DataFrame\n\n- A data frame has multiple columns, each of which can hold a *different* type of value.\n\n- Like a series, it has an index which provides a label for each and every row. \n\n- Data frames can be constructed from:\n - dict of arrays,\n - dict of lists,\n - dict of dict\n - dict of Series\n - 2-dimensional array\n - a single Series\n - another DataFrame", "_____no_output_____" ], [ "\n## Creating a dict of series", "_____no_output_____" ] ], [ [ "series_dict = {\n 'x' : \n pd.Series([1., 2., 3.], index=['a', 'b', 'c']),\n 'y' : \n pd.Series([4., 5., 6., 7.], index=['a', 'b', 'c', 'd']),\n 'z' :\n pd.Series([0.1, 0.2, 0.3, 0.4], index=['a', 'b', 'c', 'd'])\n}\n\nseries_dict", "_____no_output_____" ] ], [ [ "## Converting the dict to a data frame", "_____no_output_____" ] ], [ [ "df = pd.DataFrame(series_dict)\ndf", "_____no_output_____" ] ], [ [ "## Plotting data frames\n\n- When plotting a data frame, each column is plotted as its own series on the same graph.\n\n- The column names are used to label each series.\n\n- The row names (index) is used to label the x-axis.", "_____no_output_____" ] ], [ [ "ax = df.plot()", "_____no_output_____" ] ], [ [ "## Indexing", "_____no_output_____" ], [ "- The outer dimension is the column index.\n\n- When we retrieve a single column, the result is a Series", "_____no_output_____" ] ], [ [ "df['x']", "_____no_output_____" ], [ "df['x']['b']", "_____no_output_____" ], [ "df.x.b", "_____no_output_____" ] ], [ [ "## Projections\n\n- Data frames can be sliced just like series.\n- When we slice columns we call this a *projection*, because it is analogous to specifying a subset of attributes in a relational query, e.g. 
`SELECT x FROM table`.\n- If we project a single column the result is a series:", "_____no_output_____" ] ], [ [ "slice = df['x'][['b', 'c']]\nslice", "_____no_output_____" ], [ "type(slice)", "_____no_output_____" ] ], [ [ "## Projecting multiple columns\n\n- When we include multiple columns in the projection the result is a DataFrame.", "_____no_output_____" ] ], [ [ "slice = df[['x', 'y']]\nslice", "_____no_output_____" ], [ "type(slice)", "_____no_output_____" ] ], [ [ "## Vectorization\n\n- Vectorized functions and operators work just as with series objects:", "_____no_output_____" ] ], [ [ "df['x'] + df['y']", "_____no_output_____" ], [ "df ** 2", "_____no_output_____" ] ], [ [ "## Logical indexing\n\n- We can use logical indexing to retrieve a subset of the data.\n\n", "_____no_output_____" ] ], [ [ "df['x'] >= 2", "_____no_output_____" ], [ "df[df['x'] >= 2]", "_____no_output_____" ] ], [ [ "## Descriptive statistics", "_____no_output_____" ], [ "- To quickly obtain descriptive statistics on numerical values use the `describe` method.", "_____no_output_____" ] ], [ [ "df.describe()", "_____no_output_____" ] ], [ [ "## Accessing a single statistic\n\n- The result is itself a DataFrame, so we can index a particular statistic like so:", "_____no_output_____" ] ], [ [ "df.describe()['x']['mean']", "_____no_output_____" ] ], [ [ "## Accessing the row and column labels\n\n- The row labels (index) and column labels can be accessed:\n", "_____no_output_____" ] ], [ [ "df.index", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ] ], [ [ "## Head and tail\n\n- Data frames have `head()` and `tail()` methods which behave analgously to the Unix commands of the same name.", "_____no_output_____" ], [ "## Financial data\n\n- Pandas was originally developed to analyse financial data.\n\n- We can download tabulated data in a portable format called [Comma Separated Values (CSV)](https://www.loc.gov/preservation/digital/formats/fdd/fdd000323.shtml).", "_____no_output_____" ] ], [ [ "import pandas as pd\ngoogl = pd.read_csv('data/GOOGL.csv')", "_____no_output_____" ] ], [ [ "### Examining the first few rows\n\n- When working with large data sets it is useful to view just the first/last few rows in the dataset.\n\n- We can use the `head()` method to retrieve the first rows:", "_____no_output_____" ] ], [ [ "googl.head()", "_____no_output_____" ] ], [ [ "### Examining the last few rows", "_____no_output_____" ] ], [ [ "googl.tail()", "_____no_output_____" ] ], [ [ "### Converting to datetime values\n\n- So far, the `Date` attribute is of type string.", "_____no_output_____" ] ], [ [ "googl.Date[0]", "_____no_output_____" ], [ "type(googl.Date[0])", "_____no_output_____" ] ], [ [ "- In order to work with time-series data, we need to construct an index containing time values.\n\n- Time values are of type `datetime` or `Timestamp`.\n\n- We can use the function `to_datetime()` to convert strings to time values.", "_____no_output_____" ] ], [ [ "pd.to_datetime(googl['Date']).head()", "_____no_output_____" ] ], [ [ "### Setting the index\n\n- Now we need to set the index of the data-frame so that it contains the sequence of dates.\n", "_____no_output_____" ] ], [ [ "googl.set_index(pd.to_datetime(googl['Date']), inplace=True)\ngoogl.index[0]", "_____no_output_____" ], [ "type(googl.index[0])", "_____no_output_____" ] ], [ [ "### Plotting series\n\n- We can plot a series in a dataframe by invoking its `plot()` method.\n\n- Here we plot a time-series of the daily traded volume:", 
"_____no_output_____" ] ], [ [ "ax = googl['Volume'].plot()\nplt.show()", "_____no_output_____" ] ], [ [ "### Adjusted closing prices as a time series", "_____no_output_____" ] ], [ [ "googl['Adj Close'].plot()\nplt.show()", "_____no_output_____" ] ], [ [ "### Slicing series using date/time stamps\n\n- We can slice a time series by specifying a range of dates or times.\n\n- Date and time stamps are specified strings representing dates in the required format.", "_____no_output_____" ] ], [ [ "googl['Adj Close']['1-1-2016':'1-1-2017'].plot()\nplt.show()", "_____no_output_____" ] ], [ [ "### Resampling \n\n- We can *resample* to obtain e.g. weekly or monthly prices.\n\n- In the example below the `'W'` denotes weekly.\n\n- See [the documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) for other frequencies.\n\n- We group data into weeks, and then take the last value in each week.\n\n- For details of other ways to resample the data, see [the documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html).", "_____no_output_____" ], [ "#### Resampled time-series plot", "_____no_output_____" ] ], [ [ "weekly_prices = googl['Adj Close'].resample('W').last()\nweekly_prices.head()", "_____no_output_____" ], [ "weekly_prices.plot()\nplt.title('Prices for GOOGL sampled at weekly frequency')\nplt.show()", "_____no_output_____" ] ], [ [ "### Converting prices to log returns", "_____no_output_____" ] ], [ [ "weekly_rets = np.diff(np.log(weekly_prices))\nplt.plot(weekly_rets)\nplt.xlabel('t'); plt.ylabel('$r_t$')\nplt.title('Weekly log-returns for GOOGL')\nplt.show()", "_____no_output_____" ] ], [ [ "### Converting the returns to a series\n\n- Notice that in the above plot the time axis is missing the dates.\n\n- This is because the `np.diff()` function returns an array instead of a data-frame.\n", "_____no_output_____" ] ], [ [ "type(weekly_rets)", "_____no_output_____" ] ], [ [ "- We can convert it to a series thus:", "_____no_output_____" ] ], [ [ "weekly_rets_series = pd.Series(weekly_rets, index=weekly_prices.index[1:])\nweekly_rets_series.head()", "_____no_output_____" ] ], [ [ "#### Plotting with the correct time axis", "_____no_output_____" ], [ "Now when we plot the series we will obtain the correct time axis:", "_____no_output_____" ] ], [ [ "plt.plot(weekly_rets_series)\nplt.title('GOOGL weekly log-returns'); plt.xlabel('t'); plt.ylabel('$r_t$')\nplt.show()", "_____no_output_____" ] ], [ [ "### Plotting a return histogram", "_____no_output_____" ] ], [ [ "weekly_rets_series.hist()\nplt.show()", "_____no_output_____" ], [ "weekly_rets_series.describe()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
e7d9ec97f7b5b8603e64d064f5b679ff6c5789a7
5,645
ipynb
Jupyter Notebook
notebooks/202-FeatureEngineering.ipynb
ohadravid/ml-tutorial
5b196a80290ca443c079cf0a32dd38d149a9ef34
[ "MIT" ]
null
null
null
notebooks/202-FeatureEngineering.ipynb
ohadravid/ml-tutorial
5b196a80290ca443c079cf0a32dd38d149a9ef34
[ "MIT" ]
null
null
null
notebooks/202-FeatureEngineering.ipynb
ohadravid/ml-tutorial
5b196a80290ca443c079cf0a32dd38d149a9ef34
[ "MIT" ]
null
null
null
17.751572
115
0.500443
[ [ [ "import numpy as np \nimport pandas as pd \nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "filename= \"../data/kobe/kobe_bryant_shot_data.csv.gz\"\ndf = pd.read_csv(filename, na_values={'shot_made_flag': ''})", "_____no_output_____" ], [ "df = df.dropna()", "_____no_output_____" ], [ "df = df.drop([u'action_type', u'game_event_id', u'game_id',\n u'lat', u'lon', u'team_id', u'team_name', u'game_date',\n u'opponent', u'shot_id'], axis=1)", "_____no_output_____" ], [ "df = df.drop(['loc_x', 'loc_y', 'shot_type', 'shot_zone_area', 'shot_zone_basic', 'shot_zone_range'], axis=1)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df['home'] = df.matchup.apply(lambda matchup: 0 if '@' in matchup else 1)\ndf = df.drop(['matchup'], axis=1)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df['time_remaining'] = 60 * df['minutes_remaining'] + df['seconds_remaining']\ndf = df.drop(['minutes_remaining', 'seconds_remaining'], axis=1)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "cols = df.columns.tolist()\ncols.remove('shot_made_flag')\ncols.append('shot_made_flag')\n\ndf = df[cols]", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "filename= \"../data/kobe/kobe_bryant_shot_data_refined.csv\"\ndf.to_csv(filename, index=False)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7d9ef2d52ccece220a574bd54617f33afef2af6
2,974
ipynb
Jupyter Notebook
Slides/exc_9/9.class.ipynb
OlufKelk/IPNA
27abbcd0de9d2da33169f1caf9604ebb58682c61
[ "MIT" ]
null
null
null
Slides/exc_9/9.class.ipynb
OlufKelk/IPNA
27abbcd0de9d2da33169f1caf9604ebb58682c61
[ "MIT" ]
null
null
null
Slides/exc_9/9.class.ipynb
OlufKelk/IPNA
27abbcd0de9d2da33169f1caf9604ebb58682c61
[ "MIT" ]
null
null
null
28.32381
151
0.578682
[ [ [ "<img src=\"ku_logo_uk_v.png\" alt=\"drawing\" width=\"130\" style=\"float:right\"/>\n\n# <span style=\"color:#2c061f\"> Data Project</span> \n\n<br>\n\n## <span style=\"color:#374045\"> Introduction to Programming and Numerical Analysis </span>\n*Oluf Kelkjær*\n\n### **Today's Plan** \n1. Inaugural Project\n2. Data Project", "_____no_output_____" ], [ "## Inaugural Project \nYou should all have received **feedback** from me in absalon inbox. \nIf you haven't received feedback from me, or have any questions regarding the feedback, take a hold of me! \n\nIf you've passed, it doesn't mean there's not room for improvement :) \nIn generel, keep the objectives in mind: \n* Apply numerical solution and simulation methods\n * Solve the problem using SciPy and numpy, using what you've learned in **lecture 03 and lecture 04** \n* Structure a code project\n * Notebook (markdown cells, introduce and conclude questions), py-files, readme (description and dependencies) etc.\n* Document code\n * Document and explain your code using what you learned in **lecture 05**\n* Present results in text form and in figures\n * Create nice tables/figures + create nice markdown cells\n", "_____no_output_____" ], [ "## Data Project \nProject description is live on Git. \n**Deadline**: 17th of april. \n\n**Again keep the objectives and the contents in mind and you are good to go!**\n\nAlso: use at least 2 datasets, combine them and do some grouped analysis (split-apply-combine). Use operations from **Lecture 07 and Lecture 08**", "_____no_output_____" ], [ "## Let's go :)", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ] ]
e7d9efd9b073fff225150751fbc30e3fea575e8f
101,808
ipynb
Jupyter Notebook
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
8c3d4249edc103f8606a1df25ebce5fd866da6c5
[ "MIT" ]
null
null
null
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
8c3d4249edc103f8606a1df25ebce5fd866da6c5
[ "MIT" ]
null
null
null
data/Scraping Fantasy Football Data - FINAL.ipynb
zgscherrer/Project-Fantasy-Football
8c3d4249edc103f8606a1df25ebce5fd866da6c5
[ "MIT" ]
1
2018-09-30T08:37:32.000Z
2018-09-30T08:37:32.000Z
40.544803
305
0.49336
[ [ [ "## Scraping Fantasy Football Data\nNeed to scrape the following data:\n- Weekly Player PPR Projections: ESPN, CBS, Fantasy Sharks, Scout Fantasy Sporsts, (and tried Fantasy Football Today but doesn't have defense projections currently, so exclude)\n- Previous Week Player Actual PPR Results\n- Weekly Fanduel Player Salary (can manually download csv from a Thurs-Sun contest and then import)", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport requests\n# import json\n# from bs4 import BeautifulSoup\n\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n# from selenium.common.exceptions import NoSuchElementException", "_____no_output_____" ], [ "#function to initiliaze selenium web scraper\ndef instantiate_selenium_driver():\n chrome_options = webdriver.ChromeOptions()\n chrome_options.add_argument('--no-sandbox')\n chrome_options.add_argument('--window-size=1420,1080')\n #chrome_options.add_argument('--headless')\n chrome_options.add_argument('--disable-gpu')\n driver = webdriver.Chrome('..\\plugins\\chromedriver.exe', \n chrome_options=chrome_options)\n return driver", "_____no_output_____" ], [ "#function to save dataframes to pickle archive\n#file name: don't include csv in file name, function will also add a timestamp to the archive\n#directory name don't include final backslash\ndef save_to_pickle(df, directory_name, file_name):\n lt = time.localtime()\n full_file_name = f\"{file_name}_{lt.tm_year}-{lt.tm_mon}-{lt.tm_mday}-{lt.tm_hour}-{lt.tm_min}.pkl\"\n path = f\"{directory_name}/{full_file_name}\"\n df.to_pickle(path)\n print(f\"Pickle saved to: {path}\")", "_____no_output_____" ], [ "#remove name suffixes of II III IV or Jr. or Sr. or random * from names to easier match other databases\n#also remove periods from first name T.J. 
make TJ (just remove periods from whole name in function)\ndef remove_suffixes_periods(name):\n #remove periods and any asterisks\n name = name.replace(\".\", \"\")\n name = name.replace(\"*\", \"\")\n \n #remove any suffixes by splitting the name on spaces and then rebuilding the name with only the first two of the list (being first/last name)\n name_split = name.split(\" \")\n name_final = \" \".join(name_split[0:2]) #rebuild\n \n# #old suffix removal process (created some errors for someone with Last Name starting with V)\n# for suffix in [\" III\", \" II\", \" IV\", \" V\", \" Jr.\", \" Sr.\"]:\n# name = name.replace(suffix, \"\")\n\n return name_final", "_____no_output_____" ], [ "#function to rename defense position labels so all matach\n#this will be used since a few players have same name as another player, but currently none that\n#are at same position need to create a function that gets all the defense labels the same, so that\n#when merge, can merge by both player name and position to prevent bad merges\n#input of pos will be the value of the column that getting mapped\ndef convert_defense_label(pos):\n defense_labels_scraped = ['DST', 'D', 'Def', 'DEF']\n if pos in defense_labels_scraped:\n #conver defense position labels to espn format\n pos = 'D/ST'\n return pos", "_____no_output_____" ] ], [ [ "### Get Weekly Player Actual Fantasy PPR Points\nGet from ESPN's Scoring Leaders table\n\nhttp://games.espn.com/ffl/leaders?&scoringPeriodId=1&seasonId=2018&slotCategoryId=0&leagueID=0\n- scoringPeriodId = week of the season\n- seasonId = year\n- slotCategoryId = position, where 'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16\n- leagueID = scoring type, PPR Standard is 0", "_____no_output_____" ] ], [ [ "##SCRAPE ESPN SCORING LEADERS TABLE FOR ACTUAL FANTASY PPR POINTS##\n\n#input needs to be year as four digit number and week as number \n#returns dataframe of scraped data\ndef scrape_actual_PPR_player_points_ESPN(week, year):\n #instantiate the driver\n driver = instantiate_selenium_driver()\n \n #initialize dataframe for all data\n player_actual_ppr = pd.DataFrame()\n \n #url that returns info has different code for each position\n position_ids = {'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16}\n\n #cycle through each position webpage to create comprehensive dataframe\n for pos, pos_id in position_ids.items():\n #note leagueID=0 is for PPR standard scoring\n url_start_pos = f\"http://games.espn.com/ffl/leaders?&scoringPeriodId={week}&seasonId={year}&slotCategoryId={pos_id}&leagueID=0\"\n driver.get(url_start_pos)\n \n #each page only gets 50 results, so cycle through next button until next button no longer exists\n while True:\n #read in the table from ESPN, by using the class, and use the 1st row index for column header\n player_actual_ppr_table_page = pd.read_html(driver.page_source,\n attrs={'class': 'playerTableTable'}, #return only the table of this class, which has the player data\n header=[1])[0] #returns table in a list, so get zeroth table\n\n #easier to just assign the player position rather than try to scrape it out\n player_actual_ppr_table_page['POS'] = pos\n\n #replace any placeholder string -- or --/-- with None type to not confuse calculations later\n player_actual_ppr_table_page.replace({'--': None, '--/--': None}, inplace=True)\n \n\n#if want to extract more detailed data from this, can do added reformatting, etc., but not doing that for our purposes\n# #rename D/ST columns so don't get misassigned to wrong columns\n# if pos == 'D/ST':\n# 
player_actual_ppr_table_page.rename(columns={'SCK':'D/ST_Sack', \n# 'FR':'D/ST_FR', 'INT':'D/ST_INT',\n# 'TD':'D/ST_TD', 'BLK':'D/ST_BLK', 'PA':'D/ST_PA'},\n# inplace=True)\n \n# #rename/recalculate Kicker columns so don't get misassigned to wrong columns\n# elif pos == 'K':\n# player_actual_ppr_table_page.rename(columns={'1-39':'KICK_FG_1-39', '40-49':'KICK_FG_40-49',\n# '50+':'KICK_FG_50+', 'TOT':'KICK_FG',\n# 'XP':'KICK_XP'},\n# inplace=True)\n \n# #if wanted to use all the kicker data could fix this code snipit - erroring out because can't split None types\n# #just want made FG's for each bucket and overall FGAtt and XPAtt\n# player_actual_ppr_table_page['KICK_FGAtt'] = player_actual_ppr_table_page['KICK_FG'].map(\n# lambda x: x.split(\"/\")[-1]).astype('float64')\n# player_actual_ppr_table_page['KICK_XPAtt'] = player_actual_ppr_table_page['KICK_XP'].map(\n# lambda x: x.split(\"/\")[-1]).astype('float64')\n# player_actual_ppr_table_page['KICK_FG_1-39'] = player_actual_ppr_table_page['KICK_FG_1-39'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# player_actual_ppr_table_page['KICK_FG_40-49'] = player_actual_ppr_table_page['KICK_FG_40-49'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# player_actual_ppr_table_page['KICK_FG_50+'] = player_actual_ppr_table_page['KICK_FG_50+'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# player_actual_ppr_table_page['KICK_FG'] = player_actual_ppr_table_page['KICK_FG'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# player_actual_ppr_table_page['KICK_XP'] = player_actual_ppr_table_page['KICK_XP'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# player_actual_ppr_table_page['KICK_FG%'] = player_actual_ppr_table_page['KICK_FG'] / espn_proj_table_page['KICK_FGAtt']\n \n \n #add page data to overall dataframe\n player_actual_ppr = pd.concat([player_actual_ppr, player_actual_ppr_table_page],\n ignore_index=True,\n sort=False)\n\n #click to next page to get next 40 results, but check that it exists\n try:\n next_button = driver.find_element_by_partial_link_text('NEXT')\n next_button.click()\n except EC.NoSuchElementException:\n break\n \n driver.quit()\n \n #drop any completely blank columns\n player_actual_ppr.dropna(axis='columns', how='all', inplace=True)\n \n #add columns that give week/season\n player_actual_ppr['WEEK'] = week\n player_actual_ppr['SEASON'] = year\n \n return player_actual_ppr", "_____no_output_____" ], [ "###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA###\n#(you could make this more complex if want to extract some of the subdata)\n\ndef format_extract_PPR_player_points_ESPN(df_scraped_ppr_espn):\n #split out player, team, position based on ESPN's formatting\n def split_player_team_pos_espn(play_team_pos):\n #incoming string for players: 'Todd Gurley II, LAR RB' or 'Drew Brees, NO\\xa0QB'\n #incoming string for players with special designations: 'Aaron Rodgers, GB\\xa0QB Q'\n #incoming string for D/ST: 'Jaguars D/ST\\xa0D/ST'\n\n #operations if D/ST\n if \"D/ST\" in play_team_pos:\n player = play_team_pos.split(' D/ST\\xa0')[0]\n team = player.split()[0]\n\n #operations for regular players\n else:\n player = play_team_pos.split(',')[0]\n team_pos = play_team_pos.split(',')[1]\n team = team_pos.split()[0]\n\n return player, team\n\n \n df_scraped_ppr_espn[['PLAYER', 'TEAM']] = df_scraped_ppr_espn.apply(\n lambda x: split_player_team_pos_espn(x['PLAYER, TEAM POS']),\n axis='columns',\n result_type='expand')\n\n \n #need to remove name suffixes so can match players easier to other data - see function 
defined above\n df_scraped_ppr_espn['PLAYER'] = df_scraped_ppr_espn['PLAYER'].map(remove_suffixes_periods)\n\n #convert PTS to float type (sometimes zeros have been stored as strings)\n df_scraped_ppr_espn['PTS'] = df_scraped_ppr_espn['PTS'].astype('float64')\n \n #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS'\n df_scraped_ppr_espn = df_scraped_ppr_espn[['PLAYER', 'POS', 'TEAM', 'PTS', 'WEEK']].sort_values('PTS', ascending=False)\n \n\n return df_scraped_ppr_espn", "_____no_output_____" ], [ "#CALL SCRAPE AND FORMATTING OF ACTUAL PPR WEEK 1- AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk1_player_actual_ppr_scrape = scrape_actual_PPR_player_points_ESPN(1, 2018)\nsave_to_pickle(df_wk1_player_actual_ppr_scrape, 'pickle_archive', 'Week1_Player_Actual_PPR_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team/weel and save the data\ndf_wk1_player_actual_ppr = format_extract_PPR_player_points_ESPN(df_wk1_player_actual_ppr_scrape)\n#rename PTS column to something more descriptive \ndf_wk1_player_actual_ppr.rename(columns={'PTS':'FPTS_PPR_ACTUAL'}, inplace=True) \nsave_to_pickle(df_wk1_player_actual_ppr, 'pickle_archive', 'Week1_Player_Actual_PPR')\nprint(df_wk1_player_actual_ppr.shape)\ndf_wk1_player_actual_ppr.head()", "Pickle saved to: pickle_archive/Week1_Player_Actual_PPR_messy_scrape_2018-9-16-7-31.pkl\nPickle saved to: pickle_archive/Week1_Player_Actual_PPR_2018-9-16-7-31.pkl\n(1007, 5)\n" ] ], [ [ "### Get ESPN Player Fantasy Points Projections for Week \nGet from ESPN's Projections Table\n\nhttp://games.espn.com/ffl/tools/projections?&scoringPeriodId=1&seasonId=2018&slotCategoryId=0&leagueID=0\n- scoringPeriodId = week of the season\n- seasonId = year\n- slotCategoryId = position, where 'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16\n- leagueID = scoring type, PPR Standard is 0", "_____no_output_____" ] ], [ [ "##SCRAPE ESPN PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS##\n\n#input needs to be year as four digit number and week as number \n#returns dataframe of scraped data\ndef scrape_weekly_player_projections_ESPN(week, year):\n #instantiate the driver on the ESPN projections page\n driver = instantiate_selenium_driver()\n \n #initialize dataframe for all data\n proj_ppr_espn = pd.DataFrame()\n \n #url that returns info has different code for each position\n position_ids = {'QB':0, 'RB':2, 'WR':4, 'TE':6, 'K':17, 'D/ST':16}\n\n #cycle through each position webpage to create comprehensive dataframe\n for pos, pos_id in position_ids.items():\n #note leagueID=0 is for PPR standard scoring\n url_start_pos = f\"http://games.espn.com/ffl/tools/projections?&scoringPeriodId={week}&seasonId={year}&slotCategoryId={pos_id}&leagueID=0\" \n driver.get(url_start_pos)\n \n #each page only gets 50 results, so cycle through next button until next button no longer exists\n while True:\n #read in the table from ESPN, by using the class, and use the 1st row index for column header\n proj_ppr_espn_table_page = pd.read_html(driver.page_source,\n attrs={'class': 'playerTableTable'}, #return only the table of this class, which has the player data\n header=[1])[0] #returns table in a list, so get zeroth table\n\n #easier to just assign the player position rather than try to scrape it out\n proj_ppr_espn_table_page['POS'] = pos\n\n #replace any placeholder string -- or --/-- with None type to not confuse calculations later\n proj_ppr_espn_table_page.replace({'--': None, '--/--': None}, inplace=True)\n\n\n#if want to extract 
more detailed data from this, can do added reformatting, etc., but not doing that for our purposes\n# #rename D/ST columns so don't get misassigned to wrong columns\n# if pos == 'D/ST':\n# proj_ppr_espn_table_page.rename(columns={'SCK':'D/ST_Sack', \n# 'FR':'D/ST_FR', 'INT':'D/ST_INT',\n# 'TD':'D/ST_TD', 'BLK':'D/ST_BLK', 'PA':'D/ST_PA'},\n# inplace=True)\n \n# #rename/recalculate Kicker columns so don't get misassigned to wrong columns\n# elif pos == 'K':\n# proj_ppr_espn_table_page.rename(columns={'1-39':'KICK_FG_1-39', '40-49':'KICK_FG_40-49',\n# '50+':'KICK_FG_50+', 'TOT':'KICK_FG',\n# 'XP':'KICK_XP'},\n# inplace=True)\n \n# #if wanted to use all the kicker data could fix this code snipit - erroring out because can't split None types\n# #just want made FG's for each bucket and overall FGAtt and XPAtt\n# proj_ppr_espn_table_page['KICK_FGAtt'] = proj_ppr_espn_table_page['KICK_FG'].map(\n# lambda x: x.split(\"/\")[-1]).astype('float64')\n# proj_ppr_espn_table_page['KICK_XPAtt'] = proj_ppr_espn_table_page['KICK_XP'].map(\n# lambda x: x.split(\"/\")[-1]).astype('float64')\n# proj_ppr_espn_table_page['KICK_FG_1-39'] = proj_ppr_espn_table_page['KICK_FG_1-39'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# proj_ppr_espn_table_page['KICK_FG_40-49'] = proj_ppr_espn_table_page['KICK_FG_40-49'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# proj_ppr_espn_table_page['KICK_FG_50+'] = proj_ppr_espn_table_page['KICK_FG_50+'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# proj_ppr_espn_table_page['KICK_FG'] = proj_ppr_espn_table_page['KICK_FG'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# proj_ppr_espn_table_page['KICK_XP'] = proj_ppr_espn_table_page['KICK_XP'].map(\n# lambda x: x.split(\"/\")[0]).astype('float64')\n# proj_ppr_espn_table_page['KICK_FG%'] = proj_ppr_espn_table_page['KICK_FG'] / espn_proj_table_page['KICK_FGAtt']\n \n \n #add page data to overall dataframe\n proj_ppr_espn = pd.concat([proj_ppr_espn, proj_ppr_espn_table_page],\n ignore_index=True,\n sort=False)\n\n #click to next page to get next 40 results, but check that it exists\n try:\n next_button = driver.find_element_by_partial_link_text('NEXT')\n next_button.click()\n except EC.NoSuchElementException:\n break\n \n driver.quit()\n \n #drop any completely blank columns\n proj_ppr_espn.dropna(axis='columns', how='all', inplace=True)\n \n #add columns that give week/season\n proj_ppr_espn['WEEK'] = week\n proj_ppr_espn['SEASON'] = year\n \n return proj_ppr_espn", "_____no_output_____" ], [ "#formatting/extracting function is same for ESPN Actual/PPR Projections, so don't need new function", "_____no_output_____" ], [ "#WEEK 1 PROJECTIONS\n#CALL SCRAPE AND FORMATTING OF ESPN WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk1_ppr_proj_espn_scrape = scrape_weekly_player_projections_ESPN(1, 2018)\nsave_to_pickle(df_wk1_ppr_proj_espn_scrape, 'pickle_archive', 'Week1_PPR_Projections_ESPN_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team/week and save the data\ndf_wk1_ppr_proj_espn = format_extract_PPR_player_points_ESPN(df_wk1_ppr_proj_espn_scrape)\n#rename PTS column to something more descriptive \ndf_wk1_ppr_proj_espn.rename(columns={'PTS':'FPTS_PPR_ESPN'}, inplace=True) \nsave_to_pickle(df_wk1_ppr_proj_espn, 'pickle_archive', 'Week1_PPR_Projections_ESPN')\nprint(df_wk1_ppr_proj_espn.shape)\ndf_wk1_ppr_proj_espn.head()", "Pickle saved to: 
pickle_archive/Week1_PPR_Projections_ESPN_messy_scrape_2018-9-16-7-33.pkl\nPickle saved to: pickle_archive/Week1_PPR_Projections_ESPN_2018-9-16-7-33.pkl\n(1007, 5)\n" ], [ "#WEEK 2 PROJECTIONS\n#CALL SCRAPE AND FORMATTING OF ESPN WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk2_ppr_proj_espn_scrape = scrape_weekly_player_projections_ESPN(2, 2018)\nsave_to_pickle(df_wk2_ppr_proj_espn_scrape, 'pickle_archive', 'Week2_PPR_Projections_ESPN_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team/week and save the data\ndf_wk2_ppr_proj_espn = format_extract_PPR_player_points_ESPN(df_wk2_ppr_proj_espn_scrape)\n#rename PTS column to something more descriptive \ndf_wk2_ppr_proj_espn.rename(columns={'PTS':'FPTS_PPR_ESPN'}, inplace=True) \nsave_to_pickle(df_wk2_ppr_proj_espn, 'pickle_archive', 'Week2_PPR_Projections_ESPN')\nprint(df_wk2_ppr_proj_espn.shape)\ndf_wk2_ppr_proj_espn.head()", "Pickle saved to: pickle_archive/Week2_PPR_Projections_ESPN_messy_scrape_2018-9-16-7-35.pkl\nPickle saved to: pickle_archive/Week2_PPR_Projections_ESPN_2018-9-16-7-35.pkl\n(1007, 5)\n" ] ], [ [ "### Get CBS Player Fantasy Points Projections for Week \nGet from CBS's Projections Table\n\nhttps://www.cbssports.com/fantasy/football/stats/sortable/points/QB/ppr/projections/2018/2?&print_rows=9999\n- QB is where position goes\n- 2018 is where season goes\n- 2 is where week goes\n- print_rows = 9999 gives all results in one table", "_____no_output_____" ] ], [ [ "##SCRAPE CBS PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS##\n\n#input needs to be year as four digit number and week as number \n#returns dataframe of scraped data\ndef scrape_weekly_player_projections_CBS(week, year):\n ###GET PROJECTIONS FROM CBS###\n #CBS has separate tables for each position, so need to cycle through them\n #but url can return all list so don't need to go page by page\n proj_ppr_cbs = pd.DataFrame()\n \n positions = ['QB', 'RB', 'WR', 'TE', 'K', 'DST']\n header_row_index = {'QB':2, 'RB':2, 'WR':2, 'TE':2, 'K':1, 'DST':1}\n \n for position in positions:\n #url just needs to change position\n url = f\"https://www.cbssports.com/fantasy/football/stats/sortable/points/{position}/ppr/projections/{year}/{week}?&print_rows=9999\"\n \n #read in the table from CBS by class, and use the 2nd row index for column header\n proj_ppr_cbs_pos = pd.read_html(url, \n attrs={'class': 'data'}, #return only the table of this class, which has the player data\n header=[header_row_index[position]])[0] #returns table in a list, so get table\n proj_ppr_cbs_pos['POS'] = position\n \n #add the table to the overall df\n proj_ppr_cbs = pd.concat([proj_ppr_cbs, proj_ppr_cbs_pos], \n ignore_index=True, \n sort=False)\n\n #some tables include the page selector as the bottom row of the table,\n #so need to find the index values of those rows and then drop them from the table\n index_pages_rows = list(proj_ppr_cbs[proj_ppr_cbs['Player'].str.contains('Pages')].index)\n proj_ppr_cbs.drop(index_pages_rows, axis='index', inplace=True)\n \n #add columns that give week/season\n proj_ppr_cbs['WEEK'] = week\n proj_ppr_cbs['SEASON'] = year\n \n return proj_ppr_cbs ", "_____no_output_____" ], [ "###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA###\n#(you could make this more complex if want to extract some of the subdata)\n\ndef format_extract_PPR_player_points_CBS(df_scraped_ppr_cbs):\n# #could include this extra data if you want to extract it\n# #calculate completion percentage\n# 
df_cbs_proj['COMPLETION_PERCENTAGE'] = df_cbs_proj.CMP/df_cbs_proj.ATT\n\n\n# #rename some of columns so don't lose meaning\n# df_cbs_proj.rename(columns={'ATT':'PASS_ATT', 'CMP':'PASS_COMP', 'COMPLETION_PERCENTAGE': 'PASS_COMP_PCT',\n# 'YD': 'PASS_YD', 'TD':'PASS_TD', 'INT':'PASS_INT', 'RATE':'PASS_RATE', \n# 'ATT.1': 'RUSH_ATT', 'YD.1': 'RUSH_YD', 'AVG': 'RUSH_AVG', 'TD.1':'RUSH_TD',\n# 'TARGT': 'RECV_TARGT', 'RECPT': 'RECV_RECPT', 'YD.2':'RECV_YD', 'AVG.1':'RECV_AVG', 'TD.2':'RECV_TD',\n# 'FPTS':'PTS',\n# 'FG':'KICK_FG', 'FGA': 'KICK_FGAtt', 'XP':'KICK_XP', 'XPAtt':'KICK_XPAtt', \n# 'Int':'D/ST_INT', 'Sty':'D/ST_Sty', 'Sack':'D/ST_Sack', 'TK':'D/ST_TK',\n# 'DFR':'D/ST_FR', 'FF':'D/ST_FF', 'DTD':'D/ST_TD',\n# 'Pa':'D/ST_PtsAll', 'PaNetA':'D/ST_PaYdA', 'RuYdA':'D/ST_RuYdA', 'TyDa':'D/ST_ToYdA'},\n# inplace=True)\n\n\n# #calculate passing, rushing, total yards/game\n# df_cbs_proj['D/ST_PaYd/G'] = df_cbs_proj['D/ST_PaYdA']/16\n# df_cbs_proj['D/ST_RuYd/G'] = df_cbs_proj['D/ST_RuYdA']/16\n# df_cbs_proj['D/ST_ToYd/G'] = df_cbs_proj['D/ST_ToYdA']/16\n\n\n #rename FPTS to PTS\n df_scraped_ppr_cbs.rename(columns={'FPTS':'FPTS_PPR_CBS'}, inplace=True) \n \n\n #split out player, team\n def split_player_team(play_team):\n #incoming string for players: 'Todd Gurley, LAR'\n #incoming string for DST: 'Jaguars, JAC'\n\n #operations if D/ST (can tell if there is only two items in a list separated by a space, instead of three)\n if len(play_team.split()) == 2:\n player = play_team.split(',')[0] #+ ' D/ST'\n team = play_team.split(',')[1]\n\n #operations for regular players\n else:\n player = play_team.split(',')[0]\n team = play_team.split(',')[1]\n \n #remove any possible name suffixes to merge with other data better\n player = remove_suffixes_periods(player)\n \n return player, team\n\n \n df_scraped_ppr_cbs[['PLAYER', 'TEAM']] = df_scraped_ppr_cbs.apply(\n lambda x: split_player_team(x['Player']),\n axis='columns',\n result_type='expand')\n\n \n #convert defense position label to espn standard\n df_scraped_ppr_cbs['POS'] = df_scraped_ppr_cbs['POS'].map(convert_defense_label)\n \n \n #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS'\n df_scraped_ppr_cbs = df_scraped_ppr_cbs[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_CBS', 'WEEK']].sort_values('FPTS_PPR_CBS', ascending=False)\n\n\n return df_scraped_ppr_cbs", "_____no_output_____" ], [ "#WEEK 1 PROJECTIONS\n#CALL SCRAPE AND FORMATTING OF CBS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk1_ppr_proj_cbs_scrape = scrape_weekly_player_projections_CBS(1, 2018)\nsave_to_pickle(df_wk1_ppr_proj_cbs_scrape, 'pickle_archive', 'Week1_PPR_Projections_CBS_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team and save the data\ndf_wk1_ppr_proj_cbs = format_extract_PPR_player_points_CBS(df_wk1_ppr_proj_cbs_scrape)\nsave_to_pickle(df_wk1_ppr_proj_cbs, 'pickle_archive', 'Week1_PPR_Projections_CBS')\nprint(df_wk1_ppr_proj_cbs.shape)\ndf_wk1_ppr_proj_cbs.head()", "Pickle saved to: pickle_archive/Week1_PPR_Projections_CBS_messy_scrape_2018-9-16-7-35.pkl\nPickle saved to: pickle_archive/Week1_PPR_Projections_CBS_2018-9-16-7-35.pkl\n(793, 5)\n" ], [ "#WEEK 2 PROJECTIONS\n#CALL SCRAPE AND FORMATTING OF CBS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk2_ppr_proj_cbs_scrape = scrape_weekly_player_projections_CBS(2, 2018)\nsave_to_pickle(df_wk2_ppr_proj_cbs_scrape, 'pickle_archive', 
'Week2_PPR_Projections_CBS_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team/week and save the data\ndf_wk2_ppr_proj_cbs = format_extract_PPR_player_points_CBS(df_wk2_ppr_proj_cbs_scrape)\nsave_to_pickle(df_wk2_ppr_proj_cbs, 'pickle_archive', 'Week2_PPR_Projections_CBS')\nprint(df_wk2_ppr_proj_cbs.shape)\ndf_wk2_ppr_proj_cbs.head()", "Pickle saved to: pickle_archive/Week2_PPR_Projections_CBS_messy_scrape_2018-9-16-7-35.pkl\nPickle saved to: pickle_archive/Week2_PPR_Projections_CBS_2018-9-16-7-35.pkl\n(815, 5)\n" ] ], [ [ "### Get Fantasy Sharks Player Points Projection for Week\nThey have a json option that gets updated weekly (don't appear to store previous week projections). The json defaults to PPR (which is lucky for us) and has an all players option.\n\nhttps://www.fantasysharks.com/apps/Projections/WeeklyProjections.php?pos=ALL&format=json\nIt returns a list of players, each saved as a dictionary.\n\n[\n {\n \"Rank\": 1,\n \"ID\": \"4925\",\n \"Name\": \"Brees, Drew\",\n \"Pos\": \"QB\",\n \"Team\": \"NOS\",\n \"Opp\": \"CLE\",\n \"Comp\": \"27.49\",\n \"PassYards\": \"337\",\n \"PassTD\": 2.15,\n \"Int\": \"0.61\",\n \"Att\": \"1.5\",\n \"RushYards\": \"0\",\n \"RushTD\": 0.12,\n \"Rec\": \"0\",\n \"RecYards\": \"0\",\n \"RecTD\": 0,\n \"FantasyPoints\": 26\n },\n \nBut the json is only for current week, can't get other week data - so instead use this url exampe:\nhttps://www.fantasysharks.com/apps/bert/forecasts/projections.php?Position=99&scoring=2&Segment=628&uid=4\n- Segment is the week/season id - for 2018 week 1 starts at 628 and adds 1 for each additional week\n- Position=99 is all positions\n- scoring=2 is PPR default\n", "_____no_output_____" ] ], [ [ "##SCRAPE FANTASY SHARKS PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS##\n\n#input needs to be week as number (year isn't used, but keep same format as others)\n#returns dataframe of scraped data\ndef scrape_weekly_player_projections_Sharks(week, year):\n #fantasy sharks url - segment for 2018 week 1 starts at 628 and adds 1 for each additional week\n segment = 627 + week\n #Position=99 is all positions, and scoring=2 is PPR default\n sharks_weekly_url = f\"https://www.fantasysharks.com/apps/bert/forecasts/projections.php?Position=99&scoring=2&Segment={segment}&uid=4\"\n\n #since don't need to iterate over pages, can just use reqeuests instead of selenium scraper\n #however with requests, need to include headers because this website was rejecting the request since it knew python was running it - need to spoof a browser header\n #other possible headers: 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'\n headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1)'}\n #response returns html\n response = requests.get(sharks_weekly_url, headers=headers)\n\n #extract the table data from the html response (call response.text) and get table with player data\n proj_ppr_sharks = pd.read_html(response.text, #response.text gives the html of the page request\n attrs={'id': 'toolData'}, #return only the table of this id, which has the player data\n header = 0 #header is the 0th row\n )[0] #pd.read_html returns a list of tables even though only one in it, select the table\n \n #the webpage uses different tiers, which add extra rows to the table - get rid of those\n #also sometimes repeats the column headers for readability as scrolling - get rid of those\n #so need to find the index values of those bad rows and 
then drop them from the table\n index_pages_rows = list(proj_ppr_sharks[proj_ppr_sharks['#'].str.contains('Tier|#')].index)\n proj_ppr_sharks.drop(index_pages_rows, axis='index', inplace=True)\n \n #add columns that give week/season\n proj_ppr_sharks['WEEK'] = week\n proj_ppr_sharks['SEASON'] = year\n \n return proj_ppr_sharks ", "_____no_output_____" ], [ "###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA###\n#(you could make this more complex if want to extract some of the subdata like opposing team (OPP)\n\ndef format_extract_PPR_player_points_Sharks(df_scraped_ppr_sharks):\n #rename PTS to FPTS_PPR_SHARKS and a few others\n df_scraped_ppr_sharks.rename(columns={'Pts':'FPTS_PPR_SHARKS',\n 'Player': 'PLAYER',\n 'Tm': 'TEAM',\n 'Position': 'POS'},\n inplace=True) \n \n #they have player name as Last Name, First Name - reorder to First Last\n def modify_player_name(player, pos):\n #incoming string for players: 'Johnson, David' Change to: 'David Johnson'\n #incoming string for defense: 'Lions, Detroit' Change to: 'Lions'\n if pos == 'D':\n player_formatted = player.split(', ')[0]\n else:\n player_formatted = ' '.join(player.split(', ')[::-1])\n player_formatted = remove_suffixes_periods(player_formatted)\n \n #name overrides - some spelling differences from ESPN/CBS\n if player_formatted == 'Steven Hauschka':\n player_formatted = 'Stephen Hauschka'\n elif player_formatted == 'Josh Bellamy':\n player_formatted = 'Joshua Bellamy'\n elif player_formatted == 'Joshua Perkins': \n player_formatted = 'Josh Perkins'\n \n return player_formatted\n\n df_scraped_ppr_sharks['PLAYER'] = df_scraped_ppr_sharks.apply(\n lambda row: modify_player_name(row['PLAYER'], row['POS']),\n axis='columns')\n \n \n #convert FPTS to float type (currently stored as string)\n df_scraped_ppr_sharks['FPTS_PPR_SHARKS'] = df_scraped_ppr_sharks['FPTS_PPR_SHARKS'].astype('float64')\n\n \n #convert defense position label to espn standard\n df_scraped_ppr_sharks['POS'] = df_scraped_ppr_sharks['POS'].map(convert_defense_label)\n\n \n #for this function only extract 'PLAYER', 'POS', 'TEAM', 'FPTS'\n df_scraped_ppr_sharks = df_scraped_ppr_sharks[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_SHARKS', 'WEEK']].sort_values('FPTS_PPR_SHARKS', ascending=False)\n\n\n return df_scraped_ppr_sharks", "_____no_output_____" ], [ "#WEEK 1 PROJECTIONS\n#CALL SCRAPE AND FORMATTING OF FANTASY SHARKS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk1_ppr_proj_sharks_scrape = scrape_weekly_player_projections_Sharks(1, 2018)\nsave_to_pickle(df_wk1_ppr_proj_sharks_scrape, 'pickle_archive', 'Week1_PPR_Projections_Sharks_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team/week and save the data\ndf_wk1_ppr_proj_sharks = format_extract_PPR_player_points_Sharks(df_wk1_ppr_proj_sharks_scrape)\nsave_to_pickle(df_wk1_ppr_proj_sharks, 'pickle_archive', 'Week1_PPR_Projections_Sharks')\nprint(df_wk1_ppr_proj_sharks.shape)\ndf_wk1_ppr_proj_sharks.head()", "Pickle saved to: pickle_archive/Week1_PPR_Projections_Sharks_messy_scrape_2018-9-16-7-35.pkl\nPickle saved to: pickle_archive/Week1_PPR_Projections_Sharks_2018-9-16-7-35.pkl\n(918, 5)\n" ], [ "#WEEK 2 PROJECTIONS\n#CALL SCRAPE AND FORMATTING OF FANTASY SHARKS WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk2_ppr_proj_sharks_scrape = scrape_weekly_player_projections_Sharks(2, 2018)\nsave_to_pickle(df_wk2_ppr_proj_sharks_scrape, 'pickle_archive', 
'Week2_PPR_Projections_Sharks_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team and save the data\ndf_wk2_ppr_proj_sharks = format_extract_PPR_player_points_Sharks(df_wk2_ppr_proj_sharks_scrape)\nsave_to_pickle(df_wk2_ppr_proj_sharks, 'pickle_archive', 'Week2_PPR_Projections_Sharks')\nprint(df_wk2_ppr_proj_sharks.shape)\ndf_wk2_ppr_proj_sharks.head()", "Pickle saved to: pickle_archive/Week2_PPR_Projections_Sharks_messy_scrape_2018-9-16-7-35.pkl\nPickle saved to: pickle_archive/Week2_PPR_Projections_Sharks_2018-9-16-7-35.pkl\n(992, 5)\n" ] ], [ [ "### Get Scout Fantasy Sports Player Fantasy Points Projections for Week \nGet from Scout Fantasy Sports Projections Table\n\nhttps://fftoolbox.scoutfantasysports.com/football/rankings/?pos=rb&week=2&noppr=false\n- pos is position with options of 'QB','RB','WR','TE', 'K', 'DEF'\n- week is week of year\n- noppr is set to false when you want the ppr projections\n- it also returns one long table (no pagination required)", "_____no_output_____" ] ], [ [ "##SCRAPE Scout PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS##\n\n#input needs to be year as four digit number and week as number \n#returns dataframe of scraped data\ndef scrape_weekly_player_projections_SCOUT(week, year):\n ###GET PROJECTIONS FROM SCOUT###\n #SCOUT has separate tables for each position, so need to cycle through them\n #but url can return whole list so don't need to go page by page\n proj_ppr_scout = pd.DataFrame()\n \n positions = ['QB', 'RB', 'WR', 'TE', 'K', 'DEF']\n \n for position in positions:\n #url just needs to change position and week\n url = f\"https://fftoolbox.scoutfantasysports.com/football/rankings/?pos={position}&week={week}&noppr=false\"\n \n #response returns html\n response = requests.get(url, verify=False) #need verify false otherwise requests won't work on this site\n\n #extract the table data from the html response (call response.text) and get table with player data\n proj_ppr_scout_pos = pd.read_html(response.text, #response.text gives the html of the page request\n attrs={'class': 'responsive-table'}, #return only the table of this class, which has the player data\n header=0 #header is the 0th row\n )[0] #returns list of tables so get the table\n \n #add the table to the overall df\n proj_ppr_scout = pd.concat([proj_ppr_scout, proj_ppr_scout_pos], \n ignore_index=True, \n sort=False)\n\n #ads are included in table rows (eg 'googletag.defineSlot(\"/7103/SMG_FFToolBox/728x...')\n #so need to find the index values of those rows and then drop them from the table\n index_ads_rows = list(proj_ppr_scout[proj_ppr_scout['#'].str.contains('google')].index)\n proj_ppr_scout.drop(index_ads_rows, axis='index', inplace=True)\n \n #add columns that give week/season\n proj_ppr_scout['WEEK'] = week\n proj_ppr_scout['SEASON'] = year\n \n return proj_ppr_scout \n", "_____no_output_____" ], [ "###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA###\n#(you could make this more complex if want to extract some of the subdata)\n\ndef format_extract_PPR_player_points_SCOUT(df_scraped_ppr_scout):\n #rename columns\n df_scraped_ppr_scout.rename(columns={'Projected Pts.':'FPTS_PPR_SCOUT',\n 'Player':'PLAYER',\n 'Pos':'POS',\n 'Team':'TEAM'},\n inplace=True) \n \n\n #some players (very few - mostly kickers) seem to have name as last, first instead of written out\n #also rename defenses from City/State to Mascot\n #create dictionary for geographical location to mascot (use this for some Defense renaming) based on this website's naming\n NFL_team_mascot = {'Arizona': 
'Cardinals',\n 'Atlanta': 'Falcons',\n 'Baltimore': 'Ravens',\n 'Buffalo': 'Bills',\n 'Carolina': 'Panthers',\n 'Chicago': 'Bears',\n 'Cincinnati': 'Bengals',\n 'Cleveland': 'Browns',\n 'Dallas': 'Cowboys',\n 'Denver': 'Broncos',\n 'Detroit': 'Lions',\n 'Green Bay': 'Packers',\n 'Houston': 'Texans',\n 'Indianapolis': 'Colts',\n 'Jacksonville': 'Jaguars',\n 'Kansas City': 'Chiefs',\n #'Los Angeles': 'Rams',\n 'Miami': 'Dolphins',\n 'Minnesota': 'Vikings',\n 'New England': 'Patriots',\n 'New Orleans': 'Saints',\n 'New York Giants': 'Giants',\n 'New York Jets': 'Jets',\n 'Oakland': 'Raiders',\n 'Philadelphia': 'Eagles',\n 'Pittsburgh': 'Steelers',\n #'Los Angeles': 'Chargers',\n 'Seattle': 'Seahawks',\n 'San Francisco': '49ers',\n 'Tampa Bay': 'Buccaneers',\n 'Tennessee': 'Titans',\n 'Washington': 'Redskins'}\n #get Los Angelse defense data for assigning D's\n LosAngeles_defense_ranks = [int(x) for x in df_scraped_ppr_scout['#'][df_scraped_ppr_scout.PLAYER == 'Los Angeles'].tolist()]\n print(LosAngeles_defense_ranks)\n #in this function the defense rename here is SUPER GLITCHY since there are two Defenses' names 'Los Angeles', for now this code assumes the higher pts Defense is LA Rams\n def modify_player_name_scout(player, pos, rank):\n #defense need to change from city to mascot\n if pos == 'Def':\n #if Los Angeles is geographic location, then use minimum rank to Rams (assuming they are better defense)\n if player == 'Los Angeles' and int(rank) == min(LosAngeles_defense_ranks):\n player_formatted = 'Rams'\n elif player == 'Los Angeles' and int(rank) == max(LosAngeles_defense_ranks):\n player_formatted = 'Chargers'\n else: \n player_formatted = NFL_team_mascot.get(player)\n else:\n #if incoming string for players: 'Johnson, David' Change to: 'David Johnson' (this is rare - mostly for kickers on this site for som reason)\n if ',' in player:\n player = ' '.join(player.split(', ')[::-1])\n #remove suffixes/periods for all players \n player_formatted = remove_suffixes_periods(player)\n \n #hard override of some player names that don't match to ESPN naming\n if player_formatted == 'Juju Smith-Schuster': \n player_formatted = 'JuJu Smith-Schuster'\n elif player_formatted == 'Steven Hauschka':\n player_formatted = 'Stephen Hauschka'\n \n return player_formatted\n \n \n df_scraped_ppr_scout['PLAYER'] = df_scraped_ppr_scout.apply(\n lambda row: modify_player_name_scout(row['PLAYER'], row['POS'], row['#']),\n axis='columns')\n\n \n #convert defense position label to espn standard\n df_scraped_ppr_scout['POS'] = df_scraped_ppr_scout['POS'].map(convert_defense_label)\n \n\n #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS', 'WEEK' (note Team is blank because webpage uses images for teams)\n df_scraped_ppr_scout = df_scraped_ppr_scout[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_SCOUT', 'WEEK']].sort_values('FPTS_PPR_SCOUT', ascending=False)\n\n\n return df_scraped_ppr_scout", "_____no_output_____" ], [ "#WEEK 1 PROJECTIONS\n#CALL SCRAPE AND FORMATTING OF SCOUT WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk1_ppr_proj_scout_scrape = scrape_weekly_player_projections_SCOUT(1, 2018)\nsave_to_pickle(df_wk1_ppr_proj_scout_scrape, 'pickle_archive', 'Week1_PPR_Projections_SCOUT_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team and save the data\ndf_wk1_ppr_proj_scout = format_extract_PPR_player_points_SCOUT(df_wk1_ppr_proj_scout_scrape)\nsave_to_pickle(df_wk1_ppr_proj_scout, 'pickle_archive', 
'Week1_PPR_Projections_SCOUT')\nprint(df_wk1_ppr_proj_scout.shape)\ndf_wk1_ppr_proj_scout.head()", "C:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\n" ], [ "#WEEK 2 PROJECTIONS\n#CALL SCRAPE AND FORMATTING OF SCOUT WEEKLY PROJECTIONS - AND SAVE TO PICKLES FOR LATER USE\n\n#scrape data and save the messy full dataframe\ndf_wk2_ppr_proj_scout_scrape = scrape_weekly_player_projections_SCOUT(2, 2018)\nsave_to_pickle(df_wk2_ppr_proj_scout_scrape, 'pickle_archive', 'Week2_PPR_Projections_SCOUT_messy_scrape')\n\n#format data to extract just player pts/playr/pos/team and save the data\ndf_wk2_ppr_proj_scout = format_extract_PPR_player_points_SCOUT(df_wk2_ppr_proj_scout_scrape)\nsave_to_pickle(df_wk2_ppr_proj_scout, 'pickle_archive', 'Week2_PPR_Projections_SCOUT')\nprint(df_wk2_ppr_proj_scout.shape)\ndf_wk2_ppr_proj_scout.head()", "C:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. 
Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\nC:\\Users\\micha\\Anaconda3\\envs\\PythonData\\lib\\site-packages\\urllib3\\connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings\n InsecureRequestWarning)\n" ] ], [ [ "### Get FanDuel Player Salaries for Week \n#### just import the Thurs-Mon game salaries (they differ for each game type, and note they don't include Kickers in the Thurs-Mon)\nGo to a FanDuel Thurs-Mon competition and Download a csv of players, which we then upload and format in python.", "_____no_output_____" ] ], [ [ "###FORMAT/EXTRACT FANDUEL SALARY INFO###\n\ndef format_extract_FanDuel(df_fanduel_csv, week, year):\n #rename columns\n df_fanduel_csv.rename(columns={'Position':'POS',\n 'Nickname':'PLAYER',\n 'Team':'TEAM',\n 'Salary':'SALARY_FANDUEL'},\n inplace=True) \n \n #add week/season columns\n df_fanduel_csv['WEEK'] = week\n df_fanduel_csv['SEASON'] = year\n\n #fix names\n def modify_player_name_fanduel(player, pos):\n #defense comes in as 'Dallas Cowboys' or 'Tampa Bay Buccaneers' need to split and take last word, which is the team mascot, just 'Cowboys' or 'Buccaneers'\n if pos == 'D':\n player_formatted = player.split()[-1]\n \n else:\n #need to remove suffixes, etc. 
\n player_formatted = remove_suffixes_periods(player)\n \n #hard override of some player names that don't match to ESPN naming\n if player_formatted == 'Josh Bellamy':\n player_formatted = 'Joshua Bellamy'\n \n return player_formatted\n \n \n df_fanduel_csv['PLAYER'] = df_fanduel_csv.apply(\n lambda row: modify_player_name_fanduel(row['PLAYER'], row['POS']),\n axis='columns')\n\n \n #convert defense position label to espn standard\n df_fanduel_csv['POS'] = df_fanduel_csv['POS'].map(convert_defense_label)\n \n \n #for this function only extract 'PLAYER', 'POS', 'TEAM', 'SALARY', 'WEEK' (note Team is blank because webpage uses images for teams)\n df_fanduel_csv = df_fanduel_csv[['PLAYER', 'POS', 'TEAM', 'SALARY_FANDUEL', 'WEEK']].sort_values('SALARY_FANDUEL', ascending=False)\n\n\n return df_fanduel_csv", "_____no_output_____" ], [ "#WEEK 2 FANDUEL SALARIES\n\n#import csv from FanDuel\ndf_wk2_fanduel_csv = pd.read_csv('fanduel_salaries/Week2-FanDuel-NFL-2018-09-13-28179-players-list.csv')\n\n#format data to extract just player salary/player/pos/team and save the data\ndf_wk2_fanduel = format_extract_FanDuel(df_wk2_fanduel_csv, 1, 2018)\nsave_to_pickle(df_wk2_fanduel, 'pickle_archive', 'Week2_Salary_FanDuel')\nprint(df_wk2_fanduel.shape)\ndf_wk2_fanduel.head()", "Pickle saved to: pickle_archive/Week2_Salary_FanDuel_2018-9-16-7-35.pkl\n(669, 5)\n" ] ], [ [ "##### !!!FFtoday apparently doesn't do weekly projections for Defenses, so don't use it for now (can check back in future and see if updated)!!!\n\n#### Get FFtoday Player Fantasy Points Projections for Week \nGet from FFtoday's Projections Table\n\nhttp://www.fftoday.com/rankings/playerwkproj.php?Season=2018&GameWeek=2&PosID=10&LeagueID=107644\n- Season = year\n- GameWeek = week\n- PosID = the id for each position 'QB':10, 'RB':20, 'WR':30, 'TE':40, 'K':80, 'DEF':99\n- LeagueID = the scoring type, 107644 gives FFToday PPR scoring", "_____no_output_____" ] ], [ [ "# ##SCRAPE FFtoday PROJECTIONS TABLE FOR PROJECTED FANTASY PPR POINTS##\n\n# #input needs to be year as four digit number and week as number \n# #returns dataframe of scraped data\n# def scrape_weekly_player_projections_FFtoday(week, year):\n# #instantiate selenium driver\n# driver = instantiate_selenium_driver()\n \n# #initialize dataframe for all data\n# proj_ppr_fft = pd.DataFrame()\n \n# #url that returns info has different code for each position and also takes year variable\n# position_ids = {'QB':10, 'RB':20, 'WR':30, 'TE':40, 'K':80, 'DEF':99}\n\n\n# #cycle through each position webpage to create comprehensive dataframe\n# for pos, pos_id in position_ids.items():\n# url_start_pos = f\"http://www.fftoday.com/rankings/playerwkproj.php?Season={year}&GameWeek={week}&PosID={pos_id}&LeagueID=107644\"\n# driver.get(url_start_pos)\n \n# #each page only gets 50 results, so cycle through next button until next button no longer exists\n# while True:\n# #read in table - no classes for tables so just need to find the right table in the list of tables from the page - 5th index\n# proj_ppr_fft_table_page = pd.read_html(driver.page_source, header=[1])[5]\n \n# proj_ppr_fft_table_page['POS'] = pos\n \n \n# #need to rename columns for different positions before concat because of differing column conventions\n# if pos == 'QB':\n# proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',\n# 'Comp':'PASS_COMP', 'Att': 'PASS_ATT', 'Yard':'PASS_YD',\n# 'TD':'PASS_TD', 'INT':'PASS_INT',\n# 'Att.1':'RUSH_ATT', 'Yard.1':'RUSH_YD', 'TD.1':'RUSH_TD'},\n# inplace=True)\n# elif pos == 
'RB':\n# proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',\n# 'Att': 'RUSH_ATT', 'Yard':'RUSH_YD', 'TD':'RUSH_TD',\n# 'Rec':'RECV_RECPT', 'Yard.1':'RECV_YD', 'TD.1':'RECV_TD'},\n# inplace=True)\n \n# elif pos == 'WR':\n# proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',\n# 'Rec':'RECV_RECPT', 'Yard':'RECV_YD', 'TD':'RECV_TD',\n# 'Att':'RUSH_ATT', 'Yard.1':'RUSH_YD', 'TD.1':'RUSH_TD'},\n# inplace=True)\n \n# elif pos == 'TE':\n# proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',\n# 'Rec':'RECV_RECPT', 'Yard':'RECV_YD', 'TD':'RECV_TD'},\n# inplace=True)\n \n# elif pos == 'K':\n# proj_ppr_fft_table_page.rename(columns={'Player Sort First: Last:':'PLAYER',\n# 'FGM':'KICK_FG', 'FGA':'KICK_FGAtt', 'FG%':'KICK_FG%',\n# 'EPM':'KICK_XP', 'EPA':'KICK_XPAtt'},\n# inplace=True)\n \n# elif pos == 'DEF':\n# proj_ppr_fft_table_page['PLAYER'] = proj_ppr_fft_table_page['Team'] #+ ' D/ST' #add player name with team name plus D/ST tag\n# proj_ppr_fft_table_page.rename(columns={'Sack':'D/ST_Sack', 'FR':'D/ST_FR', 'DefTD':'D/ST_TD', 'INT':'D/ST_INT',\n# 'PA':'D/ST_PtsAll', 'PaYd/G':'D/ST_PaYd/G', 'RuYd/G':'D/ST_RuYd/G',\n# 'Safety':'D/ST_Sty', 'KickTD':'D/ST_RET_TD'},\n# inplace=True)\n \n \n# #add the position/page data to overall df\n# proj_ppr_fft = pd.concat([proj_ppr_fft, proj_ppr_fft_table_page],\n# ignore_index=True,\n# sort=False)\n \n \n# #click to next page to get next 50 results, but check that next button exists\n# try:\n# next_button = driver.find_element_by_link_text(\"Next Page\")\n# next_button.click()\n# except EC.NoSuchElementException:\n# break\n \n \n# driver.quit()\n \n# #add columns that give week/season\n# proj_ppr_fft['WEEK'] = week\n# proj_ppr_fft['SEASON'] = year\n \n \n# return proj_ppr_fft", "_____no_output_____" ], [ "# ###FORMAT/EXTRACT ACTUAL PLAYER PPR DATA###\n# #(you could make this more complex if want to extract some of the subdata)\n\n# def format_extract_PPR_player_points_FFtoday(df_scraped_ppr_fft):\n# # #optional data formatting for additional info\n# # #calculate completion percentage\n# # df_scraped_ppr_fft['PASS_COMP_PCT'] = df_scraped_ppr_fft.PASS_COMP/df_scraped_ppr_fft.PASS_ATT\n\n\n# # #calculate total PaYd and RuYd for season\n# # df_scraped_ppr_fft['D/ST_PaYdA'] = df_scraped_ppr_fft['D/ST_PaYd/G'] * 16\n# # df_scraped_ppr_fft['D/ST_RuYdA'] = df_scraped_ppr_fft['D/ST_RuYd/G'] * 16\n# # df_scraped_ppr_fft['D/ST_ToYd/G'] = df_scraped_ppr_fft['D/ST_PaYd/G'] + df_scraped_ppr_fft['D/ST_RuYd/G']\n# # df_scraped_ppr_fft['D/ST_ToYdA'] = df_scraped_ppr_fft['D/ST_ToYd/G'] * 16\n\n\n# #rename some of outstanding columns to match other dfs\n# df_scraped_ppr_fft.rename(columns={'Team':'TEAM', 'FPts':'FPTS_PPR_FFTODAY'},\n# inplace=True)\n\n# #remove any possible name suffixes to merge with other data better\n# df_scraped_ppr_fft['PLAYER'] = df_scraped_ppr_fft['PLAYER'].map(remove_suffixes_periods)\n \n \n# #for this function only extract 'PLAYER', 'POS', 'TEAM', 'PTS'\n# df_scraped_ppr_fft = df_scraped_ppr_fft[['PLAYER', 'POS', 'TEAM', 'FPTS_PPR_FFTODAY', 'WEEK']].sort_values('FPTS_PPR_FFTODAY', ascending=False)\n\n# return df_scraped_ppr_fft", "_____no_output_____" ] ], [ [ "#### Initial Database Stuff", "_____no_output_____" ] ], [ [ "# actual_ppr_df = pd.read_pickle('pickle_archive/Week1_Player_Actual_PPR_2018-9-13-6-41.pkl')\n# espn_final_df = pd.read_pickle('pickle_archive/Week1_PPR_Projections_ESPN_2018-9-13-6-46.pkl')\n# cbs_final_df = 
pd.read_pickle('pickle_archive/Week1_PPR_Projections_CBS_2018-9-13-17-45.pkl')", "_____no_output_____" ], [ "# cbs_final_df.head()", "_____no_output_____" ], [ "# from sqlalchemy import create_engine\n\n# disk_engine = create_engine('sqlite:///my_lite_store.db')\n# actual_ppr_df.to_sql('actual_ppr', disk_engine, if_exists='append')", "_____no_output_____" ], [ "# espn_final_df.to_sql('espn_final_df', disk_engine, if_exists='append')\n# cbs_final_df.to_sql('cbs_final_df', disk_engine, if_exists='append')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
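The record above encodes each provider's URL scheme only in scattered comments (the Fantasy Sharks `Segment = 627 + week` offset, `Position=99` for all positions, `scoring=2` for PPR, and the Scout `noppr` flag). The sketch below pulls that arithmetic into two stand-alone helpers so it can be checked without hitting the network; the helper names are hypothetical and the parameter meanings are taken from the notebook's own comments, so treat this as an illustration rather than part of the original pipeline.

```python
# Hypothetical helpers restating the URL rules documented in the record above.
# Assumption: the 2018 mapping (week 1 -> Segment 628) stays linear across weeks.

def sharks_projection_url(week: int) -> str:
    """Fantasy Sharks weekly projections (Position=99 = all positions, scoring=2 = PPR)."""
    segment = 627 + week  # Segment 628 corresponds to 2018 week 1
    return ("https://www.fantasysharks.com/apps/bert/forecasts/projections.php"
            f"?Position=99&scoring=2&Segment={segment}&uid=4")

def scout_rankings_url(position: str, week: int, ppr: bool = True) -> str:
    """Scout Fantasy Sports weekly rankings; noppr=false requests the PPR table."""
    noppr = "false" if ppr else "true"
    return ("https://fftoolbox.scoutfantasysports.com/football/rankings/"
            f"?pos={position}&week={week}&noppr={noppr}")

assert "Segment=629" in sharks_projection_url(2)
assert "noppr=false" in scout_rankings_url("RB", 2)
```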
e7da016d3c9eee3f8c3e8ba022534cc3e0a66042
125,897
ipynb
Jupyter Notebook
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
73098ada436cb4e16699f33068a8391a679b49e9
[ "BSD-3-Clause" ]
null
null
null
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
73098ada436cb4e16699f33068a8391a679b49e9
[ "BSD-3-Clause" ]
null
null
null
docs/ipynb/13-tutorial-skyrmion.ipynb
ubermag/mumaxc
73098ada436cb4e16699f33068a8391a679b49e9
[ "BSD-3-Clause" ]
null
null
null
142.740363
56,332
0.887837
[ [ [ "# Tutorial 13: Skyrmion in a disk\n\n> Interactive online tutorial:\n> [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/ubermag/oommfc/master?filepath=docs%2Fipynb%2Findex.ipynb)", "_____no_output_____", "In this tutorial, we compute and relax a skyrmion in an interfacial-DMI material in a confined, disk-like geometry.", "_____no_output_____" ] ], [ [ "import discretisedfield as df\nimport micromagneticmodel as mm\nimport oommfc as oc", "_____no_output_____" ] ], [ [ "We define the mesh as a cuboid through corner points `p1` and `p2`, and discretisation cell size `cell`.", "_____no_output_____" ] ], [ [ "region = df.Region(p1=(-50e-9, -50e-9, 0), p2=(50e-9, 50e-9, 10e-9))\nmesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9))", "_____no_output_____" ] ], [ [ "The mesh we defined is:", "_____no_output_____" ] ], [ [ "%matplotlib inline\nmesh.k3d()", "_____no_output_____" ] ], [ [ "Now, we can define the system object by first setting up the Hamiltonian:", "_____no_output_____" ] ], [ [ "system = mm.System(name=\"skyrmion\")\n\nsystem.energy = (\n    mm.Exchange(A=1.6e-11)\n    + mm.DMI(D=4e-3, crystalclass=\"Cnv\")\n    + mm.UniaxialAnisotropy(K=0.51e6, u=(0, 0, 1))\n    + mm.Demag()\n    + mm.Zeeman(H=(0, 0, 2e5))\n)", "_____no_output_____" ] ], [ [ "The disk geometry is set up by defining the saturation magnetisation (norm of the magnetisation field). For that, we define a function:", "_____no_output_____" ] ], [ [ "Ms = 1.1e6\n\n\ndef Ms_fun(pos):\n    \"\"\"Function to set magnitude of magnetisation: zero outside cylindric shape,\n    Ms inside cylinder.\n\n    Cylinder radius is 50nm.\n\n    \"\"\"\n    x, y, z = pos\n    if (x**2 + y**2) ** 0.5 < 50e-9:\n        return Ms\n    else:\n        return 0", "_____no_output_____" ] ], [ [ "The second function we need is the function to define the initial magnetisation, which is going to relax to a skyrmion.", "_____no_output_____" ] ], [ [ "def m_init(pos):\n    \"\"\"Function to set initial magnetisation direction:\n    -z inside cylinder (r=10nm),\n    +z outside cylinder.\n    y-component to break symmetry.\n\n    \"\"\"\n    x, y, z = pos\n    if (x**2 + y**2) ** 0.5 < 10e-9:\n        return (0, 0, -1)\n    else:\n        return (0, 0, 1)\n\n\n# create system with above geometry and initial magnetisation\nsystem.m = df.Field(mesh, dim=3, value=m_init, norm=Ms_fun)", "_____no_output_____" ] ], [ [ "The geometry is now:", "_____no_output_____" ] ], [ [ "system.m.norm.k3d_nonzero()", "_____no_output_____" ] ], [ [ "and the initial magnetisation is:", "_____no_output_____" ] ], [ [ "system.m.plane(\"z\").mpl()", "_____no_output_____" ] ], [ [ "Finally, we can minimise the energy and plot the magnetisation.", "_____no_output_____" ] ], [ [ "# minimize the energy\nmd = oc.MinDriver()\nmd.drive(system)\n\n# Plot relaxed configuration: vectors in z-plane\nsystem.m.plane(\"z\").mpl()", "2020/03/09 11:00: Running OOMMF (skyrmion.mif) ... (1.1 s)\n" ], [ "# Plot z-component only:\nsystem.m.z.plane(\"z\").mpl()", "_____no_output_____" ], [ "# 3d-plot of z-component\nsystem.m.z.k3d_voxels(filter_field=system.m.norm)", "_____no_output_____" ] ], [ [ "Finally, we can sample and plot the magnetisation along the line:", "_____no_output_____" ] ], [ [ "system.m.z.line(p1=(-49e-9, 0, 0), p2=(49e-9, 0, 0), n=20).mpl()", "_____no_output_____" ] ], [ [ "## Other\n\nMore details on various functionality can be found in the [API Reference](https://oommfc.readthedocs.io/en/latest/).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
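The skyrmion tutorial in the record above builds its disk geometry through `Ms_fun` (zero magnetisation outside a 50 nm disk) and seeds a 10 nm reversed core through `m_init`. As a quick, dependency-free cross-check of that geometry — an illustrative sketch only, since the original works on `discretisedfield` meshes rather than raw arrays — the same masks can be evaluated on the 5 nm cell centres with plain NumPy:

```python
import numpy as np

# Sketch: the notebook's disk/core masks evaluated on the 5 nm cell centres of
# the 100 nm x 100 nm region. The grid construction is an assumption matching
# the mesh defined in the tutorial above.
xs = np.arange(-50e-9 + 2.5e-9, 50e-9, 5e-9)  # cell centres along x and y
X, Y = np.meshgrid(xs, xs)
r = np.hypot(X, Y)

inside_disk = r < 50e-9  # where Ms_fun returns Ms (nonzero magnetisation)
core = r < 10e-9         # where m_init points along -z

print(f"cells with Ms != 0: {inside_disk.sum()} of {inside_disk.size}")
print(f"-z core cells: {core.sum()}")
```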
e7da0f3f6f29ec1cc3bd1d473a5f8ef70da538c6
125,678
ipynb
Jupyter Notebook
HW9/2019121004_LDAExcercise2.ipynb
avani17101/SMAI-Assignments
8d408911f964768bf50d965f881d10d37ac8f7f7
[ "MIT" ]
3
2021-03-05T12:28:39.000Z
2021-03-05T12:28:44.000Z
HW9/2019121004_LDAExcercise2.ipynb
avani17101/Statistical-Methods-in-AI
8d408911f964768bf50d965f881d10d37ac8f7f7
[ "MIT" ]
null
null
null
HW9/2019121004_LDAExcercise2.ipynb
avani17101/Statistical-Methods-in-AI
8d408911f964768bf50d965f881d10d37ac8f7f7
[ "MIT" ]
1
2021-03-05T12:21:26.000Z
2021-03-05T12:21:26.000Z
251.356
30,716
0.90899
[ [ [ "**Author: Avani Gupta <br>\nRoll: 2019121004**\n\n\n# Exercise 2\n\nIn Exercise 1, we computed the LDA for a multi-class problem, the IRIS dataset. In this exercise, we will now compare the LDA and PCA for the IRIS dataset.\n\nTo revisit, the iris dataset contains measurements for 150 iris flowers from three different species.\n\nThe three classes in the Iris dataset:\n1. Iris-setosa (n=50)\n2. Iris-versicolor (n=50)\n3. Iris-virginica (n=50)\n\nThe four features of the Iris dataset:\n1. sepal length in cm\n2. sepal width in cm\n3. petal length in cm\n4. petal width in cm\n\n<img src=\"iris_petal_sepal.png\">\n\n", "_____no_output_____" ] ], [ [ "from sklearn.datasets import make_classification\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns; sns.set();\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom numpy import pi", "_____no_output_____" ] ], [ [ "### Importing the dataset", "_____no_output_____" ] ], [ [ "url = \"https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data\"\nnames = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'Class']\ndataset = pd.read_csv(url, names=names)\n\ndataset.tail()", "_____no_output_____" ] ], [ [ "### Data preprocessing\n\nOnce the dataset is loaded into a pandas data frame object, the first step is to divide the dataset into features and corresponding labels and then divide the resultant dataset into training and test sets. The following code divides data into labels and feature set:", "_____no_output_____" ] ], [ [ "X = dataset.iloc[:, 0:4].values\ny = dataset.iloc[:, 4].values", "_____no_output_____" ] ], [ [ "The above script assigns the first four columns of the dataset, i.e. the feature set, to the X variable, while the values in the fifth column (labels) are assigned to the y variable.\n\nThe following code divides data into training and test sets:", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)", "_____no_output_____" ] ], [ [ "#### Feature Scaling\n\nWe will now perform feature scaling as part of data preprocessing too. 
For this task, we will be using scikit-learn's `StandardScaler`.", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import StandardScaler\n\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)\n", "_____no_output_____" ] ], [ [ "## Write your code below\n\nWrite your code to compute the PCA and LDA on the IRIS dataset below.", "_____no_output_____" ] ], [ [ "### WRITE YOUR CODE HERE ####\nfrom sklearn.preprocessing import LabelEncoder\nenc = LabelEncoder()\nlabel_encoder = enc.fit(y_train)\ny_train = label_encoder.transform(y_train) + 1 #labels are done in alphabetical order\n# 1: 'Iris-setosa', 2: 'Iris-versicolor', 3:'Iris-virginica'\nlabel_encoder = enc.fit(y_test)\ny_test = label_encoder.transform(y_test) + 1\nlabels = ['setosa', 'Versicolor', 'Virginica']", "_____no_output_____" ], [ "# LDA\n\nnum_classes = 3\nnum_classes_plus1 = num_classes + 1\ndef find_mean(X_train,y_train,num_classes_plus1):\n mean_arr = []\n for cl in range(1,num_classes_plus1):\n mean_arr.append(np.mean(X_train[y_train==cl], axis=0))\n return mean_arr\n\nmean_arr = find_mean(X_train,y_train,num_classes_plus1)\n\ndef within_classScatter(mean_arr,X_train,y_train,num_classes_plus1):\n S_w = np.zeros((num_classes_plus1,num_classes_plus1))\n for cl, mv in zip(range(1,num_classes_plus1),mean_arr):\n temp_s = np.zeros((num_classes_plus1,num_classes_plus1))\n for data in X_train[y_train==cl]:\n data, mv = data.reshape(num_classes_plus1,1), mv.reshape(num_classes_plus1,1) ### making them vertical vectors\n temp_s += (data-mv)@((data-mv).T)\n S_w += temp_s\n return S_w\nS_w = within_classScatter(mean_arr,X_train,y_train,num_classes_plus1) \nprint(\"within class scatter matrix S_w:\\n\")\nprint(S_w)\n\ndef btw_clasScatter(mean_arr,X_train,y_train,num_classes_plus1):\n total_mean = np.mean(X_train, axis=0).reshape(num_classes_plus1,1)\n S_b = np.zeros((num_classes_plus1,num_classes_plus1))\n for cl, mv in zip(range(1,num_classes_plus1), mean_arr):\n n = X_train[y_train==cl].shape[0]\n class_mean = mv.reshape(num_classes_plus1,1)\n S_b += n*((class_mean - total_mean)@(class_mean - total_mean).T)\n return S_b\nS_b = btw_clasScatter(mean_arr,X_train,y_train,num_classes_plus1)\nprint(\"between class scatter matrix S_b:\\n\")\nprint(S_b) \n\ndef takeTopEigen(S_w, S_b,k):\n eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_w).dot(S_b))\n eigs_sorted_in = np.argsort(eigen_vals)[::-1]\n eigen_vals = eigen_vals[eigs_sorted_in]\n eigen_vecs = eigen_vecs[:,eigs_sorted_in]\n weights = eigen_vecs[:,:k]\n return weights\n\ndef lda_vecs(X_train, y_train,weights):\n Xtrain_lda = X_train@weights\n Xtest_lda = X_test@weights\n return Xtrain_lda, Xtest_lda \n\nweights = takeTopEigen(S_w, S_b,2)\nXtrain_lda, Xtest_lda = lda_vecs(X_train, y_train,weights)\n\ndef centroid(Xtrain_lda,y_train):\n centroids = []\n for i in range(1,num_classes_plus1):\n centroids.append(np.mean(Xtrain_lda[y_train == i], axis = 0))\n return centroids\n\ncentroids = centroid(Xtrain_lda,y_train)\n\n\ndef pred(X_lda,centroids):\n y_pred = []\n for i in range(len(X_lda)):\n y_pred.append(np.argmin([ np.linalg.norm(centroids[0]-X_lda[i]), np.linalg.norm(centroids[1]-X_lda[i]), np.linalg.norm(centroids[2]-X_lda[i]) ])+1)\n return np.array(y_pred)\n\ndef accuracy(X_lda,y,centroids):\n y_pred = pred(X_lda,centroids)\n err = y-y_pred\n accuracy = len(err[err == 0])/len(err)\n return accuracy*100\n\nacc = accuracy(Xtrain_lda,y_train,centroids)\nprint(\"Accuracy on train set\",acc)\n\nacc = 
accuracy(Xtest_lda,y_test,centroids)\nprint(\"Accuracy on test set:\",acc)\n\ndef calc_class(Xtrain_lda,centroids):\n x_r, y_r = np.meshgrid(np.linspace(np.min(Xtrain_lda[:,0])-0.2, np.max(Xtrain_lda[:,1])+0.2,200), np.linspace(np.min(Xtrain_lda[:,1])-0.2, np.max(Xtrain_lda[:,1])+0.2,200))\n cl = np.zeros(x_r.shape)\n # finding which class the sample belongs to\n # cl is label vector of predicted class\n for i in range(len(x_r)):\n for j in range(len(y_r)):\n pt = [x_r[i,j], y_r[i,j]]\n clas = []\n for l in range(3):\n clas.append(np.linalg.norm(centroids[l]-pt))\n cl[i,j] = np.argmin(clas)+1\n return cl,x_r,y_r\n\ndef plot(X_lda,y,cl,title,strr): \n \n for clas in range(1,num_classes_plus1):\n plt.scatter(X_lda[y == clas][:,0],X_lda[y == clas][:,1],label=labels[clas-1])\n \n plt.xlabel(strr+\"1\")\n plt.ylabel(strr+\"2\")\n plt.title(title)\n \n plt.legend(loc='upper right')\n plt.contour(x_r,y_r,cl)\n \n plt.show()\nz,x_r,y_r = calc_class(Xtrain_lda,centroids)\nplot(Xtrain_lda,y_train,z,\"Training set\",\"LDA\")\nplot(Xtest_lda,y_test,z,\"Test set\",\"LDA\")\n", "within class scatter matrix S_w:\n\n[[44.52097054 30.99078596 13.79582236  7.7224602 ]\n [30.99078596 76.33479439  8.78955077 11.47388844]\n [13.79582236  8.78955077  7.0462481   4.00213667]\n [ 7.7224602  11.47388844  4.00213667  8.02030645]]\nbetween class scatter matrix S_b:\n\n[[ 75.47902946 -38.25871753  91.29722578  91.65558497]\n [-38.25871753  43.66520561 -54.10268478 -50.52279974]\n [ 91.29722578 -54.10268478 112.9537519  112.1744104 ]\n [ 91.65558497 -50.52279974 112.1744104  111.97969355]]\nAccuracy on train set 96.66666666666667\nAccuracy on test set: 96.66666666666667\n" ], [ "# PCA\nu, s, vt = np.linalg.svd(X_train, full_matrices=False)\nw_pca = vt.T[:,:2]\nXtrain_pca = X_train@w_pca\nXtest_pca = X_test@w_pca\ncntr = centroid(Xtrain_pca,y_train)\n\nacc = accuracy(Xtrain_pca,y_train,cntr)\nprint(\"Accuracy on train set\",acc)\ncl,x_r,y_r = calc_class(Xtrain_pca,cntr)\nplot(Xtrain_pca,y_train,cl,\"training set\",\"PCA\")\n\nacc = accuracy(Xtest_pca,y_test,cntr) #evaluate the PCA test set against the PCA centroids\nprint(\"Accuracy on test set:\",acc)\n\nplot(Xtest_pca,y_test,cl,\"test set\",\"PCA\")", "Accuracy on train set 85.0\n" ] ], [ [ "\n**Observations** <br>\nFrom the plots of both LDA and PCA, it is observed that in LDA the classes are well separated (more inter-class variance) and the intra-class variance is lower. In PCA, on the other hand, the entire dataset has a high variance. <br>\n\n**Differences between LDA and PCA**\n* PCA finds the axes with maximum variance in the entire data set\n\n* LDA finds the axes for best class separability (maximizing inter-class variance) and also minimizes intra-class variance\n\n* PCA is unsupervised whereas LDA is supervised, which further enhances LDA's capability to take into consideration the class labels, and hence their separation in the reduced-dimensions domain.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
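The exercise record above implements LDA and PCA from scratch, reporting ~96.7% LDA accuracy against 85% for PCA on the training split. A quick independent cross-check — not part of the assignment, and using scikit-learn's estimators rather than the hand-rolled ones — should show a similar qualitative picture on the equivalent split (random_state=0, standardised features, 2 components):

```python
# Illustrative cross-check with scikit-learn on the built-in iris data.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
sc = StandardScaler().fit(X_tr)
X_tr, X_te = sc.transform(X_tr), sc.transform(X_te)

# Supervised projection: LDA uses the class labels when choosing its axes.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)
print("LDA test accuracy:", lda.score(X_te, y_te))

# Unsupervised projection: PCA only maximises overall variance.
pca = PCA(n_components=2).fit(X_tr)
print("variance kept by 2 PCA components:", pca.explained_variance_ratio_.sum())
```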
e7da13478d41054f5acb0566845a0f322138d390
14,099
ipynb
Jupyter Notebook
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
a9471097b1bdf00d3e052dea399d23b68ec806d4
[ "MIT" ]
82
2019-11-15T10:53:09.000Z
2022-01-21T23:34:26.000Z
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
a9471097b1bdf00d3e052dea399d23b68ec806d4
[ "MIT" ]
7
2019-11-20T03:06:15.000Z
2020-10-22T15:42:58.000Z
11. Recurrent Neural Networks - Python.ipynb
AnneliesseMorales/ms-learn-ml-crash-course-python
a9471097b1bdf00d3e052dea399d23b68ec806d4
[ "MIT" ]
284
2019-10-22T20:21:05.000Z
2022-01-21T21:55:49.000Z
79.655367
1,546
0.636286
[ [ [ "Exercise 11 - Recurrent Neural Networks\n========\n\nA recurrent neural network (RNN) is a class of neural network that excels when your data can be treated as a sequence - such as text, music, speech recognition, connected handwriting, or data over a time period. \n\nRNNs can analyse or predict a word based on the previous words in a sentence - they allow a connection between previous information and current information.\n\nThis exercise looks at implementing an LSTM RNN to generate new characters after learning from a large sample of text. LSTMs are a special type of RNN which dramatically improves the model’s ability to connect previous data to current data where there is a long gap.\n\nWe will train an RNN model using a novel written by H. G. Wells - The Time Machine.", "_____no_output_____", "Step 1\n------\n\nLet's start by loading our libraries and text file. This might take a few minutes.\n\n#### Run the cell below to import the necessary libraries.", "_____no_output_____" ] ], [ [ "%%capture\n# Run this!\nfrom keras.models import load_model\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation, LSTM\nfrom keras.callbacks import LambdaCallback, ModelCheckpoint\nimport numpy as np\nimport random, sys, io, string", "_____no_output_____" ] ], [ [ "#### Replace the `<addFileName>` with `The Time Machine`", "_____no_output_____" ] ], [ [ "###\n# REPLACE THE <addFileName> BELOW WITH The Time Machine\n###\ntext = io.open('Data/<addFileName>.txt', encoding = 'UTF-8').read()\n###\n\n# Let's have a look at some of the text\nprint(text[0:198])\n\n# This cuts out punctuation and makes all the characters lower case\ntext = text.lower().translate(str.maketrans(\"\", \"\", string.punctuation))\n\n# Character index dictionary\ncharset = sorted(list(set(text)))\nindex_from_char = dict((c, i) for i, c in enumerate(charset))\nchar_from_index = dict((i, c) for i, c in enumerate(charset))\n\nprint('text length: %s characters' %len(text))\nprint('unique characters: %s' %len(charset))", "_____no_output_____" ] ], [ [ "Expected output: \n```The Time Traveller (for so it will be convenient to speak of him) was expounding a recondite matter to us. His pale grey eyes shone and twinkled, and his usually pale face was flushed and animated.\ntext length: 174201 characters\nunique characters: 39```\n\nStep 2\n-----\n\nNext we'll divide the text into sequences of 40 characters.\n\nThen for each sequence we'll make a training set - the following character will be the correct output for the test set.\n\n### In the cell below replace:\n#### 1. `<sequenceLength>` with `40`\n#### 2. `<step>` with `4`\n#### and then __run the code__. 
", "_____no_output_____" ] ], [ [ "###\n# REPLACE <sequenceLength> WITH 40 AND <step> WITH 4\n###\nsequence_length = <sequenceLength>\nstep = <step>\n###\n\nsequences = []\ntarget_chars = []\nfor i in range(0, len(text) - sequence_length, step):\n sequences.append([text[i: i + sequence_length]])\n target_chars.append(text[i + sequence_length])\nprint('number of training sequences:', len(sequences))", "_____no_output_____" ] ], [ [ "Expected output:\n`number of training sequences: 43541`\n\n#### Replace `<addSequences>` with `sequences` and run the code.", "_____no_output_____" ] ], [ [ "# One-hot vectorise\n\nX = np.zeros((len(sequences), sequence_length, len(charset)), dtype=np.bool)\ny = np.zeros((len(sequences), len(charset)), dtype=np.bool)\n\n###\n# REPLACE THE <addSequences> BELOW WITH sequences\n###\nfor n, sequence in enumerate(<addSequences>):\n###\n for m, character in enumerate(list(sequence[0])):\n X[n, m, index_from_char[character]] = 1\n y[n, index_from_char[target_chars[n]]] = 1", "_____no_output_____" ] ], [ [ "Step 3\n------\n\nLet's build our model, using a single LSTM layer of 128 units. We'll keep the model simple for now, so that training does not take too long.\n\n### In the cell below replace:\n#### 1. `<addLSTM>` with `LSTM`\n#### 2. `<addLayerSize>` with `128`\n#### 3. `<addSoftmaxFunction>` with `'softmax`\n#### and then __run the code__.", "_____no_output_____" ] ], [ [ "model = Sequential()\n\n###\n# REPLACE THE <addLSTM> BELOW WITH LSTM (use uppercase) AND <addLayerSize> WITH 128\n###\nmodel.add(<addLSTM>(<addLayerSize>, input_shape = (X.shape[1], X.shape[2])))\n###\n\n###\n# REPLACE THE <addSoftmaxFunction> with 'softmax' (INCLUDING THE QUOTES)\n###\nmodel.add(Dense(y.shape[1], activation = <addSoftMaxFunction>))\n###\n\nmodel.compile(loss = 'categorical_crossentropy', optimizer = 'Adam')", "_____no_output_____" ] ], [ [ "The code below generates text at the end of an epoch (one training cycle). This allows us to see how the model is performing as it trains. 
If you're making a large neural network with a long training time, it's useful to check in on the model and see whether the generated text is legible as it trains, as overtraining may occur and the output of the model turns to nonsense.\n\nThe code below will also save a model if it is the best performing model, so we can use it later.\n\n#### Run the code below, but don't change it", "_____no_output_____" ] ], [ [ "# Run this, but do not edit.\n# It helps generate the text and save the model epochs.\n\n# Generate new text\ndef on_epoch_end(epoch, _):\n    diversity = 0.5\n    print('\\n### Generating text with diversity %0.2f' %(diversity))\n\n    start = random.randint(0, len(text) - sequence_length - 1)\n    seed = text[start: start + sequence_length]\n    print('### Generating with seed: \"%s\"' %seed[:40])\n\n    output = seed[:40].lower().translate(str.maketrans(\"\", \"\", string.punctuation))\n    print(output, end = '')\n\n    for i in range(500):\n        x_pred = np.zeros((1, sequence_length, len(charset)))\n        for t, char in enumerate(output):\n            x_pred[0, t, index_from_char[char]] = 1.\n\n        predictions = model.predict(x_pred, verbose=0)[0]\n        exp_preds = np.exp(np.log(np.asarray(predictions).astype('float64')) / diversity)\n        next_index = np.argmax(np.random.multinomial(1, exp_preds / np.sum(exp_preds), 1))\n        next_char = char_from_index[next_index]\n\n        output = output[1:] + next_char\n\n        print(next_char, end = '')\n    print()\nprint_callback = LambdaCallback(on_epoch_end=on_epoch_end)\n\n# Save the model\ncheckpoint = ModelCheckpoint('Models/model-epoch-{epoch:02d}.hdf5', \n                             monitor = 'loss', verbose = 1, save_best_only = True, mode = 'min')", "_____no_output_____" ] ], [ [ "The code below will start training the model. This may take a long time. Feel free to stop the training with the `square stop button` to the right of the `Run button` in the toolbar.\n\nLater in the exercise, we will load a pretrained model.\n\n### In the cell below replace:\n#### 1. `<addPrintCallback>` with `print_callback`\n#### 2. `<addCheckpoint>` with `checkpoint`\n#### and then __run the code__.", "_____no_output_____" ] ], [ [ "###\n# REPLACE <addPrintCallback> WITH print_callback AND <addCheckpoint> WITH checkpoint\n###\nmodel.fit(X, y, batch_size = 128, epochs = 3, callbacks = [<addPrintCallback>, <addCheckpoint>])\n###", "_____no_output_____" ] ], [ [ "The output won't appear to be very good. But then, this dataset is small, and we have trained it only for a short time using a rather small RNN. How might it look if we upscaled things?\n\nStep 5\n------\n\nWe could improve our model by:\n* Having a larger training set.\n* Increasing the number of LSTM units.\n* Training it for longer.\n* Experimenting with different activation functions, optimization functions, etc.\n\nTraining this would still take far too long on most computers to see good results - so we've trained a model already for you.\n\nThis model uses a different dataset - a few of the King Arthur tales pasted together. The model used:\n* sequences of 50 characters\n* Two LSTM layers (512 units each)\n* A dropout of 0.5 after each LSTM layer\n* Only 30 epochs (we'd recommend 100-200)\n\nLet's try importing this model that has already been trained.\n\n#### Replace `<addLoadModel>` with `load_model` and run the code.", "_____no_output_____" ] ], [ [ "from keras.models import load_model\nprint(\"loading model... 
\", end = '')\n\n###\n# REPLACE <addLoadModel> BELOW WITH load_model\n###\nmodel = <addLoadModel>('Models/arthur-model-epoch-30.hdf5')\n###\nmodel.compile(loss = 'categorical_crossentropy', optimizer = 'Adam')\n###\n\nprint(\"model loaded\")", "_____no_output_____" ] ], [ [ "Step 6\n-------\n\nNow let's use this model to generate some new text!\n\n#### Replace `<addFilePath>` with `'Data/Arthur tales.txt'`", "_____no_output_____" ] ], [ [ "###\n# REPLACE <addFilePath> BELOW WITH 'Data/Arthur tales.txt' (INCLUDING THE QUOTATION MARKS)\n###\ntext = io.open(<addFilePath>, encoding='UTF-8').read()\n###\n\n# Cut out punctuation and make lower case\ntext = text.lower().translate(str.maketrans(\"\", \"\", string.punctuation))\n\n# Character index dictionary\ncharset = sorted(list(set(text)))\nindex_from_char = dict((c, i) for i, c in enumerate(charset))\nchar_from_index = dict((i, c) for i, c in enumerate(charset))\n\nprint('text length: %s characters' %len(text))\nprint('unique characters: %s' %len(charset))", "_____no_output_____" ] ], [ [ "### In the cell below replace:\n#### 1. `<sequenceLength>` with `50`\n#### 2. `<writeSentence>` with a sentence of your own, at least 50 characters long.\n#### 3. `<numCharsToGenerate>` with the number of characters you want to generate (choose a large number, like 1500)\n#### and then __run the code__.", "_____no_output_____" ] ], [ [ "# Generate text\n\ndiversity = 0.5\nprint('\\n### Generating text with diversity %0.2f' %(diversity))\n\n###\n# REPLACE <sequenceLength> BELOW WITH 50\n###\nsequence_length = <sequenceLength>\n###\n\n# Next we'll make a starting point for our text generator\n\n###\n# REPLACE <writeSentence> WITH A SENTENCE OF AT LEAST 50 CHARACTERS\n###\nseed = \"<writeSentence>\"\n###\n\nseed = seed.lower().translate(str.maketrans(\"\", \"\", string.punctuation))\n\n###\n# OR, ALTERNATIVELY, UNCOMMENT THE FOLLOWING TWO LINES AND GRAB A RANDOM STRING FROM THE TEXT FILE\n###\n\n#start = random.randint(0, len(text) - sequence_length - 1)\n#seed = text[start: start + sequence_length]\n\n###\n\nprint('### Generating with seed: \"%s\"' %seed[:40])\n\noutput = seed[:sequence_length].lower().translate(str.maketrans(\"\", \"\", string.punctuation))\nprint(output, end = '')\n\n###\n# REPLACE THE <numCharsToGenerate> BELOW WITH THE NUMBER OF CHARACTERS WE WISH TO GENERATE, e.g. 1500\n###\nfor i in range(<numCharsToGenerate>):\n###\n x_pred = np.zeros((1, sequence_length, len(charset)))\n for t, char in enumerate(output):\n x_pred[0, t, index_from_char[char]] = 1.\n\n predictions = model.predict(x_pred, verbose=0)[0]\n exp_preds = np.exp(np.log(np.asarray(predictions).astype('float64')) / diversity)\n next_index = np.argmax(np.random.multinomial(1, exp_preds / np.sum(exp_preds), 1))\n next_char = char_from_index[next_index]\n\n output = output[1:] + next_char\n\n print(next_char, end = '')\nprint()", "_____no_output_____" ] ], [ [ "How does it look? Does it seem intelligible?\n\nConclusion\n--------\n\nWe have trained an RNN that learns to predict characters based on a text sequence. We have trained a lightweight model from scratch, as well as imported a pre-trained model and generated new text from that.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
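The generation loop in the record above samples each next character through a `diversity` (temperature) parameter rather than taking the plain argmax. The snippet below isolates just that sampling step as a self-contained sketch so its effect can be inspected without Keras or a trained model; the three-way toy distribution is made up for illustration.

```python
import numpy as np

# Stand-alone sketch of the temperature sampling used in on_epoch_end above.
def sample_with_diversity(predictions, diversity=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    # Divide the log-probabilities by the temperature, then re-normalise.
    logits = np.log(np.asarray(predictions, dtype="float64")) / diversity
    probs = np.exp(logits)
    probs /= probs.sum()
    # Draw one sample from the reshaped distribution, as the notebook does.
    return int(np.argmax(rng.multinomial(1, probs)))

p = [0.1, 0.2, 0.7]                   # toy softmax output over 3 characters
print(sample_with_diversity(p, 0.2))  # low diversity: almost always the argmax
print(sample_with_diversity(p, 2.0))  # high diversity: much flatter sampling
```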
e7da2f19a4510bc1aaa0fe259eb62f41cbdc09fa
14,584
ipynb
Jupyter Notebook
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
70e286405779a89a637c98d025ab8939787c2dd5
[ "MIT" ]
14
2021-02-09T09:35:18.000Z
2022-02-23T08:54:39.000Z
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
70e286405779a89a637c98d025ab8939787c2dd5
[ "MIT" ]
null
null
null
assignments/10_NLP_applications_lab.ipynb
bmeaut/python_nlp_2021_spring
70e286405779a89a637c98d025ab8939787c2dd5
[ "MIT" ]
13
2021-02-09T11:00:38.000Z
2022-02-21T16:18:13.000Z
30.320166
444
0.564111
[ [ [ "# Introduction to Python and Natural Language Technologies\n\n__Laboratory 10 - NLP applications, Dependency parsing__\n\n__April 22, 2021__\n\nDuring this laboratory you will have to implement various evaluation methods and use them to measure the performance of pretrained models.", "_____no_output_____" ] ], [ [ "import stanza\nimport spacy\nfrom gensim.summarization import summarizer as gensim_summarizer\nfrom transformers import pipeline\nimport nltk\nimport conllu\nimport os\nimport numpy as np\nimport requests", "_____no_output_____" ], [ "stanza.download('en')\nstanza_nlp = stanza.Pipeline('en')\nspacy_nlp = spacy.load(\"en_core_web_sm\")", "_____no_output_____" ] ], [ [ "Let's download the UD treebanks if you do not have them already. We are going to use them for evaluations.", "_____no_output_____" ] ], [ [ "url = \"https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-3424/ud-treebanks-v2.7.tgz\"\ntgz = 'ud-treebanks-v2.7.tgz'\ndirectory = 'ud_treebanks'\nif not os.path.exists(directory):\n    import tarfile\n    response = requests.get(url, stream=True)\n    with open(tgz, 'wb') as ud:\n        ud.write(response.content)\n    os.mkdir(directory)\n    with tarfile.open(tgz, 'r:gz') as _tar:\n        for member in _tar:\n            if member.isdir():\n                continue\n            fname = member.name.rsplit('/',1)[1]\n            _tar.makefile(member, os.path.join(directory, fname))", "_____no_output_____" ], [ "data = \"ud_treebanks/en_ewt-ud-train.conllu\"\nwith open(data) as conll_data:\n    trees = conllu.parse(conll_data.read())", "_____no_output_____" ], [ "print(trees[0].serialize())", "_____no_output_____" ] ], [ [ "## Evaluation Methods", "_____no_output_____", "### 1. F-score\n\nProbably the most relevant measure we can use when we are evaluating classifiers.\n\nImplement the function below. The function takes two iterables and returns a detailed dictionary that contains the True Positive, False Positive, False Negative, Precision, Recall and F-score values for each unique class in the gold list. 
Additionally, the dictionary should contain the micro and macro precision, recall and F-score values as well.\n\nYou can read about the F-measure [here](https://en.wikipedia.org/wiki/F-score).\n\nHelp for the micro-macro averages: https://tomaxent.com/2018/04/27/Micro-and-Macro-average-of-Precision-Recall-and-F-Score/.\n\nExample:", "_____no_output_____" ] ], [ [ "f_dict = {\n    0: {'tp': 4, 'fp': 0, 'fn': 0, 'precision': 1.0, 'recall': 1.0, 'f': 1.0}, \n    1: {'tp': 4, 'fp': 0, 'fn': 0, 'precision': 1.0, 'recall': 1.0, 'f': 1.0}, \n    2: {'tp': 4, 'fp': 0, 'fn': 0, 'precision': 1.0, 'recall': 1.0, 'f': 1.0}, \n    'MICRO AVG': {'precision': 1.0, 'recall': 1.0, 'f': 1.0}, \n    'MACRO AVG': {'precision': 1.0, 'recall': 1.0, 'f': 1.0}\n}\n\nf_dict2 = {\n    0: {'tp': 3, 'fp': 1, 'fn': 1, 'precision': 0.75, 'recall': 0.75, 'f': 0.75},\n    1: {'tp': 3, 'fp': 1, 'fn': 1, 'precision': 0.75, 'recall': 0.75, 'f': 0.75},\n    2: {'tp': 2, 'fp': 2, 'fn': 2, 'precision': 0.5, 'recall': 0.5, 'f': 0.5},\n    'MICRO AVG': {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f': 0.6666666666666666},\n    'MACRO AVG': {'precision': 0.6666666666666666, 'recall': 0.6666666666666666, 'f': 0.6666666666666666}\n\n}", "_____no_output_____" ], [ "def f_score(gold, predicted):\n    raise NotImplementedError()", "_____no_output_____" ], [ "gold = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0, 1, 2]\npred = [0, 2, 1, 1, 2, 0, 0, 2, 1, 0, 1, 2]\n\nassert f_dict == f_score(gold, gold)\nassert f_dict2 == f_score(gold, pred)", "_____no_output_____" ] ], [ [ "### 1.1 Evaluate a pretrained POS tagger using the example\n\nChoose an existing POS tagger (e.g. stanza, spacy, nltk) and predict the POS tags of the sentence given below. Compare the results to the reference below using the f_score function above. Keep in mind that there are different POS formats, and you should compare them accordingly.", "_____no_output_____" ] ], [ [ "sentence = trees[0].metadata[\"text\"]\nupos = [token['upos'] for token in trees[0]]\nxpos = [token['xpos'] for token in trees[0]]\n\nprint(f'{sentence}\\n{upos}\\n{xpos}')", "_____no_output_____" ], [ "# Your solution here", "_____no_output_____" ] ], [ [ "### 2. ROUGE-N score\n\nWe usually use the ROUGE score to evaluate summaries, comparing the reference summaries and the generated summaries. Write a function that gets a reference summary, a generated summary and a number N. The number represents the length of n-grams to compare. The function should return a dictionary containing the precision, recall and f-score of the ROUGE-N score. (In practice, the most important part of the ROUGE score is its recall.)\n\n\\begin{equation*}\nRecall = \\frac{overlapping\\ ngrams}{all\\ ngrams\\ in\\ the\\ reference\\ summary}\n\\end{equation*}\n\n\\begin{equation*}\nPrecision = \\frac{overlapping\\ ngrams}{all\\ ngrams\\ in\\ the\\ generated\\ summary}\n\\end{equation*}\n\n\\begin{equation*}\nF1 = 2 * \\frac{Precision * Recall}{Precision + Recall}\n\\end{equation*}\n\nYou can read further about the ROUGE-N scoring method [here](https://www.aclweb.org/anthology/W04-1013.pdf).\n\nYou are encouraged to implement and use the helper functions outlined below. 
You can use any tokenizer you'd like for this exercise.\n\nExample results of the rouge_n function:", "_____no_output_____" ] ], [ [ "n2 = {'precision': 0.75, 'recall': 0.6, 'f': 0.6666666666666665}", "_____no_output_____" ], [ "def get_ngram(text, n):\n    raise NotImplementedError()\n\ndef rouge_n(reference, generated, n):\n    raise NotImplementedError()\n", "_____no_output_____" ], [ "reference = 'this cat is absolutely adorable today'\ngenerated = 'this cat is adorable today'\nassert n2 == rouge_n(reference, generated, 2)", "_____no_output_____" ] ], [ [ "### 2.1 Evaluate a pretrained summarizer using the example\n\nChoose a summarizer (e.g. gensim, huggingface) and summarize the following text (taken from the [CNN-Daily Mail dataset](https://cs.nyu.edu/~kcho/DMQA/)) and calculate the ROUGE-2 score of the summary.", "_____no_output_____" ] ], [ [ "article = \"\"\"Manchester City starlet Devante Cole, son of Andy Cole, has joined Barnsley on loan until January.\nCity have also confirmed that £3m midfielder Bruno Zuculini has joined Valencia on loan for the rest of the season. \nMeanwhile Juventus and Roma remain keen on signing Matija Nastasic.\nOn the move: Manchester City striker Devante Cole, son of Andy, has joined Barnsley on loan\"\"\"\n\nreference = \"\"\"Devante Cole has joined Barnsley on loan until January.\nSon of Andy Cole has impressed in the City youth ranks.\nCity have also confirmed that Bruno Zuculini has joined Valencia.\"\"\"", "_____no_output_____" ], [ "# Your solution here", "_____no_output_____" ] ], [ [ "### 3. Dependency parse evaluation\n\nWe've discussed the two methods used to evaluate dependency parsers.\n\nReminder:\n\n - Labeled attachment score (LAS): the percentage of words that are assigned both the correct syntactic head and the correct dependency label\n - Unlabeled attachment score (UAS): the percentage of words that are assigned the correct syntactic head", "_____no_output_____", "### 3.1 UAS method\n\nImplement the UAS method for evaluating graphs!\nThe input of the function should be two graphs, both formatted in a simplified conll-dict format, where the keys are the indices of the tokens and the values are tuples consisting of the head and the dependency relation.", "_____no_output_____" ] ], [ [ "def convert_conllu(tree):\n    return {token['id']: (token['head'], token['deprel']) for token in tree}", "_____no_output_____" ], [ "reference_graph = convert_conllu(trees[0])\nreference_graph", "_____no_output_____" ], [ "pred = {1: (0, 'root'), 2: (1, 'punct'), 3: (1, 'flat'), 4: (1, 'punct'), 5: (6, 'amod'),\n        6: (7, 'obj'), 7: (1, 'parataxis'), 8: (7, 'obj'), 9: (8, 'flat'), 10: (8, 'flat'),\n        11: (8, 'punct'), 12: (8, 'flat'), 13: (8, 'punct'), 14: (15, 'det'), 15: (8, 'appos'),\n        16: (18, 'case'), 17: (10, 'det'), 18: (7, 'obl'), 19: (8, 'case'), 20: (21, 'det'),\n        21: (18, 'obl'), 22: (23, 'case'), 23: (21, 'nmod'), 24: (21, 'punct'), 25: (28, 'case'),\n        26: (28, 'det'), 27: (28, 'amod'), 28: (8, 'obl'), 29: (1, 'punct')}", "_____no_output_____" ], [ "def uas(gold, predicted):\n    raise NotImplementedError()", "_____no_output_____" ] ], [ [ "### 3.2 LAS method\nImplement the LAS method as well, similarly to the previous evaluation script.", "_____no_output_____" ] ], [ [ "def las(gold, predicted):\n    raise NotImplementedError()", "_____no_output_____" ], [ "assert 26/29 == uas(reference_graph, pred)\nassert 24/29 == las(reference_graph, pred)", "_____no_output_____" ] ], [ [ "# ================ PASSING LEVEL ====================", 
"_____no_output_____" ], [ "### 3.3 Try out the evaluation methods with stanza\n\nEvaluate the predictions of stanza on the given example! To do so, you will have to convert the output of stanza to be in the same format as the expected input of the uas and las methods. We recommend the stanza [documentation](https://stanfordnlp.github.io/stanza/tutorials.html) to be able to do this.", "_____no_output_____" ] ], [ [ "def stanza_converter(stanza_doc):\n    raise NotImplementedError()", "_____no_output_____" ], [ "# Your solution here", "_____no_output_____" ] ], [ [ "### 3.4 Compare the accuracy of stanza and spacy\n\nRun the spacy dependency parser on the same input as before and evaluate the performance. To do so you will have to implement a function that converts the output of spacy (see the [documentation](https://spacy.io/usage/linguistic-features#dependency-parse)) to the appropriate format and check the output of the las and uas methods.", "_____no_output_____" ] ], [ [ "def spacy_converter(spacy_doc):\n    raise NotImplementedError()", "_____no_output_____" ], [ "# Your solution here", "_____no_output_____" ] ], [ [ "# ================ EXTRA LEVEL ====================", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e7da44bd0f97dc46e16ae39eb24127b2a8e442d7
59,839
ipynb
Jupyter Notebook
content/c2/.ipynb_checkpoints/construction-checkpoint.ipynb
JeffFessler/mlbook
a56da46354b7dc61fcfc3a134f55a803c37d919e
[ "MIT" ]
970
2020-08-31T17:28:22.000Z
2022-03-26T11:41:17.000Z
content/c2/.ipynb_checkpoints/construction-checkpoint.ipynb
JeffFessler/mlbook
a56da46354b7dc61fcfc3a134f55a803c37d919e
[ "MIT" ]
14
2020-08-31T17:56:31.000Z
2021-11-15T03:13:25.000Z
content/c2/.ipynb_checkpoints/construction-checkpoint.ipynb
JeffFessler/mlbook
a56da46354b7dc61fcfc3a134f55a803c37d919e
[ "MIT" ]
193
2020-08-31T16:25:22.000Z
2022-02-02T18:47:49.000Z
289.077295
27,024
0.92174
[ [ [ "# Construction", "_____no_output_____" ] ], [ [ "import numpy as np \nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "def sign(x):\n    return (-1)**(x < 0) # elementwise sign: -1 for negative entries, +1 otherwise (including 0)\ndef make_standard(X):\n    means = X.mean(0)\n    stds = X.std(0)\n    return (X - means)/stds", "_____no_output_____" ], [ "class RegularizedRegression:\n    \n    def __init__(self, name = None):\n        self.name = name\n    \n    def record_info(self, X_train, y_train, lam, intercept, standardize):\n        \n        if standardize == True: # standardize (if specified)\n            X_train = make_standard(X_train)\n        if intercept == False: # add intercept (if not already included)\n            ones = np.ones(len(X_train)).reshape(len(X_train), 1) # column of ones \n            X_train = np.concatenate((ones, X_train), axis = 1)\n        self.X_train = np.array(X_train)\n        self.y_train = np.array(y_train)\n        self.N, self.D = self.X_train.shape\n        self.lam = lam\n    \n    def fit_ridge(self, X_train, y_train, lam = 0, intercept = False, standardize = False):\n        \n        # record data and dimensions\n        self.record_info(X_train, y_train, lam, intercept, standardize)\n        \n        # estimate parameters with the ridge closed form: (X'X + lam*I)^{-1} X'y\n        XtX = np.dot(self.X_train.T, self.X_train)\n        XtX_plus_lam_inverse = np.linalg.inv(XtX + self.lam*np.eye(self.D))\n        Xty = np.dot(self.X_train.T, self.y_train)\n        self.beta_hats = np.dot(XtX_plus_lam_inverse, Xty)\n        self.y_train_hat = np.dot(self.X_train, self.beta_hats)\n        \n        # calculate loss\n        self.L = .5*np.sum((self.y_train - self.y_train_hat)**2) + (self.lam/2)*np.linalg.norm(self.beta_hats)**2\n        \n    def fit_lasso(self, X_train, y_train, lam = 0, n_iters = 10000, lr = 0.001, intercept = False, standardize = False):\n\n        # record data and dimensions\n        self.record_info(X_train, y_train, lam, intercept, standardize)\n        \n        # estimate parameters by gradient descent, using sign(beta) as the subgradient of |beta|\n        beta_hats = np.random.randn(self.D)\n        for i in range(n_iters):\n            dL_dbeta = -self.X_train.T @ (self.y_train - (self.X_train @ beta_hats)) + self.lam*sign(beta_hats)\n            beta_hats -= lr*dL_dbeta \n        self.beta_hats = beta_hats\n        self.y_train_hat = np.dot(self.X_train, self.beta_hats)\n        \n        # calculate loss\n        self.L = .5*np.sum((self.y_train - self.y_train_hat)**2) + self.lam*np.sum(np.abs(self.beta_hats))\n\n", "_____no_output_____" ], [ "mpg = sns.load_dataset('mpg') # load mpg dataframe\nmpg = mpg.dropna(axis = 0).reset_index(drop = True) # drop null values\nmpg = mpg.loc[:,mpg.dtypes != object] # keep only numeric columns\nX_train = mpg.drop(columns = 'mpg') # get predictor variables\ny_train = mpg['mpg'] # get outcome variable", "_____no_output_____" ], [ "lam = 10\nridge_model = RegularizedRegression()\nridge_model.fit_ridge(X_train, y_train, lam)\n", "_____no_output_____" ], [ "lasso_model = RegularizedRegression()\nlasso_model.fit_lasso(X_train, y_train, lam, standardize = True)\n", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nsns.scatterplot(ridge_model.y_train, ridge_model.y_train_hat)\nax.set_xlabel(r'$y$', size = 16)\nax.set_ylabel(r'$\\hat{y}$', rotation = 0, size = 16, labelpad = 15)\nax.set_title(r'Ridge $y$ vs. $\\hat{y}$', size = 20, pad = 10)\nsns.despine()", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nsns.scatterplot(lasso_model.y_train, lasso_model.y_train_hat)\nax.set_xlabel(r'$y$', size = 16)\nax.set_ylabel(r'$\\hat{y}$', rotation = 0, size = 16, labelpad = 15)\nax.set_title(r'LASSO $y$ vs. $\\hat{y}$', size = 20, pad = 10)\nsns.despine()", "_____no_output_____" ],
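 [ "# A quick sanity check added for illustration (a sketch, not part of the original\n# construction): with the same lam, the L1 penalty tends to push some coefficients\n# toward exactly zero, while ridge only shrinks them. Note the lasso model above was\n# fit on standardized features, so the two coefficient vectors are on different scales.\nprint('ridge coefficients:', ridge_model.beta_hats)\nprint('lasso coefficients:', lasso_model.beta_hats)", "_____no_output_____" ] ] ]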
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7da4998221a2ae40dbc061a153d4cd94a3eb621
64,624
ipynb
Jupyter Notebook
scripts.ipynb
giuliacasale/homework1
a55de58dbd449d65fc412e9e31a439b13be6da85
[ "MIT" ]
null
null
null
scripts.ipynb
giuliacasale/homework1
a55de58dbd449d65fc412e9e31a439b13be6da85
[ "MIT" ]
null
null
null
scripts.ipynb
giuliacasale/homework1
a55de58dbd449d65fc412e9e31a439b13be6da85
[ "MIT" ]
null
null
null
26.259244
178
0.457106
[ [ [ "# PROBLEM 1\n\n## INTRODUCTION", "_____no_output_____" ] ], [ [ "#Say \"Hello, World!\" With Python\nprint(\"Hello, World!\")", "_____no_output_____" ], [ "#Python If-Else\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\nif __name__ == '__main__':\n n = int(input().strip())\n\nif 1 <= n <= 100:\n if n % 2 != 0 or (n % 2 == 0 and 6<=n<=20):\n print(\"Weird\")\n elif n % 2 == 0 and (2<=n<=5 or n>20):\n print(\"Not Weird\")", "_____no_output_____" ], [ "#Arithmetic Operators\n\nif __name__ == '__main__':\n a = int(input())\n b = int(input())\n\nif 1<=a<=10**10 and 1<=b<=10**10:\n print(a+b)\n print(a-b)\n print(a*b)", "_____no_output_____" ], [ "#Python: Division\n\nif __name__ == '__main__':\n a = int(input())\n b = int(input())\n\nprint(a//b)\nprint(a/b)", "_____no_output_____" ], [ "#Loops\n\nif __name__ == '__main__':\n n = int(input())\n\nif 1<=n<=20:\n for i in range(n):\n print(i*i)", "_____no_output_____" ], [ "#Write a function\n\ndef is_leap(year):\n leap = False\n \n # Write your logic here\n if 1900 <= year <= 10**5:\n if year % 4 == 0 and year % 100 == 0 and year % 400 == 0:\n leap = True\n elif year % 4 == 0 and year % 100 != 0:\n leap = True\n return leap\n\nyear = int(input())\nprint(is_leap(year))", "_____no_output_____" ], [ "#Print Function\n\nif __name__ == '__main__':\n n = int(input())\n\noutput = \"\"\nfor i in range(1,n+1):\n output += str(i)\nprint(output)", "_____no_output_____" ] ], [ [ "## BASIC DATA TYPES", "_____no_output_____" ] ], [ [ "# List Comprehension\n\nif __name__ == '__main__':\n x = int(input())\n y = int(input())\n z = int(input())\n n = int(input())\n\nlista = [[i,j,k] for i in range(0,x+1) for j in range(0,y+1) for k in range(0,z+1) if i+j+k != n]\n\nprint(lista)", "_____no_output_____" ], [ "#Find the runner up score!\n\nif __name__ == '__main__':\n n = int(input())\n arr = map(int, input().split())\n \nif 2<=n<=10: \n arr = list(arr)\n for elem in arr:\n if -100<=elem<=100:\n massimo = max(arr)\n runner_up = -101\n for score in arr:\n if score > runner_up and score < massimo:\n runner_up = score\nprint(runner_up)", "_____no_output_____" ], [ "#Nested Lists\n\nlista=list()\nlista2=list()\n\nif __name__ == '__main__':\n for _ in range(int(input())):\n name = input()\n score = float(input())\n lista2.append(score)\n lista.append([name,score])\n \n minimo=min(lista2)\n while min(lista2)==minimo:\n lista2.remove(min(lista2))\n \n lista.sort()\n nuovo_minimo = min(lista2)\n\n for name,score in lista:\n if score==nuovo_minimo:\n print(name)", "_____no_output_____" ], [ "#Finding the percentage\n\nif __name__ == '__main__':\n n = int(input())\n student_marks = {}\n for _ in range(n):\n name, *line = input().split()\n scores = list(map(float, line))\n student_marks[name] = scores\n query_name = input()\n\nif 2<=n<=10:\n for key in student_marks:\n if key == query_name:\n marks = student_marks[key]\n total = len(marks)\n somma = 0\n for elem in marks:\n somma += float(elem) \n average = somma/total\n print(\"%.2f\" % average)", "_____no_output_____" ], [ "#Lists\n\nif __name__ == '__main__':\n N = int(input())\n\nlista = []\nfor n in range(N):\n command = input().split(\" \")\n if command[0] == \"insert\":\n lista.insert(int(command[1]), int(command[2]))\n elif command[0] == \"print\":\n print(lista)\n elif command[0] == \"remove\":\n lista.remove(int(command[1]))\n elif command[0] == \"append\":\n lista.append(int(command[1]))\n elif command[0] == \"sort\":\n lista.sort()\n elif command[0] == \"pop\":\n 
lista.pop()\n elif command[0] == \"reverse\":\n lista.reverse()", "_____no_output_____" ], [ "#Tuples\n\nif __name__ == '__main__':\n n = int(input())\n integer_list = map(int, input().split())\n\ntupla = tuple(integer_list)\nprint(hash(tupla))", "_____no_output_____" ] ], [ [ "## STRINGS", "_____no_output_____" ] ], [ [ "#sWAP cASE\n\ndef swap_case(s):\n new = ''\n for char in s:\n if char.isupper():\n new += char.lower()\n elif char.islower():\n new += char.upper()\n else:\n new += char\n return new\n\nif __name__ == '__main__':\n s = input()\n result = swap_case(s)\n print(result)", "_____no_output_____" ], [ "#String Split and Join\n\ndef split_and_join(line):\n # write your code here\n new_line = '-'.join(line.split(' '))\n return new_line\n\nif __name__ == '__main__':\n line = input()\n result = split_and_join(line)\n print(result)", "_____no_output_____" ], [ "#What's Your Name?\n\ndef print_full_name(a, b):\n if len(a)<=10 and len(b)<=10:\n print('Hello '+a+ ' ' + b + '! You just delved into python.')\n\nif __name__ == '__main__':\n first_name = input()\n last_name = input()\n print_full_name(first_name, last_name)", "_____no_output_____" ], [ "#Mutations\n\ndef mutate_string(string, position, character):\n l = list(string)\n l[position] = character\n string = ''.join(l)\n return string\n\nif __name__ == '__main__':\n s = input()\n i, c = input().split()\n s_new = mutate_string(s, int(i), c)\n print(s_new)", "_____no_output_____" ], [ "#Find a string\n\ndef count_substring(string, sub_string):\n if 1<=len(string)<=200:\n count = 0\n for i in range(len(string)):\n if string[i:].startswith(sub_string):\n count += 1\n return count\n\nif __name__ == '__main__':\n string = input().strip()\n sub_string = input().strip()\n \n count = count_substring(string, sub_string)\n print(count)", "_____no_output_____" ], [ "#string validators\n\nif __name__ == '__main__':\n s = input()\n\nif 0<=len(s)<=1000:\n print(any(char.isalnum() for char in s))\n print(any(char.isalpha() for char in s))\n print(any(char.isdigit() for char in s))\n print(any(char.islower() for char in s))\n print(any(char.isupper() for char in s))", "_____no_output_____" ], [ "#text alignment \n\nthickness = int(input()) #This must be an odd number\nc = 'H'\n\n#Top Cone\nfor i in range(thickness):\n print((c*i).rjust(thickness-1)+c+(c*i).ljust(thickness-1))\n\n#Top Pillars\nfor i in range(thickness+1):\n print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6))\n\n#Middle Belt\nfor i in range((thickness+1)//2):\n print((c*thickness*5).center(thickness*6)) \n\n#Bottom Pillars\nfor i in range(thickness+1):\n print((c*thickness).center(thickness*2)+(c*thickness).center(thickness*6)) \n\n#Bottom Cone\nfor i in range(thickness):\n print(((c*(thickness-i-1)).rjust(thickness)+c+(c*(thickness-i-1)).ljust(thickness)).rjust(thickness*6))", "_____no_output_____" ], [ "#Text Wrap\n\nimport textwrap\n\ndef wrap(string, max_width):\n if 0<=len(string)<=1000 and 0<=max_width<=len(string):\n text = textwrap.fill(string,max_width) \n return text\n\nif __name__ == '__main__':\n string, max_width = input(), int(input())\n result = wrap(string, max_width)\n print(result)", "_____no_output_____" ], [ "#Designer Door Mat\n\nif __name__ == '__main__':\n n, m = map(int, input().split(\" \"))\n\nif 5<=n<=101 and 15<=m<=303:\n for i in range(n):\n if 0<=i<=(n//2-1):\n print(('.|.'*i).rjust(m//2-1,'-')+'.|.'+('.|.'*i).ljust(m//2-1,'-'))\n elif i == n//2:\n print('WELCOME'.center(m,'-'))\n else:\n 
print(('.|.'*(2*(n-i-1)+1)).center(m,'-'))", "_____no_output_____" ], [ "#String Formatting\n\ndef print_formatted(number):\n # your code goes here\n for i in range(1,n+1):\n decimal = str(i)\n octal = str(oct(i)[2:])\n hexadecimal = str(hex(i)[2:]).upper()\n binary = str(bin(i)[2:])\n width = len(bin(n)[2:])\n print (decimal.rjust(width,' '),octal.rjust(width,' '),hexadecimal.rjust(width,' '),binary.rjust(width,' '))\n\nif __name__ == '__main__':\n n = int(input())\n print_formatted(n)", "_____no_output_____" ], [ "#Alphabet Rangoli\n\ndef print_rangoli(size):\n # your code goes here\n alphabet = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']\n sub_alphabet = alphabet[0:size]\n\n if 0<size<27:\n for i in range(1,size):\n row = sub_alphabet[-1:size-i:-1]+sub_alphabet[size-i:]\n print('-'*((size-i)*2)+ '-'.join(row)+'-'*((size-i)*2))\n \n for i in range(size):\n first_half = ''\n second_half = ''\n for j in range(size-1-i):\n first_half += alphabet[size-1-j] + '-'\n second_half += '-'+alphabet[j+1+i]\n print('-'*2*i + first_half + alphabet[i]+ second_half + '-'*2*i)\n \nif __name__ == '__main__':\n n = int(input())\n print_rangoli(n)", "_____no_output_____" ], [ "#Capitalize!\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\n# Complete the solve function below.\ndef solve(s):\n if 0<len(s)<1000:\n s = s.split(\" \")\n return(\" \".join(elem.capitalize() for elem in s))\n\nif __name__ == '__main__':\n fptr = open(os.environ['OUTPUT_PATH'], 'w')\n s = input()\n result = solve(s)\n fptr.write(result + '\\n')\n fptr.close()", "_____no_output_____" ], [ "#The Minion Game\n\ndef minion_game(string):\n # your code goes here\n vowels = 'AEIOU'\n score_s = 0\n score_k = 0\n if 0<=len(string)<=10**6:\n for i in range(len(string)):\n if string[i] in vowels:\n score_k += len(string)-i\n else:\n score_s += len(string)-i\n\n if score_k>score_s:\n print('Kevin '+str(score_k))\n elif score_s>score_k:\n print('Stuart '+str(score_s))\n else:\n print('Draw')\n\nif __name__ == '__main__':\n s = input()\n minion_game(s)", "_____no_output_____" ], [ "#Merge the Tools\n\ndef merge_the_tools(string, k):\n # your code goes here\n if 1<=len(string)<=10**4 and 1<=k<=len(string) and len(string)%k==0:\n l = []\n for i in range(0, len(string),k):\n l.append(string[i:(i+k)])\n \n for elem in l:\n l2 = []\n for char in elem:\n if char not in l2:\n l2.append(char)\n print(\"\".join(l2))\n \nif __name__ == '__main__':\n string, k = input(), int(input())\n merge_the_tools(string, k)", "_____no_output_____" ] ], [ [ "## SETS", "_____no_output_____" ] ], [ [ "#Introduction to sets\n\ndef average(array):\n # your code goes here\n if 1<=len(array)<=100:\n somma = 0\n array1 = []\n for elem in array:\n if elem not in array1:\n array1.append(elem)\n somma += elem\n average = somma/len(array1)\n return average\n\nif __name__ == '__main__':\n n = int(input())\n arr = list(map(int, input().split()))\n result = average(arr)\n print(result)", "_____no_output_____" ], [ "#No idea!\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nn, m = map(int, input().split())\narray = map(int, input().split(' '))\nA = set(map(int, input().split(' ')))\nB = set(map(int, input().split(' ')))\n\nif 1<=n<=10**5 and 1<=m<=10**5:\n happiness = 0\n for elem in array:\n if elem in A:\n happiness += 1\n if elem in B:\n happiness -= 1\n\n print(happiness)", "_____no_output_____" ], [ "#Symmetric difference\n\n# Enter your code here. 
Read input from STDIN. Print output to STDOUT\nM=int(input())\nm=input()\nN=int(input())\nn=input()\n\nset1 = set(map(int,m.split(' ')))\nset2 = set(map(int,n.split(' ')))\n\ndifferenza1 = set1.difference(set2)\ndifferenza2 = set2.difference(set1)\ndifferenza_tot = list(differenza1.union(differenza2))\n\ndifferenza_tot.sort()\n\nfor i in range(len(differenza_tot)):\n print(int(differenza_tot[i]))", "_____no_output_____" ], [ "#Set.add()\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\n\nN = int(input()) #total number of country stamps\n\nif 0<N<1000: \n country_set = set()\n for i in range(N):\n country = input()\n country_set.add(country)\n print(len(country_set))", "_____no_output_____" ], [ "#Set.discard(), .remove() & .pop()\n\nn = int(input()) #number of elementes in set s\ns = set(map(int, input().split()))\nN = int(input()) #number of commands\n\nif 0<n<20 and 0<N<20:\n for i in range(N):\n command = list(input().split())\n if command[0] == 'pop':\n s.pop()\n elif command[0] == 'remove':\n s.remove(int(command[1]))\n elif command[0] == 'discard':\n s.discard(int(command[1]))\n print(sum(s))", "_____no_output_____" ], [ "#Set.union() Operation\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nn = int(input())\nenglish_newspaper = set(map(int,input().split(' ')))\nm = int(input())\nfrench_newspaper = set(map(int,input().split(' ')))\n\nat_least_one = english_newspaper.union(french_newspaper)\nif 0<len(at_least_one)<1000: \n print(len(at_least_one))", "_____no_output_____" ], [ "#Set.intersection() Operation\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nn = int(input())\nenglish_newspaper = set(map(int, input().split()))\nm = int(input())\nfrench_newspaper = set(map(int,input().split()))\n\nboth_newspapers = english_newspaper.intersection(french_newspaper)\nif 0<len(english_newspaper.union(french_newspaper))<1000:\n print(len(both_newspapers))", "_____no_output_____" ], [ "#Set.difference() Operation\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nn = int(input())\nenglish_newspaper = set(map(int, input().split()))\nm = int(input())\nfrench_newspaper = set(map(int, input().split()))\n\nonly_english = english_newspaper.difference(french_newspaper)\nprint(len(only_english))", "_____no_output_____" ], [ "#Set.symmetric_difference() Operation\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nn = int(input())\nenglish_newspaper = set(map(int,input().split()))\nm = int(input())\nfrench_newspaper = set(map(int,input().split()))\n\neither_one = english_newspaper.symmetric_difference(french_newspaper)\nprint(len(either_one))", "_____no_output_____" ], [ "#Set Mutations\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nA = int(input())\nsetA = set(map(int,input().split()))\nN = int(input())\n\nif 0<len(setA)<1000 and 0<N<100:\n for i in range(N):\n command = list(input().split(' '))\n setB = set(map(int, input().split(' ')))\n if 0<len(setB)<100:\n if command[0] == 'update':\n setA.update(setB)\n if command[0] == 'intersection_update':\n setA.intersection_update(setB)\n if command[0] == 'difference_update':\n setA.difference_update(setB)\n if command[0] == 'symmetric_difference_update':\n setA.symmetric_difference_update(setB)\n\nprint(sum(setA))", "_____no_output_____" ], [ "#The captain's Room\n\n# Enter your code here. Read input from STDIN. 
Print output to STDOUT\nK = int(input())\nrooms = list(map(int, input().split()))\n\nfrom collections import Counter\nrooms = Counter(rooms)\n\nfor room in rooms:\n if rooms[room] == 1:\n captain_room = room\n\nprint(captain_room)\n\n#because it kept giving me error due to timeout even though the sample cases were correct, I checked the discussion page and took the idea of using Counter from collections", "_____no_output_____" ], [ "#Check subset\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nT = int(input())\n\nif 0<T<21:\n for i in range(T):\n a = int(input())\n setA = set(map(int, input().split()))\n b = int(input())\n setB = set(map(int,input().split()))\n if setA.difference(setB) == set():\n print(True)\n else:\n print(False)", "_____no_output_____" ], [ "#Check strict superset\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nsetA = set(map(int, input().split()))\nn = int(input())\nall_sets = []\n\nif 0<len(setA)<501 and 0<n<21:\n for i in range(n):\n setI = set(map(int,input().split()))\n if 0<len(setI)<101:\n all_sets.append(setI)\n \n output = True\n for elem in all_sets:\n if not setA.issuperset(elem):\n output = False\n\n print(output)", "_____no_output_____" ] ], [ [ "## COLLECTIONS", "_____no_output_____" ] ], [ [ "#collections.Counter()\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\n\nfrom collections import Counter\nX = int(input()) #number of shoes\nshoe_sizes = list(map(int,input().split()))\nN = int(input()) #number of customers\nshoe_sizes = Counter(shoe_sizes)\ntotal = 0\n\nif 0<X<10**3 and 0<N<=10**3:\n for i in range(N):\n size, price = map(int,input().split())\n if shoe_sizes[size]:\n total += price\n shoe_sizes[size] -= 1 \n\n print(total)", "_____no_output_____" ], [ "#DefaultDict Tutorial\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nfrom collections import defaultdict\n\nn, m = map(int, input().split())\ngroupA = []\ngroupB = []\n\nif 0<=n<=10000 and 1<=m<=100:\n for i in range(n):\n wordA = input()\n groupA.append(wordA)\n\n for i in range(m):\n wordB = input()\n groupB.append(wordB)\n\n d = defaultdict(list)\n\n for i in range(n):\n d[groupA[i]].append(i+1)\n\n for i in groupB: \n if i in d:\n print(*d[i])\n else:\n print(-1)", "_____no_output_____" ], [ "#Collections.namedtuple()\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nfrom collections import namedtuple\n\nn = int(input())\nstudents = namedtuple('students',input().split())\nsum_grades = 0\n\nif 0<n<=100:\n for i in range(n):\n st = students._make(input().split())\n sum_grades += float(st.MARKS)\n\n average = sum_grades/n\n print(average)", "_____no_output_____" ], [ "#Collections.OrderedDict()\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport collections \n\nn = int(input())\nl = collections.OrderedDict()\n\nfor i in range(n):\n item_name = input().split(' ')\n item_price = int(item_name[-1])\n item_name = ' '.join(item_name[:-1])\n if item_name not in l:\n l[item_name] = item_price\n else:\n l[item_name] += item_price\n\nfor item in l.items():\n print(*item)", "_____no_output_____" ], [ "#Word Order\n\n# Enter your code here. Read input from STDIN. 
Print output to STDOUT\nfrom collections import Counter\n\nn = int(input())\n\nif 1<=n<=10**5:\n l = []\n for i in range(n):\n word = input().lower()\n l.append(word)\n\n c = Counter(l)\n\n my_sum = 0\n for key in c.keys():\n my_sum += 1\n\n print(my_sum)\n print(*c.values())", "_____no_output_____" ], [ "#Collections.deque()\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nfrom collections import deque\n\nn = int(input())\nd = deque()\n\nfor i in range(n):\n command = list(input().split())\n if command[0] == 'append':\n d.append(command[1])\n if command[0] == 'appendleft':\n d.appendleft(command[1])\n if command[0] == 'pop':\n d.pop()\n if command[0] == 'popleft':\n d.popleft()\nprint(*d)", "_____no_output_____" ], [ "#Company Logo\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\nfrom collections import Counter\n\nif __name__ == '__main__':\n s = input()\n\nif 3<=len(s)<=10**4:\n d = Counter(sorted(s))\n for elem in d.most_common(3):\n print(*elem)", "_____no_output_____" ], [ "#Piling Up!\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nt = int(input()) #number of test cases\n\nif 1<=t<=5:\n for i in range(t):\n n = int(input())\n if 1<=n<=10**5:\n cubes = list(map(int, input().split()))\n if cubes[0] == max(cubes) or cubes[-1] == max(cubes):\n print('Yes')\n else:\n print('No')", "_____no_output_____" ] ], [ [ "## DATE AND TIME", "_____no_output_____" ] ], [ [ "#Calendar Module\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport calendar\nmonth,day,year = map(int,input().split())\n\nif 2000<year<3000:\n weekdays = ['MONDAY','TUESDAY','WEDNESDAY','THURSDAY','FRIDAY','SATURDAY','SUNDAY']\n weekday = calendar.weekday(year,month,day)\n print(weekdays[weekday])", "_____no_output_____" ], [ "#Time Delta\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\nfrom datetime import datetime\n\n# Complete the time_delta function below.\n\ndef time_delta(t1, t2):\n access1 = datetime.strptime(t1,'%a %d %b %Y %H:%M:%S %z')\n access2 = datetime.strptime(t2,'%a %d %b %Y %H:%M:%S %z')\n return str(int((abs(access1-access2)).total_seconds()))\n \nif __name__ == '__main__':\n fptr = open(os.environ['OUTPUT_PATH'], 'w')\n t = int(input())\n for t_itr in range(t):\n t1 = input()\n t2 = input()\n delta = time_delta(t1, t2)\n fptr.write(delta+ '\\n')\n fptr.close()", "_____no_output_____" ] ], [ [ "## EXCEPTIONS", "_____no_output_____" ] ], [ [ "#Exceptions\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nt = int(input())\n\nif 0<t<10:\n for i in range(t):\n values = list(input().split())\n try:\n division = int(values[0])//int(values[1])\n print(division)\n except ZeroDivisionError as e:\n print(\"Error Code:\",e)\n except ValueError as e:\n print(\"Error Code:\",e)", "_____no_output_____" ] ], [ [ "## BUILT-INS", "_____no_output_____" ] ], [ [ "#Zipped!\n\n# Enter your code here. Read input from STDIN. 
Print output to STDOUT\nnum_students, num_subjects = map(int,input().split())\nmark_sheet = []\n\nfor i in range(num_subjects):\n mark_sheet.append(map(float, input().split(' ')))\n\nfor grades in zip(*mark_sheet):\n somma = sum(grades)\n print(somma/num_subjects)", "_____no_output_____" ], [ "#Athlete Sort\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\nif __name__ == '__main__':\n nm = input().split()\n n = int(nm[0])\n m = int(nm[1])\n arr = []\n for _ in range(n):\n arr.append(list(map(int, input().rstrip().split())))\n k = int(input())\n\nif 1<=n<=1000 and 1<=m<=1000:\n l = []\n for lista in arr:\n l.append(lista[k])\n l.sort()\n\n l1 = []\n for elem in l:\n for i in range(n):\n if arr[i][k] == elem and i not in l1:\n l1.append(i)\n for i in l1:\n print(*arr[i])", "_____no_output_____" ], [ "#ginortS\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nstring = input()\nif 0<len(string)<1000:\n l = []\n for char in string:\n l.append(char)\n l1 = []\n l2 = []\n l3 = []\n l4 = []\n for elem in l:\n if elem.islower():\n l1.append(elem)\n if elem.isupper():\n l2.append(elem)\n if elem.isdigit() and int(elem)%2 != 0:\n l3.append(elem)\n if elem.isdigit() and int(elem)%2 == 0:\n l4.append(elem)\n\n l1.sort()\n l2.sort()\n l3.sort()\n l4.sort()\n\n lista = l1 + l2 + l3 + l4\n print(''.join(lista))", "_____no_output_____" ] ], [ [ "## PYTHON FUNCTIONALS", "_____no_output_____" ] ], [ [ "#Map and Lambda Functions\n\ncube = lambda x: x**3 # complete the lambda function \n\ndef fibonacci(n):\n # return a list of fibonacci numbers\n serie = []\n if 0<=n<=15:\n if n == 1:\n serie = [0]\n if n > 1:\n serie = [0, 1]\n for i in range(1,n-1):\n serie.append(serie[i]+serie[i-1])\n return serie\n\n\nif __name__ == '__main__':\n n = int(input())\n print(list(map(cube, fibonacci(n))))", "_____no_output_____" ] ], [ [ "## REGEX AND PARSING", "_____no_output_____" ] ], [ [ "#Detect Floating Point Number\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport re \n\nt = int(input())\nif 0<t<10:\n for i in range(t):\n test_case = input()\n print(bool(re.search(r\"^[+-/.]?[0-9]*\\.[0-9]+$\",test_case)))", "_____no_output_____" ], [ "#Re.split()\n\nregex_pattern = r\"[,.]\"\t# Do not delete 'r'.\n\nimport re\nprint(\"\\n\".join(re.split(regex_pattern, input())))", "_____no_output_____" ], [ "#Group(), Groups() & Groupdict()\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\n\nimport re\ns = input()\n\nif 0<len(s)<100:\n m = re.search(r\"([a-z0-9A-Z])\\1+\",s)\n if m != None:\n print(m.group(1))\n else:\n print(-1)", "_____no_output_____" ], [ "#Re.findall() & Re.finditer()\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport re\n\ns = input()\nconsonanti ='bcdfghjklmnpqrstvwxyzBCDFGHJKLMNPQRSTVWXYZ'\n\nif 0<len(s)<100:\n m = re.findall(r'(?<=['+consonanti+'])([AEIOUaeiou]{2,})(?=['+consonanti+'])',s.strip())\n \n if len(m)>0:\n for elem in m:\n print(elem)\n else:\n print(-1)", "_____no_output_____" ], [ "#Re.start() & Re.end()\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport re\ns = input()\nk = input()\n\nif 0<len(s)<100 and 0<len(k)<len(s):\n for i in range(len(s)):\n if re.match(k,s[i:]):\n tupla = (i,i+len(k)-1)\n print(tupla)\n \n if re.search(k,s) == None:\n tupla = (-1, -1)\n print(tupla)", "_____no_output_____" ], [ "#Regex Substitutions\n\n# Enter your code here. Read input from STDIN. 
Print output to STDOUT\nimport re\n\nn = int(input())\n\nif 0<n<100:\n for i in range(n):\n line = input()\n\n l1 = re.sub(r' &&(?= )', ' and', line)\n l2 = re.sub(r' \\|\\|(?= )',' or',l1)\n\n print(l2)", "_____no_output_____" ], [ "#Validating Roman Numerals\n\nregex_pattern = r\"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$\"\t# Do not delete 'r'.\n\nimport re\nprint(str(bool(re.match(regex_pattern, input()))))", "_____no_output_____" ], [ "#Validating phone numbers\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport re\nn = int(input())\nif 0<n<=10:\n for i in range(n):\n number = input().strip()\n if len(number)==10:\n if bool(re.search(r'^([789]+)([0123456789]{0,9}$)',number)) == True:\n print('YES')\n else:\n print('NO')\n else:\n print('NO')", "_____no_output_____" ], [ "#Validating and Parsing Email Addressess\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport re\nimport email.utils\n\nn = int(input())\nif 0<n<100:\n for i in range(n):\n address = email.utils.parseaddr(input())\n if bool(re.match(r'^([a-zA-Z]+)([a-zA-Z0-9|\\-|/.|_]+)@([a-zA-Z]+)\\.([a-zA-Z]){1,3}$',address[1])) == True:\n print(email.utils.formataddr(address))", "_____no_output_____" ], [ "#Hex Color Code\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport re\nn = int(input())\n\nif 0<n<50:\n for i in range(n):\n line = input().split()\n if len(line) > 1 and ('{' or '}') not in line:\n line = ' '.join(line)\n hexs = re.findall(r'#[0-9A-Fa-f]{6}|#[0-9A-Fa-f]{3}',line)\n if hexs:\n for elem in hexs:\n print(str(elem))", "_____no_output_____" ], [ "#HTML Parser - Part 1\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nfrom html.parser import HTMLParser as hp\nn = int(input())\n\nclass MyHTMLParser(hp):\n def handle_starttag(self, tag, attrs): \n print ('Start :',tag)\n for attr in attrs:\n print(\"->\", attr[0],'>',attr[1])\n \n def handle_endtag(self, tag):\n print ('End :',tag)\n \n def handle_startendtag(self, tag, attrs):\n print ('Empty :',tag)\n for attr in attrs:\n print(\"->\", attr[0],'>',attr[1])\n \n\nparser = MyHTMLParser()\nfor i in range(n): \n parser.feed(input())", "_____no_output_____" ], [ "#HTML Parser - Part 2\n\nfrom html.parser import HTMLParser\n\nclass MyHTMLParser(HTMLParser):\n def handle_comment(self,data):\n lines = len(data.split('\\n'))\n if lines>1:\n print(\">>> Multi-line Comment\")\n if data.strip():\n print(data)\n else:\n print(\">>> Single-line Comment\")\n if data.strip():\n print(data)\n\n def handle_data(self, data):\n if data.strip():\n print(\">>> Data\"+'\\n'+data)\n \nhtml = \"\" \nfor i in range(int(input())):\n html += input().rstrip()\n html += '\\n'\n \nparser = MyHTMLParser()\nparser.feed(html)\nparser.close()", "_____no_output_____" ], [ "#Detect HTML Tags, Attributes and Attribute Values\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\n\nfrom html.parser import HTMLParser\n\nclass MyHTMLParser(HTMLParser):\n def handle_starttag(self,tag,attrs):\n print(tag)\n for attr in attrs:\n if attr:\n print('->',attr[0],'>',attr[1])\n\nparser = MyHTMLParser()\nfor i in range(int(input())):\n parser.feed(input())", "_____no_output_____" ], [ "#Validating UID\n\n# Enter your code here. Read input from STDIN. 
Print output to STDOUT\nimport re\n\nt = int(input())\nfor i in range(t):\n id = input()\n if re.search(r'^(?!.*(.).*\\1)(?=(.*[A-Z]){2,})(?=(.*[0-9]){3,})[a-zA-Z0-9]{10}$',id):\n print('Valid')\n else:\n print('Invalid')", "_____no_output_____" ], [ "#Validating Credit Card Numbers\n\n# Enter your code here. Read input from STDIN. Print output to STDOUT\nimport re\n\nn = int(input())\n\nfor i in range(n):\n credit_card = input()\n if re.match(r'^([456]{1}[0-9]{3})-?([0-9]){4}-?([0-9]){4}-?([0-9]){4}$',credit_card) and re.match(r'(([0-9])(?!\\2{3})){16}',credit_card.replace('-','')):\n print('Valid')\n else:\n print('Invalid')", "_____no_output_____" ], [ "#Validating Postal Codes\n\nregex_integer_in_range = r\"^[1-9][0-9]{5}$\"\t# Do not delete 'r'.\nregex_alternating_repetitive_digit_pair = r\"([0-9])(?=.\\1)\"\t# Do not delete 'r'.\n\n\nimport re\nP = input()\n\nprint (bool(re.match(regex_integer_in_range, P)) \nand len(re.findall(regex_alternating_repetitive_digit_pair, P)) < 2)", "_____no_output_____" ], [ "#Matrix Script\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\nfirst_multiple_input = input().rstrip().split()\nn = int(first_multiple_input[0])\nm = int(first_multiple_input[1])\nmatrix = []\nfor _ in range(n):\n matrix_item = input()\n matrix.append(matrix_item)\n\noutput = ''\nfor column in range(m):\n for row in range(n):\n output += matrix[row][column]\nprint(re.sub(r'(?<=[A-Za-z0-9])[!@#$%& ]{1,}(?=[A-Za-z0-9])',' ',output))", "_____no_output_____" ] ], [ [ "## XML", "_____no_output_____" ] ], [ [ "#XML 1 - Find the score\n\nimport sys\nimport xml.etree.ElementTree as etree\n\ndef get_attr_number(node):\n # your code goes here\n somma = 0\n for elem in node.iter():\n diz = elem.attrib\n somma += len(diz)\n return somma\n\nif __name__ == '__main__':\n sys.stdin.readline()\n xml = sys.stdin.read()\n tree = etree.ElementTree(etree.fromstring(xml))\n root = tree.getroot()\n print(get_attr_number(root))", "_____no_output_____" ], [ "#XML 2 - Find the maximum Depth\n\nimport xml.etree.ElementTree as etree\n\nmaxdepth = 0\ndef depth(elem, level):\n global maxdepth\n # your code goes here\n if (level+1)>maxdepth:\n maxdepth = level + 1 \n for child in list(elem):\n depth(child,level+1)\n\nif __name__ == '__main__':\n n = int(input())\n xml = \"\"\n for i in range(n):\n xml = xml + input() + \"\\n\"\n tree = etree.ElementTree(etree.fromstring(xml))\n depth(tree.getroot(), -1)\n print(maxdepth)", "_____no_output_____" ] ], [ [ "## CLOSURES AND DECORATIONS", "_____no_output_____" ] ], [ [ "#Standardize Mobile Number Using Decorators\n\nimport re\n\ndef wrapper(f):\n def fun(l):\n # complete the function\n lista = []\n for elem in l:\n if len(elem) == 10:\n lista.append('+91'+' '+str(elem[0:5]+ ' '+str(elem[5:])))\n elif len(elem) == 11:\n lista.append('+91'+' '+str(elem[1:6]+ ' '+str(elem[6:])))\n elif len(elem) == 12:\n lista.append('+91'+' '+str(elem[2:7]+ ' '+str(elem[7:])))\n elif len(elem) == 13:\n lista.append('+91'+' '+str(elem[3:8]+ ' '+str(elem[8:])))\n lista.sort()\n for elem in lista:\n print(elem)\n return fun\n\n@wrapper\ndef sort_phone(l):\n print(*sorted(l), sep='\\n')\n\nif __name__ == '__main__':\n l = [input() for _ in range(int(input()))]\n sort_phone(l) ", "_____no_output_____" ], [ "#Decorators 2 - Name Directory\n\nimport operator\n\ndef person_lister(f):\n def inner(people):\n # complete the function\n s = sorted(people, key = lambda x: int(x[2]))\n return[f(person) for person in s]\n return inner\n\n@person_lister\ndef 
name_format(person):\n return (\"Mr. \" if person[3] == \"M\" else \"Ms. \") + person[0] + \" \" + person[1]\n\nif __name__ == '__main__':\n people = [input().split() for i in range(int(input()))]\n print(*name_format(people), sep='\\n')", "_____no_output_____" ] ], [ [ "## NUMPY", "_____no_output_____" ] ], [ [ "#Arrays\n\nimport numpy\n\ndef arrays(arr):\n # complete this function\n # use numpy.array\n arr.reverse()\n return numpy.array(arr, float) \n\narr = input().strip().split(' ')\nresult = arrays(arr)\nprint(result)", "_____no_output_____" ], [ "#Shape and Reshape\n\nimport numpy\n\nx = list(map(int,input().split()))\n\nmy_array = numpy.array(x)\nprint(numpy.reshape(my_array,(3,3)))", "_____no_output_____" ], [ "#Transpose and Flatten\n\nimport numpy\n\nn,m=map(int,input().split())\nl = []\nfor i in range(n):\n row = list(map(int, input().split()))\n l.append(row)\nmy_array = numpy.array(l)\nprint(numpy.transpose(my_array))\nprint(my_array.flatten())", "_____no_output_____" ], [ "#Concatenate\n\nimport numpy\n\nn,m,p = map(int,input().split())\nl1 = []\nl2 = []\nfor i in range(n):\n row = list(map(int,input().split()))\n l1.append(row)\nfor j in range(m):\n row = list(map(int,input().split()))\n l2.append(row)\narray1 = numpy.array(l1)\narray2 = numpy.array(l2)\nprint(numpy.concatenate((array1,array2),axis=0))", "_____no_output_____" ], [ "#Zeros and Ones\n\nimport numpy\n\nshape = list(map(int,input().split()))\nprint(numpy.zeros(shape, dtype = numpy.int))\nprint(numpy.ones(shape, dtype = numpy.int))", "_____no_output_____" ], [ "#Eye and Identity\n\nimport numpy\n\nn,m = map(int,input().split())\n\nnumpy.set_printoptions(sign=' ') #I had to look at this method of formatting the answer in the discussion board \n #because I wasn't aware of it\nprint(numpy.eye(n,m))", "_____no_output_____" ], [ "#Array Mathematics\n\nimport numpy\n\nn,m = map(int,input().split())\n\narrayA = numpy.array([list(map(int, input().split())) for i in range(n)], int)\narrayB = numpy.array([list(map(int, input().split())) for i in range(n)], int)\n\nprint(numpy.add(arrayA,arrayB))\nprint(numpy.subtract(arrayA,arrayB))\nprint(numpy.multiply(arrayA,arrayB))\nprint(arrayA//arrayB)\nprint(numpy.mod(arrayA,arrayB))\nprint(numpy.power(arrayA,arrayB))", "_____no_output_____" ], [ "#Floor, Ceil and Rint\n\nimport numpy\n\na = numpy.array(list(map(float,input().split())))\n\nnumpy.set_printoptions(sign=' ')\n\nprint(numpy.floor(a))\nprint(numpy.ceil(a))\nprint(numpy.rint(a))", "_____no_output_____" ], [ "#Sum and Prod\n\nimport numpy\n\nn,m = map(int,input().split())\n\na = numpy.array([list(map(int,input().split())) for i in range(n)],int)\n\nmy_sum = numpy.sum(a,axis=0)\nprint(numpy.prod(my_sum))", "_____no_output_____" ], [ "#Min and Max\n\nimport numpy\nn,m = map(int,input().split())\na = numpy.array([list(map(int,input().split())) for i in range(n)])\nminimo = numpy.min(a,axis=1)\nprint(numpy.max(minimo))", "_____no_output_____" ], [ "#Mean, Var and Std\n\nimport numpy\nn,m = map(int,input().split())\narray = numpy.array([list(map(int,input().split())) for i in range(n)])\n\nnumpy.set_printoptions(legacy='1.13') #I took this line from the discussion board because it kept giving me the right answer but in the wrong format without it\n\nprint(numpy.mean(array,axis=1))\nprint(numpy.var(array,axis=0))\nprint(numpy.std(array))", "_____no_output_____" ], [ "#Dot and Cross\n\nimport numpy\nn = int(input())\n\na = numpy.array([list(map(int,input().split())) for i in range(n)])\nb = 
numpy.array([list(map(int,input().split())) for i in range(n)])\n\nprint(numpy.dot(a,b))", "_____no_output_____" ], [ "#Inner and Outer\n\nimport numpy\na = numpy.array(list(map(int,input().split())))\nb = numpy.array(list(map(int,input().split())))\nprint(numpy.inner(a,b))\nprint(numpy.outer(a,b))", "_____no_output_____" ], [ "#Polynomials\n\nimport numpy\ncoefficient = numpy.array(list(map(float,input().split())))\nx = int(input())\nprint(numpy.polyval(coefficient,x))", "_____no_output_____" ], [ "#Linear Algebra\n\nimport numpy\nn = int(input())\na = numpy.array([list(map(float,input().split())) for i in range(n)])\ndet = numpy.linalg.det(a).round(2)\nprint(det)", "_____no_output_____" ] ], [ [ "# PROBLEM 2", "_____no_output_____" ] ], [ [ "#Birthday Cake Candles\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\n#\n# Complete the 'birthdayCakeCandles' function below.\n#\n# The function is expected to return an INTEGER.\n# The function accepts INTEGER_ARRAY candles as parameter.\n#\n\ndef birthdayCakeCandles(candles):\n # Write your code here\n tallest = candles.count(max(candles))\n return tallest\n\nif __name__ == '__main__':\n fptr = open(os.environ['OUTPUT_PATH'], 'w')\n candles_count = int(input().strip())\n candles = list(map(int, input().rstrip().split()))\n result = birthdayCakeCandles(candles)\n fptr.write(str(result) + '\\n')\n fptr.close()", "_____no_output_____" ], [ "#Number Line Jumps\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\n# Complete the kangaroo function below.\ndef kangaroo(x1, v1, x2, v2):\n if 0<=x1<x2<=10000 and 1<=v1<=10000 and 0<v2<=10000:\n if v1<=v2:\n return 'NO'\n opt_jumps = (x2-x1)/(v1-v2)\n if opt_jumps%1==0:\n return 'YES'\n else:\n return 'NO'\n\nif __name__ == '__main__':\n fptr = open(os.environ['OUTPUT_PATH'], 'w')\n x1V1X2V2 = input().split()\n x1 = int(x1V1X2V2[0])\n v1 = int(x1V1X2V2[1])\n x2 = int(x1V1X2V2[2])\n v2 = int(x1V1X2V2[3])\n result = kangaroo(x1, v1, x2, v2)\n fptr.write(result + '\\n')\n fptr.close()", "_____no_output_____" ], [ "#Viral Advertising\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\n# Complete the viralAdvertising function below.\ndef viralAdvertising(n):\n if 1<= n and n<= 50:\n people = 5\n likes = 0\n i = 0\n for i in range(0,n):\n new_likes = people//2\n likes += new_likes\n people = new_likes*3\n i += 1\n return likes \n\nif __name__ == '__main__':\n fptr = open(os.environ['OUTPUT_PATH'], 'w')\n n = int(input())\n result = viralAdvertising(n)\n fptr.write(str(result) + '\\n')\n fptr.close()", "_____no_output_____" ], [ "#Recursive Sum Digit\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\n# Complete the superDigit function below.\ndef superDigit(n, k):\n if len(n)==1 and k<=1:\n return int(n)\n else:\n somma=0\n for i in n:\n somma += int(i)\n n = str(somma*k)\n return superDigit(n,1)\n\nif __name__ == '__main__':\n fptr = open(os.environ['OUTPUT_PATH'], 'w')\n nk = input().split()\n n = nk[0]\n k = int(nk[1])\n result = superDigit(n, k)\n fptr.write(str(result) + '\\n')\n fptr.close()", "_____no_output_____" ], [ "#Insertion Sort - Part 1\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\n# Complete the insertionSort1 function below.\ndef insertionSort1(n, arr):\n value = arr[-1]\n arr.remove(value)\n count = 0\n for i in range(n-2,-1,-1):\n if arr[i]>value:\n arr.insert(i,arr[i])\n print(*arr)\n arr.remove(arr[i])\n elif arr[i]<=value 
and count==0:\n arr.insert(i+1, value)\n count+=1\n print(*arr)\n if arr[0]>value:\n arr.insert(0, value)\n print(*arr) \n\nif __name__ == '__main__':\n n = int(input())\n arr = list(map(int, input().rstrip().split()))\n insertionSort1(n, arr)", "_____no_output_____" ], [ "#Insertion Sort - Part 2\n\n#!/bin/python3\n\nimport math\nimport os\nimport random\nimport re\nimport sys\n\n# Complete the insertionSort2 function below.\ndef insertionSort2(n, arr): \n for i in range(1,n):\n num=arr[i]\n j=i-1\n while j>=0 and arr[j]>num:\n arr[j+1]=arr[j]\n j=j-1\n arr[j+1]=num\n print(' '.join(str(i) for i in arr))\n\nif __name__ == '__main__':\n n = int(input())\n arr = list(map(int, input().rstrip().split()))\n insertionSort2(n, arr)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
e7da4e065b28baaa6ba0588ac4a781581c7563d5
327,890
ipynb
Jupyter Notebook
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
61ebea229e74add81a35a9189daafe9a97fad7e3
[ "MIT" ]
null
null
null
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
61ebea229e74add81a35a9189daafe9a97fad7e3
[ "MIT" ]
null
null
null
Aprendizaje_Time_Series_con_Deep_Learning.ipynb
diegojeda/AdvancedMethodsDataAnalysisClass
61ebea229e74add81a35a9189daafe9a97fad7e3
[ "MIT" ]
1
2020-07-11T21:46:27.000Z
2020-07-11T21:46:27.000Z
217.577969
57,194
0.86396
[ [ [ "<a href=\"https://colab.research.google.com/github/diegojeda/AdvancedMethodsDataAnalysisClass/blob/master/Aprendizaje_Time_Series_con_Deep_Learning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# TIME SERIES FORECASTING PRACTICE WITH TENSORFLOW", "_____no_output_____" ], [ "# 1. Load Libraries and Data Set", "_____no_output_____" ] ], [ [ "# Load the libraries needed for the analysis\n\nimport os\nimport datetime\n\nimport IPython\nimport IPython.display\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport tensorflow as tf\n\nmpl.rcParams['figure.figsize'] = (8, 6)\nmpl.rcParams['axes.grid'] = False", "_____no_output_____" ], [ "# Load the dataset\n\nzip_path = tf.keras.utils.get_file(\n    origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',\n    fname='jena_climate_2009_2016.csv.zip',\n    extract=True)\ncsv_path, _ = os.path.splitext(zip_path)", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip\n13574144/13568290 [==============================] - 0s 0us/step\n" ] ], [ [ "# 2. Data Cleaning and Preparation", "_____no_output_____" ] ], [ [ "df = pd.read_csv(csv_path)\ndf.head()", "_____no_output_____" ], [ "# Since readings arrive every 10 minutes, we keep only the reading at the end of each hour, so we have a single value per hour\n\ndf = pd.read_csv(csv_path)\n# slice [start:stop:step], starting from index 5 take every 6th record.\ndf = df[5::6]\n\n# Convert the time column to datetime format and extract it from the dataframe\ndate_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')\ndf.head()", "_____no_output_____" ], [ "# Plot the series of interest to see how they evolve over time\n\nplot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']\nplot_features = df[plot_cols]\nplot_features.index = date_time\n_ = plot_features.plot(subplots=True)\n\nplot_features = df[plot_cols][:744]\nplot_features.index = date_time[:744]\n_ = plot_features.plot(subplots=True)", "_____no_output_____" ], [ "# Review the data with descriptive statistics\ndf.describe().transpose()", "_____no_output_____" ] ], [ [ "We can see that the \"wv (m/s)\" and \"max. wv (m/s)\" variables have anomalous minimum values. These must be erroneous, so we will impute them with zero.", "_____no_output_____" ] ], [ [ "wv = df['wv (m/s)']\nbad_wv = wv == -9999.0\nwv[bad_wv] = 0.0\n\nmax_wv = df['max. wv (m/s)']\nbad_max_wv = max_wv == -9999.0\nmax_wv[bad_max_wv] = 0.0\n\ndf['wv (m/s)'].min()", "_____no_output_____" ] ],
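 [ [ "# Quick check (an illustrative sketch, not in the original notebook): after the\n# imputation, neither wind column should report the -9999.0 sentinel as its minimum\ndf[['wv (m/s)', 'max. wv (m/s)']].min()", "_____no_output_____" ] ],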
 [ [ "The last variable, \"wd (deg)\", gives the wind direction in degrees. However, degrees are not a good input for the model. In this case the limitations are the following:\n- 0° and 360° should be close to each other and wrap around, which this representation cannot express.\n- If there is no wind speed, the direction should not matter.", "_____no_output_____" ] ], [ [ "plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)\nplt.colorbar()\nplt.xlabel('Wind Direction [deg]')\nplt.ylabel('Wind Velocity [m/s]');", "_____no_output_____" ] ], [ [ "To overcome these difficulties, we will convert the wind direction and speed magnitude into vectors, a more representative encoding.", "_____no_output_____" ] ], [ [ "wv = df.pop('wv (m/s)')\nmax_wv = df.pop('max. wv (m/s)')\n\n# Convert to radians.\nwd_rad = df.pop('wd (deg)')*np.pi / 180\n\n# Calculate the wind x and y components.\ndf['Wx'] = wv*np.cos(wd_rad)\ndf['Wy'] = wv*np.sin(wd_rad)\n\n# Calculate the max wind x and y components.\ndf['max Wx'] = max_wv*np.cos(wd_rad)\ndf['max Wy'] = max_wv*np.sin(wd_rad)", "_____no_output_____" ] ], [ [ "Let's review the distribution of the components of each vector.\n\n", "_____no_output_____" ] ], [ [ "plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)\nplt.colorbar()\nplt.xlabel('Wind X [m/s]')\nplt.ylabel('Wind Y [m/s]')\nax = plt.gca()\nax.axis('tight')", "_____no_output_____" ], [ "# Convert the date to seconds to look at periodicity\ntimestamp_s = date_time.map(datetime.datetime.timestamp)\n\n# Define the number of seconds in a day and in a year\nday = 24*60*60\nyear = (365.2425)*day\n\n# Transform the data with sine and cosine functions to encode the periodicity\ndf['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))\ndf['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))\ndf['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))\ndf['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))\ndf.head()", "_____no_output_____" ], [ "# Create the figure where we will draw the plots\nfig, axarr = plt.subplots(2,1,figsize=(10,8))\n\n# Draw each plot\n\naxarr[0].plot(np.array(df['Day sin'])[:25])\naxarr[0].plot(np.array(df['Day cos'])[:25])\naxarr[0].set_title('Daily Frequency')\n\naxarr[1].plot(np.array(df['Year sin'])[:24*365])\naxarr[1].plot(np.array(df['Year cos'])[:24*365])\naxarr[1].set_title('Yearly Frequency');", "_____no_output_____" ], [ "# To corroborate these frequencies, run tf.signal.rfft on the temperature over time\nfft = tf.signal.rfft(df['T (degC)'])\nf_per_dataset = np.arange(0, len(fft))\n\nn_samples_h = len(df['T (degC)'])\nhours_per_year = 24*365.2524\nyears_per_dataset = n_samples_h/(hours_per_year)\n\nf_per_year = f_per_dataset/years_per_dataset\nplt.step(f_per_year, np.abs(fft))\nplt.xscale('log')\nplt.ylim(0, 400000)\nplt.xlim([0.1, max(plt.xlim())])\nplt.xticks([1, 365.2524], labels=['1/Year', '1/day'])\n_ = plt.xlabel('Frequency (log scale)')", "_____no_output_____" ] ], [ [ "We can see that the two peaks occur at 1/year and 1/day, which corroborates our assumptions.", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ] ], [ [ "# 3. Data Split\n\nWe will split the data as follows:\n- Training: 70%\n- Validation: 20%\n- Test: 10%", "_____no_output_____" ] ], [ [ "column_indices = {name: i for i, name in enumerate(df.columns)}\n\nn = len(df)\ntrain_df = df[0:int(n*0.7)]\nval_df = df[int(n*0.7):int(n*0.9)]\ntest_df = df[int(n*0.9):]\n\nnum_features = df.shape[1]", "_____no_output_____" ] ],
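 [ [ "# Check (an illustrative sketch, not in the original notebook): confirm the\n# resulting split sizes roughly match the intended 70/20/10 proportions\nlen(train_df), len(val_df), len(test_df)", "_____no_output_____" ] ],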
 [ [ "# 4. Data Normalization", "_____no_output_____" ] ], [ [ "# Normalize the training, validation and test sets\ntrain_mean = train_df.mean()\ntrain_std = train_df.std()\n\ntrain_df = (train_df - train_mean) / train_std\nval_df = (val_df - train_mean) / train_std\ntest_df = (test_df - train_mean) / train_std", "_____no_output_____" ], [ "df_std = (df - train_mean) / train_std\ndf_std = df_std.melt(var_name='Column', value_name='Normalized')\nplt.figure(figsize=(12, 6))\nax = sns.violinplot(x='Column', y='Normalized', data=df_std)\n_ = ax.set_xticklabels(df.keys(), rotation=90)", "_____no_output_____" ], [ "", "_____no_output_____" ],
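 [ "# A small helper sketch (hypothetical, added for illustration): model outputs in\n# normalized units can be mapped back to the original scale with the training\n# statistics computed above\ndef denormalize(values, col='T (degC)'):\n    return values * train_std[col] + train_mean[col]\n\ndenormalize(test_df['T (degC)']).head()", "_____no_output_____" ] ] ]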
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7da50cdce0be411c0136bd7daf6c67fbf85652f
144,309
ipynb
Jupyter Notebook
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
f0d30213d13dbc83d05d13ef2e9300355676c679
[ "MIT" ]
4
2021-02-23T07:42:31.000Z
2021-12-16T22:16:28.000Z
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
f0d30213d13dbc83d05d13ef2e9300355676c679
[ "MIT" ]
null
null
null
Coursera/IBM Python 01/Course02/ML Python Sharing.ipynb
brianshen1990/KeepLearning
f0d30213d13dbc83d05d13ef2e9300355676c679
[ "MIT" ]
null
null
null
144,309
144,309
0.768268
[ [ [ "<a href=\"https://www.bigdatauniversity.com\"><img src=\"https://ibm.box.com/shared/static/cw2c7r3o20w9zn8gkecaeyjhgw3xdgbj.png\" width=\"400\" align=\"center\"></a>\n\n<h1 align=\"center\"><font size=\"5\">Classification with Python</font></h1>", "_____no_output_____" ], [ "In this notebook we try to practice all the classification algorithms that we learned in this course.\n\nWe load a dataset using the Pandas library, apply the following algorithms, and find the best one for this specific dataset by accuracy evaluation methods.\n\nLet's first load the required libraries:", "_____no_output_____" ] ], [ [ "import itertools\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import NullFormatter\nimport pandas as pd\nimport matplotlib.ticker as ticker\nfrom sklearn import preprocessing\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### About dataset", "_____no_output_____" ], [ "This dataset is about past loans. The __Loan_train.csv__ data set includes details of 346 customers whose loans are already paid off or defaulted. It includes the following fields:\n\n| Field | Description |\n|----------------|---------------------------------------------------------------------------------------|\n| Loan_status | Whether a loan is paid off or in collection |\n| Principal | Basic principal loan amount at origination |\n| Terms | Origination terms which can be weekly (7 days), biweekly, and monthly payoff schedule |\n| Effective_date | When the loan got originated and took effect |\n| Due_date | Since it’s a one-time payoff schedule, each loan has one single due date |\n| Age | Age of applicant |\n| Education | Education of applicant |\n| Gender | The gender of applicant |", "_____no_output_____" ], [ "Let's download the dataset", "_____no_output_____" ] ], [ [ "!wget -O loan_train.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv", "--2020-05-22 14:48:38-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv\nResolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196\nConnecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 23101 (23K) [text/csv]\nSaving to: ‘loan_train.csv’\n\n100%[======================================>] 23,101 --.-K/s in 0.002s \n\n2020-05-22 14:48:38 (11.5 MB/s) - ‘loan_train.csv’ saved [23101/23101]\n\n" ] ], [ [ "### Load Data From CSV File ", "_____no_output_____" ] ], [ [ "df = pd.read_csv('loan_train.csv')\ndf.head()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ] ], [ [ "### Convert to date time object ", "_____no_output_____" ] ], [ [ "df['due_date'] = pd.to_datetime(df['due_date'])\ndf['effective_date'] = pd.to_datetime(df['effective_date'])\ndf.head()", "_____no_output_____" ] ], [ [ "# Data visualization and pre-processing\n\n", "_____no_output_____" ], [ "Let’s see how many of each class is in our data set ", "_____no_output_____" ] ], [ [ "df['loan_status'].value_counts()", "_____no_output_____" ] ], [ [ "260 people have paid off the loan on time while 86 have gone into collection \n", "_____no_output_____" ] ],
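 [ [ "A short note added for context (not part of the original lab): the classes are imbalanced, so always predicting the majority class already gives roughly 260/346 ≈ 75% accuracy, and any classifier built below should beat that baseline. A quick sketch of the check:", "_____no_output_____" ] ], [ [ "# majority-class baseline (sketch): relative frequency of each class\ndf['loan_status'].value_counts(normalize=True)", "_____no_output_____" ] ],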
200 OK\nLength: 23101 (23K) [text/csv]\nSaving to: ‘loan_train.csv’\n\n100%[======================================>] 23,101 --.-K/s in 0.002s \n\n2020-05-22 14:48:38 (11.5 MB/s) - ‘loan_train.csv’ saved [23101/23101]\n\n" ] ], [ [ "### Load Data From CSV File ", "_____no_output_____" ] ], [ [ "df = pd.read_csv('loan_train.csv')\ndf.head()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ] ], [ [ "### Convert to date time object ", "_____no_output_____" ] ], [ [ "df['due_date'] = pd.to_datetime(df['due_date'])\ndf['effective_date'] = pd.to_datetime(df['effective_date'])\ndf.head()", "_____no_output_____" ] ], [ [ "# Data visualization and pre-processing\n\n", "_____no_output_____" ], [ "Let's see how many of each class are in our data set ", "_____no_output_____" ] ], [ [ "df['loan_status'].value_counts()", "_____no_output_____" ] ], [ [ "260 people have paid off the loan on time while 86 have gone into collection \n", "_____no_output_____" ], [ "Let's plot some columns to understand the data better:", "_____no_output_____" ] ], [ [ "# notice: installing seaborn might take a few minutes\n!conda install -c anaconda seaborn -y", "Solving environment: done\n\n## Package Plan ##\n\n environment location: /opt/conda/envs/Python36\n\n added / updated specs: \n - seaborn\n\n\nThe following packages will be downloaded:\n\n package | build\n ---------------------------|-----------------\n openssl-1.1.1g | h7b6447c_0 3.8 MB anaconda\n certifi-2020.4.5.1 | py36_0 159 KB anaconda\n ca-certificates-2020.1.1 | 0 132 KB anaconda\n seaborn-0.10.1 | py_0 160 KB anaconda\n ------------------------------------------------------------\n Total: 4.2 MB\n\nThe following packages will be UPDATED:\n\n ca-certificates: 2020.1.1-0 --> 2020.1.1-0 anaconda\n certifi: 2020.4.5.1-py36_0 --> 2020.4.5.1-py36_0 anaconda\n openssl: 1.1.1g-h7b6447c_0 --> 1.1.1g-h7b6447c_0 anaconda\n seaborn: 0.9.0-pyh91ea838_1 --> 0.10.1-py_0 anaconda\n\n\nDownloading and Extracting Packages\nopenssl-1.1.1g | 3.8 MB | ##################################### | 100% \ncertifi-2020.4.5.1 | 159 KB | ##################################### | 100% \nca-certificates-2020 | 132 KB | ##################################### | 100% \nseaborn-0.10.1 | 160 KB | ##################################### | 100% \nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: done\n" ], [ "import seaborn as sns\n\nbins = np.linspace(df.Principal.min(), df.Principal.max(), 10)\ng = sns.FacetGrid(df, col=\"Gender\", hue=\"loan_status\", palette=\"Set1\", col_wrap=2)\ng.map(plt.hist, 'Principal', bins=bins, ec=\"k\")\n\ng.axes[-1].legend()\nplt.show()", "_____no_output_____" ], [ "bins = np.linspace(df.age.min(), df.age.max(), 10)\ng = sns.FacetGrid(df, col=\"Gender\", hue=\"loan_status\", palette=\"Set1\", col_wrap=2)\ng.map(plt.hist, 'age', bins=bins, ec=\"k\")\n\ng.axes[-1].legend()\nplt.show()", "_____no_output_____" ] ], [ [ "# Pre-processing: Feature selection/extraction", "_____no_output_____" ], [ "### Let's look at the day of the week people get the loan ", "_____no_output_____" ] ], [ [ "df['dayofweek'] = df['effective_date'].dt.dayofweek\nbins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)\ng = sns.FacetGrid(df, col=\"Gender\", hue=\"loan_status\", palette=\"Set1\", col_wrap=2)\ng.map(plt.hist, 'dayofweek', bins=bins, ec=\"k\")\ng.axes[-1].legend()\nplt.show()\n", "_____no_output_____" ] ], [ [ "We see that people who get the loan at the end of the week don't pay it off, so let's use feature binarization to set a threshold at 
day 4 (values below day 4 map to 0, the rest to 1) ", "_____no_output_____" ] ], [ [ "df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)\ndf.head()", "_____no_output_____" ] ], [ [ "## Convert Categorical features to numerical values", "_____no_output_____" ], [ "Let's look at gender:", "_____no_output_____" ] ], [ [ "df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)", "_____no_output_____" ] ], [ [ "86% of females pay their loans while only 73% of males pay theirs\n", "_____no_output_____" ], [ "Let's convert male to 0 and female to 1:\n", "_____no_output_____" ] ], [ [ "df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)\ndf.head()", "_____no_output_____" ] ], [ [ "## One Hot Encoding \n#### How about education?", "_____no_output_____" ] ], [ [ "df.groupby(['education'])['loan_status'].value_counts(normalize=True)", "_____no_output_____" ] ], [ [ "#### Feature before One Hot Encoding", "_____no_output_____" ] ], [ [ "df[['Principal','terms','age','Gender','education']].head()", "_____no_output_____" ] ], [ [ "#### Use the one hot encoding technique to convert categorical variables to binary variables and append them to the feature Data Frame ", "_____no_output_____" ] ], [ [ "Feature = df[['Principal','terms','age','Gender','weekend']]\nFeature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)\nFeature.drop(['Master or Above'], axis = 1,inplace=True)\nFeature.head()\n", "_____no_output_____" ] ], [ [ "### Feature selection", "_____no_output_____" ], [ "Let's define our feature set, X:", "_____no_output_____" ] ], [ [ "X = Feature\nX[0:5]", "_____no_output_____" ] ], [ [ "What are our labels?", "_____no_output_____" ] ], [ [ "y = df['loan_status'].values\ny[0:5]", "_____no_output_____" ] ], [ [ "## Normalize Data ", "_____no_output_____" ], [ "Data standardization gives the data zero mean and unit variance (technically it should be done after the train/test split)", "_____no_output_____" ] ], [ [ "X= preprocessing.StandardScaler().fit(X).transform(X)\nX[0:5]", "/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.\n return self.partial_fit(X, y)\n/opt/conda/envs/Python36/lib/python3.6/site-packages/ipykernel/__main__.py:1: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler.\n if __name__ == '__main__':\n" ] ], [ [ "# Classification ", "_____no_output_____" ], [ "Now, it is your turn: use the training set to build an accurate model. Then use the test set to report the accuracy of the model.\nYou should use the following algorithms:\n- K Nearest Neighbor(KNN)\n- Decision Tree\n- Support Vector Machine\n- Logistic Regression\n\n\n\n__Notice:__ \n- You can go above and change the pre-processing, feature selection, feature-extraction, and so on, to make a better model.\n- You should use either scikit-learn, Scipy or Numpy libraries for developing the classification algorithms.\n- You should include the code of the algorithm in the following cells.", "_____no_output_____" ], [ "# K Nearest Neighbor(KNN)\nNotice: You should find the best k to build the model with the best accuracy. 
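\n\nAs a quick, hedged sketch (not the lab's required approach), you could also search for the best k with cross-validation on the training data only, using the X and y arrays prepared above:\n\n```python\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.neighbors import KNeighborsClassifier\n\n# mean 5-fold cross-validated accuracy for each candidate k\ncv_scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()\n             for k in range(1, 15)}\nbest_k = max(cv_scores, key=cv_scores.get)\nprint(best_k, cv_scores[best_k])\n```\n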
\n**warning:** You should not use the __loan_test.csv__ for finding the best k, however, you can split your train_loan.csv into train and test to find the best __k__.", "_____no_output_____" ], [ "#### Train Test Split\nThis provides a more accurate evaluation of out-of-sample accuracy because the testing dataset is not part of the data that has been used to train the model. It is more realistic for real-world problems.", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)\nprint ('Train set:', X_train.shape, y_train.shape)\nprint ('Test set:', X_test.shape, y_test.shape)", "Train set: (276, 8) (276,)\nTest set: (70, 8) (70,)\n" ] ], [ [ "#### Calculate the best k from 1 to 14\nand plot the results to select", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import metrics\nKs = 15\nmean_acc = np.zeros((Ks-1))\nstd_acc = np.zeros((Ks-1))\nfor n in range(1,Ks):\n \n #Train Model and Predict \n neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)\n yhat=neigh.predict(X_test)\n mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)\n\n \n std_acc[n-1]=np.std(yhat==y_test)/np.sqrt(yhat.shape[0])\n\nmean_acc", "_____no_output_____" ], [ "plt.plot(range(1,Ks),mean_acc,'g')\nplt.fill_between(range(1,Ks),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)\nplt.legend(('Accuracy ', '+/- 1xstd'))\nplt.ylabel('Accuracy ')\nplt.xlabel('Number of Neighbors (K)')\nplt.tight_layout()\nplt.show()\nprint( \"The best accuracy was with\", mean_acc.max(), \"with k=\", mean_acc.argmax()+1) ", "_____no_output_____" ] ], [ [ "#### The answer\nIt seems k = 7 gives the best accuracy", "_____no_output_____" ], [ "# Decision Tree", "_____no_output_____" ], [ "#### Train Set and Test Set\nJust use the previous split, and use a Decision Tree to build the model (max_depth from 1 to 9; why? There are fewer than 10 attributes)", "_____no_output_____" ] ], [ [ "from sklearn.tree import DecisionTreeClassifier\nfrom sklearn import metrics\nimport matplotlib.pyplot as plt\n\nKs = 10\nacc = np.zeros((Ks-1))\nfor n in range(1,Ks):\n drugTree = DecisionTreeClassifier(criterion=\"entropy\", max_depth = n)\n drugTree # it shows the default parameters\n\n drugTree.fit(X_train,y_train)\n predTree = drugTree.predict(X_test)\n\n\n acc[n-1] = metrics.accuracy_score(y_test, predTree)\n\nacc\nplt.plot(range(1,Ks),acc,'g')\nplt.ylabel('Accuracy ')\nplt.xlabel('Depth (K)')\nplt.tight_layout()\nplt.show()\nprint( \"The best accuracy was with\", acc.max(), \"with k=\", acc.argmax()+1) \n ", "_____no_output_____" ] ], [ [ "And we will use depth 6 with the decision tree.\nWhy do depths 1 and 2 give these results? Most likely because such shallow trees underfit and essentially predict the majority class, which already yields a similar accuracy on this imbalanced dataset. 
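\nOne quick, hedged way to sanity-check that explanation (using the y_test split defined above) is to compare against the majority-class baseline accuracy:\n\n```python\nimport numpy as np\n\n# accuracy of always predicting the most common class in the test split\nbaseline = max(np.mean(y_test == c) for c in np.unique(y_test))\nprint(baseline)\n```\n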
", "_____no_output_____" ], [ "# Support Vector Machine", "_____no_output_____" ], [ "#### Data pre-processing and selection\nFor SVM, treat as numbers", "_____no_output_____" ] ], [ [ "Feature.dtypes\nfeature_df = Feature[['Principal', 'terms', 'age', 'Gender', 'weekend', 'Bechalor', 'High School or Below', 'college']]\nX_SVM = np.asarray(feature_df)\nX_SVM[0:5]", "_____no_output_____" ], [ "Y_Feature = [ 1 if i == \"PAIDOFF\" else 0 for i in df['loan_status'].values]\ny_SVM = np.asarray(Y_Feature)\ny_SVM [0:5]", "_____no_output_____" ] ], [ [ "#### Train and Test data\nSplit", "_____no_output_____" ] ], [ [ "X_train_SVM, X_test_SVM, y_train_SVM, y_test_SVM = train_test_split( X_SVM, y_SVM, test_size=0.2, random_state=4)\nprint ('Train set:', X_train_SVM.shape, y_train_SVM.shape)\nprint ('Test set:', X_test_SVM.shape, y_test_SVM.shape)", "Train set: (276, 8) (276,)\nTest set: (70, 8) (70,)\n" ] ], [ [ "#### Check Accuracy", "_____no_output_____" ] ], [ [ "from sklearn import svm\nclf = svm.SVC(kernel='rbf')\nclf.fit(X_train_SVM, y_train_SVM) \n\nyhat = clf.predict(X_test_SVM)\nyhat [0:5]", "/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.\n \"avoid this warning.\", FutureWarning)\n" ], [ "from sklearn.metrics import f1_score, jaccard_similarity_score\n\nf1_acc = f1_score(y_test_SVM, yhat, average='weighted') \njaccard_acc = jaccard_similarity_score(y_test_SVM, yhat)\nf1_acc, jaccard_acc", "_____no_output_____" ] ], [ [ "# Logistic Regression", "_____no_output_____" ], [ "#### Datset train and test\nJust use the K-Nearest one", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import confusion_matrix\nLR = LogisticRegression(C=0.01, solver='liblinear').fit(X_train,y_train)\nyhat = LR.predict(X_test)\n\nyhat_prob = LR.predict_proba(X_test)\nyhat, yhat_prob", "_____no_output_____" ] ], [ [ "#### Accuaracy\n", "_____no_output_____" ] ], [ [ "from sklearn.metrics import jaccard_similarity_score\njaccard_similarity_score(y_test, yhat)\nfrom sklearn.metrics import log_loss\nlog_loss(y_test, yhat_prob)", "_____no_output_____" ] ], [ [ "# Model Evaluation using Test set", "_____no_output_____" ] ], [ [ "from sklearn.metrics import jaccard_similarity_score\nfrom sklearn.metrics import f1_score\nfrom sklearn.metrics import log_loss", "_____no_output_____" ] ], [ [ "First, download and load the test set:", "_____no_output_____" ] ], [ [ "!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv", "--2020-05-22 15:47:37-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv\nResolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196\nConnecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.\nHTTP request sent, awaiting response... 
200 OK\nLength: 3642 (3.6K) [text/csv]\nSaving to: ‘loan_test.csv’\n\n100%[======================================>] 3,642 --.-K/s in 0s \n\n2020-05-22 15:47:37 (242 MB/s) - ‘loan_test.csv’ saved [3642/3642]\n\n" ] ], [ [ "### Load Test set for evaluation ", "_____no_output_____" ] ], [ [ "test_df = pd.read_csv('loan_test.csv')\ntest_df.head()", "_____no_output_____" ] ], [ [ "#### Prepare data ", "_____no_output_____" ] ], [ [ "test_df['due_date'] = pd.to_datetime(test_df['due_date'])\ntest_df['effective_date'] = pd.to_datetime(test_df['effective_date'])", "_____no_output_____" ], [ "test_df['dayofweek'] = test_df['effective_date'].dt.dayofweek\ntest_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)\ntest_df.head()", "_____no_output_____" ], [ "test_df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)\ntest_df.head()", "_____no_output_____" ], [ "test_Feature = test_df[['Principal','terms','age','Gender','weekend']]\ntest_Feature = pd.concat([test_Feature,pd.get_dummies(test_df['education'])], axis=1)\ntest_Feature.drop(['Master or Above'], axis = 1,inplace=True)\ntest_Feature.head()", "_____no_output_____" ], [ "feature_df = test_Feature[['Principal', 'terms', 'age', 'Gender', 'weekend', 'Bechalor', 'High School or Below', 'college']]\ntest_X_SVM = np.asarray(feature_df)\ntest_X_SVM[0:5]", "_____no_output_____" ], [ "test_y = test_df['loan_status'].values\ntest_y[0:5]", "_____no_output_____" ], [ "\ntest_Y_Feature = [ 1 if i == \"PAIDOFF\" else 0 for i in test_df['loan_status'].values]\ntest_y_SVM = np.asarray(test_Y_Feature)\ntest_y_SVM [0:5]\n\ntest_y_SVM[0:5]", "_____no_output_____" ] ], [ [ "#### K-Nearest ", "_____no_output_____" ] ], [ [ "neigh = KNeighborsClassifier(n_neighbors = 7).fit(X_train,y_train)\nyhat=neigh.predict(test_Feature)\n\nK_f1_acc = f1_score(test_y, yhat, average='weighted') \nk_jaccard_acc = jaccard_similarity_score(test_y, yhat)\nK_f1_acc, k_jaccard_acc", "/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/metrics/classification.py:1143: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.\n 'precision', 'predicted', average, warn_for)\n" ] ], [ [ "#### Decision tree", "_____no_output_____" ] ], [ [ "drugTree = DecisionTreeClassifier(criterion=\"entropy\", max_depth = 6)\ndrugTree.fit(X_train,y_train)\nyhat = drugTree.predict(test_Feature)\n\nDT_f1_acc = f1_score(test_y, yhat, average='weighted') \nDT_jaccard_acc = jaccard_similarity_score(test_y, yhat)\nDT_f1_acc, DT_jaccard_acc", "_____no_output_____" ] ], [ [ "#### SVM", "_____no_output_____" ] ], [ [ "clf = svm.SVC(kernel='rbf')\nclf.fit(X_train_SVM, y_train_SVM) \nyhat = clf.predict(test_X_SVM)\n\nSVM_f1_acc = f1_score(test_y_SVM, yhat, average='weighted') \nSVM_jaccard_acc = jaccard_similarity_score(test_y_SVM, yhat)\nSVM_f1_acc, SVM_jaccard_acc", "/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. 
Set gamma explicitly to 'auto' or 'scale' to avoid this warning.\n \"avoid this warning.\", FutureWarning)\n" ] ], [ [ "#### Logistic Regression", "_____no_output_____" ] ], [ [ "LR = LogisticRegression(C=0.01, solver='liblinear').fit(X_train,y_train)\nyhat = LR.predict(test_Feature)\n\nLR_f1_acc = f1_score(test_y, yhat, average='weighted') \nLR_jaccard_acc = jaccard_similarity_score(test_y, yhat)\n\nfrom sklearn.metrics import log_loss\nyhat_prob = LR.predict_proba(test_Feature)\nLR_log_loss = log_loss(test_y, yhat_prob)\n\nyhat, LR_f1_acc, LR_jaccard_acc, LR_log_loss", "/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/metrics/classification.py:1143: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.\n 'precision', 'predicted', average, warn_for)\n" ] ], [ [ "Wow! It seems LR does a bad job!", "_____no_output_____" ], [ "# Report\nYou should be able to report the accuracy of the built model using different evaluation metrics:", "_____no_output_____" ], [ "| Algorithm | Jaccard | F1-score | LogLoss |\n|--------------------|---------|----------|---------|\n| KNN | 0.6304176516942475 | 0.7407407407407407 | NA |\n| Decision Tree | 0.7252534070517485 | 0.7222222222222222 | NA |\n| SVM | 0.6717642373556352 | 0.7592592592592593 | NA |\n| LogisticRegression | 0.10675381263616558 | 0.25925925925925924 | 23.10553276265266 |", "_____no_output_____" ], [ "<h2>Want to learn more?</h2>\n\nIBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href=\"http://cocl.us/ML0101EN-SPSSModeler\">SPSS Modeler</a>\n\nAlso, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href=\"https://cocl.us/ML0101EN_DSX\">Watson Studio</a>\n\n<h3>Thanks for completing this lesson!</h3>\n\n<h4>Author: <a href=\"https://ca.linkedin.com/in/saeedaghabozorgi\">Saeed Aghabozorgi</a></h4>\n<p><a href=\"https://ca.linkedin.com/in/saeedaghabozorgi\">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increase clients’ ability to turn data into actionable knowledge. He is a researcher in the data mining field and an expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p>\n\n<hr>\n\n<p>Copyright &copy; 2018 <a href=\"https://cocl.us/DX0108EN_CC\">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href=\"https://bigdatauniversity.com/mit-license/\">MIT License</a>.</p>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
e7da655f6afcd02ad187aafb325af0ad236376f8
611,481
ipynb
Jupyter Notebook
notebooks_exploration/2-faturamento_total.ipynb
flimao/case-previsao-faturamento
333afe0f83a8acdf2f5f021a04530649b18915e4
[ "MIT" ]
null
null
null
notebooks_exploration/2-faturamento_total.ipynb
flimao/case-previsao-faturamento
333afe0f83a8acdf2f5f021a04530649b18915e4
[ "MIT" ]
null
null
null
notebooks_exploration/2-faturamento_total.ipynb
flimao/case-previsao-faturamento
333afe0f83a8acdf2f5f021a04530649b18915e4
[ "MIT" ]
1
2021-12-01T13:38:35.000Z
2021-12-01T13:38:35.000Z
544.506679
113,222
0.939522
[ [ [ "# 2 - Análise Exploratória de Séries Temporais - Faturamento Total\n\n<sub>Projeto para a disciplina de **Estatística** (Módulo 4) do Data Science Degree (turma de julho de 2020)</sub>", "_____no_output_____" ], [ "## Equipe\n\n* Felipe Lima de Oliveira\n* Mário Henrique Romagna Cesa\n* Tsuyioshi Valentim Fukuda\n* Fernando Raineri Monari\n\nLink para [projeto no Github](https://github.com/flimao/case-previsao-faturamento)", "_____no_output_____" ], [ "## Introdução\n\nEste notebook é uma continuação da análise exploratória inicial.\n\nNeste notebook, vamos progredir para a análise exploratória de séries temporais.", "_____no_output_____" ] ], [ [ "# importação de bibliotecas\nimport datetime as dt\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport json\n\n# importação de bibliotecas de análise\nfrom statsmodels.tsa.seasonal import seasonal_decompose\nfrom statsmodels.graphics.tsaplots import plot_acf, plot_pacf\nfrom statsmodels.tsa.stattools import acf, pacf\nfrom statsmodels.tsa.arima_process import ArmaProcess\nfrom pmdarima.arima import auto_arima\nfrom pmdarima.arima.arima import ARIMA\n\n# teste para verificar estacionariedade (Dickey-Fuller: https://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test)\nfrom statsmodels.tsa.stattools import adfuller\n\n# metricas \nfrom sklearn.metrics import mean_absolute_percentage_error as smape, mean_squared_error as smse, mean_absolute_error as smae \n\n# pacote com funções para análise desse projeto\nimport os\ncwd = os.getcwd()\nos.chdir(\"../\")\nimport py_scripts.plots, py_scripts.transform, py_scripts.metrics\nos.chdir(cwd)\n\nimport matplotlib as mpl\nmpl.rcParams['figure.dpi'] = 120\nmpl.rcParams['figure.figsize'] = (10, 4)", "_____no_output_____" ] ], [ [ "## Importação dos dados", "_____no_output_____" ] ], [ [ "ts_raw = pd.read_csv(r'../data/sim_ts_limpo.csv')\ntsd, tswide = py_scripts.transform.pipeline(ts_raw)\nfat_total = tswide.sum(axis = 'columns').dropna()", "_____no_output_____" ], [ "fat_total", "_____no_output_____" ], [ "fat_total.plot(linestyle = '', marker = 'o')\nplt.title('Faturamento (R$ bi)')\nplt.show()", "_____no_output_____" ] ], [ [ "## Análise Exploratória\n\nVamos primeiramente analisar o faturamento total contido na série histórica:", "_____no_output_____" ] ], [ [ "fat_total = tsd['total']\nfat_total.describe()", "_____no_output_____" ], [ "sns.scatterplot(data = fat_total)\nplt.title('Série histórica (últimos 4 anos)')\nplt.show()", "_____no_output_____" ] ], [ [ "Parece haver um salto entre 2014 e 2015 no faturamento total. \n\nEsse salto é devido ao lançamento de um outro produto, `transporte`. O faturamento deste novo produto é uma ordem de magnitude menor que o faturamento do produto `alimenticio` (como vimos brevemente no gráfico de dados faltantes e veremos com detalhes mais a frente), mas é o suficiente para que seja notado no faturamento total.", "_____no_output_____" ] ], [ [ "fig = plt.figure(figsize = (6, 8))\nsns.boxplot(y = fat_total)\nplt.ylabel('Faturamento total (R$ bi)')\nplt.title('Boxplot - Série Histórica completa')\nplt.show()", "_____no_output_____" ], [ "sns.histplot(fat_total)\nplt.xlabel('Faturamento total (R$ bi)')\nplt.title('Histograma - Série Histórica completa')\nplt.show()", "_____no_output_____" ] ], [ [ "No entanto, medidas descritivas de séries temporais devem ser tomadas em relação ao tempo. 
Let's break these measures down year by year:", "_____no_output_____" ] ], [ [ "n_anos = 4\nanos_recentes = fat_total[fat_total.index >= dt.datetime.now() - dt.timedelta(days = n_anos * 365) + pd.tseries.offsets.YearBegin()]\nanos_recentes.describe()", "_____no_output_____" ], [ "sns.scatterplot(data = anos_recentes)\nplt.title(f'Historical series (last {n_anos} years)')\nplt.show()", "_____no_output_____" ], [ "sns.boxplot(y = fat_total, x = fat_total.index.year)\nplt.ylabel('Total revenue (R$ bn)')\nplt.title('Boxplot - full historical series')\nplt.show()", "_____no_output_____" ], [ "\nsns.boxplot(y = anos_recentes, x = anos_recentes.index.year)\nplt.ylabel('Total revenue (R$ bn)')\nplt.title(f'Boxplot - historical series (last {n_anos} years)')\nplt.show()", "_____no_output_____" ] ], [ [ "There seem to be some *outliers* in 2021.\n\nHowever, the 2021 series is incomplete (it only goes up to October). Historically, there is a jump in revenue in August, which may be distorting the descriptive measures.", "_____no_output_____" ], [ "Excluding the year 2021...", "_____no_output_____" ] ], [ [ "anos_recentes_exc2021 = fat_total[(fat_total.index >= '2016') & (fat_total.index < '2021')]\nsns.boxplot(y = anos_recentes_exc2021, x = anos_recentes_exc2021.index.year)\nplt.ylabel('Total revenue (R$ bn)')\nplt.title(f'Boxplot - historical series (2016-2020)')\nplt.show()", "_____no_output_____" ] ], [ [ "The pandemic shows up in the data only as a slight increase of the median relative to the distance between Q1 and Q3.", "_____no_output_____" ] ], [ [ "sns.histplot(x = anos_recentes, hue = anos_recentes.index.year, multiple = 'dodge', shrink = .8, common_norm = False, palette = sns.color_palette()[:4])\nplt.xlabel('Total revenue (R$ bn)')\nplt.title(f'Histogram - historical series (last {n_anos} years)')\nplt.show()", "_____no_output_____" ] ], [ [ "The year-by-year histograms are better behaved than the histogram of the full historical series.", "_____no_output_____" ], [ "Doing a month-by-month analysis for each year...", "_____no_output_____" ] ], [ [ "hue = fat_total.index.year\n\npalette = []\nfor i, year in enumerate(hue.unique()):\n if year not in [2014, 2015]:\n palette += ['lightgray']\n else:\n palette += [sns.color_palette()[i]]\n\nax = sns.lineplot(\n y = fat_total, x = fat_total.index.month, \n hue = fat_total.index.year,\n palette = palette\n)\nax.set_xlabel('Month')\nax.set_ylabel('Total revenue (R$ bn)')\nax.set_title(f\"Revenue by month of the year\")\nplt.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "... the jump from 2014 to 2015 with the arrival of the new product is clearly visible.", "_____no_output_____" ], [ "## Stationarity", "_____no_output_____" ], [ "For the series of total monthly revenues to be decomposable, it must be stationary. \n\nIt does not appear to be, but let's test this with the Dickey-Fuller statistical test.\n\nThe null hypothesis of the Dickey-Fuller test is that the series is a random walk:", "_____no_output_____" ] ], [ [ "testedf = adfuller(fat_total)\npvalor = testedf[1]\nalpha = 0.05\n\nprint(f'p-value: {pvalor:.3%}', end = '')\n\nif pvalor < alpha:\n print(f' < {alpha:.0%}')\n print(' The monthly revenue series is stationary. We reject the hypothesis that the series is a random walk.')\nelse:\n print(f' > {alpha:.0%}')\n print(' The monthly revenue series is a random walk. 
We cannot reject the null hypothesis.')", "p-value: 98.342% > 5%\n The monthly revenue series is a random walk. We cannot reject the null hypothesis.\n" ] ], [ [ "This is evidenced by the decomposition of the time series.\n\n## Decomposition into Fourier series", "_____no_output_____" ] ], [ [ "decomp_total = seasonal_decompose(fat_total)\n\n# plot \nfig, axs = plt.subplots(nrows = 4, figsize = (10, 8), sharex = True)\n\nsns.lineplot(data = fat_total, ax = axs[0])\naxs[0].set_title('Total revenue')\n\nsns.lineplot(data = decomp_total.trend, ax = axs[1])\naxs[1].set_ylabel('Trend')\n\nsns.lineplot(data = decomp_total.seasonal, ax = axs[2])\naxs[2].set_ylabel('Seasonality')\n\nresid = (decomp_total.resid - decomp_total.resid.mean())/decomp_total.resid.std()\nsns.scatterplot(data = resid, ax = axs[3])\naxs[3].set_ylabel('Residual')\n\nfig.suptitle(f\"Time series decomposition: total revenue\")\nplt.show()", "_____no_output_____" ] ], [ [ "As shown earlier, this time series is not stationary, which we can see through the standardized residuals in the last panel (where there is a clear oscillatory pattern).", "_____no_output_____" ], [ "## Autoregressive model - Total revenue", "_____no_output_____" ], [ "To analyze and forecast this time series, a more complete model is needed. Here we will use a seasonal autoregressive integrated moving-average model - **SARIMA**.\n\nNote: the full model is called SARIMAX; the additional `X` allows modeling exogenous variables. However, we will not use exogenous variables in this case.", "_____no_output_____" ] ], [ [ "# excluding the pre-2015 period\n\ntest_begin = '2020-01-01'\nfat_modelo = fat_total['2015-01-01':]\n\ntotal_train = fat_modelo[:test_begin].iloc[:-1]\ntotal_test = fat_modelo[test_begin:]\n\n\ntrain_test_split_idx = int(fat_modelo.shape[0] * 0.8 + 1)\ntotal_train = fat_modelo[:train_test_split_idx]\ntotal_test = fat_modelo[train_test_split_idx:]\n\ntotal_train.plot(label = 'Train')\ntotal_test.plot(label = 'Test')\nplt.title('Train test split - total revenue')\nplt.ylabel('Total revenue (R$ bn)')\nplt.legend()\nplt.show()", "_____no_output_____" ] ], [ [ "The SARIMA model has a few parameters, `S(P, D, Q, S)`, `AR(p)`, `I(d)` and `MA(q)`.\n\nTo determine the parameter `d`, a good indication is the autocorrelation plot:", "_____no_output_____" ] ], [ [ "fig = plt.figure()\nax = fig.gca()\nplot_pacf(fat_modelo, lags = 20, method = 'ywm', ax = ax)\nax.set_xlabel('Lags')\nax.set_title('Partial autocorrelation - total revenue series')\nplt.show()", "_____no_output_____" ] ], [ [ "In this case, a good estimate for the parameter `d` is the number of *lags* at which the correlation is statistically significant, minus 1. 
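\n\nA rough way to count those significant lags in code (a hedged sketch; it reuses the pacf function already imported above, with 1.96/sqrt(n) as the approximate 95% confidence band):\n\n```python\nimport numpy as np\n\nvals = pacf(fat_modelo, nlags = 20)\nband = 1.96 / np.sqrt(len(fat_modelo))\n# lag 0 is always 1, so skip it when counting significant lags\nn_sig = int((np.abs(vals[1:]) > band).sum())\nprint(n_sig, 'significant lags; d estimate:', max(n_sig - 1, 0))\n```\n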
\n\nIn this case, $d \\sim 1$.", "_____no_output_____" ] ], [ [ "arimas = {}\narimas['total'] = auto_arima(\n y = total_train,\n start_p = 1, max_p = 3,\n d = 2, max_d = 4,\n start_q = 1, max_q = 3,\n start_P = 1, max_P = 3,\n D = None, max_D = 4,\n start_Q = 1, max_Q = 3,\n #max_order = 6,\n m = 12,\n seasonal = True,\n alpha = 0.05,\n stepwise = True,\n trace = True,\n n_fits = 500,\n)\n", "Performing stepwise search to minimize aic\n ARIMA(1,2,1)(1,1,1)[12] : AIC=inf, Time=0.66 sec\n ARIMA(0,2,0)(0,1,0)[12] : AIC=1799.693, Time=0.02 sec\n ARIMA(1,2,0)(1,1,0)[12] : AIC=1800.522, Time=0.13 sec\n ARIMA(0,2,1)(0,1,1)[12] : AIC=1800.650, Time=0.18 sec\n ARIMA(0,2,0)(1,1,0)[12] : AIC=1798.664, Time=0.08 sec\n ARIMA(0,2,0)(2,1,0)[12] : AIC=1799.958, Time=0.39 sec\n ARIMA(0,2,0)(1,1,1)[12] : AIC=1797.055, Time=0.39 sec\n ARIMA(0,2,0)(0,1,1)[12] : AIC=1801.181, Time=0.07 sec\n ARIMA(0,2,0)(2,1,1)[12] : AIC=inf, Time=1.50 sec\n ARIMA(0,2,0)(1,1,2)[12] : AIC=1798.612, Time=1.50 sec\n ARIMA(0,2,0)(0,1,2)[12] : AIC=1798.699, Time=0.20 sec\n ARIMA(0,2,0)(2,1,2)[12] : AIC=1801.126, Time=1.60 sec\n ARIMA(1,2,0)(1,1,1)[12] : AIC=1798.847, Time=0.40 sec\n ARIMA(0,2,1)(1,1,1)[12] : AIC=1798.794, Time=0.37 sec\n ARIMA(0,2,0)(1,1,1)[12] intercept : AIC=1799.292, Time=0.34 sec\n\nBest model: ARIMA(0,2,0)(1,1,1)[12] \nTotal fit time: 7.875 seconds\n" ], [ "modelo_corrente = ARIMA(order = (0, 2, 0), seasonal_order = (1, 1, 1, 12), with_intercept = True).fit(y = total_train)\n\nmodelo_funcional = [\n ARIMA(order = (0, 1, 0), seasonal_order = (0, 1, 0, 12), with_intercept = False).fit(y = total_train),\n ARIMA(order = (0, 2, 3), seasonal_order = (2, 1, 1, 12), with_intercept = False).fit(y = total_train),\n ARIMA(order = (0, 2, 0), seasonal_order = (1, 1, 1, 12), with_intercept = False).fit(y = total_train),\n ARIMA(order = (0, 2, 0), seasonal_order = (1, 1, 1, 12), with_intercept = True).fit(y = total_train),\n]", "C:\\ProgramData\\Anaconda3\\envs\\dsd\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:978: UserWarning: Non-invertible starting MA parameters found. Using zeros as starting parameters.\n warn('Non-invertible starting MA parameters found.'\nC:\\ProgramData\\Anaconda3\\envs\\dsd\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:1009: UserWarning: Non-invertible starting seasonal moving average Using zeros as starting parameters.\n warn('Non-invertible starting seasonal moving average'\nC:\\ProgramData\\Anaconda3\\envs\\dsd\\lib\\site-packages\\statsmodels\\base\\model.py:604: ConvergenceWarning: Maximum Likelihood optimization failed to converge. 
Check mle_retvals\n warnings.warn(\"Maximum Likelihood optimization failed to \"\nC:\\ProgramData\\Anaconda3\\envs\\dsd\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:997: UserWarning: Non-stationary starting seasonal autoregressive Using zeros as starting parameters.\n warn('Non-stationary starting seasonal autoregressive'\nC:\\ProgramData\\Anaconda3\\envs\\dsd\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:1009: UserWarning: Non-invertible starting seasonal moving average Using zeros as starting parameters.\n warn('Non-invertible starting seasonal moving average'\n" ], [ "arimas['total'] = ARIMA(order = (0, 2, 0), seasonal_order = (1, 1, 1, 12), with_intercept = True).fit(y = total_train)", "C:\\ProgramData\\Anaconda3\\envs\\dsd\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:997: UserWarning: Non-stationary starting seasonal autoregressive Using zeros as starting parameters.\n warn('Non-stationary starting seasonal autoregressive'\nC:\\ProgramData\\Anaconda3\\envs\\dsd\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:1009: UserWarning: Non-invertible starting seasonal moving average Using zeros as starting parameters.\n warn('Non-invertible starting seasonal moving average'\n" ], [ "arimas['total'].summary()", "_____no_output_____" ] ], [ [ "### Metrics for the SARIMAX autoregressive model", "_____no_output_____" ], [ "First, we can evaluate the fit visually:", "_____no_output_____" ] ], [ [ "n_test_periods = total_test.shape[0]\narr_preds = arimas['total'].predict(n_test_periods)\n\nidx = pd.date_range(freq = 'MS', start = total_test.index[0], periods = n_test_periods)\npreds = pd.Series(arr_preds, index = idx)\npreds.name = 'yearly_preds'\n\npreds.plot(label = 'Prediction')\ntotal_test.plot(label = 'Test set')\n\nplt.legend()\nplt.ylabel('Total revenue (R$ bn)')\nplt.title('Prediction vs. test set')\nplt.show()", "_____no_output_____" ] ], [ [ "Let's apply some quantitative metrics to the model:", "_____no_output_____" ] ], [ [ "kwargs_total = dict(\n y_true = total_test,\n y_pred = preds,\n n = total_train.shape[0],\n dof = arimas['total'].df_model()\n)\n\npy_scripts.metrics.mostrar_metricas(**kwargs_total)", "Métricas:\n MAPE: 0.797%\n RMSE: 2.015e+07\n MAE: 1.683e+07\n R²: 89.746%\n R² adj.: 89.074%\n" ], [ "arimas['total'].arima_res_.data.endog", "_____no_output_____" ], [ "fat_total['2019-06':'2020-06']", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7da69bb39ccc7b700a5a40b695826cbe4216bea
39,205
ipynb
Jupyter Notebook
courses/modsim2018/tasks/Tasks_ForLectures10and11/.ipynb_checkpoints/Tasks_During_Lecture10-checkpoint.ipynb
raissabthibes/bmc
840800fb94ea3bf188847d0771ca7197dfec68e3
[ "MIT" ]
null
null
null
courses/modsim2018/tasks/Tasks_ForLectures10and11/.ipynb_checkpoints/Tasks_During_Lecture10-checkpoint.ipynb
raissabthibes/bmc
840800fb94ea3bf188847d0771ca7197dfec68e3
[ "MIT" ]
null
null
null
courses/modsim2018/tasks/Tasks_ForLectures10and11/.ipynb_checkpoints/Tasks_During_Lecture10-checkpoint.ipynb
raissabthibes/bmc
840800fb94ea3bf188847d0771ca7197dfec68e3
[ "MIT" ]
null
null
null
76.722114
14,624
0.80375
[ [ [ "## Task of muscle Modeling", "_____no_output_____" ] ], [ [ "import numpy as np\nimport math\nimport matplotlib.pyplot as plt\n%matplotlib notebook \n#inline", "_____no_output_____" ] ], [ [ "# Nova Proposta\n## Normalizando a Força pela Força máxima", "_____no_output_____" ], [ "# Propriedades do vasto medial\nUmax = 0.04\nLslack = 0.223\nLce = 0.087\nLceopt = 0.093\nwidth = 0.63 * Lceopt\nFmax = 7400;\na = 0.25 # * Fmax\nb = 0.25*10 * Lceopt\n\n# Condições Iniciais\nphi = np.pi/2\nphid = 0\n#Lce = 0.31 - Lslack\nt0 = 0\ntend = 2.99\nh = 0.001", "_____no_output_____" ], [ "t = np.arange(t0,tend,h)\n\nLce_2 = np.empty_like(t)\nLce_2[0] = 0\n\nF = np.empty_like(t)\nF[0] = 0\n\nFkpe = np.empty_like(t)\nFkpe[0] = 0", "_____no_output_____" ], [ "Tenho de tirar o Fmax", "_____no_output_____" ], [ "for i in range (1,len(t)):\n if t[i]<=1: Lm = 0.31\n \n if t[i]>1 and t[i]<2: Lm = 0.31 - 0.04*(t[i]-1)\n \n Lsee = Lm - Lce\n \n if (Lsee < Lslack):\n FTendonNorm = 0; \n else:\n FTendonNorm = ((Lsee-Lslack)/(Umax*Lslack))**2;\n \n if (Lce < Lceopt):\n FkpeNorm = 0; \n else:\n FkpeNorm = ((Lce-Lceopt)/(Umax*Lceopt))**2;\n \n F0 = max([0, (1-((Lce-Lceopt)/width)**2)])\n \n if FTendonNorm > F0: pass#print('Error: can not do excentric contractions')\n \n Lcedt = -b*(F0-(FTendonNorm-FkpeNorm)) / ((FTendonNorm-FkpeNorm)+a)\n \n # Euler intergration Step\n Lce = Lce + h * Lcedt\n \n F[i] = FTendonNorm\n Lce_2[i] = Lce", "_____no_output_____" ], [ "plt.plot (t,F)\nplt.ylabel('Force [N]')\nplt.xlabel('time [s]')\nplt.show()", "_____no_output_____" ], [ "F = F * Fmax", "_____no_output_____" ], [ "plt.plot (t,F)\nplt.ylabel('Force [N]')\nplt.xlabel('time [s]')\nplt.show()", "_____no_output_____" ], [ "# Nova Proposta 3\n### Normalizando a Força pela Força máxima\n### E pelo comprimento ótimo do elemento contrátil --> (Lceopt)\nVou tirar dividir tudo o que tem Fmax por Fmax e tudo o que tem Lceopt por Lceopt", "_____no_output_____" ] ], [ [ "# Propriedades do vasto medial\nUmax = 0.04\nLslack = 0.223 \nLceopt = 0.093\nLceNorm = 0.087 / Lceopt\nwidth = 0.63 \nFmax = 7400;\na = 0.25 # * Fmax\nb = 0.25*10\n\n# Condições Iniciais\nphi = np.pi/2\nphid = 0\n#Lce = 0.31 - Lslack\nt0 = 0\ntend = 2.99\nh = 0.001\n\n\n# Inicializar\nt = np.arange(t0,tend,h)\n\nLce_2 = np.empty_like(t); Lce_2[0] = 0\n\nF = np.empty_like(t); F[0] = 0\n\nFkpe = np.empty_like(t); Fkpe[0] = 0\n\nfiberLength = np.empty_like(t); fiberLength[0] = 0\n \ntendonLength = np.empty_like(t); tendonLength[0] = 0\n\n\n# Integração por Euler\nfor i in range (1,len(t)):\n if t[i]<=1: Lm = 0.31\n \n if t[i]>1 and t[i]<2: Lm = 0.31 - 0.04*(t[i]-1)\n \n LseeNorm = Lm/Lceopt - LceNorm\n \n if (LseeNorm < Lslack/Lceopt):\n FTendonNorm = 0; \n else:\n FTendonNorm = ((LseeNorm-Lslack/Lceopt)/(Umax*Lslack/Lceopt))**2;\n \n if (LceNorm < 1):\n FkpeNorm = 0; \n else:\n FkpeNorm = ((LceNorm-1)/(Umax))**2;\n \n F0 = max([0, (1-((LceNorm-1)/width)**2)])\n \n if FTendonNorm > F0: pass #print('Error: can not do excentric contractions')\n \n LceNormdt = -b*(F0-(FTendonNorm-FkpeNorm)) / ((FTendonNorm-FkpeNorm)+a)\n \n # Euler intergration Step\n LceNorm = LceNorm + h * LceNormdt\n \n F[i] = FTendonNorm #* Fmax\n \n fiberLength[i] = LceNorm * Lceopt\n \n tendonLength[i] = LseeNorm * Lceopt\n \nFiberTendon = fiberLength + tendonLength\n\n \n# Plot\n\nplt.plot (t,F)\nplt.ylabel('Force [N]')\nplt.xlabel('time [s]')\nplt.show()", "_____no_output_____" ], [ "fig, ax = plt.subplots(1,3,figsize=(6,6), sharex=True)\n\nax[0].plot(t, fiberLength, label = 
'Fiber')\nax[0].plot(t, tendonLength, label = 'Tendon')\nax[0].grid()\nplt.legend(loc = 'best')\nplt.xlabel('Time [s]')\nplt.ylabel('Length [m]')\n\n\n", "_____no_output_____" ], [ "def computeTendonForce(LseeNorm, Lslack, Lceopt):\n '''\n Compute tendon force\n \n Inputs:\n LseeNorm - Normalized tendon length\n \n Lslack - slack length of the tendon (non-normalized)\n \n Lceopt - Optimal length of the fiber\n \n Output:\n FTendonNorm - Normalized tendon force\n \n '''\n Umax = 0.04\n \n if (LseeNorm < Lslack/Lceopt):\n FTendonNorm = 0; \n else:\n FTendonNorm = ((LseeNorm-Lslack/Lceopt)/(Umax*Lslack/Lceopt))**2;\n \n return FTendonNorm\n", "_____no_output_____" ], [ "def computeParallelElementForce(LceNorm):\n '''\n Compute parallel element force\n \n Input:\n LceNorm - Normalized contractile element length\n \n Output:\n FkpeNorm - Normalized force of the parallel elastic element\n \n '''\n Umax = 0.04 # strain at maximum force, consistent with the simulation above\n \n if LceNorm < 1:\n FkpeNorm = 0; \n else:\n FkpeNorm = ((LceNorm-1)/(Umax))**2;\n \n return FkpeNorm\n", "_____no_output_____" ], [ "def computeForceLengthCurve(LceNorm):\n '''\n Compute force-length curve\n \n Input:\n LceNorm - Normalized contractile element length\n \n Output:\n F0 - Normalized force from the force-length relationship\n \n '''\n \n \n width = 0.63 # same width as in the simulation above\n F0 = max([0, (1-((LceNorm-1)/width)**2)])\n \n return F0\n", "_____no_output_____" ], [ "def computeContractileElementDerivative(F0, FCE):\n '''\n Compute the time derivative of the normalized contractile element length\n \n Inputs:\n F0 - Normalized force from the force-length relationship\n FCE - Normalized contractile element force\n \n Output:\n LceNormdt - Derivative of the normalized contractile element length\n \n '''\n a = 0.25\n b = 0.25 * 10\n \n if FCE > F0:\n print('Error: can not do eccentric contractions')\n LceNormdt = -b*(F0-FCE) / (FCE+a)\n \n return LceNormdt\n", "_____no_output_____" ], [ "# Vastus medialis properties\nUmax = 0.04\nLslack = 0.223 \nLceopt = 0.093\nLceNorm = 0.087 / Lceopt\nwidth = 0.63 \nFmax = 7400;\na = 0.25 # * Fmax\nb = 0.25*10\n\n# Initial conditions\nphi = np.pi/2\nphid = 0\n#Lce = 0.31 - Lslack\nt0 = 0\ntend = 2.99\nh = 0.001\n\n\n# Initialize\nt = np.arange(t0,tend,h)\n\nLce_2 = np.empty_like(t); Lce_2[0] = 0\n\nF = np.empty_like(t); F[0] = 0\n\nFkpe = np.empty_like(t); Fkpe[0] = 0\n\nfiberLength = np.empty_like(t); fiberLength[0] = 0\n \ntendonLength = np.empty_like(t); tendonLength[0] = 0\n\n\n# Euler integration\nfor i in range (1,len(t)):\n if t[i]<=1: Lm = 0.31\n \n if t[i]>1 and t[i]<2: Lm = 0.31 - 0.04*(t[i]-1)\n \n LseeNorm = Lm/Lceopt - LceNorm\n \n if (LseeNorm < Lslack/Lceopt):\n FTendonNorm = 0; \n else:\n FTendonNorm = ((LseeNorm-Lslack/Lceopt)/(Umax*Lslack/Lceopt))**2;\n \n if (LceNorm < 1):\n FkpeNorm = 0; \n else:\n FkpeNorm = ((LceNorm-1)/(Umax))**2;\n \n F0 = max([0, (1-((LceNorm-1)/width)**2)])\n \n if FTendonNorm > F0: pass #print('Error: can not do eccentric contractions')\n \n LceNormdt = -b*(F0-(FTendonNorm-FkpeNorm)) / ((FTendonNorm-FkpeNorm)+a)\n \n # Euler integration step\n LceNorm = LceNorm + h * LceNormdt\n \n F[i] = FTendonNorm #* Fmax\n \n fiberLength[i] = LceNorm * Lceopt\n \n tendonLength[i] = LseeNorm * Lceopt\n \nFiberTendon = fiberLength + tendonLength\n\n \n# Plot\n\nplt.plot (t,F)\nplt.ylabel('Force [N]')\nplt.xlabel('time [s]')\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7da73d20b04345f3ea7d9d363fe427592574781
3,390
ipynb
Jupyter Notebook
01-python-basics/22-variable-scope-python.ipynb
sergiofgonzalez/python-in-action
7adaf3b5029e88fd1dce67d614e34780f6697460
[ "MIT" ]
null
null
null
01-python-basics/22-variable-scope-python.ipynb
sergiofgonzalez/python-in-action
7adaf3b5029e88fd1dce67d614e34780f6697460
[ "MIT" ]
null
null
null
01-python-basics/22-variable-scope-python.ipynb
sergiofgonzalez/python-in-action
7adaf3b5029e88fd1dce67d614e34780f6697460
[ "MIT" ]
null
null
null
22.156863
169
0.527729
[ [ [ "# Python in Action\n## Part 1: Python Fundamentals\n### 22 &mdash; Variable scope rules in Python\n> scope and lifecycle of variables in Python\n\n", "_____no_output_____" ], [ "When you declare a variable outside of any function in Python, the variable will be visible to any code after the declaration. That is called a global variable.\n\nNote that you will be able to see the value of the variable without requiring any additional keyword.\n\nSee below about the use of the `global` keyword when you need to modify the value of a global variable:", "_____no_output_____" ] ], [ [ "name = 'Jason'\n\ndef say_hello():\n print(f'Hello, {name}')\n\nsay_hello()\nprint('Your name is ' + name)", "Hello, Jason\nYour name is Jason\n" ] ], [ [ "When you define a variable inside a function, that function will only be visible within that function. That is called a local variable:", "_____no_output_____" ] ], [ [ "name = 'Jason'\n\ndef say_hello():\n name = 'Idris'\n print(f'Hello, {name}!')\n\nsay_hello()\nprint(name)", "Hello, Idris!\nJason\n" ] ], [ [ "#### The `global` keyword\n\nThere might be situations on which you would like to modify the value of a global variable within the scope of a function.\n\nWhen that happens, you will be required to use the `global` keyword:", "_____no_output_____" ] ], [ [ "name = 'Jason'\n\ndef say_hello():\n print(f'Hi, {name}!')\n\ndef holler_hello():\n global name\n name = name.upper()\n print(f'HI, {name}!')\n\nsay_hello()\nholler_hello()\nprint(name)", "Hi, Jason!\nHI, JASON!\nJASON\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dabf0c8e51d5450c64f4fb7289c5c56aa52e6f
6,881
ipynb
Jupyter Notebook
final_project_1.ipynb
ashleyparrilla/astro-detection
c442de742e23b102d29af684206878162e2eb74e
[ "MIT" ]
null
null
null
final_project_1.ipynb
ashleyparrilla/astro-detection
c442de742e23b102d29af684206878162e2eb74e
[ "MIT" ]
null
null
null
final_project_1.ipynb
ashleyparrilla/astro-detection
c442de742e23b102d29af684206878162e2eb74e
[ "MIT" ]
null
null
null
22.486928
106
0.531173
[ [ [ "import numpy as np\nimport sep", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nfrom astropy.io import fits #displaying plots \nfrom matplotlib import rcParams\n%matplotlib online\nrcParams['figure.fisize'] = [10.,8.]", "_____no_output_____" ] ], [ [ "### open the FITS file", "_____no_output_____" ] ], [ [ "#read image into a 2-D numpy array\nfname = \"image.fits\"\nhdu_list = fits.open(fname)\nhdu_list.info()", "_____no_output_____" ], [ "#access image by indexing hdu_list\nimage_data = hdu_list[0].data", "_____no_output_____" ], [ "#data is stored as a 2D numpy array. Show the shape of the array\nprint(type(image_data))\nprint(image_data.shape)", "_____no_output_____" ] ], [ [ "### show data", "_____no_output_____" ] ], [ [ "#show the image\nm,s = np.mean(image_data), np.std(image_data)\nplt.imshow(image_data, interpolation='nearest', cmap='gray', vmin=m-s, vmax=m+s, origin='lower',)\nplt.colorbar();\nplt.savefig('image_1.png')", "_____no_output_____" ] ], [ [ "### background subtraction", "_____no_output_____" ] ], [ [ "#measure spatially varying background on image\nbkg = sep.Background(image_data)", "_____no_output_____" ], [ "bkg = sep.Background(image_data, bw=64, bh=64, fw=3, fh=3)", "_____no_output_____" ], [ "#get global mean and noise of image's background\nprint(bkg.globalback)\nprint(bkg.globalrms)", "_____no_output_____" ], [ "#evaluate background as 2-D array but same size as original image\nbkg_image = bkg.back()\n#bkg_image = np.array(bkg)", "_____no_output_____" ], [ "#show background\nplt.imshow(bkg_image,interpolation='nearest',cmap='gray',origin='lower')\nplt.colorbar();\nplt.savefig('image_2.png')", "_____no_output_____" ], [ "#evaluate background noise as 2-D array, same size as original image\nbkg_rms = bkg.rms()", "_____no_output_____" ], [ "#show background noise\nplt.imshow(bkg_rms,interpolation='nearest',cmap='gray',origin='lower')\nplt.colorbar();\nplt.savefig('image_3.pdf')", "_____no_output_____" ], [ "#subtract background\nimage_data_sub = image_data - bkg", "_____no_output_____" ] ], [ [ "### object detection", "_____no_output_____" ] ], [ [ "#set detection threshold to be a constant value of 1.5*sigma\n#sigma=global background rms\nobjects = sep.extract(image_data_sub, 1.5, err=bkg.globalrms)", "_____no_output_____" ], [ "#number of objects detected\nlen(objects)", "_____no_output_____" ], [ "#over-plot the object coordinates with some parameters on the image\n#this will check where the detected objects are\nfrom matplotlib.patches import Ellipse\n\n#plot background-subtracted image\nfig, ax = plt.subplots()\nm,s = np.mean(image_data_sub), np.std(image_data_sub)\nim = ax.imshow(image_data_sub, interpolation='nearest', cmap='gray',\n vmin=m-s,vmax=m+s,origin='lower')\n\n#plot an ellipse for each object\nfor i in range(len(objects)):\n e = Ellipse(xy=(objects['x'][i],objects['y'][i]),\n width=6*objects['a'][i],\n height=6*objects['b'][i],\n angle=objects['theta'][i]*180./np.pi)\n e.set_facecolor('none')\n e.set_edgecolor('red')\n ax.add_artist(e)\n\nplt.savefig('image_4.png')", "_____no_output_____" ], [ "#see available fields\nobjects.dtype.names", "_____no_output_____" ] ], [ [ "### aperture photometry", "_____no_output_____" ] ], [ [ "#perform circular aperture photometry \n#with a 3 pixel radius at locations of the objects\nflux, fluxerr, flag = sep.sum_circle(image_data_sub,objects['x'], objects['y'],\n 3.0, err=bkg.globalrms, gain=1.0)", "_____no_output_____" ], [ "#show the first 10 objects results:\nfor i in range(10):\n 
print(\"object {:d}: flux = {:f} +/- {:f}\".format(i, flux[i], fluxerr[i]))\n ", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7dac2a5b3b856afcb6478fae1ec9dca409742ec
24,500
ipynb
Jupyter Notebook
notebooks/advanced/Getting_started_with_AutoML/Getting_started_with_AutoML.ipynb
yinti/forecast-sagemaker
529a2df2b32f7449a55777f2abcbbeeb377383dc
[ "MIT-0" ]
null
null
null
notebooks/advanced/Getting_started_with_AutoML/Getting_started_with_AutoML.ipynb
yinti/forecast-sagemaker
529a2df2b32f7449a55777f2abcbbeeb377383dc
[ "MIT-0" ]
null
null
null
notebooks/advanced/Getting_started_with_AutoML/Getting_started_with_AutoML.ipynb
yinti/forecast-sagemaker
529a2df2b32f7449a55777f2abcbbeeb377383dc
[ "MIT-0" ]
null
null
null
30.135301
435
0.512898
[ [ [ "# How to use Amazon Forecast\n\nHelps advanced users start with Amazon Forecast quickly. The demo notebook runs through a typical end to end usecase for a simple timeseries forecasting scenario. \n\nPrerequisites: \n[AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) . \n\nFor more informations about APIs, please check the [documentation](https://docs.aws.amazon.com/forecast/latest/dg/what-is-forecast.html)\n\n## Table Of Contents\n* [Setting up](#setup)\n* [Test Setup - Running first API](#hello)\n* [Forecasting Example with Amazon Forecast](#forecastingExample)\n\n**Read Every Cell FULLY before executing it**\n", "_____no_output_____" ], [ "## Setup <a class=\"anchor\" id=\"setup\"></a>", "_____no_output_____" ] ], [ [ "import sys\nimport os\nimport time\n\nimport boto3\n\n# importing forecast notebook utility from notebooks/common directory\nsys.path.insert( 0, os.path.abspath(\"../../common\") )\nimport util", "_____no_output_____" ] ], [ [ "Configure the S3 bucket name and region name for this lesson.\n\n- If you don't have an S3 bucket, create it first on S3.\n- Although we have set the region to us-west-2 as a default value below, you can choose any of the regions that the service is available in.", "_____no_output_____" ] ], [ [ "text_widget_bucket = util.create_text_widget( \"bucketName\", \"input your S3 bucket name\" )\ntext_widget_region = util.create_text_widget( \"region\", \"input region name.\", default_value=\"us-west-2\" )", "_____no_output_____" ], [ "bucketName = text_widget_bucket.value\nassert bucketName, \"bucket_name not set.\"\n\nregion = text_widget_region.value\nassert region, \"region not set.\"", "_____no_output_____" ], [ "session = boto3.Session(region_name=region) \n\nforecast = session.client(service_name='forecast') \nforecastquery = session.client(service_name='forecastquery')", "_____no_output_____" ] ], [ [ "## Forecasting with Amazon Forecast<a class=\"anchor\" id=\"forecastingExample\"></a>\n### Preparing your Data", "_____no_output_____" ], [ "In Amazon Forecast , a dataset is a collection of file(s) which contain data that is relevant for a forecasting task. A dataset must conform to a schema provided by Amazon Forecast. ", "_____no_output_____" ], [ "For this exercise, we use the individual household electric power consumption dataset. (Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.) We aggregate the usage data hourly. ", "_____no_output_____" ], [ "# Data Type", "_____no_output_____" ], [ "Amazon forecast can import data from Amazon S3. We first explore the data locally to see the fields", "_____no_output_____" ] ], [ [ "import pandas as pd\ndf = pd.read_csv(\"../../common/data/item-demand-time.csv\", dtype = object)\ndf.head(3)", "_____no_output_____" ] ], [ [ "Now upload the data to S3. But before doing that, go into your AWS Console, select S3 for the service and create a new bucket inside the `Oregon` or `us-west-2` region. Use that bucket name convention of `amazon-forecast-unique-value-data`. 
The name must be unique; if you get an error, just adjust it until it works, then update the `bucketName` value accordingly.", "_____no_output_____" ] ], [ [ "s3 = session.client('s3')", "_____no_output_____" ], [ "key=\"elec_data/item-demand-time.csv\"", "_____no_output_____" ], [ "s3.upload_file(Filename=\"../../common/data/item-demand-time.csv\", Bucket=bucketName, Key=key)", "_____no_output_____" ], [ "# Create the role to provide to Amazon Forecast.\nrole_name = \"ForecastNotebookRole-AutoML\"\nrole_arn = util.get_or_create_iam_role( role_name = role_name )", "_____no_output_____" ] ], [ [ "### CreateDataset", "_____no_output_____" ], [ "More details about `Domain` and dataset types can be found in the [documentation](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-domains-ds-types.html). For this example, we are using the [CUSTOM](https://docs.aws.amazon.com/forecast/latest/dg/custom-domain.html) domain with 3 required attributes: `timestamp`, `target_value` and `item_id`. Also, update the project name to reflect your name in lowercase format.", "_____no_output_____" ] ], [ [ "DATASET_FREQUENCY = \"H\" \nTIMESTAMP_FORMAT = \"yyyy-MM-dd hh:mm:ss\"", "_____no_output_____" ], [ "project = 'workshop_forecastdemo_1' # Replace this with a unique name here, make sure the entire name is < 30 characters.\ndatasetName= project+'_ds'\ndatasetGroupName= project +'_gp'\ns3DataPath = \"s3://\"+bucketName+\"/\"+key", "_____no_output_____" ], [ "datasetName", "_____no_output_____" ] ], [ [ "### Schema Definition \nHere we define the attributes for the model. ", "_____no_output_____" ] ], [ [ "# Specify the schema of your dataset here. Make sure the order of columns matches the raw data files.\nschema ={\n \"Attributes\":[\n {\n \"AttributeName\":\"timestamp\",\n \"AttributeType\":\"timestamp\"\n },\n {\n \"AttributeName\":\"target_value\",\n \"AttributeType\":\"float\"\n },\n {\n \"AttributeName\":\"item_id\",\n \"AttributeType\":\"string\"\n }\n ]\n}\n\nresponse=forecast.create_dataset(\n Domain=\"CUSTOM\",\n DatasetType='TARGET_TIME_SERIES',\n DatasetName=datasetName,\n DataFrequency=DATASET_FREQUENCY, \n Schema = schema\n )\ndatasetArn = response['DatasetArn']", "_____no_output_____" ], [ "create_dataset_group_response = forecast.create_dataset_group(DatasetGroupName=datasetGroupName,\n Domain=\"CUSTOM\",\n DatasetArns= [datasetArn]\n )\ndatasetGroupArn = create_dataset_group_response['DatasetGroupArn']", "_____no_output_____" ] ], [ [ "If you have an existing dataset group, you can update it using **update_dataset_group**.", "_____no_output_____" ] ], [ [ "forecast.describe_dataset_group(DatasetGroupArn=datasetGroupArn)", "_____no_output_____" ] ], [ [ "### Create Data Import Job\nThis brings the raw data into the Amazon Forecast system, ready for forecasting. ", "_____no_output_____" ] ], [ [ "datasetImportJobName = 'EP_AML_DSIMPORT_JOB_TARGET'\nds_import_job_response=forecast.create_dataset_import_job(DatasetImportJobName=datasetImportJobName,\n DatasetArn=datasetArn,\n DataSource= {\n \"S3Config\" : {\n \"Path\":s3DataPath,\n \"RoleArn\": role_arn\n } \n },\n TimestampFormat=TIMESTAMP_FORMAT\n )", "_____no_output_____" ], [ "ds_import_job_arn=ds_import_job_response['DatasetImportJobArn']\nprint(ds_import_job_arn)", "_____no_output_____" ] ], [ [ "Check the status of the dataset import. When the status changes from **CREATE_IN_PROGRESS** to **ACTIVE**, we can continue to the next steps. Depending on the data size, it can take a while to become **ACTIVE**. 
For this dataset, the import typically takes 5 to 10 minutes.", "_____no_output_____" ] ], [ [ "status_indicator = util.StatusIndicator()\n\nwhile True:\n    status = forecast.describe_dataset_import_job(DatasetImportJobArn=ds_import_job_arn)['Status']\n    status_indicator.update(status)\n    if status in ('ACTIVE', 'CREATE_FAILED'): break\n    time.sleep(10)\n\nstatus_indicator.end()", "_____no_output_____" ], [ "forecast.describe_dataset_import_job(DatasetImportJobArn=ds_import_job_arn)", "_____no_output_____" ] ], [ [ "### Create Predictor with a custom forecast horizon", "_____no_output_____" ], [ "The forecast horizon is the number of time points to be predicted in the future. For weekly data, a value of 12 means 12 weeks. Our example uses hourly data and we want to forecast the next day, so we set it to 24.", "_____no_output_____" ], [ "If we are not sure which recipe will perform best, we can utilise the Auto ML option that the SDK offers.", "_____no_output_____" ] ], [ [ "predictorName = project+'_autoML'", "_____no_output_____" ], [ "forecastHorizon = 24", "_____no_output_____" ], [ "# Not used below because PerformAutoML=True; kept for reference in case you want to pin a specific algorithm.\nalgorithmArn = 'arn:aws:forecast:::algorithm/ETS'", "_____no_output_____" ], [ "create_predictor_response=forecast.create_predictor(PredictorName=predictorName, \n                                                  ForecastHorizon=forecastHorizon,\n                                                  PerformAutoML=True,\n                                                  PerformHPO=False,\n                                                  EvaluationParameters= {\"NumberOfBacktestWindows\": 1, \n                                                                         \"BackTestWindowOffset\": 24}, \n                                                  InputDataConfig= {\"DatasetGroupArn\": datasetGroupArn},\n                                                  FeaturizationConfig= {\"ForecastFrequency\": \"H\", \n                                                                        \"Featurizations\": \n                                                                        [\n                                                                          {\"AttributeName\": \"target_value\", \n                                                                           \"FeaturizationPipeline\": \n                                                                            [\n                                                                              {\"FeaturizationMethodName\": \"filling\", \n                                                                               \"FeaturizationMethodParameters\": \n                                                                                {\"frontfill\": \"none\", \n                                                                                 \"middlefill\": \"zero\", \n                                                                                 \"backfill\": \"zero\"}\n                                                                              }\n                                                                            ]\n                                                                          }\n                                                                        ]\n                                                                       }\n                                                 )", "_____no_output_____" ], [ "predictorArn=create_predictor_response['PredictorArn']", "_____no_output_____" ] ], [ [ "Check the status of the predictor. When the status changes from **CREATE_IN_PROGRESS** to **ACTIVE**, we can continue to the next steps. Depending on data size, model selection and hyperparameters, it can take from 10 minutes to more than one hour to become **ACTIVE**.", "_____no_output_____" ] ], [ [ "status_indicator = util.StatusIndicator()\n\nwhile True:\n    status = forecast.describe_predictor(PredictorArn=predictorArn)['Status']\n    status_indicator.update(status)\n    if status in ('ACTIVE', 'CREATE_FAILED'): break\n    time.sleep(10)\n\nstatus_indicator.end()", "_____no_output_____" ] ], [ [ "### Get Error Metrics", "_____no_output_____" ], [ "Let's get the accuracy metrics of the predictor we just created using Auto ML. The response will be a dictionary with all available recipes; Auto ML works out the best one for our predictor.", "_____no_output_____" ] ], [ [ "forecast.get_accuracy_metrics(PredictorArn=predictorArn)", "_____no_output_____" ] ], [ [ "### Create Forecast", "_____no_output_____" ], [ "Now create a forecast using the model that was trained.", "_____no_output_____" ] ], [ [ "forecastName= project+'_aml_forecast'", "_____no_output_____" ], [ "create_forecast_response=forecast.create_forecast(ForecastName=forecastName,\n                                                  PredictorArn=predictorArn)\nforecastArn = create_forecast_response['ForecastArn']", "_____no_output_____" ] ], [ [ "Check the status of the forecast process. When the status changes from **CREATE_IN_PROGRESS** to **ACTIVE**, we can continue to the next steps. Depending on data size, model selection and hyperparameters, it can take from 10 minutes to more than one hour to become **ACTIVE**. 
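
The same polling pattern appears three times in this notebook (dataset import, predictor, forecast). As a refactoring suggestion (not part of the original `util` helpers), a small generic poller could replace the repeated loops:

```python
def wait_till_active(describe_fn, arn_kwarg, arn, poll_seconds=10):
    # Generic poller built from the loops in this notebook: call the given
    # describe_* API until the resource reaches a terminal status.
    status_indicator = util.StatusIndicator()
    while True:
        status = describe_fn(**{arn_kwarg: arn})['Status']
        status_indicator.update(status)
        if status in ('ACTIVE', 'CREATE_FAILED'):
            break
        time.sleep(poll_seconds)
    status_indicator.end()
    return status

# Example usage: wait_till_active(forecast.describe_forecast, 'ForecastArn', forecastArn)
```
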
As before, there's no output from the polling cell while it runs, but that is fine as long as the * is there.", "_____no_output_____" ] ], [ [ "status_indicator = util.StatusIndicator()\n\nwhile True:\n    status = forecast.describe_forecast(ForecastArn=forecastArn)['Status']\n    status_indicator.update(status)\n    if status in ('ACTIVE', 'CREATE_FAILED'): break\n    time.sleep(10)\n\nstatus_indicator.end()", "_____no_output_____" ] ], [ [ "### Get Forecast", "_____no_output_____" ], [ "Once created, the forecast results are ready and you can view them. ", "_____no_output_____" ] ], [ [ "forecastResponse = forecastquery.query_forecast(\n    ForecastArn=forecastArn,\n    Filters={\"item_id\":\"client_12\"}\n)\nprint(forecastResponse)", "_____no_output_____" ] ], [ [ "# Export Forecast", "_____no_output_____" ], [ "You can export the forecast to an S3 bucket. To do so, a role with S3 put access is needed, but this has already been created.", "_____no_output_____" ] ], [ [ "forecastExportName= project+'_aml_forecast_export'", "_____no_output_____" ], [ "outputPath=\"s3://\"+bucketName+\"/output\"", "_____no_output_____" ], [ "forecast_export_response = forecast.create_forecast_export_job(\n                                                ForecastExportJobName = forecastExportName,\n                                                ForecastArn=forecastArn, \n                                                Destination = {\n                                                    \"S3Config\" : {\n                                                        \"Path\":outputPath,\n                                                        \"RoleArn\": role_arn\n                                                    } \n                                                }\n                                              )", "_____no_output_____" ], [ "forecastExportJobArn = forecast_export_response['ForecastExportJobArn']", "_____no_output_____" ], [ "status_indicator = util.StatusIndicator()\n\nwhile True:\n    status = forecast.describe_forecast_export_job(ForecastExportJobArn=forecastExportJobArn)['Status']\n    status_indicator.update(status)\n    if status in ('ACTIVE', 'CREATE_FAILED'): break\n    time.sleep(10)\n\nstatus_indicator.end()", "_____no_output_____" ] ], [ [ "Check the S3 bucket for the results.", "_____no_output_____" ] ], [ [ "s3.list_objects(Bucket=bucketName,Prefix=\"output\")", "_____no_output_____" ] ], [ [ "# Cleanup\n\nOnce we have completed the above steps, we can start to clean up the resources we created. All delete jobs, except for `delete_dataset_group`, are asynchronous, so we have added the helpful `wait_till_delete` function. \nResource limits are documented <a href=\"https://docs.aws.amazon.com/forecast/latest/dg/limits.html\">here</a>.", "_____no_output_____" ] ], [ [ "# Delete the forecast export job\nutil.wait_till_delete(lambda: forecast.delete_forecast_export_job(ForecastExportJobArn = forecastExportJobArn))", "_____no_output_____" ], [ "# Delete forecast\nutil.wait_till_delete(lambda: forecast.delete_forecast(ForecastArn = forecastArn))", "_____no_output_____" ], [ "# Delete predictor\nutil.wait_till_delete(lambda: forecast.delete_predictor(PredictorArn = predictorArn))", "_____no_output_____" ], [ "# Delete the dataset import job\nutil.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=ds_import_job_arn))", "_____no_output_____" ], [ "# Delete the dataset\nutil.wait_till_delete(lambda: forecast.delete_dataset(DatasetArn=datasetArn))", "_____no_output_____" ], [ "# Delete Dataset Group\nutil.wait_till_delete(lambda: forecast.delete_dataset_group(DatasetGroupArn=datasetGroupArn))", "_____no_output_____" ], [ "# Delete IAM role\nutil.delete_iam_role( role_name )", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7dad23770f840ac1909e2101de67b7a84042416
899,910
ipynb
Jupyter Notebook
Modelizacion Coti.ipynb
constanzasilvestre/digital-house-challenge-3
5b383ab1a1c8cf39ea4fce5a1cd8e1916d76598b
[ "MIT" ]
null
null
null
Modelizacion Coti.ipynb
constanzasilvestre/digital-house-challenge-3
5b383ab1a1c8cf39ea4fce5a1cd8e1916d76598b
[ "MIT" ]
null
null
null
Modelizacion Coti.ipynb
constanzasilvestre/digital-house-challenge-3
5b383ab1a1c8cf39ea4fce5a1cd8e1916d76598b
[ "MIT" ]
null
null
null
173.426479
173,600
0.745556
[ [ [ "# MODELOS DE CATEGORIZACION\n\n## 1. Introducción\n", "_____no_output_____" ] ], [ [ "import IPython.display as ipd\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport sklearn as skl\nimport sklearn.utils, sklearn.preprocessing, sklearn.decomposition, sklearn.svm\n", "_____no_output_____" ], [ "data = pd.read_pickle(\"clean_data/track.pkl\")\nunpickled_df_features = pd.read_pickle(\"clean_data/features.pkl\")\n\n", "_____no_output_____" ] ], [ [ "(Copiado de Data Analysis)\n\n### Analisis de Features\n\nLos features fueron generados utilizando la libreria de librosa sobre mp3 de extractos de cada cancion.\n\n(Del data esta es la primera agrupacion)\n\nLos features generados son:\n- mfcc: Mel-frequency cepstral coefficients (MFCCs). The Mel frequency cepstral coefficients (MFCCs) of a signal are a small set of features (usually about 10–20) which concisely describe the overall shape of a spectral envelope. It models the characteristics of the human voice.\n\n- chroma_cens: Computes the chroma variant “Chroma Energy Normalized” (CENS). CENS features are robust to dynamics, timbre and articulation, thus these are commonly used in audio matching and retrieval applications. Chroma features are an interesting and powerful representation for music audio in which the entire spectrum is projected onto 12 bins representing the 12 distinct semitones (or chroma) of the musical octave.\n\n\n- tonnetz: Tonal centroid features (tonnetz). This representation uses the method to project chroma features onto a 6-dimensional basis representing the perfect fifth, minor third, and major third each as two-dimensional coordinates.\n\n- spectral_contrast: Each frame of a spectrogram S is divided into sub-bands. For each sub-band, the energy contrast is estimated by comparing the mean energy in the top quantile (peak energy) to that of the bottom quantile (valley energy). High contrast values generally correspond to clear, narrow-band signals, while low contrast values correspond to broad-band noise. \n\n- spectral_centroid: Each frame of a magnitude spectrogram is normalized and treated as a distribution over frequency bins, from which the mean (centroid) is extracted per frame.It indicates where the ”centre of mass” for a sound is located and is calculated as the weighted mean of the frequencies present in the sound. Consider two songs, one from a blues genre and the other belonging to metal. Now as compared to the blues genre song which is the same throughout its length, the metal song has more frequencies towards the end. So spectral centroid for blues song will lie somewhere near the middle of its spectrum while that for a metal song would be towards its end.\n\n- spectral_bandwidth: Compute p’th-order spectral bandwidth.\n\n- spectral_rolloff: The roll-off frequency is defined for each frame as the center frequency for a spectrogram bin such that at least roll_percent (0.85 by default) of the energy of the spectrum in this frame is contained in this bin and the bins below. This can be used to, e.g., approximate the maximum (or minimum) frequency by setting roll_percent to a value close to 1 (or 0).\n\n- rmse: Compute root-mean-square (RMS) value for each frame, either from the audio samples y or from a spectrogram S.\n\n- zcr: Zero-crossing rate of an audio time series -> The zero crossing rate is the rate of sign-changes along a signal, i.e., the rate at which the signal changes from positive to negative or back. 
This feature has been used heavily in both speech recognition and music information retrieval. It usually has higher values for highly percussive sounds like those in metal and rock.\n\nFor more information on each feature: [Librosa features](https://librosa.org/doc/main/feature.html#)\n\n\nSpectrogram\nA spectrogram is a visual representation of the spectrum of frequencies of sound or other signals as they vary with time. Spectrograms are sometimes called sonographs, voiceprints, or voicegrams. When the data is represented in a 3D plot, they may be called waterfalls. In 2-dimensional arrays, the first axis is frequency while the second axis is time.\n\nFor each feature we compute:\n- kurtosis\n- max\n- mean\n- median\n- min\n- skew\n- std\n\n# Conclusion:\n\n- zcr: worth keeping, since it characterises metal and rock\n- spectral_centroid: the centre of mass of the sound; for metal it sits towards the end and for blues in the middle\n- spectral_rolloff: a measure of the signal representing the frequency below which a given fraction of the total spectral energy lies\n- mfcc: models the human voice\n- chroma: a powerful representation of the audio\n\nWe will choose 5 features, i.e. Mel-Frequency Cepstral Coefficients, Spectral Centroid, Zero Crossing Rate, Chroma Frequencies, Spectral Roll-off.", "_____no_output_____" ], [ "## Feature extraction ", "_____no_output_____" ] ], [ [ "unpickled_df_features.head(5).style.format('{:.2f}')", "_____no_output_____" ], [ "features_columns = ['mfcc', 'chroma_cens', 'spectral_centroid',\"spectral_bandwidth\", 'spectral_rolloff', \"zcr\"]\n\nclean_features = unpickled_df_features[features_columns]\nclean_features.shape", "_____no_output_____" ], [ "clean_features_spectral_centroid= unpickled_df_features[\"spectral_centroid\"][\"mean\"]\nprint(clean_features_spectral_centroid.head())\nclean_features_spectral_centroid = clean_features_spectral_centroid.rename(columns={\"01\":\"spectral_centroid\"})\n\nclean_features_spectral_bandwidth = unpickled_df_features[\"spectral_bandwidth\"][\"mean\"]\nprint(clean_features_spectral_bandwidth.head())\nclean_features_spectral_bandwidth = clean_features_spectral_bandwidth.rename(columns={\"01\":\"spectral_bandwidth\"})\n\n\nclean_features_spectral_rolloff = unpickled_df_features[\"spectral_rolloff\"][\"mean\"]\nprint(clean_features_spectral_rolloff.head())\nclean_features_spectral_rolloff = clean_features_spectral_rolloff.rename(columns={\"01\":\"spectral_rolloff\"})\n\nclean_features_zcr = unpickled_df_features[\"zcr\"][\"mean\"]\nprint(clean_features_zcr.head())\nclean_features_zcr = clean_features_zcr.rename(columns={\"01\":\"zcr\"})\n\nclean_features_mfcc = unpickled_df_features[\"mfcc\"][\"mean\"]\nclean_features_mfcc_mean =clean_features_mfcc.mean(axis=1)\nprint(clean_features_mfcc_mean.head())\n#clean_features_spectral_rolloff = clean_features_spectral_rolloff.rename(columns={\"0\":\"mfcc\"})\n\n\nclean_features_chroma_cens= unpickled_df_features[\"chroma_cens\"][\"mean\"]\nclean_features_chroma_cens_mean=clean_features_chroma_cens.mean(axis=1)\nprint(clean_features_chroma_cens_mean.head())\n#clean_features_chroma_cens_mean = clean_features_chroma_cens_mean.rename(columns={\"0\":\"chroma\"})\n", "number 01\ntrack_id \n2 1639.583252\n3 1763.012451\n5 1292.958130\n10 1360.028687\n134 1257.696289\nnumber 01\ntrack_id \n2 1607.474365\n3 
1736.961426\n5 1512.917358\n10 1420.259644\n134 1314.968628\nnumber 01\ntrack_id \n2 3267.804688\n3 3514.619629\n5 2773.931885\n10 2603.491943\n134 2462.616943\nnumber 01\ntrack_id \n2 0.085629\n3 0.084578\n5 0.053114\n10 0.077515\n134 0.064370\ntrack_id\n2 -3.591958\n3 0.090657\n5 -1.035142\n10 -1.003117\n134 -1.473439\ndtype: float64\ntrack_id\n2 0.260277\n3 0.266892\n5 0.265505\n10 0.270363\n134 0.258546\ndtype: float64\n" ], [ "clean_features = pd.concat([clean_features_spectral_rolloff, clean_features_spectral_bandwidth, clean_features_spectral_centroid, clean_features_zcr, clean_features_mfcc_mean, clean_features_chroma_cens_mean ], axis=1, join='inner')\nfeatures = clean_features.rename(columns={0: \"mfcc\",1:\"chroma\" })\n\nprint(features.columns)\n", "Index(['spectral_rolloff', 'spectral_bandwidth', 'spectral_centroid', 'zcr',\n 'mfcc', 'chroma'],\n dtype='object')\n" ], [ "data_full = pd.concat([data, features], axis=1, join='inner')\n", "_____no_output_____" ], [ "data_full.shape", "_____no_output_____" ], [ "data=data_full", "_____no_output_____" ] ], [ [ "\n## Preparamos los datos del modelo", "_____no_output_____" ] ], [ [ "from matplotlib import offsetbox\nimport joblib\n#from PIL import Image\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import linear_model\nfrom sklearn import metrics\nimport matplotlib.pyplot as plt\n", "_____no_output_____" ] ], [ [ "Observamos la media y varianza de las variables:", "_____no_output_____" ] ], [ [ "#Obtenemos las variables numericas del data:\nprint (data.columns)\nprint(data.info())\n\n#dropeo location porque tiene muchos nulos \ndata_complete = data_full.drop(labels=\"location\",axis=1)\n\nprint(\"Media de las variables: \")\nprint(data_complete.mean(axis=0))\n\nprint('\\n')\n\n\nprint(\"Varianza de las variables: \")\nprint(data_complete.var(axis=0))\n", "Index(['date_created', 'duration', 'genre_top', 'title', 'album',\n 'album_tracks', 'artist', 'location', 'acousticness', 'danceability',\n 'energy', 'instrumentalness', 'liveness', 'speechiness', 'tempo',\n 'valence', 'spectral_rolloff', 'spectral_bandwidth',\n 'spectral_centroid', 'zcr', 'mfcc', 'chroma'],\n dtype='object')\n<class 'pandas.core.frame.DataFrame'>\nInt64Index: 9355 entries, 2 to 124722\nData columns (total 22 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 date_created 9355 non-null datetime64[ns]\n 1 duration 9355 non-null int64 \n 2 genre_top 9355 non-null category \n 3 title 9354 non-null object \n 4 album 9355 non-null object \n 5 album_tracks 9355 non-null int64 \n 6 artist 9355 non-null object \n 7 location 6327 non-null object \n 8 acousticness 9355 non-null float64 \n 9 danceability 9355 non-null float64 \n 10 energy 9355 non-null float64 \n 11 instrumentalness 9355 non-null float64 \n 12 liveness 9355 non-null float64 \n 13 speechiness 9355 non-null float64 \n 14 tempo 9355 non-null float64 \n 15 valence 9355 non-null float64 \n 16 spectral_rolloff 9355 non-null float64 \n 17 spectral_bandwidth 9355 non-null float64 \n 18 spectral_centroid 9355 non-null float64 \n 19 zcr 9355 non-null float64 \n 20 mfcc 9355 non-null float64 \n 21 chroma 9355 non-null float64 \ndtypes: category(1), datetime64[ns](1), float64(14), int64(2), object(4)\nmemory usage: 1.4+ MB\nNone\nMedia de las variables: \nduration 250.653447\nalbum_tracks 12.287333\nacousticness 0.534537\ndanceability 0.469511\nenergy 0.539612\ninstrumentalness 
0.654430\nliveness 0.191691\nspeechiness 0.100729\ntempo 123.229903\nvalence 0.433224\nspectral_rolloff 2517.338229\nspectral_bandwidth 1463.227601\nspectral_centroid 1242.547066\nzcr 0.054603\nmfcc -1.401279\nchroma 0.250497\ndtype: float64\n\n\nVarianza de las variables: \nduration 4.982883e+04\nalbum_tracks 1.373059e+02\nacousticness 1.491589e-01\ndanceability 3.642181e-02\nenergy 8.011512e-02\ninstrumentalness 1.276060e-01\nliveness 2.621398e-02\nspeechiness 1.897527e-02\ntempo 1.245911e+03\nvalence 7.604375e-02\nspectral_rolloff 1.154248e+06\nspectral_bandwidth 1.863264e+05\nspectral_centroid 2.266170e+05\nzcr 7.215586e-04\nmfcc 3.832330e+01\nchroma 2.735005e-04\ndtype: float64\n" ], [ "print (data_complete.columns)", "Index(['date_created', 'duration', 'genre_top', 'title', 'album',\n 'album_tracks', 'artist', 'acousticness', 'danceability', 'energy',\n 'instrumentalness', 'liveness', 'speechiness', 'tempo', 'valence',\n 'spectral_rolloff', 'spectral_bandwidth', 'spectral_centroid', 'zcr',\n 'mfcc', 'chroma'],\n dtype='object')\n" ], [ "#Armo una lista de numericas para depsues hacer el fit transform en estas\nlista_numero=[\"duration\", \"acousticness\",\"album_tracks\", \"danceability\",\"energy\",\"instrumentalness\", \"liveness\", \"speechiness\",\"tempo\",\"valence\",'spectral_rolloff', 'spectral_bandwidth', 'spectral_centroid', 'zcr',\n 'mfcc', 'chroma']\n\ndata_complete=data_complete[lista_numero]", "_____no_output_____" ], [ "# El argumento stratify nos permite generar una división que respeta la misma proporción entre clases en ambos sets\n\nX = data_complete\nY = data['genre_top']\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 1237, stratify= Y)\n\n#tengo qeu hacer un stratidfy aca para qeu en la division de train test tenga las mismo porcentaje de varialbes)", "_____no_output_____" ], [ "display(Y_train.value_counts(normalize=True).round(2))\n\ndisplay(Y_test.value_counts(normalize=True).round(2))", "_____no_output_____" ], [ "#Es necesario llevar a la misma escala, porque sino la que tiene mayor varianza va a pesar mas en PCA. Por eso normalizamos los datos\n#en un modelo no hacemos en el test el fit_transform, solo hacemos transform. 
Porque ya tenemos la informacion de la media y varianza en el fit del train.\n\n\nstd_sclr = StandardScaler()\nstd_sclr_trained = std_sclr.fit(X_train)\nX_train_numerical = std_sclr_trained.transform(X_train)\nX_train_numerical_scaled = pd.DataFrame(X_train_numerical, columns = lista_numero)\nX_train_numerical_scaled.head()", "_____no_output_____" ], [ "X_test_numerical = std_sclr_trained.transform(X_test)\nX_test_numerical_scaled = pd.DataFrame(X_test_numerical, columns = lista_numero)\nX_test_numerical_scaled.head()", "_____no_output_____" ], [ "print(\"Media de las variables: \")\nprint(X_train_numerical_scaled.mean(axis=0))\n\nprint('\\n')\n\n# Observamos nuevamente la varianza de las variables: como normalizamos la varianza es 1\nprint(\"Varianza de las variables: \")\nprint(X_train_numerical_scaled.var(axis=0))", "Media de las variables: \nduration 2.054668e-17\nacousticness 8.022849e-17\nalbum_tracks 5.641946e-16\ndanceability 2.689316e-16\nenergy 6.707861e-17\ninstrumentalness -2.376314e-16\nliveness 1.595708e-16\nspeechiness -1.208808e-16\ntempo 2.483364e-16\nvalence 2.045747e-16\nspectral_rolloff -1.985932e-16\nspectral_bandwidth 2.344191e-16\nspectral_centroid 4.954861e-16\nzcr -4.218721e-17\nmfcc -2.156437e-17\nchroma 1.941324e-16\ndtype: float64\n\n\nVarianza de las variables: \nduration 1.000143\nacousticness 1.000143\nalbum_tracks 1.000143\ndanceability 1.000143\nenergy 1.000143\ninstrumentalness 1.000143\nliveness 1.000143\nspeechiness 1.000143\ntempo 1.000143\nvalence 1.000143\nspectral_rolloff 1.000143\nspectral_bandwidth 1.000143\nspectral_centroid 1.000143\nzcr 1.000143\nmfcc 1.000143\nchroma 1.000143\ndtype: float64\n" ] ], [ [ "### Features del modelo\nX_test_numerical_scaled / X_train_numerical_scaled / Y_train, Y_test", "_____no_output_____" ], [ "\n## 1. 
Dimensionality reduction -> PCA", "_____no_output_____" ] ], [ [ "model_pca = PCA().fit(X_train_numerical_scaled)\n\nX_train_PCA = model_pca.transform(X_train_numerical_scaled)\nX_test_PCA = model_pca.transform(X_test_numerical_scaled)\n\ncomponentes=model_pca.n_components_\nprint(\"Componentes del modelo\", model_pca.n_components_)", "Componentes del modelo 16\n" ], [ "def plot_explained_variance(components_count, X):\n\n    model_pca = PCA(components_count).fit(X)\n\n    explained_variance = model_pca.explained_variance_ratio_\n\n    #print(explained_variance)\n\n    cumulative_explained_variance = np.cumsum(explained_variance)\n\n    #print(cumulative_explained_variance)\n\n    plt.plot(cumulative_explained_variance)\n    plt.xlabel('number of components')\n    plt.ylabel('% of explained variance');", "_____no_output_____" ], [ "plot_explained_variance(components_count = componentes, X = X_train_numerical_scaled)", "_____no_output_____" ] ], [ [ "## PCA for the musical features", "_____no_output_____" ] ], [ [ "lista_features=['spectral_rolloff', 'spectral_bandwidth', 'spectral_centroid', 'zcr', 'mfcc', 'chroma' ]\n", "_____no_output_____" ], [ "std_sclr = StandardScaler()\nstd_sclr_trained = std_sclr.fit(X_train[lista_features])\nX_train_numerical = std_sclr_trained.transform(X_train[lista_features])\nX_train_numerical_scaled = pd.DataFrame(X_train_numerical, columns = lista_features)\nX_train_numerical_scaled.head()", "_____no_output_____" ], [ "X_test_numerical = std_sclr_trained.transform(X_test[lista_features])\nX_test_numerical_scaled = pd.DataFrame(X_test_numerical, columns = lista_features)\nX_test_numerical_scaled.head()", "_____no_output_____" ], [ "model_pca = PCA().fit(X_train_numerical_scaled)\n\nX_train_PCA = model_pca.transform(X_train_numerical_scaled)\nX_test_PCA = model_pca.transform(X_test_numerical_scaled)\n\ncomponentes=model_pca.n_components_\nprint(\"Componentes del modelo\", model_pca.n_components_)", "Componentes del modelo 6\n" ], [ "plot_explained_variance(components_count = componentes, X = X_train_numerical_scaled)", "_____no_output_____" ] ], [ [ "Using the elbow rule, we see that with 3 PCA components we can explain more than 95% of the variance, and with 2 we can explain about 95%.", "_____no_output_____" ], [ "## Graphical representation with 2 components", "_____no_output_____" ] ], [ [ "pca_digits_vis = PCA(n_components=2)\ndata_numero = pca_digits_vis.fit_transform(data_complete[lista_features])\nprint(data_complete[lista_features].shape)\nprint(data_numero.shape)", "(9355, 6)\n(9355, 2)\n" ], [ "def plot_digits_pca(projection, generos):\n    \n    colors = [\"#476A2A\", \"#7851B8\", \"#BD3430\", \"#4A2D4E\", \"#875525\",\n              \"#A83683\", \"#4E655E\", \"#853541\", \"#3A3120\", \"#535D8E\"]\n    plt.figure(figsize=(10,10))\n    plt.xlim(projection[:,0].min(), projection[:,0].max())\n    plt.ylim(projection[:,1].min(), projection[:,1].max())\n\n    # axis labels only need to be set once\n    plt.xlabel('First Principal Component')\n    plt.ylabel('Second Principal Component')\n    for i in range(len(projection)):\n        plt.scatter(projection[i,0], projection[i,1], s=10) #color=color[genero[i]]\n    \n# Could not connect a distinct color to each genre ", "_____no_output_____" ], [ "plot_digits_pca(data_numero, data.genre_top)", "_____no_output_____" ] ], [ [ "# 2. 
NAIVE BAYES", "_____no_output_____" ] ], [ [ "from sklearn.naive_bayes import GaussianNB\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import recall_score\nfrom sklearn.metrics import precision_score\nfrom sklearn.metrics import confusion_matrix\nimport seaborn as sns", "_____no_output_____" ], [ "gnb = GaussianNB()\n\ngnb.fit(X_train_numerical_scaled, Y_train)", "_____no_output_____" ], [ "Y_pred = gnb.predict(X_test_numerical_scaled)\n\nY_pred", "_____no_output_____" ], [ "round(accuracy_score(Y_test, Y_pred), 2)", "_____no_output_____" ], [ "print('Accuracy=', accuracy_score(Y_test, Y_pred))\n#print('Recall=', recall_score(Y_test, Y_pred))\n#print('Precision=', precision_score(Y_test, Y_pred))", "Accuracy= 0.5160324925181702\n" ], [ "sns.heatmap(confusion_matrix(Y_test, Y_pred), annot=True, fmt='.0f')\nplt.ylabel('Etiquetas reales')\nplt.xlabel('Etiquetas predichas');", "_____no_output_____" ] ], [ [ "\n## 3. KNN", "_____no_output_____" ] ], [ [ "# Importamos la clase KNeighborsClassifier de módulo neighbors\nfrom sklearn.neighbors import KNeighborsClassifier", "_____no_output_____" ], [ "knn = KNeighborsClassifier()", "_____no_output_____" ], [ "knn.fit(X_train, Y_train)", "_____no_output_____" ], [ "y_pred = knn.predict(X_test)", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score\naccuracy_score(Y_test, y_pred).round(2)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score, KFold\nkf = KFold(n_splits=12, shuffle=True, random_state=12)\n\nscores_para_df = []\n\nfor i in range(1, 21):\n \n # En cada iteración, instanciamos el modelo con un hiperparámetro distinto\n model = KNeighborsClassifier(n_neighbors=i)\n \n # cross_val_scores nos devuelve un array de 5 resultados,\n # uno por cada partición que hizo automáticamente CV\n cv_scores = cross_val_score(model, X_train, Y_train, cv=kf)\n \n # Para cada valor de n_neighbours, creamos un diccionario con el valor\n # de n_neighbours y la media y el desvío de los scores\n dict_row_score = {'score_medio':np.mean(cv_scores),\n 'score_std':np.std(cv_scores), 'n_neighbors':i}\n \n # Guardamos cada uno en la lista de diccionarios\n scores_para_df.append(dict_row_score)", "_____no_output_____" ], [ "df_scores = pd.DataFrame(scores_para_df)\ndf_scores.head()", "_____no_output_____" ], [ "df_scores['limite_inferior'] = df_scores['score_medio'] - df_scores['score_std']\ndf_scores['limite_superior'] = df_scores['score_medio'] + df_scores['score_std']\ndf_scores.head()", "_____no_output_____" ], [ "# Graficamos los resultados\nplt.plot(df_scores['n_neighbors'], df_scores['limite_inferior'], color='r')\nplt.plot(df_scores['n_neighbors'], df_scores['score_medio'], color='b')\nplt.plot(df_scores['n_neighbors'], df_scores['limite_superior'], color='r');", "_____no_output_____" ], [ "# Identificamos el score máximo\ndf_scores.loc[df_scores.score_medio == df_scores.score_medio.max()]", "_____no_output_____" ], [ "# Utilizamos sklearn para estandarizar la matriz de features\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nX_train = scaler.fit_transform(X_train)", "_____no_output_____" ], [ "# Verificamos que las variables ahora tengan media 0 y desvío 1.\nprint('Medias:', np.mean(X_train, axis=0).round(2))\nprint('Desvio:', np.std(X_train, axis=0).round(2))", "Medias: [ 0. 0. -0. 0. 0. -0. 0. -0. 0. 0. -0. 0. 0. -0. -0. 0.]\nDesvio: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 
1.]\n" ], [ "# Calculamos nuevamente los scores de cross validation,\n# pero esta vez sobre los features estandarizados:\n\nscores_para_df_standard = []\n\nfor i in range(1, 21):\n model = KNeighborsClassifier(n_neighbors=i)\n cv_scores = cross_val_score(model, X_train, Y_train, cv=kf)\n dict_row_score = {'score_medio':np.mean(cv_scores),\n 'score_std':np.std(cv_scores), 'n_neighbors':i}\n scores_para_df_standard.append(dict_row_score)", "_____no_output_____" ], [ "# Creamos el DataFrame a partir de la lista de diccionarios\ndf_scores_standard = pd.DataFrame(scores_para_df_standard)\ndf_scores_standard.head()", "_____no_output_____" ], [ "df_scores_standard['limite_superior'] = df_scores_standard['score_medio'] + df_scores_standard['score_std']\ndf_scores_standard['limite_inferior'] = df_scores_standard['score_medio'] - df_scores_standard['score_std']\ndf_scores_standard.head()\n\n# Graficamos los resultados\nplt.plot(df_scores_standard['n_neighbors'], df_scores_standard['limite_inferior'], color='r')\nplt.plot(df_scores_standard['n_neighbors'], df_scores_standard['score_medio'], color='b')\nplt.plot(df_scores_standard['n_neighbors'], df_scores_standard['limite_superior'], color='r');\n# Identificamos el score máximo\ndf_scores_standard.loc[df_scores_standard.score_medio == df_scores_standard.score_medio.max()]", "_____no_output_____" ], [ "# Asignamos el valor del k óptimo a una variable\nbest_k = df_scores_standard.loc[df_scores_standard.score_medio == df_scores_standard.score_medio.max(), 'n_neighbors'].values[0]\nbest_k", "_____no_output_____" ], [ "# Elegimos el modelo óptimo de acuerdo a las pruebas de cross validation\nmodel = KNeighborsClassifier(n_neighbors=best_k)\n\n# Lo ajustamos sobre los datos de entrenamiento\nmodel.fit(X_train, Y_train)", "_____no_output_____" ], [ "#Evaluamos qué accuracy obtenemos en train\naccuracy_score(Y_train, model.predict(X_train)).round(2)", "_____no_output_____" ], [ "# Lo utilizamos para predecir en test\nX_test = scaler.transform(X_test) # ¡Importantísimo estandarizar también los datos de test con las medias y desvíos aprendidos en train!\ny_pred = model.predict(X_test)", "_____no_output_____" ], [ "# Evaluamos el accuracy del modelo en test\naccuracy_score(Y_test, y_pred).round(2)", "_____no_output_____" ] ], [ [ "### KNN con 11 neighbors -> El modelo esta under fiteando\n- Acurr en train -> 0.72 \n\n- Acurr en test -> 0.67", "_____no_output_____" ] ], [ [ "# Obtenemos la matriz de confusión\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(Y_test, y_pred)\ncm", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import plot_confusion_matrix\n\nsns.set(rc={'figure.figsize':(30,30)})\n\n# Graficamos la matriz de confusión\nprint(confusion_matrix(Y_test, y_pred))\nplot_confusion_matrix(model,X_test, Y_test)", "[[ 2 0 0 0 7 0 0 0 0 0 1 6]\n [ 0 44 2 0 12 1 0 0 1 1 0 5]\n [ 1 7 347 0 30 21 0 1 0 5 4 127]\n [ 0 0 1 0 0 1 0 0 0 0 0 2]\n [ 1 6 7 0 116 4 0 0 2 3 4 76]\n [ 1 1 33 0 2 129 0 0 1 0 4 57]\n [ 0 1 3 0 3 0 3 0 0 0 0 11]\n [ 0 0 1 0 4 1 0 7 0 1 1 18]\n [ 0 1 3 0 19 0 0 0 2 2 0 33]\n [ 0 0 0 0 1 0 0 0 0 87 0 1]\n [ 0 0 15 0 15 1 0 0 1 0 7 48]\n [ 0 2 67 0 59 16 0 0 1 0 2 826]]\n" ], [ "sns.heatmap(confusion_matrix(Y_test, y_pred), annot=True, fmt='.0f')\nplt.ylabel('Etiquetas reales')\nplt.xlabel('Etiquetas predichas');", "_____no_output_____" ] ], [ [ "# GridSearch & Pipeline", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import GridSearchCV\nfrom 
sklearn.model_selection import StratifiedKFold\nfrom sklearn.pipeline import Pipeline\n\nfolds=StratifiedKFold(n_splits=5,shuffle=True, random_state=42)", "_____no_output_____" ], [ "pasos = [('scaler', StandardScaler()), ('knn', KNeighborsClassifier())]", "_____no_output_____" ], [ "pipe_grid = Pipeline(pasos)", "_____no_output_____" ], [ "param_grid = {'knn__n_neighbors':range(2,20,2),'knn__weights':['uniform','distance']}", "_____no_output_____" ], [ "grid = GridSearchCV(pipe_grid, param_grid, cv=folds)\ngrid.fit(X_train_numerical_scaled, Y_train)", "_____no_output_____" ], [ "grid.best_score_", "_____no_output_____" ], [ "grid.best_estimator_", "_____no_output_____" ], [ "accuracy_score(grid.best_estimator_.predict(X_test_numerical_scaled),Y_test)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7dad5b9fd264e3158de40d393fbf37dcd0635ec
24,414
ipynb
Jupyter Notebook
testing/sahel_cropmask/6_Accuracy_assessment_10m.ipynb
digitalearthafrica/crop-mask
18ae773c4d5eb71c0add765260a1032c46c68a0e
[ "Apache-2.0" ]
11
2020-12-15T04:09:41.000Z
2022-01-19T11:07:21.000Z
testing/sahel_cropmask/6_Accuracy_assessment_10m.ipynb
digitalearthafrica/crop-mask
18ae773c4d5eb71c0add765260a1032c46c68a0e
[ "Apache-2.0" ]
15
2021-03-15T02:17:32.000Z
2022-02-24T02:50:01.000Z
testing/sahel_cropmask/6_Accuracy_assessment_10m.ipynb
digitalearthafrica/crop-mask
18ae773c4d5eb71c0add765260a1032c46c68a0e
[ "Apache-2.0" ]
4
2020-12-16T04:48:36.000Z
2021-03-30T16:51:37.000Z
29.845966
378
0.431965
[ [ [ "# Validating the 10m Sahel Africa Cropland Mask\n", "_____no_output_____" ], [ "## Description\nPreviously, in the `6_Accuracy_assessment_20m.ipynb` notebook, we were doing preliminary validations on 20m resolution testing crop-masks. The crop-mask was stored on disk as a geotiff. The final cropland extent mask, produced at 10m resolution, is stored in the datacube and requires a different method for validating.\n\n> NOTE: A very big sandbox is required (256GiB RAM) to run this script. \n\nThis notebook will output a `confusion error matrix` containing Overall, Producer's, and User's accuracy, along with the F1 score for each class.", "_____no_output_____" ], [ "***\n## Getting started\n\nTo run this analysis, run all the cells in the notebook, starting with the \"Load packages\" cell. ", "_____no_output_____" ], [ "### Load Packages", "_____no_output_____" ] ], [ [ "import os\nimport sys\nimport glob\nimport rasterio\nimport datacube\nimport pandas as pd\nimport numpy as np\nimport seaborn as sn\nimport matplotlib.pyplot as plt\nimport geopandas as gpd\nfrom sklearn.metrics import f1_score\nfrom rasterstats import zonal_stats", "_____no_output_____" ] ], [ [ "## Analysis Parameters\n\n* `product` : name of crop-mask we're validating\n* `bands`: the bands of the crop-mask we want to load and validate. Can one of either `'mask'` or `'filtered'`\n* `grd_truth` : a shapefile containing crop/no-crop points to serve as the \"ground-truth\" dataset\n", "_____no_output_____" ] ], [ [ "product = \"crop_mask_sahel\"\nband = 'mask'\ngrd_truth = 'data/validation_samples.shp'\n", "_____no_output_____" ] ], [ [ "\n\n### Load the datasets\n\n`the cropland extent mask`", "_____no_output_____" ] ], [ [ "#connect to the datacube\ndc = datacube.Datacube(app='feature_layers')\n \n#load 10m cropmask\nds = dc.load(product=product, measurements=[band], resolution=(-10,10)).squeeze()\nprint(ds)", "<xarray.Dataset>\nDimensions: (y: 364800, x: 672000)\nCoordinates:\n time datetime64[ns] 2019-07-02T11:59:59.999999\n * y (y) float64 3.36e+06 3.36e+06 3.36e+06 ... -2.88e+05 -2.88e+05\n * x (x) float64 -1.728e+06 -1.728e+06 ... 4.992e+06 4.992e+06\n spatial_ref int32 6933\nData variables:\n mask (y, x) uint8 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 0\nAttributes:\n crs: EPSG:6933\n grid_mapping: spatial_ref\n" ] ], [ [ "`Ground truth points`", "_____no_output_____" ] ], [ [ "#ground truth shapefile\nground_truth = gpd.read_file(grd_truth).to_crs('EPSG:6933')\n\n# rename the class column to 'actual'\nground_truth = ground_truth.rename(columns={'Class':'Actual'})\n\n# reclassifer into int\nground_truth['Actual'] = np.where(ground_truth['Actual']=='non-crop', 0, ground_truth['Actual'])\nground_truth['Actual'] = np.where(ground_truth['Actual']=='crop', 1, ground_truth['Actual'])\nground_truth.head()", "_____no_output_____" ] ], [ [ "\n## Convert points into polygons\n\nWhen the validation data was collected, 40x40m polygons were evaluated as either crop/non-crop rather than points, so we want to sample the raster using the same small polygons. 
We'll find the majority or 'mode' statistic within the polygon and use that to compare with the validation dataset.\n", "_____no_output_____" ] ], [ [ "#set radius (in metres) around points\nradius = 20\n\n#create circle buffer around points, then find envelope\nground_truth['geometry'] = ground_truth['geometry'].buffer(radius).envelope", "_____no_output_____" ] ], [ [ "### Calculate zonal statistics\n\nWe want to know what the majority pixel value is inside each validation polygon.", "_____no_output_____" ] ], [ [ "def custom_majority(x):\n a=np.ma.MaskedArray.count(x)\n b=np.sum(x)\n c=b/a\n if c>0.5:\n return 1\n if c<=0.5:\n return 0", "_____no_output_____" ], [ "#calculate stats\nstats = zonal_stats(ground_truth.geometry,\n ds[band].values,\n affine=ds.geobox.affine,\n add_stats={'majority':custom_majority},\n nodata=255)\n\n#append stats to grd truth df\nground_truth['Prediction']=[i['majority'] for i in stats]\n\nground_truth.head()", "_____no_output_____" ] ], [ [ "***\n\n## Create a confusion matrix", "_____no_output_____" ] ], [ [ "confusion_matrix = pd.crosstab(ground_truth['Actual'],\n ground_truth['Prediction'],\n rownames=['Actual'],\n colnames=['Prediction'],\n margins=True)\n\nconfusion_matrix", "_____no_output_____" ] ], [ [ "### Calculate User's and Producer's Accuracy", "_____no_output_____" ], [ "`Producer's Accuracy`", "_____no_output_____" ] ], [ [ "confusion_matrix[\"Producer's\"] = [confusion_matrix.loc[0, 0] / confusion_matrix.loc[0, 'All'] * 100,\n confusion_matrix.loc[1, 1] / confusion_matrix.loc[1, 'All'] * 100,\n np.nan]", "_____no_output_____" ] ], [ [ "`User's Accuracy`", "_____no_output_____" ] ], [ [ "users_accuracy = pd.Series([confusion_matrix[0][0] / confusion_matrix[0]['All'] * 100,\n confusion_matrix[1][1] / confusion_matrix[1]['All'] * 100]\n ).rename(\"User's\")\n\nconfusion_matrix = confusion_matrix.append(users_accuracy)", "_____no_output_____" ] ], [ [ "`Overall Accuracy`", "_____no_output_____" ] ], [ [ "confusion_matrix.loc[\"User's\",\"Producer's\"] = (confusion_matrix.loc[0, 0] + \n confusion_matrix.loc[1, 1]) / confusion_matrix.loc['All', 'All'] * 100", "_____no_output_____" ] ], [ [ "`F1 Score`\n\nThe F1 score is the harmonic mean of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall), and is calculated as:\n\n$$\n\\begin{aligned}\n\\text{Fscore} = 2 \\times \\frac{\\text{UA} \\times \\text{PA}}{\\text{UA} + \\text{PA}}.\n\\end{aligned}\n$$\n\nWhere UA = Users Accuracy, and PA = Producer's Accuracy", "_____no_output_____" ] ], [ [ "fscore = pd.Series([(2*(confusion_matrix.loc[\"User's\", 0]*confusion_matrix.loc[0, \"Producer's\"]) / (confusion_matrix.loc[\"User's\", 0]+confusion_matrix.loc[0, \"Producer's\"])) / 100,\n f1_score(ground_truth['Actual'].astype(np.int8), ground_truth['Prediction'].astype(np.int8), average='binary')]\n ).rename(\"F-score\")\n\nconfusion_matrix = confusion_matrix.append(fscore)", "_____no_output_____" ] ], [ [ "### Tidy Confusion Matrix\n\n* Limit decimal places,\n* Add readable class names\n* Remove non-sensical values ", "_____no_output_____" ] ], [ [ "# round numbers\nconfusion_matrix = confusion_matrix.round(decimals=2)", "_____no_output_____" ], [ "# rename booleans to class names\nconfusion_matrix = confusion_matrix.rename(columns={0:'Non-crop', 1:'Crop', 'All':'Total'},\n index={0:'Non-crop', 1:'Crop', 'All':'Total'})", "_____no_output_____" ], [ "#remove the nonsensical values in the table\nconfusion_matrix.loc[\"User's\", 'Total'] = 
'--'\nconfusion_matrix.loc['Total', \"Producer's\"] = '--'\nconfusion_matrix.loc[\"F-score\", 'Total'] = '--'\nconfusion_matrix.loc[\"F-score\", \"Producer's\"] = '--'", "_____no_output_____" ], [ "confusion_matrix", "_____no_output_____" ] ], [ [ "### Export csv", "_____no_output_____" ] ], [ [ "confusion_matrix.to_csv('results/Sahel_10m_accuracy_assessment_confusion_matrix.csv')", "_____no_output_____" ] ], [ [ "***\n\n## Additional information\n\n**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). \nDigital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.\n\n**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).\nIf you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).\n\n**Last modified:** Dec 2020\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7db0242688382e523411fecb3d7bf85518725ce
60,365
ipynb
Jupyter Notebook
Explorative Data Analysis/Explorative Data Analysis On Student's Academic Performance Dataset.ipynb
capturemathan/Interesting-Python-Modules
5fb880fc1860a27e612d6c90b34fb1ff8c3488a2
[ "MIT" ]
1
2021-05-20T12:01:24.000Z
2021-05-20T12:01:24.000Z
Explorative Data Analysis/Explorative Data Analysis On Student's Academic Performance Dataset.ipynb
capturemathan/Python-Modules
5fb880fc1860a27e612d6c90b34fb1ff8c3488a2
[ "MIT" ]
null
null
null
Explorative Data Analysis/Explorative Data Analysis On Student's Academic Performance Dataset.ipynb
capturemathan/Python-Modules
5fb880fc1860a27e612d6c90b34fb1ff8c3488a2
[ "MIT" ]
null
null
null
69.385057
841
0.675358
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7db0787bf1306f9d8ea1d93ba0699ec993e63f2
137,796
ipynb
Jupyter Notebook
Recommendations_with_IBM.ipynb
Anirudh-Kulkarni/IBM_article_recommendations
54feab7b1766146279a84e00a18083dfdcc70185
[ "MIT" ]
null
null
null
Recommendations_with_IBM.ipynb
Anirudh-Kulkarni/IBM_article_recommendations
54feab7b1766146279a84e00a18083dfdcc70185
[ "MIT" ]
null
null
null
Recommendations_with_IBM.ipynb
Anirudh-Kulkarni/IBM_article_recommendations
54feab7b1766146279a84e00a18083dfdcc70185
[ "MIT" ]
null
null
null
52.93738
20,956
0.628117
[ [ [ "# Recommendations with IBM\n\nIn this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform. \n\n\nYou may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**\n\nBy following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations. \n\n\n## Table of Contents\n\nI. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>\nII. [Rank Based Recommendations](#Rank)<br>\nIII. [User-User Based Collaborative Filtering](#User-User)<br>\nIV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>\nV. [Matrix Factorization](#Matrix-Fact)<br>\nVI. [Extras & Concluding](#conclusions)\n\nAt the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport project_tests as t\nimport pickle\n\n%matplotlib inline\n\ndf = pd.read_csv('data/user-item-interactions.csv') # Import the user item interactions dataframe\ndf_content = pd.read_csv('data/articles_community.csv') # Import the articles database dataframe\ndel df['Unnamed: 0']\ndel df_content['Unnamed: 0']\n\n# Show df to get an idea of the data\ndf.head()", "_____no_output_____" ], [ "# Show df_content to get an idea of the data\ndf_content.head()", "_____no_output_____" ] ], [ [ "### <a class=\"anchor\" id=\"Exploratory-Data-Analysis\">Part I : Exploratory Data Analysis</a>\n\nUse the dictionary and cells below to provide some insight into the descriptive statistics of the data.\n\n`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article. ", "_____no_output_____" ] ], [ [ "df_email = df.set_index('email') # Set index to the email \ndf_email_count = df_email.groupby('email')['article_id'].count() \n# Group articles by email and extract article_id's count: this gives \n# the number of articles each user interacted with \ndf_email_countunique = df_email.groupby('email')['article_id'].unique()\n# Here, we define a dataframe as above BUT we include an article in \n# the count only once even if a user has interacted multiple times with it\n\ndf_email_count.describe()", "_____no_output_____" ], [ "df_email_countunique_len = df_email_countunique.apply(lambda x: len(x)) \n# We extract the number of articles each user interacted with and \n# show the statistics below\ndf_email_countunique_len.describe()", "_____no_output_____" ], [ "# Fill in the median and maximum number of user_article interactios below\n\nmedian_val = 3 # 50% of individuals interact with ____ number of articles or fewer.\nmax_views_by_user = 364 # The maximum number of user-article interactions by any 1 user with ... articles.", "_____no_output_____" ] ], [ [ "`2.` Explore and remove duplicate articles from the **df_content** dataframe. 
", "_____no_output_____" ] ], [ [ "# Find and explore duplicate articles\n\ndf_content[df_content['article_id'].duplicated() == True]\n# The above shows the duplicate entries\n\n", "_____no_output_____" ], [ "# Remove any rows that have the same article_id - only keep the first\n\n\ndf_content1 = df_content.drop_duplicates(subset =['article_id'])\ndf_content1.head()\n", "_____no_output_____" ] ], [ [ "`3.` Use the cells below to find:\n\n**a.** The number of unique articles that have an interaction with a user. \n**b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>\n**c.** The number of unique users in the dataset. (excluding null values) <br>\n**d.** The number of user-article interactions in the dataset.", "_____no_output_____" ] ], [ [ "df4 = df.set_index('article_id')\ndf5 = df4.groupby('article_id')['title']\ndf4.describe()", "_____no_output_____" ], [ "unique_articles = 714# The number of unique articles that have at least one interaction\ntotal_articles = 1051 # The number of unique articles on the IBM platform\nunique_users = 5148 # The number of unique users\nuser_article_interactions = 45993 # The number of user-article interactions", "_____no_output_____" ] ], [ [ "`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).", "_____no_output_____" ] ], [ [ "df_id_art = df4.groupby(['article_id']) # Create a dataframe grouped\n# by article_id and having the index of article_id\n\nvalue_counts = df['article_id'].value_counts(dropna=True, sort=True)\n# Create a value_count series with number of times the article is \n# interacted with. 
Then sort it with highest count values on the top.\nvalue_counts.head()", "_____no_output_____" ], [ "most_viewed_article_id = '1429.0' # The most viewed article in the dataset as a string with one value following the decimal \nmax_views = 937 # The most viewed article in the dataset was viewed how many times?", "_____no_output_____" ], [ "## No need to change the code here - this will be helpful for later parts of the notebook\n# Run this cell to map the user email to a user_id column and remove the email column\n\ndef email_mapper():\n coded_dict = dict()\n cter = 1\n email_encoded = []\n \n for val in df['email']:\n if val not in coded_dict:\n coded_dict[val] = cter\n cter+=1\n \n email_encoded.append(coded_dict[val])\n return email_encoded\n\nemail_encoded = email_mapper()\ndel df['email']\ndf['user_id'] = email_encoded\n\n# show header\ndf.head()", "_____no_output_____" ], [ "## If you stored all your results in the variable names above, \n## you shouldn't need to change anything in this cell\n\nsol_1_dict = {\n '`50% of individuals have _____ or fewer interactions.`': median_val,\n '`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,\n '`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,\n '`The most viewed article in the dataset was viewed _____ times.`': max_views,\n '`The article_id of the most viewed article is ______.`': most_viewed_article_id,\n '`The number of unique articles that have at least 1 rating ______.`': unique_articles,\n '`The number of unique users in the dataset is ______`': unique_users,\n '`The number of unique articles on the IBM platform`': total_articles\n}\n\n# Test your dictionary against the solution\nt.sol_1_test(sol_1_dict)", "It looks like you have everything right here! Nice job!\n" ] ], [ [ "### <a class=\"anchor\" id=\"Rank\">Part II: Rank-Based Recommendations</a>\n\nUnlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.\n\n`1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.", "_____no_output_____" ] ], [ [ "def get_top_articles(n, df=df):\n '''\n INPUT:\n n - (int) the number of top articles to return\n df - (pandas dataframe) df as defined at the top of the notebook \n \n OUTPUT:\n top_articles - (list) A list of the top 'n' article titles \n \n '''\n value_counts = df['article_id'].value_counts(dropna=True, sort=True)\n top_articles_id = list(value_counts.index[0:n])\n # Return articles with highest value counts i.e. interacted with the most\n top_articles = [df[df['article_id'] == art_id].title.iloc[0] for art_id in top_articles_id]\n \n return top_articles # Return the top article titles from df (not df_content)\n\ndef get_top_article_ids(n, df=df):\n '''\n INPUT:\n n - (int) the number of top articles to return\n df - (pandas dataframe) df as defined at the top of the notebook \n \n OUTPUT:\n top_articles - (list) A list of the top 'n' article ids \n \n '''\n # Return article ids with highest value counts i.e. 
interacted with the most\n value_counts = df['article_id'].value_counts(dropna=True, sort=True)\n top_articles = list(value_counts.index[0:n])\n return top_articles # Return the top article ids", "_____no_output_____" ], [ "print(get_top_articles(10))\nprint(get_top_article_ids(10))", "['use deep learning for image classification', 'insights from new york car accident reports', 'visualize car data with brunel', 'use xgboost, scikit-learn & ibm watson machine learning apis', 'predicting churn with the spss random tree algorithm', 'healthcare python streaming application demo', 'finding optimal locations of new store using decision optimization', 'apache spark lab, part 1: basic concepts', 'analyze energy consumption in buildings', 'gosales transactions for logistic regression model']\n[1429.0, 1330.0, 1431.0, 1427.0, 1364.0, 1314.0, 1293.0, 1170.0, 1162.0, 1304.0]\n" ], [ "# Test your function by returning the top 5, 10, and 20 articles\ntop_5 = get_top_articles(5)\ntop_10 = get_top_articles(10)\ntop_20 = get_top_articles(20)\n\n# Test each of your three lists from above\nt.sol_2_test(get_top_articles)", "Your top_5 looks like the solution list! Nice job.\nYour top_10 looks like the solution list! Nice job.\nYour top_20 looks like the solution list! Nice job.\n" ] ], [ [ "### <a class=\"anchor\" id=\"User-User\">Part III: User-User Based Collaborative Filtering</a>\n\n\n`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns. \n\n* Each **user** should only appear in each **row** once.\n\n\n* Each **article** should only show up in one **column**. \n\n\n* **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1. \n\n\n* **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**. \n\nUse the tests to make sure the basic structure of your matrix matches what is expected by the solution.", "_____no_output_____" ] ], [ [ "# create the user-article matrix with 1's and 0's\n\ndef create_user_item_matrix(df):\n '''\n INPUT:\n df - pandas dataframe with article_id, title, user_id columns\n \n OUTPUT:\n user_item - user item matrix \n \n Description:\n Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with \n an article and a 0 otherwise\n '''\n # Extract only the user_id and article_id columns\n \n df6 = df[['user_id', 'article_id']]\n # Extract dummies of the article_id variable and concatenate with user_id variable\n df7 = pd.concat([df6.user_id, pd.get_dummies(df6.article_id)], axis=1)\n\n # If an article is interacted with more than or equal to once by a user, set it to 1!\n user_item = (df7.groupby('user_id').sum() > 0).astype(int)\n \n \n return user_item # return the user_item matrix \n\nuser_item = create_user_item_matrix(df)", "_____no_output_____" ], [ "## Tests: You should just need to run this cell. Don't change the code.\nassert user_item.shape[0] == 5149, \"Oops! The number of users in the user-article matrix doesn't look right.\"\nassert user_item.shape[1] == 714, \"Oops! The number of articles in the user-article matrix doesn't look right.\"\nassert user_item.sum(axis=1)[1] == 36, \"Oops! The number of articles seen by user 1 doesn't look right.\"\nprint(\"You have passed our quick tests! 
Please proceed!\")", "You have passed our quick tests! Please proceed!\n" ] ], [ [ "`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users. \n\nUse the tests to test your function.", "_____no_output_____" ] ], [ [ "def find_similar_users(user_id, user_item=user_item):\n '''\n INPUT:\n user_id - (int) a user_id\n user_item - (pandas dataframe) matrix of users by articles: \n 1's when a user has interacted with an article, 0 otherwise\n \n OUTPUT:\n similar_users - (list) an ordered list where the closest users (largest dot product users)\n are listed first\n \n Description:\n Computes the similarity of every pair of users based on the dot product\n Returns an ordered\n \n '''\n # compute similarity of each user to the provided user\n sim = user_item.dot(user_item.iloc[user_id-1,:])\n # sort by similarity\n sim2 = sim.sort_values(ascending = False)\n # create list of just the ids\n most_similar_users = list(sim2.index)\n # remove the own user's id\n most_similar_users.remove(user_id)\n \n return most_similar_users # return a list of the users in order from most to least similar\n ", "_____no_output_____" ], [ "# Do a spot check of your function\nprint(\"The 10 most similar users to user 1 are: {}\".format(find_similar_users(1)[:10]))\nprint(\"The 5 most similar users to user 3933 are: {}\".format(find_similar_users(3933)[:5]))\nprint(\"The 3 most similar users to user 46 are: {}\".format(find_similar_users(46)[:3]))", "The 10 most similar users to user 1 are: [3933, 23, 3782, 203, 4459, 131, 3870, 46, 4201, 5041]\nThe 5 most similar users to user 3933 are: [1, 23, 3782, 4459, 203]\nThe 3 most similar users to user 46 are: [4201, 23, 3782]\n" ] ], [ [ "`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user. 
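Before filling these in, it may help to see the core idea end-to-end on a tiny example. The sketch below is illustrative only (it is not part of the graded solution, and the names `toy_user_item` and `target` are made up): it computes dot-product similarity against one user, then recommends whatever the closest neighbor has seen that the target user has not.

```python
import pandas as pd

# Toy binary user-item matrix: rows are user ids, columns are article ids.
toy_user_item = pd.DataFrame([[1, 0, 1],
                              [1, 1, 0],
                              [0, 1, 1]],
                             index=[1, 2, 3],
                             columns=['10.0', '20.0', '30.0'])

target = 1
# The dot product counts the articles that both users interacted with.
sims = toy_user_item.dot(toy_user_item.loc[target]).drop(target)
closest = sims.sort_values(ascending=False).index[0]

# Recommend what the closest user saw but the target user did not.
seen = set(toy_user_item.columns[(toy_user_item.loc[target] == 1).values])
neighbor_seen = set(toy_user_item.columns[(toy_user_item.loc[closest] == 1).values])
print(sorted(neighbor_seen - seen))  # -> ['20.0']
```

This is exactly the pattern the functions below generalize: similarity first, then a set difference against the articles already seen, then truncation to `m` recommendations.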
", "_____no_output_____" ] ], [ [ "def get_article_names(article_ids, df=df):\n '''\n INPUT:\n article_ids - (list) a list of article ids\n df - (pandas dataframe) df as defined at the top of the notebook\n \n OUTPUT:\n article_names - (list) a list of article names associated with the list of article ids \n (this is identified by the title column)\n '''\n # Extract titles given the article_ids\n article_names = [df.title[df.article_id == float(a)].iloc[0] for a in article_ids]\n return article_names # Return the article names associated with list of article ids\n\n\ndef get_user_articles(user_id, user_item=user_item):\n '''\n INPUT:\n user_id - (int) a user id\n user_item - (pandas dataframe) matrix of users by articles: \n 1's when a user has interacted with an article, 0 otherwise\n \n OUTPUT:\n article_ids - (list) a list of the article ids seen by the user\n article_names - (list) a list of article names associated with the list of article ids \n (this is identified by the doc_full_name column in df_content)\n \n Description:\n Provides a list of the article_ids and article titles that have been seen by a user\n '''\n # Extract the articles seen by a user\n article_ids = list(str(x) for x in set(df[df.user_id==user_id].article_id))\n article_names = list(str(x) for x in set(df[df.user_id==user_id].title))\n return article_ids, article_names # return the ids and names\n\n\ndef user_user_recs(user_id, m=10):\n '''\n INPUT:\n user_id - (int) a user id\n m - (int) the number of recommendations you want for the user\n \n OUTPUT:\n recs - (list) a list of recommendations for the user\n \n Description:\n Loops through the users based on closeness to the input user_id\n For each user - finds articles the user hasn't seen before and provides them as recs\n Does this until m recommendations are found\n \n Notes:\n Users who are the same closeness are chosen arbitrarily as the 'next' user\n \n For the user where the number of recommended articles starts below m \n and ends exceeding m, the last items are chosen arbitrarily\n \n '''\n # Find similar users (by dot product)\n similar_users = find_similar_users(user_id)\n #Find articles already seen by user\n art_ids1, art_nms1 = get_user_articles(user_id)\n \n # Find other articles based on similar users that our user has not \n # already seen\n rec_list = []\n for user in similar_users:\n art_ids, art_nms = get_user_articles(user)\n rec_list.append(list(set(art_ids) - set(art_ids1)))\n recs2 = [item for sublist in rec_list for item in sublist]\n recs = recs2[:m]\n return recs # return your recommendations for this user_id ", "_____no_output_____" ], [ "# Check Results\n#get_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1\nlist(str(x) for x in set(df[df.user_id==20].article_id))", "_____no_output_____" ], [ "# Test your functions here - No need to change this code - just run this cell\nassert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), \"Oops! 
Your the get_article_names function doesn't work quite how we expect.\"\nassert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), \"Oops! Your the get_article_names function doesn't work quite how we expect.\"\nassert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])\nassert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])\nassert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])\nassert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])\nprint(\"If this is all you see, you passed all of our tests! Nice job!\")", "If this is all you see, you passed all of our tests! Nice job!\n" ] ], [ [ "`4.` Now we are going to improve the consistency of the **user_user_recs** function from above. \n\n* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.\n\n\n* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose articles with the articles with the most total interactions before choosing those with fewer total interactions. 
This ranking should be what would be obtained from the **top_articles** function you wrote earlier.", "_____no_output_____" ] ], [ [ "def get_top_sorted_users(user_id, df=df, user_item=user_item):\n '''\n INPUT:\n user_id - (int)\n df - (pandas dataframe) df as defined at the top of the notebook \n user_item - (pandas dataframe) matrix of users by articles: \n 1's when a user has interacted with an article, 0 otherwise\n \n \n OUTPUT:\n neighbors_df - (pandas dataframe) a dataframe with:\n neighbor_id - is a neighbor user_id\n similarity - measure of the similarity of each user to the provided user_id\n num_interactions - the number of articles viewed by the user - if a u\n \n Other Details - sort the neighbors_df by the similarity and then by number of interactions where \n highest of each is higher in the dataframe\n \n '''\n \n # To compute number of interactions:\n # Extract dummies of the article_id variable and concatenate with user_id variable\n df_new = pd.concat([df.user_id, pd.get_dummies(df.article_id)], axis=1)\n\n # Sum the number of interactions of a user\n user_item2 = (df_new.groupby('user_id').sum()).sum(axis=1)\n \n \n # Find the users with most similarity and create a new data frame\n neighbors_df = pd.DataFrame(find_similar_users(user_id), columns = ['neighbor_id'])\n\n # Add columns with the similarities and their number of interactions\n neighbors_df['similarity'] = list(user_item.loc[neighbors_df.neighbor_id].dot(user_item.loc[user_id,:]))\n\n \n \n\n neighbors_df['num_interactions']=list(user_item2.loc[neighbors_df.neighbor_id])\n\n neighbors_df.sort_values(by=['similarity','num_interactions'], ascending = False)\n \n return neighbors_df # Return the dataframe specified in the doc_string\n\n\ndef user_user_recs_part2(user_id, m=10):\n '''\n INPUT:\n user_id - (int) a user id\n m - (int) the number of recommendations you want for the user\n \n OUTPUT:\n recs - (list) a list of recommendations for the user by article id\n rec_names - (list) a list of recommendations for the user by article title\n \n Description:\n Loops through the users based on closeness to the input user_id\n For each user - finds articles the user hasn't seen before and provides them as recs\n Does this until m recommendations are found\n \n Notes:\n * Choose the users that have the most total article interactions \n before choosing those with fewer article interactions.\n\n * Choose articles with the articles with the most total interactions \n before choosing those with fewer total interactions. 
\n \n '''\n # Get the neighbours\n neighbors_df = get_top_sorted_users(user_id)\n # And the articles that our user has already seen\n art_ids1, art_nms1 = get_user_articles(user_id)\n rec_list = []\n \n # Get recommendations from neighbours that our user hasn't already seen\n for user in neighbors_df.neighbor_id:\n art_ids, art_nms = get_user_articles(user)\n rec_list.append(list(set(art_ids) - set(art_ids1)))\n recs2 = [item for sublist in rec_list for item in sublist]\n recs = recs2[:m]\n rec_names = get_article_names(recs)\n return recs, rec_names", "_____no_output_____" ], [ "# Quick spot check - don't change this code - just use it to test your functions\nrec_ids, rec_names = user_user_recs_part2(20, 10)\nprint(\"The top 10 recommendations for user 20 are the following article ids:\")\nprint(rec_ids)\nprint()\nprint(\"The top 10 recommendations for user 20 are the following article names:\")\nprint(rec_names)", "The top 10 recommendations for user 20 are the following article ids:\n['1400.0', '1276.0', '1035.0', '1172.0', '903.0', '1162.0', '1357.0', '1314.0', '939.0', '1366.0']\n\nThe top 10 recommendations for user 20 are the following article names:\n['uci ml repository: chronic kidney disease data set', 'deploy your python model as a restful api', 'machine learning for the enterprise.', 'apache spark lab, part 3: machine learning', 'an attempt to understand boosting algorithm(s)', 'analyze energy consumption in buildings', 'overlapping co-cluster recommendation algorithm (ocular)', 'healthcare python streaming application demo', 'deep learning from scratch i: computational graphs', 'process events from the watson iot platform in a streams python application']\n" ], [ "neighbors_df = get_top_sorted_users(131)\nneighbors_df.iloc[0:12]\n", "_____no_output_____" ] ], [ [ "`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.", "_____no_output_____" ] ], [ [ "### Tests with a dictionary of results\n\nuser1_most_sim = neighbors_df.iloc[0].neighbor_id # Find the user that is most similar to user 1 \nuser131_10th_sim = neighbors_df.iloc[9].neighbor_id# Find the 10th most similar user to user 131", "_____no_output_____" ], [ "## Dictionary Test Here\nsol_5_dict = {\n 'The user that is most similar to user 1.': user1_most_sim, \n 'The user that is the 10th most similar to user 131': user131_10th_sim,\n}\n\nt.sol_5_test(sol_5_dict)", "This all looks good! Nice job!\n" ] ], [ [ "`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.", "_____no_output_____" ], [ "The above method only works by finding other similar users, so user-bsed collaborative methods will not work. We could use rank based recommendations i.e. the get_top_articles function. For better ways to make recommendations, we could potentially add filters that the user could use to select articles. ", "_____no_output_____" ], [ "`7.` Using your existing functions, provide the top 10 recommended articles you would provide for the a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.", "_____no_output_____" ] ], [ [ "new_user = '0.0'\n\n# What would your recommendations be for this new user '0.0'? 
As a new user, they have no observed articles.\n# Provide a list of the top 10 article ids you would give to \nnew_user_recs = list(str(x) for x in get_top_article_ids(10)) # Your recommendations here\n\nnew_user_recs", "_____no_output_____" ], [ "assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), \"Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users.\"\n\nprint(\"That's right! Nice job!\")", "That's right! Nice job!\n" ] ], [ [ "### <a class=\"anchor\" id=\"Content-Recs\">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>\n\nAnother method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns hold content related information. \n\n`1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.\n\n### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.", "_____no_output_____" ] ], [ [ "def make_content_recs(article_id, m):\n '''\n INPUT: \n article_id - (str with a number) one article id that the user has interacted with\n m - (int) the number of recommendations you want for the user\n \n OUTPUT:\n recs - (list) a list of recommendations for the user by article id\n rec_names - (list) a list of recommendations for the user by article title\n \n Description:\n For the given article_id, find the users who have interacted with this article. Find the other articles that\n most of these users have interacted with. \n\n '''\n article_id_float = float(article_id)\n \n # Find all users who have interacted with this article\n users_df = user_item[user_item.columns[user_item.columns == article_id_float]]\n users_to_use = list(users_df.index[users_df[article_id_float] ==1])\n \n # Find the other articles that they have most interacted with as a group\n articles_rec = user_item.iloc[users_to_use,:].sum().sort_values(ascending= False)\n articles_rec.drop(labels=article_id_float)\n recs = list(str(x) for x in articles_rec[0:m].index)\n rec_names = get_article_names(recs)\n \n return recs, rec_names\n ", "_____no_output_____" ] ], [ [ "`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?\n\n### This part is NOT REQUIRED to pass this project. 
However, you may choose to take this on as an extra way to show off your skills.", "_____no_output_____" ], [ "**Write an explanation of your content based recommendation system here.**", "_____no_output_____" ], [ "`3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.\n\n### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.", "_____no_output_____" ] ], [ [ "# make recommendations for a brand new user\nnew_user_recs = list(str(x) for x in get_top_article_ids(10)) # Your recommendations here\n\nnew_user_recs\n\n# make a recommendations for a user who only has interacted with article id '1427.0'\n\nuser_rec_ids, user_rec_titles = make_content_recs('1427.0', 5)\n\nuser_rec_ids", "_____no_output_____" ] ], [ [ "### <a class=\"anchor\" id=\"Matrix-Fact\">Part V: Matrix Factorization</a>\n\nIn this part of the notebook, you will build use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.\n\n`1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook. ", "_____no_output_____" ] ], [ [ "# Load the matrix here\nuser_item_matrix = pd.read_pickle('user_item_matrix.p')", "_____no_output_____" ], [ "# quick look at the matrix\nuser_item_matrix.head()", "_____no_output_____" ] ], [ [ "`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.", "_____no_output_____" ] ], [ [ "# Perform SVD on the User-Item Matrix Here\n\nu, s, vt = np.linalg.svd(user_item_matrix)# use the built in to get the three matrices", "_____no_output_____" ] ], [ [ "This matrix has only binary values, so it is different in that sense from the rating matrix used in the lesson. This matrix has nonempty values for every cell, therefore we need not use FunkSVD on it but can do with SVD.", "_____no_output_____" ], [ "`3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.", "_____no_output_____" ] ], [ [ "num_latent_feats = np.arange(10,700+10,20)\nsum_errs = []\n\nfor k in num_latent_feats:\n # restructure with k latent features\n s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]\n \n # take dot product\n user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))\n \n # compute error for each prediction to actual value\n diffs = np.subtract(user_item_matrix, user_item_est)\n \n # total errors and keep track of them\n err = np.sum(np.sum(np.abs(diffs)))\n sum_errs.append(err)\n \n \nplt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);\nplt.xlabel('Number of Latent Features');\nplt.ylabel('Accuracy');\nplt.title('Accuracy vs. 
Number of Latent Features');", "_____no_output_____" ] ], [ [ "`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below. \n\nUse the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below: \n\n* How many users can we make predictions for in the test set? \n* How many users are we not able to make predictions for because of the cold start problem?\n* How many articles can we make predictions for in the test set? \n* How many articles are we not able to make predictions for because of the cold start problem?", "_____no_output_____" ] ], [ [ "df_train = df.head(40000)\ndf_test = df.tail(5993)\n\n\n# Create matrices for training and testing separately\nuser_item_train = create_user_item_matrix(df_train)\nuser_item_test = create_user_item_matrix(df_test)\n\n# Find users in test that are not in train and articles in test that are not in train\nlen(set(user_item_test.index) - set(user_item_train.index))\nlen(set(user_item_test.columns) - set(user_item_train.columns))\n\n#Visualize the test matrix\nuser_item_test.head()", "_____no_output_____" ], [ "df_train = df.head(40000)\ndf_test = df.tail(5993)\n\ndef create_test_and_train_user_item(df_train, df_test):\n '''\n INPUT:\n df_train - training dataframe\n df_test - test dataframe\n \n OUTPUT:\n user_item_train - a user-item matrix of the training dataframe \n (unique users for each row and unique articles for each column)\n user_item_test - a user-item matrix of the testing dataframe \n (unique users for each row and unique articles for each column)\n test_idx - all of the test user ids\n test_arts - all of the test article ids\n \n '''\n # Create train and test dataframes\n user_item_train = create_user_item_matrix(df_train)\n user_item_test = create_user_item_matrix(df_test)\n \n test_idx = list(user_item_test.index)\n test_arts = list(user_item_test.columns)\n \n return user_item_train, user_item_test, test_idx, test_arts\n\nuser_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)", "_____no_output_____" ], [ "# Replace the values in the dictionary below\na = 662 \nb = 574 \nc = 20 \nd = 0 \n\n\nsol_4_dict = {\n 'How many users can we make predictions for in the test set?': c, \n 'How many users in the test set are we not able to make predictions for because of the cold start problem?': a, \n 'How many articles can we make predictions for in the test set?': b,\n 'How many articles in the test set are we not able to make predictions for because of the cold start problem?': d\n}\n\nt.sol_4_test(sol_4_dict)\n\n# There seems to be some issue with the solution dictionary.", "_____no_output_____" ] ], [ [ "`5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.\n\nUse the cells below to explore how well SVD works towards making predictions for recommendations on the test data. 
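Once you have settled on a number of latent features, turning the factorization into actual recommendations is just a matrix product plus a mask. The helper below is a hedged sketch (the name `svd_recs_for_user` is made up; it assumes numpy arrays shaped like the `u`, `s`, `vt` factors above, and a boolean numpy array `seen_mask` marking the articles the user has already interacted with):

```python
import numpy as np

def svd_recs_for_user(u, s, vt, user_row, seen_mask, k=20, n=10):
    '''Score every article for one user with k latent features and
    return the column indices of the top-n articles not yet seen.'''
    # Reconstruct just this user's row of the user-item matrix.
    preds = u[user_row, :k].dot(np.diag(s[:k])).dot(vt[:k, :])
    preds[seen_mask] = -np.inf           # never re-recommend seen articles
    return np.argsort(preds)[::-1][:n]   # best-scoring unseen columns
```

Here `seen_mask` would simply be the user's row of the user-item matrix cast to booleans.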
", "_____no_output_____" ] ], [ [ "# fit SVD on the user_item_train matrix\nu_train, s_train, vt_train = np.linalg.svd(user_item_train)# fit svd similar to above then use the cells below\n\n# Print all the shapes for understanding what the matrices represent\nprint(np.shape(u_train))\nprint(np.shape(s_train))\nprint(np.shape(vt_train))\nprint(np.shape(user_item_train))", "(4487, 4487)\n(714,)\n(714, 714)\n(4487, 714)\n" ], [ "# Find users that are common in the train and test dataframe\nrows_to_remove = list(set(user_item_test.index) - set(user_item_train.index))\nrows_to_keep = list(set(user_item_test.index) - set(rows_to_remove))\nrows_to_keep\n\n# Find article_ids that are common in the train and test dataframe\ncolumns_to_remove = list(set(user_item_test.columns) - set(user_item_train.columns))\ncolumns_to_keep = list(set(user_item_test.columns) - set(columns_to_remove))\n\n# Find row indices correponding to common users\n# and column indices correponsing to common article_ids\nusers_to_keep = [row - 1 for row in rows_to_keep]\narticle_indices = list(user_item_train.columns)\narticles_to_keep = [article_indices.index(i) for i in columns_to_keep]\n\n\n# Find the u_train, v_train and s_train corresponding to only common users\n# and articles\n\nu_train2 = u_train[users_to_keep,:]\nu_train3 = u_train2[:, users_to_keep]\nvt_train2 = vt_train[articles_to_keep,:]\nvt_train3 = vt_train2[:, articles_to_keep]\ns_train2 = s_train[articles_to_keep]\nnp.shape(np.around(np.dot(np.dot(u_new, s_new), vt_new)))\n\n\n\n# Keep only the common users in the train and test dataframes; \n# we keep all the articles as they are all present in the train dataframe\nuser_item_test2 = user_item_test.loc[user_item_test.index.intersection(rows_to_keep)]\n\n", "_____no_output_____" ], [ "# Use the reduced u,v,s to make predictions about the test dataframe\n# with different number of latent features and compare the results\n\nnum_latent_feats = np.arange(1,20,1)\nsum_errs = []\n\nfor k in num_latent_feats:\n # restructure with k latent features\n s_new, u_new, vt_new = np.diag(s_train2[:k]), u_train3[:, :k], vt_train3[:k, :]\n \n # take dot product\n user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))\n \n # compute error for each prediction to actual value\n diffs = np.subtract(user_item_test2, user_item_est)\n \n # total errors and keep track of them\n err = np.sum(np.sum(np.abs(diffs)))\n sum_errs.append(err)\n \n \nplt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);\nplt.xlabel('Number of Latent Features');\nplt.ylabel('Accuracy');\nplt.title('Accuracy vs. Number of Latent Features');", "_____no_output_____" ] ], [ [ "`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles? ", "_____no_output_____" ], [ "It seems that the accuracy on the test set is high but that it reduced with the number of latent features. This could be because as the number of latent features increases, the model overfits on the training data to reproduce the training data matrix. 
\n\nTo determine if any of the above recommendation systems are an improvement, we could perhaps perform cross-validation on the dataset by splitting it into train-test groups multiple times and then averaging over the prediction accuracies.\n\nTo evaluate the performance of the recommendation system, we could run an A/B test where we recommend articles to the users based on our predictions and see if they're more likely to follow up on these articles compared to articles that were not recommended.", "_____no_output_____" ], [ "<a id='conclusions'></a>\n### Extras\nUsing your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you are certainly capable of taking these tasks on to improve upon your work here!\n\n\n## Conclusion\n\n> Congratulations! You have reached the end of the Recommendations with IBM project! \n\n> **Tip**: Once you are satisfied with your work here, check over your report to make sure that it satisfies all the areas of the [rubric](https://review.udacity.com/#!/rubrics/2322/view). You should also probably remove all of the \"Tips\" like this one so that the presentation is as polished as possible.\n\n\n## Directions to Submit\n\n> Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).\n\n> Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.\n\n> Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations! ", "_____no_output_____" ] ], [ [ "from subprocess import call\ncall(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ] ]
e7db0c43c32f296c101972d4dd025a2b89818728
349,122
ipynb
Jupyter Notebook
Convolutional Neural Networks/Residual_Networks_v2a.ipynb
joyfinder/Deep_Learning_Specialisation
e8dd50b6f3eeda73509e690981b8818120c1dcd0
[ "MIT" ]
null
null
null
Convolutional Neural Networks/Residual_Networks_v2a.ipynb
joyfinder/Deep_Learning_Specialisation
e8dd50b6f3eeda73509e690981b8818120c1dcd0
[ "MIT" ]
null
null
null
Convolutional Neural Networks/Residual_Networks_v2a.ipynb
joyfinder/Deep_Learning_Specialisation
e8dd50b6f3eeda73509e690981b8818120c1dcd0
[ "MIT" ]
null
null
null
108.422981
110,302
0.704628
[ [ [ "# Residual Networks\n\nWelcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.\n\n**In this assignment, you will:**\n- Implement the basic building blocks of ResNets. \n- Put together these building blocks to implement and train a state-of-the-art neural network for image classification. ", "_____no_output_____" ], [ "## <font color='darkblue'>Updates</font>\n\n#### If you were working on the notebook before this update...\n* The current notebook is version \"2a\".\n* You can find your original work saved in the notebook with the previous version name (\"v2\") \n* To view the file directory, go to the menu \"File->Open\", and this will open a new tab that shows the file directory.\n\n#### List of updates\n* For testing on an image, replaced `preprocess_input(x)` with `x=x/255.0` to normalize the input image in the same way that the model's training data was normalized.\n* Refers to \"shallower\" layers as those layers closer to the input, and \"deeper\" layers as those closer to the output (Using \"shallower\" layers instead of \"lower\" or \"earlier\").\n* Added/updated instructions.\n", "_____no_output_____" ], [ "This assignment will be done in Keras. \n\nBefore jumping into the problem, let's run the cell below to load the required packages.", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom keras import layers\nfrom keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D\nfrom keras.models import Model, load_model\nfrom keras.preprocessing import image\nfrom keras.utils import layer_utils\nfrom keras.utils.data_utils import get_file\nfrom keras.applications.imagenet_utils import preprocess_input\nimport pydot\nfrom IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\nfrom keras.utils import plot_model\nfrom resnets_utils import *\nfrom keras.initializers import glorot_uniform\nimport scipy.misc\nfrom matplotlib.pyplot import imshow\n%matplotlib inline\n\nimport keras.backend as K\nK.set_image_data_format('channels_last')\nK.set_learning_phase(1)", "Using TensorFlow backend.\n" ] ], [ [ "## 1 - The problem of very deep neural networks\n\nLast week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.\n\n* The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the shallower layers, closer to the input) to very complex features (at the deeper layers, closer to the output). \n* However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent prohibitively slow. 
\n* More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and \"explode\" to take very large values). \n* During training, you might therefore see the magnitude (or norm) of the gradient for the shallower layers decrease to zero very rapidly as training proceeds: ", "_____no_output_____" ], [ "<img src=\"images/vanishing_grad_kiank.png\" style=\"width:450px;height:220px;\">\n<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Vanishing gradient** <br> The speed of learning decreases very rapidly for the shallower layers as the network trains </center></caption>\n\nYou are now going to solve this problem by building a Residual Network!", "_____no_output_____" ], [ "## 2 - Building a Residual Network\n\nIn ResNets, a \"shortcut\" or a \"skip connection\" allows the model to skip layers: \n\n<img src=\"images/skip_connection_kiank.png\" style=\"width:650px;height:200px;\">\n<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : A ResNet block showing a **skip-connection** <br> </center></caption>\n\nThe image on the left shows the \"main path\" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network. \n\nWe also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. \n \n(There is also some evidence that the ease of learning an identity function accounts for ResNets' remarkable performance even more so than skip connections helping with vanishing gradients).\n\nTwo main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them: the \"identity block\" and the \"convolutional block.\"", "_____no_output_____" ], [ "### 2.1 - The identity block\n\nThe identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:\n\n<img src=\"images/idblock2_kiank.png\" style=\"width:650px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Identity block.** Skip connection \"skips over\" 2 layers. </center></caption>\n\nThe upper path is the \"shortcut path.\" The lower path is the \"main path.\" In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras! \n\nIn this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection \"skips over\" 3 hidden layers rather than 2 layers. 
It looks like this: \n\n<img src=\"images/idblock3_kiank.png\" style=\"width:650px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Identity block.** Skip connection \"skips over\" 3 layers.</center></caption>", "_____no_output_____" ], [ "Here are the individual steps.\n\nFirst component of main path: \n- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is \"valid\" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization. \n- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\nSecond component of main path:\n- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is \"same\" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization. \n- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\nThird component of main path:\n- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is \"valid\" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization. \n- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. \n- Note that there is **no** ReLU activation function in this component. \n\nFinal step: \n- The `X_shortcut` and the output from the 3rd layer `X` are added together.\n- **Hint**: The syntax will look something like `Add()([var1,var2])`\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\n**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read this carefully to make sure you understand what it is doing. You should implement the rest. \n- To implement the Conv2D step: [Conv2D](https://keras.io/layers/convolutional/#conv2d)\n- To implement BatchNorm: [BatchNormalization](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the 'channels' axis))\n- For the activation, use: `Activation('relu')(X)`\n- To add the value passed forward by the shortcut: [Add](https://keras.io/layers/merge/#add)", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: identity_block\n\ndef identity_block(X, f, filters, stage, block):\n \"\"\"\n Implementation of the identity block as defined in Figure 4\n \n Arguments:\n X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)\n f -- integer, specifying the shape of the middle CONV's window for the main path\n filters -- python list of integers, defining the number of filters in the CONV layers of the main path\n stage -- integer, used to name the layers, depending on their position in the network\n block -- string/character, used to name the layers, depending on their position in the network\n \n Returns:\n X -- output of the identity block, tensor of shape (n_H, n_W, n_C)\n \"\"\"\n \n # defining name basis\n conv_name_base = 'res' + str(stage) + block + '_branch'\n bn_name_base = 'bn' + str(stage) + block + '_branch'\n \n # Retrieve Filters\n F1, F2, F3 = filters\n \n # Save the input value. You'll need this later to add back to the main path. 
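    # (Illustrative note, added) X_shortcut is just a reference to the
    # block's input tensor; it is merged back with Add() right before the
    # final ReLU. If the three conv layers learn weights near zero, the
    # block still passes its input through almost unchanged, which is
    # what makes the identity mapping easy to learn.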
\n X_shortcut = X\n \n # First component of main path\n X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)\n X = Activation('relu')(X)\n \n ### START CODE HERE ###\n \n # Second component of main path (≈3 lines)\n X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed = 0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)\n X = Activation('relu')(X)\n\n # Third component of main path (≈2 lines)\n X = Conv2D(filters = F3, kernel_size = (1,1), strides = (1, 1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed = 0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)\n\n # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)\n X = Add()([X, X_shortcut])\n X = Activation('relu')(X)\n \n ### END CODE HERE ###\n \n return X", "_____no_output_____" ], [ "tf.reset_default_graph()\n\nwith tf.Session() as test:\n np.random.seed(1)\n A_prev = tf.placeholder(\"float\", [3, 4, 4, 6])\n X = np.random.randn(3, 4, 4, 6)\n A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')\n test.run(tf.global_variables_initializer())\n out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})\n print(\"out = \" + str(out[0][1][1][0]))", "out = [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]\n" ] ], [ [ "**Expected Output**:\n\n<table>\n <tr>\n <td>\n **out**\n </td>\n <td>\n [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]\n </td>\n </tr>\n\n</table>", "_____no_output_____" ], [ "## 2.2 - The convolutional block\n\nThe ResNet \"convolutional block\" is the second block type. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path: \n\n<img src=\"images/convblock_kiank.png\" style=\"width:650px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Convolutional block** </center></caption>\n\n* The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) \n* For example, to reduce the activation dimensions's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. \n* The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step. \n\nThe details of the convolutional block are as follows. \n\nFirst component of main path:\n- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is \"valid\" and its name should be `conv_name_base + '2a'`. Use 0 as the `glorot_uniform` seed.\n- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\nSecond component of main path:\n- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). 
Its padding is \"same\" and it's name should be `conv_name_base + '2b'`. Use 0 as the `glorot_uniform` seed.\n- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n\nThird component of main path:\n- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is \"valid\" and it's name should be `conv_name_base + '2c'`. Use 0 as the `glorot_uniform` seed.\n- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component. \n\nShortcut path:\n- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is \"valid\" and its name should be `conv_name_base + '1'`. Use 0 as the `glorot_uniform` seed.\n- The BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '1'`. \n\nFinal step: \n- The shortcut and the main path values are added together.\n- Then apply the ReLU activation function. This has no name and no hyperparameters. \n \n**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.\n- [Conv2D](https://keras.io/layers/convolutional/#conv2d)\n- [BatchNormalization](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))\n- For the activation, use: `Activation('relu')(X)`\n- [Add](https://keras.io/layers/merge/#add)", "_____no_output_____" ] ], [ [ "def convolutional_block(X, f, filters, stage, block, s = 2):\n \"\"\"\n Implementation of the convolutional block as defined in Figure 4\n \n Arguments:\n X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)\n f -- integer, specifying the shape of the middle CONV's window for the main path\n filters -- python list of integers, defining the number of filters in the CONV layers of the main path\n stage -- integer, used to name the layers, depending on their position in the network\n block -- string/character, used to name the layers, depending on their position in the network\n s -- Integer, specifying the stride to be used\n \n Returns:\n X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)\n \"\"\"\n \n # defining name basis\n conv_name_base = 'res' + str(stage) + block + '_branch'\n bn_name_base = 'bn' + str(stage) + block + '_branch'\n \n # Retrieve Filters\n F1, F2, F3 = filters\n \n # Save the input value\n X_shortcut = X\n\n\n ##### MAIN PATH #####\n # First component of main path \n X = Conv2D(filters =F1, kernel_size =(1, 1), strides = (s,s), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)\n X = Activation('relu')(X)\n \n ### START CODE HERE ###\n\n # Second component of main path (≈3 lines)\n X = Conv2D(filters =F2, kernel_size =(f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)\n X = Activation('relu')(X)\n\n # Third component of main path (≈2 lines)\n X = Conv2D(filters =F3, kernel_size =(1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = 
glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)\n\n ##### SHORTCUT PATH #### (≈2 lines)\n X_shortcut = Conv2D(filters =F3, kernel_size =(1, 1), strides = (s,s), padding = 'valid', name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)\n X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)\n\n # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)\n X = Add()([X, X_shortcut]) \n X = Activation('relu')(X)\n \n ### END CODE HERE ###\n \n return X", "_____no_output_____" ], [ "tf.reset_default_graph()\n\nwith tf.Session() as test:\n np.random.seed(1)\n A_prev = tf.placeholder(\"float\", [3, 4, 4, 6])\n X = np.random.randn(3, 4, 4, 6)\n A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')\n test.run(tf.global_variables_initializer())\n out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})\n print(\"out = \" + str(out[0][1][1][0]))", "out = [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]\n" ] ], [ [ "**Expected Output**:\n\n<table>\n <tr>\n <td>\n **out**\n </td>\n <td>\n [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]\n </td>\n </tr>\n\n</table>", "_____no_output_____" ], [ "## 3 - Building your first ResNet model (50 layers)\n\nYou now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. \"ID BLOCK\" in the diagram stands for \"Identity block,\" and \"ID BLOCK x3\" means you should stack 3 identity blocks together.\n\n<img src=\"images/resnet_kiank.png\" style=\"width:850px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 5** </u><font color='purple'> : **ResNet-50 model** </center></caption>\n\nThe details of this ResNet-50 model are:\n- Zero-padding pads the input with a pad of (3,3)\n- Stage 1:\n - The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is \"conv1\".\n - BatchNorm is applied to the 'channels' axis of the input.\n - MaxPooling uses a (3,3) window and a (2,2) stride.\n- Stage 2:\n - The convolutional block uses three sets of filters of size [64,64,256], \"f\" is 3, \"s\" is 1 and the block is \"a\".\n - The 2 identity blocks use three sets of filters of size [64,64,256], \"f\" is 3 and the blocks are \"b\" and \"c\".\n- Stage 3:\n - The convolutional block uses three sets of filters of size [128,128,512], \"f\" is 3, \"s\" is 2 and the block is \"a\".\n - The 3 identity blocks use three sets of filters of size [128,128,512], \"f\" is 3 and the blocks are \"b\", \"c\" and \"d\".\n- Stage 4:\n - The convolutional block uses three sets of filters of size [256, 256, 1024], \"f\" is 3, \"s\" is 2 and the block is \"a\".\n - The 5 identity blocks use three sets of filters of size [256, 256, 1024], \"f\" is 3 and the blocks are \"b\", \"c\", \"d\", \"e\" and \"f\".\n- Stage 5:\n - The convolutional block uses three sets of filters of size [512, 512, 2048], \"f\" is 3, \"s\" is 2 and the block is \"a\".\n - The 2 identity blocks use three sets of filters of size [512, 512, 2048], \"f\" is 3 and the blocks are \"b\" and \"c\".\n- The 2D Average Pooling uses a window of shape (2,2) and its name is \"avg_pool\".\n- The 'flatten' layer doesn't have any hyperparameters or name.\n- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. 
Its name should be `'fc' + str(classes)`.\n\n**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above. \n\nYou'll need to use this function: \n- Average pooling [see reference](https://keras.io/layers/pooling/#averagepooling2d)\n\nHere are some other functions we used in the code below:\n- Conv2D: [See reference](https://keras.io/layers/convolutional/#conv2d)\n- BatchNorm: [See reference](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))\n- Zero padding: [See reference](https://keras.io/layers/convolutional/#zeropadding2d)\n- Max pooling: [See reference](https://keras.io/layers/pooling/#maxpooling2d)\n- Fully connected layer: [See reference](https://keras.io/layers/core/#dense)\n- Addition: [See reference](https://keras.io/layers/merge/#add)", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: ResNet50\n\ndef ResNet50(input_shape = (64, 64, 3), classes = 6):\n \"\"\"\n Implementation of the popular ResNet50 the following architecture:\n CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3\n -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER\n\n Arguments:\n input_shape -- shape of the images of the dataset\n classes -- integer, number of classes\n\n Returns:\n model -- a Model() instance in Keras\n \"\"\"\n \n # Define the input as a tensor with shape input_shape\n X_input = Input(input_shape)\n\n \n # Zero-Padding\n X = ZeroPadding2D((3, 3))(X_input)\n \n # Stage 1\n X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)\n X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)\n X = Activation('relu')(X)\n X = MaxPooling2D((3, 3), strides=(2, 2))(X)\n\n # Stage 2\n X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)\n X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')\n X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')\n\n ### START CODE HERE ###\n\n # Stage 3 (≈4 lines)\n X = convolutional_block(X, f = 3, filters = [128, 128, 512], stage = 3, block='a', s = 2)\n X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')\n X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')\n X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')\n\n # Stage 4 (≈6 lines)\n X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2)\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')\n X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')\n\n # Stage 5 (≈3 lines)\n X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2)\n X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')\n X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')\n\n # AVGPOOL (≈1 line). 
Use \"X = AveragePooling2D(...)(X)\"\n X = AveragePooling2D(pool_size=(2, 2), name = 'avg_pool')(X)\n \n ### END CODE HERE ###\n\n # output layer\n X = Flatten()(X)\n X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)\n \n \n # Create model\n model = Model(inputs = X_input, outputs = X, name='ResNet50')\n\n return model", "_____no_output_____" ] ], [ [ "Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.", "_____no_output_____" ] ], [ [ "model = ResNet50(input_shape = (64, 64, 3), classes = 6)", "_____no_output_____" ] ], [ [ "As seen in the Keras Tutorial Notebook, prior training a model, you need to configure the learning process by compiling the model.", "_____no_output_____" ] ], [ [ "model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])", "_____no_output_____" ] ], [ [ "The model is now ready to be trained. The only thing you need is a dataset.", "_____no_output_____" ], [ "Let's load the SIGNS Dataset.\n\n<img src=\"images/signs_data_kiank.png\" style=\"width:450px;height:250px;\">\n<caption><center> <u> <font color='purple'> **Figure 6** </u><font color='purple'> : **SIGNS dataset** </center></caption>\n", "_____no_output_____" ] ], [ [ "X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()\n\n# Normalize image vectors\nX_train = X_train_orig/255.\nX_test = X_test_orig/255.\n\n# Convert training and test labels to one hot matrices\nY_train = convert_to_one_hot(Y_train_orig, 6).T\nY_test = convert_to_one_hot(Y_test_orig, 6).T\n\nprint (\"number of training examples = \" + str(X_train.shape[0]))\nprint (\"number of test examples = \" + str(X_test.shape[0]))\nprint (\"X_train shape: \" + str(X_train.shape))\nprint (\"Y_train shape: \" + str(Y_train.shape))\nprint (\"X_test shape: \" + str(X_test.shape))\nprint (\"Y_test shape: \" + str(Y_test.shape))", "number of training examples = 1080\nnumber of test examples = 120\nX_train shape: (1080, 64, 64, 3)\nY_train shape: (1080, 6)\nX_test shape: (120, 64, 64, 3)\nY_test shape: (120, 6)\n" ] ], [ [ "Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch. 
", "_____no_output_____" ] ], [ [ "model.fit(X_train, Y_train, epochs = 2, batch_size = 32)", "Epoch 1/2\n1080/1080 [==============================] - 265s - loss: 2.5033 - acc: 0.3380 \nEpoch 2/2\n1080/1080 [==============================] - 259s - loss: 1.3300 - acc: 0.6204 \n" ] ], [ [ "**Expected Output**:\n\n<table>\n <tr>\n <td>\n ** Epoch 1/2**\n </td>\n <td>\n loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.\n </td>\n </tr>\n <tr>\n <td>\n ** Epoch 2/2**\n </td>\n <td>\n loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.\n </td>\n </tr>\n\n</table>", "_____no_output_____" ], [ "Let's see how this model (trained on only two epochs) performs on the test set.", "_____no_output_____" ] ], [ [ "preds = model.evaluate(X_test, Y_test)\nprint (\"Loss = \" + str(preds[0]))\nprint (\"Test Accuracy = \" + str(preds[1]))", "120/120 [==============================] - 9s \nLoss = 13.2004664103\nTest Accuracy = 0.166666667163\n" ] ], [ [ "**Expected Output**:\n\n<table>\n <tr>\n <td>\n **Test Accuracy**\n </td>\n <td>\n between 0.16 and 0.25\n </td>\n </tr>\n\n</table>", "_____no_output_____" ], [ "For the purpose of this assignment, we've asked you to train the model for just two epochs. You can see that it achieves poor performances. Please go ahead and submit your assignment; to check correctness, the online grader will run your code only for a small number of epochs as well.", "_____no_output_____" ], [ "After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU. \n\nUsing a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.", "_____no_output_____" ] ], [ [ "model = load_model('ResNet50.h5') ", "_____no_output_____" ], [ "preds = model.evaluate(X_test, Y_test)\nprint (\"Loss = \" + str(preds[0]))\nprint (\"Test Accuracy = \" + str(preds[1]))", "120/120 [==============================] - 9s \nLoss = 0.530178320408\nTest Accuracy = 0.866666662693\n" ] ], [ [ "ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to perform state-of-the-art accuracy.\n\nCongratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system! ", "_____no_output_____" ], [ "## 4 - Test on your own image (Optional/Ungraded)", "_____no_output_____" ], [ "If you wish, you can also take a picture of your own hand and see the output of the model. To do this:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Write your image's name in the following code\n 4. Run the code and check if the algorithm is right! 
", "_____no_output_____" ] ], [ [ "img_path = 'images/my_image.jpg'\nimg = image.load_img(img_path, target_size=(64, 64))\nx = image.img_to_array(img)\nx = np.expand_dims(x, axis=0)\nx = x/255.0\nprint('Input image shape:', x.shape)\nmy_image = scipy.misc.imread(img_path)\nimshow(my_image)\nprint(\"class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = \")\nprint(model.predict(x))", "Input image shape: (1, 64, 64, 3)\nclass prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = \n[[ 3.41876671e-06 2.77412561e-04 9.99522924e-01 1.98842812e-07\n 1.95619068e-04 4.11686671e-07]]\n" ] ], [ [ "You can also print a summary of your model by running the following code.", "_____no_output_____" ] ], [ [ "model.summary()", "____________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n====================================================================================================\ninput_1 (InputLayer) (None, 64, 64, 3) 0 \n____________________________________________________________________________________________________\nzero_padding2d_1 (ZeroPadding2D) (None, 70, 70, 3) 0 input_1[0][0] \n____________________________________________________________________________________________________\nconv1 (Conv2D) (None, 32, 32, 64) 9472 zero_padding2d_1[0][0] \n____________________________________________________________________________________________________\nbn_conv1 (BatchNormalization) (None, 32, 32, 64) 256 conv1[0][0] \n____________________________________________________________________________________________________\nactivation_4 (Activation) (None, 32, 32, 64) 0 bn_conv1[0][0] \n____________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 15, 15, 64) 0 activation_4[0][0] \n____________________________________________________________________________________________________\nres2a_branch2a (Conv2D) (None, 15, 15, 64) 4160 max_pooling2d_1[0][0] \n____________________________________________________________________________________________________\nbn2a_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2a_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_5 (Activation) (None, 15, 15, 64) 0 bn2a_branch2a[0][0] \n____________________________________________________________________________________________________\nres2a_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_5[0][0] \n____________________________________________________________________________________________________\nbn2a_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2a_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_6 (Activation) (None, 15, 15, 64) 0 bn2a_branch2b[0][0] \n____________________________________________________________________________________________________\nres2a_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_6[0][0] \n____________________________________________________________________________________________________\nres2a_branch1 (Conv2D) (None, 15, 15, 256) 16640 max_pooling2d_1[0][0] \n____________________________________________________________________________________________________\nbn2a_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2a_branch2c[0][0] 
\n____________________________________________________________________________________________________\nbn2a_branch1 (BatchNormalization (None, 15, 15, 256) 1024 res2a_branch1[0][0] \n____________________________________________________________________________________________________\nadd_2 (Add) (None, 15, 15, 256) 0 bn2a_branch2c[0][0] \n bn2a_branch1[0][0] \n____________________________________________________________________________________________________\nactivation_7 (Activation) (None, 15, 15, 256) 0 add_2[0][0] \n____________________________________________________________________________________________________\nres2b_branch2a (Conv2D) (None, 15, 15, 64) 16448 activation_7[0][0] \n____________________________________________________________________________________________________\nbn2b_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2b_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_8 (Activation) (None, 15, 15, 64) 0 bn2b_branch2a[0][0] \n____________________________________________________________________________________________________\nres2b_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_8[0][0] \n____________________________________________________________________________________________________\nbn2b_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2b_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_9 (Activation) (None, 15, 15, 64) 0 bn2b_branch2b[0][0] \n____________________________________________________________________________________________________\nres2b_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_9[0][0] \n____________________________________________________________________________________________________\nbn2b_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2b_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_3 (Add) (None, 15, 15, 256) 0 bn2b_branch2c[0][0] \n activation_7[0][0] \n____________________________________________________________________________________________________\nactivation_10 (Activation) (None, 15, 15, 256) 0 add_3[0][0] \n____________________________________________________________________________________________________\nres2c_branch2a (Conv2D) (None, 15, 15, 64) 16448 activation_10[0][0] \n____________________________________________________________________________________________________\nbn2c_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2c_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_11 (Activation) (None, 15, 15, 64) 0 bn2c_branch2a[0][0] \n____________________________________________________________________________________________________\nres2c_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_11[0][0] \n____________________________________________________________________________________________________\nbn2c_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2c_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_12 (Activation) (None, 15, 15, 64) 0 bn2c_branch2b[0][0] \n____________________________________________________________________________________________________\nres2c_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_12[0][0] 
\n____________________________________________________________________________________________________\nbn2c_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2c_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_4 (Add) (None, 15, 15, 256) 0 bn2c_branch2c[0][0] \n activation_10[0][0] \n____________________________________________________________________________________________________\nactivation_13 (Activation) (None, 15, 15, 256) 0 add_4[0][0] \n____________________________________________________________________________________________________\nres3a_branch2a (Conv2D) (None, 8, 8, 128) 32896 activation_13[0][0] \n____________________________________________________________________________________________________\nbn3a_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3a_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_14 (Activation) (None, 8, 8, 128) 0 bn3a_branch2a[0][0] \n____________________________________________________________________________________________________\nres3a_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_14[0][0] \n____________________________________________________________________________________________________\nbn3a_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3a_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_15 (Activation) (None, 8, 8, 128) 0 bn3a_branch2b[0][0] \n____________________________________________________________________________________________________\nres3a_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_15[0][0] \n____________________________________________________________________________________________________\nres3a_branch1 (Conv2D) (None, 8, 8, 512) 131584 activation_13[0][0] \n____________________________________________________________________________________________________\nbn3a_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3a_branch2c[0][0] \n____________________________________________________________________________________________________\nbn3a_branch1 (BatchNormalization (None, 8, 8, 512) 2048 res3a_branch1[0][0] \n____________________________________________________________________________________________________\nadd_5 (Add) (None, 8, 8, 512) 0 bn3a_branch2c[0][0] \n bn3a_branch1[0][0] \n____________________________________________________________________________________________________\nactivation_16 (Activation) (None, 8, 8, 512) 0 add_5[0][0] \n____________________________________________________________________________________________________\nres3b_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_16[0][0] \n____________________________________________________________________________________________________\nbn3b_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3b_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_17 (Activation) (None, 8, 8, 128) 0 bn3b_branch2a[0][0] \n____________________________________________________________________________________________________\nres3b_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_17[0][0] \n____________________________________________________________________________________________________\nbn3b_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3b_branch2b[0][0] 
\n____________________________________________________________________________________________________\nactivation_18 (Activation) (None, 8, 8, 128) 0 bn3b_branch2b[0][0] \n____________________________________________________________________________________________________\nres3b_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_18[0][0] \n____________________________________________________________________________________________________\nbn3b_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3b_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_6 (Add) (None, 8, 8, 512) 0 bn3b_branch2c[0][0] \n activation_16[0][0] \n____________________________________________________________________________________________________\nactivation_19 (Activation) (None, 8, 8, 512) 0 add_6[0][0] \n____________________________________________________________________________________________________\nres3c_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_19[0][0] \n____________________________________________________________________________________________________\nbn3c_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3c_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_20 (Activation) (None, 8, 8, 128) 0 bn3c_branch2a[0][0] \n____________________________________________________________________________________________________\nres3c_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_20[0][0] \n____________________________________________________________________________________________________\nbn3c_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3c_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_21 (Activation) (None, 8, 8, 128) 0 bn3c_branch2b[0][0] \n____________________________________________________________________________________________________\nres3c_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_21[0][0] \n____________________________________________________________________________________________________\nbn3c_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3c_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_7 (Add) (None, 8, 8, 512) 0 bn3c_branch2c[0][0] \n activation_19[0][0] \n____________________________________________________________________________________________________\nactivation_22 (Activation) (None, 8, 8, 512) 0 add_7[0][0] \n____________________________________________________________________________________________________\nres3d_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_22[0][0] \n____________________________________________________________________________________________________\nbn3d_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3d_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_23 (Activation) (None, 8, 8, 128) 0 bn3d_branch2a[0][0] \n____________________________________________________________________________________________________\nres3d_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_23[0][0] \n____________________________________________________________________________________________________\nbn3d_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3d_branch2b[0][0] 
\n____________________________________________________________________________________________________\nactivation_24 (Activation) (None, 8, 8, 128) 0 bn3d_branch2b[0][0] \n____________________________________________________________________________________________________\nres3d_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_24[0][0] \n____________________________________________________________________________________________________\nbn3d_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3d_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_8 (Add) (None, 8, 8, 512) 0 bn3d_branch2c[0][0] \n activation_22[0][0] \n____________________________________________________________________________________________________\nactivation_25 (Activation) (None, 8, 8, 512) 0 add_8[0][0] \n____________________________________________________________________________________________________\nres4a_branch2a (Conv2D) (None, 4, 4, 256) 131328 activation_25[0][0] \n____________________________________________________________________________________________________\nbn4a_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4a_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_26 (Activation) (None, 4, 4, 256) 0 bn4a_branch2a[0][0] \n____________________________________________________________________________________________________\nres4a_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_26[0][0] \n____________________________________________________________________________________________________\nbn4a_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4a_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_27 (Activation) (None, 4, 4, 256) 0 bn4a_branch2b[0][0] \n____________________________________________________________________________________________________\nres4a_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_27[0][0] \n____________________________________________________________________________________________________\nres4a_branch1 (Conv2D) (None, 4, 4, 1024) 525312 activation_25[0][0] \n____________________________________________________________________________________________________\nbn4a_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4a_branch2c[0][0] \n____________________________________________________________________________________________________\nbn4a_branch1 (BatchNormalization (None, 4, 4, 1024) 4096 res4a_branch1[0][0] \n____________________________________________________________________________________________________\nadd_9 (Add) (None, 4, 4, 1024) 0 bn4a_branch2c[0][0] \n bn4a_branch1[0][0] \n____________________________________________________________________________________________________\nactivation_28 (Activation) (None, 4, 4, 1024) 0 add_9[0][0] \n____________________________________________________________________________________________________\nres4b_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_28[0][0] \n____________________________________________________________________________________________________\nbn4b_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4b_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_29 (Activation) (None, 4, 4, 256) 0 bn4b_branch2a[0][0] 
\n____________________________________________________________________________________________________\nres4b_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_29[0][0] \n____________________________________________________________________________________________________\nbn4b_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4b_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_30 (Activation) (None, 4, 4, 256) 0 bn4b_branch2b[0][0] \n____________________________________________________________________________________________________\nres4b_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_30[0][0] \n____________________________________________________________________________________________________\nbn4b_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4b_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_10 (Add) (None, 4, 4, 1024) 0 bn4b_branch2c[0][0] \n activation_28[0][0] \n____________________________________________________________________________________________________\nactivation_31 (Activation) (None, 4, 4, 1024) 0 add_10[0][0] \n____________________________________________________________________________________________________\nres4c_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_31[0][0] \n____________________________________________________________________________________________________\nbn4c_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4c_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_32 (Activation) (None, 4, 4, 256) 0 bn4c_branch2a[0][0] \n____________________________________________________________________________________________________\nres4c_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_32[0][0] \n____________________________________________________________________________________________________\nbn4c_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4c_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_33 (Activation) (None, 4, 4, 256) 0 bn4c_branch2b[0][0] \n____________________________________________________________________________________________________\nres4c_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_33[0][0] \n____________________________________________________________________________________________________\nbn4c_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4c_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_11 (Add) (None, 4, 4, 1024) 0 bn4c_branch2c[0][0] \n activation_31[0][0] \n____________________________________________________________________________________________________\nactivation_34 (Activation) (None, 4, 4, 1024) 0 add_11[0][0] \n____________________________________________________________________________________________________\nres4d_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_34[0][0] \n____________________________________________________________________________________________________\nbn4d_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4d_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_35 (Activation) (None, 4, 4, 256) 0 bn4d_branch2a[0][0] 
\n____________________________________________________________________________________________________\nres4d_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_35[0][0] \n____________________________________________________________________________________________________\nbn4d_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4d_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_36 (Activation) (None, 4, 4, 256) 0 bn4d_branch2b[0][0] \n____________________________________________________________________________________________________\nres4d_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_36[0][0] \n____________________________________________________________________________________________________\nbn4d_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4d_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_12 (Add) (None, 4, 4, 1024) 0 bn4d_branch2c[0][0] \n activation_34[0][0] \n____________________________________________________________________________________________________\nactivation_37 (Activation) (None, 4, 4, 1024) 0 add_12[0][0] \n____________________________________________________________________________________________________\nres4e_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_37[0][0] \n____________________________________________________________________________________________________\nbn4e_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4e_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_38 (Activation) (None, 4, 4, 256) 0 bn4e_branch2a[0][0] \n____________________________________________________________________________________________________\nres4e_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_38[0][0] \n____________________________________________________________________________________________________\nbn4e_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4e_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_39 (Activation) (None, 4, 4, 256) 0 bn4e_branch2b[0][0] \n____________________________________________________________________________________________________\nres4e_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_39[0][0] \n____________________________________________________________________________________________________\nbn4e_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4e_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_13 (Add) (None, 4, 4, 1024) 0 bn4e_branch2c[0][0] \n activation_37[0][0] \n____________________________________________________________________________________________________\nactivation_40 (Activation) (None, 4, 4, 1024) 0 add_13[0][0] \n____________________________________________________________________________________________________\nres4f_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_40[0][0] \n____________________________________________________________________________________________________\nbn4f_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4f_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_41 (Activation) (None, 4, 4, 256) 0 bn4f_branch2a[0][0] 
\n____________________________________________________________________________________________________\nres4f_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_41[0][0] \n____________________________________________________________________________________________________\nbn4f_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4f_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_42 (Activation) (None, 4, 4, 256) 0 bn4f_branch2b[0][0] \n____________________________________________________________________________________________________\nres4f_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_42[0][0] \n____________________________________________________________________________________________________\nbn4f_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4f_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_14 (Add) (None, 4, 4, 1024) 0 bn4f_branch2c[0][0] \n activation_40[0][0] \n____________________________________________________________________________________________________\nactivation_43 (Activation) (None, 4, 4, 1024) 0 add_14[0][0] \n____________________________________________________________________________________________________\nres5a_branch2a (Conv2D) (None, 2, 2, 512) 524800 activation_43[0][0] \n____________________________________________________________________________________________________\nbn5a_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5a_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_44 (Activation) (None, 2, 2, 512) 0 bn5a_branch2a[0][0] \n____________________________________________________________________________________________________\nres5a_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_44[0][0] \n____________________________________________________________________________________________________\nbn5a_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5a_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_45 (Activation) (None, 2, 2, 512) 0 bn5a_branch2b[0][0] \n____________________________________________________________________________________________________\nres5a_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_45[0][0] \n____________________________________________________________________________________________________\nres5a_branch1 (Conv2D) (None, 2, 2, 2048) 2099200 activation_43[0][0] \n____________________________________________________________________________________________________\nbn5a_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5a_branch2c[0][0] \n____________________________________________________________________________________________________\nbn5a_branch1 (BatchNormalization (None, 2, 2, 2048) 8192 res5a_branch1[0][0] \n____________________________________________________________________________________________________\nadd_15 (Add) (None, 2, 2, 2048) 0 bn5a_branch2c[0][0] \n bn5a_branch1[0][0] \n____________________________________________________________________________________________________\nactivation_46 (Activation) (None, 2, 2, 2048) 0 add_15[0][0] \n____________________________________________________________________________________________________\nres5b_branch2a (Conv2D) (None, 2, 2, 512) 1049088 activation_46[0][0] 
\n____________________________________________________________________________________________________\nbn5b_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5b_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_47 (Activation) (None, 2, 2, 512) 0 bn5b_branch2a[0][0] \n____________________________________________________________________________________________________\nres5b_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_47[0][0] \n____________________________________________________________________________________________________\nbn5b_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5b_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_48 (Activation) (None, 2, 2, 512) 0 bn5b_branch2b[0][0] \n____________________________________________________________________________________________________\nres5b_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_48[0][0] \n____________________________________________________________________________________________________\nbn5b_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5b_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_16 (Add) (None, 2, 2, 2048) 0 bn5b_branch2c[0][0] \n activation_46[0][0] \n____________________________________________________________________________________________________\nactivation_49 (Activation) (None, 2, 2, 2048) 0 add_16[0][0] \n____________________________________________________________________________________________________\nres5c_branch2a (Conv2D) (None, 2, 2, 512) 1049088 activation_49[0][0] \n____________________________________________________________________________________________________\nbn5c_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5c_branch2a[0][0] \n____________________________________________________________________________________________________\nactivation_50 (Activation) (None, 2, 2, 512) 0 bn5c_branch2a[0][0] \n____________________________________________________________________________________________________\nres5c_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_50[0][0] \n____________________________________________________________________________________________________\nbn5c_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5c_branch2b[0][0] \n____________________________________________________________________________________________________\nactivation_51 (Activation) (None, 2, 2, 512) 0 bn5c_branch2b[0][0] \n____________________________________________________________________________________________________\nres5c_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_51[0][0] \n____________________________________________________________________________________________________\nbn5c_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5c_branch2c[0][0] \n____________________________________________________________________________________________________\nadd_17 (Add) (None, 2, 2, 2048) 0 bn5c_branch2c[0][0] \n activation_49[0][0] \n____________________________________________________________________________________________________\nactivation_52 (Activation) (None, 2, 2, 2048) 0 add_17[0][0] \n____________________________________________________________________________________________________\navg_pool (AveragePooling2D) (None, 1, 1, 2048) 0 activation_52[0][0] 
\n____________________________________________________________________________________________________\nflatten_1 (Flatten) (None, 2048) 0 avg_pool[0][0] \n____________________________________________________________________________________________________\nfc6 (Dense) (None, 6) 12294 flatten_1[0][0] \n====================================================================================================\nTotal params: 23,600,006\nTrainable params: 23,546,886\nNon-trainable params: 53,120\n____________________________________________________________________________________________________\n" ] ], [ [ "Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to \"File -> Open...-> model.png\".", "_____no_output_____" ] ], [ [ "plot_model(model, to_file='model.png')\nSVG(model_to_dot(model).create(prog='dot', format='svg'))", "_____no_output_____" ] ], [ [ "## What you should remember\n- Very deep \"plain\" networks don't work in practice because they are hard to train due to vanishing gradients. \n- The skip-connections help to address the Vanishing Gradient problem. They also make it easy for a ResNet block to learn an identity function. \n- There are two main types of blocks: The identity block and the convolutional block. \n- Very deep Residual Networks are built by stacking these blocks together.", "_____no_output_____" ], [ "### References \n\nThis notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the GitHub repository of Francois Chollet: \n\n- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)\n- Francois Chollet's GitHub repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7db0d9a38edae8a3e2eabc00afff70aa1bd1d96
224,319
ipynb
Jupyter Notebook
GroupMeeting12_5_16.ipynb
davidthomas5412/PanglossNotebooks
719a3b9a5d0e121f0e9bc2a92a968abf7719790f
[ "MIT" ]
null
null
null
GroupMeeting12_5_16.ipynb
davidthomas5412/PanglossNotebooks
719a3b9a5d0e121f0e9bc2a92a968abf7719790f
[ "MIT" ]
2
2016-12-13T02:05:57.000Z
2017-01-21T02:16:27.000Z
GroupMeeting12_5_16.ipynb
davidthomas5412/PanglossNotebooks
719a3b9a5d0e121f0e9bc2a92a968abf7719790f
[ "MIT" ]
null
null
null
1,437.942308
221,226
0.950432
[ [ [ "# Plotting is Back", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nfrom massinference.plot import Limits\nfrom massinference.map import KappaMap, ShearMap\nimport matplotlib.pyplot as plt\n\nlimits = Limits(1.8, 1.65, -2.0, -1.9)\nplot_config = KappaMap.default().plot(limits=limits)\nShearMap.default().plot(plot_config=plot_config)", "_____no_output_____" ] ], [ [ "# Rearchitected MassInference Benchmarking", "_____no_output_____" ], [ "## Benchmark\n- 36 square arcmin field\n- 10 source objects / arcmin\n- lightcnoe radius of 4 arcmins\n- no relevance filtering\n- no smooth kappas\n- 4 independent samples\n- no setup/io/etc, timer starts after objects initialized", "_____no_output_____" ], [ "### Pangloss ... 214.165 seconds\n\n### MassInference ... 1.082 seconds", "_____no_output_____" ], [ "# Numpy Performance Hacks", "_____no_output_____" ] ], [ [ "import numpy as np\n\nx = np.random.rand(10**8)\n\n%timeit -n 1 -r 1 np.isnan(x)\n%timeit -n 1 -r 1 np.isnan(np.sum(x))", "1 loop, best of 1: 1.54 s per loop\n1 loop, best of 1: 64.1 ms per loop\n" ], [ "%timeit -n 1 -r 1 np.sum(-x)\n%timeit -n 1 -r 1 (-np.sum(x))", "1 loop, best of 1: 6.02 s per loop\n1 loop, best of 1: 1.51 s per loop\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ] ]
e7db13366544aecf26b20eb093454833e5aee675
9,462
ipynb
Jupyter Notebook
examples/Notebooks/02_ICIAR_2018/transfer_learning.ipynb
rahulremanan/HIMA
1eb9b561da90a0f986c7d50c662af72feffd1100
[ "MIT" ]
27
2017-10-26T19:03:27.000Z
2021-03-16T16:26:32.000Z
examples/Notebooks/02_ICIAR_2018/transfer_learning.ipynb
rahulremanan/HIMA
1eb9b561da90a0f986c7d50c662af72feffd1100
[ "MIT" ]
null
null
null
examples/Notebooks/02_ICIAR_2018/transfer_learning.ipynb
rahulremanan/HIMA
1eb9b561da90a0f986c7d50c662af72feffd1100
[ "MIT" ]
12
2017-12-28T00:04:03.000Z
2019-02-13T22:30:09.000Z
23.714286
114
0.534982
[ [ [ "import json\nfrom keras.models import model_from_json\nfrom keras.preprocessing import image\nfrom keras.applications.inception_v3 import preprocess_input\nfrom keras.models import model_from_json\nfrom keras.optimizers import SGD, RMSprop, Adagrad", "Using TensorFlow backend.\n" ], [ "with open('/home/rahul/ICIAR/model_28.json') as json_file:\n model_json = json_file.read()\nmodel = model_from_json(model_json)", "_____no_output_____" ], [ "model.load_weights('/home/rahul/ICIAR/trained_weights_28.model')", "_____no_output_____" ], [ "model.layers.pop()", "_____no_output_____" ], [ "new_model = model.output", "_____no_output_____" ], [ "from keras.layers import Dense, GlobalAveragePooling2D, Dropout, BatchNormalization\nFC_SIZE = 4096\ndropout = 0.5", "_____no_output_____" ], [ "x1 = Dense(FC_SIZE, activation='relu', name=\"fc_dense1\")(new_model)\nx1 = Dropout(dropout, name = 'dropout1')(x1)\nx1 = BatchNormalization(name=\"fc_batch_norm1\")(x1)\nx1 = Dense(FC_SIZE, activation='relu', name=\"fc_dense2\")(x1)\nx1 = Dropout(dropout, name = 'dropout2')(x1)", "_____no_output_____" ], [ "x2 = Dense(FC_SIZE, activation='relu', name=\"fc_dense3\")(new_model)\nx2 = Dropout(dropout, name = 'dropout3')(x2)\nx2 = BatchNormalization(name=\"fc_batch_norm2\")(x2)\nx2 = Dense(FC_SIZE, activation='relu', name=\"fc_dense4\")(x2)\nx2 = Dropout(dropout, name = 'dropout4')(x2)", "_____no_output_____" ], [ "from keras.layers.merge import concatenate", "_____no_output_____" ], [ "x12 = concatenate([x1, x2], name = 'mixed11')\nx12 = Dropout(dropout, name = 'dropout5')(x12)\nx12 = Dense(FC_SIZE//16, activation='relu', name = 'fc_dense5')(x12)\nx12 = Dropout(dropout, name = 'dropout6')(x12)\nx12 = BatchNormalization(name=\"fc_batch_norm3\")(x12)\nx12 = Dense(FC_SIZE//32, activation='relu', name = 'fc_dense6')(x12)\nx12 = Dropout(dropout, name = 'dropout7')(x12)", "_____no_output_____" ], [ "model.layers.pop()", "_____no_output_____" ], [ "model.layers.pop()", "_____no_output_____" ], [ "model.layers.pop()", "_____no_output_____" ], [ "model.layers.pop()", "_____no_output_____" ], [ "model.layers.pop()", "_____no_output_____" ], [ "model.layers[-1].outbound_nodes = []\nmodel.outputs = [model.layers[-1].output]\nx3 = model.get_layer('mixed10').output\nx3 = GlobalAveragePooling2D( name = 'global_avg_pooling2')(x3)\nx3 = Dense(2048, activation='relu', name = 'fc_dense7')(x3)\nx3 = Dropout(dropout, name = 'dropout8')(x3)\nx3 = BatchNormalization(name=\"fc_batch_norm4\")(x3)\nx3 = Dense(2048, activation='relu', name = 'fc_dense8')(x3)\nx3 = Dropout(dropout, name = 'dropout9')(x3)", "_____no_output_____" ], [ "xout = concatenate([x12, x3], name ='mixed12')\nxout = Dense(FC_SIZE//32, activation='relu', name = 'fc_dense9')(xout)\nxout = Dropout(dropout, name = 'dropout10')(xout)", "_____no_output_____" ], [ "nb_classes =4", "_____no_output_____" ], [ "predictions = Dense(nb_classes, activation='softmax', name='predictions')(xout)", "_____no_output_____" ], [ "from keras.models import Model", "_____no_output_____" ], [ "model_out = Model(inputs=model.input, outputs=predictions)", "_____no_output_____" ], [ "from keras.optimizers import SGD, RMSprop, Adagrad\nsgd = SGD(lr=1e-7, decay=0.5, momentum=1, nesterov=True)\nrms = RMSprop(lr=1e-7, rho=0.9, epsilon=1e-08, decay=0.0)\nada = Adagrad(lr=1e-3, epsilon=1e-08, decay=0.0)\noptimizer = ada", "_____no_output_____" ], [ "model_out.compile(optimizer=optimizer, loss='categorical_crossentropy', \n metrics=['accuracy'])", "_____no_output_____" ], [ "import os", 
"_____no_output_____" ], [ "def save_model(dir_name, name, model):\n file_loc = dir_name\n file_pointer = os.path.join(file_loc+\"//trained\")\n model.save_weights(os.path.join(file_pointer + \"_weights_\"+str(name)+\".model\"))\n \n model_json = model.to_json() # Serialize model to JSON\n with open(os.path.join(file_pointer+\"_config_\"+str(name)+\".json\"), \"w\") as json_file:\n json_file.write(model_json)\n print (\"Saved the trained model weights to: \" + \n str(os.path.join(file_pointer + \"_weights_\"+str(name)+\".model\")))\n print (\"Saved the trained model configuration as a json file to: \" + \n str(os.path.join(file_pointer+\"_config_\"+str(name)+\".json\")))", "_____no_output_____" ], [ "save_model('/home/rahul/', 'model_32', model_out)", "Saved the trained model weights to: /home/rahul///trained_weights_model_32.model\nSaved the trained model configuration as a json file to: /home/rahul///trained_config_model_32.json\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7db21864cfd9fbfc5a9cd574d890aa401a0dbc3
10,640
ipynb
Jupyter Notebook
deep learning/GAN/DCGAN.ipynb
sugatoray/data-science-learning
3d3507778fbd493edbbc706a3a9d35833d6bd77b
[ "Apache-2.0" ]
358
2017-07-31T14:25:39.000Z
2022-03-29T00:12:13.000Z
deep learning/GAN/DCGAN.ipynb
sugatoray/data-science-learning
3d3507778fbd493edbbc706a3a9d35833d6bd77b
[ "Apache-2.0" ]
4
2020-06-23T06:46:48.000Z
2021-10-14T16:25:02.000Z
deep learning/GAN/DCGAN.ipynb
sugatoray/data-science-learning
3d3507778fbd493edbbc706a3a9d35833d6bd77b
[ "Apache-2.0" ]
100
2017-05-13T21:52:00.000Z
2022-03-19T01:02:10.000Z
27.005076
638
0.556297
[ [ [ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Data\" data-toc-modified-id=\"Data-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Data</a></span></li><li><span><a href=\"#Model\" data-toc-modified-id=\"Model-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Model</a></span></li><li><span><a href=\"#Training\" data-toc-modified-id=\"Training-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Training</a></span></li><li><span><a href=\"#Explore-Latent-Space\" data-toc-modified-id=\"Explore-Latent-Space-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Explore Latent Space</a></span></li></ul></div>", "_____no_output_____" ] ], [ [ "import sys\nimport yaml\nimport tensorflow as tf\nimport numpy as np\nimport pandas as pd\nimport functools\nfrom pathlib import Path\nfrom datetime import datetime\nfrom tqdm import tqdm_notebook as tqdm\n\n# Plotting\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib import animation\nplt.rcParams['animation.ffmpeg_path'] = str(Path.home() / \"anaconda3/envs/image-processing/bin/ffmpeg\")\n\n%load_ext autoreload\n%autoreload 2\n\nimport dcgan\nimport gan_utils\nfrom load_data import preprocess_images\nfrom ds_utils.generative_utils import animate_latent_transition, gen_latent_linear, gen_latent_idx\nfrom ds_utils.plot_utils import plot_sample_imgs", "_____no_output_____" ], [ "data_folder = Path.home() / \"Documents/datasets\"", "_____no_output_____" ], [ "# load model config\nwith open('configs/dcgan_celeba_config.yaml', 'r') as f:\n config = yaml.load(f)\nHIDDEN_DIM = config['data']['z_size']\nIMG_SHAPE = config['data']['input_shape']\nBATCH_SIZE = config['training']['batch_size']\nIMG_IS_BW = IMG_SHAPE[2] == 1\nPLOT_IMG_SHAPE = IMG_SHAPE[:2] if IMG_IS_BW else IMG_SHAPE\nconfig", "_____no_output_____" ] ], [ [ "# Data", "_____no_output_____" ] ], [ [ "# load Fashion MNIST dataset\n((X_train, y_train), (X_test, y_test)) = tf.keras.datasets.fashion_mnist.load_data()", "_____no_output_____" ], [ "X_train = preprocess_images(X_train)\nX_test = preprocess_images(X_test)\n\nprint(X_train[0].shape)\nprint(X_train[0].max())\nprint(X_train[0].min())\n\nprint(X_train.shape)\n\nassert X_train[0].shape == tuple(config['data']['input_shape'])", "_____no_output_____" ], [ "train_ds = tf.data.Dataset.from_tensor_slices(X_train).take(5000)\ntest_ds = tf.data.Dataset.from_tensor_slices(X_test).take(256)", "_____no_output_____" ], [ "sys.path.append(\"../\")\nfrom tmp_load_data import load_imgs_tfdataset", "_____no_output_____" ], [ "train_ds = load_imgs_tfdataset(data_folder/'img_align_celeba', '*.jpg', config, 500, zipped=False)\ntest_ds = load_imgs_tfdataset(data_folder/'img_align_celeba', '*.jpg', config, 100, zipped=False)", "_____no_output_____" ] ], [ [ "# Model", "_____no_output_____" ] ], [ [ "# instantiate GAN\ngan = dcgan.DCGan(IMG_SHAPE, config)", "_____no_output_____" ], [ "# test generator\ngenerator_out = gan.generator.predict(np.random.randn(BATCH_SIZE, HIDDEN_DIM))\ngenerator_out.shape", "_____no_output_____" ], [ "# test discriminator\ndiscriminator_out = gan.discriminator.predict(generator_out)\ndiscriminator_out.shape", "_____no_output_____" ], [ "# test gan\ngan.gan.predict(np.random.randn(BATCH_SIZE, HIDDEN_DIM)).max()", "_____no_output_____" ], [ "# plot random generated image\nplt.imshow(gan.generator.predict([np.random.randn(1, HIDDEN_DIM)])[0]\n .reshape(PLOT_IMG_SHAPE), cmap='gray' if IMG_IS_BW else 'jet')\nplt.show()", 
"_____no_output_____" ], [ "gan.generator.summary()", "_____no_output_____" ] ], [ [ "# Training", "_____no_output_____" ] ], [ [ "# setup model directory for checkpoint and tensorboard logs\nmodel_name = \"dcgan_celeba\"\nmodel_dir = Path.home() / \"Documents/models/tf_playground/gan\" / model_name\nmodel_dir.mkdir(exist_ok=True, parents=True)\nexport_dir = model_dir / 'export'\nexport_dir.mkdir(exist_ok=True)\nlog_dir = model_dir / \"logs\" / datetime.now().strftime(\"%Y%m%d-%H%M%S\")", "_____no_output_____" ], [ "nb_epochs = 1000\ngan._train(train_ds=gan.setup_dataset(train_ds),\n validation_ds=gan.setup_dataset(test_ds),\n nb_epochs=nb_epochs,\n log_dir=log_dir,\n checkpoint_dir=export_dir,\n is_tfdataset=True)", "_____no_output_____" ], [ "# export Keras model (.h5)\ngan.generator.save(str(export_dir / 'generator.h5'))\ngan.discriminator.save(str(export_dir / 'discriminator.h5'))", "_____no_output_____" ], [ "# plot generator results\nplot_side = 5\nplot_sample_imgs(lambda x: gan.generator.predict(np.random.randn(plot_side*plot_side, HIDDEN_DIM)), \n img_shape=PLOT_IMG_SHAPE,\n plot_side=plot_side,\n cmap='gray' if IMG_IS_BW else 'jet')", "_____no_output_____" ] ], [ [ "# Explore Latent Space", "_____no_output_____" ] ], [ [ "%matplotlib inline", "_____no_output_____" ], [ "def gen_image_fun(latent_vectors):\n img = gan.generator.predict(latent_vectors)[0].reshape(PLOT_IMG_SHAPE)\n return img", "_____no_output_____" ], [ "img = gen_image_fun(z_s)", "_____no_output_____" ], [ "render_dir = Path.home() / 'Documents/videos/gan' / \"gan_celeba\"\n\nnb_samples = 10\nnb_transition_frames = 10\nnb_frames = min(2000, (nb_samples-1)*nb_transition_frames)\n\n# random list of z vectors\nz_s = np.random.randn(nb_samples, HIDDEN_DIM)\n\nanimate_latent_transition(latent_vectors=z_s, \n gen_image_fun=gen_image_fun,\n gen_latent_fun=lambda z_s, i: gen_latent_linear(z_s, i, nb_transition_frames),\n img_size=PLOT_IMG_SHAPE,\n nb_frames=nb_frames,\n render_dir=render_dir)", "_____no_output_____" ], [ "render_dir = Path.home() / 'Documents/videos/gan' / \"gan_fmnist_test\"\n\nnb_transition_frames = 10\n\n# random list of z vectors\n#rand_idx = np.random.randint(len(X_train))\nz_start = np.random.randn(1, HIDDEN_DIM)\nvals = np.linspace(-1., 1., nb_transition_frames)\n\nfor z_idx in range(20):\n animate_latent_transition(latent_vectors=z_start, \n gen_image_fun=gen_image_fun,\n gen_latent_fun=lambda z_s, i: gen_latent_idx(z_s, i, z_idx, vals),\n img_size=PLOT_IMG_SHAPE,\n nb_frames=nb_transition_frames,\n render_dir=render_dir)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7db2ae41db77aca948cad66a7b5ee37af081f99
46,207
ipynb
Jupyter Notebook
Python/Data Science/titanic kaggle/predict_survival.ipynb
vbsteja/code
0c8f4dc579f5de21b6c55fe6e65c3c8eb5473687
[ "Apache-2.0" ]
null
null
null
Python/Data Science/titanic kaggle/predict_survival.ipynb
vbsteja/code
0c8f4dc579f5de21b6c55fe6e65c3c8eb5473687
[ "Apache-2.0" ]
null
null
null
Python/Data Science/titanic kaggle/predict_survival.ipynb
vbsteja/code
0c8f4dc579f5de21b6c55fe6e65c3c8eb5473687
[ "Apache-2.0" ]
null
null
null
37.08427
99
0.287727
[ [ [ "import numpy as np\nimport pandas as pd\nfrom sklearn import preprocessing", "_____no_output_____" ], [ "df = pd.read_csv(\"train.csv\")\ndf.loc[df[\"Sex\"] == 'female',\"Sex\"] = 0\ndf.loc[df[\"Sex\"] == 'male',\"Sex\"] = 1\nprint(len(df))\ndf = df.fillna(value=\"Not available\")\ndf = df.drop(\"Cabin\")\ndf = df.drop(\"\")\ndf_train = df[0:600]\ndf_cross_validate = df[601:]\ndf_train\ndf_test = pd.read_csv(\"test.csv\")", "891\n" ], [ "Features = [\"PassengerId\",\"Survived\",\"Pclass\",\"Sex\",\"Age\",\"Fare\",\"Embarked\"]\nPassanger_data = df[\"PassengerId\"]\nSurvived_data = df[\"Survived\"]\nPclass_data = df[\"Pclass\"]\nSex_data = df[\"Sex\"]\nAge_data = df[\"Age\"]\nFare_data = df[\"Fare\"]\nEmbarked_data = df[\"Embarked\"]", "_____no_output_____" ], [ "feautre_list = [\"PassengerId\",\"Survived\",\"Sex\",\"Age\",\"Fare\"]\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e7db400ae9d435d93818245dc319550637299e33
23,476
ipynb
Jupyter Notebook
inference_without_finetune_kogpt_trinity.ipynb
snoop2head/KoGPT-Joong-2
118d830231d3afc2f59e02ffd2439ab5cc1d10fd
[ "MIT" ]
7
2021-11-18T06:58:54.000Z
2022-02-05T10:59:33.000Z
inference_without_finetune_kogpt_trinity.ipynb
snoop2head/KoGPT-Joong-2
118d830231d3afc2f59e02ffd2439ab5cc1d10fd
[ "MIT" ]
1
2021-12-09T03:12:31.000Z
2021-12-09T03:12:31.000Z
inference_without_finetune_kogpt_trinity.ipynb
snoop2head/KoGPT-Joong-2
118d830231d3afc2f59e02ffd2439ab5cc1d10fd
[ "MIT" ]
1
2021-12-02T09:23:22.000Z
2021-12-02T09:23:22.000Z
42.375451
438
0.45719
[ [ [ "# References\nKoGPT3 shares the same structure as KoGPT2. \n\n- [KoGPT2-Transformers huggingface 활용 예시](https://github.com/taeminlee/KoGPT2-Transformers)", "_____no_output_____" ] ], [ [ "from transformers import GPT2Tokenizer, PreTrainedTokenizerFast\n\nmodel_dir = \"skt/ko-gpt-trinity-1.2B-v0.5\"\n\n# Load the Tokenizer: \"Fast\" means that the tokenizer code is written in Rust Lang\ntokenizer = PreTrainedTokenizerFast.from_pretrained(\n model_dir,\n bos_token=\"<s>\",\n eos_token=\"</s>\",\n unk_token=\"<unk>\",\n pad_token=\"<pad>\",\n mask_token=\"<mask>\",\n)", "The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. \nThe tokenizer class you load from this checkpoint is 'GPT2Tokenizer'. \nThe class this function is called from is 'PreTrainedTokenizerFast'.\n" ], [ "from transformers import GPT2LMHeadModel\n\n# designate the model's name registered on huggingface: https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5\nmodel_dir = \"skt/ko-gpt-trinity-1.2B-v0.5\"\n\n# Attach Language model Head to the pretrained GPT model\nmodel = GPT2LMHeadModel.from_pretrained(model_dir) # KoGPT3 shares the same structure as KoGPT2. ", "_____no_output_____" ], [ "import torch\n# move the model to device\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel = model.to(device)\nmodel.eval()", "_____no_output_____" ], [ "# encode the sample sentence\nsample = \"이 편지는 영국에서 최초로 시작되어 일년에 한바퀴 돌면서 받는 사람에게 행운을 주었고 지금은 당신에게로 옮겨진 이 편지는\"\nprint(tokenizer.encode(sample))", "[29976, 30296, 30248, 50056, 33792, 30300, 30318, 30002, 37783, 29991, 44631, 30247, 30083, 31755, 35027, 41144, 25772, 30003, 34224, 32886, 47133, 21956, 34040, 26412, 29976, 30296, 30248]\n" ], [ "import torch\n\ntorch.manual_seed(42)\n\n# encode the sample sentence\ninput_ids = tokenizer.encode(sample, add_special_tokens=False, return_tensors=\"pt\")\n\n# generate output sequence from the given encoded input sequence\noutput_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=150, num_return_sequences=3)\n\n# decode the output sequence and print its outcome\nfor index, generated_sequence in enumerate(output_sequences):\n generated_sequence = generated_sequence.tolist()\n decoded_sequence = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)\n print(f\"Generated Sequence | Number {index} : {decoded_sequence}\")\n print()\n", "Generated Sequence | Number 0 : 이 편지는 영국에서 최초로 시작되어 일년에 한바퀴 돌면서 받는 사람에게 행운을 주었고 지금은 당신에게로 옮겨진 이 편지는 이 지구 상의 모든 사람에게 복을 주는 것이 된다. 이 편지는 그 내용이 진실되고 당신이 다른 사람에게 복을 주기를 바라는 마음이 담겨져 있다. 그리고 당신의 친구들은 이 편지를 읽고 당신의 행운을 기원해주며 당신에게 다시 연락한다. 마지막으로 당신의 친구들은 이 편지를 읽을 때마다 복을 받길 기원하며 당신에게 행운을 가져다 줄 것이다. 이 우체통에 넣은 편지는 당신이 한밤중에 받아도 좋을 것이다. 그 편지 안에는 당신이 받을 것으로 예상하는 것에 대한 목록이 나와 있다. 그리고 당신이 생각하고 있는 행운에 대한 목록이 있다. 당신에게 좋은 일이나 불행한 일을 예상해보기 바란다.\n 행운 편지지와 함께 당신이 예상\n\nGenerated Sequence | Number 1 : 이 편지는 영국에서 최초로 시작되어 일년에 한바퀴 돌면서 받는 사람에게 행운을 주었고 지금은 당신에게로 옮겨진 이 편지는 지금 당신의 가장 가까운 사람에게 당신의 가장 진실한 사랑을 전하는 편지입니다. \n \n (\n 내가 가장 아끼는 사람은... ) \n <unk> 당신이 나의 삶의 가장 큰 행복이랍니다. \n <unk> 당신이 나의 가장 사랑하는 사람이랍니다. \n <unk> 나는 당신에게 한평생 좋은 친구가 될 것입니다. \n <unk> 당신은 나와 우정을 나누면 좋은 친구이고 \n <unk> 나는 당신에게 일생의 동반자가 될 것입니다. \n <unk> 당신은 나와 행복할 것이고 나와 슬픔을 나누면 행복할 것입니다. \n <unk> 당신은 나와 하나가 될 것이고 당신이 바로 나입니다. \n <unk> 당신은 나의 가장 소중한 친구입니다. 
\n <unk> 당신은 나의\n\nGenerated Sequence | Number 2 : 이 편지는 영국에서 최초로 시작되어 일년에 한바퀴 돌면서 받는 사람에게 행운을 주었고 지금은 당신에게로 옮겨진 이 편지는 세계에서 가장 많은 사람이 읽어주는 이 편지 중에서 최고의 베스트 셀러가 되었다.\n '당신이 원하는 모든 것은 당신 안에 있소. 당신의 소망을 들어주는 사람이 있다는 것이 얼마나 행운인지 아나요?'\n '당신의 마음을 이해하게 된다면 그것만으로도 큰 기쁨이오. 당신은 당신 자신을 위해 무엇을 해야 할까요?\"\n '당신이 원하는 무엇이든 당신이 원하는 것을 하는 사람이 있으면 당신도 그것을 원하오. 당신이 지금 무엇을 해야 하는지는 결코 생각하지도 않으면서도 그것을 바라고 있지 않는 사람을 위한 것은 아무것도 없지.'\n '당신이 지금 그것을 해야 한다는 것이 무엇을 의미하는지 아나요?'\n '당신의 인생을 위해서 무엇이든 하시오. 모든\n\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7db40504ee6a59cb02a1cbe62d8aee7f6d18958
2,983
ipynb
Jupyter Notebook
examples/Binder/0_PyElastica_Tutorials_Overview.ipynb
engiecat/PyElastica
0ea100e23d5908bf7ebdae4261276539e02a53a6
[ "MIT" ]
71
2020-04-15T17:02:42.000Z
2022-03-26T04:53:51.000Z
examples/Binder/0_PyElastica_Tutorials_Overview.ipynb
engiecat/PyElastica
0ea100e23d5908bf7ebdae4261276539e02a53a6
[ "MIT" ]
59
2020-05-15T03:51:46.000Z
2022-03-28T13:53:01.000Z
examples/Binder/0_PyElastica_Tutorials_Overview.ipynb
engiecat/PyElastica
0ea100e23d5908bf7ebdae4261276539e02a53a6
[ "MIT" ]
57
2020-06-17T20:34:02.000Z
2022-03-16T08:09:54.000Z
48.901639
364
0.68287
[ [ [ "# PyElastica Tutorials\n\nWe have developed a number of different Jupyter notebook tutorials to explain how to use Elastica to simulate Cosserat rods in a number of different cases. Thanks to BinderHub, you can run these tutorials in directly in your web browser without needing to first download and install PyElastica. \n\nWe suggest beginning with the Timoshenko beam tutorial available [here](./1_Timoshenko_Beam.ipynb). It walks through how to set up and simulate a very simple Cossert rod model and explains the basics of how to use Elastica. \n<img src=\"../../assets/timoshenko_beam_figure.png\" alt=\"timoshenko_beam_figure\" style=\"width: 600px;\"/>\n\nAfter this, for a tutorial covering more complicated use cases of a single Cosserat rods, check out the slithering snake tutorial, available [here](./2_Slithering_Snake.ipynb). This tutorial covers a possible use case of Cosserat rods and shows how to post-process the simulation to get quantitative data about the system as well as visualize the output. \n\n<div style=\"text-align: center\">\n<video controls autoplay muted loop width=\"320\" src=\"../../assets/continuum_snake.mp4\" ></video> \n</div>\n\nA list of all the available Jupyter notebook tutorials is [here](./). We are working to add more. If you think you have an interesting use case of Cosserat rods and Elastica and would like to showcase please make a pull request so we can add it! \n\nThere are also a number of example Python scripts available [here](https://github.com/GazzolaLab/PyElastica/tree/master/examples) that cover convergence testing, parameter optimization and other more complex use cases. As a warning, these more complex cases take a much longer time to run. \n\n## More about PyElastica\nIf you want to learn more bout PyElastica and Cosserat rods, visit the [project website](https://cosseratrods.org). Or visit the [PyElastica GitHub repo](https://github.com/GazzolaLab/PyElastica).\n\n## PyElastica Documentation\nDocumentation of PyElastica is available online [here](https://docs.cosseratrods.org). There is also a getting started guide on the project website [here](https://cosseratrods.org/software/pyelastica).\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
e7db46f496925b57c9df8032c9d59f094445e53d
2,963
ipynb
Jupyter Notebook
svc/SVC.ipynb
aswathyaa/Acharya-MachineLearning
65c3afa5c6d7d2604efb143824d0a204bf8638e4
[ "BSD-3-Clause" ]
1
2022-01-19T13:28:41.000Z
2022-01-19T13:28:41.000Z
svc/SVC.ipynb
aswathyaa/Acharya-MachineLearning
65c3afa5c6d7d2604efb143824d0a204bf8638e4
[ "BSD-3-Clause" ]
1
2020-10-03T14:08:55.000Z
2020-10-03T14:08:55.000Z
svc/SVC.ipynb
aswathyaa/Acharya-MachineLearning
65c3afa5c6d7d2604efb143824d0a204bf8638e4
[ "BSD-3-Clause" ]
5
2020-10-04T13:49:08.000Z
2020-10-30T16:37:29.000Z
23.330709
86
0.556531
[ [ [ "# Suport Vector clustering", "_____no_output_____" ], [ "let us learn how to work on svm in sk learn ", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom sklearn.datasets import load_iris\nfrom matplotlib import pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.svm import SVC\n#importing necessary libraries", "_____no_output_____" ], [ "#loading the iris data set from the sklearn library\niris = load_iris()\ndf = pd.DataFrame(iris.data, columns=iris.feature_names)\n#seperating the data set and getting \n#ready to get data ready to give input to the model \ndf['target'] = iris.target\ndf['flower_name'] = df.target.apply(lambda x: iris.target_names[x])\nX = df.drop(['target', 'flower_name'], axis = 'columns')\ny = df.target", "_____no_output_____" ], [ "#spliting the data set into 2 for traning and predicting\nX_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.2)", "_____no_output_____" ], [ "#traning the model and provideint the result \nmodel = SVC()\nmodel.fit(X_train, y_train)", "_____no_output_____" ], [ "#providing the presentage of the correct answer predicted\nmodel.score(X_test, y_test)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7db50701bee7b8f0f6cfc779c44a5e030006919
8,655
ipynb
Jupyter Notebook
time_series/materna_dataset/Materna_PCA.ipynb
sanjosh/machine_learning
e56252f33cceb173bf934a0f9be7c949a926a104
[ "Apache-2.0" ]
null
null
null
time_series/materna_dataset/Materna_PCA.ipynb
sanjosh/machine_learning
e56252f33cceb173bf934a0f9be7c949a926a104
[ "Apache-2.0" ]
null
null
null
time_series/materna_dataset/Materna_PCA.ipynb
sanjosh/machine_learning
e56252f33cceb173bf934a0f9be7c949a926a104
[ "Apache-2.0" ]
null
null
null
21.967005
107
0.505835
[ [ [ "import pandas as pd\nimport csv", "_____no_output_____" ], [ "mdir = \"/home/sandeep/datasets/MaternaDataset/GWA-T-13_Materna-Workload-Traces/Materna-Trace-3/\"\n", "_____no_output_____" ], [ "def set_ts_index(df):\n # convert the column (it's a string) to datetime type\n datetime_series = pd.to_datetime(df['Timestamp'], format='%d.%m.%Y %H:%M:%S', errors='raise')\n\n # create datetime index passing the datetime series\n datetime_index = pd.DatetimeIndex(datetime_series)\n\n # assignment is required for index to change (IMP)\n df = df.set_index(datetime_index)\n return df", "_____no_output_____" ], [ "import os\n\ndataframes = []\nfrom glob import glob\nfilenames = glob(mdir + '*.csv')\nfor idx, f in enumerate(filenames):\n df = pd.read_csv(f, sep=';', quoting = csv.QUOTE_ALL)\n df = set_ts_index(df)\n df = df.rename(columns={\"Disk read throughput [KB/s]\": \"disk_read\", \n \"Disk write throughput [KB/s]\": \"disk_write\",\n \"Network received throughput [KB/s]\": \"net_read\",\n \"Network transmitted throughput [KB/s]\": \"net_write\",\n \"CPU usage [MHZ]\": \"cpu_usage\",\n \"Memory usage [KB]\": \"mem_usage\"\n })\n df.dataframeName = os.path.basename(f)\n dataframes.append(df)", "_____no_output_____" ] ], [ [ "### new dataframe with one column from each VM", "_____no_output_____" ] ], [ [ "new_df = pd.DataFrame()\n\nfor index in range(len(dataframes)):\n diter = dataframes[index]\n new_df[['net_write_' + diter.dataframeName]] = diter[['net_write']]\n \nprint(new_df.shape)\n", "_____no_output_____" ], [ "df = new_df", "_____no_output_____" ], [ "df.describe()\n", "_____no_output_____" ], [ "df.index", "_____no_output_____" ], [ "df.dtypes", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "### Inf columns", "_____no_output_____" ] ], [ [ "df.columns.to_series()[np.isinf(df).any()]\n", "_____no_output_____" ], [ "df.index[np.isinf(df).any(1)]\n", "_____no_output_____" ], [ "import numpy as np\n\ndf.replace([np.inf, -np.inf], np.nan)\n", "_____no_output_____" ] ], [ [ "### Null columns", "_____no_output_____" ] ], [ [ "df.isnull().values.any()", "_____no_output_____" ], [ "df[df.isnull().any(axis=1)] ", "_____no_output_____" ], [ "df = df.interpolate( axis='columns')", "_____no_output_____" ], [ "df.dropna()\ndf.shape", "_____no_output_____" ] ], [ [ "### mean throughput over time per VM", "_____no_output_____" ] ], [ [ "ax = df.mean().plot(grid=False)\n", "_____no_output_____" ], [ "### mean throughput across VMs at any time", "_____no_output_____" ], [ "ax = df.T.mean().plot(grid=False)\n", "_____no_output_____" ] ], [ [ "### multivariate PCA\nhttps://www.statsmodels.org/stable/examples/notebooks/generated/pca_fertility_factors.html", "_____no_output_____" ] ], [ [ "import statsmodels.api as sm\nfrom statsmodels.multivariate.pca import PCA\n\npca_model = PCA(df, standardize=False, demean=True)\n", "_____no_output_____" ], [ "fig = pca_model.plot_scree(log_scale=False)\n", "_____no_output_____" ], [ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(8, 4))\nlines = ax.plot(pca_model.factors.iloc[:,:3], lw=4, alpha=.6)\nax.set_xticklabels(df.T.columns.values[::10])\nax.set_xlim(0, 51)\nax.set_xlabel(\"time\", size=17)\nfig.subplots_adjust(.1, .1, .85, .9)\nlegend = fig.legend(lines, ['PC 1', 'PC 2', 'PC 3'], loc='center right')\nlegend.draw_frame(False)", "_____no_output_____" ], [ "idx = pca_model.loadings.iloc[:,0].argsort()\n", "_____no_output_____" ], [ "def make_plot(labels):\n fig, ax = 
plt.subplots(figsize=(9,5))\n ax = df.loc[labels].T.plot(legend=False, grid=False, ax=ax)\n df.T.mean().plot(ax=ax, grid=False, label='Mean')\n ax.set_xlim(0, 51);\n fig.subplots_adjust(.1, .1, .75, .9)\n ax.set_xlabel(\"time\", size=17)\n ax.set_ylabel(\"vm\", size=17);\n legend = ax.legend(*ax.get_legend_handles_labels(), loc='center left', bbox_to_anchor=(1, .5))\n legend.draw_frame(False)", "_____no_output_____" ], [ "labels = df.index[idx[-5:]]\nmake_plot(labels)", "_____no_output_____" ], [ "idx = pca_model.loadings.iloc[:,1].argsort()\nmake_plot(df.index[idx[-5:]])", "_____no_output_____" ], [ "make_plot(df.index[idx[:5]])\n", "_____no_output_____" ], [ "fig, ax = plt.subplots()\npca_model.loadings.plot.scatter(x='comp_00',y='comp_01', ax=ax)\nax.set_xlabel(\"PC 1\", size=17)\nax.set_ylabel(\"PC 2\", size=17)\ndf.index[pca_model.loadings.iloc[:, 1] > .2].values", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7db50e499799cb6f847af7c73cb96fd3aa5974f
112,081
ipynb
Jupyter Notebook
6.2 Dense.ipynb
wifewriter/cicids2017-ml
ff2d842da2aee9bfd44fd43131c63fb6c391c90f
[ "BSD-3-Clause" ]
12
2020-09-08T12:50:42.000Z
2022-03-29T17:46:55.000Z
6.2 Dense.ipynb
wifewriter/cicids2017-ml
ff2d842da2aee9bfd44fd43131c63fb6c391c90f
[ "BSD-3-Clause" ]
1
2020-12-29T12:31:20.000Z
2021-04-29T03:14:39.000Z
6.2 Dense.ipynb
wifewriter/cicids2017-ml
ff2d842da2aee9bfd44fd43131c63fb6c391c90f
[ "BSD-3-Clause" ]
9
2020-10-07T04:25:50.000Z
2022-03-27T13:00:40.000Z
112,081
112,081
0.832371
[ [ [ "#!/usr/bin/env python3\n# --------------------------------------------------------------\n# Author: Mahendra Data - [email protected]\n# License: BSD 3 clause\n# --------------------------------------------------------------", "_____no_output_____" ], [ "# Mount Google Drive\nfrom google.colab import drive\ndrive.mount(\"/content/drive\")", "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly&response_type=code\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n" ], [ "!nvidia-smi", "Wed Aug 12 07:32:58 2020 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 450.57 Driver Version: 418.67 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\n| N/A 34C P8 29W / 149W | 0MiB / 11441MiB | 0% Default |\n| | | ERR! |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n" ], [ "import os\nimport logging\n\nimport pandas as pd\nimport tensorflow.keras as keras\n\nfrom tensorflow.keras.utils import plot_model", "_____no_output_____" ], [ "# Log setting\nlogging.basicConfig(format=\"%(asctime)s %(levelname)s %(message)s\", datefmt=\"%H:%M:%S\", level=logging.INFO)\n\n# Change display.max_rows to show all features.\npd.set_option(\"display.max_rows\", 85)", "_____no_output_____" ], [ "import numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom sklearn.metrics import classification_report\nfrom sklearn.preprocessing import MinMaxScaler\n\n\ndef preprocessing(df: pd.DataFrame) -> (np.ndarray, np.ndarray):\n # Shuffle the dataset\n df = df.sample(frac=1)\n\n # Split features and labels\n x = df.iloc[:, df.columns != 'Label']\n y = df[['Label']].to_numpy()\n\n # Scale the features between 0 ~ 1\n scaler = MinMaxScaler()\n x = scaler.fit_transform(x)\n\n return x, y\n\n\ndef plot_history(history: tf.keras.callbacks.History):\n # summarize history for accuracy\n plt.plot(history.history['sparse_categorical_accuracy'])\n plt.plot(history.history['val_sparse_categorical_accuracy'])\n plt.title('model2 accuracy')\n plt.ylabel('accuracy')\n plt.xlabel('epoch')\n plt.legend(['train', 'test'], loc='upper left')\n plt.show()\n\n # summarize history for loss\n plt.plot(history.history['loss'])\n plt.plot(history.history['val_loss'])\n plt.title('model2 loss')\n plt.ylabel('loss')\n plt.xlabel('epoch')\n plt.legend(['train', 'test'], loc='upper left')\n plt.show()\n\n\ndef evaluation(model: keras.Model, x_test: np.ndarray, 
y_test: np.ndarray):\n score = model.evaluate(x_test, y_test, verbose=False)\n logging.info('Evaluation:\\nLoss: {}\\nAccuracy : {}\\n'.format(score[0], score[1]))\n\n # F1 score\n y_pred = model.predict(x_test, batch_size=1024, verbose=False)\n y_pred = np.argmax(y_pred, axis=1)\n\n logging.info(\"\\n{}\".format(classification_report(y_test, y_pred)))\n", "_____no_output_____" ], [ "PROCESSED_DIR_PATH = \"/content/drive/My Drive/CICIDS2017/ProcessedDataset\"\nMODEL_DIR_PATH = \"/content/drive/My Drive/CICIDS2017/Model\"", "_____no_output_____" ], [ "def create_dense_model() -> keras.Model:\n # Creating layers\n inputs = keras.layers.Input(shape=(78, ))\n x = keras.layers.Dense(128, activation='relu')(inputs)\n x = keras.layers.Dense(64, activation='relu')(x)\n x = keras.layers.Dense(32, activation='relu')(x)\n outputs = keras.layers.Dense(15, activation='softmax')(x)\n dense_model = keras.Model(inputs=inputs, outputs=outputs)\n\n dense_model.compile(loss='sparse_categorical_crossentropy',\n metrics=['sparse_categorical_accuracy'],\n optimizer='adam')\n\n return dense_model", "_____no_output_____" ], [ "# Create model\nmodel = create_dense_model()\nlogging.info(model.summary())", "07:33:08 INFO None\n" ], [ "plot_model(model, show_shapes=True)", "_____no_output_____" ], [ "# Training\ndf = pd.read_csv(os.path.join(PROCESSED_DIR_PATH, 'train_MachineLearningCVE.csv'), skipinitialspace=True)\nlogging.info(\"Class distribution\\n{}\".format(df.Label.value_counts()))", "07:33:29 INFO Class distribution\n0 1818477\n4 184858\n10 127144\n2 102421\n3 8234\n7 6350\n11 4718\n6 4637\n5 4399\n1 1573\n12 1206\n14 522\n9 29\n13 17\n8 9\nName: Label, dtype: int64\n" ], [ "X, y = preprocessing(df)\ndel df", "_____no_output_____" ], [ "# Training\nlogging.info(\"*** TRAINING START ***\")\nhistory = model.fit(X, y, validation_split=0.1, epochs=125, batch_size=1024, verbose=True)", "07:33:34 INFO *** TRAINING START ***\n" ], [ "logging.info(\"*** TRAINING FINISH ***\")\ndel X, y", "07:50:52 INFO *** TRAINING FINISH ***\n" ], [ "# Save the model\nmodel.save(os.path.join(MODEL_DIR_PATH, \"05_dense.h5\"))\n\nplot_history(history)", "_____no_output_____" ], [ "# Evaluation\ndf = pd.read_csv(os.path.join(PROCESSED_DIR_PATH, 'train_MachineLearningCVE.csv'), skipinitialspace=True)\nlogging.info(\"Class distribution\\n{}\".format(df.Label.value_counts()))", "07:51:17 INFO Class distribution\n0 1818477\n4 184858\n10 127144\n2 102421\n3 8234\n7 6350\n11 4718\n6 4637\n5 4399\n1 1573\n12 1206\n14 522\n9 29\n13 17\n8 9\nName: Label, dtype: int64\n" ], [ "X, y = preprocessing(df)\ndel df", "_____no_output_____" ], [ "evaluation(model, X, y)\ndel X, y", "07:53:18 INFO Evaluation:\nLoss: 0.00502212718129158\nAccuracy : 0.998765766620636\n\n07:53:24 INFO \n precision recall f1-score support\n\n 0 1.00 1.00 1.00 1818477\n 1 1.00 0.38 0.55 1573\n 2 1.00 1.00 1.00 102421\n 3 1.00 1.00 1.00 8234\n 4 1.00 1.00 1.00 184858\n 5 0.98 0.99 0.99 4399\n 6 1.00 0.99 0.99 4637\n 7 1.00 1.00 1.00 6350\n 8 1.00 1.00 1.00 9\n 9 0.96 0.76 0.85 29\n 10 0.99 1.00 1.00 127144\n 11 0.99 0.98 0.98 4718\n 12 0.70 0.99 0.82 1206\n 13 0.57 0.24 0.33 17\n 14 1.00 0.05 0.10 522\n\n accuracy 1.00 2264594\n macro avg 0.94 0.83 0.84 2264594\nweighted avg 1.00 1.00 1.00 2264594\n\n" ], [ "logging.info(\"*** END ***\")", "07:53:24 INFO *** END ***\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7db51e222a4558749affe056d26ab0bc476b82b
194,306
ipynb
Jupyter Notebook
final/BIOMD0000000562/BIOMD0000000561-notebook.ipynb
sys-bio/temp-biomodels
596eebb590d72e74419773f4e9b829a62d7fff9a
[ "CC0-1.0" ]
null
null
null
final/BIOMD0000000562/BIOMD0000000561-notebook.ipynb
sys-bio/temp-biomodels
596eebb590d72e74419773f4e9b829a62d7fff9a
[ "CC0-1.0" ]
5
2022-03-30T21:33:45.000Z
2022-03-31T20:08:15.000Z
tests/fixtures/BIOMD0000000562/BIOMD0000000561-notebook.ipynb
biosimulations/biomodels-qc
584ee9bc51245493efae56ac5317b99265750460
[ "MIT" ]
null
null
null
561.578035
164,915
0.930707
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7db5633fed82d1fb992b82f79d60533a330f798
178,365
ipynb
Jupyter Notebook
In_Db2_Machine_Learning/Building ML Models with Db2/Notebooks/Regression_Demo.ipynb
ibmmichaelschapira/db2-samples
1bf3753b4a1034107edf860191b7c93835e0f975
[ "Apache-2.0" ]
54
2019-08-02T13:15:07.000Z
2022-03-21T17:36:48.000Z
In_Db2_Machine_Learning/Building ML Models with Db2/Notebooks/Regression_Demo.ipynb
junsulee75/db2-samples
d9ee03101cad1f9167eebc1609b4151559124017
[ "Apache-2.0" ]
13
2019-07-26T13:51:16.000Z
2022-03-25T21:43:52.000Z
In_Db2_Machine_Learning/Building ML Models with Db2/Notebooks/Regression_Demo.ipynb
junsulee75/db2-samples
d9ee03101cad1f9167eebc1609b4151559124017
[ "Apache-2.0" ]
75
2019-07-20T04:53:24.000Z
2022-03-23T20:56:55.000Z
145.012195
77,852
0.88477
[ [ [ "# Linear Regression with Db2 Stored Procedures", "_____no_output_____" ], [ "## Contents:\n* [1. Introduction](#Introduction)\n* [2. Libraries and Modules](#Libraries-and-Modules)\n* [3. Connect to Db2](#Connect-to-Db2)\n* [4. Data exploration](#Data-exploration)\n* [5. Train/Test Split](#Train/Test-Split)\n* [6. Data transformation](#Data-transformation-after-Train/Test-Split)\n* [7. Train a linear regression model](#Train-a-linear-regression-model)\n* [8. Predict purchase amount for train and test data](#Predict-sale-prices-for-test-data)\n* [9. Evaluate Model Performance](#Evaluate-Model-Performance)", "_____no_output_____" ], [ "# 1. Introduction <a class=\"anchor\" id=\"Introduction\"></a>", "_____no_output_____" ], [ "Historical customer data for a fictional outdoor equipment store is used in IBM offering tutorials to train the machine learning models. The sample data is structured in rows and columns.\n\n**Feature columns**\n\nFeature columns are columns that contain the attributes on which the machine learning model will base predictions. In this historical data, there are four feature columns:\n\nGENDER: Customer gender\n\nAGE: Customer age\n\nMARITAL_STATUS: \"Married\", \"Single\", or \"Unspecified\"\n\nPROFESSION: General category of the customer's profession, such \"Hospitality\" or \"Sales\", or simply \"Other\"\n\nIS_TENT: Whether or not the customer bought a tent\n\nPRODUCT_LINE: The product category in which the customer has been most interested\n\n**Label column**\n\nPURCHASE_AMOUNT: The average amount of money the customer has spent on each visit to the store\n\n\nLink: https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa07a773f71cf1172a349f33e2028e4e", "_____no_output_____" ], [ "# 2. Libraries and Modules <a class=\"anchor\" id=\"Libraries-and-Modules\"></a>", "_____no_output_____" ] ], [ [ "import os\nimport sys\nmodule_path = os.path.abspath(os.path.join('../lib/'))\nif module_path not in sys.path:\n sys.path.append(module_path)\nimport ibm_db\nimport ibm_db_dbi\n# import ibm_db_sa\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np", "_____no_output_____" ], [ "from InDBMLModules import col_to_row_organize, print_multi_result_set, connect_to_db,\\\n close_connection_to_db, drop_object, plot_histogram, plot_barchart,\\\n null_impute_most_freq, null_impute_mean, plot_pred_act\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "# 3. Connect to Db2 <a class=\"anchor\" id=\"Connect-to-Db2\"></a>", "_____no_output_____" ] ], [ [ "conn_str = \"DATABASE=in_db;\" + \\\n \"HOSTNAME=*********************;\"+ \\\n \"PROTOCOL=TCPIP;\" + \\\n \"PORT=*******;\" + \\\n \"UID=***;\" + \\\n \"PWD=******************;\"", "_____no_output_____" ], [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=True)\nrc = close_connection_to_db(ibm_db_conn, verbose=True)", "Connected to the database!\nConnection is closed.\n" ] ], [ [ "# 4. 
Data exploration <a class=\"anchor\" id=\"Data-exploration\"></a>", "_____no_output_____" ], [ "## Create a special schema for this experiment ", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG\", \"SCHEMA\", ibm_db_conn, verbose = True)\nsql =\"create schema LINREG authorization MLP\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"Schema LINREG was created.\")\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing SCHEMA LINREG was not found.\nSchema LINREG was created.\n" ] ], [ [ "## Collect statistics on the entire dataset by creating the column properties table", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GS_COL_PROP\", \"TABLE\", ibm_db_conn, verbose = True)\nsql = \"\"\"CALL IDAX.COLUMN_PROPERTIES('intable=DATA.GO_SALES, outtable=LINREG.GS_COL_PROP, withstatistics=true, incolumn=ID:id; PURCHASE_AMOUNT:target')\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"TABLE LINREG.GS_COL_PROP was created.\")\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing TABLE LINREG.GS_COL_PROP was not found.\nTABLE LINREG.GS_COL_PROP was created.\n" ] ], [ [ "## List columns with any nulls", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\nsql = \"select COLNO, NAME, TYPE,NUMMISSING,NUMMISSING+NUMINVALID+NUMVALID as ALL_VALUES, dec(NUMMISSING,10,2)/(dec(NUMMISSING, 10,2)+dec(NUMINVALID, 10,2)+dec(NUMVALID, 10,2))*100 as NULL_PERCENTAGE from LINREG.GS_COL_PROP where NUMMISSING > 0\"\nGS_NULL_PREC = pd.read_sql(sql,ibm_db_dbi_conn)\nprint(\"Column properties table fetched successfully!\")\n \nrc = close_connection_to_db(ibm_db_conn, verbose=False)\nGS_NULL_PREC.sort_values('COLNO')", "Column properties table fetched successfully!\n" ] ], [ [ "## Evaluate CONTINUOUS columns using RUNSTATS", "_____no_output_____" ], [ "### Plot distribution based on runstats results", "_____no_output_____" ] ], [ [ "numerical_columns = [\"AGE\"]\nplot_histogram (numerical_columns,\"DATA\",\"GO_SALES\",conn_str)", "_____no_output_____" ] ], [ [ "## Evaluate NOMINAL columns using RUNSTATS", "_____no_output_____" ], [ "### Plot data distribution for nominal columns", "_____no_output_____" ] ], [ [ "nominal_columns = [\"GENDER\",\"MARITAL_STATUS\",\"PROFESSION\",\"PRODUCT_LINE\",\"IS_TENT\"]\nplot_barchart (nominal_columns, \"DATA\", \"GO_SALES\", conn_str)", "_____no_output_____" ] ], [ [ "## Check data skewness using SUMMARY1000 stored procedure", "_____no_output_____" ] ], [ [ "# Create GO_SALES_SUM1000 tables that contain whole dataset feature stats (mean, stdev, freq, etc)\nibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GO_SALES_SUM1000\", \"TABLE\", ibm_db_conn, verbose = True)\ndrop_object(\"LINREG.GO_SALES_SUM1000_CHAR\", \"TABLE\", ibm_db_conn, verbose = True)\ndrop_object(\"LINREG.GO_SALES_SUM1000_NUM\", \"TABLE\", ibm_db_conn, verbose = True)\n\nsql = \"CALL IDAX.SUMMARY1000('intable=DATA.GO_SALES,outtable=LINREG.GO_SALES_SUM1000')\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"SUM1000 tables were created.\")\n\nsql = \"select * from LINREG.GO_SALES_SUM1000_NUM\"\nGO_SALES_SUM1000_NUM = pd.read_sql(sql,ibm_db_dbi_conn)\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)\n\nGO_SALES_SUM1000_NUM[[\"COLUMNNAME\", \"SKEWNESS\"]]", "Pre-existing TABLE 
LINREG.GO_SALES_SUM1000 was not found.\nPre-existing TABLE LINREG.GO_SALES_SUM1000_CHAR was not found.\nPre-existing TABLE LINREG.GO_SALES_SUM1000_NUM was not found.\nSUM1000 tables were created.\n" ] ], [ [ "**Observation:**\n\nSKEWNESS on numerical columns is negligible.", "_____no_output_____" ], [ "# 5. Train/Test Split<a class=\"anchor\" id=\"Train/Test-Split\"></a>", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSTRAIN\", \"TABLE\", ibm_db_conn, verbose = True)\ndrop_object(\"LINREG.GSTEST\", \"TABLE\", ibm_db_conn, verbose = True)\n\nsql = \"CALL IDAX.SPLIT_DATA('intable = DATA.GO_SALES, id = ID, traintable = LINREG.GSTRAIN, testtable = LINREG.GSTEST, fraction=0.8, seed=1')\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"Dataset splitting was successful!\")\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing TABLE LINREG.GSTRAIN was not found.\nPre-existing TABLE LINREG.GSTEST was not found.\nDataset splitting was successful!\n" ] ], [ [ "# 6. Data transformation<a class=\"anchor\" id=\"Data-transformation-after-Train/Test-Split\"></a>", "_____no_output_____" ], [ "## Get statistics of the train data to be used for transforming the test data", "_____no_output_____" ], [ "### Create the SUMMARY1000 table for training dataset", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSTRAIN_STATS\", \"TABLE\", ibm_db_conn, verbose = True)\ndrop_object(\"LINREG.GSTRAIN_STATS_NUM\", \"TABLE\", ibm_db_conn, verbose = True)\ndrop_object(\"LINREG.GSTRAIN_STATS_CHAR\", \"TABLE\", ibm_db_conn, verbose = True) \n \nsql = \"\"\"CALL IDAX.SUMMARY1000('intable=LINREG.GSTRAIN,outtable=LINREG.GSTRAIN_STATS')\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"LINREG.GSTRAIN_STATS, LINREG.GSTRAIN_STATS_NUM, and LINREG.GSTRAIN_STATS_CHAR were created\")\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing TABLE LINREG.GSTRAIN_STATS was not found.\nPre-existing TABLE LINREG.GSTRAIN_STATS_NUM was not found.\nPre-existing TABLE LINREG.GSTRAIN_STATS_CHAR was not found.\nLINREG.GSTRAIN_STATS, LINREG.GSTRAIN_STATS_NUM, and LINREG.GSTRAIN_STATS_CHAR were created\n" ] ], [ [ "## Null imputation", "_____no_output_____" ], [ "### Null impute NUMERICAL columns in TRAINING data with mean", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n \nsql = \"\"\"CALL IDAX.IMPUTE_DATA('intable=LINREG.GSTRAIN,method=mean,inColumn=AGE');\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"AGE in LINREG.GSTRAIN null imputed successfully!\")\n \nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "AGE in LINREG.GSTRAIN null imputed successfully!\n" ] ], [ [ "### Null impute the NOMINAL columns in TRAINING with the most frequent value", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\nfor column in nominal_columns:\n null_impute_most_freq (\"LINREG\", \"GSTRAIN\", column, \"GSTRAIN_STATS\",ibm_db_conn, verbose=True)\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "GENDER in LINREG.GSTRAIN null imputed successfully!\nMARITAL_STATUS in LINREG.GSTRAIN null imputed successfully!\nPROFESSION in LINREG.GSTRAIN null imputed successfully!\nPRODUCT_LINE in LINREG.GSTRAIN null imputed successfully!\nIS_TENT in LINREG.GSTRAIN null imputed successfully!\n" ] ], [ [ "### Null impute 
NUMERICAL column in TEST data with mean", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\nfor column in numerical_columns:\n null_impute_mean(\"LINREG\", \"GSTEST\", column, \"GSTRAIN_STATS\",ibm_db_conn, verbose=True)\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "AGE in LINREG.GSTEST null imputed successfully!\n" ] ], [ [ "### Null impute the NOMINAL columns in TEST data with the most frequent value", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\nfor column in nominal_columns:\n null_impute_most_freq (\"LINREG\", \"GSTEST\", column, \"GSTRAIN_STATS\",ibm_db_conn, verbose=True)\n \nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "GENDER in LINREG.GSTEST null imputed successfully!\nMARITAL_STATUS in LINREG.GSTEST null imputed successfully!\nPROFESSION in LINREG.GSTEST null imputed successfully!\nPRODUCT_LINE in LINREG.GSTEST null imputed successfully!\nIS_TENT in LINREG.GSTEST null imputed successfully!\n" ] ], [ [ "## Standardize AGE in training data", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSTRAIN_STD\", \"TABLE\", ibm_db_conn, verbose = True)\n \nsql = \"\"\"CALL IDAX.STD_NORM('intable=LINREG.GSTRAIN, \n incolumn=\"GENDER\":L;\"AGE\":S;\"MARITAL_STATUS\":L;\"PROFESSION\":L;\"IS_TENT\":L;\"PRODUCT_LINE\":L;\"PURCHASE_AMOUNT\":L, \n id=ID, outtable=LINREG.GSTRAIN_STD');\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"LINREG.GSTRAIN_STD was created and AGE column was standardized.\")\n \nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing TABLE LINREG.GSTRAIN_STD was not found.\nLINREG.GSTRAIN_STD was created and AGE column was standardized.\n" ] ], [ [ "## Standardize AGE in test data", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSTEST_STD\", \"TABLE\", ibm_db_conn, verbose = True)\n \nsql = \"CREATE TABLE LINREG.GSTEST_STD AS (SELECT * FROM LINREG.GSTEST) WITH DATA\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint (\"Table LINREG.GSTEST_STD was created.\")\n\nsql = \"\"\"UPDATE LINREG.GSTEST_STD \n SET AGE = ((CAST(AGE AS FLOAT) - (SELECT AVERAGE FROM LINREG.GSTRAIN_STATS_NUM WHERE COLUMNNAME='AGE'))/(SELECT STDDEV FROM LINREG.GSTRAIN_STATS_NUM WHERE COLUMNNAME='AGE'))\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"AGE was standardized in test data successfully!\")\n\n#renaming AGE to STD_AGE\nsql = \"\"\"ALTER TABLE LINREG.GSTEST_STD RENAME COLUMN AGE TO STD_AGE\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing TABLE LINREG.GSTEST_STD was not found.\nTable LINREG.GSTEST_STD was created.\nAGE was standardized in test data successfully!\n" ] ], [ [ "# 7. 
Train a linear regression model<a class=\"anchor\" id=\"Train-a-linear-regression-model\"></a>", "_____no_output_____" ], [ "## Train the model", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSLINREG\", \"MODEL\", ibm_db_conn, verbose = True)\n\nsql = \"\"\"CALL IDAX.LINEAR_REGRESSION('model=LINREG.GSLINREG, intable=LINREG.GSTRAIN_STD, id=ID, \n target= PURCHASE_AMOUNT, incolumn =GENDER;STD_AGE;MARITAL_STATUS;PROFESSION;IS_TENT;PRODUCT_LINE');\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"Model trained successfully!\")\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing MODEL LINREG.GSLINREG was not found.\nModel trained successfully!\n" ] ], [ [ "# 8. Predict purchase amount for train and test data<a class=\"anchor\" id=\"Predict-sale-prices-for-test-data\"></a>", "_____no_output_____" ], [ "## Create view GSTEST_INPUT from feature columns in GSTEST_STD", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSTEST_INPUT\", \"VIEW\", ibm_db_conn, verbose = True)\n\nsql = \"CREATE VIEW LINREG.GSTEST_INPUT AS (SELECT ID,GENDER,STD_AGE,MARITAL_STATUS,PROFESSION,IS_TENT,PRODUCT_LINE FROM LINREG.GSTEST_STD)\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"VIEW LINREG.GSTEST_INPUT was created successfully!\")\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing VIEW LINREG.GSTEST_INPUT was not found.\nVIEW LINREG.GSTEST_INPUT was created successfully!\n" ] ], [ [ "## Predict purchase amounts using IDAX.PREDICT_LINEAR_REGRESSION ", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSTEST_OUTPUT\", \"TABLE\", ibm_db_conn, verbose = True)\n \nsql = \"\"\"CALL IDAX.PREDICT_LINEAR_REGRESSION('model=LINREG.GSLINREG, intable=LINREG.GSTEST_INPUT, outtable =LINREG.GSTEST_OUTPUT, id=ID')\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"LINREG.GSTEST_OUTPUT was created with test results.\")\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing TABLE LINREG.GSTEST_OUTPUT was not found.\nLINREG.GSTEST_OUTPUT was created with test results.\n" ] ], [ [ "## Create view GSTRAIN_INPUT from feature columns in GSTRAIN_STD", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSTRAIN_INPUT\", \"VIEW\", ibm_db_conn, verbose = True)\n\nsql = \"CREATE VIEW LINREG.GSTRAIN_INPUT AS (SELECT ID,GENDER,STD_AGE,MARITAL_STATUS,PROFESSION,IS_TENT,PRODUCT_LINE FROM LINREG.GSTRAIN_STD)\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"VIEW LINREG.GSTRAIN_INPUT was created successfully!\")\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Pre-existing VIEW LINREG.GSTRAIN_INPUT was not found.\nVIEW LINREG.GSTRAIN_INPUT was created successfully!\n" ] ], [ [ "## Predict purchase amounts using IDAX.PREDICT_LINEAR_REGRESSION ", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\ndrop_object(\"LINREG.GSTRAIN_OUTPUT\", \"TABLE\", ibm_db_conn, verbose = True)\n \nsql = \"\"\"CALL IDAX.PREDICT_LINEAR_REGRESSION('model=LINREG.GSLINREG, intable=LINREG.GSTRAIN_INPUT, outtable =LINREG.GSTRAIN_OUTPUT, id=ID')\"\"\"\nstmt = ibm_db.exec_immediate(ibm_db_conn, sql)\nprint(\"LINREG.GSTRAIN_OUTPUT was created with train results.\")\n\nrc = close_connection_to_db(ibm_db_conn, 
verbose=False)", "Pre-existing TABLE LINREG.GSTRAIN_OUTPUT was not found.\nLINREG.GSTRAIN_OUTPUT was created with train results.\n" ] ], [ [ "# 9. Evaluate Model Performance<a class=\"anchor\" id=\"Evaluate-Model-Performance\"></a>", "_____no_output_____" ], [ "## Evaluate model performance on TRAINING data", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\nprint(\"Training performance: \")\nsql = \"\"\"CALL IDAX.MSE('intable= LINREG.GSTRAIN_STD, id = ID, target = PURCHASE_AMOUNT, resulttable=LINREG.GSTRAIN_OUTPUT, resultid=ID, resulttarget=PURCHASE_AMOUNT')\"\"\"\nprint_multi_result_set(ibm_db_conn, sql)\nsql = \"\"\"CALL IDAX.MAE('intable= LINREG.GSTRAIN_STD, id = ID, target = PURCHASE_AMOUNT, resulttable=LINREG.GSTRAIN_OUTPUT, resultid=ID, resulttarget=PURCHASE_AMOUNT')\"\"\"\nprint_multi_result_set(ibm_db_conn, sql)\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Training performance: \n{'MSE': 98.36862842452432}\n{'MAE': 7.5297574998766}\n" ] ], [ [ "## Evaluate model performance on TEST data", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\nprint(\"Test performance: \")\nsql = \"\"\"CALL IDAX.MSE('intable= LINREG.GSTEST_STD, id = ID, target = PURCHASE_AMOUNT, resulttable=LINREG.GSTEST_OUTPUT, resultid=ID, resulttarget=PURCHASE_AMOUNT')\"\"\"\nprint_multi_result_set(ibm_db_conn, sql)\nsql = \"\"\"CALL IDAX.MAE('intable= LINREG.GSTEST_STD, id = ID, target = PURCHASE_AMOUNT, resulttable=LINREG.GSTEST_OUTPUT, resultid=ID, resulttarget=PURCHASE_AMOUNT')\"\"\"\nprint_multi_result_set(ibm_db_conn, sql)\n\nrc = close_connection_to_db(ibm_db_conn, verbose=False)", "Test performance: \n{'MSE': 97.53595615684684}\n{'MAE': 7.461251686160011}\n" ] ], [ [ "**Observations:**\n\n* Mean absolute error on test data is 7.46 -> Model predicts with fairly good accuracy.\n* Performance is consistent for Training and Test datasets -> Model is not overfitting the training set.", "_____no_output_____" ], [ "## Visually evaluate model performance", "_____no_output_____" ] ], [ [ "ibm_db_conn, ibm_db_dbi_conn = connect_to_db(conn_str, verbose=False)\n\nsql = \"\"\"select ACT.ID, ACT.PURCHASE_AMOUNT AS ACTUAL, PRED.PURCHASE_AMOUNT AS PREDICTION\n from LINREG.GSTEST_STD AS ACT, LINREG.GSTEST_OUTPUT AS PRED\n where ACT.ID = PRED.ID\"\"\"\nGSTEST_ACT_PRED = pd.read_sql(sql,ibm_db_dbi_conn)\n \nrc = close_connection_to_db(ibm_db_conn, verbose=False)\n\nact = GSTEST_ACT_PRED.ACTUAL.values\npred = GSTEST_ACT_PRED.PREDICTION.values\nplot_pred_act(pred,act,\"Purchase Amount Prediction Performance on Test Data\", \"Actual\", \"Prediction\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e7db61acd1b5d920ce76d3f6f3ebad3cb8805da9
3,372
ipynb
Jupyter Notebook
trustReads_newsApp/.ipynb_checkpoints/find_related_urls-checkpoint.ipynb
avani17101/ML-models-and-simple-python-codes-deployment-in-webapps
a70a13c016238c42c8ec57fcd453fb76d0f1a250
[ "MIT" ]
1
2020-06-27T06:35:30.000Z
2020-06-27T06:35:30.000Z
trustReads_newsApp/.ipynb_checkpoints/find_related_urls-checkpoint.ipynb
avani17101/ML-models-and-simple-python-codes-deployment-in-webapps
a70a13c016238c42c8ec57fcd453fb76d0f1a250
[ "MIT" ]
null
null
null
trustReads_newsApp/.ipynb_checkpoints/find_related_urls-checkpoint.ipynb
avani17101/ML-models-and-simple-python-codes-deployment-in-webapps
a70a13c016238c42c8ec57fcd453fb76d0f1a250
[ "MIT" ]
null
null
null
31.514019
149
0.564353
[ [ [ "\ndef find_related_urls(title):\n \"\"\"\n args: title of article\n returns: links of most related articles from trusted sources\n \"\"\"\n try: \n from googlesearch import search \n except ImportError: \n print(\"No module named 'google' found\") \n \n print(title)\n related_urls = []\n # to search \n query1 = \"ndtv: \"+ title\n query2 = \"timesofindia: \"+title\n query3 = \"hindustantimes: \" + title\n print(\"Related urls extracted from trusted sources\")\n for q in search(query1, tld=\"com\", num=10, stop=1, pause=2): \n print(\"from NDTV\")\n print(q)\n related_urls.append(q)\n for r in search(query2, tld=\"co.in\", num=10, stop=1, pause=2): \n print(\"from NDTV\")\n print(r)\n related_urls.append(r)\n for s in search(query3, tld=\"com\", num=10, stop=1, pause=2): \n print(s)\n related_urls.append(s)\n return related_urls\n ", "_____no_output_____" ], [ "# file = open(\"user_query.txt\") #file containes the topic user wants to read\nquery = \"Maulana Saad Corona wont affect muslims\"\nfind_related_urls(query)", "Maulana Saad Corona wont affect muslims\nrelated urls extracted from trusted sources\nhttps://www.ndtv.com/india-news/coronavirus-islamic-sect-chief-6-others-charged-for-delhi-nizamuddin-event-amid-covid-19-2204136\nhttps://timesofindia.indiatimes.com/india/in-india-coronavirus-fans-religious-hatred-nyt/articleshow/75119421.cms\nhttps://www.hindustantimes.com/india-news/jamaat-brought-collective-shame-and-islamophobia-muslims-say/story-ermZoPYGGZ8HBMNeErSnvM.html\n" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
e7db6344f10129c35cbbd9c50a2439a952bc322e
638,618
ipynb
Jupyter Notebook
adanet/examples/tutorials/adanet_objective.ipynb
sararob/adanet
26388aeb67ec30c9e98635497e6b5b3476378db7
[ "Apache-2.0" ]
2
2019-01-04T19:23:23.000Z
2021-02-14T21:48:03.000Z
adanet/examples/tutorials/adanet_objective.ipynb
sararob/adanet
26388aeb67ec30c9e98635497e6b5b3476378db7
[ "Apache-2.0" ]
null
null
null
adanet/examples/tutorials/adanet_objective.ipynb
sararob/adanet
26388aeb67ec30c9e98635497e6b5b3476378db7
[ "Apache-2.0" ]
null
null
null
76.28022
818
0.634952
[ [ [ "##### Copyright 2018 The AdaNet Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# The AdaNet objective", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_objective.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_objective.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>", "_____no_output_____" ], [ "One of key contributions from *AdaNet: Adaptive Structural Learning of Neural\nNetworks* [[Cortes et al., ICML 2017](https://arxiv.org/abs/1607.01097)] is\ndefining an algorithm that aims to directly minimize the DeepBoost\ngeneralization bound from *Deep Boosting*\n[[Cortes et al., ICML 2014](http://proceedings.mlr.press/v32/cortesb14.pdf)]\nwhen applied to neural networks. This algorithm, called **AdaNet**, adaptively\ngrows a neural network as an ensemble of subnetworks that minimizes the AdaNet\nobjective (a.k.a. AdaNet loss):\n\n$$F(w) = \\frac{1}{m} \\sum_{i=1}^{m} \\Phi \\left(\\sum_{j=1}^{N}w_jh_j(x_i), y_i \\right) + \\sum_{j=1}^{N} \\left(\\lambda r(h_j) + \\beta \\right) |w_j| $$\n\nwhere $w$ is the set of mixture weights, one per subnetwork $h$,\n$\\Phi$ is a surrogate loss function such as logistic loss or MSE, $r$ is a\nfunction for measuring a subnetwork's complexity, and $\\lambda$ and $\\beta$\nare hyperparameters.\n\n## Mixture weights\n\nSo what are mixture weights? When forming an ensemble $f$ of subnetworks $h$,\nwe need to somehow combine the their predictions. This is done by multiplying\nthe outputs of subnetwork $h_i$ with mixture weight $w_i$, and summing the\nresults:\n\n$$f(x) = \\sum_{j=1}^{N}w_jh_j(x)$$\n\nIn practice, most commonly used set of mixture weight is **uniform average\nweighting**:\n\n$$f(x) = \\frac{1}{N}\\sum_{j=1}^{N}h_j(x)$$\n\nHowever, we can also solve a convex optimization problem to learn the mixture\nweights that minimize the loss function $\\Phi$:\n\n$$F(w) = \\frac{1}{m} \\sum_{i=1}^{m} \\Phi \\left(\\sum_{j=1}^{N}w_jh_j(x_i), y_i \\right)$$\n\nThis is the first term in the AdaNet objective. The second term applies L1\nregularization to the mixture weights:\n\n$$\\sum_{j=1}^{N} \\left(\\lambda r(h_j) + \\beta \\right) |w_j|$$\n\nWhen $\\lambda > 0$ this penalty serves to prevent the optimization from\nassigning too much weight to more complex subnetworks according to the\ncomplexity measure function $r$.\n\n## How AdaNet uses the objective\n\nThis objective function serves two purposes:\n\n1. To **learn to scale/transform the outputs of each subnetwork $h$** as part\n of the ensemble.\n2. 
To **select the best candidate subnetwork $h$** at each AdaNet iteration\n to include in the ensemble.\n\nEffectively, when learning mixture weights $w$, AdaNet solves a convex\ncombination of the outputs of the frozen subnetworks $h$. For $\\lambda >0$,\nAdaNet penalizes more complex subnetworks with greater L1 regularization on\ntheir mixture weight, and will be less likely to select more complex subnetworks\nto add to the ensemble at each iteration.\n\nIn this tutorial, you will observe the benefits of using AdaNet to learn the\nensemble's mixture weights and to perform candidate selection.\n\n", "_____no_output_____" ] ], [ [ "# If you're running this in Colab, first install the adanet package:\n!pip install adanet", "_____no_output_____" ], [ "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport functools\n\nimport adanet\nimport tensorflow as tf\n\n# The random seed to use.\nRANDOM_SEED = 42", "_____no_output_____" ] ], [ [ "## Boston Housing dataset\n\nIn this example, we will solve a regression task known as the [Boston Housing dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the price of suburban houses in Boston, MA in the 1970s. There are 13 numerical features, the labels are in thousands of dollars, and there are only 506 examples.\n", "_____no_output_____" ], [ "## Download the data\nConveniently, the data is available via Keras:", "_____no_output_____" ] ], [ [ "(x_train, y_train), (x_test, y_test) = (\n tf.keras.datasets.boston_housing.load_data())\n\n# Preview the first example from the training data\nprint('Model inputs: %s \\n' % x_train[0])\nprint('Model output (house price): $%s ' % (y_train[0] * 1000))\n", "Model inputs: [ 1.23247 0. 8.14 0. 0.538 6.142 91.7\n 3.9769 4. 307. 21. 396.9 18.72 ] \n\nModel output (house price): $15200.0 \n" ] ], [ [ "## Supply the data in TensorFlow\n\nOur first task is to supply the data in TensorFlow. Using the\ntf.estimator.Estimator convention, we will define a function that returns an\ninput_fn which returns feature and label Tensors.\n\nWe will also use the tf.data.Dataset API to feed the data into our models.\n\nAlso, as a preprocessing step, we will apply `tf.log1p` to log-scale the\nfeatures and labels for improved numerical stability during training. 
To recover\nthe model's predictions in the correct scale, you can apply `tf.math.expm1` to the\nprediction.", "_____no_output_____" ] ], [ [ "FEATURES_KEY = \"x\"\n\n\ndef input_fn(partition, training, batch_size):\n \"\"\"Generate an input function for the Estimator.\"\"\"\n\n def _input_fn():\n\n if partition == \"train\":\n dataset = tf.data.Dataset.from_tensor_slices(({\n FEATURES_KEY: tf.log1p(x_train)\n }, tf.log1p(y_train)))\n else:\n dataset = tf.data.Dataset.from_tensor_slices(({\n FEATURES_KEY: tf.log1p(x_test)\n }, tf.log1p(y_test)))\n\n # We call repeat after shuffling, rather than before, to prevent separate\n # epochs from blending together.\n if training:\n dataset = dataset.shuffle(10 * batch_size, seed=RANDOM_SEED).repeat()\n\n dataset = dataset.batch(batch_size)\n iterator = dataset.make_one_shot_iterator()\n features, labels = iterator.get_next()\n return features, labels\n\n return _input_fn", "_____no_output_____" ] ], [ [ "## Define the subnetwork generator\n\nLet's define a subnetwork generator similar to the one in\n[[Cortes et al., ICML 2017](https://arxiv.org/abs/1607.01097)] and in\n`simple_dnn.py` which creates two candidate fully-connected neural networks at\neach iteration with the same width, but one with an additional hidden layer. To make\nour generator *adaptive*, each subnetwork will have at least the same number\nof hidden layers as the most recently added subnetwork to the\n`previous_ensemble`.\n\nWe define the complexity measure function $r$ to be $r(h) = \\sqrt{d(h)}$, where\n$d$ is the number of hidden layers in the neural network $h$, to approximate the\nRademacher bounds from\n[[Golowich et. al, 2017](https://arxiv.org/abs/1712.06541)]. So subnetworks\nwith more hidden layers, and therefore more capacity, will have more heavily\nregularized mixture weights.", "_____no_output_____" ] ], [ [ "_NUM_LAYERS_KEY = \"num_layers\"\n\n\nclass _SimpleDNNBuilder(adanet.subnetwork.Builder):\n \"\"\"Builds a DNN subnetwork for AdaNet.\"\"\"\n\n def __init__(self, optimizer, layer_size, num_layers, learn_mixture_weights,\n seed):\n \"\"\"Initializes a `_DNNBuilder`.\n\n Args:\n optimizer: An `Optimizer` instance for training both the subnetwork and\n the mixture weights.\n layer_size: The number of nodes to output at each hidden layer.\n num_layers: The number of hidden layers.\n learn_mixture_weights: Whether to solve a learning problem to find the\n best mixture weights, or use their default value according to the\n mixture weight type. 
When `False`, the subnetworks will return a no_op\n for the mixture weight train op.\n seed: A random seed.\n\n Returns:\n An instance of `_SimpleDNNBuilder`.\n \"\"\"\n\n self._optimizer = optimizer\n self._layer_size = layer_size\n self._num_layers = num_layers\n self._learn_mixture_weights = learn_mixture_weights\n self._seed = seed\n\n def build_subnetwork(self,\n features,\n logits_dimension,\n training,\n iteration_step,\n summary,\n previous_ensemble=None):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n\n input_layer = tf.to_float(features[FEATURES_KEY])\n kernel_initializer = tf.glorot_uniform_initializer(seed=self._seed)\n last_layer = input_layer\n for _ in range(self._num_layers):\n last_layer = tf.layers.dense(\n last_layer,\n units=self._layer_size,\n activation=tf.nn.relu,\n kernel_initializer=kernel_initializer)\n logits = tf.layers.dense(\n last_layer,\n units=logits_dimension,\n kernel_initializer=kernel_initializer)\n\n persisted_tensors = {_NUM_LAYERS_KEY: tf.constant(self._num_layers)}\n return adanet.Subnetwork(\n last_layer=last_layer,\n logits=logits,\n complexity=self._measure_complexity(),\n persisted_tensors=persisted_tensors)\n\n def _measure_complexity(self):\n \"\"\"Approximates Rademacher complexity as the square-root of the depth.\"\"\"\n return tf.sqrt(tf.to_float(self._num_layers))\n\n def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels,\n iteration_step, summary, previous_ensemble):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n return self._optimizer.minimize(loss=loss, var_list=var_list)\n\n def build_mixture_weights_train_op(self, loss, var_list, logits, labels,\n iteration_step, summary):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n\n if not self._learn_mixture_weights:\n return tf.no_op()\n return self._optimizer.minimize(loss=loss, var_list=var_list)\n\n @property\n def name(self):\n \"\"\"See `adanet.subnetwork.Builder`.\"\"\"\n\n if self._num_layers == 0:\n # A DNN with no hidden layers is a linear model.\n return \"linear\"\n return \"{}_layer_dnn\".format(self._num_layers)\n\n\nclass SimpleDNNGenerator(adanet.subnetwork.Generator):\n \"\"\"Generates two DNN subnetworks at each iteration.\n\n The first DNN has an identical shape to the most recently added subnetwork\n in `previous_ensemble`. The second has the same shape plus one more dense\n layer on top. This is similar to the adaptive network presented in Figure 2 of\n [Cortes et al. ICML 2017](https://arxiv.org/abs/1607.01097), without the\n connections to hidden layers of networks from previous iterations.\n \"\"\"\n\n def __init__(self,\n optimizer,\n layer_size=32,\n learn_mixture_weights=False,\n seed=None):\n \"\"\"Initializes a DNN `Generator`.\n\n Args:\n optimizer: An `Optimizer` instance for training both the subnetwork and\n the mixture weights.\n layer_size: Number of nodes in each hidden layer of the subnetwork\n candidates. Note that this parameter is ignored in a DNN with no hidden\n layers.\n learn_mixture_weights: Whether to solve a learning problem to find the\n best mixture weights, or use their default value according to the\n mixture weight type. 
When `False`, the subnetworks will return a no_op\n for the mixture weight train op.\n seed: A random seed.\n\n Returns:\n An instance of `Generator`.\n \"\"\"\n\n self._seed = seed\n self._dnn_builder_fn = functools.partial(\n _SimpleDNNBuilder,\n optimizer=optimizer,\n layer_size=layer_size,\n learn_mixture_weights=learn_mixture_weights)\n\n def generate_candidates(self, previous_ensemble, iteration_number,\n previous_ensemble_reports, all_reports):\n \"\"\"See `adanet.subnetwork.Generator`.\"\"\"\n\n num_layers = 0\n seed = self._seed\n if previous_ensemble:\n num_layers = tf.contrib.util.constant_value(\n previous_ensemble.weighted_subnetworks[\n -1].subnetwork.persisted_tensors[_NUM_LAYERS_KEY])\n if seed is not None:\n seed += iteration_number\n return [\n self._dnn_builder_fn(num_layers=num_layers, seed=seed),\n self._dnn_builder_fn(num_layers=num_layers + 1, seed=seed),\n ]", "_____no_output_____" ] ], [ [ "## Train and evaluate\n\nNext we create an `adanet.Estimator` using the `SimpleDNNGenerator` we just defined.\n\nIn this section we will show the effects of two hyperparameters: **learning mixture weights** and **complexity regularization**.\n\nOn the right-hand side you will be able to play with the hyperparameters of this model. Until you reach the end of this section, we ask that you not change them. \n\nAt first we will not learn the mixture weights, using their default initial value. Here they will be scalars initialized to $1/N$ where $N$ is the number of subnetworks in the ensemble, effectively creating a **uniform average ensemble**.", "_____no_output_____" ] ], [ [ "#@title AdaNet parameters\nLEARNING_RATE = 0.001 #@param {type:\"number\"}\nTRAIN_STEPS = 100000 #@param {type:\"integer\"}\nBATCH_SIZE = 32 #@param {type:\"integer\"}\n\nLEARN_MIXTURE_WEIGHTS = False #@param {type:\"boolean\"}\nADANET_LAMBDA = 0 #@param {type:\"number\"}\nBOOSTING_ITERATIONS = 5 #@param {type:\"integer\"}\n\n\ndef train_and_evaluate(learn_mixture_weights=LEARN_MIXTURE_WEIGHTS,\n adanet_lambda=ADANET_LAMBDA):\n \"\"\"Trains an `adanet.Estimator` to predict housing prices.\"\"\"\n\n estimator = adanet.Estimator(\n # Since we are predicting housing prices, we'll use a regression\n # head that optimizes for MSE.\n head=tf.contrib.estimator.regression_head(\n loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE),\n\n # Define the generator, which defines our search space of subnetworks\n # to train as candidates to add to the final AdaNet model.\n subnetwork_generator=SimpleDNNGenerator(\n optimizer=tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE),\n learn_mixture_weights=learn_mixture_weights,\n seed=RANDOM_SEED),\n\n # Lambda is the strength of complexity regularization. 
\n adanet_lambda=adanet_lambda,\n\n # The number of train steps per iteration.\n max_iteration_steps=TRAIN_STEPS // BOOSTING_ITERATIONS,\n\n # The evaluator will evaluate the model on the full training set to\n # compute the overall AdaNet loss (train loss + complexity\n # regularization) to select the best candidate to include in the\n # final AdaNet model.\n evaluator=adanet.Evaluator(\n input_fn=input_fn(\"train\", training=False, batch_size=BATCH_SIZE)),\n\n # Configuration for Estimators.\n config=tf.estimator.RunConfig(\n save_checkpoints_steps=50000,\n save_summary_steps=50000,\n tf_random_seed=RANDOM_SEED))\n\n # Train and evaluate using the tf.estimator tooling.\n train_spec = tf.estimator.TrainSpec(\n input_fn=input_fn(\"train\", training=True, batch_size=BATCH_SIZE),\n max_steps=TRAIN_STEPS)\n eval_spec = tf.estimator.EvalSpec(\n input_fn=input_fn(\"test\", training=False, batch_size=BATCH_SIZE),\n steps=None)\n return tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n\n\ndef ensemble_architecture(result):\n \"\"\"Extracts the ensemble architecture from evaluation results.\"\"\"\n\n architecture = result[\"architecture/adanet/ensembles\"]\n # The architecture is a serialized Summary proto for TensorBoard.\n summary_proto = tf.summary.Summary.FromString(architecture)\n return summary_proto.value[0].tensor.string_val[0]\n\n\nresults, _ = train_and_evaluate()\nprint(\"Loss:\", results[\"average_loss\"])\nprint(\"Architecture:\", ensemble_architecture(results))", "WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpBX73lD\nINFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_global_id_in_cluster': 0, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f27c3980390>, '_model_dir': '/tmp/tmpBX73lD', '_protocol': None, '_save_checkpoints_steps': 50000, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_session_config': allow_soft_placement: true\ngraph_options {\n rewrite_options {\n meta_optimizer_iterations: ONE\n }\n}\n, '_tf_random_seed': 42, '_save_summary_steps': 50000, '_device_fn': None, '_experimental_distribute': None, '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_evaluation_master': '', '_eval_distribute': None, '_train_distribute': None, '_master': ''}\nINFO:tensorflow:Not using Distribute Coordinator.\nINFO:tensorflow:Running training and evaluation locally (non-distributed).\nINFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. 
Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 50000 or save_checkpoints_secs None.\nINFO:tensorflow:Beginning training AdaNet iteration 0\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Building iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nWARNING:tensorflow:From <ipython-input-15-6099e5c14e79>:60: calling __new__ (from adanet.core.subnetwork.generator) with persisted_tensors is deprecated and will be removed in a future version.\nInstructions for updating:\n`persisted_tensors` is deprecated, please use `shared` instead.\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpBX73lD/model.ckpt.\nINFO:tensorflow:loss = 21.773132, step = 1\nINFO:tensorflow:global_step/sec: 218.829\nINFO:tensorflow:loss = 0.647101, step = 101 (0.458 sec)\nINFO:tensorflow:global_step/sec: 600.6\nINFO:tensorflow:loss = 0.58654284, step = 201 (0.166 sec)\nINFO:tensorflow:global_step/sec: 507.035\nINFO:tensorflow:loss = 0.07683488, step = 301 (0.197 sec)\nINFO:tensorflow:global_step/sec: 561.539\nINFO:tensorflow:loss = 0.08281773, step = 401 (0.178 sec)\nINFO:tensorflow:global_step/sec: 550.797\nINFO:tensorflow:loss = 0.08148783, step = 501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 532.507\nINFO:tensorflow:loss = 0.056522045, step = 601 (0.188 sec)\nINFO:tensorflow:global_step/sec: 546.83\nINFO:tensorflow:loss = 0.025881847, step = 701 (0.183 sec)\nINFO:tensorflow:global_step/sec: 533.994\nINFO:tensorflow:loss = 0.030095275, step = 801 (0.187 sec)\nINFO:tensorflow:global_step/sec: 580.347\nINFO:tensorflow:loss = 0.03755435, step = 901 (0.172 sec)\nINFO:tensorflow:global_step/sec: 524.546\nINFO:tensorflow:loss = 0.06690027, step = 1001 (0.191 sec)\nINFO:tensorflow:global_step/sec: 539.782\nINFO:tensorflow:loss = 0.036151223, step = 1101 (0.185 sec)\nINFO:tensorflow:global_step/sec: 554.345\nINFO:tensorflow:loss = 0.05018542, step = 1201 (0.180 sec)\nINFO:tensorflow:global_step/sec: 580.845\nINFO:tensorflow:loss = 0.09921485, step = 1301 (0.172 sec)\nINFO:tensorflow:global_step/sec: 562.908\nINFO:tensorflow:loss = 0.026417136, step = 1401 (0.178 sec)\nINFO:tensorflow:global_step/sec: 558.397\nINFO:tensorflow:loss = 0.020782702, step = 1501 (0.179 sec)\nINFO:tensorflow:global_step/sec: 545.393\nINFO:tensorflow:loss = 0.031655625, step = 1601 (0.183 sec)\nINFO:tensorflow:global_step/sec: 557.737\nINFO:tensorflow:loss = 0.041417748, step = 1701 (0.180 sec)\nINFO:tensorflow:global_step/sec: 572.938\nINFO:tensorflow:loss = 0.035113975, step = 1801 (0.174 sec)\nINFO:tensorflow:global_step/sec: 548.576\nINFO:tensorflow:loss = 0.044721745, step = 1901 (0.182 sec)\nINFO:tensorflow:global_step/sec: 556.57\nINFO:tensorflow:loss = 0.029930526, step = 2001 (0.180 sec)\nINFO:tensorflow:global_step/sec: 556.449\nINFO:tensorflow:loss = 0.04725881, step = 2101 (0.179 sec)\nINFO:tensorflow:global_step/sec: 563.86\nINFO:tensorflow:loss = 0.024880443, step = 2201 (0.178 sec)\nINFO:tensorflow:global_step/sec: 562.158\nINFO:tensorflow:loss = 0.024809971, step = 2301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 572.017\nINFO:tensorflow:loss = 0.022308439, step = 2401 (0.175 sec)\nINFO:tensorflow:global_step/sec: 549.26\nINFO:tensorflow:loss = 0.047627836, step = 2501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 
571.579\nINFO:tensorflow:loss = 0.031944193, step = 2601 (0.175 sec)\nINFO:tensorflow:global_step/sec: 582.499\nINFO:tensorflow:loss = 0.033454694, step = 2701 (0.171 sec)\nINFO:tensorflow:global_step/sec: 558.372\nINFO:tensorflow:loss = 0.0144810015, step = 2801 (0.179 sec)\nINFO:tensorflow:global_step/sec: 519.988\nINFO:tensorflow:loss = 0.031083355, step = 2901 (0.192 sec)\nINFO:tensorflow:global_step/sec: 560.406\nINFO:tensorflow:loss = 0.026340073, step = 3001 (0.179 sec)\nINFO:tensorflow:global_step/sec: 539.284\nINFO:tensorflow:loss = 0.026516797, step = 3101 (0.185 sec)\nINFO:tensorflow:global_step/sec: 501.253\nINFO:tensorflow:loss = 0.027183983, step = 3201 (0.200 sec)\nINFO:tensorflow:global_step/sec: 572.866\nINFO:tensorflow:loss = 0.03581643, step = 3301 (0.174 sec)\nINFO:tensorflow:global_step/sec: 551.779\nINFO:tensorflow:loss = 0.02551708, step = 3401 (0.181 sec)\nINFO:tensorflow:global_step/sec: 580.602\nINFO:tensorflow:loss = 0.04934936, step = 3501 (0.172 sec)\nINFO:tensorflow:global_step/sec: 554.77\nINFO:tensorflow:loss = 0.024015218, step = 3601 (0.180 sec)\nINFO:tensorflow:global_step/sec: 535.117\nINFO:tensorflow:loss = 0.01724116, step = 3701 (0.187 sec)\nINFO:tensorflow:global_step/sec: 601.895\nINFO:tensorflow:loss = 0.02012146, step = 3801 (0.166 sec)\nINFO:tensorflow:global_step/sec: 522.764\nINFO:tensorflow:loss = 0.021484248, step = 3901 (0.194 sec)\nINFO:tensorflow:global_step/sec: 583.07\nINFO:tensorflow:loss = 0.037488047, step = 4001 (0.169 sec)\nINFO:tensorflow:global_step/sec: 554.139\nINFO:tensorflow:loss = 0.0400841, step = 4101 (0.180 sec)\nINFO:tensorflow:global_step/sec: 577.945\nINFO:tensorflow:loss = 0.021273054, step = 4201 (0.173 sec)\nINFO:tensorflow:global_step/sec: 549.055\nINFO:tensorflow:loss = 0.033386715, step = 4301 (0.182 sec)\nINFO:tensorflow:global_step/sec: 546.14\nINFO:tensorflow:loss = 0.03614325, step = 4401 (0.183 sec)\nINFO:tensorflow:global_step/sec: 480.381\nINFO:tensorflow:loss = 0.039583392, step = 4501 (0.208 sec)\nINFO:tensorflow:global_step/sec: 573.411\nINFO:tensorflow:loss = 0.03670223, step = 4601 (0.175 sec)\nINFO:tensorflow:global_step/sec: 539.371\nINFO:tensorflow:loss = 0.05008475, step = 4701 (0.186 sec)\nINFO:tensorflow:global_step/sec: 540.658\nINFO:tensorflow:loss = 0.043987878, step = 4801 (0.185 sec)\nINFO:tensorflow:global_step/sec: 591.149\nINFO:tensorflow:loss = 0.023454443, step = 4901 (0.172 sec)\nINFO:tensorflow:global_step/sec: 544.102\nINFO:tensorflow:loss = 0.014781421, step = 5001 (0.181 sec)\nINFO:tensorflow:global_step/sec: 556.3\nINFO:tensorflow:loss = 0.020877514, step = 5101 (0.179 sec)\nINFO:tensorflow:global_step/sec: 575.229\nINFO:tensorflow:loss = 0.02810637, step = 5201 (0.174 sec)\nINFO:tensorflow:global_step/sec: 561.574\nINFO:tensorflow:loss = 0.044017207, step = 5301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 532.629\nINFO:tensorflow:loss = 0.015634824, step = 5401 (0.188 sec)\nINFO:tensorflow:global_step/sec: 531.386\nINFO:tensorflow:loss = 0.017649807, step = 5501 (0.188 sec)\nINFO:tensorflow:global_step/sec: 564.461\nINFO:tensorflow:loss = 0.026881127, step = 5601 (0.177 sec)\nINFO:tensorflow:global_step/sec: 554.017\nINFO:tensorflow:loss = 0.025159126, step = 5701 (0.180 sec)\nINFO:tensorflow:global_step/sec: 544.728\nINFO:tensorflow:loss = 0.03226287, step = 5801 (0.184 sec)\nINFO:tensorflow:global_step/sec: 587.082\nINFO:tensorflow:loss = 0.014366589, step = 5901 (0.170 sec)\nINFO:tensorflow:global_step/sec: 567.489\nINFO:tensorflow:loss = 0.02068457, step = 6001 (0.176 
sec)\nINFO:tensorflow:global_step/sec: 559.756\nINFO:tensorflow:loss = 0.03591814, step = 6101 (0.178 sec)\nINFO:tensorflow:global_step/sec: 555.843\nINFO:tensorflow:loss = 0.052825674, step = 6201 (0.180 sec)\nINFO:tensorflow:global_step/sec: 585.148\nINFO:tensorflow:loss = 0.02681419, step = 6301 (0.171 sec)\nINFO:tensorflow:global_step/sec: 573.957\nINFO:tensorflow:loss = 0.035378102, step = 6401 (0.174 sec)\nINFO:tensorflow:global_step/sec: 554.272\nINFO:tensorflow:loss = 0.041909285, step = 6501 (0.180 sec)\nINFO:tensorflow:global_step/sec: 570.554\nINFO:tensorflow:loss = 0.02528148, step = 6601 (0.175 sec)\nINFO:tensorflow:global_step/sec: 578.784\nINFO:tensorflow:loss = 0.020565271, step = 6701 (0.173 sec)\nINFO:tensorflow:global_step/sec: 561.808\nINFO:tensorflow:loss = 0.020750936, step = 6801 (0.178 sec)\nINFO:tensorflow:global_step/sec: 556.526\nINFO:tensorflow:loss = 0.016550815, step = 6901 (0.180 sec)\nINFO:tensorflow:global_step/sec: 529.358\nINFO:tensorflow:loss = 0.02629447, step = 7001 (0.189 sec)\nINFO:tensorflow:global_step/sec: 550.61\nINFO:tensorflow:loss = 0.025629781, step = 7101 (0.181 sec)\nINFO:tensorflow:global_step/sec: 553.235\nINFO:tensorflow:loss = 0.017876446, step = 7201 (0.181 sec)\nINFO:tensorflow:global_step/sec: 555.371\nINFO:tensorflow:loss = 0.04798486, step = 7301 (0.180 sec)\nINFO:tensorflow:global_step/sec: 542.376\nINFO:tensorflow:loss = 0.025404511, step = 7401 (0.185 sec)\nINFO:tensorflow:global_step/sec: 571.161\nINFO:tensorflow:loss = 0.02567752, step = 7501 (0.175 sec)\nINFO:tensorflow:global_step/sec: 560.686\nINFO:tensorflow:loss = 0.012580611, step = 7601 (0.178 sec)\nINFO:tensorflow:global_step/sec: 556.316\nINFO:tensorflow:loss = 0.022672791, step = 7701 (0.180 sec)\nINFO:tensorflow:global_step/sec: 566.454\nINFO:tensorflow:loss = 0.019256786, step = 7801 (0.176 sec)\nINFO:tensorflow:global_step/sec: 567.579\nINFO:tensorflow:loss = 0.017491028, step = 7901 (0.176 sec)\nINFO:tensorflow:global_step/sec: 581.216\nINFO:tensorflow:loss = 0.025461707, step = 8001 (0.172 sec)\nINFO:tensorflow:global_step/sec: 538.387\nINFO:tensorflow:loss = 0.02162715, step = 8101 (0.186 sec)\nINFO:tensorflow:global_step/sec: 561.848\nINFO:tensorflow:loss = 0.038915493, step = 8201 (0.178 sec)\nINFO:tensorflow:global_step/sec: 543.239\nINFO:tensorflow:loss = 0.02371198, step = 8301 (0.184 sec)\nINFO:tensorflow:global_step/sec: 560.416\nINFO:tensorflow:loss = 0.04633055, step = 8401 (0.178 sec)\nINFO:tensorflow:global_step/sec: 559.936\nINFO:tensorflow:loss = 0.020572973, step = 8501 (0.179 sec)\nINFO:tensorflow:global_step/sec: 583.761\nINFO:tensorflow:loss = 0.029029911, step = 8601 (0.172 sec)\nINFO:tensorflow:global_step/sec: 549.496\nINFO:tensorflow:loss = 0.022643939, step = 8701 (0.182 sec)\nINFO:tensorflow:global_step/sec: 575.486\nINFO:tensorflow:loss = 0.036244065, step = 8801 (0.174 sec)\nINFO:tensorflow:global_step/sec: 557.955\nINFO:tensorflow:loss = 0.054826558, step = 8901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 562.015\nINFO:tensorflow:loss = 0.042737592, step = 9001 (0.178 sec)\nINFO:tensorflow:global_step/sec: 562.949\nINFO:tensorflow:loss = 0.020140037, step = 9101 (0.178 sec)\nINFO:tensorflow:global_step/sec: 539.66\nINFO:tensorflow:loss = 0.035308473, step = 9201 (0.185 sec)\nINFO:tensorflow:global_step/sec: 555.454\nINFO:tensorflow:loss = 0.0140126925, step = 9301 (0.180 sec)\nINFO:tensorflow:global_step/sec: 567.627\nINFO:tensorflow:loss = 0.017350888, step = 9401 (0.176 sec)\nINFO:tensorflow:global_step/sec: 
560.102\nINFO:tensorflow:loss = 0.036257066, step = 9501 (0.179 sec)\nINFO:tensorflow:global_step/sec: 565.042\nINFO:tensorflow:loss = 0.03181795, step = 9601 (0.177 sec)\nINFO:tensorflow:global_step/sec: 559.67\nINFO:tensorflow:loss = 0.011875551, step = 9701 (0.179 sec)\nINFO:tensorflow:global_step/sec: 552.605\nINFO:tensorflow:loss = 0.021412933, step = 9801 (0.181 sec)\nINFO:tensorflow:global_step/sec: 566.807\nINFO:tensorflow:loss = 0.022191094, step = 9901 (0.176 sec)\nINFO:tensorflow:global_step/sec: 543.934\nINFO:tensorflow:loss = 0.029810011, step = 10001 (0.184 sec)\nINFO:tensorflow:global_step/sec: 576.352\nINFO:tensorflow:loss = 0.021032713, step = 10101 (0.173 sec)\nINFO:tensorflow:global_step/sec: 574.218\nINFO:tensorflow:loss = 0.043715518, step = 10201 (0.174 sec)\nINFO:tensorflow:global_step/sec: 563.383\nINFO:tensorflow:loss = 0.031914454, step = 10301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 564.284\nINFO:tensorflow:loss = 0.03337904, step = 10401 (0.177 sec)\nINFO:tensorflow:global_step/sec: 574.841\nINFO:tensorflow:loss = 0.038901534, step = 10501 (0.174 sec)\nINFO:tensorflow:global_step/sec: 553.689\nINFO:tensorflow:loss = 0.025083914, step = 10601 (0.180 sec)\nINFO:tensorflow:global_step/sec: 564.687\nINFO:tensorflow:loss = 0.012228267, step = 10701 (0.177 sec)\nINFO:tensorflow:global_step/sec: 569.743\nINFO:tensorflow:loss = 0.021361638, step = 10801 (0.176 sec)\nINFO:tensorflow:global_step/sec: 558.066\nINFO:tensorflow:loss = 0.026665423, step = 10901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 536.901\nINFO:tensorflow:loss = 0.009950843, step = 11001 (0.186 sec)\nINFO:tensorflow:global_step/sec: 530.648\nINFO:tensorflow:loss = 0.027443334, step = 11101 (0.188 sec)\nINFO:tensorflow:global_step/sec: 542.149\nINFO:tensorflow:loss = 0.013024814, step = 11201 (0.184 sec)\nINFO:tensorflow:global_step/sec: 569.444\nINFO:tensorflow:loss = 0.041840516, step = 11301 (0.176 sec)\nINFO:tensorflow:global_step/sec: 569.674\nINFO:tensorflow:loss = 0.017739808, step = 11401 (0.176 sec)\nINFO:tensorflow:global_step/sec: 568.689\nINFO:tensorflow:loss = 0.059714716, step = 11501 (0.176 sec)\nINFO:tensorflow:global_step/sec: 581.913\nINFO:tensorflow:loss = 0.014170061, step = 11601 (0.172 sec)\nINFO:tensorflow:global_step/sec: 587.987\nINFO:tensorflow:loss = 0.024093378, step = 11701 (0.170 sec)\nINFO:tensorflow:global_step/sec: 571.542\nINFO:tensorflow:loss = 0.013223974, step = 11801 (0.175 sec)\nINFO:tensorflow:global_step/sec: 590.298\nINFO:tensorflow:loss = 0.035453733, step = 11901 (0.169 sec)\nINFO:tensorflow:global_step/sec: 542.95\nINFO:tensorflow:loss = 0.024634361, step = 12001 (0.184 sec)\nINFO:tensorflow:global_step/sec: 559.4\nINFO:tensorflow:loss = 0.014634531, step = 12101 (0.179 sec)\nINFO:tensorflow:global_step/sec: 559.622\nINFO:tensorflow:loss = 0.010114573, step = 12201 (0.179 sec)\nINFO:tensorflow:global_step/sec: 590.016\nINFO:tensorflow:loss = 0.018301172, step = 12301 (0.170 sec)\nINFO:tensorflow:global_step/sec: 571.893\nINFO:tensorflow:loss = 0.016491232, step = 12401 (0.175 sec)\nINFO:tensorflow:global_step/sec: 560.164\nINFO:tensorflow:loss = 0.023242606, step = 12501 (0.179 sec)\nINFO:tensorflow:global_step/sec: 535.277\nINFO:tensorflow:loss = 0.021020273, step = 12601 (0.187 sec)\nINFO:tensorflow:global_step/sec: 574.835\nINFO:tensorflow:loss = 0.018893082, step = 12701 (0.174 sec)\nINFO:tensorflow:global_step/sec: 566.044\nINFO:tensorflow:loss = 0.02025078, step = 12801 (0.177 sec)\nINFO:tensorflow:global_step/sec: 556.514\nINFO:tensorflow:loss 
= 0.026029501, step = 12901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 567.765\nINFO:tensorflow:loss = 0.023721898, step = 13001 (0.176 sec)\nINFO:tensorflow:global_step/sec: 583.745\nINFO:tensorflow:loss = 0.02941418, step = 13101 (0.171 sec)\nINFO:tensorflow:global_step/sec: 548.372\nINFO:tensorflow:loss = 0.030588109, step = 13201 (0.182 sec)\nINFO:tensorflow:global_step/sec: 556.278\nINFO:tensorflow:loss = 0.0150418775, step = 13301 (0.180 sec)\nINFO:tensorflow:global_step/sec: 582.646\nINFO:tensorflow:loss = 0.023598528, step = 13401 (0.172 sec)\nINFO:tensorflow:global_step/sec: 574.514\nINFO:tensorflow:loss = 0.02438465, step = 13501 (0.174 sec)\nINFO:tensorflow:global_step/sec: 557.2\nINFO:tensorflow:loss = 0.016647844, step = 13601 (0.180 sec)\nINFO:tensorflow:global_step/sec: 554.394\nINFO:tensorflow:loss = 0.015543609, step = 13701 (0.180 sec)\nINFO:tensorflow:global_step/sec: 571.615\nINFO:tensorflow:loss = 0.035159364, step = 13801 (0.175 sec)\nINFO:tensorflow:global_step/sec: 579.838\nINFO:tensorflow:loss = 0.021462178, step = 13901 (0.172 sec)\nINFO:tensorflow:global_step/sec: 564.71\nINFO:tensorflow:loss = 0.015813632, step = 14001 (0.177 sec)\nINFO:tensorflow:global_step/sec: 556.598\nINFO:tensorflow:loss = 0.015878404, step = 14101 (0.180 sec)\nINFO:tensorflow:global_step/sec: 574.135\nINFO:tensorflow:loss = 0.016619552, step = 14201 (0.174 sec)\nINFO:tensorflow:global_step/sec: 564.946\nINFO:tensorflow:loss = 0.020005483, step = 14301 (0.176 sec)\nINFO:tensorflow:global_step/sec: 567.869\nINFO:tensorflow:loss = 0.012884559, step = 14401 (0.176 sec)\nINFO:tensorflow:global_step/sec: 551.247\nINFO:tensorflow:loss = 0.020677546, step = 14501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 541.398\nINFO:tensorflow:loss = 0.027778989, step = 14601 (0.185 sec)\nINFO:tensorflow:global_step/sec: 555.302\nINFO:tensorflow:loss = 0.02477769, step = 14701 (0.180 sec)\nINFO:tensorflow:global_step/sec: 534.648\nINFO:tensorflow:loss = 0.02744386, step = 14801 (0.187 sec)\nINFO:tensorflow:global_step/sec: 556.836\nINFO:tensorflow:loss = 0.043053888, step = 14901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 567.279\nINFO:tensorflow:loss = 0.026561439, step = 15001 (0.176 sec)\nINFO:tensorflow:global_step/sec: 542.594\nINFO:tensorflow:loss = 0.014701788, step = 15101 (0.184 sec)\nINFO:tensorflow:global_step/sec: 566.993\nINFO:tensorflow:loss = 0.0250272, step = 15201 (0.177 sec)\nINFO:tensorflow:global_step/sec: 573.075\nINFO:tensorflow:loss = 0.023796145, step = 15301 (0.174 sec)\nINFO:tensorflow:global_step/sec: 577.761\nINFO:tensorflow:loss = 0.010803474, step = 15401 (0.173 sec)\nINFO:tensorflow:global_step/sec: 572.436\nINFO:tensorflow:loss = 0.020810109, step = 15501 (0.175 sec)\nINFO:tensorflow:global_step/sec: 560.695\nINFO:tensorflow:loss = 0.024044476, step = 15601 (0.178 sec)\nINFO:tensorflow:global_step/sec: 576.111\nINFO:tensorflow:loss = 0.026181871, step = 15701 (0.174 sec)\nINFO:tensorflow:global_step/sec: 588.99\nINFO:tensorflow:loss = 0.0360455, step = 15801 (0.170 sec)\nINFO:tensorflow:global_step/sec: 572.702\nINFO:tensorflow:loss = 0.030199537, step = 15901 (0.175 sec)\nINFO:tensorflow:global_step/sec: 558.082\nINFO:tensorflow:loss = 0.025341598, step = 16001 (0.179 sec)\nINFO:tensorflow:global_step/sec: 579.421\nINFO:tensorflow:loss = 0.055967607, step = 16101 (0.172 sec)\nINFO:tensorflow:global_step/sec: 567.376\nINFO:tensorflow:loss = 0.016494218, step = 16201 (0.176 sec)\nINFO:tensorflow:global_step/sec: 566.297\nINFO:tensorflow:loss = 0.031872004, step = 16301 
(0.177 sec)\nINFO:tensorflow:global_step/sec: 569.518\nINFO:tensorflow:loss = 0.050789293, step = 16401 (0.175 sec)\nINFO:tensorflow:global_step/sec: 557.965\nINFO:tensorflow:loss = 0.014910404, step = 16501 (0.179 sec)\nINFO:tensorflow:global_step/sec: 574.907\nINFO:tensorflow:loss = 0.020343851, step = 16601 (0.174 sec)\nINFO:tensorflow:global_step/sec: 576.542\nINFO:tensorflow:loss = 0.0264525, step = 16701 (0.173 sec)\nINFO:tensorflow:global_step/sec: 579.71\nINFO:tensorflow:loss = 0.02900825, step = 16801 (0.173 sec)\nINFO:tensorflow:global_step/sec: 586.449\nINFO:tensorflow:loss = 0.01755685, step = 16901 (0.171 sec)\nINFO:tensorflow:global_step/sec: 568.602\nINFO:tensorflow:loss = 0.026210094, step = 17001 (0.176 sec)\nINFO:tensorflow:global_step/sec: 554.782\nINFO:tensorflow:loss = 0.023637617, step = 17101 (0.180 sec)\nINFO:tensorflow:global_step/sec: 506.742\nINFO:tensorflow:loss = 0.0139544, step = 17201 (0.197 sec)\nINFO:tensorflow:global_step/sec: 575.712\nINFO:tensorflow:loss = 0.022931451, step = 17301 (0.174 sec)\nINFO:tensorflow:global_step/sec: 554.724\nINFO:tensorflow:loss = 0.014839102, step = 17401 (0.180 sec)\nINFO:tensorflow:global_step/sec: 583.938\nINFO:tensorflow:loss = 0.019862954, step = 17501 (0.171 sec)\nINFO:tensorflow:global_step/sec: 565.656\nINFO:tensorflow:loss = 0.024700183, step = 17601 (0.177 sec)\nINFO:tensorflow:global_step/sec: 544.49\nINFO:tensorflow:loss = 0.016027404, step = 17701 (0.184 sec)\nINFO:tensorflow:global_step/sec: 557.125\nINFO:tensorflow:loss = 0.016922206, step = 17801 (0.180 sec)\nINFO:tensorflow:global_step/sec: 546.401\nINFO:tensorflow:loss = 0.015673462, step = 17901 (0.183 sec)\nINFO:tensorflow:global_step/sec: 569.498\nINFO:tensorflow:loss = 0.02691972, step = 18001 (0.175 sec)\nINFO:tensorflow:global_step/sec: 569.372\nINFO:tensorflow:loss = 0.02881617, step = 18101 (0.176 sec)\nINFO:tensorflow:global_step/sec: 552.538\nINFO:tensorflow:loss = 0.021425078, step = 18201 (0.181 sec)\nINFO:tensorflow:global_step/sec: 583.199\nINFO:tensorflow:loss = 0.028980933, step = 18301 (0.172 sec)\nINFO:tensorflow:global_step/sec: 572.411\nINFO:tensorflow:loss = 0.03021842, step = 18401 (0.175 sec)\nINFO:tensorflow:global_step/sec: 560.004\nINFO:tensorflow:loss = 0.017465986, step = 18501 (0.178 sec)\nINFO:tensorflow:global_step/sec: 584.262\nINFO:tensorflow:loss = 0.018047271, step = 18601 (0.171 sec)\nINFO:tensorflow:global_step/sec: 559.241\nINFO:tensorflow:loss = 0.04243151, step = 18701 (0.179 sec)\nINFO:tensorflow:global_step/sec: 567.762\nINFO:tensorflow:loss = 0.009879965, step = 18801 (0.177 sec)\nINFO:tensorflow:global_step/sec: 559.469\nINFO:tensorflow:loss = 0.026315855, step = 18901 (0.178 sec)\nINFO:tensorflow:global_step/sec: 568.677\nINFO:tensorflow:loss = 0.014082297, step = 19001 (0.175 sec)\nINFO:tensorflow:global_step/sec: 582.489\nINFO:tensorflow:loss = 0.02952011, step = 19101 (0.172 sec)\nINFO:tensorflow:global_step/sec: 586.122\nINFO:tensorflow:loss = 0.024289865, step = 19201 (0.170 sec)\nINFO:tensorflow:global_step/sec: 578.61\nINFO:tensorflow:loss = 0.019341573, step = 19301 (0.173 sec)\nINFO:tensorflow:global_step/sec: 551.298\nINFO:tensorflow:loss = 0.015597891, step = 19401 (0.181 sec)\nINFO:tensorflow:global_step/sec: 574.554\nINFO:tensorflow:loss = 0.013870528, step = 19501 (0.174 sec)\nINFO:tensorflow:global_step/sec: 581.304\nINFO:tensorflow:loss = 0.011807093, step = 19601 (0.172 sec)\nINFO:tensorflow:global_step/sec: 572.617\nINFO:tensorflow:loss = 0.0114907455, step = 19701 (0.175 
sec)\nINFO:tensorflow:global_step/sec: 573.516\nINFO:tensorflow:loss = 0.017667146, step = 19801 (0.174 sec)\nINFO:tensorflow:global_step/sec: 558.638\nINFO:tensorflow:loss = 0.04704179, step = 19901 (0.179 sec)\nINFO:tensorflow:Saving checkpoints for 20000 into /tmp/tmpBX73lD/model.ckpt.\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Building iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:23:18\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/model.ckpt-20000\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving candidate 't0_linear' dict for global step 20000: architecture/adanet/ensembles = \nW\n9adanet/iteration_0/ensemble_t0_linear/architecture/adanetB\u0010\b\u0007\u0012\u0000B\n| linear |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.049421377, average_loss/adanet/subnetwork = 0.049421377, average_loss/adanet/uniform_average_ensemble = 0.049421377, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.062442042, loss/adanet/subnetwork = 0.062442042, loss/adanet/uniform_average_ensemble = 0.062442042, prediction/mean/adanet/adanet_weighted_ensemble = 3.105895, prediction/mean/adanet/subnetwork = 3.105895, prediction/mean/adanet/uniform_average_ensemble = 3.105895\nINFO:tensorflow:Saving candidate 't0_1_layer_dnn' dict for global step 20000: architecture/adanet/ensembles = \na\n>adanet/iteration_0/ensemble_t0_1_layer_dnn/architecture/adanetB\u0015\b\u0007\u0012\u0000B\u000f| 1_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.03993654, average_loss/adanet/subnetwork = 0.03993654, average_loss/adanet/uniform_average_ensemble = 0.03993654, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.053605493, loss/adanet/subnetwork = 0.053605493, loss/adanet/uniform_average_ensemble = 0.053605493, prediction/mean/adanet/adanet_weighted_ensemble = 3.1580222, prediction/mean/adanet/subnetwork = 3.1580222, prediction/mean/adanet/uniform_average_ensemble = 3.1580222\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:23:19\nINFO:tensorflow:Saving dict for global step 20000: average_loss = 0.03993654, average_loss/adanet/adanet_weighted_ensemble = 0.03993654, average_loss/adanet/subnetwork = 0.03993654, average_loss/adanet/uniform_average_ensemble = 0.03993654, global_step = 20000, label/mean = 3.1049454, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss = 0.053605493, loss/adanet/adanet_weighted_ensemble = 0.053605493, loss/adanet/subnetwork = 0.053605493, loss/adanet/uniform_average_ensemble = 0.053605493, prediction/mean = 3.1580222, prediction/mean/adanet/adanet_weighted_ensemble = 3.1580222, prediction/mean/adanet/subnetwork = 3.1580222, prediction/mean/adanet/uniform_average_ensemble = 3.1580222\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 20000: /tmp/tmpBX73lD/model.ckpt-20000\nINFO:tensorflow:Loss for final step: 0.034048468.\nINFO:tensorflow:Finished training 
Adanet iteration 0\nINFO:tensorflow:Beginning bookkeeping phase for iteration 0\nINFO:tensorflow:Building iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Starting ensemble evaluation for iteration 0\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/model.ckpt-20000\nWARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/adanet/core/estimator.py:717: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\nInstructions for updating:\nTo construct input pipelines, use the `tf.data` module.\nWARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. You can safely remove the call to this deprecated function.\nINFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t0_linear = 0.035089, adanet_loss/t0_1_layer_dnn = 0.020803\nINFO:tensorflow:Finished ensemble evaluation for iteration 0\nINFO:tensorflow:'t0_1_layer_dnn' at index 1 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-0.txt: ['0:1_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Warm-starting from: (u'/tmp/tmpBX73lD/model.ckpt-20000',)\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: global_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_1_layer_dnn/adanet/iteration_0/candidate_t0_1_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_1_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_1_layer_dnn/adanet/iteration_0/candidate_t0_1_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense_1/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Building iteration 1\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Overwriting checkpoint with new graph for iteration 1 to /tmp/tmpBX73lD/model.ckpt-20000\nWARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. 
You can safely remove the call to this deprecated function.\nINFO:tensorflow:Finished bookkeeping phase for iteration 0\nINFO:tensorflow:Beginning training AdaNet iteration 1\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-0.txt: ['0:1_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building iteration 1\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/increment.ckpt-1\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving checkpoints for 20000 into /tmp/tmpBX73lD/model.ckpt.\nINFO:tensorflow:loss = 0.02689482, step = 20001\nINFO:tensorflow:global_step/sec: 177.265\nINFO:tensorflow:loss = 0.026641333, step = 20101 (0.565 sec)\nINFO:tensorflow:global_step/sec: 571.393\nINFO:tensorflow:loss = 0.020572826, step = 20201 (0.175 sec)\nINFO:tensorflow:global_step/sec: 534.231\nINFO:tensorflow:loss = 0.018674508, step = 20301 (0.187 sec)\nINFO:tensorflow:global_step/sec: 552.2\nINFO:tensorflow:loss = 0.027517587, step = 20401 (0.181 sec)\nINFO:tensorflow:global_step/sec: 510.477\nINFO:tensorflow:loss = 0.01638335, step = 20501 (0.196 sec)\nINFO:tensorflow:global_step/sec: 542.85\nINFO:tensorflow:loss = 0.018517539, step = 20601 (0.184 sec)\nINFO:tensorflow:global_step/sec: 547.313\nINFO:tensorflow:loss = 0.011325995, step = 20701 (0.183 sec)\nINFO:tensorflow:global_step/sec: 547.207\nINFO:tensorflow:loss = 0.019037172, step = 20801 (0.182 sec)\nINFO:tensorflow:global_step/sec: 548.318\nINFO:tensorflow:loss = 0.015460573, step = 20901 (0.183 sec)\nINFO:tensorflow:global_step/sec: 548.185\nINFO:tensorflow:loss = 0.027241766, step = 21001 (0.182 sec)\nINFO:tensorflow:global_step/sec: 547.049\nINFO:tensorflow:loss = 0.02371575, step = 21101 (0.183 sec)\nINFO:tensorflow:global_step/sec: 517.958\nINFO:tensorflow:loss = 0.024092598, step = 21201 (0.193 sec)\nINFO:tensorflow:global_step/sec: 583.39\nINFO:tensorflow:loss = 0.028579984, step = 21301 (0.171 sec)\nINFO:tensorflow:global_step/sec: 524.161\nINFO:tensorflow:loss = 0.017033618, step = 21401 (0.191 sec)\nINFO:tensorflow:global_step/sec: 559.967\nINFO:tensorflow:loss = 0.01003223, step = 21501 (0.179 sec)\nINFO:tensorflow:global_step/sec: 541.765\nINFO:tensorflow:loss = 0.01647801, step = 21601 (0.185 sec)\nINFO:tensorflow:global_step/sec: 525.798\nINFO:tensorflow:loss = 0.022877093, step = 21701 (0.190 sec)\nINFO:tensorflow:global_step/sec: 546.634\nINFO:tensorflow:loss = 0.018278336, step = 21801 (0.183 sec)\nINFO:tensorflow:global_step/sec: 547.336\nINFO:tensorflow:loss = 0.023737881, step = 21901 (0.183 sec)\nINFO:tensorflow:global_step/sec: 529.397\nINFO:tensorflow:loss = 0.011803246, step = 22001 (0.189 sec)\nINFO:tensorflow:global_step/sec: 532.067\nINFO:tensorflow:loss = 0.03296115, step = 22101 (0.188 sec)\nINFO:tensorflow:global_step/sec: 548.679\nINFO:tensorflow:loss = 0.019257832, step = 22201 (0.182 sec)\nINFO:tensorflow:global_step/sec: 514.462\nINFO:tensorflow:loss = 0.0164644, step = 22301 (0.194 sec)\nINFO:tensorflow:global_step/sec: 537.744\nINFO:tensorflow:loss = 0.01193467, step = 22401 (0.186 sec)\nINFO:tensorflow:global_step/sec: 550.294\nINFO:tensorflow:loss = 0.029213233, step = 22501 (0.182 
sec)\nINFO:tensorflow:global_step/sec: 552.972\nINFO:tensorflow:loss = 0.017618146, step = 22601 (0.181 sec)\nINFO:tensorflow:global_step/sec: 567.424\nINFO:tensorflow:loss = 0.024926536, step = 22701 (0.177 sec)\nINFO:tensorflow:global_step/sec: 549.031\nINFO:tensorflow:loss = 0.016292248, step = 22801 (0.182 sec)\nINFO:tensorflow:global_step/sec: 527.17\nINFO:tensorflow:loss = 0.017500443, step = 22901 (0.190 sec)\nINFO:tensorflow:global_step/sec: 554.779\nINFO:tensorflow:loss = 0.01822316, step = 23001 (0.180 sec)\nINFO:tensorflow:global_step/sec: 553.502\nINFO:tensorflow:loss = 0.008426819, step = 23101 (0.181 sec)\nINFO:tensorflow:global_step/sec: 544.416\nINFO:tensorflow:loss = 0.025954742, step = 23201 (0.184 sec)\nINFO:tensorflow:global_step/sec: 543.842\nINFO:tensorflow:loss = 0.027257022, step = 23301 (0.184 sec)\nINFO:tensorflow:global_step/sec: 525.528\nINFO:tensorflow:loss = 0.018963318, step = 23401 (0.190 sec)\nINFO:tensorflow:global_step/sec: 535.989\nINFO:tensorflow:loss = 0.031914793, step = 23501 (0.186 sec)\nINFO:tensorflow:global_step/sec: 542.352\nINFO:tensorflow:loss = 0.012208786, step = 23601 (0.185 sec)\nINFO:tensorflow:global_step/sec: 541.674\nINFO:tensorflow:loss = 0.011193404, step = 23701 (0.184 sec)\nINFO:tensorflow:global_step/sec: 551.563\nINFO:tensorflow:loss = 0.015754636, step = 23801 (0.181 sec)\nINFO:tensorflow:global_step/sec: 553.535\nINFO:tensorflow:loss = 0.013732923, step = 23901 (0.180 sec)\nINFO:tensorflow:global_step/sec: 555.42\nINFO:tensorflow:loss = 0.02079191, step = 24001 (0.183 sec)\nINFO:tensorflow:global_step/sec: 534.427\nINFO:tensorflow:loss = 0.023126412, step = 24101 (0.184 sec)\nINFO:tensorflow:global_step/sec: 544.515\nINFO:tensorflow:loss = 0.013298021, step = 24201 (0.183 sec)\nINFO:tensorflow:global_step/sec: 530.51\nINFO:tensorflow:loss = 0.01107317, step = 24301 (0.189 sec)\nINFO:tensorflow:global_step/sec: 518.108\nINFO:tensorflow:loss = 0.010421526, step = 24401 (0.193 sec)\nINFO:tensorflow:global_step/sec: 556.254\nINFO:tensorflow:loss = 0.017193377, step = 24501 (0.180 sec)\nINFO:tensorflow:global_step/sec: 531.904\nINFO:tensorflow:loss = 0.021527879, step = 24601 (0.188 sec)\nINFO:tensorflow:global_step/sec: 534.025\nINFO:tensorflow:loss = 0.02800101, step = 24701 (0.187 sec)\nINFO:tensorflow:global_step/sec: 546.18\nINFO:tensorflow:loss = 0.016313508, step = 24801 (0.183 sec)\nINFO:tensorflow:global_step/sec: 554.939\nINFO:tensorflow:loss = 0.016563449, step = 24901 (0.180 sec)\nINFO:tensorflow:global_step/sec: 594.205\nINFO:tensorflow:loss = 0.010573461, step = 25001 (0.168 sec)\nINFO:tensorflow:global_step/sec: 538.834\nINFO:tensorflow:loss = 0.015758982, step = 25101 (0.186 sec)\nINFO:tensorflow:global_step/sec: 566.203\nINFO:tensorflow:loss = 0.013544958, step = 25201 (0.176 sec)\nINFO:tensorflow:global_step/sec: 575.632\nINFO:tensorflow:loss = 0.034690935, step = 25301 (0.174 sec)\nINFO:tensorflow:global_step/sec: 570.002\nINFO:tensorflow:loss = 0.010672317, step = 25401 (0.175 sec)\nINFO:tensorflow:global_step/sec: 594.841\nINFO:tensorflow:loss = 0.0081842495, step = 25501 (0.168 sec)\nINFO:tensorflow:global_step/sec: 505.043\nINFO:tensorflow:loss = 0.028937507, step = 25601 (0.198 sec)\nINFO:tensorflow:global_step/sec: 547.961\nINFO:tensorflow:loss = 0.015525733, step = 25701 (0.182 sec)\nINFO:tensorflow:global_step/sec: 550.278\nINFO:tensorflow:loss = 0.0148458965, step = 25801 (0.181 sec)\nINFO:tensorflow:global_step/sec: 548.3\nINFO:tensorflow:loss = 0.010360732, step = 25901 (0.183 
sec)\nINFO:tensorflow:global_step/sec: 539.272\nINFO:tensorflow:loss = 0.01247085, step = 26001 (0.185 sec)\nINFO:tensorflow:global_step/sec: 553.202\nINFO:tensorflow:loss = 0.024499211, step = 26101 (0.181 sec)\nINFO:tensorflow:global_step/sec: 533.711\nINFO:tensorflow:loss = 0.020909723, step = 26201 (0.188 sec)\nINFO:tensorflow:global_step/sec: 544.746\nINFO:tensorflow:loss = 0.01373519, step = 26301 (0.184 sec)\nINFO:tensorflow:global_step/sec: 537.262\nINFO:tensorflow:loss = 0.020242168, step = 26401 (0.186 sec)\nINFO:tensorflow:global_step/sec: 548.095\nINFO:tensorflow:loss = 0.029708786, step = 26501 (0.183 sec)\nINFO:tensorflow:global_step/sec: 557.968\nINFO:tensorflow:loss = 0.023566445, step = 26601 (0.179 sec)\nINFO:tensorflow:global_step/sec: 576.399\nINFO:tensorflow:loss = 0.017634012, step = 26701 (0.174 sec)\nINFO:tensorflow:global_step/sec: 550.373\nINFO:tensorflow:loss = 0.011539813, step = 26801 (0.182 sec)\nINFO:tensorflow:global_step/sec: 550.209\nINFO:tensorflow:loss = 0.008406332, step = 26901 (0.182 sec)\nINFO:tensorflow:global_step/sec: 538.967\nINFO:tensorflow:loss = 0.011983597, step = 27001 (0.186 sec)\nINFO:tensorflow:global_step/sec: 548.45\nINFO:tensorflow:loss = 0.017931957, step = 27101 (0.182 sec)\nINFO:tensorflow:global_step/sec: 545.137\nINFO:tensorflow:loss = 0.011202335, step = 27201 (0.184 sec)\nINFO:tensorflow:global_step/sec: 539.162\nINFO:tensorflow:loss = 0.031743504, step = 27301 (0.185 sec)\nINFO:tensorflow:global_step/sec: 528.921\nINFO:tensorflow:loss = 0.014932214, step = 27401 (0.189 sec)\nINFO:tensorflow:global_step/sec: 532.839\nINFO:tensorflow:loss = 0.010680702, step = 27501 (0.188 sec)\nINFO:tensorflow:global_step/sec: 541.841\nINFO:tensorflow:loss = 0.009482684, step = 27601 (0.185 sec)\nINFO:tensorflow:global_step/sec: 551.213\nINFO:tensorflow:loss = 0.017488897, step = 27701 (0.181 sec)\nINFO:tensorflow:global_step/sec: 546.592\nINFO:tensorflow:loss = 0.015694784, step = 27801 (0.183 sec)\nINFO:tensorflow:global_step/sec: 541.6\nINFO:tensorflow:loss = 0.009877086, step = 27901 (0.185 sec)\nINFO:tensorflow:global_step/sec: 562.945\nINFO:tensorflow:loss = 0.017907567, step = 28001 (0.178 sec)\nINFO:tensorflow:global_step/sec: 532.717\nINFO:tensorflow:loss = 0.021617237, step = 28101 (0.188 sec)\nINFO:tensorflow:global_step/sec: 554.881\nINFO:tensorflow:loss = 0.037934303, step = 28201 (0.180 sec)\nINFO:tensorflow:global_step/sec: 550.697\nINFO:tensorflow:loss = 0.017070279, step = 28301 (0.182 sec)\nINFO:tensorflow:global_step/sec: 559.767\nINFO:tensorflow:loss = 0.016645355, step = 28401 (0.179 sec)\nINFO:tensorflow:global_step/sec: 554.729\nINFO:tensorflow:loss = 0.011926045, step = 28501 (0.180 sec)\nINFO:tensorflow:global_step/sec: 557.311\nINFO:tensorflow:loss = 0.0185716, step = 28601 (0.180 sec)\nINFO:tensorflow:global_step/sec: 486.934\nINFO:tensorflow:loss = 0.012995226, step = 28701 (0.210 sec)\nINFO:tensorflow:global_step/sec: 504.378\nINFO:tensorflow:loss = 0.024929004, step = 28801 (0.199 sec)\nINFO:tensorflow:global_step/sec: 530.712\nINFO:tensorflow:loss = 0.038651876, step = 28901 (0.183 sec)\nINFO:tensorflow:global_step/sec: 552.227\nINFO:tensorflow:loss = 0.02828104, step = 29001 (0.181 sec)\nINFO:tensorflow:global_step/sec: 547.729\nINFO:tensorflow:loss = 0.014936969, step = 29101 (0.183 sec)\nINFO:tensorflow:global_step/sec: 553.4\nINFO:tensorflow:loss = 0.022527486, step = 29201 (0.181 sec)\nINFO:tensorflow:global_step/sec: 549.838\nINFO:tensorflow:loss = 0.0075648124, step = 29301 (0.182 
sec)\nINFO:tensorflow:global_step/sec: 547.124\nINFO:tensorflow:loss = 0.014851436, step = 29401 (0.183 sec)\nINFO:tensorflow:global_step/sec: 512.29\nINFO:tensorflow:loss = 0.021254335, step = 29501 (0.195 sec)\nINFO:tensorflow:global_step/sec: 543.224\nINFO:tensorflow:loss = 0.02393078, step = 29601 (0.184 sec)\nINFO:tensorflow:global_step/sec: 549.179\nINFO:tensorflow:loss = 0.008230279, step = 29701 (0.182 sec)\nINFO:tensorflow:global_step/sec: 561.155\nINFO:tensorflow:loss = 0.011171926, step = 29801 (0.178 sec)\nINFO:tensorflow:global_step/sec: 520.966\nINFO:tensorflow:loss = 0.021518653, step = 29901 (0.192 sec)\nINFO:tensorflow:global_step/sec: 539.566\nINFO:tensorflow:loss = 0.020230716, step = 30001 (0.185 sec)\nINFO:tensorflow:global_step/sec: 537.845\nINFO:tensorflow:loss = 0.009607708, step = 30101 (0.186 sec)\nINFO:tensorflow:global_step/sec: 540.491\nINFO:tensorflow:loss = 0.024883462, step = 30201 (0.185 sec)\nINFO:tensorflow:global_step/sec: 450.928\nINFO:tensorflow:loss = 0.02555336, step = 30301 (0.222 sec)\nINFO:tensorflow:global_step/sec: 534.474\nINFO:tensorflow:loss = 0.011907431, step = 30401 (0.187 sec)\nINFO:tensorflow:global_step/sec: 533.957\nINFO:tensorflow:loss = 0.01029122, step = 30501 (0.187 sec)\nINFO:tensorflow:global_step/sec: 523.303\nINFO:tensorflow:loss = 0.013868979, step = 30601 (0.191 sec)\nINFO:tensorflow:global_step/sec: 516.182\nINFO:tensorflow:loss = 0.007916614, step = 30701 (0.194 sec)\nINFO:tensorflow:global_step/sec: 568.725\nINFO:tensorflow:loss = 0.015428416, step = 30801 (0.176 sec)\nINFO:tensorflow:global_step/sec: 541.815\nINFO:tensorflow:loss = 0.018393354, step = 30901 (0.184 sec)\nINFO:tensorflow:global_step/sec: 549.511\nINFO:tensorflow:loss = 0.0073081004, step = 31001 (0.188 sec)\nINFO:tensorflow:global_step/sec: 504.645\nINFO:tensorflow:loss = 0.014896774, step = 31101 (0.193 sec)\nINFO:tensorflow:global_step/sec: 531.794\nINFO:tensorflow:loss = 0.012042155, step = 31201 (0.188 sec)\nINFO:tensorflow:global_step/sec: 542.691\nINFO:tensorflow:loss = 0.022437997, step = 31301 (0.184 sec)\nINFO:tensorflow:global_step/sec: 540.585\nINFO:tensorflow:loss = 0.006232311, step = 31401 (0.185 sec)\nINFO:tensorflow:global_step/sec: 555.404\nINFO:tensorflow:loss = 0.030879425, step = 31501 (0.180 sec)\nINFO:tensorflow:global_step/sec: 553.235\nINFO:tensorflow:loss = 0.011982078, step = 31601 (0.181 sec)\nINFO:tensorflow:global_step/sec: 540.944\nINFO:tensorflow:loss = 0.015685122, step = 31701 (0.185 sec)\nINFO:tensorflow:global_step/sec: 536.942\nINFO:tensorflow:loss = 0.009588946, step = 31801 (0.186 sec)\nINFO:tensorflow:global_step/sec: 537.618\nINFO:tensorflow:loss = 0.01949366, step = 31901 (0.186 sec)\nINFO:tensorflow:global_step/sec: 529.29\nINFO:tensorflow:loss = 0.016845737, step = 32001 (0.189 sec)\nINFO:tensorflow:global_step/sec: 545.387\nINFO:tensorflow:loss = 0.013241226, step = 32101 (0.183 sec)\nINFO:tensorflow:global_step/sec: 555.386\nINFO:tensorflow:loss = 0.007763939, step = 32201 (0.180 sec)\nINFO:tensorflow:global_step/sec: 544.226\nINFO:tensorflow:loss = 0.012886829, step = 32301 (0.184 sec)\nINFO:tensorflow:global_step/sec: 558.046\nINFO:tensorflow:loss = 0.008924153, step = 32401 (0.180 sec)\nINFO:tensorflow:global_step/sec: 545.449\nINFO:tensorflow:loss = 0.013111419, step = 32501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 549.441\nINFO:tensorflow:loss = 0.013761312, step = 32601 (0.182 sec)\nINFO:tensorflow:global_step/sec: 556.306\nINFO:tensorflow:loss = 0.011531368, step = 32701 (0.180 
sec)\nINFO:tensorflow:global_step/sec: 532.244\nINFO:tensorflow:loss = 0.018508688, step = 32801 (0.188 sec)\nINFO:tensorflow:global_step/sec: 535.174\nINFO:tensorflow:loss = 0.012416309, step = 32901 (0.187 sec)\nINFO:tensorflow:global_step/sec: 536.619\nINFO:tensorflow:loss = 0.021730969, step = 33001 (0.186 sec)\nINFO:tensorflow:global_step/sec: 546.01\nINFO:tensorflow:loss = 0.02161136, step = 33101 (0.188 sec)\nINFO:tensorflow:global_step/sec: 524.701\nINFO:tensorflow:loss = 0.007678924, step = 33201 (0.186 sec)\nINFO:tensorflow:global_step/sec: 567.924\nINFO:tensorflow:loss = 0.010848792, step = 33301 (0.176 sec)\nINFO:tensorflow:global_step/sec: 556.693\nINFO:tensorflow:loss = 0.015239689, step = 33401 (0.180 sec)\nINFO:tensorflow:global_step/sec: 549.466\nINFO:tensorflow:loss = 0.018869447, step = 33501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 538.933\nINFO:tensorflow:loss = 0.014404563, step = 33601 (0.186 sec)\nINFO:tensorflow:global_step/sec: 570.873\nINFO:tensorflow:loss = 0.007743339, step = 33701 (0.175 sec)\nINFO:tensorflow:global_step/sec: 523.04\nINFO:tensorflow:loss = 0.021582767, step = 33801 (0.191 sec)\nINFO:tensorflow:global_step/sec: 533.758\nINFO:tensorflow:loss = 0.009738045, step = 33901 (0.187 sec)\nINFO:tensorflow:global_step/sec: 549.783\nINFO:tensorflow:loss = 0.010697973, step = 34001 (0.182 sec)\nINFO:tensorflow:global_step/sec: 549.312\nINFO:tensorflow:loss = 0.014111896, step = 34101 (0.182 sec)\nINFO:tensorflow:global_step/sec: 506.714\nINFO:tensorflow:loss = 0.01161824, step = 34201 (0.198 sec)\nINFO:tensorflow:global_step/sec: 528.597\nINFO:tensorflow:loss = 0.013626029, step = 34301 (0.189 sec)\nINFO:tensorflow:global_step/sec: 549.305\nINFO:tensorflow:loss = 0.014306208, step = 34401 (0.186 sec)\nINFO:tensorflow:global_step/sec: 539.406\nINFO:tensorflow:loss = 0.010782365, step = 34501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 564.149\nINFO:tensorflow:loss = 0.014526726, step = 34601 (0.177 sec)\nINFO:tensorflow:global_step/sec: 543.195\nINFO:tensorflow:loss = 0.016402217, step = 34701 (0.184 sec)\nINFO:tensorflow:global_step/sec: 554.256\nINFO:tensorflow:loss = 0.019311573, step = 34801 (0.181 sec)\nINFO:tensorflow:global_step/sec: 547.708\nINFO:tensorflow:loss = 0.022482123, step = 34901 (0.182 sec)\nINFO:tensorflow:global_step/sec: 527.343\nINFO:tensorflow:loss = 0.02219373, step = 35001 (0.190 sec)\nINFO:tensorflow:global_step/sec: 538.46\nINFO:tensorflow:loss = 0.010233812, step = 35101 (0.186 sec)\nINFO:tensorflow:global_step/sec: 545.09\nINFO:tensorflow:loss = 0.011155672, step = 35201 (0.183 sec)\nINFO:tensorflow:global_step/sec: 510.36\nINFO:tensorflow:loss = 0.019443393, step = 35301 (0.196 sec)\nINFO:tensorflow:global_step/sec: 549.995\nINFO:tensorflow:loss = 0.0088263145, step = 35401 (0.182 sec)\nINFO:tensorflow:global_step/sec: 539.31\nINFO:tensorflow:loss = 0.019199822, step = 35501 (0.185 sec)\nINFO:tensorflow:global_step/sec: 547.963\nINFO:tensorflow:loss = 0.016015904, step = 35601 (0.182 sec)\nINFO:tensorflow:global_step/sec: 542.803\nINFO:tensorflow:loss = 0.012871675, step = 35701 (0.184 sec)\nINFO:tensorflow:global_step/sec: 538.764\nINFO:tensorflow:loss = 0.021360168, step = 35801 (0.186 sec)\nINFO:tensorflow:global_step/sec: 528.896\nINFO:tensorflow:loss = 0.015004412, step = 35901 (0.189 sec)\nINFO:tensorflow:global_step/sec: 550.146\nINFO:tensorflow:loss = 0.016787032, step = 36001 (0.182 sec)\nINFO:tensorflow:global_step/sec: 544.037\nINFO:tensorflow:loss = 0.02503136, step = 36101 (0.184 
sec)\nINFO:tensorflow:global_step/sec: 554.173\nINFO:tensorflow:loss = 0.008402772, step = 36201 (0.181 sec)\nINFO:tensorflow:global_step/sec: 556.137\nINFO:tensorflow:loss = 0.0091250455, step = 36301 (0.180 sec)\nINFO:tensorflow:global_step/sec: 532.992\nINFO:tensorflow:loss = 0.0181378, step = 36401 (0.188 sec)\nINFO:tensorflow:global_step/sec: 550.661\nINFO:tensorflow:loss = 0.008492513, step = 36501 (0.181 sec)\nINFO:tensorflow:global_step/sec: 536.25\nINFO:tensorflow:loss = 0.0114019755, step = 36601 (0.187 sec)\nINFO:tensorflow:global_step/sec: 549.158\nINFO:tensorflow:loss = 0.02097696, step = 36701 (0.182 sec)\nINFO:tensorflow:global_step/sec: 562.939\nINFO:tensorflow:loss = 0.0132971015, step = 36801 (0.178 sec)\nINFO:tensorflow:global_step/sec: 521.469\nINFO:tensorflow:loss = 0.00968274, step = 36901 (0.192 sec)\nINFO:tensorflow:global_step/sec: 563.196\nINFO:tensorflow:loss = 0.014091542, step = 37001 (0.177 sec)\nINFO:tensorflow:global_step/sec: 547.948\nINFO:tensorflow:loss = 0.020744445, step = 37101 (0.183 sec)\nINFO:tensorflow:global_step/sec: 564.589\nINFO:tensorflow:loss = 0.009579487, step = 37201 (0.177 sec)\nINFO:tensorflow:global_step/sec: 549.351\nINFO:tensorflow:loss = 0.011741485, step = 37301 (0.182 sec)\nINFO:tensorflow:global_step/sec: 573.677\nINFO:tensorflow:loss = 0.009951888, step = 37401 (0.174 sec)\nINFO:tensorflow:global_step/sec: 524.599\nINFO:tensorflow:loss = 0.014136355, step = 37501 (0.191 sec)\nINFO:tensorflow:global_step/sec: 547.861\nINFO:tensorflow:loss = 0.014360774, step = 37601 (0.183 sec)\nINFO:tensorflow:global_step/sec: 539.901\nINFO:tensorflow:loss = 0.00806953, step = 37701 (0.185 sec)\nINFO:tensorflow:global_step/sec: 551.742\nINFO:tensorflow:loss = 0.014863034, step = 37801 (0.181 sec)\nINFO:tensorflow:global_step/sec: 556.973\nINFO:tensorflow:loss = 0.008398596, step = 37901 (0.180 sec)\nINFO:tensorflow:global_step/sec: 548.026\nINFO:tensorflow:loss = 0.017693192, step = 38001 (0.190 sec)\nINFO:tensorflow:global_step/sec: 519.251\nINFO:tensorflow:loss = 0.01951421, step = 38101 (0.185 sec)\nINFO:tensorflow:global_step/sec: 557.135\nINFO:tensorflow:loss = 0.013768952, step = 38201 (0.180 sec)\nINFO:tensorflow:global_step/sec: 562.828\nINFO:tensorflow:loss = 0.019956227, step = 38301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 546.224\nINFO:tensorflow:loss = 0.018904533, step = 38401 (0.183 sec)\nINFO:tensorflow:global_step/sec: 550.261\nINFO:tensorflow:loss = 0.010122333, step = 38501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 535.693\nINFO:tensorflow:loss = 0.013586002, step = 38601 (0.187 sec)\nINFO:tensorflow:global_step/sec: 547.357\nINFO:tensorflow:loss = 0.013408544, step = 38701 (0.183 sec)\nINFO:tensorflow:global_step/sec: 543.493\nINFO:tensorflow:loss = 0.0072285975, step = 38801 (0.184 sec)\nINFO:tensorflow:global_step/sec: 524.888\nINFO:tensorflow:loss = 0.018272143, step = 38901 (0.190 sec)\nINFO:tensorflow:global_step/sec: 559.776\nINFO:tensorflow:loss = 0.015372202, step = 39001 (0.178 sec)\nINFO:tensorflow:global_step/sec: 533.097\nINFO:tensorflow:loss = 0.018851195, step = 39101 (0.188 sec)\nINFO:tensorflow:global_step/sec: 559.75\nINFO:tensorflow:loss = 0.012927763, step = 39201 (0.179 sec)\nINFO:tensorflow:global_step/sec: 536.7\nINFO:tensorflow:loss = 0.010040123, step = 39301 (0.186 sec)\nINFO:tensorflow:global_step/sec: 567.781\nINFO:tensorflow:loss = 0.009394429, step = 39401 (0.176 sec)\nINFO:tensorflow:global_step/sec: 554.859\nINFO:tensorflow:loss = 0.01086911, step = 39501 (0.180 
sec)\n[... per-step training logs for steps 39601-39901 trimmed; losses fluctuate around 0.009-0.020 at roughly 550 global steps/sec ...]\nINFO:tensorflow:Saving checkpoints for 40000 into /tmp/tmpBX73lD/model.ckpt.\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-0.txt: ['0:1_layer_dnn'].\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:24:10\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/model.ckpt-40000\nINFO:tensorflow:Saving candidate 't0_1_layer_dnn' dict for global step 40000: architecture = | 1_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.03993654\nINFO:tensorflow:Saving candidate 't1_1_layer_dnn' dict for global step 40000: architecture = | 1_layer_dnn | 1_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.04097581\nINFO:tensorflow:Saving candidate 't1_2_layer_dnn' dict for global step 40000: architecture = | 1_layer_dnn | 2_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.034043197\n[... full per-candidate metric dicts (subnetwork and uniform-average losses, label/prediction means) and serialized architecture bytes trimmed ...]\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:24:13\nINFO:tensorflow:Saving dict for global step 40000: average_loss = 0.034043197, global_step = 40000, loss = 0.045813102, prediction/mean = 3.151645\nINFO:tensorflow:Loss for final step: 0.011342968.\nINFO:tensorflow:Finished training Adanet iteration 1\nINFO:tensorflow:Beginning bookkeeping phase for iteration 1\nINFO:tensorflow:Starting ensemble evaluation for iteration 1\nINFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t0_1_layer_dnn = 0.020803, adanet_loss/t1_1_layer_dnn = 0.020815, adanet_loss/t1_2_layer_dnn = 0.014043\nINFO:tensorflow:Finished ensemble evaluation for iteration 1\nINFO:tensorflow:'t1_2_layer_dnn' at index 2 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-1.txt: ['0:1_layer_dnn', '1:2_layer_dnn'].\nINFO:tensorflow:Warm-starting from: (u'/tmp/tmpBX73lD/model.ckpt-40000',)\n[... per-variable "Warm-starting variable: ...; prev_var_name: Unchanged" lines, repeated graph-rebuild messages, and queue-runner deprecation warnings trimmed ...]\nINFO:tensorflow:Building iteration 2\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Overwriting checkpoint with new graph for iteration 2 to /tmp/tmpBX73lD/model.ckpt-40000\nINFO:tensorflow:Finished bookkeeping phase for iteration 1
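The bookkeeping phase above is where AdaNet makes its architecture-search decision for the iteration: each candidate ensemble is scored on the evaluation input, and the candidate with the lowest adanet_loss is carried into the next iteration. A minimal sketch of that selection rule, using the exact candidate IDs and losses copied from the "Computed ensemble metrics" log line above:

# Candidate ensemble losses from AdaNet iteration 1 (copied from the log).
candidate_losses = {
    "t0_1_layer_dnn": 0.020803,  # keep the previous single-subnetwork ensemble
    "t1_1_layer_dnn": 0.020815,  # extend it with another 1-layer DNN
    "t1_2_layer_dnn": 0.014043,  # extend it with a 2-layer DNN
}
# Pick the candidate with the lowest ensemble loss.
best = min(candidate_losses, key=candidate_losses.get)
print(best)  # 't1_2_layer_dnn' -- matching "... is moving onto the next iteration"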
INFO:tensorflow:Beginning training AdaNet iteration 2\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-1.txt: ['0:1_layer_dnn', '1:2_layer_dnn'].\nINFO:tensorflow:Building iteration 2\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/increment.ckpt-2\nINFO:tensorflow:loss = 0.017468113, step = 40001\n[... per-step training logs for steps 40101-59901 trimmed; losses fluctuate between roughly 0.005 and 0.029 at roughly 400-550 global steps/sec ...]\nINFO:tensorflow:Saving checkpoints for 60000 into /tmp/tmpBX73lD/model.ckpt.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:25:14\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/model.ckpt-60000\nINFO:tensorflow:Saving candidate 't1_2_layer_dnn' dict for global step 60000: architecture = | 1_layer_dnn | 2_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.034043197\nINFO:tensorflow:Saving candidate 't2_2_layer_dnn' dict for global step 60000: architecture = | 1_layer_dnn | 2_layer_dnn | 2_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.031925783\nINFO:tensorflow:Saving candidate 't2_3_layer_dnn' dict for global step 60000: architecture = | 1_layer_dnn | 2_layer_dnn | 3_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.032317463\n[... full per-candidate metric dicts and serialized architecture bytes trimmed ...]\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:25:17\nINFO:tensorflow:Saving dict for global step 60000: average_loss = 0.032317463, global_step = 60000, loss = 0.043847356, prediction/mean = 3.1457782\nINFO:tensorflow:Loss for final step: 0.006897436.\nINFO:tensorflow:Finished training Adanet iteration 2\nINFO:tensorflow:Beginning bookkeeping phase for iteration 2\nINFO:tensorflow:Starting ensemble evaluation for iteration 2
INFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t1_2_layer_dnn = 0.014043, adanet_loss/t2_2_layer_dnn = 0.012769, adanet_loss/t2_3_layer_dnn = 0.011257\nINFO:tensorflow:Finished ensemble evaluation for iteration 2\nINFO:tensorflow:'t2_3_layer_dnn' at index 2 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-2.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn'].\nINFO:tensorflow:Warm-starting from: (u'/tmp/tmpBX73lD/model.ckpt-60000',)\n[... per-variable "Warm-starting variable: ...; prev_var_name: Unchanged" lines, repeated graph-rebuild messages, and queue-runner deprecation warnings trimmed ...]\nINFO:tensorflow:Building iteration 3\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Overwriting checkpoint with new graph for iteration 3 to /tmp/tmpBX73lD/model.ckpt-60000\nINFO:tensorflow:Finished bookkeeping phase for iteration 2
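For context on what drives this train/evaluate/select loop: the checkpoint cadence in these logs (40000, 60000, 80000) is consistent with an adanet.Estimator configured for 20000 training steps per AdaNet iteration, with a generator that proposes one subnetwork at the current depth and one a layer deeper each round. A minimal sketch of such a configuration follows, assuming TF 1.x; SimpleDNNGenerator and train_input_fn are hypothetical stand-ins for helpers the notebook would have defined earlier, not part of the adanet API:

import adanet
import tensorflow as tf

estimator = adanet.Estimator(
    # Scalar regression head, matching the single-output losses in the logs.
    head=tf.contrib.estimator.regression_head(),
    # Hypothetical generator yielding the '1_layer_dnn', '2_layer_dnn', ...
    # candidate subnetworks seen in the "Building subnetwork" lines.
    subnetwork_generator=SimpleDNNGenerator(
        optimizer=tf.train.RMSPropOptimizer(learning_rate=0.001)),
    # 20000 train steps per AdaNet iteration -> checkpoints at 40k, 60k, 80k.
    max_iteration_steps=20000,
    # Scores each candidate ensemble; produces the adanet_loss metrics above.
    evaluator=adanet.Evaluator(input_fn=train_input_fn))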
INFO:tensorflow:Beginning training AdaNet iteration 3\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-2.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn'].\nINFO:tensorflow:Building iteration 3\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/increment.ckpt-3\nINFO:tensorflow:loss = 0.014349232, step = 60001\n[... per-step training logs for steps 60101-79901 trimmed; losses fluctuate between roughly 0.004 and 0.021 at roughly 450-500 global steps/sec ...]\nINFO:tensorflow:Saving checkpoints for 80000 into /tmp/tmpBX73lD/model.ckpt.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:26:26\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/model.ckpt-80000\nINFO:tensorflow:Saving candidate 't2_3_layer_dnn' dict for global step 80000: architecture = | 1_layer_dnn | 2_layer_dnn | 3_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.032317463, average_loss/adanet/subnetwork = 0.032910354, average_loss/adanet/uniform_average_ensemble = 0.03231746, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454,
loss/adanet/adanet_weighted_ensemble = 0.043847356, loss/adanet/subnetwork = 0.043785788, loss/adanet/uniform_average_ensemble = 0.04384736, prediction/mean/adanet/adanet_weighted_ensemble = 3.1457782, prediction/mean/adanet/subnetwork = 3.134045, prediction/mean/adanet/uniform_average_ensemble = 3.1457782\nINFO:tensorflow:Saving candidate 't3_3_layer_dnn' dict for global step 80000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_3/ensemble_t3_3_layer_dnn/architecture/adanetB?\b\u0007\u0012\u0000B9| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 3_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.03251077, average_loss/adanet/subnetwork = 0.03740776, average_loss/adanet/uniform_average_ensemble = 0.032510772, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.045306467, loss/adanet/subnetwork = 0.055050053, loss/adanet/uniform_average_ensemble = 0.04530649, prediction/mean/adanet/adanet_weighted_ensemble = 3.1480103, prediction/mean/adanet/subnetwork = 3.1547055, prediction/mean/adanet/uniform_average_ensemble = 3.1480103\nINFO:tensorflow:Saving candidate 't3_4_layer_dnn' dict for global step 80000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_3/ensemble_t3_4_layer_dnn/architecture/adanetB?\b\u0007\u0012\u0000B9| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.031587753, average_loss/adanet/subnetwork = 0.03348904, average_loss/adanet/uniform_average_ensemble = 0.03158775, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.04207115, loss/adanet/subnetwork = 0.041055106, loss/adanet/uniform_average_ensemble = 0.04207114, prediction/mean/adanet/adanet_weighted_ensemble = 3.1415138, prediction/mean/adanet/subnetwork = 3.1287208, prediction/mean/adanet/uniform_average_ensemble = 3.1415138\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:26:31\nINFO:tensorflow:Saving dict for global step 80000: average_loss = 0.031587753, average_loss/adanet/adanet_weighted_ensemble = 0.031587753, average_loss/adanet/subnetwork = 0.03348904, average_loss/adanet/uniform_average_ensemble = 0.03158775, global_step = 80000, label/mean = 3.1049454, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss = 0.04207115, loss/adanet/adanet_weighted_ensemble = 0.04207115, loss/adanet/subnetwork = 0.041055106, loss/adanet/uniform_average_ensemble = 0.04207114, prediction/mean = 3.1415138, prediction/mean/adanet/adanet_weighted_ensemble = 3.1415138, prediction/mean/adanet/subnetwork = 3.1287208, prediction/mean/adanet/uniform_average_ensemble = 3.1415138\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 80000: /tmp/tmpBX73lD/model.ckpt-80000\nINFO:tensorflow:Loss for final step: 0.0061327047.\nINFO:tensorflow:Finished training Adanet iteration 3\nINFO:tensorflow:Beginning bookkeeping phase for iteration 3\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-2.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 
1\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 2\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Building iteration 3\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Starting ensemble evaluation for iteration 3\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/model.ckpt-80000\nWARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. You can safely remove the call to this deprecated function.\nINFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t2_3_layer_dnn = 0.011257, adanet_loss/t3_3_layer_dnn = 0.011323, adanet_loss/t3_4_layer_dnn = 0.009588\nINFO:tensorflow:Finished ensemble evaluation for iteration 3\nINFO:tensorflow:'t3_4_layer_dnn' at index 2 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-3.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn', '3:4_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 2\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 3\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Warm-starting from: (u'/tmp/tmpBX73lD/model.ckpt-80000',)\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_3/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_2/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_3/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_0/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t2_3_layer_dnn/adanet/iteration_2/candidate_t2_3_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t2_3_layer_dnn/adanet/iteration_2/candidate_t2_3_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_1_layer_dnn/adanet/iteration_0/candidate_t0_1_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense_1/kernel; prev_var_name: 
Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t3_4_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t3_4_layer_dnn/adanet/iteration_3/candidate_t3_4_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t1_2_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_4/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_1/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: global_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t3_4_layer_dnn/adanet/iteration_3/candidate_t3_4_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_1_layer_dnn/adanet/iteration_0/candidate_t0_1_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t2_3_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_4/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_0/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t0_1_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense_2/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_3/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t2_3_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t1_2_layer_dnn/adanet/iteration_1/candidate_t1_2_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_0/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_2/bias; 
prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_1/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_1/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_1_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_2/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense_1/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_1/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t1_2_layer_dnn/adanet/iteration_1/candidate_t1_2_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_2/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t1_2_layer_dnn/adanet/iteration_2/candidate_t1_2_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t0_1_layer_dnn/adanet/iteration_1/candidate_t0_1_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t0_1_layer_dnn/adanet/iteration_1/candidate_t0_1_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: 
adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense_2/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t2_3_layer_dnn/adanet/iteration_3/candidate_t2_3_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_3/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t2_3_layer_dnn/adanet/iteration_3/candidate_t2_3_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_2/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t1_2_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t1_2_layer_dnn/adanet/iteration_2/candidate_t1_2_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Building iteration 4\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building subnetwork '5_layer_dnn'\nINFO:tensorflow:Overwriting checkpoint with new graph for iteration 4 to /tmp/tmpBX73lD/model.ckpt-80000\nWARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. 
You can safely remove the call to this deprecated function.\nINFO:tensorflow:Finished bookkeeping phase for iteration 3\nINFO:tensorflow:Beginning training AdaNet iteration 4\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-3.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn', '3:4_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 2\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 3\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building iteration 4\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building subnetwork '5_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/increment.ckpt-4\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving checkpoints for 80000 into /tmp/tmpBX73lD/model.ckpt.\nINFO:tensorflow:loss = 0.01076022, step = 80001\nINFO:tensorflow:global_step/sec: 106.247\nINFO:tensorflow:loss = 0.011567029, step = 80101 (0.942 sec)\nINFO:tensorflow:global_step/sec: 433.175\nINFO:tensorflow:loss = 0.0105704125, step = 80201 (0.230 sec)\nINFO:tensorflow:global_step/sec: 430.274\nINFO:tensorflow:loss = 0.009286735, step = 80301 (0.233 sec)\nINFO:tensorflow:global_step/sec: 428.392\nINFO:tensorflow:loss = 0.013190662, step = 80401 (0.233 sec)\nINFO:tensorflow:global_step/sec: 450.708\nINFO:tensorflow:loss = 0.0066475794, step = 80501 (0.222 sec)\nINFO:tensorflow:global_step/sec: 468.729\nINFO:tensorflow:loss = 0.0055183023, step = 80601 (0.214 sec)\nINFO:tensorflow:global_step/sec: 435.68\nINFO:tensorflow:loss = 0.0071699126, step = 80701 (0.230 sec)\nINFO:tensorflow:global_step/sec: 436.822\nINFO:tensorflow:loss = 0.009832906, step = 80801 (0.229 sec)\nINFO:tensorflow:global_step/sec: 443.821\nINFO:tensorflow:loss = 0.011850201, step = 80901 (0.225 sec)\nINFO:tensorflow:global_step/sec: 428.919\nINFO:tensorflow:loss = 0.013372295, step = 81001 (0.233 sec)\nINFO:tensorflow:global_step/sec: 456.289\nINFO:tensorflow:loss = 0.0111810835, step = 81101 (0.219 sec)\nINFO:tensorflow:global_step/sec: 438.926\nINFO:tensorflow:loss = 0.011970724, step = 81201 (0.228 sec)\nINFO:tensorflow:global_step/sec: 465.977\nINFO:tensorflow:loss = 0.0065958686, step = 81301 (0.215 sec)\nINFO:tensorflow:global_step/sec: 433.362\nINFO:tensorflow:loss = 0.007469721, step = 81401 (0.231 sec)\nINFO:tensorflow:global_step/sec: 472.037\nINFO:tensorflow:loss = 0.005948479, step = 81501 (0.217 sec)\nINFO:tensorflow:global_step/sec: 445.149\nINFO:tensorflow:loss = 0.0074148737, step = 81601 (0.220 sec)\nINFO:tensorflow:global_step/sec: 459.35\nINFO:tensorflow:loss = 0.012823062, step = 81701 (0.217 sec)\nINFO:tensorflow:global_step/sec: 442.603\nINFO:tensorflow:loss = 0.009658838, step = 81801 (0.226 sec)\nINFO:tensorflow:global_step/sec: 447.389\nINFO:tensorflow:loss = 0.0110181635, step = 81901 (0.224 sec)\nINFO:tensorflow:global_step/sec: 452.133\nINFO:tensorflow:loss = 0.0074777408, step = 82001 (0.221 sec)\nINFO:tensorflow:global_step/sec: 446.444\nINFO:tensorflow:loss = 0.0150109315, step = 82101 (0.224 sec)\nINFO:tensorflow:global_step/sec: 456.667\nINFO:tensorflow:loss = 0.008159091, step = 
82201 (0.219 sec)\nINFO:tensorflow:global_step/sec: 453.416\nINFO:tensorflow:loss = 0.007832337, step = 82301 (0.221 sec)\nINFO:tensorflow:global_step/sec: 450.367\nINFO:tensorflow:loss = 0.006455669, step = 82401 (0.222 sec)\nINFO:tensorflow:global_step/sec: 447.561\nINFO:tensorflow:loss = 0.01277595, step = 82501 (0.223 sec)\nINFO:tensorflow:global_step/sec: 452.52\nINFO:tensorflow:loss = 0.006126184, step = 82601 (0.221 sec)\nINFO:tensorflow:global_step/sec: 445.669\nINFO:tensorflow:loss = 0.01073208, step = 82701 (0.230 sec)\nINFO:tensorflow:global_step/sec: 441.269\nINFO:tensorflow:loss = 0.009560438, step = 82801 (0.221 sec)\nINFO:tensorflow:global_step/sec: 460.764\nINFO:tensorflow:loss = 0.008874605, step = 82901 (0.217 sec)\nINFO:tensorflow:global_step/sec: 457.411\nINFO:tensorflow:loss = 0.008558904, step = 83001 (0.219 sec)\nINFO:tensorflow:global_step/sec: 456.777\nINFO:tensorflow:loss = 0.0047848914, step = 83101 (0.219 sec)\nINFO:tensorflow:global_step/sec: 456.383\nINFO:tensorflow:loss = 0.013862016, step = 83201 (0.219 sec)\nINFO:tensorflow:global_step/sec: 439.964\nINFO:tensorflow:loss = 0.014882583, step = 83301 (0.227 sec)\nINFO:tensorflow:global_step/sec: 452.46\nINFO:tensorflow:loss = 0.011824507, step = 83401 (0.221 sec)\nINFO:tensorflow:global_step/sec: 474.861\nINFO:tensorflow:loss = 0.014059515, step = 83501 (0.210 sec)\nINFO:tensorflow:global_step/sec: 451.095\nINFO:tensorflow:loss = 0.0069136624, step = 83601 (0.222 sec)\nINFO:tensorflow:global_step/sec: 435.603\nINFO:tensorflow:loss = 0.008130037, step = 83701 (0.229 sec)\nINFO:tensorflow:global_step/sec: 451.657\nINFO:tensorflow:loss = 0.0081698615, step = 83801 (0.221 sec)\nINFO:tensorflow:global_step/sec: 466.033\nINFO:tensorflow:loss = 0.005401345, step = 83901 (0.214 sec)\nINFO:tensorflow:global_step/sec: 452.589\nINFO:tensorflow:loss = 0.009502525, step = 84001 (0.221 sec)\nINFO:tensorflow:global_step/sec: 450.554\nINFO:tensorflow:loss = 0.0075874086, step = 84101 (0.222 sec)\nINFO:tensorflow:global_step/sec: 470.395\nINFO:tensorflow:loss = 0.007451632, step = 84201 (0.213 sec)\nINFO:tensorflow:global_step/sec: 445.072\nINFO:tensorflow:loss = 0.004052775, step = 84301 (0.225 sec)\nINFO:tensorflow:global_step/sec: 471.629\nINFO:tensorflow:loss = 0.006894485, step = 84401 (0.212 sec)\nINFO:tensorflow:global_step/sec: 468.239\nINFO:tensorflow:loss = 0.009384276, step = 84501 (0.214 sec)\nINFO:tensorflow:global_step/sec: 441.032\nINFO:tensorflow:loss = 0.011024917, step = 84601 (0.227 sec)\nINFO:tensorflow:global_step/sec: 477.591\nINFO:tensorflow:loss = 0.0065401867, step = 84701 (0.209 sec)\nINFO:tensorflow:global_step/sec: 450.806\nINFO:tensorflow:loss = 0.009705102, step = 84801 (0.222 sec)\nINFO:tensorflow:global_step/sec: 426.918\nINFO:tensorflow:loss = 0.01221613, step = 84901 (0.234 sec)\nINFO:tensorflow:global_step/sec: 439.88\nINFO:tensorflow:loss = 0.004846774, step = 85001 (0.227 sec)\nINFO:tensorflow:global_step/sec: 455.384\nINFO:tensorflow:loss = 0.005345362, step = 85101 (0.220 sec)\nINFO:tensorflow:global_step/sec: 476.276\nINFO:tensorflow:loss = 0.0062981583, step = 85201 (0.210 sec)\nINFO:tensorflow:global_step/sec: 452.372\nINFO:tensorflow:loss = 0.011848188, step = 85301 (0.221 sec)\nINFO:tensorflow:global_step/sec: 459.365\nINFO:tensorflow:loss = 0.006783499, step = 85401 (0.217 sec)\nINFO:tensorflow:global_step/sec: 436.975\nINFO:tensorflow:loss = 0.006241713, step = 85501 (0.229 sec)\nINFO:tensorflow:global_step/sec: 460.003\nINFO:tensorflow:loss = 0.012918718, step = 85601 (0.217 
sec)\nINFO:tensorflow:global_step/sec: 426.345\nINFO:tensorflow:loss = 0.008487627, step = 85701 (0.235 sec)\nINFO:tensorflow:global_step/sec: 444.27\nINFO:tensorflow:loss = 0.008613524, step = 85801 (0.225 sec)\nINFO:tensorflow:global_step/sec: 449.667\nINFO:tensorflow:loss = 0.006900057, step = 85901 (0.222 sec)\nINFO:tensorflow:global_step/sec: 429.614\nINFO:tensorflow:loss = 0.009854027, step = 86001 (0.233 sec)\nINFO:tensorflow:global_step/sec: 445.738\nINFO:tensorflow:loss = 0.01331093, step = 86101 (0.224 sec)\nINFO:tensorflow:global_step/sec: 459.747\nINFO:tensorflow:loss = 0.010404253, step = 86201 (0.218 sec)\nINFO:tensorflow:global_step/sec: 449.715\nINFO:tensorflow:loss = 0.0046990267, step = 86301 (0.222 sec)\nINFO:tensorflow:global_step/sec: 467.432\nINFO:tensorflow:loss = 0.0064316783, step = 86401 (0.214 sec)\nINFO:tensorflow:global_step/sec: 459.379\nINFO:tensorflow:loss = 0.0120327715, step = 86501 (0.218 sec)\nINFO:tensorflow:global_step/sec: 448.461\nINFO:tensorflow:loss = 0.006700837, step = 86601 (0.223 sec)\nINFO:tensorflow:global_step/sec: 440.857\nINFO:tensorflow:loss = 0.008075792, step = 86701 (0.227 sec)\nINFO:tensorflow:global_step/sec: 419.799\nINFO:tensorflow:loss = 0.0068634166, step = 86801 (0.238 sec)\nINFO:tensorflow:global_step/sec: 456.038\nINFO:tensorflow:loss = 0.006373025, step = 86901 (0.219 sec)\nINFO:tensorflow:global_step/sec: 435.591\nINFO:tensorflow:loss = 0.0049459804, step = 87001 (0.230 sec)\nINFO:tensorflow:global_step/sec: 430.734\nINFO:tensorflow:loss = 0.007862547, step = 87101 (0.232 sec)\nINFO:tensorflow:global_step/sec: 454.916\nINFO:tensorflow:loss = 0.00981736, step = 87201 (0.220 sec)\nINFO:tensorflow:global_step/sec: 446.329\nINFO:tensorflow:loss = 0.011696544, step = 87301 (0.224 sec)\nINFO:tensorflow:global_step/sec: 434.412\nINFO:tensorflow:loss = 0.0095561575, step = 87401 (0.230 sec)\nINFO:tensorflow:global_step/sec: 456.45\nINFO:tensorflow:loss = 0.0037475978, step = 87501 (0.219 sec)\nINFO:tensorflow:global_step/sec: 444.852\nINFO:tensorflow:loss = 0.0091656465, step = 87601 (0.225 sec)\nINFO:tensorflow:global_step/sec: 437.065\nINFO:tensorflow:loss = 0.014315022, step = 87701 (0.229 sec)\nINFO:tensorflow:global_step/sec: 429.411\nINFO:tensorflow:loss = 0.012980651, step = 87801 (0.233 sec)\nINFO:tensorflow:global_step/sec: 441.531\nINFO:tensorflow:loss = 0.0048240935, step = 87901 (0.226 sec)\nINFO:tensorflow:global_step/sec: 466.352\nINFO:tensorflow:loss = 0.015078744, step = 88001 (0.214 sec)\nINFO:tensorflow:global_step/sec: 405.053\nINFO:tensorflow:loss = 0.011876026, step = 88101 (0.247 sec)\nINFO:tensorflow:global_step/sec: 452.976\nINFO:tensorflow:loss = 0.010125502, step = 88201 (0.221 sec)\nINFO:tensorflow:global_step/sec: 458.186\nINFO:tensorflow:loss = 0.013823442, step = 88301 (0.219 sec)\nINFO:tensorflow:global_step/sec: 448.392\nINFO:tensorflow:loss = 0.0053028753, step = 88401 (0.223 sec)\nINFO:tensorflow:global_step/sec: 452.16\nINFO:tensorflow:loss = 0.007541458, step = 88501 (0.221 sec)\nINFO:tensorflow:global_step/sec: 457.381\nINFO:tensorflow:loss = 0.008431977, step = 88601 (0.219 sec)\nINFO:tensorflow:global_step/sec: 455.766\nINFO:tensorflow:loss = 0.010850932, step = 88701 (0.219 sec)\nINFO:tensorflow:global_step/sec: 436.296\nINFO:tensorflow:loss = 0.017362352, step = 88801 (0.229 sec)\nINFO:tensorflow:global_step/sec: 450.779\nINFO:tensorflow:loss = 0.017081048, step = 88901 (0.222 sec)\nINFO:tensorflow:global_step/sec: 423.594\nINFO:tensorflow:loss = 0.01579927, step = 89001 (0.237 
sec)\nINFO:tensorflow:global_step/sec: 450.869\nINFO:tensorflow:loss = 0.009363698, step = 89101 (0.220 sec)\nINFO:tensorflow:global_step/sec: 470.537\nINFO:tensorflow:loss = 0.009390919, step = 89201 (0.213 sec)\nINFO:tensorflow:global_step/sec: 443.465\nINFO:tensorflow:loss = 0.0060469382, step = 89301 (0.225 sec)\nINFO:tensorflow:global_step/sec: 463.372\nINFO:tensorflow:loss = 0.009950854, step = 89401 (0.221 sec)\nINFO:tensorflow:global_step/sec: 410.127\nINFO:tensorflow:loss = 0.0071807606, step = 89501 (0.239 sec)\nINFO:tensorflow:global_step/sec: 459.375\nINFO:tensorflow:loss = 0.0075567663, step = 89601 (0.218 sec)\nINFO:tensorflow:global_step/sec: 480.372\nINFO:tensorflow:loss = 0.004868718, step = 89701 (0.208 sec)\nINFO:tensorflow:global_step/sec: 468.689\nINFO:tensorflow:loss = 0.008060325, step = 89801 (0.213 sec)\nINFO:tensorflow:global_step/sec: 456.521\nINFO:tensorflow:loss = 0.010573608, step = 89901 (0.219 sec)\nINFO:tensorflow:global_step/sec: 420.315\nINFO:tensorflow:loss = 0.010056749, step = 90001 (0.238 sec)\nINFO:tensorflow:global_step/sec: 475.448\nINFO:tensorflow:loss = 0.004914472, step = 90101 (0.210 sec)\nINFO:tensorflow:global_step/sec: 445.567\nINFO:tensorflow:loss = 0.008606965, step = 90201 (0.224 sec)\nINFO:tensorflow:global_step/sec: 448.602\nINFO:tensorflow:loss = 0.008953879, step = 90301 (0.223 sec)\nINFO:tensorflow:global_step/sec: 455.711\nINFO:tensorflow:loss = 0.00606883, step = 90401 (0.220 sec)\nINFO:tensorflow:global_step/sec: 439.607\nINFO:tensorflow:loss = 0.0047713965, step = 90501 (0.228 sec)\nINFO:tensorflow:global_step/sec: 465.712\nINFO:tensorflow:loss = 0.008342918, step = 90601 (0.214 sec)\nINFO:tensorflow:global_step/sec: 478.801\nINFO:tensorflow:loss = 0.0068472945, step = 90701 (0.209 sec)\nINFO:tensorflow:global_step/sec: 438.214\nINFO:tensorflow:loss = 0.010479212, step = 90801 (0.228 sec)\nINFO:tensorflow:global_step/sec: 449.317\nINFO:tensorflow:loss = 0.0111814905, step = 90901 (0.223 sec)\nINFO:tensorflow:global_step/sec: 441.287\nINFO:tensorflow:loss = 0.006364339, step = 91001 (0.227 sec)\nINFO:tensorflow:global_step/sec: 418.224\nINFO:tensorflow:loss = 0.012501595, step = 91101 (0.239 sec)\nINFO:tensorflow:global_step/sec: 421.417\nINFO:tensorflow:loss = 0.007942257, step = 91201 (0.237 sec)\nINFO:tensorflow:global_step/sec: 411.007\nINFO:tensorflow:loss = 0.009648362, step = 91301 (0.243 sec)\nINFO:tensorflow:global_step/sec: 441.102\nINFO:tensorflow:loss = 0.004326268, step = 91401 (0.227 sec)\nINFO:tensorflow:global_step/sec: 425.064\nINFO:tensorflow:loss = 0.014749368, step = 91501 (0.235 sec)\nINFO:tensorflow:global_step/sec: 425.8\nINFO:tensorflow:loss = 0.005853046, step = 91601 (0.235 sec)\nINFO:tensorflow:global_step/sec: 440.121\nINFO:tensorflow:loss = 0.012736705, step = 91701 (0.227 sec)\nINFO:tensorflow:global_step/sec: 380.625\nINFO:tensorflow:loss = 0.006685883, step = 91801 (0.263 sec)\nINFO:tensorflow:global_step/sec: 440.184\nINFO:tensorflow:loss = 0.006259484, step = 91901 (0.227 sec)\nINFO:tensorflow:global_step/sec: 440.847\nINFO:tensorflow:loss = 0.009421283, step = 92001 (0.227 sec)\nINFO:tensorflow:global_step/sec: 428.61\nINFO:tensorflow:loss = 0.0056731515, step = 92101 (0.234 sec)\nINFO:tensorflow:global_step/sec: 450.369\nINFO:tensorflow:loss = 0.0067383423, step = 92201 (0.222 sec)\nINFO:tensorflow:global_step/sec: 454.783\nINFO:tensorflow:loss = 0.008340258, step = 92301 (0.220 sec)\nINFO:tensorflow:global_step/sec: 446.281\nINFO:tensorflow:loss = 0.007261906, step = 92401 (0.225 
sec)\nINFO:tensorflow:global_step/sec: 445.353\nINFO:tensorflow:loss = 0.008833823, step = 92501 (0.224 sec)\nINFO:tensorflow:global_step/sec: 454.267\nINFO:tensorflow:loss = 0.0042939773, step = 92601 (0.220 sec)\nINFO:tensorflow:global_step/sec: 453.377\nINFO:tensorflow:loss = 0.012260348, step = 92701 (0.220 sec)\nINFO:tensorflow:global_step/sec: 438.672\nINFO:tensorflow:loss = 0.011230526, step = 92801 (0.228 sec)\nINFO:tensorflow:global_step/sec: 438.941\nINFO:tensorflow:loss = 0.008917751, step = 92901 (0.228 sec)\nINFO:tensorflow:global_step/sec: 440.568\nINFO:tensorflow:loss = 0.011714136, step = 93001 (0.227 sec)\nINFO:tensorflow:global_step/sec: 466.014\nINFO:tensorflow:loss = 0.0073758443, step = 93101 (0.215 sec)\nINFO:tensorflow:global_step/sec: 468.119\nINFO:tensorflow:loss = 0.003955289, step = 93201 (0.213 sec)\nINFO:tensorflow:global_step/sec: 470.834\nINFO:tensorflow:loss = 0.0046369513, step = 93301 (0.213 sec)\nINFO:tensorflow:global_step/sec: 469.162\nINFO:tensorflow:loss = 0.007927978, step = 93401 (0.213 sec)\nINFO:tensorflow:global_step/sec: 458.322\nINFO:tensorflow:loss = 0.008177168, step = 93501 (0.219 sec)\nINFO:tensorflow:global_step/sec: 476.606\nINFO:tensorflow:loss = 0.007055303, step = 93601 (0.209 sec)\nINFO:tensorflow:global_step/sec: 471.298\nINFO:tensorflow:loss = 0.0064525036, step = 93701 (0.212 sec)\nINFO:tensorflow:global_step/sec: 485.527\nINFO:tensorflow:loss = 0.010708123, step = 93801 (0.206 sec)\nINFO:tensorflow:global_step/sec: 442.813\nINFO:tensorflow:loss = 0.007946094, step = 93901 (0.226 sec)\nINFO:tensorflow:global_step/sec: 468.186\nINFO:tensorflow:loss = 0.0055915033, step = 94001 (0.214 sec)\nINFO:tensorflow:global_step/sec: 464.727\nINFO:tensorflow:loss = 0.009732682, step = 94101 (0.215 sec)\nINFO:tensorflow:global_step/sec: 466.611\nINFO:tensorflow:loss = 0.009899803, step = 94201 (0.214 sec)\nINFO:tensorflow:global_step/sec: 451.482\nINFO:tensorflow:loss = 0.007355769, step = 94301 (0.221 sec)\nINFO:tensorflow:global_step/sec: 449.604\nINFO:tensorflow:loss = 0.006268471, step = 94401 (0.223 sec)\nINFO:tensorflow:global_step/sec: 465.948\nINFO:tensorflow:loss = 0.0055277785, step = 94501 (0.215 sec)\nINFO:tensorflow:global_step/sec: 449.761\nINFO:tensorflow:loss = 0.008253826, step = 94601 (0.222 sec)\nINFO:tensorflow:global_step/sec: 457.675\nINFO:tensorflow:loss = 0.008863449, step = 94701 (0.219 sec)\nINFO:tensorflow:global_step/sec: 476.456\nINFO:tensorflow:loss = 0.010726278, step = 94801 (0.210 sec)\nINFO:tensorflow:global_step/sec: 430.424\nINFO:tensorflow:loss = 0.006165526, step = 94901 (0.232 sec)\nINFO:tensorflow:global_step/sec: 454.45\nINFO:tensorflow:loss = 0.014830342, step = 95001 (0.220 sec)\nINFO:tensorflow:global_step/sec: 456.236\nINFO:tensorflow:loss = 0.0061218496, step = 95101 (0.219 sec)\nINFO:tensorflow:global_step/sec: 456.612\nINFO:tensorflow:loss = 0.0061841793, step = 95201 (0.219 sec)\nINFO:tensorflow:global_step/sec: 458.266\nINFO:tensorflow:loss = 0.013126951, step = 95301 (0.218 sec)\nINFO:tensorflow:global_step/sec: 468.801\nINFO:tensorflow:loss = 0.0057089217, step = 95401 (0.213 sec)\nINFO:tensorflow:global_step/sec: 473.514\nINFO:tensorflow:loss = 0.010399368, step = 95501 (0.216 sec)\nINFO:tensorflow:global_step/sec: 441.127\nINFO:tensorflow:loss = 0.0056310957, step = 95601 (0.222 sec)\nINFO:tensorflow:global_step/sec: 451.263\nINFO:tensorflow:loss = 0.0053259893, step = 95701 (0.222 sec)\nINFO:tensorflow:global_step/sec: 439.391\nINFO:tensorflow:loss = 0.015028892, step = 95801 (0.227 
sec)\nINFO:tensorflow:global_step/sec: 466.444\nINFO:tensorflow:loss = 0.0061596353, step = 95901 (0.214 sec)\nINFO:tensorflow:global_step/sec: 450.024\nINFO:tensorflow:loss = 0.007637582, step = 96001 (0.222 sec)\nINFO:tensorflow:global_step/sec: 460.299\nINFO:tensorflow:loss = 0.008166807, step = 96101 (0.217 sec)\nINFO:tensorflow:global_step/sec: 442.73\nINFO:tensorflow:loss = 0.0056512663, step = 96201 (0.226 sec)\nINFO:tensorflow:global_step/sec: 471.994\nINFO:tensorflow:loss = 0.0050436864, step = 96301 (0.212 sec)\nINFO:tensorflow:global_step/sec: 467.734\nINFO:tensorflow:loss = 0.009083891, step = 96401 (0.214 sec)\nINFO:tensorflow:global_step/sec: 463.82\nINFO:tensorflow:loss = 0.0092610065, step = 96501 (0.216 sec)\nINFO:tensorflow:global_step/sec: 449.634\nINFO:tensorflow:loss = 0.0062198066, step = 96601 (0.223 sec)\nINFO:tensorflow:global_step/sec: 452.253\nINFO:tensorflow:loss = 0.01348327, step = 96701 (0.221 sec)\nINFO:tensorflow:global_step/sec: 451.927\nINFO:tensorflow:loss = 0.0075592278, step = 96801 (0.221 sec)\nINFO:tensorflow:global_step/sec: 454.775\nINFO:tensorflow:loss = 0.0052253455, step = 96901 (0.220 sec)\nINFO:tensorflow:global_step/sec: 457.882\nINFO:tensorflow:loss = 0.009982036, step = 97001 (0.219 sec)\nINFO:tensorflow:global_step/sec: 453.202\nINFO:tensorflow:loss = 0.014037208, step = 97101 (0.225 sec)\nINFO:tensorflow:global_step/sec: 423.68\nINFO:tensorflow:loss = 0.0053974064, step = 97201 (0.232 sec)\nINFO:tensorflow:global_step/sec: 440.476\nINFO:tensorflow:loss = 0.004785974, step = 97301 (0.227 sec)\nINFO:tensorflow:global_step/sec: 450.995\nINFO:tensorflow:loss = 0.006669085, step = 97401 (0.221 sec)\nINFO:tensorflow:global_step/sec: 441.568\nINFO:tensorflow:loss = 0.010448052, step = 97501 (0.226 sec)\nINFO:tensorflow:global_step/sec: 450.503\nINFO:tensorflow:loss = 0.008053826, step = 97601 (0.222 sec)\nINFO:tensorflow:global_step/sec: 442.894\nINFO:tensorflow:loss = 0.0048241923, step = 97701 (0.226 sec)\nINFO:tensorflow:global_step/sec: 435.682\nINFO:tensorflow:loss = 0.010347674, step = 97801 (0.229 sec)\nINFO:tensorflow:global_step/sec: 479.042\nINFO:tensorflow:loss = 0.0047979965, step = 97901 (0.209 sec)\nINFO:tensorflow:global_step/sec: 447.66\nINFO:tensorflow:loss = 0.010023495, step = 98001 (0.223 sec)\nINFO:tensorflow:global_step/sec: 464.753\nINFO:tensorflow:loss = 0.009690805, step = 98101 (0.215 sec)\nINFO:tensorflow:global_step/sec: 467.06\nINFO:tensorflow:loss = 0.008275158, step = 98201 (0.214 sec)\nINFO:tensorflow:global_step/sec: 465.55\nINFO:tensorflow:loss = 0.011195747, step = 98301 (0.215 sec)\nINFO:tensorflow:global_step/sec: 458.14\nINFO:tensorflow:loss = 0.009275818, step = 98401 (0.218 sec)\nINFO:tensorflow:global_step/sec: 456.97\nINFO:tensorflow:loss = 0.008696778, step = 98501 (0.219 sec)\nINFO:tensorflow:global_step/sec: 473.882\nINFO:tensorflow:loss = 0.0084723625, step = 98601 (0.211 sec)\nINFO:tensorflow:global_step/sec: 463.555\nINFO:tensorflow:loss = 0.005408332, step = 98701 (0.216 sec)\nINFO:tensorflow:global_step/sec: 472.934\nINFO:tensorflow:loss = 0.006189944, step = 98801 (0.212 sec)\nINFO:tensorflow:global_step/sec: 474.264\nINFO:tensorflow:loss = 0.013700008, step = 98901 (0.210 sec)\nINFO:tensorflow:global_step/sec: 468.353\nINFO:tensorflow:loss = 0.009340456, step = 99001 (0.214 sec)\nINFO:tensorflow:global_step/sec: 466.54\nINFO:tensorflow:loss = 0.008566959, step = 99101 (0.214 sec)\nINFO:tensorflow:global_step/sec: 458.68\nINFO:tensorflow:loss = 0.0067969724, step = 99201 (0.218 
sec)\nINFO:tensorflow:global_step/sec: 428.467\nINFO:tensorflow:loss = 0.008139711, step = 99301 (0.233 sec)\nINFO:tensorflow:global_step/sec: 443.178\nINFO:tensorflow:loss = 0.0058223605, step = 99401 (0.226 sec)\nINFO:tensorflow:global_step/sec: 424.465\nINFO:tensorflow:loss = 0.005538292, step = 99501 (0.236 sec)\nINFO:tensorflow:global_step/sec: 453.76\nINFO:tensorflow:loss = 0.0075483248, step = 99601 (0.220 sec)\nINFO:tensorflow:global_step/sec: 460.636\nINFO:tensorflow:loss = 0.008077534, step = 99701 (0.217 sec)\nINFO:tensorflow:global_step/sec: 450.708\nINFO:tensorflow:loss = 0.01044227, step = 99801 (0.222 sec)\nINFO:tensorflow:global_step/sec: 460.872\nINFO:tensorflow:loss = 0.009272017, step = 99901 (0.217 sec)\nINFO:tensorflow:Saving checkpoints for 100000 into /tmp/tmpBX73lD/model.ckpt.\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Importing architecture from /tmp/tmpBX73lD/architecture-3.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn', '3:4_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 2\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 3\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building iteration 4\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building subnetwork '5_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:27:49\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpBX73lD/model.ckpt-100000\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving candidate 't3_4_layer_dnn' dict for global step 100000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_3/ensemble_t3_4_layer_dnn/architecture/adanetB?\b\u0007\u0012\u0000B9| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.031587753, average_loss/adanet/subnetwork = 0.03348904, average_loss/adanet/uniform_average_ensemble = 0.031587753, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.04207115, loss/adanet/subnetwork = 0.041055106, loss/adanet/uniform_average_ensemble = 0.04207115, prediction/mean/adanet/adanet_weighted_ensemble = 3.1415138, prediction/mean/adanet/subnetwork = 3.1287208, prediction/mean/adanet/uniform_average_ensemble = 3.1415138\nINFO:tensorflow:Saving candidate 't4_4_layer_dnn' dict for global step 100000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_4/ensemble_t4_4_layer_dnn/architecture/adanetBM\b\u0007\u0012\u0000BG| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn | 4_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.031320337, average_loss/adanet/subnetwork = 0.036771663, average_loss/adanet/uniform_average_ensemble = 0.031320345, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.042101234, loss/adanet/subnetwork = 0.049315907, loss/adanet/uniform_average_ensemble = 0.04210123, prediction/mean/adanet/adanet_weighted_ensemble = 3.1364617, 
prediction/mean/adanet/subnetwork = 3.116253, prediction/mean/adanet/uniform_average_ensemble = 3.1364617\nINFO:tensorflow:Saving candidate 't4_5_layer_dnn' dict for global step 100000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_4/ensemble_t4_5_layer_dnn/architecture/adanetBM\b\u0007\u0012\u0000BG| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn | 5_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.032695297, average_loss/adanet/subnetwork = 0.0495253, average_loss/adanet/uniform_average_ensemble = 0.032695293, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.043896608, loss/adanet/subnetwork = 0.06338713, loss/adanet/uniform_average_ensemble = 0.0438966, prediction/mean/adanet/adanet_weighted_ensemble = 3.1284606, prediction/mean/adanet/subnetwork = 3.0762491, prediction/mean/adanet/uniform_average_ensemble = 3.1284606\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:27:54\nINFO:tensorflow:Saving dict for global step 100000: average_loss = 0.032695297, average_loss/adanet/adanet_weighted_ensemble = 0.032695297, average_loss/adanet/subnetwork = 0.0495253, average_loss/adanet/uniform_average_ensemble = 0.032695293, global_step = 100000, label/mean = 3.1049454, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss = 0.043896608, loss/adanet/adanet_weighted_ensemble = 0.043896608, loss/adanet/subnetwork = 0.06338713, loss/adanet/uniform_average_ensemble = 0.0438966, prediction/mean = 3.1284606, prediction/mean/adanet/adanet_weighted_ensemble = 3.1284606, prediction/mean/adanet/subnetwork = 3.0762491, prediction/mean/adanet/uniform_average_ensemble = 3.1284606\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 100000: /tmp/tmpBX73lD/model.ckpt-100000\nINFO:tensorflow:Loss for final step: 0.0064807483.\nINFO:tensorflow:Finished training Adanet iteration 4\nLoss: 0.032695297\nArchitecture: | 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn | 5_layer_dnn |\n" ] ], [ [ "These hyperparameters preduce a model that achieves **0.0348** MSE on the test\nset (exact MSE will vary depending on the hardware you're using to train the model). Notice that the ensemble is composed of 5 subnetworks, each one a hidden\nlayer deeper than the previous. 
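To make this concrete, here is a minimal NumPy sketch (a toy illustration, not AdaNet's implementation; `H`, `y`, and the weights below are all hypothetical). An ensemble prediction is a weighted sum of frozen subnetwork outputs, so under a squared loss, learning the mixture weights is an ordinary least-squares problem: it is convex in the weights, which is why simple (stochastic) gradient descent can find the global optimum.\n\n```python\nimport numpy as np\n\n# Toy setup: H holds the outputs of 3 frozen subnetworks on 100 examples.\nrng = np.random.RandomState(0)\nH = rng.randn(100, 3)\ny = H @ np.array([0.2, 0.3, 0.5])  # hypothetical targets\n\nw = np.full(3, 1.0 / 3)  # the uniform average ensemble: every weight is 1/K\nfor _ in range(500):\n    grad = 2.0 * H.T @ (H @ w - y) / len(y)  # gradient of the mean squared error in w\n    w -= 0.1 * grad  # full-batch gradient step, for simplicity\nprint(w)  # converges toward [0.2, 0.3, 0.5]\n```\n\nAdaNet's learned mixture weights play the same role, optionally with a penalty on subnetwork complexity (which we haven't enabled yet).\n\n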
Next, instead of assigning equal weight to each subnetwork, let's learn the\nmixture weights as a convex optimization problem using SGD:"
  ]
 ],
 [
  [
   "#@test {\"skip\": true}\nresults, _ = train_and_evaluate(learn_mixture_weights=True)\nprint(\"Loss:\", results[\"average_loss\"])\nprint(\"Uniform average loss:\", results[\"average_loss/adanet/uniform_average_ensemble\"])\nprint(\"Architecture:\", ensemble_architecture(results))",
   "WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpDexXZd\nINFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_global_id_in_cluster': 0, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f2794692a50>, '_model_dir': '/tmp/tmpDexXZd', '_protocol': None, '_save_checkpoints_steps': 50000, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_tf_random_seed': 42, '_save_summary_steps': 50000, '_device_fn': None, '_experimental_distribute': None, '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_evaluation_master': '', '_eval_distribute': None, '_train_distribute': None, '_master': ''}\nINFO:tensorflow:Not using Distribute Coordinator.\nINFO:tensorflow:Running training and evaluation locally (non-distributed).\nINFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. 
Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 50000 or save_checkpoints_secs None.\nINFO:tensorflow:Beginning training AdaNet iteration 0\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Building iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:loss = 21.773132, step = 1\nINFO:tensorflow:global_step/sec: 174.652\nINFO:tensorflow:loss = 0.6285208, step = 101 (0.574 sec)\n[... per-step training log truncated ...]\nINFO:tensorflow:global_step/sec: 534.531\nINFO:tensorflow:loss = 0.029657403, step = 13101 (0.187 
sec)\nINFO:tensorflow:global_step/sec: 537.62\nINFO:tensorflow:loss = 0.031518616, step = 13201 (0.186 sec)\nINFO:tensorflow:global_step/sec: 541.298\nINFO:tensorflow:loss = 0.015049446, step = 13301 (0.185 sec)\nINFO:tensorflow:global_step/sec: 545.59\nINFO:tensorflow:loss = 0.023619259, step = 13401 (0.183 sec)\nINFO:tensorflow:global_step/sec: 500.566\nINFO:tensorflow:loss = 0.024361568, step = 13501 (0.200 sec)\nINFO:tensorflow:global_step/sec: 512.76\nINFO:tensorflow:loss = 0.01664589, step = 13601 (0.195 sec)\nINFO:tensorflow:global_step/sec: 546.245\nINFO:tensorflow:loss = 0.015613385, step = 13701 (0.183 sec)\nINFO:tensorflow:global_step/sec: 551.493\nINFO:tensorflow:loss = 0.03519985, step = 13801 (0.182 sec)\nINFO:tensorflow:global_step/sec: 541.102\nINFO:tensorflow:loss = 0.02177224, step = 13901 (0.185 sec)\nINFO:tensorflow:global_step/sec: 532.155\nINFO:tensorflow:loss = 0.015915873, step = 14001 (0.188 sec)\nINFO:tensorflow:global_step/sec: 548.51\nINFO:tensorflow:loss = 0.015847687, step = 14101 (0.182 sec)\nINFO:tensorflow:global_step/sec: 537.098\nINFO:tensorflow:loss = 0.016645633, step = 14201 (0.186 sec)\nINFO:tensorflow:global_step/sec: 523.196\nINFO:tensorflow:loss = 0.020216886, step = 14301 (0.196 sec)\nINFO:tensorflow:global_step/sec: 533.327\nINFO:tensorflow:loss = 0.012887245, step = 14401 (0.183 sec)\nINFO:tensorflow:global_step/sec: 515.81\nINFO:tensorflow:loss = 0.020852203, step = 14501 (0.194 sec)\nINFO:tensorflow:global_step/sec: 531.358\nINFO:tensorflow:loss = 0.028111286, step = 14601 (0.188 sec)\nINFO:tensorflow:global_step/sec: 516.534\nINFO:tensorflow:loss = 0.024844358, step = 14701 (0.193 sec)\nINFO:tensorflow:global_step/sec: 520.536\nINFO:tensorflow:loss = 0.027477147, step = 14801 (0.192 sec)\nINFO:tensorflow:global_step/sec: 536.955\nINFO:tensorflow:loss = 0.04302305, step = 14901 (0.187 sec)\nINFO:tensorflow:global_step/sec: 520.798\nINFO:tensorflow:loss = 0.026721848, step = 15001 (0.192 sec)\nINFO:tensorflow:global_step/sec: 517.735\nINFO:tensorflow:loss = 0.014863384, step = 15101 (0.193 sec)\nINFO:tensorflow:global_step/sec: 460.524\nINFO:tensorflow:loss = 0.02510932, step = 15201 (0.218 sec)\nINFO:tensorflow:global_step/sec: 534.468\nINFO:tensorflow:loss = 0.023844965, step = 15301 (0.187 sec)\nINFO:tensorflow:global_step/sec: 541.968\nINFO:tensorflow:loss = 0.010820297, step = 15401 (0.184 sec)\nINFO:tensorflow:global_step/sec: 511.59\nINFO:tensorflow:loss = 0.020977903, step = 15501 (0.195 sec)\nINFO:tensorflow:global_step/sec: 539.031\nINFO:tensorflow:loss = 0.024180591, step = 15601 (0.186 sec)\nINFO:tensorflow:global_step/sec: 555.753\nINFO:tensorflow:loss = 0.026313858, step = 15701 (0.180 sec)\nINFO:tensorflow:global_step/sec: 528.687\nINFO:tensorflow:loss = 0.036804, step = 15801 (0.189 sec)\nINFO:tensorflow:global_step/sec: 536.075\nINFO:tensorflow:loss = 0.030261764, step = 15901 (0.186 sec)\nINFO:tensorflow:global_step/sec: 535.843\nINFO:tensorflow:loss = 0.025344506, step = 16001 (0.187 sec)\nINFO:tensorflow:global_step/sec: 517.095\nINFO:tensorflow:loss = 0.056984924, step = 16101 (0.193 sec)\nINFO:tensorflow:global_step/sec: 540.246\nINFO:tensorflow:loss = 0.016870756, step = 16201 (0.185 sec)\nINFO:tensorflow:global_step/sec: 533.911\nINFO:tensorflow:loss = 0.03213037, step = 16301 (0.187 sec)\nINFO:tensorflow:global_step/sec: 558.756\nINFO:tensorflow:loss = 0.051552918, step = 16401 (0.179 sec)\nINFO:tensorflow:global_step/sec: 545.384\nINFO:tensorflow:loss = 0.015004854, step = 16501 (0.183 
sec)\nINFO:tensorflow:global_step/sec: 495.314\nINFO:tensorflow:loss = 0.020500047, step = 16601 (0.202 sec)\nINFO:tensorflow:global_step/sec: 514.602\nINFO:tensorflow:loss = 0.026695244, step = 16701 (0.194 sec)\nINFO:tensorflow:global_step/sec: 552.877\nINFO:tensorflow:loss = 0.029320031, step = 16801 (0.181 sec)\nINFO:tensorflow:global_step/sec: 526.172\nINFO:tensorflow:loss = 0.017987214, step = 16901 (0.190 sec)\nINFO:tensorflow:global_step/sec: 563.187\nINFO:tensorflow:loss = 0.02652337, step = 17001 (0.178 sec)\nINFO:tensorflow:global_step/sec: 559.785\nINFO:tensorflow:loss = 0.02373223, step = 17101 (0.180 sec)\nINFO:tensorflow:global_step/sec: 550.152\nINFO:tensorflow:loss = 0.014032694, step = 17201 (0.181 sec)\nINFO:tensorflow:global_step/sec: 562.37\nINFO:tensorflow:loss = 0.023032904, step = 17301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 517.724\nINFO:tensorflow:loss = 0.014849782, step = 17401 (0.193 sec)\nINFO:tensorflow:global_step/sec: 554.139\nINFO:tensorflow:loss = 0.019791802, step = 17501 (0.180 sec)\nINFO:tensorflow:global_step/sec: 553.747\nINFO:tensorflow:loss = 0.024741933, step = 17601 (0.181 sec)\nINFO:tensorflow:global_step/sec: 545.114\nINFO:tensorflow:loss = 0.016027771, step = 17701 (0.183 sec)\nINFO:tensorflow:global_step/sec: 569.788\nINFO:tensorflow:loss = 0.017028531, step = 17801 (0.175 sec)\nINFO:tensorflow:global_step/sec: 540.353\nINFO:tensorflow:loss = 0.01566209, step = 17901 (0.185 sec)\nINFO:tensorflow:global_step/sec: 586.696\nINFO:tensorflow:loss = 0.026907403, step = 18001 (0.171 sec)\nINFO:tensorflow:global_step/sec: 540.879\nINFO:tensorflow:loss = 0.029422838, step = 18101 (0.185 sec)\nINFO:tensorflow:global_step/sec: 574.755\nINFO:tensorflow:loss = 0.02157263, step = 18201 (0.174 sec)\nINFO:tensorflow:global_step/sec: 526.621\nINFO:tensorflow:loss = 0.02905935, step = 18301 (0.190 sec)\nINFO:tensorflow:global_step/sec: 536.11\nINFO:tensorflow:loss = 0.030221801, step = 18401 (0.187 sec)\nINFO:tensorflow:global_step/sec: 546.34\nINFO:tensorflow:loss = 0.017446585, step = 18501 (0.183 sec)\nINFO:tensorflow:global_step/sec: 537.054\nINFO:tensorflow:loss = 0.018040529, step = 18601 (0.186 sec)\nINFO:tensorflow:global_step/sec: 531.011\nINFO:tensorflow:loss = 0.04388584, step = 18701 (0.188 sec)\nINFO:tensorflow:global_step/sec: 534.094\nINFO:tensorflow:loss = 0.009870393, step = 18801 (0.187 sec)\nINFO:tensorflow:global_step/sec: 547.51\nINFO:tensorflow:loss = 0.02640358, step = 18901 (0.183 sec)\nINFO:tensorflow:global_step/sec: 538.756\nINFO:tensorflow:loss = 0.014067678, step = 19001 (0.186 sec)\nINFO:tensorflow:global_step/sec: 533.325\nINFO:tensorflow:loss = 0.029862395, step = 19101 (0.187 sec)\nINFO:tensorflow:global_step/sec: 545.887\nINFO:tensorflow:loss = 0.024341501, step = 19201 (0.183 sec)\nINFO:tensorflow:global_step/sec: 550.327\nINFO:tensorflow:loss = 0.01970948, step = 19301 (0.181 sec)\nINFO:tensorflow:global_step/sec: 541.683\nINFO:tensorflow:loss = 0.01575839, step = 19401 (0.185 sec)\nINFO:tensorflow:global_step/sec: 536.115\nINFO:tensorflow:loss = 0.014000012, step = 19501 (0.186 sec)\nINFO:tensorflow:global_step/sec: 554.613\nINFO:tensorflow:loss = 0.011808527, step = 19601 (0.180 sec)\nINFO:tensorflow:global_step/sec: 548.35\nINFO:tensorflow:loss = 0.011488184, step = 19701 (0.183 sec)\nINFO:tensorflow:global_step/sec: 549.668\nINFO:tensorflow:loss = 0.017856855, step = 19801 (0.182 sec)\nINFO:tensorflow:global_step/sec: 541.225\nINFO:tensorflow:loss = 0.04791218, step = 19901 (0.185 sec)\nINFO:tensorflow:Saving 
INFO:tensorflow:Saving checkpoints for 20000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:28:36\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/model.ckpt-20000\nINFO:tensorflow:Saving candidate 't0_linear' dict for global step 20000: average_loss/adanet/adanet_weighted_ensemble = 0.049419947, loss/adanet/adanet_weighted_ensemble = 0.0625109, prediction/mean/adanet/adanet_weighted_ensemble = 3.1072564 [subnetwork and uniform_average_ensemble variants omitted]\nINFO:tensorflow:Saving candidate 't0_1_layer_dnn' dict for global step 20000: average_loss/adanet/adanet_weighted_ensemble = 0.04015306, loss/adanet/adanet_weighted_ensemble = 0.054008663, prediction/mean/adanet/adanet_weighted_ensemble = 3.1601584 [variants omitted]\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:28:38\nINFO:tensorflow:Saving dict for global step 20000: average_loss = 0.04015306, global_step = 20000, label/mean = 3.1049454, loss = 0.054008663, prediction/mean = 3.1601584\nINFO:tensorflow:Loss for final step: 0.034532204.\nINFO:tensorflow:Finished training Adanet iteration 0\nINFO:tensorflow:Beginning bookkeeping phase for iteration 0\nINFO:tensorflow:Starting ensemble evaluation for iteration 0\nWARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. You can safely remove the call to this deprecated function. [this deprecation warning recurs after every ensemble evaluation and is omitted below]\nINFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t0_linear = 0.035082, adanet_loss/t0_1_layer_dnn = 0.021061\nINFO:tensorflow:Finished ensemble evaluation for iteration 0\nINFO:tensorflow:'t0_1_layer_dnn' at index 1 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-0.txt: ['0:1_layer_dnn'].\nINFO:tensorflow:Warm-starting from: (u'/tmp/tmpDexXZd/model.ckpt-20000',) [12 per-variable warm-starting messages omitted]\nINFO:tensorflow:Building iteration 1\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Overwriting checkpoint with new graph for iteration 1 to /tmp/tmpDexXZd/model.ckpt-20000
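The log above shows one complete AdaNet iteration: a fixed pool of candidate subnetworks ('linear' and '1_layer_dnn') is trained for max_iteration_steps global steps, each candidate ensemble is evaluated, the candidate with the lowest adanet_loss is kept, and the next iteration is warm-started from its checkpoint with deeper candidates. Below is a minimal sketch of an estimator configuration consistent with this output; SimpleDNNGenerator, train_input_fn and eval_input_fn are hypothetical placeholders for the generator and input functions defined elsewhere in this notebook, not part of the adanet API itself.

import adanet
import tensorflow as tf

# Sketch only (TF 1.x): SimpleDNNGenerator stands in for a user-defined
# adanet.subnetwork.Generator that proposes 'linear'/'N_layer_dnn' candidates.
estimator = adanet.Estimator(
    head=tf.contrib.estimator.regression_head(
        loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE),
    subnetwork_generator=SimpleDNNGenerator(
        optimizer=tf.train.RMSPropOptimizer(learning_rate=0.001)),
    # One AdaNet iteration = 20000 global steps, matching the iteration
    # boundaries at steps 20000 and 40000 in the log above.
    max_iteration_steps=20000,
    # Ranks candidate ensembles by adanet_loss after each iteration.
    evaluator=adanet.Evaluator(input_fn=train_input_fn),
    config=tf.estimator.RunConfig(save_checkpoints_steps=50000))

tf.estimator.train_and_evaluate(
    estimator,
    train_spec=tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=60000),
    eval_spec=tf.estimator.EvalSpec(input_fn=eval_input_fn, steps=None))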
INFO:tensorflow:Finished bookkeeping phase for iteration 0\nINFO:tensorflow:Beginning training AdaNet iteration 1\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-0.txt: ['0:1_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building iteration 1\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/increment.ckpt-1\nINFO:tensorflow:Saving checkpoints for 20000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:loss = 0.027296644, step = 20001\n[... ~195 near-identical per-step log lines omitted: during iteration 1 the loss fluctuates between roughly 0.007 and 0.04, at ~500 global steps/sec ...]\nINFO:tensorflow:loss = 0.008865875, step = 39501 (0.224 sec)
INFO:tensorflow:loss = 0.017114088, step = 39901 (0.234 sec)\nINFO:tensorflow:Saving checkpoints for 40000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:29:36\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/model.ckpt-40000\nINFO:tensorflow:Saving candidate 't0_1_layer_dnn' dict for global step 40000: average_loss/adanet/adanet_weighted_ensemble = 0.04015306, loss/adanet/adanet_weighted_ensemble = 0.054008663 [unchanged from iteration 0; variants omitted]\nINFO:tensorflow:Saving candidate 't1_1_layer_dnn' dict for global step 40000: average_loss/adanet/adanet_weighted_ensemble = 0.03752409, loss/adanet/adanet_weighted_ensemble = 0.048775025 [variants omitted]\nINFO:tensorflow:Saving candidate 't1_2_layer_dnn' dict for global step 40000: average_loss/adanet/adanet_weighted_ensemble = 0.03245418, loss/adanet/adanet_weighted_ensemble = 0.042194255 [variants omitted]\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:29:39\nINFO:tensorflow:Saving dict for global step 40000: average_loss = 0.03245418, global_step = 40000, label/mean = 3.1049454, loss = 0.042194255, prediction/mean = 3.122271\nINFO:tensorflow:Loss for final step: 0.013128125.\nINFO:tensorflow:Finished training Adanet iteration 1\nINFO:tensorflow:Beginning bookkeeping phase for iteration 1\nINFO:tensorflow:Starting ensemble evaluation for iteration 1
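For scale: the step-40000 dict above reports average_loss = 0.0325 on the held-out set against label/mean = 3.105. Assuming average_loss is per-example mean squared error (the standard metric reported by a TensorFlow regression head), this corresponds to an RMSE of sqrt(0.0325) ≈ 0.180, an average error of roughly 5.8% of the mean label, improved from sqrt(0.0402) ≈ 0.200 at the end of iteration 0.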
INFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t0_1_layer_dnn = 0.021061, adanet_loss/t1_1_layer_dnn = 0.016978, adanet_loss/t1_2_layer_dnn = 0.011639\nINFO:tensorflow:Finished ensemble evaluation for iteration 1\nINFO:tensorflow:'t1_2_layer_dnn' at index 2 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-1.txt: ['0:1_layer_dnn', '1:2_layer_dnn'].\nINFO:tensorflow:Warm-starting from: (u'/tmp/tmpDexXZd/model.ckpt-40000',) [per-variable warm-starting messages omitted]\nINFO:tensorflow:Building iteration 2\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Overwriting checkpoint with new graph for iteration 2 to /tmp/tmpDexXZd/model.ckpt-40000
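The numbers logged above are also available programmatically: Estimator.evaluate returns a dict whose keys match the 'Saving dict' line, and the serialized 'architecture/adanet/ensembles' entry records which subnetworks make up the chosen ensemble (here '| 1_layer_dnn | 2_layer_dnn |'). A short sketch, assuming the estimator and a test_input_fn defined earlier in the notebook:

results = estimator.evaluate(input_fn=test_input_fn, steps=None)
print(results["average_loss"])  # e.g. 0.0325 at global step 40000 in the run above
print(results["global_step"])   # e.g. 40000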
INFO:tensorflow:Finished bookkeeping phase for iteration 1\nINFO:tensorflow:Beginning training AdaNet iteration 2\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-1.txt: ['0:1_layer_dnn', '1:2_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building iteration 2\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/increment.ckpt-2\nINFO:tensorflow:Saving checkpoints for 40000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:loss = 0.0137471, step = 40001\n[... ~120 near-identical per-step log lines omitted: during iteration 2 the loss fluctuates between roughly 0.004 and 0.02, at roughly 360-490 global steps/sec ...]\nINFO:tensorflow:loss = 0.0061250767, step = 52601 (0.207 
sec)\nINFO:tensorflow:global_step/sec: 485.171\nINFO:tensorflow:loss = 0.012142177, step = 52701 (0.206 sec)\nINFO:tensorflow:global_step/sec: 483.444\nINFO:tensorflow:loss = 0.012386767, step = 52801 (0.207 sec)\nINFO:tensorflow:global_step/sec: 443.19\nINFO:tensorflow:loss = 0.0108658485, step = 52901 (0.226 sec)\nINFO:tensorflow:global_step/sec: 449.531\nINFO:tensorflow:loss = 0.012192146, step = 53001 (0.222 sec)\nINFO:tensorflow:global_step/sec: 448.776\nINFO:tensorflow:loss = 0.007505279, step = 53101 (0.223 sec)\nINFO:tensorflow:global_step/sec: 487.781\nINFO:tensorflow:loss = 0.004307149, step = 53201 (0.205 sec)\nINFO:tensorflow:global_step/sec: 437.103\nINFO:tensorflow:loss = 0.0076565198, step = 53301 (0.229 sec)\nINFO:tensorflow:global_step/sec: 477.086\nINFO:tensorflow:loss = 0.011304293, step = 53401 (0.209 sec)\nINFO:tensorflow:global_step/sec: 447.387\nINFO:tensorflow:loss = 0.00830624, step = 53501 (0.224 sec)\nINFO:tensorflow:global_step/sec: 485.999\nINFO:tensorflow:loss = 0.010162868, step = 53601 (0.206 sec)\nINFO:tensorflow:global_step/sec: 465.03\nINFO:tensorflow:loss = 0.004446827, step = 53701 (0.215 sec)\nINFO:tensorflow:global_step/sec: 450.418\nINFO:tensorflow:loss = 0.010949729, step = 53801 (0.222 sec)\nINFO:tensorflow:global_step/sec: 460.396\nINFO:tensorflow:loss = 0.0057958183, step = 53901 (0.217 sec)\nINFO:tensorflow:global_step/sec: 389.695\nINFO:tensorflow:loss = 0.0049026543, step = 54001 (0.257 sec)\nINFO:tensorflow:global_step/sec: 472.552\nINFO:tensorflow:loss = 0.010716478, step = 54101 (0.212 sec)\nINFO:tensorflow:global_step/sec: 471.338\nINFO:tensorflow:loss = 0.0073531447, step = 54201 (0.212 sec)\nINFO:tensorflow:global_step/sec: 490.99\nINFO:tensorflow:loss = 0.007303466, step = 54301 (0.204 sec)\nINFO:tensorflow:global_step/sec: 485.63\nINFO:tensorflow:loss = 0.0046647578, step = 54401 (0.206 sec)\nINFO:tensorflow:global_step/sec: 491.174\nINFO:tensorflow:loss = 0.00559157, step = 54501 (0.204 sec)\nINFO:tensorflow:global_step/sec: 483.821\nINFO:tensorflow:loss = 0.010638798, step = 54601 (0.206 sec)\nINFO:tensorflow:global_step/sec: 476.122\nINFO:tensorflow:loss = 0.0096184425, step = 54701 (0.210 sec)\nINFO:tensorflow:global_step/sec: 477.35\nINFO:tensorflow:loss = 0.01297844, step = 54801 (0.210 sec)\nINFO:tensorflow:global_step/sec: 479.378\nINFO:tensorflow:loss = 0.009132976, step = 54901 (0.208 sec)\nINFO:tensorflow:global_step/sec: 459.948\nINFO:tensorflow:loss = 0.015770674, step = 55001 (0.217 sec)\nINFO:tensorflow:global_step/sec: 482.795\nINFO:tensorflow:loss = 0.010697407, step = 55101 (0.207 sec)\nINFO:tensorflow:global_step/sec: 472.72\nINFO:tensorflow:loss = 0.009993464, step = 55201 (0.211 sec)\nINFO:tensorflow:global_step/sec: 439.491\nINFO:tensorflow:loss = 0.011722613, step = 55301 (0.228 sec)\nINFO:tensorflow:global_step/sec: 470.181\nINFO:tensorflow:loss = 0.0075947065, step = 55401 (0.213 sec)\nINFO:tensorflow:global_step/sec: 481.406\nINFO:tensorflow:loss = 0.013326233, step = 55501 (0.208 sec)\nINFO:tensorflow:global_step/sec: 481.447\nINFO:tensorflow:loss = 0.009337759, step = 55601 (0.208 sec)\nINFO:tensorflow:global_step/sec: 477.784\nINFO:tensorflow:loss = 0.0060269767, step = 55701 (0.209 sec)\nINFO:tensorflow:global_step/sec: 472.844\nINFO:tensorflow:loss = 0.015555512, step = 55801 (0.212 sec)\nINFO:tensorflow:global_step/sec: 478.758\nINFO:tensorflow:loss = 0.010265168, step = 55901 (0.209 sec)\nINFO:tensorflow:global_step/sec: 486.532\nINFO:tensorflow:loss = 0.008330882, step = 56001 (0.205 
sec)\nINFO:tensorflow:global_step/sec: 475.405\nINFO:tensorflow:loss = 0.009440938, step = 56101 (0.211 sec)\nINFO:tensorflow:global_step/sec: 461.349\nINFO:tensorflow:loss = 0.006797244, step = 56201 (0.217 sec)\nINFO:tensorflow:global_step/sec: 479.669\nINFO:tensorflow:loss = 0.0061167153, step = 56301 (0.209 sec)\nINFO:tensorflow:global_step/sec: 468.132\nINFO:tensorflow:loss = 0.008877866, step = 56401 (0.213 sec)\nINFO:tensorflow:global_step/sec: 487.572\nINFO:tensorflow:loss = 0.00635955, step = 56501 (0.205 sec)\nINFO:tensorflow:global_step/sec: 461.533\nINFO:tensorflow:loss = 0.00695836, step = 56601 (0.217 sec)\nINFO:tensorflow:global_step/sec: 479.64\nINFO:tensorflow:loss = 0.007901347, step = 56701 (0.208 sec)\nINFO:tensorflow:global_step/sec: 473.42\nINFO:tensorflow:loss = 0.0077539934, step = 56801 (0.212 sec)\nINFO:tensorflow:global_step/sec: 468.731\nINFO:tensorflow:loss = 0.0040710955, step = 56901 (0.213 sec)\nINFO:tensorflow:global_step/sec: 460.248\nINFO:tensorflow:loss = 0.009691806, step = 57001 (0.217 sec)\nINFO:tensorflow:global_step/sec: 465.523\nINFO:tensorflow:loss = 0.01473847, step = 57101 (0.215 sec)\nINFO:tensorflow:global_step/sec: 451.245\nINFO:tensorflow:loss = 0.008093638, step = 57201 (0.222 sec)\nINFO:tensorflow:global_step/sec: 465.268\nINFO:tensorflow:loss = 0.0048230374, step = 57301 (0.215 sec)\nINFO:tensorflow:global_step/sec: 463.779\nINFO:tensorflow:loss = 0.006130575, step = 57401 (0.216 sec)\nINFO:tensorflow:global_step/sec: 465.4\nINFO:tensorflow:loss = 0.007890692, step = 57501 (0.215 sec)\nINFO:tensorflow:global_step/sec: 446.58\nINFO:tensorflow:loss = 0.008413052, step = 57601 (0.224 sec)\nINFO:tensorflow:global_step/sec: 461.431\nINFO:tensorflow:loss = 0.004950462, step = 57701 (0.217 sec)\nINFO:tensorflow:global_step/sec: 478.771\nINFO:tensorflow:loss = 0.010200614, step = 57801 (0.209 sec)\nINFO:tensorflow:global_step/sec: 457.521\nINFO:tensorflow:loss = 0.0050936504, step = 57901 (0.219 sec)\nINFO:tensorflow:global_step/sec: 483.361\nINFO:tensorflow:loss = 0.009040279, step = 58001 (0.207 sec)\nINFO:tensorflow:global_step/sec: 453.708\nINFO:tensorflow:loss = 0.012236675, step = 58101 (0.220 sec)\nINFO:tensorflow:global_step/sec: 459.548\nINFO:tensorflow:loss = 0.0077486634, step = 58201 (0.218 sec)\nINFO:tensorflow:global_step/sec: 481.926\nINFO:tensorflow:loss = 0.013432663, step = 58301 (0.207 sec)\nINFO:tensorflow:global_step/sec: 465.162\nINFO:tensorflow:loss = 0.009004887, step = 58401 (0.215 sec)\nINFO:tensorflow:global_step/sec: 448.35\nINFO:tensorflow:loss = 0.007098233, step = 58501 (0.223 sec)\nINFO:tensorflow:global_step/sec: 463.394\nINFO:tensorflow:loss = 0.007232779, step = 58601 (0.216 sec)\nINFO:tensorflow:global_step/sec: 443.679\nINFO:tensorflow:loss = 0.004959191, step = 58701 (0.225 sec)\nINFO:tensorflow:global_step/sec: 473.83\nINFO:tensorflow:loss = 0.006908235, step = 58801 (0.211 sec)\nINFO:tensorflow:global_step/sec: 476.467\nINFO:tensorflow:loss = 0.0141474735, step = 58901 (0.210 sec)\nINFO:tensorflow:global_step/sec: 461.393\nINFO:tensorflow:loss = 0.008379664, step = 59001 (0.217 sec)\nINFO:tensorflow:global_step/sec: 495.687\nINFO:tensorflow:loss = 0.010867199, step = 59101 (0.203 sec)\nINFO:tensorflow:global_step/sec: 485.229\nINFO:tensorflow:loss = 0.0071855583, step = 59201 (0.205 sec)\nINFO:tensorflow:global_step/sec: 482.091\nINFO:tensorflow:loss = 0.0051718047, step = 59301 (0.208 sec)\nINFO:tensorflow:global_step/sec: 464.531\nINFO:tensorflow:loss = 0.009021869, step = 59401 (0.215 
sec)\nINFO:tensorflow:global_step/sec: 474.012\nINFO:tensorflow:loss = 0.005612652, step = 59501 (0.211 sec)\nINFO:tensorflow:global_step/sec: 458.11\nINFO:tensorflow:loss = 0.0084251575, step = 59601 (0.218 sec)\nINFO:tensorflow:global_step/sec: 459.245\nINFO:tensorflow:loss = 0.0064932797, step = 59701 (0.218 sec)\nINFO:tensorflow:global_step/sec: 482.2\nINFO:tensorflow:loss = 0.009190215, step = 59801 (0.208 sec)\nINFO:tensorflow:global_step/sec: 502.836\nINFO:tensorflow:loss = 0.0087354295, step = 59901 (0.199 sec)\nINFO:tensorflow:Saving checkpoints for 60000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-1.txt: ['0:1_layer_dnn', '1:2_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building iteration 2\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:30:45\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/model.ckpt-60000\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving candidate 't1_2_layer_dnn' dict for global step 60000: architecture/adanet/ensembles = \no\n>adanet/iteration_1/ensemble_t1_2_layer_dnn/architecture/adanetB#\b\u0007\u0012\u0000B\u001d| 1_layer_dnn | 2_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.03245418, average_loss/adanet/subnetwork = 0.032510567, average_loss/adanet/uniform_average_ensemble = 0.034043197, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.042194255, loss/adanet/subnetwork = 0.042689238, loss/adanet/uniform_average_ensemble = 0.045813102, prediction/mean/adanet/adanet_weighted_ensemble = 3.122271, prediction/mean/adanet/subnetwork = 3.1452672, prediction/mean/adanet/uniform_average_ensemble = 3.151645\nINFO:tensorflow:Saving candidate 't2_2_layer_dnn' dict for global step 60000: architecture/adanet/ensembles = \n}\n>adanet/iteration_2/ensemble_t2_2_layer_dnn/architecture/adanetB1\b\u0007\u0012\u0000B+| 1_layer_dnn | 2_layer_dnn | 2_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.029876871, average_loss/adanet/subnetwork = 0.032713592, average_loss/adanet/uniform_average_ensemble = 0.031925786, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.03766784, loss/adanet/subnetwork = 0.043944843, loss/adanet/uniform_average_ensemble = 0.043711387, prediction/mean/adanet/adanet_weighted_ensemble = 3.1007698, prediction/mean/adanet/subnetwork = 3.1556947, prediction/mean/adanet/uniform_average_ensemble = 3.1529949\nINFO:tensorflow:Saving candidate 't2_3_layer_dnn' dict for global step 60000: architecture/adanet/ensembles = \n}\n>adanet/iteration_2/ensemble_t2_3_layer_dnn/architecture/adanetB1\b\u0007\u0012\u0000B+| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.030102361, average_loss/adanet/subnetwork = 0.032910354, 
average_loss/adanet/uniform_average_ensemble = 0.03231746, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.038314007, loss/adanet/subnetwork = 0.043785788, loss/adanet/uniform_average_ensemble = 0.04384736, prediction/mean/adanet/adanet_weighted_ensemble = 3.1021514, prediction/mean/adanet/subnetwork = 3.134045, prediction/mean/adanet/uniform_average_ensemble = 3.1457782\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:30:49\nINFO:tensorflow:Saving dict for global step 60000: average_loss = 0.030102361, average_loss/adanet/adanet_weighted_ensemble = 0.030102361, average_loss/adanet/subnetwork = 0.032910354, average_loss/adanet/uniform_average_ensemble = 0.03231746, global_step = 60000, label/mean = 3.1049454, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss = 0.038314007, loss/adanet/adanet_weighted_ensemble = 0.038314007, loss/adanet/subnetwork = 0.043785788, loss/adanet/uniform_average_ensemble = 0.04384736, prediction/mean = 3.1021514, prediction/mean/adanet/adanet_weighted_ensemble = 3.1021514, prediction/mean/adanet/subnetwork = 3.134045, prediction/mean/adanet/uniform_average_ensemble = 3.1457782\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 60000: /tmp/tmpDexXZd/model.ckpt-60000\nINFO:tensorflow:Loss for final step: 0.0064613554.\nINFO:tensorflow:Finished training Adanet iteration 2\nINFO:tensorflow:Beginning bookkeeping phase for iteration 2\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-1.txt: ['0:1_layer_dnn', '1:2_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building iteration 2\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Starting ensemble evaluation for iteration 2\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/model.ckpt-60000\nWARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. 
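The evaluation block above is the informative part of this output: at the end of each AdaNet iteration, three candidate ensembles are scored on held-out average_loss, while the thousands of per-step "loss = ..." lines are only progress chatter. On a rerun, that chatter can be muted without changing training at all. A minimal sketch, assuming TensorFlow 1.x (the version this notebook runs against):

import tensorflow as tf

# Raise the logging threshold so INFO-level per-step lines are hidden
# while WARNING and ERROR messages (like the deprecation notice above)
# still appear. This only affects what is printed, not what is trained.
tf.logging.set_verbosity(tf.logging.WARN)

No evaluation information is lost by doing this: an Estimator's evaluate(input_fn=...) call still returns the same metrics dict (average_loss, loss, label/mean, prediction/mean) printed in the "Saving dict for global step 60000" line, and summaries are still written for TensorBoard.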
INFO:tensorflow:Beginning bookkeeping phase for iteration 2\nINFO:tensorflow:Starting ensemble evaluation for iteration 2\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/model.ckpt-60000\nINFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t1_2_layer_dnn = 0.011639, adanet_loss/t2_2_layer_dnn = 0.008030, adanet_loss/t2_3_layer_dnn = 0.007307\nINFO:tensorflow:Finished ensemble evaluation for iteration 2\nINFO:tensorflow:'t2_3_layer_dnn' at index 2 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-2.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn'].\nINFO:tensorflow:Warm-starting from: (u'/tmp/tmpDexXZd/model.ckpt-60000',)\n[warm-starting log omitted: every variable from iterations 0-2 is restored with prev_var_name: Unchanged]\nINFO:tensorflow:Building iteration 3\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Overwriting checkpoint with new graph for iteration 3 to /tmp/tmpDexXZd/model.ckpt-60000\nINFO:tensorflow:Finished bookkeeping phase for iteration 2\nINFO:tensorflow:Beginning training AdaNet iteration 3\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/increment.ckpt-3\nINFO:tensorflow:Saving checkpoints for 60000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:loss = 0.008016532, step = 60001
[per-step training log omitted: loss fluctuates between roughly 0.003 and 0.014 from step 60101 through step 79901, ending at loss = 0.007405264, at roughly 400-500 global steps/sec]\nINFO:tensorflow:Saving checkpoints for 80000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:32:02\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/model.ckpt-80000\nINFO:tensorflow:Saving candidate 't2_3_layer_dnn' dict for global step 80000: | 1_layer_dnn | 2_layer_dnn | 3_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.030102361\nINFO:tensorflow:Saving candidate 't3_3_layer_dnn' dict for global step 80000: | 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 3_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.032788813\nINFO:tensorflow:Saving candidate 't3_4_layer_dnn' dict for global step 80000: | 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.03169271\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:32:06\nINFO:tensorflow:Saving dict for global step 80000: average_loss = 0.03169271, label/mean = 3.1049454, loss = 0.038267143, prediction/mean = 3.067704\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 80000: /tmp/tmpDexXZd/model.ckpt-80000\nINFO:tensorflow:Loss for final step: 0.0047632405.\nINFO:tensorflow:Finished training Adanet iteration 3
INFO:tensorflow:Beginning bookkeeping phase for iteration 3\nINFO:tensorflow:Starting ensemble evaluation for iteration 3\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/model.ckpt-80000\nINFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t2_3_layer_dnn = 0.007307, adanet_loss/t3_3_layer_dnn = 0.006105, adanet_loss/t3_4_layer_dnn = 0.005626\nINFO:tensorflow:Finished ensemble evaluation for iteration 3\nINFO:tensorflow:'t3_4_layer_dnn' at index 2 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-3.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn', '3:4_layer_dnn'].
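Reading the two bookkeeping phases together shows the search behaving as intended: the complexity-regularized adanet_loss of the champion ensemble drops from 0.011639 (the two-subnetwork ensemble carried over from iteration 1) to 0.007307 when the 3-layer subnetwork joins, and to 0.005626 when the 4-layer subnetwork joins, so each deeper candidate has so far earned its extra capacity. A small sketch that only replays those numbers, transcribed from the "Computed ensemble metrics" lines in this output (nothing is recomputed):

# Per-iteration champions and their complexity-regularized losses, copied
# verbatim from the "Computed ensemble metrics" log lines above.
selections = [
    (1, 't1_2_layer_dnn', 0.011639),
    (2, 't2_3_layer_dnn', 0.007307),
    (3, 't3_4_layer_dnn', 0.005626),
]
for iteration, champion, adanet_loss in selections:
    print('iteration %d: %s, adanet_loss = %.6f' % (iteration, champion, adanet_loss))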
Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t3_4_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t3_4_layer_dnn/adanet/iteration_3/candidate_t3_4_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t1_2_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_4/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_1/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: global_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t3_4_layer_dnn/adanet/iteration_3/candidate_t3_4_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_1_layer_dnn/adanet/iteration_0/candidate_t0_1_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t2_3_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_4/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_0/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t0_1_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense_2/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_3/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t2_3_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t1_2_layer_dnn/adanet/iteration_1/candidate_t1_2_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_0/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_2/bias; 
prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_1/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_1/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_1_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_2/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense_1/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_1_layer_dnn/weighted_subnetwork_0/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense_1/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t1_2_layer_dnn/adanet/iteration_1/candidate_t1_2_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_2/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/subnetwork/dense/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t1_2_layer_dnn/adanet/iteration_2/candidate_t1_2_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t0_1_layer_dnn/adanet/iteration_1/candidate_t0_1_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t0_1_layer_dnn/adanet/iteration_1/candidate_t0_1_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/ensemble_t3_4_layer_dnn/weighted_subnetwork_3/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: 
adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense_2/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_2_layer_dnn/weighted_subnetwork_1/subnetwork/dense/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t2_3_layer_dnn/adanet/iteration_3/candidate_t2_3_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_3/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_3/candidate_t2_3_layer_dnn/adanet/iteration_3/candidate_t2_3_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/subnetwork/dense_2/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t1_2_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t1_2_layer_dnn/adanet/iteration_2/candidate_t1_2_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_3_layer_dnn/weighted_subnetwork_2/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Building iteration 4\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building subnetwork '5_layer_dnn'\nINFO:tensorflow:Overwriting checkpoint with new graph for iteration 4 to /tmp/tmpDexXZd/model.ckpt-80000\nWARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. 
You can safely remove the call to this deprecated function.\nINFO:tensorflow:Finished bookkeeping phase for iteration 3\nINFO:tensorflow:Beginning training AdaNet iteration 4\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-3.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn', '3:4_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 2\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 3\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building iteration 4\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building subnetwork '5_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/increment.ckpt-4\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving checkpoints for 80000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:loss = 0.005383206, step = 80001\nINFO:tensorflow:global_step/sec: 78.49\nINFO:tensorflow:loss = 0.0064166253, step = 80101 (1.275 sec)\nINFO:tensorflow:global_step/sec: 418.887\nINFO:tensorflow:loss = 0.008984626, step = 80201 (0.238 sec)\nINFO:tensorflow:global_step/sec: 405.903\nINFO:tensorflow:loss = 0.004674765, step = 80301 (0.246 sec)\nINFO:tensorflow:global_step/sec: 406.285\nINFO:tensorflow:loss = 0.00909529, step = 80401 (0.246 sec)\nINFO:tensorflow:global_step/sec: 431.88\nINFO:tensorflow:loss = 0.002272259, step = 80501 (0.232 sec)\nINFO:tensorflow:global_step/sec: 430.737\nINFO:tensorflow:loss = 0.0045274086, step = 80601 (0.232 sec)\nINFO:tensorflow:global_step/sec: 413.445\nINFO:tensorflow:loss = 0.0054803616, step = 80701 (0.242 sec)\nINFO:tensorflow:global_step/sec: 416.309\nINFO:tensorflow:loss = 0.0052720383, step = 80801 (0.240 sec)\nINFO:tensorflow:global_step/sec: 400.421\nINFO:tensorflow:loss = 0.008549853, step = 80901 (0.250 sec)\nINFO:tensorflow:global_step/sec: 408.361\nINFO:tensorflow:loss = 0.008121803, step = 81001 (0.244 sec)\nINFO:tensorflow:global_step/sec: 414.845\nINFO:tensorflow:loss = 0.0069520036, step = 81101 (0.241 sec)\nINFO:tensorflow:global_step/sec: 408.255\nINFO:tensorflow:loss = 0.0076366886, step = 81201 (0.245 sec)\nINFO:tensorflow:global_step/sec: 421.117\nINFO:tensorflow:loss = 0.003278641, step = 81301 (0.237 sec)\nINFO:tensorflow:global_step/sec: 429.802\nINFO:tensorflow:loss = 0.0028751101, step = 81401 (0.238 sec)\nINFO:tensorflow:global_step/sec: 420.822\nINFO:tensorflow:loss = 0.0037532323, step = 81501 (0.232 sec)\nINFO:tensorflow:global_step/sec: 436.449\nINFO:tensorflow:loss = 0.0046784272, step = 81601 (0.229 sec)\nINFO:tensorflow:global_step/sec: 413.609\nINFO:tensorflow:loss = 0.007498023, step = 81701 (0.242 sec)\nINFO:tensorflow:global_step/sec: 436.12\nINFO:tensorflow:loss = 0.0050562844, step = 81801 (0.229 sec)\nINFO:tensorflow:global_step/sec: 451.214\nINFO:tensorflow:loss = 0.0062173824, step = 81901 (0.222 sec)\nINFO:tensorflow:global_step/sec: 431.908\nINFO:tensorflow:loss = 0.0070055285, step = 82001 (0.232 sec)\nINFO:tensorflow:global_step/sec: 446.57\nINFO:tensorflow:loss = 0.0067753876, step = 82101 (0.224 sec)\nINFO:tensorflow:global_step/sec: 433.507\nINFO:tensorflow:loss = 0.0034281143, step = 
82201 (0.231 sec)\nINFO:tensorflow:global_step/sec: 420.337\nINFO:tensorflow:loss = 0.004406165, step = 82301 (0.238 sec)\nINFO:tensorflow:global_step/sec: 438.057\nINFO:tensorflow:loss = 0.0028600814, step = 82401 (0.228 sec)\nINFO:tensorflow:global_step/sec: 417.812\nINFO:tensorflow:loss = 0.005227585, step = 82501 (0.239 sec)\nINFO:tensorflow:global_step/sec: 423.689\nINFO:tensorflow:loss = 0.0023829981, step = 82601 (0.236 sec)\nINFO:tensorflow:global_step/sec: 439.427\nINFO:tensorflow:loss = 0.0073684314, step = 82701 (0.227 sec)\nINFO:tensorflow:global_step/sec: 450.432\nINFO:tensorflow:loss = 0.005705492, step = 82801 (0.222 sec)\nINFO:tensorflow:global_step/sec: 436.91\nINFO:tensorflow:loss = 0.005273375, step = 82901 (0.229 sec)\nINFO:tensorflow:global_step/sec: 424.995\nINFO:tensorflow:loss = 0.0046031997, step = 83001 (0.235 sec)\nINFO:tensorflow:global_step/sec: 429.823\nINFO:tensorflow:loss = 0.0045557586, step = 83101 (0.233 sec)\nINFO:tensorflow:global_step/sec: 439.452\nINFO:tensorflow:loss = 0.005674702, step = 83201 (0.227 sec)\nINFO:tensorflow:global_step/sec: 437.522\nINFO:tensorflow:loss = 0.0070310174, step = 83301 (0.229 sec)\nINFO:tensorflow:global_step/sec: 435.981\nINFO:tensorflow:loss = 0.008453, step = 83401 (0.229 sec)\nINFO:tensorflow:global_step/sec: 430.563\nINFO:tensorflow:loss = 0.0069910833, step = 83501 (0.232 sec)\nINFO:tensorflow:global_step/sec: 449.976\nINFO:tensorflow:loss = 0.0055006915, step = 83601 (0.222 sec)\nINFO:tensorflow:global_step/sec: 432.556\nINFO:tensorflow:loss = 0.006325035, step = 83701 (0.231 sec)\nINFO:tensorflow:global_step/sec: 427.639\nINFO:tensorflow:loss = 0.007590879, step = 83801 (0.234 sec)\nINFO:tensorflow:global_step/sec: 410.661\nINFO:tensorflow:loss = 0.005499037, step = 83901 (0.244 sec)\nINFO:tensorflow:global_step/sec: 436.698\nINFO:tensorflow:loss = 0.004665518, step = 84001 (0.229 sec)\nINFO:tensorflow:global_step/sec: 442.345\nINFO:tensorflow:loss = 0.0043933718, step = 84101 (0.226 sec)\nINFO:tensorflow:global_step/sec: 419.928\nINFO:tensorflow:loss = 0.0033534751, step = 84201 (0.238 sec)\nINFO:tensorflow:global_step/sec: 404.658\nINFO:tensorflow:loss = 0.004103374, step = 84301 (0.247 sec)\nINFO:tensorflow:global_step/sec: 405.912\nINFO:tensorflow:loss = 0.007383922, step = 84401 (0.247 sec)\nINFO:tensorflow:global_step/sec: 413.824\nINFO:tensorflow:loss = 0.004536817, step = 84501 (0.242 sec)\nINFO:tensorflow:global_step/sec: 438.612\nINFO:tensorflow:loss = 0.008081801, step = 84601 (0.228 sec)\nINFO:tensorflow:global_step/sec: 434.239\nINFO:tensorflow:loss = 0.0052011255, step = 84701 (0.230 sec)\nINFO:tensorflow:global_step/sec: 410.951\nINFO:tensorflow:loss = 0.0058519435, step = 84801 (0.243 sec)\nINFO:tensorflow:global_step/sec: 424.944\nINFO:tensorflow:loss = 0.0044878363, step = 84901 (0.235 sec)\nINFO:tensorflow:global_step/sec: 434.751\nINFO:tensorflow:loss = 0.00481012, step = 85001 (0.230 sec)\nINFO:tensorflow:global_step/sec: 423.553\nINFO:tensorflow:loss = 0.0026486877, step = 85101 (0.236 sec)\nINFO:tensorflow:global_step/sec: 421.31\nINFO:tensorflow:loss = 0.0034918466, step = 85201 (0.238 sec)\nINFO:tensorflow:global_step/sec: 430.698\nINFO:tensorflow:loss = 0.00640889, step = 85301 (0.232 sec)\nINFO:tensorflow:global_step/sec: 408.754\nINFO:tensorflow:loss = 0.0045417673, step = 85401 (0.244 sec)\nINFO:tensorflow:global_step/sec: 424.719\nINFO:tensorflow:loss = 0.0033178735, step = 85501 (0.236 sec)\nINFO:tensorflow:global_step/sec: 425.554\nINFO:tensorflow:loss = 0.005359968, step = 85601 
(0.235 sec)\nINFO:tensorflow:global_step/sec: 414.381\nINFO:tensorflow:loss = 0.0055321874, step = 85701 (0.241 sec)\nINFO:tensorflow:global_step/sec: 408.661\nINFO:tensorflow:loss = 0.004864615, step = 85801 (0.245 sec)\nINFO:tensorflow:global_step/sec: 438.687\nINFO:tensorflow:loss = 0.005201213, step = 85901 (0.228 sec)\nINFO:tensorflow:global_step/sec: 425.644\nINFO:tensorflow:loss = 0.006251228, step = 86001 (0.239 sec)\nINFO:tensorflow:global_step/sec: 429.677\nINFO:tensorflow:loss = 0.006049957, step = 86101 (0.228 sec)\nINFO:tensorflow:global_step/sec: 453.875\nINFO:tensorflow:loss = 0.008047962, step = 86201 (0.221 sec)\nINFO:tensorflow:global_step/sec: 438.164\nINFO:tensorflow:loss = 0.0037406448, step = 86301 (0.228 sec)\nINFO:tensorflow:global_step/sec: 447.966\nINFO:tensorflow:loss = 0.0044812597, step = 86401 (0.223 sec)\nINFO:tensorflow:global_step/sec: 432.292\nINFO:tensorflow:loss = 0.006492268, step = 86501 (0.231 sec)\nINFO:tensorflow:global_step/sec: 414.309\nINFO:tensorflow:loss = 0.0048469296, step = 86601 (0.241 sec)\nINFO:tensorflow:global_step/sec: 420.226\nINFO:tensorflow:loss = 0.004090667, step = 86701 (0.238 sec)\nINFO:tensorflow:global_step/sec: 431.527\nINFO:tensorflow:loss = 0.004442987, step = 86801 (0.233 sec)\nINFO:tensorflow:global_step/sec: 422.69\nINFO:tensorflow:loss = 0.0048192637, step = 86901 (0.236 sec)\nINFO:tensorflow:global_step/sec: 411.562\nINFO:tensorflow:loss = 0.0032640456, step = 87001 (0.243 sec)\nINFO:tensorflow:global_step/sec: 430.037\nINFO:tensorflow:loss = 0.0036541175, step = 87101 (0.233 sec)\nINFO:tensorflow:global_step/sec: 446.024\nINFO:tensorflow:loss = 0.0069467165, step = 87201 (0.224 sec)\nINFO:tensorflow:global_step/sec: 436.506\nINFO:tensorflow:loss = 0.00457615, step = 87301 (0.229 sec)\nINFO:tensorflow:global_step/sec: 425.08\nINFO:tensorflow:loss = 0.006205321, step = 87401 (0.235 sec)\nINFO:tensorflow:global_step/sec: 437.602\nINFO:tensorflow:loss = 0.0030969642, step = 87501 (0.228 sec)\nINFO:tensorflow:global_step/sec: 416.349\nINFO:tensorflow:loss = 0.0064918445, step = 87601 (0.240 sec)\nINFO:tensorflow:global_step/sec: 433.996\nINFO:tensorflow:loss = 0.011695679, step = 87701 (0.230 sec)\nINFO:tensorflow:global_step/sec: 426.047\nINFO:tensorflow:loss = 0.006250561, step = 87801 (0.235 sec)\nINFO:tensorflow:global_step/sec: 425.378\nINFO:tensorflow:loss = 0.00363587, step = 87901 (0.235 sec)\nINFO:tensorflow:global_step/sec: 436.102\nINFO:tensorflow:loss = 0.0072322786, step = 88001 (0.229 sec)\nINFO:tensorflow:global_step/sec: 387.201\nINFO:tensorflow:loss = 0.009675448, step = 88101 (0.258 sec)\nINFO:tensorflow:global_step/sec: 422.52\nINFO:tensorflow:loss = 0.004536283, step = 88201 (0.237 sec)\nINFO:tensorflow:global_step/sec: 433.711\nINFO:tensorflow:loss = 0.009590596, step = 88301 (0.230 sec)\nINFO:tensorflow:global_step/sec: 441.24\nINFO:tensorflow:loss = 0.0032862434, step = 88401 (0.227 sec)\nINFO:tensorflow:global_step/sec: 440.67\nINFO:tensorflow:loss = 0.0051202993, step = 88501 (0.227 sec)\nINFO:tensorflow:global_step/sec: 415.745\nINFO:tensorflow:loss = 0.0040213135, step = 88601 (0.241 sec)\nINFO:tensorflow:global_step/sec: 423.725\nINFO:tensorflow:loss = 0.008824434, step = 88701 (0.236 sec)\nINFO:tensorflow:global_step/sec: 427.93\nINFO:tensorflow:loss = 0.008001704, step = 88801 (0.234 sec)\nINFO:tensorflow:global_step/sec: 418.417\nINFO:tensorflow:loss = 0.010184696, step = 88901 (0.239 sec)\nINFO:tensorflow:global_step/sec: 437.29\nINFO:tensorflow:loss = 0.011775006, step = 89001 (0.229 
sec)\nINFO:tensorflow:global_step/sec: 426.165\nINFO:tensorflow:loss = 0.0058216797, step = 89101 (0.234 sec)\nINFO:tensorflow:global_step/sec: 422.801\nINFO:tensorflow:loss = 0.0043885754, step = 89201 (0.237 sec)\nINFO:tensorflow:global_step/sec: 393.295\nINFO:tensorflow:loss = 0.0027851185, step = 89301 (0.254 sec)\nINFO:tensorflow:global_step/sec: 427.192\nINFO:tensorflow:loss = 0.0047693453, step = 89401 (0.234 sec)\nINFO:tensorflow:global_step/sec: 412.793\nINFO:tensorflow:loss = 0.003990461, step = 89501 (0.242 sec)\nINFO:tensorflow:global_step/sec: 453.507\nINFO:tensorflow:loss = 0.0026294854, step = 89601 (0.220 sec)\nINFO:tensorflow:global_step/sec: 436.153\nINFO:tensorflow:loss = 0.0052362364, step = 89701 (0.229 sec)\nINFO:tensorflow:global_step/sec: 444.735\nINFO:tensorflow:loss = 0.009088694, step = 89801 (0.225 sec)\nINFO:tensorflow:global_step/sec: 433.372\nINFO:tensorflow:loss = 0.005390249, step = 89901 (0.231 sec)\nINFO:tensorflow:global_step/sec: 439.168\nINFO:tensorflow:loss = 0.007205799, step = 90001 (0.227 sec)\nINFO:tensorflow:global_step/sec: 435.44\nINFO:tensorflow:loss = 0.003689984, step = 90101 (0.230 sec)\nINFO:tensorflow:global_step/sec: 428.245\nINFO:tensorflow:loss = 0.0054083467, step = 90201 (0.234 sec)\nINFO:tensorflow:global_step/sec: 409.696\nINFO:tensorflow:loss = 0.005978807, step = 90301 (0.244 sec)\nINFO:tensorflow:global_step/sec: 431.561\nINFO:tensorflow:loss = 0.0036984396, step = 90401 (0.232 sec)\nINFO:tensorflow:global_step/sec: 442.607\nINFO:tensorflow:loss = 0.0044141123, step = 90501 (0.226 sec)\nINFO:tensorflow:global_step/sec: 439.182\nINFO:tensorflow:loss = 0.0047680545, step = 90601 (0.228 sec)\nINFO:tensorflow:global_step/sec: 450.201\nINFO:tensorflow:loss = 0.0034539485, step = 90701 (0.222 sec)\nINFO:tensorflow:global_step/sec: 437.723\nINFO:tensorflow:loss = 0.008106205, step = 90801 (0.228 sec)\nINFO:tensorflow:global_step/sec: 421.153\nINFO:tensorflow:loss = 0.006459282, step = 90901 (0.237 sec)\nINFO:tensorflow:global_step/sec: 440.875\nINFO:tensorflow:loss = 0.0059008165, step = 91001 (0.227 sec)\nINFO:tensorflow:global_step/sec: 449.115\nINFO:tensorflow:loss = 0.0076343603, step = 91101 (0.223 sec)\nINFO:tensorflow:global_step/sec: 417.472\nINFO:tensorflow:loss = 0.0036134154, step = 91201 (0.240 sec)\nINFO:tensorflow:global_step/sec: 413.1\nINFO:tensorflow:loss = 0.008169176, step = 91301 (0.242 sec)\nINFO:tensorflow:global_step/sec: 438.721\nINFO:tensorflow:loss = 0.0027639198, step = 91401 (0.228 sec)\nINFO:tensorflow:global_step/sec: 426.892\nINFO:tensorflow:loss = 0.0072495104, step = 91501 (0.234 sec)\nINFO:tensorflow:global_step/sec: 431.559\nINFO:tensorflow:loss = 0.002784501, step = 91601 (0.232 sec)\nINFO:tensorflow:global_step/sec: 424.739\nINFO:tensorflow:loss = 0.008173542, step = 91701 (0.235 sec)\nINFO:tensorflow:global_step/sec: 430.228\nINFO:tensorflow:loss = 0.0045573693, step = 91801 (0.233 sec)\nINFO:tensorflow:global_step/sec: 422.408\nINFO:tensorflow:loss = 0.0052920775, step = 91901 (0.237 sec)\nINFO:tensorflow:global_step/sec: 442.752\nINFO:tensorflow:loss = 0.004408728, step = 92001 (0.226 sec)\nINFO:tensorflow:global_step/sec: 419.248\nINFO:tensorflow:loss = 0.0039077876, step = 92101 (0.239 sec)\nINFO:tensorflow:global_step/sec: 428.701\nINFO:tensorflow:loss = 0.0029403642, step = 92201 (0.233 sec)\nINFO:tensorflow:global_step/sec: 432.794\nINFO:tensorflow:loss = 0.004977732, step = 92301 (0.231 sec)\nINFO:tensorflow:global_step/sec: 432.111\nINFO:tensorflow:loss = 0.005024727, step = 92401 (0.231 
sec)\nINFO:tensorflow:global_step/sec: 418.994\nINFO:tensorflow:loss = 0.005117008, step = 92501 (0.239 sec)\nINFO:tensorflow:global_step/sec: 424.268\nINFO:tensorflow:loss = 0.002897, step = 92601 (0.236 sec)\nINFO:tensorflow:global_step/sec: 426.479\nINFO:tensorflow:loss = 0.0075298087, step = 92701 (0.235 sec)\nINFO:tensorflow:global_step/sec: 421.681\nINFO:tensorflow:loss = 0.007339681, step = 92801 (0.236 sec)\nINFO:tensorflow:global_step/sec: 427.679\nINFO:tensorflow:loss = 0.008566435, step = 92901 (0.234 sec)\nINFO:tensorflow:global_step/sec: 404.852\nINFO:tensorflow:loss = 0.009024516, step = 93001 (0.247 sec)\nINFO:tensorflow:global_step/sec: 448.017\nINFO:tensorflow:loss = 0.002813141, step = 93101 (0.224 sec)\nINFO:tensorflow:global_step/sec: 432.223\nINFO:tensorflow:loss = 0.0032254832, step = 93201 (0.230 sec)\nINFO:tensorflow:global_step/sec: 438.295\nINFO:tensorflow:loss = 0.005016174, step = 93301 (0.228 sec)\nINFO:tensorflow:global_step/sec: 379.638\nINFO:tensorflow:loss = 0.005437582, step = 93401 (0.266 sec)\nINFO:tensorflow:global_step/sec: 393.871\nINFO:tensorflow:loss = 0.0030894382, step = 93501 (0.251 sec)\nINFO:tensorflow:global_step/sec: 415.015\nINFO:tensorflow:loss = 0.008313053, step = 93601 (0.241 sec)\nINFO:tensorflow:global_step/sec: 411.137\nINFO:tensorflow:loss = 0.0036248101, step = 93701 (0.243 sec)\nINFO:tensorflow:global_step/sec: 402.031\nINFO:tensorflow:loss = 0.00810652, step = 93801 (0.251 sec)\nINFO:tensorflow:global_step/sec: 408.493\nINFO:tensorflow:loss = 0.004963128, step = 93901 (0.242 sec)\nINFO:tensorflow:global_step/sec: 411.645\nINFO:tensorflow:loss = 0.0043365946, step = 94001 (0.243 sec)\nINFO:tensorflow:global_step/sec: 429.109\nINFO:tensorflow:loss = 0.0077144424, step = 94101 (0.233 sec)\nINFO:tensorflow:global_step/sec: 427.771\nINFO:tensorflow:loss = 0.004766725, step = 94201 (0.234 sec)\nINFO:tensorflow:global_step/sec: 414.778\nINFO:tensorflow:loss = 0.0045494647, step = 94301 (0.241 sec)\nINFO:tensorflow:global_step/sec: 438.844\nINFO:tensorflow:loss = 0.003908638, step = 94401 (0.228 sec)\nINFO:tensorflow:global_step/sec: 426.882\nINFO:tensorflow:loss = 0.0039011678, step = 94501 (0.234 sec)\nINFO:tensorflow:global_step/sec: 434.135\nINFO:tensorflow:loss = 0.004798631, step = 94601 (0.231 sec)\nINFO:tensorflow:global_step/sec: 412.807\nINFO:tensorflow:loss = 0.0052594873, step = 94701 (0.242 sec)\nINFO:tensorflow:global_step/sec: 438.564\nINFO:tensorflow:loss = 0.00830613, step = 94801 (0.231 sec)\nINFO:tensorflow:global_step/sec: 450.031\nINFO:tensorflow:loss = 0.0026800362, step = 94901 (0.219 sec)\nINFO:tensorflow:global_step/sec: 439.932\nINFO:tensorflow:loss = 0.010185981, step = 95001 (0.228 sec)\nINFO:tensorflow:global_step/sec: 429.138\nINFO:tensorflow:loss = 0.0050802436, step = 95101 (0.233 sec)\nINFO:tensorflow:global_step/sec: 437.193\nINFO:tensorflow:loss = 0.0063868132, step = 95201 (0.228 sec)\nINFO:tensorflow:global_step/sec: 410.026\nINFO:tensorflow:loss = 0.009593469, step = 95301 (0.244 sec)\nINFO:tensorflow:global_step/sec: 410.539\nINFO:tensorflow:loss = 0.00674426, step = 95401 (0.243 sec)\nINFO:tensorflow:global_step/sec: 417.308\nINFO:tensorflow:loss = 0.010876091, step = 95501 (0.240 sec)\nINFO:tensorflow:global_step/sec: 427.914\nINFO:tensorflow:loss = 0.005005857, step = 95601 (0.234 sec)\nINFO:tensorflow:global_step/sec: 419.595\nINFO:tensorflow:loss = 0.0024972835, step = 95701 (0.238 sec)\nINFO:tensorflow:global_step/sec: 427.427\nINFO:tensorflow:loss = 0.008742824, step = 95801 (0.234 
sec)\nINFO:tensorflow:global_step/sec: 408.218\nINFO:tensorflow:loss = 0.0036342437, step = 95901 (0.245 sec)\nINFO:tensorflow:global_step/sec: 430.354\nINFO:tensorflow:loss = 0.0037122234, step = 96001 (0.232 sec)\nINFO:tensorflow:global_step/sec: 432.266\nINFO:tensorflow:loss = 0.0037878011, step = 96101 (0.232 sec)\nINFO:tensorflow:global_step/sec: 423.361\nINFO:tensorflow:loss = 0.0027226722, step = 96201 (0.235 sec)\nINFO:tensorflow:global_step/sec: 418.174\nINFO:tensorflow:loss = 0.0038939, step = 96301 (0.239 sec)\nINFO:tensorflow:global_step/sec: 409.123\nINFO:tensorflow:loss = 0.003957896, step = 96401 (0.244 sec)\nINFO:tensorflow:global_step/sec: 400.882\nINFO:tensorflow:loss = 0.0032314742, step = 96501 (0.249 sec)\nINFO:tensorflow:global_step/sec: 406.075\nINFO:tensorflow:loss = 0.0067095207, step = 96601 (0.246 sec)\nINFO:tensorflow:global_step/sec: 420.992\nINFO:tensorflow:loss = 0.005240967, step = 96701 (0.238 sec)\nINFO:tensorflow:global_step/sec: 412.539\nINFO:tensorflow:loss = 0.0044168094, step = 96801 (0.250 sec)\nINFO:tensorflow:global_step/sec: 400.227\nINFO:tensorflow:loss = 0.0031404286, step = 96901 (0.243 sec)\nINFO:tensorflow:global_step/sec: 388.701\nINFO:tensorflow:loss = 0.0059988415, step = 97001 (0.257 sec)\nINFO:tensorflow:global_step/sec: 401.666\nINFO:tensorflow:loss = 0.008864116, step = 97101 (0.249 sec)\nINFO:tensorflow:global_step/sec: 427.55\nINFO:tensorflow:loss = 0.0037582796, step = 97201 (0.234 sec)\nINFO:tensorflow:global_step/sec: 421.678\nINFO:tensorflow:loss = 0.0020435755, step = 97301 (0.237 sec)\nINFO:tensorflow:global_step/sec: 429.424\nINFO:tensorflow:loss = 0.004087096, step = 97401 (0.233 sec)\nINFO:tensorflow:global_step/sec: 422.187\nINFO:tensorflow:loss = 0.005750835, step = 97501 (0.237 sec)\nINFO:tensorflow:global_step/sec: 404.812\nINFO:tensorflow:loss = 0.0053190826, step = 97601 (0.247 sec)\nINFO:tensorflow:global_step/sec: 428.256\nINFO:tensorflow:loss = 0.00376792, step = 97701 (0.234 sec)\nINFO:tensorflow:global_step/sec: 435.916\nINFO:tensorflow:loss = 0.006362297, step = 97801 (0.229 sec)\nINFO:tensorflow:global_step/sec: 414.76\nINFO:tensorflow:loss = 0.0038138563, step = 97901 (0.241 sec)\nINFO:tensorflow:global_step/sec: 411.326\nINFO:tensorflow:loss = 0.0060359696, step = 98001 (0.243 sec)\nINFO:tensorflow:global_step/sec: 430.408\nINFO:tensorflow:loss = 0.0051795617, step = 98101 (0.232 sec)\nINFO:tensorflow:global_step/sec: 434.779\nINFO:tensorflow:loss = 0.006122092, step = 98201 (0.230 sec)\nINFO:tensorflow:global_step/sec: 429.927\nINFO:tensorflow:loss = 0.007171316, step = 98301 (0.232 sec)\nINFO:tensorflow:global_step/sec: 435.352\nINFO:tensorflow:loss = 0.0054256674, step = 98401 (0.230 sec)\nINFO:tensorflow:global_step/sec: 425.434\nINFO:tensorflow:loss = 0.006302964, step = 98501 (0.235 sec)\nINFO:tensorflow:global_step/sec: 398.473\nINFO:tensorflow:loss = 0.005042514, step = 98601 (0.251 sec)\nINFO:tensorflow:global_step/sec: 392.608\nINFO:tensorflow:loss = 0.0032336214, step = 98701 (0.255 sec)\nINFO:tensorflow:global_step/sec: 395.989\nINFO:tensorflow:loss = 0.0043089064, step = 98801 (0.253 sec)\nINFO:tensorflow:global_step/sec: 424.751\nINFO:tensorflow:loss = 0.0066612316, step = 98901 (0.235 sec)\nINFO:tensorflow:global_step/sec: 424.977\nINFO:tensorflow:loss = 0.005831009, step = 99001 (0.235 sec)\nINFO:tensorflow:global_step/sec: 419.479\nINFO:tensorflow:loss = 0.0040449733, step = 99101 (0.242 sec)\nINFO:tensorflow:global_step/sec: 415.762\nINFO:tensorflow:loss = 0.0032267657, step = 99201 (0.237 
sec)\nINFO:tensorflow:global_step/sec: 404.335\nINFO:tensorflow:loss = 0.003997384, step = 99301 (0.247 sec)\nINFO:tensorflow:global_step/sec: 410.295\nINFO:tensorflow:loss = 0.008684888, step = 99401 (0.244 sec)\nINFO:tensorflow:global_step/sec: 423.051\nINFO:tensorflow:loss = 0.0028126503, step = 99501 (0.236 sec)\nINFO:tensorflow:global_step/sec: 433.317\nINFO:tensorflow:loss = 0.0060156416, step = 99601 (0.232 sec)\nINFO:tensorflow:global_step/sec: 406.767\nINFO:tensorflow:loss = 0.003310702, step = 99701 (0.245 sec)\nINFO:tensorflow:global_step/sec: 390.561\nINFO:tensorflow:loss = 0.005235779, step = 99801 (0.256 sec)\nINFO:tensorflow:global_step/sec: 409.527\nINFO:tensorflow:loss = 0.005432996, step = 99901 (0.244 sec)\nINFO:tensorflow:Saving checkpoints for 100000 into /tmp/tmpDexXZd/model.ckpt.\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Importing architecture from /tmp/tmpDexXZd/architecture-3.txt: ['0:1_layer_dnn', '1:2_layer_dnn', '2:3_layer_dnn', '3:4_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 2\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 3\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building iteration 4\nINFO:tensorflow:Building subnetwork '4_layer_dnn'\nINFO:tensorflow:Building subnetwork '5_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:33:29\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpDexXZd/model.ckpt-100000\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving candidate 't3_4_layer_dnn' dict for global step 100000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_3/ensemble_t3_4_layer_dnn/architecture/adanetB?\b\u0007\u0012\u0000B9| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.03169271, average_loss/adanet/subnetwork = 0.03348904, average_loss/adanet/uniform_average_ensemble = 0.031587753, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.038267143, loss/adanet/subnetwork = 0.041055106, loss/adanet/uniform_average_ensemble = 0.04207115, prediction/mean/adanet/adanet_weighted_ensemble = 3.067704, prediction/mean/adanet/subnetwork = 3.1287208, prediction/mean/adanet/uniform_average_ensemble = 3.1415138\nINFO:tensorflow:Saving candidate 't4_4_layer_dnn' dict for global step 100000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_4/ensemble_t4_4_layer_dnn/architecture/adanetBM\b\u0007\u0012\u0000BG| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn | 4_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.031012552, average_loss/adanet/subnetwork = 0.036771663, average_loss/adanet/uniform_average_ensemble = 0.031320345, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.036733527, loss/adanet/subnetwork = 0.049315907, loss/adanet/uniform_average_ensemble = 0.04210123, prediction/mean/adanet/adanet_weighted_ensemble = 3.0690398, 
prediction/mean/adanet/subnetwork = 3.116253, prediction/mean/adanet/uniform_average_ensemble = 3.1364617\nINFO:tensorflow:Saving candidate 't4_5_layer_dnn' dict for global step 100000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_4/ensemble_t4_5_layer_dnn/architecture/adanetBM\b\u0007\u0012\u0000BG| 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn | 5_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.032236967, average_loss/adanet/subnetwork = 0.0495253, average_loss/adanet/uniform_average_ensemble = 0.0326953, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.038305804, loss/adanet/subnetwork = 0.06338713, loss/adanet/uniform_average_ensemble = 0.043896608, prediction/mean/adanet/adanet_weighted_ensemble = 3.0657516, prediction/mean/adanet/subnetwork = 3.0762491, prediction/mean/adanet/uniform_average_ensemble = 3.1284606\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:33:35\nINFO:tensorflow:Saving dict for global step 100000: average_loss = 0.032236967, average_loss/adanet/adanet_weighted_ensemble = 0.032236967, average_loss/adanet/subnetwork = 0.0495253, average_loss/adanet/uniform_average_ensemble = 0.0326953, global_step = 100000, label/mean = 3.1049454, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss = 0.038305804, loss/adanet/adanet_weighted_ensemble = 0.038305804, loss/adanet/subnetwork = 0.06338713, loss/adanet/uniform_average_ensemble = 0.043896608, prediction/mean = 3.0657516, prediction/mean/adanet/adanet_weighted_ensemble = 3.0657516, prediction/mean/adanet/subnetwork = 3.0762491, prediction/mean/adanet/uniform_average_ensemble = 3.1284606\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 100000: /tmp/tmpDexXZd/model.ckpt-100000\nINFO:tensorflow:Loss for final step: 0.00343033.\nINFO:tensorflow:Finished training Adanet iteration 4\nLoss: 0.032236967\nUniform average loss: 0.0326953\nArchitecture: | 1_layer_dnn | 2_layer_dnn | 3_layer_dnn | 4_layer_dnn | 5_layer_dnn |\n" ] ], [ [ "Learning the mixture weights produces a model with **0.0449** MSE, a bit worse\nthan the uniform average model, which the `adanet.Estimator` always compute as a\nbaseline. 
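\n\nAs a reminder of where $\lambda$ enters, the objective AdaNet minimizes over the\nmixture weights $w$ has (roughly, following the AdaNet paper's formulation) the form\n\n$$F(w) = \frac{1}{m} \sum_{i=1}^{m} \Phi \left( \sum_{j=1}^{N} w_j h_j(x_i), y_i \right) + \sum_{j=1}^{N} \left( \lambda r(h_j) + \beta \right) |w_j|$$\n\nwhere $\Phi$ is a surrogate loss, $h_j$ are the candidate subnetworks, $r(h_j)$\nmeasures a subnetwork's complexity, and $\lambda, \beta \geq 0$ are the\nregularization hyperparameters. With $\lambda = 0$, as in the run above, nothing\npenalizes a complex subnetwork beyond its training loss.\n\n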
The mixture weights were learned without regularization, so they\nlikely overfit to the training set.\n\nObserve that AdaNet learned the same ensemble composition as the previous run.\nWithout complexity regularization, AdaNet will favor more complex subnetworks,\nwhich may have worse generalization despite improving the empirical error.\n\nFinally, let's apply some **complexity regularization** by using $\\lambda > 0$.\nSince this will penalize more complex subnetworks, AdaNet will select the\ncandidate subnetwork that most improves the objective for its marginal\ncomplexity:", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\nresults, _ = train_and_evaluate(learn_mixture_weights=True, adanet_lambda=.015)\nprint(\"Loss:\", results[\"average_loss\"])\nprint(\"Uniform average loss:\", results[\"average_loss/adanet/uniform_average_ensemble\"])\nprint(\"Architecture:\", ensemble_architecture(results))", "WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpU33rCk\nINFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_global_id_in_cluster': 0, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f2799e49750>, '_model_dir': '/tmp/tmpU33rCk', '_protocol': None, '_save_checkpoints_steps': 50000, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_session_config': allow_soft_placement: true\ngraph_options {\n rewrite_options {\n meta_optimizer_iterations: ONE\n }\n}\n, '_tf_random_seed': 42, '_save_summary_steps': 50000, '_device_fn': None, '_experimental_distribute': None, '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_evaluation_master': '', '_eval_distribute': None, '_train_distribute': None, '_master': ''}\nINFO:tensorflow:Not using Distribute Coordinator.\nINFO:tensorflow:Running training and evaluation locally (non-distributed).\nINFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. 
Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 50000 or save_checkpoints_secs None.\nINFO:tensorflow:Beginning training AdaNet iteration 0\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Building iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Create CheckpointSaverHook.\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpU33rCk/model.ckpt.\nINFO:tensorflow:loss = 21.773132, step = 1\nINFO:tensorflow:global_step/sec: 159.287\nINFO:tensorflow:loss = 0.62784123, step = 101 (0.629 sec)\nINFO:tensorflow:global_step/sec: 565.937\nINFO:tensorflow:loss = 0.56678694, step = 201 (0.177 sec)\nINFO:tensorflow:global_step/sec: 562.364\nINFO:tensorflow:loss = 0.0780399, step = 301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 553.066\nINFO:tensorflow:loss = 0.08678259, step = 401 (0.181 sec)\nINFO:tensorflow:global_step/sec: 539.378\nINFO:tensorflow:loss = 0.08137446, step = 501 (0.186 sec)\nINFO:tensorflow:global_step/sec: 536.173\nINFO:tensorflow:loss = 0.05650991, step = 601 (0.186 sec)\nINFO:tensorflow:global_step/sec: 548.312\nINFO:tensorflow:loss = 0.025883615, step = 701 (0.183 sec)\nINFO:tensorflow:global_step/sec: 539.441\nINFO:tensorflow:loss = 0.03018033, step = 801 (0.185 sec)\nINFO:tensorflow:global_step/sec: 558.404\nINFO:tensorflow:loss = 0.037590593, step = 901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 527.969\nINFO:tensorflow:loss = 0.06694436, step = 1001 (0.190 sec)\nINFO:tensorflow:global_step/sec: 523.316\nINFO:tensorflow:loss = 0.03847816, step = 1101 (0.191 sec)\nINFO:tensorflow:global_step/sec: 539.966\nINFO:tensorflow:loss = 0.04998327, step = 1201 (0.185 sec)\nINFO:tensorflow:global_step/sec: 562.721\nINFO:tensorflow:loss = 0.090066634, step = 1301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 548.134\nINFO:tensorflow:loss = 0.02687991, step = 1401 (0.182 sec)\nINFO:tensorflow:global_step/sec: 551.788\nINFO:tensorflow:loss = 0.021093268, step = 1501 (0.181 sec)\nINFO:tensorflow:global_step/sec: 545.461\nINFO:tensorflow:loss = 0.036077544, step = 1601 (0.183 sec)\nINFO:tensorflow:global_step/sec: 568.288\nINFO:tensorflow:loss = 0.034161575, step = 1701 (0.176 sec)\nINFO:tensorflow:global_step/sec: 545.449\nINFO:tensorflow:loss = 0.04626116, step = 1801 (0.183 sec)\nINFO:tensorflow:global_step/sec: 543.943\nINFO:tensorflow:loss = 0.07378493, step = 1901 (0.184 sec)\nINFO:tensorflow:global_step/sec: 512.521\nINFO:tensorflow:loss = 0.04918831, step = 2001 (0.195 sec)\nINFO:tensorflow:global_step/sec: 572.804\nINFO:tensorflow:loss = 0.078179196, step = 2101 (0.176 sec)\nINFO:tensorflow:global_step/sec: 530.082\nINFO:tensorflow:loss = 0.030299027, step = 2201 (0.187 sec)\nINFO:tensorflow:global_step/sec: 563.393\nINFO:tensorflow:loss = 0.024719734, step = 2301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 561.019\nINFO:tensorflow:loss = 0.024992712, step = 2401 (0.178 sec)\nINFO:tensorflow:global_step/sec: 544.837\nINFO:tensorflow:loss = 0.047092065, step = 2501 (0.184 sec)\nINFO:tensorflow:global_step/sec: 546.753\nINFO:tensorflow:loss = 0.04721455, step = 2601 (0.183 sec)\nINFO:tensorflow:global_step/sec: 563.574\nINFO:tensorflow:loss = 0.038211413, step = 2701 (0.178 sec)\nINFO:tensorflow:global_step/sec: 557.643\nINFO:tensorflow:loss = 0.03274205, step = 2801 (0.179 
sec)\nINFO:tensorflow:global_step/sec: 534.748\nINFO:tensorflow:loss = 0.04549656, step = 2901 (0.187 sec)\nINFO:tensorflow:global_step/sec: 534.379\nINFO:tensorflow:loss = 0.03548008, step = 3001 (0.187 sec)\nINFO:tensorflow:global_step/sec: 548.986\nINFO:tensorflow:loss = 0.024679914, step = 3101 (0.182 sec)\nINFO:tensorflow:global_step/sec: 513.339\nINFO:tensorflow:loss = 0.04125918, step = 3201 (0.194 sec)\nINFO:tensorflow:global_step/sec: 507.007\nINFO:tensorflow:loss = 0.0435674, step = 3301 (0.197 sec)\nINFO:tensorflow:global_step/sec: 561.918\nINFO:tensorflow:loss = 0.03460297, step = 3401 (0.178 sec)\nINFO:tensorflow:global_step/sec: 549.285\nINFO:tensorflow:loss = 0.06966856, step = 3501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 538.945\nINFO:tensorflow:loss = 0.03479818, step = 3601 (0.186 sec)\nINFO:tensorflow:global_step/sec: 540.146\nINFO:tensorflow:loss = 0.021452513, step = 3701 (0.185 sec)\nINFO:tensorflow:global_step/sec: 563.866\nINFO:tensorflow:loss = 0.026122702, step = 3801 (0.178 sec)\nINFO:tensorflow:global_step/sec: 543.928\nINFO:tensorflow:loss = 0.031272247, step = 3901 (0.185 sec)\nINFO:tensorflow:global_step/sec: 534.71\nINFO:tensorflow:loss = 0.053014666, step = 4001 (0.186 sec)\nINFO:tensorflow:global_step/sec: 513.751\nINFO:tensorflow:loss = 0.028963283, step = 4101 (0.195 sec)\nINFO:tensorflow:global_step/sec: 529.776\nINFO:tensorflow:loss = 0.022142775, step = 4201 (0.189 sec)\nINFO:tensorflow:global_step/sec: 536.804\nINFO:tensorflow:loss = 0.022216441, step = 4301 (0.186 sec)\nINFO:tensorflow:global_step/sec: 549.644\nINFO:tensorflow:loss = 0.027055677, step = 4401 (0.182 sec)\nINFO:tensorflow:global_step/sec: 552.507\nINFO:tensorflow:loss = 0.05059754, step = 4501 (0.180 sec)\nINFO:tensorflow:global_step/sec: 578.042\nINFO:tensorflow:loss = 0.025971584, step = 4601 (0.173 sec)\nINFO:tensorflow:global_step/sec: 538.198\nINFO:tensorflow:loss = 0.07917491, step = 4701 (0.186 sec)\nINFO:tensorflow:global_step/sec: 571.125\nINFO:tensorflow:loss = 0.034027006, step = 4801 (0.175 sec)\nINFO:tensorflow:global_step/sec: 579.025\nINFO:tensorflow:loss = 0.033307493, step = 4901 (0.173 sec)\nINFO:tensorflow:global_step/sec: 552.831\nINFO:tensorflow:loss = 0.026842842, step = 5001 (0.181 sec)\nINFO:tensorflow:global_step/sec: 571.634\nINFO:tensorflow:loss = 0.039310932, step = 5101 (0.175 sec)\nINFO:tensorflow:global_step/sec: 575.954\nINFO:tensorflow:loss = 0.030656494, step = 5201 (0.174 sec)\nINFO:tensorflow:global_step/sec: 582.065\nINFO:tensorflow:loss = 0.078128725, step = 5301 (0.172 sec)\nINFO:tensorflow:global_step/sec: 507.053\nINFO:tensorflow:loss = 0.021291912, step = 5401 (0.197 sec)\nINFO:tensorflow:global_step/sec: 559.252\nINFO:tensorflow:loss = 0.03251325, step = 5501 (0.179 sec)\nINFO:tensorflow:global_step/sec: 591.716\nINFO:tensorflow:loss = 0.028400565, step = 5601 (0.169 sec)\nINFO:tensorflow:global_step/sec: 580.723\nINFO:tensorflow:loss = 0.034857195, step = 5701 (0.172 sec)\nINFO:tensorflow:global_step/sec: 573.463\nINFO:tensorflow:loss = 0.037171304, step = 5801 (0.175 sec)\nINFO:tensorflow:global_step/sec: 586.414\nINFO:tensorflow:loss = 0.017138815, step = 5901 (0.170 sec)\nINFO:tensorflow:global_step/sec: 592.856\nINFO:tensorflow:loss = 0.030491468, step = 6001 (0.169 sec)\nINFO:tensorflow:global_step/sec: 586.352\nINFO:tensorflow:loss = 0.048120137, step = 6101 (0.171 sec)\nINFO:tensorflow:global_step/sec: 592.877\nINFO:tensorflow:loss = 0.044583086, step = 6201 (0.169 sec)\nINFO:tensorflow:global_step/sec: 
590.765\nINFO:tensorflow:loss = 0.04749332, step = 6301 (0.169 sec)\nINFO:tensorflow:global_step/sec: 601.572\nINFO:tensorflow:loss = 0.07128419, step = 6401 (0.166 sec)\nINFO:tensorflow:global_step/sec: 595.572\nINFO:tensorflow:loss = 0.05821595, step = 6501 (0.168 sec)\nINFO:tensorflow:global_step/sec: 558.066\nINFO:tensorflow:loss = 0.019353844, step = 6601 (0.179 sec)\nINFO:tensorflow:global_step/sec: 588.153\nINFO:tensorflow:loss = 0.03313767, step = 6701 (0.170 sec)\nINFO:tensorflow:global_step/sec: 553.474\nINFO:tensorflow:loss = 0.021211505, step = 6801 (0.181 sec)\nINFO:tensorflow:global_step/sec: 552.987\nINFO:tensorflow:loss = 0.018065577, step = 6901 (0.180 sec)\nINFO:tensorflow:global_step/sec: 566.142\nINFO:tensorflow:loss = 0.031387277, step = 7001 (0.177 sec)\nINFO:tensorflow:global_step/sec: 573.444\nINFO:tensorflow:loss = 0.032881733, step = 7101 (0.175 sec)\nINFO:tensorflow:global_step/sec: 550.213\nINFO:tensorflow:loss = 0.01538456, step = 7201 (0.182 sec)\nINFO:tensorflow:global_step/sec: 560.381\nINFO:tensorflow:loss = 0.07852745, step = 7301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 528.837\nINFO:tensorflow:loss = 0.037094295, step = 7401 (0.189 sec)\nINFO:tensorflow:global_step/sec: 554.533\nINFO:tensorflow:loss = 0.054601535, step = 7501 (0.180 sec)\nINFO:tensorflow:global_step/sec: 530.87\nINFO:tensorflow:loss = 0.0201954, step = 7601 (0.188 sec)\nINFO:tensorflow:global_step/sec: 534.342\nINFO:tensorflow:loss = 0.027472034, step = 7701 (0.187 sec)\nINFO:tensorflow:global_step/sec: 527.972\nINFO:tensorflow:loss = 0.032032184, step = 7801 (0.189 sec)\nINFO:tensorflow:global_step/sec: 528.58\nINFO:tensorflow:loss = 0.043274466, step = 7901 (0.189 sec)\nINFO:tensorflow:global_step/sec: 548.655\nINFO:tensorflow:loss = 0.03239342, step = 8001 (0.182 sec)\nINFO:tensorflow:global_step/sec: 541.064\nINFO:tensorflow:loss = 0.027077636, step = 8101 (0.185 sec)\nINFO:tensorflow:global_step/sec: 544.844\nINFO:tensorflow:loss = 0.0360922, step = 8201 (0.184 sec)\nINFO:tensorflow:global_step/sec: 532.938\nINFO:tensorflow:loss = 0.03275392, step = 8301 (0.188 sec)\nINFO:tensorflow:global_step/sec: 550.203\nINFO:tensorflow:loss = 0.051111933, step = 8401 (0.182 sec)\nINFO:tensorflow:global_step/sec: 551.767\nINFO:tensorflow:loss = 0.033609618, step = 8501 (0.181 sec)\nINFO:tensorflow:global_step/sec: 536.288\nINFO:tensorflow:loss = 0.06303735, step = 8601 (0.186 sec)\nINFO:tensorflow:global_step/sec: 569.836\nINFO:tensorflow:loss = 0.022497727, step = 8701 (0.176 sec)\nINFO:tensorflow:global_step/sec: 542.449\nINFO:tensorflow:loss = 0.042914927, step = 8801 (0.184 sec)\nINFO:tensorflow:global_step/sec: 541.586\nINFO:tensorflow:loss = 0.07919823, step = 8901 (0.185 sec)\nINFO:tensorflow:global_step/sec: 552.276\nINFO:tensorflow:loss = 0.054977592, step = 9001 (0.181 sec)\nINFO:tensorflow:global_step/sec: 565.617\nINFO:tensorflow:loss = 0.030193526, step = 9101 (0.177 sec)\nINFO:tensorflow:global_step/sec: 562.054\nINFO:tensorflow:loss = 0.059118968, step = 9201 (0.178 sec)\nINFO:tensorflow:global_step/sec: 572.511\nINFO:tensorflow:loss = 0.028942654, step = 9301 (0.175 sec)\nINFO:tensorflow:global_step/sec: 573.171\nINFO:tensorflow:loss = 0.019489078, step = 9401 (0.174 sec)\nINFO:tensorflow:global_step/sec: 549.988\nINFO:tensorflow:loss = 0.0366641, step = 9501 (0.182 sec)\nINFO:tensorflow:global_step/sec: 485.237\nINFO:tensorflow:loss = 0.05093595, step = 9601 (0.206 sec)\nINFO:tensorflow:global_step/sec: 524.89\nINFO:tensorflow:loss = 0.017835636, step = 9701 (0.191 
sec)\nINFO:tensorflow:global_step/sec: 528.924\nINFO:tensorflow:loss = 0.031217653, step = 9801 (0.189 sec)\nINFO:tensorflow:global_step/sec: 526.876\nINFO:tensorflow:loss = 0.028995795, step = 9901 (0.190 sec)\nINFO:tensorflow:global_step/sec: 514.06\nINFO:tensorflow:loss = 0.031324398, step = 10001 (0.194 sec)\nINFO:tensorflow:global_step/sec: 552.026\nINFO:tensorflow:loss = 0.030225167, step = 10101 (0.181 sec)\nINFO:tensorflow:global_step/sec: 571.955\nINFO:tensorflow:loss = 0.0560328, step = 10201 (0.175 sec)\nINFO:tensorflow:global_step/sec: 559.973\nINFO:tensorflow:loss = 0.05915151, step = 10301 (0.179 sec)\nINFO:tensorflow:global_step/sec: 543.75\nINFO:tensorflow:loss = 0.019076841, step = 10401 (0.184 sec)\nINFO:tensorflow:global_step/sec: 551.894\nINFO:tensorflow:loss = 0.05866126, step = 10501 (0.181 sec)\nINFO:tensorflow:global_step/sec: 547.732\nINFO:tensorflow:loss = 0.025945794, step = 10601 (0.183 sec)\nINFO:tensorflow:global_step/sec: 559.879\nINFO:tensorflow:loss = 0.02107554, step = 10701 (0.178 sec)\nINFO:tensorflow:global_step/sec: 569.976\nINFO:tensorflow:loss = 0.028491888, step = 10801 (0.176 sec)\nINFO:tensorflow:global_step/sec: 559.309\nINFO:tensorflow:loss = 0.030953847, step = 10901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 534.828\nINFO:tensorflow:loss = 0.014788986, step = 11001 (0.187 sec)\nINFO:tensorflow:global_step/sec: 528.611\nINFO:tensorflow:loss = 0.038508512, step = 11101 (0.189 sec)\nINFO:tensorflow:global_step/sec: 479.918\nINFO:tensorflow:loss = 0.034574755, step = 11201 (0.208 sec)\nINFO:tensorflow:global_step/sec: 548.751\nINFO:tensorflow:loss = 0.054243505, step = 11301 (0.182 sec)\nINFO:tensorflow:global_step/sec: 540.521\nINFO:tensorflow:loss = 0.03519901, step = 11401 (0.185 sec)\nINFO:tensorflow:global_step/sec: 534.456\nINFO:tensorflow:loss = 0.049500115, step = 11501 (0.187 sec)\nINFO:tensorflow:global_step/sec: 550.41\nINFO:tensorflow:loss = 0.031815633, step = 11601 (0.182 sec)\nINFO:tensorflow:global_step/sec: 556.588\nINFO:tensorflow:loss = 0.025518984, step = 11701 (0.180 sec)\nINFO:tensorflow:global_step/sec: 568.631\nINFO:tensorflow:loss = 0.02286969, step = 11801 (0.176 sec)\nINFO:tensorflow:global_step/sec: 560.626\nINFO:tensorflow:loss = 0.047530938, step = 11901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 554.717\nINFO:tensorflow:loss = 0.037891768, step = 12001 (0.180 sec)\nINFO:tensorflow:global_step/sec: 538.956\nINFO:tensorflow:loss = 0.017053518, step = 12101 (0.186 sec)\nINFO:tensorflow:global_step/sec: 546.153\nINFO:tensorflow:loss = 0.018622799, step = 12201 (0.183 sec)\nINFO:tensorflow:global_step/sec: 563.4\nINFO:tensorflow:loss = 0.02716852, step = 12301 (0.178 sec)\nINFO:tensorflow:global_step/sec: 539.875\nINFO:tensorflow:loss = 0.05163239, step = 12401 (0.186 sec)\nINFO:tensorflow:global_step/sec: 581.392\nINFO:tensorflow:loss = 0.023143895, step = 12501 (0.172 sec)\nINFO:tensorflow:global_step/sec: 533.595\nINFO:tensorflow:loss = 0.04246641, step = 12601 (0.187 sec)\nINFO:tensorflow:global_step/sec: 563.581\nINFO:tensorflow:loss = 0.026882555, step = 12701 (0.178 sec)\nINFO:tensorflow:global_step/sec: 548.224\nINFO:tensorflow:loss = 0.043311685, step = 12801 (0.182 sec)\nINFO:tensorflow:global_step/sec: 561.124\nINFO:tensorflow:loss = 0.036629334, step = 12901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 551.326\nINFO:tensorflow:loss = 0.04917693, step = 13001 (0.181 sec)\nINFO:tensorflow:global_step/sec: 545.509\nINFO:tensorflow:loss = 0.035332013, step = 13101 (0.183 
sec)\nINFO:tensorflow:global_step/sec: 523.653\nINFO:tensorflow:loss = 0.030816792, step = 13201 (0.191 sec)\nINFO:tensorflow:global_step/sec: 551.061\nINFO:tensorflow:loss = 0.029627524, step = 13301 (0.182 sec)\nINFO:tensorflow:global_step/sec: 535.186\nINFO:tensorflow:loss = 0.034982234, step = 13401 (0.187 sec)\nINFO:tensorflow:global_step/sec: 575.606\nINFO:tensorflow:loss = 0.041481495, step = 13501 (0.174 sec)\nINFO:tensorflow:global_step/sec: 546.965\nINFO:tensorflow:loss = 0.016655888, step = 13601 (0.183 sec)\nINFO:tensorflow:global_step/sec: 578.995\nINFO:tensorflow:loss = 0.030127134, step = 13701 (0.173 sec)\nINFO:tensorflow:global_step/sec: 532.569\nINFO:tensorflow:loss = 0.06522011, step = 13801 (0.188 sec)\nINFO:tensorflow:global_step/sec: 576.385\nINFO:tensorflow:loss = 0.01722128, step = 13901 (0.174 sec)\nINFO:tensorflow:global_step/sec: 580.339\nINFO:tensorflow:loss = 0.025369557, step = 14001 (0.173 sec)\nINFO:tensorflow:global_step/sec: 528.106\nINFO:tensorflow:loss = 0.032870486, step = 14101 (0.189 sec)\nINFO:tensorflow:global_step/sec: 544.897\nINFO:tensorflow:loss = 0.040547296, step = 14201 (0.184 sec)\nINFO:tensorflow:global_step/sec: 538.723\nINFO:tensorflow:loss = 0.019972267, step = 14301 (0.186 sec)\nINFO:tensorflow:global_step/sec: 532.532\nINFO:tensorflow:loss = 0.012934791, step = 14401 (0.188 sec)\nINFO:tensorflow:global_step/sec: 540.161\nINFO:tensorflow:loss = 0.034899343, step = 14501 (0.187 sec)\nINFO:tensorflow:global_step/sec: 556.746\nINFO:tensorflow:loss = 0.028416235, step = 14601 (0.178 sec)\nINFO:tensorflow:global_step/sec: 548.685\nINFO:tensorflow:loss = 0.03656807, step = 14701 (0.182 sec)\nINFO:tensorflow:global_step/sec: 555.549\nINFO:tensorflow:loss = 0.02740157, step = 14801 (0.180 sec)\nINFO:tensorflow:global_step/sec: 540.564\nINFO:tensorflow:loss = 0.043183126, step = 14901 (0.185 sec)\nINFO:tensorflow:global_step/sec: 552.67\nINFO:tensorflow:loss = 0.044043526, step = 15001 (0.181 sec)\nINFO:tensorflow:global_step/sec: 567.295\nINFO:tensorflow:loss = 0.015140781, step = 15101 (0.176 sec)\nINFO:tensorflow:global_step/sec: 544.722\nINFO:tensorflow:loss = 0.025546592, step = 15201 (0.183 sec)\nINFO:tensorflow:global_step/sec: 558.16\nINFO:tensorflow:loss = 0.029243713, step = 15301 (0.179 sec)\nINFO:tensorflow:global_step/sec: 537.248\nINFO:tensorflow:loss = 0.020585796, step = 15401 (0.186 sec)\nINFO:tensorflow:global_step/sec: 565.802\nINFO:tensorflow:loss = 0.02082948, step = 15501 (0.177 sec)\nINFO:tensorflow:global_step/sec: 519.954\nINFO:tensorflow:loss = 0.050177883, step = 15601 (0.192 sec)\nINFO:tensorflow:global_step/sec: 562.8\nINFO:tensorflow:loss = 0.026549798, step = 15701 (0.178 sec)\nINFO:tensorflow:global_step/sec: 559.309\nINFO:tensorflow:loss = 0.05157975, step = 15801 (0.179 sec)\nINFO:tensorflow:global_step/sec: 549.572\nINFO:tensorflow:loss = 0.03964285, step = 15901 (0.182 sec)\nINFO:tensorflow:global_step/sec: 540.517\nINFO:tensorflow:loss = 0.025370112, step = 16001 (0.185 sec)\nINFO:tensorflow:global_step/sec: 556.979\nINFO:tensorflow:loss = 0.03573191, step = 16101 (0.180 sec)\nINFO:tensorflow:global_step/sec: 540.476\nINFO:tensorflow:loss = 0.01646205, step = 16201 (0.185 sec)\nINFO:tensorflow:global_step/sec: 558.846\nINFO:tensorflow:loss = 0.025383826, step = 16301 (0.185 sec)\nINFO:tensorflow:global_step/sec: 545.81\nINFO:tensorflow:loss = 0.0598194, step = 16401 (0.177 sec)\nINFO:tensorflow:global_step/sec: 535.687\nINFO:tensorflow:loss = 0.015108961, step = 16501 (0.187 
sec)\nINFO:tensorflow:global_step/sec: 526.305\nINFO:tensorflow:loss = 0.02906358, step = 16601 (0.190 sec)\nINFO:tensorflow:global_step/sec: 526.937\nINFO:tensorflow:loss = 0.026173119, step = 16701 (0.190 sec)\nINFO:tensorflow:global_step/sec: 555.05\nINFO:tensorflow:loss = 0.028957274, step = 16801 (0.180 sec)\nINFO:tensorflow:global_step/sec: 535.085\nINFO:tensorflow:loss = 0.025117926, step = 16901 (0.190 sec)\nINFO:tensorflow:global_step/sec: 543.895\nINFO:tensorflow:loss = 0.026830506, step = 17001 (0.180 sec)\nINFO:tensorflow:global_step/sec: 546.263\nINFO:tensorflow:loss = 0.023872972, step = 17101 (0.183 sec)\nINFO:tensorflow:global_step/sec: 564.515\nINFO:tensorflow:loss = 0.016916137, step = 17201 (0.178 sec)\nINFO:tensorflow:global_step/sec: 543.133\nINFO:tensorflow:loss = 0.02321909, step = 17301 (0.184 sec)\nINFO:tensorflow:global_step/sec: 535.361\nINFO:tensorflow:loss = 0.014806619, step = 17401 (0.186 sec)\nINFO:tensorflow:global_step/sec: 526.876\nINFO:tensorflow:loss = 0.019620089, step = 17501 (0.190 sec)\nINFO:tensorflow:global_step/sec: 515.223\nINFO:tensorflow:loss = 0.024595024, step = 17601 (0.194 sec)\nINFO:tensorflow:global_step/sec: 572.229\nINFO:tensorflow:loss = 0.016030025, step = 17701 (0.175 sec)\nINFO:tensorflow:global_step/sec: 536.616\nINFO:tensorflow:loss = 0.029417565, step = 17801 (0.186 sec)\nINFO:tensorflow:global_step/sec: 559.462\nINFO:tensorflow:loss = 0.031124298, step = 17901 (0.179 sec)\nINFO:tensorflow:global_step/sec: 526.865\nINFO:tensorflow:loss = 0.048947714, step = 18001 (0.190 sec)\nINFO:tensorflow:global_step/sec: 533.405\nINFO:tensorflow:loss = 0.027284618, step = 18101 (0.187 sec)\nINFO:tensorflow:global_step/sec: 525.403\nINFO:tensorflow:loss = 0.031934716, step = 18201 (0.190 sec)\nINFO:tensorflow:global_step/sec: 485.025\nINFO:tensorflow:loss = 0.037095845, step = 18301 (0.206 sec)\nINFO:tensorflow:global_step/sec: 524.049\nINFO:tensorflow:loss = 0.030218042, step = 18401 (0.191 sec)\nINFO:tensorflow:global_step/sec: 514.793\nINFO:tensorflow:loss = 0.036680248, step = 18501 (0.194 sec)\nINFO:tensorflow:global_step/sec: 540.085\nINFO:tensorflow:loss = 0.027322877, step = 18601 (0.185 sec)\nINFO:tensorflow:global_step/sec: 542.712\nINFO:tensorflow:loss = 0.040832005, step = 18701 (0.184 sec)\nINFO:tensorflow:global_step/sec: 575.261\nINFO:tensorflow:loss = 0.0099720275, step = 18801 (0.174 sec)\nINFO:tensorflow:global_step/sec: 543.889\nINFO:tensorflow:loss = 0.044099957, step = 18901 (0.184 sec)\nINFO:tensorflow:global_step/sec: 542.106\nINFO:tensorflow:loss = 0.014038452, step = 19001 (0.184 sec)\nINFO:tensorflow:global_step/sec: 552.868\nINFO:tensorflow:loss = 0.030261023, step = 19101 (0.181 sec)\nINFO:tensorflow:global_step/sec: 541.468\nINFO:tensorflow:loss = 0.024491156, step = 19201 (0.185 sec)\nINFO:tensorflow:global_step/sec: 531.57\nINFO:tensorflow:loss = 0.019349206, step = 19301 (0.188 sec)\nINFO:tensorflow:global_step/sec: 544.33\nINFO:tensorflow:loss = 0.029496612, step = 19401 (0.184 sec)\nINFO:tensorflow:global_step/sec: 528.042\nINFO:tensorflow:loss = 0.02566719, step = 19501 (0.190 sec)\nINFO:tensorflow:global_step/sec: 541.261\nINFO:tensorflow:loss = 0.011755895, step = 19601 (0.185 sec)\nINFO:tensorflow:global_step/sec: 538.773\nINFO:tensorflow:loss = 0.011457266, step = 19701 (0.185 sec)\nINFO:tensorflow:global_step/sec: 547.462\nINFO:tensorflow:loss = 0.02694907, step = 19801 (0.183 sec)\nINFO:tensorflow:global_step/sec: 536.015\nINFO:tensorflow:loss = 0.039816238, step = 19901 (0.187 
INFO:tensorflow:Saving checkpoints for 20000 into /tmp/tmpU33rCk/model.ckpt.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:34:17\nINFO:tensorflow:Saving candidate 't0_linear' dict for global step 20000: architecture = | linear |, average_loss/adanet/adanet_weighted_ensemble = 0.049419947, average_loss/adanet/subnetwork = 0.049421377\nINFO:tensorflow:Saving candidate 't0_1_layer_dnn' dict for global step 20000: architecture = | 1_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.03990466, average_loss/adanet/subnetwork = 0.03993654\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:34:19\nINFO:tensorflow:Saving dict for global step 20000: average_loss = 0.049419947, label/mean = 3.1049454, loss = 0.0625109, prediction/mean = 3.1072564\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 20000: /tmp/tmpU33rCk/model.ckpt-20000\nINFO:tensorflow:Loss for final step: 0.05016574.\nINFO:tensorflow:Finished training Adanet iteration 0\nINFO:tensorflow:Beginning bookkeeping phase for iteration 0
INFO:tensorflow:Starting ensemble evaluation for iteration 0\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t0_linear = 0.035082, adanet_loss/t0_1_layer_dnn = 0.035763\nINFO:tensorflow:Finished ensemble evaluation for iteration 0\nINFO:tensorflow:'t0_linear' at index 0 is moving onto the next iteration\n[warm-start variable listing and iteration-1 graph rebuild trimmed]\nINFO:tensorflow:Finished bookkeeping phase for iteration 0
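The trace above shows one full AdaNet iteration: every candidate subnetwork is trained for max_iteration_steps steps, each candidate ensemble is scored on its adanet_loss, and the winner ('t0_linear' here) is carried into the next iteration via warm-starting. A minimal sketch of an adanet.Estimator configuration consistent with this trace follows; `SimpleDNNGenerator` and `input_fn` are assumed stand-ins for helpers defined earlier in the notebook, not part of the adanet API.

import adanet
import tensorflow as tf

# Sketch only: `SimpleDNNGenerator` and `input_fn` are hypothetical stand-ins
# for objects the notebook defines elsewhere.
estimator = adanet.Estimator(
    # Regression head; matches the average_loss and label/mean metrics logged.
    head=tf.contrib.estimator.regression_head(
        loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE),
    # Proposes the candidates named in the log: 'linear', then DNNs that grow
    # one hidden layer per iteration ('1_layer_dnn', '2_layer_dnn', ...).
    subnetwork_generator=SimpleDNNGenerator(seed=42),
    # 20000 train steps per AdaNet iteration, hence the checkpoints at
    # global steps 20000, 40000, and 60000 in this trace.
    max_iteration_steps=20000,
    # Scores candidate ensembles on adanet_loss; the smallest value wins
    # the iteration (0.035082 for 't0_linear' above).
    evaluator=adanet.Evaluator(input_fn=input_fn("train")))

# max_steps = 3 * max_iteration_steps, i.e. three AdaNet iterations in total.
estimator.train(input_fn=input_fn("train"), max_steps=60000)
metrics = estimator.evaluate(input_fn=input_fn("test"))

The effect of this loop is visible in the evaluation summaries at steps 20000, 40000, and 60000 below: the winning ensemble's average_loss drops from 0.0494 to 0.0442 to 0.0365 as one subnetwork is added per iteration.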
INFO:tensorflow:Beginning training AdaNet iteration 1\n[per-step training log trimmed: steps 20001-39901, loss fluctuating between roughly 0.010 and 0.076 at ~410-555 global_step/sec]\n
INFO:tensorflow:Saving checkpoints for 40000 into /tmp/tmpU33rCk/model.ckpt.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:35:16\nINFO:tensorflow:Saving candidate 't0_linear' dict for global step 40000: architecture = | linear |, average_loss/adanet/adanet_weighted_ensemble = 0.049419947\nINFO:tensorflow:Saving candidate 't1_linear' dict for global step 40000: architecture = | linear | linear |, average_loss/adanet/adanet_weighted_ensemble = 0.04959257\nINFO:tensorflow:Saving candidate 't1_1_layer_dnn' dict for global step 40000: architecture = | linear | 1_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.04422355\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:35:19\nINFO:tensorflow:Saving dict for global step 40000: average_loss = 0.04422355, label/mean = 3.1049454, loss = 0.060774922, prediction/mean = 3.1278303\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 40000: /tmp/tmpU33rCk/model.ckpt-40000\nINFO:tensorflow:Loss for final step: 0.039504506.\nINFO:tensorflow:Finished training Adanet iteration 1\nINFO:tensorflow:Beginning bookkeeping phase for iteration 1
INFO:tensorflow:Starting ensemble evaluation for iteration 1\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t0_linear = 0.035082, adanet_loss/t1_linear = 0.035048, adanet_loss/t1_1_layer_dnn = 0.031544\nINFO:tensorflow:Finished ensemble evaluation for iteration 1\nINFO:tensorflow:'t1_1_layer_dnn' at index 2 is moving onto the next iteration\n[warm-start variable listing and iteration-2 graph rebuild trimmed]\nINFO:tensorflow:Finished bookkeeping phase for iteration 1\nINFO:tensorflow:Beginning training AdaNet iteration 2\n
[per-step training log trimmed: steps 40001-59901, loss fluctuating between roughly 0.006 and 0.056 at ~440-545 global_step/sec]\n
sec)\nINFO:tensorflow:global_step/sec: 487.159\nINFO:tensorflow:loss = 0.019241655, step = 58501 (0.205 sec)\nINFO:tensorflow:global_step/sec: 494.266\nINFO:tensorflow:loss = 0.016469326, step = 58601 (0.202 sec)\nINFO:tensorflow:global_step/sec: 519.923\nINFO:tensorflow:loss = 0.021836523, step = 58701 (0.192 sec)\nINFO:tensorflow:global_step/sec: 506.186\nINFO:tensorflow:loss = 0.014409851, step = 58801 (0.198 sec)\nINFO:tensorflow:global_step/sec: 500.854\nINFO:tensorflow:loss = 0.023873296, step = 58901 (0.203 sec)\nINFO:tensorflow:global_step/sec: 474.735\nINFO:tensorflow:loss = 0.011066675, step = 59001 (0.207 sec)\nINFO:tensorflow:global_step/sec: 462.548\nINFO:tensorflow:loss = 0.025976984, step = 59101 (0.216 sec)\nINFO:tensorflow:global_step/sec: 495.194\nINFO:tensorflow:loss = 0.022162579, step = 59201 (0.202 sec)\nINFO:tensorflow:global_step/sec: 503.867\nINFO:tensorflow:loss = 0.011563149, step = 59301 (0.199 sec)\nINFO:tensorflow:global_step/sec: 518.912\nINFO:tensorflow:loss = 0.015920684, step = 59401 (0.192 sec)\nINFO:tensorflow:global_step/sec: 509.084\nINFO:tensorflow:loss = 0.0122279115, step = 59501 (0.197 sec)\nINFO:tensorflow:global_step/sec: 486.934\nINFO:tensorflow:loss = 0.01201019, step = 59601 (0.206 sec)\nINFO:tensorflow:global_step/sec: 492.735\nINFO:tensorflow:loss = 0.012843441, step = 59701 (0.207 sec)\nINFO:tensorflow:global_step/sec: 476.856\nINFO:tensorflow:loss = 0.014685018, step = 59801 (0.206 sec)\nINFO:tensorflow:global_step/sec: 493.486\nINFO:tensorflow:loss = 0.02178935, step = 59901 (0.202 sec)\nINFO:tensorflow:Saving checkpoints for 60000 into /tmp/tmpU33rCk/model.ckpt.\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Importing architecture from /tmp/tmpU33rCk/architecture-1.txt: ['0:linear', '1:1_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building iteration 2\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:36:21\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpU33rCk/model.ckpt-60000\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving candidate 't1_1_layer_dnn' dict for global step 60000: architecture/adanet/ensembles = \nj\n>adanet/iteration_1/ensemble_t1_1_layer_dnn/architecture/adanetB\u001e\b\u0007\u0012\u0000B\u0018| linear | 1_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.04422355, average_loss/adanet/subnetwork = 0.044653624, average_loss/adanet/uniform_average_ensemble = 0.043328855, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.060774922, loss/adanet/subnetwork = 0.06800773, loss/adanet/uniform_average_ensemble = 0.061006997, prediction/mean/adanet/adanet_weighted_ensemble = 3.1278303, prediction/mean/adanet/subnetwork = 3.1593368, prediction/mean/adanet/uniform_average_ensemble = 3.1326156\nINFO:tensorflow:Saving candidate 't2_1_layer_dnn' dict for global step 60000: architecture/adanet/ensembles = \nx\n>adanet/iteration_2/ensemble_t2_1_layer_dnn/architecture/adanetB,\b\u0007\u0012\u0000B&| linear | 1_layer_dnn | 1_layer_dnn 
|J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.04198682, average_loss/adanet/subnetwork = 0.0445389, average_loss/adanet/uniform_average_ensemble = 0.042342477, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.0576995, loss/adanet/subnetwork = 0.06806376, loss/adanet/uniform_average_ensemble = 0.061814114, prediction/mean/adanet/adanet_weighted_ensemble = 3.0984364, prediction/mean/adanet/subnetwork = 3.1642232, prediction/mean/adanet/uniform_average_ensemble = 3.143152\nINFO:tensorflow:Saving candidate 't2_2_layer_dnn' dict for global step 60000: architecture/adanet/ensembles = \nx\n>adanet/iteration_2/ensemble_t2_2_layer_dnn/architecture/adanetB,\b\u0007\u0012\u0000B&| linear | 1_layer_dnn | 2_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.03654939, average_loss/adanet/subnetwork = 0.032713592, average_loss/adanet/uniform_average_ensemble = 0.036697652, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.05076569, loss/adanet/subnetwork = 0.043944843, loss/adanet/uniform_average_ensemble = 0.052397445, prediction/mean/adanet/adanet_weighted_ensemble = 3.1145082, prediction/mean/adanet/subnetwork = 3.1556947, prediction/mean/adanet/uniform_average_ensemble = 3.140309\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:36:24\nINFO:tensorflow:Saving dict for global step 60000: average_loss = 0.03654939, average_loss/adanet/adanet_weighted_ensemble = 0.03654939, average_loss/adanet/subnetwork = 0.032713592, average_loss/adanet/uniform_average_ensemble = 0.036697652, global_step = 60000, label/mean = 3.1049454, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss = 0.05076569, loss/adanet/adanet_weighted_ensemble = 0.05076569, loss/adanet/subnetwork = 0.043944843, loss/adanet/uniform_average_ensemble = 0.052397445, prediction/mean = 3.1145082, prediction/mean/adanet/adanet_weighted_ensemble = 3.1145082, prediction/mean/adanet/subnetwork = 3.1556947, prediction/mean/adanet/uniform_average_ensemble = 3.140309\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 60000: /tmp/tmpU33rCk/model.ckpt-60000\nINFO:tensorflow:Loss for final step: 0.023291564.\nINFO:tensorflow:Finished training Adanet iteration 2\nINFO:tensorflow:Beginning bookkeeping phase for iteration 2\nINFO:tensorflow:Importing architecture from /tmp/tmpU33rCk/architecture-1.txt: ['0:linear', '1:1_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building iteration 2\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Starting ensemble evaluation for iteration 2\nINFO:tensorflow:Restoring parameters from /tmp/tmpU33rCk/model.ckpt-60000\nWARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. 
You can safely remove the call to this deprecated function.\nINFO:tensorflow:Encountered end of input after 14 evaluations\nINFO:tensorflow:Computed ensemble metrics: adanet_loss/t1_1_layer_dnn = 0.031544, adanet_loss/t2_1_layer_dnn = 0.029996, adanet_loss/t2_2_layer_dnn = 0.027457\nINFO:tensorflow:Finished ensemble evaluation for iteration 2\nINFO:tensorflow:'t2_2_layer_dnn' at index 2 is moving onto the next iteration\nINFO:tensorflow:Importing architecture from /tmp/tmpU33rCk/architecture-2.txt: ['0:linear', '1:1_layer_dnn', '2:2_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 2\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Warm-starting from: (u'/tmp/tmpU33rCk/model.ckpt-60000',)\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t0_linear/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t0_linear/adanet/iteration_1/candidate_t0_linear/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_1_layer_dnn/weighted_subnetwork_1/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_2_layer_dnn/weighted_subnetwork_2/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/ensemble_t0_linear/weighted_subnetwork_0/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_1_layer_dnn/weighted_subnetwork_1/subnetwork/dense_1/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/ensemble_t1_1_layer_dnn/weighted_subnetwork_1/subnetwork/dense_1/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t1_1_layer_dnn/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_2_layer_dnn/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_2_layer_dnn/weighted_subnetwork_2/subnetwork/dense/kernel; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_2_layer_dnn/weighted_subnetwork_2/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_0/candidate_t0_linear/adanet_loss; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/candidate_t1_1_layer_dnn/adanet/iteration_2/candidate_t1_1_layer_dnn/adanet_loss/biased; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/train_op/is_over/is_over_var_fn/is_over_var; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: global_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_2_layer_dnn/weighted_subnetwork_2/subnetwork/dense/bias; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_2/ensemble_t2_2_layer_dnn/weighted_subnetwork_1/logits/mixture_weight; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: adanet/iteration_1/candidate_t1_1_layer_dnn/adanet/iteration_1/candidate_t1_1_layer_dnn/adanet_loss/local_step; prev_var_name: Unchanged\nINFO:tensorflow:Warm-starting variable: 
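The per-candidate adanet_loss values above drive that last selection line: at the end of each iteration, AdaNet keeps the candidate ensemble with the smallest loss and promotes it into the next iteration. A minimal, self-contained sketch of that selection rule (an editor's illustration using the numbers from this log, not code from this notebook):

# Editor's sketch (hypothetical, not notebook code): reproduce the
# candidate selection using the "Computed ensemble metrics" values above.
candidate_losses = {
    "t1_1_layer_dnn": 0.031544,
    "t2_1_layer_dnn": 0.029996,
    "t2_2_layer_dnn": 0.027457,
}
# The candidate with the lowest AdaNet loss survives into the next iteration.
best = min(candidate_losses, key=candidate_losses.get)
print(best)  # -> 't2_2_layer_dnn', the candidate the log promotes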
INFO:tensorflow:Importing architecture from /tmp/tmpU33rCk/architecture-2.txt: ['0:linear', '1:1_layer_dnn', '2:2_layer_dnn'].\n
INFO:tensorflow:Warm-starting from: (u'/tmp/tmpU33rCk/model.ckpt-60000',)\n
[... several dozen 'Warm-starting variable: ...; prev_var_name: Unchanged' records elided ...]\n
INFO:tensorflow:Building iteration 3\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\n
INFO:tensorflow:Overwriting checkpoint with new graph for iteration 3 to /tmp/tmpU33rCk/model.ckpt-60000\n
INFO:tensorflow:Finished bookkeeping phase for iteration 2\n
INFO:tensorflow:Beginning training AdaNet iteration 3\n
[... model_fn build messages elided ...]\n
INFO:tensorflow:Restoring parameters from /tmp/tmpU33rCk/increment.ckpt-3\n
INFO:tensorflow:Saving checkpoints for 60000 into /tmp/tmpU33rCk/model.ckpt.\n
INFO:tensorflow:loss = 0.01849521, step = 60001\n
[... ~200 near-identical per-step records elided: steps 60101-79901, loss fluctuating between roughly 0.005 and 0.032 at ~380-490 global steps/sec ...]\n
INFO:tensorflow:Saving checkpoints for 80000 into /tmp/tmpU33rCk/model.ckpt.\nINFO:tensorflow:Calling model_fn.\n
INFO:tensorflow:Importing architecture from /tmp/tmpU33rCk/architecture-2.txt: ['0:linear', '1:1_layer_dnn', '2:2_layer_dnn'].\n
[... subnetwork rebuild/build messages elided ...]\nINFO:tensorflow:Done calling model_fn.\n
INFO:tensorflow:Starting evaluation at 2018-12-13-19:37:39\nINFO:tensorflow:Restoring parameters from /tmp/tmpU33rCk/model.ckpt-80000\n
INFO:tensorflow:Saving candidate 't2_2_layer_dnn' dict for global step 80000: architecture/adanet/ensembles = | linear | 1_layer_dnn | 2_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.03654939 [... remaining candidate metrics elided ...]\n
INFO:tensorflow:Saving candidate 't3_2_layer_dnn' dict for global step 80000: architecture/adanet/ensembles = | linear | 1_layer_dnn | 2_layer_dnn | 2_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.032998268 [... remaining candidate metrics elided ...]\n
INFO:tensorflow:Saving candidate 't3_3_layer_dnn' dict for global step 80000: architecture/adanet/ensembles = | linear | 1_layer_dnn | 2_layer_dnn | 3_layer_dnn |, average_loss/adanet/adanet_weighted_ensemble = 0.03303379 [... remaining candidate metrics elided ...]\n
INFO:tensorflow:Finished evaluation at 2018-12-13-19:37:43\n
INFO:tensorflow:Saving dict for global step 80000: average_loss = 0.032998268, global_step = 80000, label/mean = 3.1049454, loss = 0.042651616, prediction/mean = 3.0920377 [... per-ensemble breakdowns elided ...]\n
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 80000: /tmp/tmpU33rCk/model.ckpt-80000\n
INFO:tensorflow:Loss for final step: 0.0128020905.\nINFO:tensorflow:Finished training Adanet iteration 3\n
INFO:tensorflow:Beginning bookkeeping phase for iteration 3\nINFO:tensorflow:Starting ensemble evaluation for iteration 3\n
INFO:tensorflow:Restoring parameters from /tmp/tmpU33rCk/model.ckpt-80000\n
WARNING:tensorflow:`tf.train.start_queue_runners()` was called when no queue runners were defined. You can safely remove the call to this deprecated function.\n
INFO:tensorflow:Encountered end of input after 14 evaluations\n
INFO:tensorflow:Computed ensemble metrics: adanet_loss/t2_2_layer_dnn = 0.027457, adanet_loss/t3_2_layer_dnn = 0.025281, adanet_loss/t3_3_layer_dnn = 0.025353\n
INFO:tensorflow:Finished ensemble evaluation for iteration 3\n
INFO:tensorflow:'t3_2_layer_dnn' at index 1 is moving onto the next iteration\n
INFO:tensorflow:Importing architecture from /tmp/tmpU33rCk/architecture-3.txt: ['0:linear', '1:1_layer_dnn', '2:2_layer_dnn', '3:2_layer_dnn'].\n
INFO:tensorflow:Warm-starting from: (u'/tmp/tmpU33rCk/model.ckpt-80000',)\n
[... several dozen 'Warm-starting variable: ...; prev_var_name: Unchanged' records elided ...]\n
INFO:tensorflow:Building iteration 4\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\n
INFO:tensorflow:Overwriting checkpoint with new graph for iteration 4 to /tmp/tmpU33rCk/model.ckpt-80000\n
INFO:tensorflow:Finished bookkeeping phase for iteration 3\n
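Even condensed, a run like this emits thousands of per-step records, and the loss trend is easier to see programmatically than by reading the log. A small, runnable sketch (an editor's illustration, not code from this notebook) that recovers the (step, loss) series from log text in the format above, e.g. for plotting:

import re

# Two sample records in the same format as the training log above.
log_text = """
INFO:tensorflow:loss = 0.014136677, step = 80001
INFO:tensorflow:loss = 0.015122209, step = 80101 (1.284 sec)
"""

# Capture the loss value and the global step from each record.
pattern = re.compile(r"loss = ([0-9.]+), step = (\d+)")
series = [(int(step), float(loss)) for loss, step in pattern.findall(log_text)]
print(series)  # [(80001, 0.014136677), (80101, 0.015122209)]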
INFO:tensorflow:Beginning training AdaNet iteration 4\n
[... model_fn build messages elided ...]\n
INFO:tensorflow:Restoring parameters from /tmp/tmpU33rCk/increment.ckpt-4\n
INFO:tensorflow:Saving checkpoints for 80000 into /tmp/tmpU33rCk/model.ckpt.\n
INFO:tensorflow:loss = 0.014136677, step = 80001\n
[... ~120 near-identical per-step records elided: steps 80101-92301, loss fluctuating between roughly 0.005 and 0.026 at ~380-480 global steps/sec ...]\n
INFO:tensorflow:global_step/sec: 459.717\nINFO:tensorflow:loss = 0.011779211, step = 92401 (0.217 
sec)\nINFO:tensorflow:global_step/sec: 424.343\nINFO:tensorflow:loss = 0.011967138, step = 92501 (0.235 sec)\nINFO:tensorflow:global_step/sec: 461.653\nINFO:tensorflow:loss = 0.0106203705, step = 92601 (0.217 sec)\nINFO:tensorflow:global_step/sec: 459.715\nINFO:tensorflow:loss = 0.013204046, step = 92701 (0.217 sec)\nINFO:tensorflow:global_step/sec: 470.377\nINFO:tensorflow:loss = 0.014740882, step = 92801 (0.213 sec)\nINFO:tensorflow:global_step/sec: 455.658\nINFO:tensorflow:loss = 0.01317253, step = 92901 (0.219 sec)\nINFO:tensorflow:global_step/sec: 454.727\nINFO:tensorflow:loss = 0.018995635, step = 93001 (0.220 sec)\nINFO:tensorflow:global_step/sec: 446.527\nINFO:tensorflow:loss = 0.011236705, step = 93101 (0.224 sec)\nINFO:tensorflow:global_step/sec: 465.532\nINFO:tensorflow:loss = 0.0078997, step = 93201 (0.215 sec)\nINFO:tensorflow:global_step/sec: 465.212\nINFO:tensorflow:loss = 0.009754804, step = 93301 (0.215 sec)\nINFO:tensorflow:global_step/sec: 436.752\nINFO:tensorflow:loss = 0.013066869, step = 93401 (0.229 sec)\nINFO:tensorflow:global_step/sec: 453.834\nINFO:tensorflow:loss = 0.011819014, step = 93501 (0.220 sec)\nINFO:tensorflow:global_step/sec: 463.021\nINFO:tensorflow:loss = 0.007798925, step = 93601 (0.216 sec)\nINFO:tensorflow:global_step/sec: 446.748\nINFO:tensorflow:loss = 0.00755817, step = 93701 (0.224 sec)\nINFO:tensorflow:global_step/sec: 453.097\nINFO:tensorflow:loss = 0.01941024, step = 93801 (0.221 sec)\nINFO:tensorflow:global_step/sec: 455.564\nINFO:tensorflow:loss = 0.008340649, step = 93901 (0.219 sec)\nINFO:tensorflow:global_step/sec: 435.821\nINFO:tensorflow:loss = 0.00634618, step = 94001 (0.230 sec)\nINFO:tensorflow:global_step/sec: 442.063\nINFO:tensorflow:loss = 0.01474265, step = 94101 (0.226 sec)\nINFO:tensorflow:global_step/sec: 455.137\nINFO:tensorflow:loss = 0.0123366965, step = 94201 (0.219 sec)\nINFO:tensorflow:global_step/sec: 435.127\nINFO:tensorflow:loss = 0.013045967, step = 94301 (0.230 sec)\nINFO:tensorflow:global_step/sec: 455.067\nINFO:tensorflow:loss = 0.00995292, step = 94401 (0.220 sec)\nINFO:tensorflow:global_step/sec: 450.729\nINFO:tensorflow:loss = 0.008934388, step = 94501 (0.222 sec)\nINFO:tensorflow:global_step/sec: 443.44\nINFO:tensorflow:loss = 0.01303645, step = 94601 (0.226 sec)\nINFO:tensorflow:global_step/sec: 448.682\nINFO:tensorflow:loss = 0.012838178, step = 94701 (0.223 sec)\nINFO:tensorflow:global_step/sec: 463.964\nINFO:tensorflow:loss = 0.020047497, step = 94801 (0.215 sec)\nINFO:tensorflow:global_step/sec: 445.844\nINFO:tensorflow:loss = 0.018084995, step = 94901 (0.224 sec)\nINFO:tensorflow:global_step/sec: 437.066\nINFO:tensorflow:loss = 0.020988055, step = 95001 (0.229 sec)\nINFO:tensorflow:global_step/sec: 451.559\nINFO:tensorflow:loss = 0.0076173977, step = 95101 (0.222 sec)\nINFO:tensorflow:global_step/sec: 437.771\nINFO:tensorflow:loss = 0.010435652, step = 95201 (0.229 sec)\nINFO:tensorflow:global_step/sec: 452.663\nINFO:tensorflow:loss = 0.01778499, step = 95301 (0.220 sec)\nINFO:tensorflow:global_step/sec: 438.928\nINFO:tensorflow:loss = 0.00816166, step = 95401 (0.228 sec)\nINFO:tensorflow:global_step/sec: 446.8\nINFO:tensorflow:loss = 0.017459739, step = 95501 (0.224 sec)\nINFO:tensorflow:global_step/sec: 432.417\nINFO:tensorflow:loss = 0.013792496, step = 95601 (0.231 sec)\nINFO:tensorflow:global_step/sec: 450.963\nINFO:tensorflow:loss = 0.0145329265, step = 95701 (0.222 sec)\nINFO:tensorflow:global_step/sec: 435.927\nINFO:tensorflow:loss = 0.020655911, step = 95801 (0.230 
sec)\nINFO:tensorflow:global_step/sec: 462.24\nINFO:tensorflow:loss = 0.013377575, step = 95901 (0.216 sec)\nINFO:tensorflow:global_step/sec: 444.692\nINFO:tensorflow:loss = 0.014647892, step = 96001 (0.225 sec)\nINFO:tensorflow:global_step/sec: 432.998\nINFO:tensorflow:loss = 0.011901893, step = 96101 (0.231 sec)\nINFO:tensorflow:global_step/sec: 436.192\nINFO:tensorflow:loss = 0.0070857587, step = 96201 (0.229 sec)\nINFO:tensorflow:global_step/sec: 454.613\nINFO:tensorflow:loss = 0.0070481785, step = 96301 (0.220 sec)\nINFO:tensorflow:global_step/sec: 462.447\nINFO:tensorflow:loss = 0.014068865, step = 96401 (0.216 sec)\nINFO:tensorflow:global_step/sec: 463.401\nINFO:tensorflow:loss = 0.009489993, step = 96501 (0.216 sec)\nINFO:tensorflow:global_step/sec: 442.954\nINFO:tensorflow:loss = 0.009569755, step = 96601 (0.226 sec)\nINFO:tensorflow:global_step/sec: 460.514\nINFO:tensorflow:loss = 0.011614005, step = 96701 (0.217 sec)\nINFO:tensorflow:global_step/sec: 451.355\nINFO:tensorflow:loss = 0.013578262, step = 96801 (0.222 sec)\nINFO:tensorflow:global_step/sec: 455.554\nINFO:tensorflow:loss = 0.0053700837, step = 96901 (0.220 sec)\nINFO:tensorflow:global_step/sec: 441.355\nINFO:tensorflow:loss = 0.01334461, step = 97001 (0.226 sec)\nINFO:tensorflow:global_step/sec: 452.378\nINFO:tensorflow:loss = 0.0177409, step = 97101 (0.221 sec)\nINFO:tensorflow:global_step/sec: 447.299\nINFO:tensorflow:loss = 0.006775462, step = 97201 (0.224 sec)\nINFO:tensorflow:global_step/sec: 459.251\nINFO:tensorflow:loss = 0.013195847, step = 97301 (0.218 sec)\nINFO:tensorflow:global_step/sec: 457.995\nINFO:tensorflow:loss = 0.009728897, step = 97401 (0.218 sec)\nINFO:tensorflow:global_step/sec: 455.546\nINFO:tensorflow:loss = 0.014908279, step = 97501 (0.220 sec)\nINFO:tensorflow:global_step/sec: 455.737\nINFO:tensorflow:loss = 0.01381776, step = 97601 (0.219 sec)\nINFO:tensorflow:global_step/sec: 450.554\nINFO:tensorflow:loss = 0.009535696, step = 97701 (0.222 sec)\nINFO:tensorflow:global_step/sec: 458.461\nINFO:tensorflow:loss = 0.015514374, step = 97801 (0.218 sec)\nINFO:tensorflow:global_step/sec: 423.27\nINFO:tensorflow:loss = 0.011712936, step = 97901 (0.236 sec)\nINFO:tensorflow:global_step/sec: 420.42\nINFO:tensorflow:loss = 0.013672416, step = 98001 (0.238 sec)\nINFO:tensorflow:global_step/sec: 435.023\nINFO:tensorflow:loss = 0.012360635, step = 98101 (0.230 sec)\nINFO:tensorflow:global_step/sec: 441.184\nINFO:tensorflow:loss = 0.012664949, step = 98201 (0.227 sec)\nINFO:tensorflow:global_step/sec: 441.965\nINFO:tensorflow:loss = 0.015917204, step = 98301 (0.226 sec)\nINFO:tensorflow:global_step/sec: 436.022\nINFO:tensorflow:loss = 0.020906836, step = 98401 (0.234 sec)\nINFO:tensorflow:global_step/sec: 413.842\nINFO:tensorflow:loss = 0.010173284, step = 98501 (0.237 sec)\nINFO:tensorflow:global_step/sec: 452.2\nINFO:tensorflow:loss = 0.012074901, step = 98601 (0.221 sec)\nINFO:tensorflow:global_step/sec: 454.932\nINFO:tensorflow:loss = 0.011731269, step = 98701 (0.220 sec)\nINFO:tensorflow:global_step/sec: 446.592\nINFO:tensorflow:loss = 0.013173582, step = 98801 (0.224 sec)\nINFO:tensorflow:global_step/sec: 447.219\nINFO:tensorflow:loss = 0.020451186, step = 98901 (0.223 sec)\nINFO:tensorflow:global_step/sec: 457.725\nINFO:tensorflow:loss = 0.009836784, step = 99001 (0.219 sec)\nINFO:tensorflow:global_step/sec: 447.492\nINFO:tensorflow:loss = 0.018442167, step = 99101 (0.224 sec)\nINFO:tensorflow:global_step/sec: 456.811\nINFO:tensorflow:loss = 0.014100221, step = 99201 (0.219 
sec)\nINFO:tensorflow:global_step/sec: 440.711\nINFO:tensorflow:loss = 0.0076775113, step = 99301 (0.227 sec)\nINFO:tensorflow:global_step/sec: 462.599\nINFO:tensorflow:loss = 0.011209414, step = 99401 (0.217 sec)\nINFO:tensorflow:global_step/sec: 438.489\nINFO:tensorflow:loss = 0.008704329, step = 99501 (0.228 sec)\nINFO:tensorflow:global_step/sec: 436.411\nINFO:tensorflow:loss = 0.0077988785, step = 99601 (0.229 sec)\nINFO:tensorflow:global_step/sec: 411.029\nINFO:tensorflow:loss = 0.0077135414, step = 99701 (0.243 sec)\nINFO:tensorflow:global_step/sec: 427.106\nINFO:tensorflow:loss = 0.011483322, step = 99801 (0.234 sec)\nINFO:tensorflow:global_step/sec: 439.47\nINFO:tensorflow:loss = 0.016574938, step = 99901 (0.228 sec)\nINFO:tensorflow:Saving checkpoints for 100000 into /tmp/tmpU33rCk/model.ckpt.\nINFO:tensorflow:Calling model_fn.\nINFO:tensorflow:Importing architecture from /tmp/tmpU33rCk/architecture-3.txt: ['0:linear', '1:1_layer_dnn', '2:2_layer_dnn', '3:2_layer_dnn'].\nINFO:tensorflow:Rebuilding iteration 0\nINFO:tensorflow:Building subnetwork 'linear'\nINFO:tensorflow:Rebuilding iteration 1\nINFO:tensorflow:Building subnetwork '1_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 2\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Rebuilding iteration 3\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building iteration 4\nINFO:tensorflow:Building subnetwork '2_layer_dnn'\nINFO:tensorflow:Building subnetwork '3_layer_dnn'\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Starting evaluation at 2018-12-13-19:39:03\nINFO:tensorflow:Graph was finalized.\nINFO:tensorflow:Restoring parameters from /tmp/tmpU33rCk/model.ckpt-100000\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\nINFO:tensorflow:Saving candidate 't3_2_layer_dnn' dict for global step 100000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_3/ensemble_t3_2_layer_dnn/architecture/adanetB:\b\u0007\u0012\u0000B4| linear | 1_layer_dnn | 2_layer_dnn | 2_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.032998268, average_loss/adanet/subnetwork = 0.04255607, average_loss/adanet/uniform_average_ensemble = 0.036970153, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.042651616, loss/adanet/subnetwork = 0.059904583, loss/adanet/uniform_average_ensemble = 0.05318597, prediction/mean/adanet/adanet_weighted_ensemble = 3.0920377, prediction/mean/adanet/subnetwork = 3.1531146, prediction/mean/adanet/uniform_average_ensemble = 3.1435103\nINFO:tensorflow:Saving candidate 't4_2_layer_dnn' dict for global step 100000: architecture/adanet/ensembles = \n�\u0001\n>adanet/iteration_4/ensemble_t4_2_layer_dnn/architecture/adanetBH\b\u0007\u0012\u0000BB| linear | 1_layer_dnn | 2_layer_dnn | 2_layer_dnn | 2_layer_dnn |J\b\n\u0006\n\u0004text, average_loss/adanet/adanet_weighted_ensemble = 0.03505087, average_loss/adanet/subnetwork = 0.03415539, average_loss/adanet/uniform_average_ensemble = 0.03567381, label/mean/adanet/adanet_weighted_ensemble = 3.1049454, label/mean/adanet/subnetwork = 3.1049454, label/mean/adanet/uniform_average_ensemble = 3.1049454, loss/adanet/adanet_weighted_ensemble = 0.04599317, loss/adanet/subnetwork = 0.0469665, loss/adanet/uniform_average_ensemble = 0.05124563, prediction/mean/adanet/adanet_weighted_ensemble = 3.093939, 
\nINFO:tensorflow:Finished evaluation at 2018-12-13-19:39:09\nINFO:tensorflow:Saving dict for global step 100000: average_loss = 0.035222616, average_loss/adanet/adanet_weighted_ensemble = 0.035222616, average_loss/adanet/subnetwork = 0.038082376, average_loss/adanet/uniform_average_ensemble = 0.036011517, global_step = 100000, label/mean = 3.1049454, loss = 0.04607369, prediction/mean = 3.0921233 [... remaining per-ensemble label/mean, loss, and prediction/mean entries omitted ...]\nINFO:tensorflow:Saving 'checkpoint_path' summary for global step 100000: /tmp/tmpU33rCk/model.ckpt-100000\nINFO:tensorflow:Loss for final step: 0.011420857.\nINFO:tensorflow:Finished training Adanet iteration 4\nLoss: 0.035222616\nUniform average loss: 0.036011517\nArchitecture: | linear | 1_layer_dnn | 2_layer_dnn | 2_layer_dnn | 3_layer_dnn |\n" ] ], [ [ "Learning the mixture weights with $\lambda > 0$ produces a model with **0.0320**\nMSE. Notice that this is even better than the uniform average ensemble produced\nfrom the chosen subnetworks with **0.0345** MSE.\n\nInspecting the ensemble architecture demonstrates the effects of complexity\nregularization on candidate selection. The selected subnetworks are relatively\nless complex: unlike in previous runs, the simplest subnetwork is a linear model\nand the deepest subnetwork has only 3 hidden layers.\n\nIn general, learning to combine subnetwork outputs with optimal hyperparameters\nshould be at least as good as assigning uniform average weights.", "_____no_output_____" ], [ "## Conclusion\n\nIn this tutorial, you were able to explore training an AdaNet model's mixture\nweights with $\lambda \ge 0$. You were also able to compare against building an\nensemble formed by always choosing the best candidate subnetwork at each\niteration based on its ability to improve the ensemble's loss on the training\nset, and averaging their results.\n\nUniform average ensembles work unreasonably well in practice, yet learning the\nmixture weights with the correct values of $\lambda$ and $\beta$ should always\nproduce a better model when candidates have varying complexity. 
However, this\ndoes require some additional hyperparameter tuning, so in practice you can train\nan AdaNet with the default mixture weights and $\lambda=0$ first, and once you\nhave confirmed that the subnetworks are training correctly, you can tune the\nmixture weight hyperparameters.\n\nWhile this example explored a regression task, these observations apply to using\nAdaNet on other tasks like binary classification and multi-class classification.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7db64b9a742404a4177b13e8ece1bdf3e475c17
29,396
ipynb
Jupyter Notebook
sdk/apps/jetbot/jetbot_notebook.ipynb
antonspivak/isaac_gps
8c2a71e3f0e520f4f69640997c2cae98714c4043
[ "FSFAP" ]
null
null
null
sdk/apps/jetbot/jetbot_notebook.ipynb
antonspivak/isaac_gps
8c2a71e3f0e520f4f69640997c2cae98714c4043
[ "FSFAP" ]
null
null
null
sdk/apps/jetbot/jetbot_notebook.ipynb
antonspivak/isaac_gps
8c2a71e3f0e520f4f69640997c2cae98714c4043
[ "FSFAP" ]
1
2022-01-28T16:37:51.000Z
2022-01-28T16:37:51.000Z
46.734499
704
0.642979
[ [ [ "Remote control Jetbot using Virtual gamepad\n======\n", "_____no_output_____" ] ], [ [ "# This cell should only run once.\nimport os\n# set the current working directory. This is required by isaac.\nos.chdir(\"../..\")\nos.getcwd()\n", "_____no_output_____" ], [ "simulation =True\nfrom packages.pyalice import Application, Message, Codelet\n\n# Creates an empty Isaac application\napp = Application(name=\"jetbot_application\")", "_____no_output_____" ] ], [ [ "The Robot Engine Bridge enables communication between Omniverse and the Isaac SDK. When a REB application is created, a simulation-side Isaac SDK application is started, allowing messages to be sent to, or published from, the REB Components. The simulation Subgraph is loaded into our Isaac application, allowing the exchange of messages between our application and the REB Components present in the Omniverse model over TCP. Thus, by loading in the simulation subgraph and using the Camera and Differential Base components, our Isaac application can receive the image stream from Omniverse’s Viewport and can transmit commands to be effectuated in simulation. ", "_____no_output_____" ] ], [ [ "if simulation:\n # Loads the simulation_tcp subgraph into the Isaac application, adding all nodes, components,\n # edges, and configurations\n app.load(filename=\"apps/jetbot/simulation_tcp.subgraph.json\", prefix=\"simulation\")\n\n # Gets a reference to the interface node of the subgraph having a prefix of \"simulation\"\n simulation_node = app.nodes[\"simulation.interface\"]\n ", "_____no_output_____" ] ], [ [ "The Robot Remote Control component can send commands to the Differential Base to control the Jetbot model. Therefore, we add a node to which a component of type RobotRemoteControl is added. Nodes can be thought of as a container to group related components in an Isaac application. As the RobotRemoteControl component will be used to generate commands in the form of the desired state of a Segway, an edge is added between the “Segway_cmd” channel of the RobotRemoteControl component and the “base_command” channel of the simulation subgraph. This allows the REB Differential Base in Omniverse to receive the desired Segway states and move the Jetbot model in accordance with the received command. 
", "_____no_output_____" ] ], [ [ "if simulation:\n # Creating a new node in the Isaac application named \"robot_remote\"\n robot_remote_node = app.add(\"robot_remote\")\n\n # Loads the navigation module, allowing components requiring this module to be added to the application\n app.load_module(\"navigation\")\n\n # Add the RobotRemoteControl and FailsafeHeartbeat components to the robot_remote node\n robot_remote_control_component = robot_remote_node.add(name=\"RobotRemoteControl\", \n ctype=app.registry.isaac.navigation.RobotRemoteControl)\n\n failsafe_component = robot_remote_node.add(name=\"FailsafeHeartbeat\", \n ctype=app.registry.isaac.alice.FailsafeHeartbeat)\n\n # Set component configuration parameters\n robot_remote_control_component.config[\"tick_period\"] = \"10ms\"\n failsafe_component.config[\"heartbeat_name\"] = \"deadman_switch\"\n failsafe_component.config[\"failsafe_name\"] = \"robot_failsafe\"\n failsafe_component.config[\"interval\"] = 0.25\n\n # Makes dataflow connection between \"segway_cmd\" channel of RobotRemoteControl component, and the \n # \"base_command\" channel of the REB Differental Base in simulation\n app.connect(robot_remote_control_component, \"segway_cmd\", simulation_node[\"input\"], \"base_command\")\n", "_____no_output_____" ] ], [ [ "To generate a corresponding command, the Robot Remote Control component must receive either JoystickStateProto messages from its \"js_state\" channel, or messages consisting of a linear and angular velocity over its \"ctrl\" channel. Here, we add the virtual gamepad subgraph which can be used to generate \"JoystickStateProto\" messages required by the Robot Remote Control component. Therefore, we establish the necessary connection.", "_____no_output_____" ] ], [ [ "if simulation:\n # Loads the virtual_gamepad subgraph into the application\n app.load(filename=\"apps/jetbot/virtual_gamepad.subgraph.json\", prefix=\"virtual_gamepad\")\n\n # Finds a reference to the component named \"interface\", located in the subgraph node of the virtual gamepad \n # subgraph. The component named \"interface\" is of type Subgraph, meaning all messages coming to or from the\n # virtual gamepad subgraph will pass through the channels of the subgraph component. \n virtual_gamepad_interface = app.nodes[\"virtual_gamepad.subgraph\"][\"interface\"]\n\n # Pass messages generated by virtual gamepad to RobotRemoteControl component. \n app.connect(virtual_gamepad_interface, \"joystick\", robot_remote_control_component, \"js_state\")\n", "_____no_output_____" ] ], [ [ "The virtual gamepad widget in Sight allows us to use the WASD keys of the keyboard to steer the Jetbot in simulation, thus making the connection between Isaac and Omniverse more concrete. Prior to starting the Isaac application, the REB application is created by opening the jetbot.usd file in Omniverse, navigating to the Robot Engine Bridge extension and clicking \"create application\", followed by pressing the \"Play\" button. The Isaac application can now be started by executing the following piece of code. ", "_____no_output_____" ] ], [ [ "app.start()", "_____no_output_____" ] ], [ [ "Open Sight by going to (Your-IP-Address):3000 in your browser (or localhost:3000 if the Isaac SDK is running on your local machine), and control the Jetbot in simulation with the Virtual Gamepad as shown below. 
", "_____no_output_____" ] ], [ [ "app.stop()", "_____no_output_____" ] ], [ [ "Running Inference in Simulation \n======\nPlease note the following section requires a training environment built in simulation and a model trained.\n\nWith the simulation environment and a trained model (the .etlt file generated by following the Object Detection with DetectNetv2 pipeline), we can run inference using data streamed from simulation using the detectnet subgraph. The subgraph receives ImageViewer proto messages from its \"image\" channel, performs inference on the received images, and transmits a Detections2Proto message containing the bounding box position, label, and confidence for each of the detections. Upon loading the subgraph, configuration parameters are adjusted according to how training was conducted using the object detection pipeline. ", "_____no_output_____" ] ], [ [ "app.load(filename=\"packages/detect_net/apps/detect_net_inference.subgraph.json\", prefix=\"detect_net\")\n\n# Setting configuration parameters of components used in the detect-net subgraph to allow the trained\n# model to be used, and training parameters specified.\ninference_component = app.nodes[\"detect_net.tensor_r_t_inference\"][\"isaac.ml.TensorRTInference\"]\ninference_component.config.model_file_path = \"external/jetbot_ball_detection_resnet_model/jetbot_ball_detection_resnet18.etlt\"\ninference_component.config.etlt_password = \"nvidia\"\n\ndecoder_component = app.nodes[\"detect_net.detection_decoder\"][\"isaac.detect_net.DetectNetDecoder\"]\ndecoder_component.config.labels = [\"sphere\"]\n\n# Changing Detectnet Subgraph to accommodate Omniverse viewport (720 x 1280) and\n# dimensions used to train model\nif simulation:\n inference_component.config[\"input_tensor_info\"] = [\n {\n \"operation_name\": \"input_1\",\n \"channel\": \"image\",\n \"dims\": [3, 368, 640],\n \"uff_input_order\": \"channels_last\"\n }\n ]\n decoder_component.config[\"output_scale\"] = [720, 1280]\n encoder_component = app.nodes[\"detect_net.tensor_encoder\"][\"isaac.ml.ColorCameraEncoderCuda\"]\n encoder_component.config[\"rows\"] = 368\n\n", "_____no_output_____" ] ], [ [ "With the subgraph loaded and configuration parameters set, we can relay Omniverse's viewport stream, captured by the REB Camera, to the detectnet subgraph, allowing inference to be performed on simulation data.", "_____no_output_____" ] ], [ [ "detect_net_interface = app.nodes[\"detect_net.subgraph\"][\"interface\"]\n\nif simulation:\n # Allows image stream from Omniverse to flow to detect-net \n app.connect(simulation_node[\"output\"], \"color\", detect_net_interface, \"image\")\n", "_____no_output_____" ], [ "app.start()", "_____no_output_____" ] ], [ [ "Upon opening the jetbot_inference.usd file in Omniverse, creating the Robot Engine Bridge application, and starting both the simulation and he Isaac application, the performance of the detection model can be verified. ", "_____no_output_____" ] ], [ [ "app.stop()", "_____no_output_____" ] ], [ [ "Jetbot Autonomously Following Objects in Simulation\n======\nNow that objects are being correctly detected in simulation, we need to implement the control logic to move the Jetbot model such that it keeps the desired object both just in front of it and horizontally centered. 
To accomplish this, we first define a couple of helper functions to parse Detections2Proto messages, determine the pixel coordinates of the center of a bounding box, determine the area (in pixels) of a bounding box, and find the detection of a specified label whose bounding box center is closest to the target location. Bounding box area will later be used to estimate how close or far a detected object is from the Jetbot. ", "_____no_output_____" ] ], [ [ "import numpy as np\nimport json\n\ndef get_parsed_detections(detections_msg):\n    \"\"\"Parses and reformats Detections2Proto messages\"\"\"\n    detections_msg_json = detections_msg.json\n    zipped = zip(detections_msg_json['predictions'], detections_msg_json['boundingBoxes'])\n    zipped_list = list(zipped)\n    \n    detections = {}\n    for k, sublist in enumerate(zipped_list):\n        for sub_sublist in sublist:\n            # If the key is already present, merge into the existing dict.\n            if (k in detections):\n                detections[k] = {**detections[k], **sub_sublist}\n            # If the key is not present, insert the first attribute of the detected object.\n            else:\n                detections[k] = {**sub_sublist}\n    return detections\n\ndef norm(vec, target):\n    \"\"\"Computes the Euclidean distance between two 2D points\"\"\"\n    return np.sqrt((vec[0]-target[0])**2 + (vec[1]-target[1])**2)\n\ndef get_detection_center(detection):\n    \"\"\"Computes the center (x, y) coordinates of a detection; x indexes image rows, y indexes image columns\"\"\"\n    center_x = (detection['min']['x'] + detection['max']['x']) / 2.0 - 0.5\n    center_y = (detection['min']['y'] + detection['max']['y']) / 2.0 - 0.5\n    return (center_x, center_y)\n\ndef get_detection_area(detection):\n    \"\"\"Computes the area (in pixels) of a detection\"\"\"\n    detection_width = detection['max']['x'] - detection['min']['x']\n    detection_height = detection['max']['y'] - detection['min']['y']\n    detection_area = detection_width * detection_height\n    return detection_area\n\ndef find_closest_matching_detection(detections_dict, target, label):\n    \"\"\"Finds the detection in detections_dict with the specified label whose center is closest to the target pixel location\"\"\"\n    closest_matching_detection = None\n    closest_matching_detection_dist = np.inf\n    \n    for detection in detections_dict.values():\n\n        if detection[\"label\"] == label:\n            detection_center = get_detection_center(detection)\n            detection_dist = norm(detection_center, target)\n\n            if detection_dist < closest_matching_detection_dist:\n                closest_matching_detection = detection\n                closest_matching_detection_dist = detection_dist\n    \n    return closest_matching_detection\n", "_____no_output_____" ] ], [ [ "With the helper functions in place and detections being made in simulation, we can develop a Python Codelet to control the Jetbot, which will use detections messages to compute the desired motor commands of the Jetbot and then publish messages containing these commands. Codelets are functionally equivalent to the built-in components provided by the Isaac SDK in the sense that they both send and receive messages; however, Codelets provide a way for us to create a custom component so we can execute user-defined code within our Isaac application. You can learn more about creating Codelets in the Isaac SDK documentation. 
Linear speed of the Jetbot is determined based on the area of the bounding box of a detected object; If the area of the detected object is too small, the Jetbot will move closer to the object, whereas if the area is too large, the Jetbot will back away. Similarly, angular speed is set based on the horizontal offset of the detection from the center of the Jetbot’s view. Finally, motor commands are calculated based on the linear and angular speed, and messages containing the commands are published. \n", "_____no_output_____" ] ], [ [ "# Generates PWM commands to follow desired object\nclass JetbotControl(Codelet):\n \n def start(self):\n self.rx = self.isaac_proto_rx(\"Detections2Proto\", \"detections\")\n self.tx = self.isaac_proto_tx(\"StateProto\", \"motor_command\")\n\n # Ticks when new detections message is received\n self.tick_on_message(self.rx)\n\n def tick(self):\n # Receives a Detections2Proto message\n rx_message = self.rx.message\n\n # Reads configuration parameters set outside of Codelet\n label = self.config.label\n image_width = self.config.image_width\n image_height = self.config.image_height\n min_pwm = self.config.min_pwm # Smallest motor command required to move real Jetbot\n angular_gain = self.config.angular_gain\n target_coverage = self.config.target_coverage\n\n parsed_detections = get_parsed_detections(rx_message)\n\n image_horizontal_center = image_width / 2.0\n image_vertical_center = image_height / 2.0\n image_center = [image_vertical_center, image_horizontal_center]\n\n detection = find_closest_matching_detection(parsed_detections, image_center, label)\n\n if detection is None: \n # Do not move if there isn't a detection with matching label\n left_motor_command = 0.0\n right_motor_command = 0.0\n else: \n # Generate PWM commands to move towards detection by keeping the detection horizontally centered, \n # and the fraction of the image the bounding box covers equal to target_coverage\n \n # Compute areas\n image_area = image_width * image_height\n target_area = target_coverage * image_area\n detection_area = get_detection_area(detection)\n \n # Use areas to determine linear speed\n # min_pwm is used here to eliminate dead zones\n if detection_area < target_area:\n linear_speed = min_pwm + (1 - min_pwm)*(target_area - detection_area) / target_area\n else:\n linear_speed = -min_pwm + (1 - min_pwm)*(target_area - detection_area) / (image_area - target_area)\n \n # Use horizontal offset of detection from image center to determine angular speed\n detection_center = get_detection_center(detection)\n angular_speed = (image_horizontal_center - detection_center[1]) / image_horizontal_center\n\n # Computes motor commands based on desired linear and angular speeds, ensuring PWM commands are in Jetbot's\n # acceptable range of [-1, 1]\n min_motor_command = -1\n max_motor_command = 1\n left_motor_command = float(np.clip(linear_speed - angular_gain * angular_speed, min_motor_command, max_motor_command))\n right_motor_command = float(np.clip(linear_speed + angular_gain * angular_speed, min_motor_command, max_motor_command))\n\n # Initializes, populates, and transmits a StateProto message containing motor commands\n tx_message = self.tx.init()\n data = tx_message.proto.init('data', 2)\n\n data[0] = left_motor_command\n data[1] = right_motor_command\n \n self.tx.publish()\n\n", "_____no_output_____" ] ], [ [ "The created Codelet must be added to the Isaac application, just like a normal component. 
The configuration parameters are then set and an edge added between the Codelet and the detect-net subgraph so that the detections messages can be used to generate control commands.", "_____no_output_____" ] ], [ [ "# Create a new node, and add the JetbotControl Codelet to the node.\ncontroller_node = app.add(\"controller\")\njetbot_control_component = controller_node.add(JetbotControl)\n\n# Set the configuration parameters of the JetbotControl Codelet\nif simulation:\n jetbot_control_component.config.image_width = 1280\n jetbot_control_component.config.image_height = 720\nelse:\n jetbot_control_component.config.image_width = 640\n jetbot_control_component.config.image_height = 360\njetbot_control_component.config.label = \"sphere\"\njetbot_control_component.config.target_coverage = 0.05\njetbot_control_component.config.angular_gain = 0.057\njetbot_control_component.config.min_pwm = 0.25\n\n# Pass detections to JetbotControl Codelet\napp.connect(detect_net_interface, \"detections\", jetbot_control_component, \"detections\")\n\n", "_____no_output_____" ] ], [ [ "While the added Codelet can generate motor commands compatible with the real Jetbot, in simulation the Jetbot is controlled by sending Segway commands to the REB Differential Base. Segway commands can be generated by providing linear and angular velocities to the “ctrl” channel of the previously created RobotRemoteControl component. To convert PWM commands generated by our controller into the linear and angular commands needed, a relationship between motor commands sent to the Jetbot and the speed at which the real Jetbot travels must be established. The mapping between motor command and velocity was found by experimentally measuring the time taken for the real Jetbot to travel 3 meters. ", "_____no_output_____" ] ], [ [ "def pwm_to_velocity(pwm_command, min_pwm_command):\n \"\"\"Computes velocity (in [m/s]) of real Jetbot when both motors are set to \"pwm_command\" based on experimental data\"\"\"\n command_abs = np.abs(pwm_command)\n\n # min_pwm_command represents the interval of commands sent to the jetbot which do not cause movement:\n # [-min_pwm_command, min_pwm_command]\n if command_abs < min_pwm_command:\n velocity = 0\n else:\n velocity = float(np.sign(pwm_command) * (2.0328 * command_abs - 0.0948))\n \n return velocity\n", "_____no_output_____" ] ], [ [ "A second Codelet can now be created to adapt the PWM commands so that they are able to be used in simulation. With the help of our recently defined “pwm_to_velocity” function, the velocity of each wheel can be calculated. Then, using the dynamics equations of a differential base, linear and angular velocity can be calculated from the wheel velocities. 
", "_____no_output_____" ] ], [ [ "# Converts PWM commands into linear and angular velocities\nclass SimulationAdapter(Codelet):\n\n def start(self):\n\n self.rx = self.isaac_proto_rx(\"StateProto\", \"motor_command\")\n self.tx = self.isaac_proto_tx(\"StateProto\", \"velocity_command\")\n self.tick_on_message(self.rx)\n\n def tick(self):\n\n rx_message = self.rx.message\n\n min_pwm_command = self.config.min_pwm_command\n simulation_linear_gain = self.config.simulation_linear_gain\n simulation_angular_gain = self.config.simulation_angular_gain\n\n data = rx_message.json['data']\n \n left_motor_command = data[0]\n right_motor_command = data[1]\n\n left_wheel_velocity = pwm_to_velocity(left_motor_command, min_pwm_command)\n right_wheel_velocity = pwm_to_velocity(right_motor_command, min_pwm_command)\n\n # Distance between wheels of Jetbot [m]\n length = 0.1143\n\n # Linear and angular velocity resulting from PWM command\n linear_velocity = (left_wheel_velocity + right_wheel_velocity) / 2.0\n angular_velocity = (left_wheel_velocity - right_wheel_velocity) / length\n\n # Gains were found using a REB RigidBodySink and measuring velocity traveled in simulation,\n # versus linear and angular command sent to simulation. \n simulation_linear_command = simulation_linear_gain * linear_velocity\n simulation_angular_command = simulation_angular_gain * angular_velocity\n\n # Initializes, populates, and publishes commands containing linear and angular velocities\n tx_message = self.tx.init()\n data = tx_message.proto.init('data', 2)\n\n data[0] = simulation_linear_command\n data[1] = simulation_angular_command\n \n self.tx.publish()\n", "_____no_output_____" ] ], [ [ "With the Simulation Adapter Codelet defined, we may now add it to our Isaac application. An edge is added from the Jetbot Control Codelet to the Simulation Adapter, allowing the Adapter to receive PWM commands from the controller. Once the commands are converted into linear and angular velocities, they must be sent to the RobotRemoteControl component as previously discussed, so we need to add the corresponding edge. ", "_____no_output_____" ] ], [ [ "if simulation:\n # Create a new node, and add the SimulationAdapter Codelet to the node.\n adapter_node = app.add(\"adapter\")\n simulation_adapter_component = adapter_node.add(SimulationAdapter)\n\n # Set the configuration parameters of the SimulationAdapter Codelet\n simulation_adapter_component.config.min_pwm_command = 0.2\n simulation_adapter_component.config.simulation_linear_gain = 0.27\n simulation_adapter_component.config.simulation_angular_gain = -0.16\n\n # Pass motor commands calculated by the JetbotControl Codelet to the SimulationAdapter Codelet\n app.connect(jetbot_control_component, \"motor_command\", simulation_adapter_component, \"motor_command\")\n\n # Pass linear and angular velocity commands from the SimulationAdapter Codelet to the RobotRemoteControl component.\n app.connect(simulation_adapter_component, \"velocity_command\", robot_remote_control_component, \"ctrl\")\n\n", "_____no_output_____" ] ], [ [ "Now we’re ready to autonomously follow a ball in simulation. Upon opening the jetbot_follow.usd file in Omniverse, create the Robot Engine Bridge application, and start the Isaac application by running the next cell.", "_____no_output_____" ] ], [ [ "app.start()", "_____no_output_____" ] ], [ [ "You'll notice that despite the Jetbot detecting objects, it isn't moving. 
The reason is that the Robot Remote Control component will only send commands while the deadman switch is pressed, for safety reasons. But there aren't any safety concerns in simulation! Let's go ahead and disable that.", "_____no_output_____" ] ], [ [ "if simulation:\n    robot_remote_control_component.config[\"disable_deadman_switch\"] = True", "_____no_output_____" ] ], [ [ "You should now see your Jetbot following balls as they appear before it in simulation. Cool! Tweak the config parameters of the JetbotControl Codelet to your liking, and let's finish bridging the gap between simulation and reality!", "_____no_output_____" ] ], [ [ "app.stop()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7db67a7737be7bea53164dff2175b3998e01019
4,668
ipynb
Jupyter Notebook
docs/_downloads/ee69127c1eacbde4ff2e2aca2e46e8f0/two_layer_net_module.ipynb
jessemin/PyTorch-tutorials-kr
bcb015e5b4eb4013f3ee03374c2669733bfd09ca
[ "BSD-3-Clause" ]
1
2019-12-05T05:16:44.000Z
2019-12-05T05:16:44.000Z
docs/_downloads/ee69127c1eacbde4ff2e2aca2e46e8f0/two_layer_net_module.ipynb
jessemin/PyTorch-tutorials-kr
bcb015e5b4eb4013f3ee03374c2669733bfd09ca
[ "BSD-3-Clause" ]
null
null
null
docs/_downloads/ee69127c1eacbde4ff2e2aca2e46e8f0/two_layer_net_module.ipynb
jessemin/PyTorch-tutorials-kr
bcb015e5b4eb4013f3ee03374c2669733bfd09ca
[ "BSD-3-Clause" ]
null
null
null
86.444444
2,804
0.663882
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\nPyTorch: 사용자 정의 nn Module\n-------------------------------\n\n하나의 은닉층(hidden layer)과 편향(bias)이 없는 완전히 연결된 ReLU 신경망을,\n유클리드 거리(Euclidean distance) 제곱을 최소화하는 식으로 x로부터 y를 예측하도록\n학습하겠습니다.\n\n이번에는 사용자 정의 Module의 서브클래스로 모델을 정의합니다. 기존 Module의 간단한\n구성보다 더 복잡한 모델을 원한다면, 이 방법으로 모델을 정의하면 됩니다.\n\n", "_____no_output_____" ] ], [ [ "import torch\n\n\nclass TwoLayerNet(torch.nn.Module):\n def __init__(self, D_in, H, D_out):\n \"\"\"\n 생성자에서 2개의 nn.Linear 모듈을 생성하고, 멤버 변수로 지정합니다.\n \"\"\"\n super(TwoLayerNet, self).__init__()\n self.linear1 = torch.nn.Linear(D_in, H)\n self.linear2 = torch.nn.Linear(H, D_out)\n\n def forward(self, x):\n \"\"\"\n 순전파 함수에서는 입력 데이터의 Tensor를 받고 출력 데이터의 Tensor를\n 반환해야 합니다. Tensor 상의 임의의 연산자뿐만 아니라 생성자에서 정의한\n Module도 사용할 수 있습니다.\n \"\"\"\n h_relu = self.linear1(x).clamp(min=0)\n y_pred = self.linear2(h_relu)\n return y_pred\n\n\n# N은 배치 크기이며, D_in은 입력의 차원입니다;\n# H는 은닉층의 차원이며, D_out은 출력 차원입니다.\nN, D_in, H, D_out = 64, 1000, 100, 10\n\n# 입력과 출력을 저장하기 위해 무작위 값을 갖는 Tensor를 생성합니다.\nx = torch.randn(N, D_in)\ny = torch.randn(N, D_out)\n\n# 앞에서 정의한 클래스를 생성하여 모델을 구성합니다.\nmodel = TwoLayerNet(D_in, H, D_out)\n\n# 손실 함수와 Optimizer를 만듭니다. SGD 생성자에 model.parameters()를 호출하면\n# 모델의 멤버인 2개의 nn.Linear 모듈의 학습 가능한 매개변수들이 포함됩니다.\ncriterion = torch.nn.MSELoss(reduction='sum')\noptimizer = torch.optim.SGD(model.parameters(), lr=1e-4)\nfor t in range(500):\n # 순전파 단계: 모델에 x를 전달하여 예상되는 y 값을 계산합니다.\n y_pred = model(x)\n\n # 손실을 계산하고 출력합니다.\n loss = criterion(y_pred, y)\n print(t, loss.item())\n\n # 변화도를 0으로 만들고, 역전파 단계를 수행하고, 가중치를 갱신합니다.\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ] ]
e7db7f93492caf5500ab12046f2aca5a026f411e
17,752
ipynb
Jupyter Notebook
ch03/Listing 3.02.ipynb
fneitzel/MLwithTensorFlow2ed
479f74e54c42a231b058472407e82b37c61dac88
[ "Apache-2.0" ]
96
2020-02-02T22:56:24.000Z
2022-03-20T22:39:54.000Z
ch03/Listing 3.02.ipynb
fneitzel/MLwithTensorFlow2ed
479f74e54c42a231b058472407e82b37c61dac88
[ "Apache-2.0" ]
11
2020-07-30T04:11:10.000Z
2022-01-13T03:14:35.000Z
ch03/Listing 3.02.ipynb
fneitzel/MLwithTensorFlow2ed
479f74e54c42a231b058472407e82b37c61dac88
[ "Apache-2.0" ]
43
2019-12-04T15:02:34.000Z
2022-03-12T22:06:12.000Z
109.580247
14,660
0.886266
[ [ [ "import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "learning_rate = 0.01\ntraining_epochs = 100", "_____no_output_____" ], [ "x_train = np.linspace(-1, 1, 101)\ny_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33", "_____no_output_____" ], [ "X = tf.placeholder(tf.float32)\nY = tf.placeholder(tf.float32)\n\ndef model(X, w):\n return tf.multiply(X, w)\n", "_____no_output_____" ], [ "w = tf.Variable(0.0, name=\"weights\")\ny_model = model(X, w)\ncost = tf.square(Y-y_model)", "_____no_output_____" ], [ "train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)", "_____no_output_____" ], [ "sess = tf.Session()\ninit = tf.global_variables_initializer()\nsess.run(init)", "_____no_output_____" ], [ "for epoch in range(training_epochs):\n for (x, y) in zip(x_train, y_train):\n sess.run(train_op, feed_dict={X: x, Y: y})", "_____no_output_____" ], [ "w_val = sess.run(w)", "_____no_output_____" ], [ "sess.close()", "_____no_output_____" ], [ "plt.scatter(x_train, y_train)\ny_learned = x_train * w_val\nplt.plot(x_train, y_learned, 'r')\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7db97d63eb0cb4fa695c26498506119e0eeb4e0
995,778
ipynb
Jupyter Notebook
example/runPairwiseRegression_example.ipynb
philips-labs/pairwise-regression
9b164601acb786d788c682a395b97b19338d34b3
[ "MIT" ]
null
null
null
example/runPairwiseRegression_example.ipynb
philips-labs/pairwise-regression
9b164601acb786d788c682a395b97b19338d34b3
[ "MIT" ]
null
null
null
example/runPairwiseRegression_example.ipynb
philips-labs/pairwise-regression
9b164601acb786d788c682a395b97b19338d34b3
[ "MIT" ]
null
null
null
7,376.133333
992,652
0.973854
[ [ [ "from sklearn.linear_model import Ridge as ridge\nfrom sklearn.linear_model import LinearRegression\nimport pandas as pd\nimport numpy as np\nfrom sklearn.metrics import mean_absolute_error\nimport sys\nsys.path.append('..')\nfrom pairwiselr import KeyCovariatePairwiseLR\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Read sample data", "_____no_output_____" ] ], [ [ "x_train = pd.read_csv('sample_training_data.csv')\nft_predict = 'cur_pH' # specify which column to predict\nall_fts = x_train.columns\nmodel_fts = all_fts\nmodel_fts.drop(ft_predict)", "_____no_output_____" ] ], [ [ "# Run pairwise regression model", "_____no_output_____" ] ], [ [ "# train model\nz_dynamic_range = np.linspace(6.8,7.5,15) # specify the range and binning of key-covariate\n\nmodel = KeyCovariatePairwiseLR(alpha_blend=20, cov_steps=20, coeff_smooth_z=6, func_smooth_z='sigmoid')\nprint(ft_predict)\nmodel.fit(x_train[all_fts], 'prev_pH', ft_predict, cov_range_z=z_dynamic_range, include_z_in_x=True)\nypred_train = model.predict(x_train[all_fts])\nmodel.plot_pairwise_interactions(n_plot_cols=3, scaleup=1.6)\nplt.tight_layout()", "cur_pH\nprev_pH\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dba04b825353650414b4d48537814e5a2f2325
532,747
ipynb
Jupyter Notebook
ETL-I/ETL1 04 - Connecting to JDBC.ipynb
vshiv667/Data-Engineering
b32c12b71d1ed0cbed9362e83d32b6eed7d7cd21
[ "MIT" ]
null
null
null
ETL-I/ETL1 04 - Connecting to JDBC.ipynb
vshiv667/Data-Engineering
b32c12b71d1ed0cbed9362e83d32b6eed7d7cd21
[ "MIT" ]
null
null
null
ETL-I/ETL1 04 - Connecting to JDBC.ipynb
vshiv667/Data-Engineering
b32c12b71d1ed0cbed9362e83d32b6eed7d7cd21
[ "MIT" ]
null
null
null
266,373.5
532,746
0.655655
[ [ [ "d-sandbox\n\n<div style=\"text-align: center; line-height: 0; padding-top: 9px;\">\n <img src=\"https://databricks.com/wp-content/uploads/2018/03/db-academy-rgb-1200px.png\" alt=\"Databricks Learning\" style=\"width: 600px; height: 163px\">\n</div>", "_____no_output_____" ], [ "# Connecting to JDBC\n\nApache Spark&trade; and Databricks&reg; allow you to connect to a number of data stores using JDBC.\n## In this lesson you:\n* Read data from a JDBC connection \n* Parallelize your read operation to leverage distributed computation\n\n## Audience\n* Primary Audience: Data Engineers\n* Additional Audiences: Data Scientists and Data Pipeline Engineers\n\n## Prerequisites\n* Web browser: Please use a <a href=\"https://docs.databricks.com/user-guide/supported-browsers.html#supported-browsers\" target=\"_blank\">supported browser</a>.\n* Concept (optional): <a href=\"https://academy.databricks.com/collections/frontpage/products/dataframes\" target=\"_blank\">DataFrames course from Databricks Academy</a>", "_____no_output_____" ], [ "<iframe \nsrc=\"//fast.wistia.net/embed/iframe/i07uvaoqgh?videoFoam=true\"\nstyle=\"border:1px solid #1cb1c2;\"\nallowtransparency=\"true\" scrolling=\"no\" class=\"wistia_embed\"\nname=\"wistia_embed\" allowfullscreen mozallowfullscreen webkitallowfullscreen\noallowfullscreen msallowfullscreen width=\"640\" height=\"360\" ></iframe>\n<div>\n<a target=\"_blank\" href=\"https://fast.wistia.net/embed/iframe/i07uvaoqgh?seo=false\">\n <img alt=\"Opens in new tab\" src=\"https://files.training.databricks.com/static/images/external-link-icon-16x16.png\"/>&nbsp;Watch full-screen.</a>\n</div>", "_____no_output_____" ], [ "-sandbox\n## Java Database Connectivity\n\nJava Database Connectivity (JDBC) is an application programming interface (API) that defines database connections in Java environments. Spark is written in Scala, which runs on the Java Virtual Machine (JVM). This makes JDBC the preferred method for connecting to data whenever possible. Hadoop, Hive, and MySQL all run on Java and easily interface with Spark clusters.\n\nDatabases are advanced technologies that benefit from decades of research and development. To leverage the inherent efficiencies of database engines, Spark uses an optimization called predicate pushdown. **Predicate pushdown uses the database itself to handle certain parts of a query (the predicates).** In mathematics and functional programming, a predicate is anything that returns a Boolean. In SQL terms, this often refers to the `WHERE` clause. Since the database is filtering data before it arrives on the Spark cluster, there's less data transfer across the network and fewer records for Spark to process. Spark's Catalyst Optimizer includes predicate pushdown communicated through the JDBC API, making JDBC an ideal data source for Spark workloads.\n\nIn the road map for ETL, this is the **Extract and Validate** step:\n\n<img src=\"https://files.training.databricks.com/images/eLearning/ETL-Part-1/ETL-Process-1.png\" style=\"border: 1px solid #aaa; border-radius: 10px 10px 10px 10px; box-shadow: 5px 5px 5px #aaa\"/>", "_____no_output_____" ], [ "### Recalling the Design Pattern\n\nRecall the design pattern for connecting to data from the previous lesson: \n<br>\n1. Define the connection point.\n2. Define connection parameters such as access credentials.\n3. Add necessary options. \n\nAfter adhering to this, read data using `spark.read.options(<option key>, <option value>).<connection_type>(<endpoint>)`. 
The JDBC connection uses this same formula with added complexity over what was covered in the lesson.", "_____no_output_____" ], [ "<iframe \nsrc=\"//fast.wistia.net/embed/iframe/2clbjyxese?videoFoam=true\"\nstyle=\"border:1px solid #1cb1c2;\"\nallowtransparency=\"true\" scrolling=\"no\" class=\"wistia_embed\"\nname=\"wistia_embed\" allowfullscreen mozallowfullscreen webkitallowfullscreen\noallowfullscreen msallowfullscreen width=\"640\" height=\"360\" ></iframe>\n<div>\n<a target=\"_blank\" href=\"https://fast.wistia.net/embed/iframe/2clbjyxese?seo=false\">\n <img alt=\"Opens in new tab\" src=\"https://files.training.databricks.com/static/images/external-link-icon-16x16.png\"/>&nbsp;Watch full-screen.</a>\n</div>", "_____no_output_____" ], [ "## ![Spark Logo Tiny](https://files.training.databricks.com/images/105/logo_spark_tiny.png) Classroom-Setup & Classroom-Cleanup<br>\n\nFor each lesson to execute correctly, please make sure to run the **`Classroom-Setup`** cell at the start of each lesson (see the next cell) and the **`Classroom-Cleanup`** cell at the end of each lesson.", "_____no_output_____" ] ], [ [ "%run \"./Includes/Classroom-Setup\"", "_____no_output_____" ] ], [ [ "-sandbox\nRun the cell below to confirm you are using the right driver.\n\n<img alt=\"Side Note\" title=\"Side Note\" style=\"vertical-align: text-bottom; position: relative; height:1.75em; top:0.05em; transform:rotate(15deg)\" src=\"https://files.training.databricks.com/static/images/icon-note.webp\"/> Each notebook has a default language that appears in upper corner of the screen next to the notebook name, and you can easily switch between languages in a notebook. To change languages, start your cell with `%python`, `%scala`, `%sql`, or `%r`.", "_____no_output_____" ] ], [ [ "%scala\n// run this regardless of language type\nClass.forName(\"org.postgresql.Driver\")", "_____no_output_____" ] ], [ [ "Define your database connection criteria. In this case, you need the hostname, port, and database name. \n\nAccess the database `training` via port `5432` of a Postgres server sitting at the endpoint `server1.databricks.training`.\n\nCombine the connection criteria into a URL.", "_____no_output_____" ] ], [ [ "jdbcHostname = \"server1.databricks.training\"\njdbcPort = 5432\njdbcDatabase = \"training\"\n\njdbcUrl = f\"jdbc:postgresql://{jdbcHostname}:{jdbcPort}/{jdbcDatabase}\"", "_____no_output_____" ] ], [ [ "Create a connection properties object with the username and password for the database.", "_____no_output_____" ] ], [ [ "connectionProps = {\n \"user\": \"readonly\",\n \"password\": \"readonly\"\n}", "_____no_output_____" ] ], [ [ "Read from the database by passing the URL, table name, and connection properties into `spark.read.jdbc()`.", "_____no_output_____" ] ], [ [ "tableName = \"training.people_1m\"\n\npeopleDF = spark.read.jdbc(url=jdbcUrl, table=tableName, properties=connectionProps)\n\ndisplay(peopleDF)", "_____no_output_____" ] ], [ [ "## Exercise 1: Parallelizing JDBC Connections\n\nThe command above was executed as a serial read through a single connection to the database. 
This works well for small data sets; at scale, parallel reads are necessary for optimal performance.\n\nSee the [Managing Parallelism](https://docs.databricks.com/spark/latest/data-sources/sql-databases.html#managing-parallelism) section of the Databricks documentation.", "_____no_output_____" ], [ "-sandbox\n### Step 1: Find the Range of Values in the Data\n\nParallel JDBC reads entail assigning a range of values for a given partition to read from. The first step of this divide-and-conquer approach is to find bounds of the data.\n\nCalculate the range of values in the `id` column of `peopleDF`. Save the minimum to `dfMin` and the maximum to `dfMax`. **This should be the number itself rather than a DataFrame that contains the number.** Use `.first()` to get a Scala or Python object.\n\n<img alt=\"Hint\" title=\"Hint\" style=\"vertical-align: text-bottom; position: relative; height:1.75em; top:0.3em\" src=\"https://files.training.databricks.com/static/images/icon-light-bulb.svg\"/>&nbsp;**Hint:** See the `min()` and `max()` functions in Python `pyspark.sql.functions` or Scala `org.apache.spark.sql.functions`.", "_____no_output_____" ] ], [ [ "dfMin=peopleDF.select(\"id\").rdd.min()[0]\ndfMax=peopleDF.select(\"id\").rdd.max()[0]", "_____no_output_____" ], [ "# TEST - Run this cell to test your solution\n\ndbTest(\"ET1-P-04-01-01\", 1, dfMin)\ndbTest(\"ET1-P-04-01-02\", 1000000, dfMax)\n\nprint(\"Tests passed!\")", "_____no_output_____" ] ], [ [ "-sandbox\n### Step 2: Define the Connection Parameters.\n\n<a href=\"https://docs.databricks.com/spark/latest/data-sources/sql-databases.html#manage-parallelism\" target=\"_blank\">Referencing the documentation,</a> define the connection parameters for this read.\n\nUse 8 partitions.\n\nAssign the results to `peopleDFParallel`.\n\n<img alt=\"Side Note\" title=\"Side Note\" style=\"vertical-align: text-bottom; position: relative; height:1.75em; top:0.05em; transform:rotate(15deg)\" src=\"https://files.training.databricks.com/static/images/icon-note.webp\"/> Setting the column for your parallel read introduces unexpected behavior due to a bug in Spark. To make sure Spark uses the capitalization of your column, use `'\"id\"'` for your column. <a href=\"https://github.com/apache/spark/pull/20370#issuecomment-359958843\" target=\"_blank\"> Monitor the issue here.</a>", "_____no_output_____" ] ], [ [ "peopleDFParallel = spark.read.jdbc(url=jdbcUrl, table=\"training.people_1m\", column='\"id\"', lowerBound=1, upperBound=100000, numPartitions=8,properties=connectionProps)\ndisplay(peopleDFParallel)\n", "_____no_output_____" ], [ "# TEST - Run this cell to test your solution\ndbTest(\"ET1-P-04-02-01\", 8, peopleDFParallel.rdd.getNumPartitions())\n\nprint(\"Tests passed!\")", "_____no_output_____" ] ], [ [ "### Step 3: Compare the Serial and Parallel Reads\n\nCompare the two reads with the `%timeit` function.", "_____no_output_____" ], [ "Display the number of partitions in each DataFrame by running the following:", "_____no_output_____" ] ], [ [ "print(\"Partitions:\", peopleDF.rdd.getNumPartitions())\nprint(\"Partitions:\", peopleDFParallel.rdd.getNumPartitions())", "_____no_output_____" ] ], [ [ "Invoke `%timeit` followed by calling a `.describe()`, which computes summary statistics, on both `peopleDF` and `peopleDFParallel`.", "_____no_output_____" ] ], [ [ "%timeit peopleDF.describe()\n%timeit peopleDFParallel.describe()", "_____no_output_____" ] ], [ [ "What is the difference between serial and parallel reads? 
Note that your results vary drastically depending on the cluster and the number of partitions you use.", "_____no_output_____" ] ], [ [ "# Parallel reads are faster by 3.5 secs on average over 7 runs", "_____no_output_____" ] ], [ [ "## Review\n\n**Question:** What is JDBC? \n**Answer:** JDBC stands for Java Database Connectivity, and is a Java API for connecting to databases such as MySQL, Hive, and other data stores.\n\n**Question:** How does Spark read from a JDBC connection by default? \n**Answer:** With a serial read. With additional specifications, Spark conducts a faster, parallel read. Parallel reads take full advantage of Spark's distributed architecture.\n\n**Question:** What is the general design pattern for connecting to your data? \n**Answer:** The general design pattern is as follows:\n0. Define the connection point\n0. Define connection parameters such as access credentials\n0. Add necessary options such as for headers or parallelization", "_____no_output_____" ], [ "## ![Spark Logo Tiny](https://files.training.databricks.com/images/105/logo_spark_tiny.png) Classroom-Cleanup<br>\n\nRun the **`Classroom-Cleanup`** cell below to remove any artifacts created by this lesson.", "_____no_output_____" ] ], [ [ "%run \"./Includes/Classroom-Cleanup\"", "_____no_output_____" ] ], [ [ "## Next Steps\n\nStart the next lesson, [Applying Schemas to JSON Data]($./05-Applying-Schemas-to-JSON-Data ).", "_____no_output_____" ], [ "## Additional Topics & Resources\n\n**Q:** My tool can't connect via JDBC. Can I connect via <a href=\"https://en.wikipedia.org/wiki/Open_Database_Connectivity\" target=\"_blank\">ODBC instead</a>? \n**A:** Yes. The best practice is generally to use JDBC connections wherever possible since Spark runs on the JVM. In cases where JDBC is either not supported or is less performant, use the Simba ODBC driver instead. See <a href=\"https://docs.databricks.com/user-guide/clusters/jdbc-odbc.html\" target=\"_blank\">the Databricks documentation on connecting BI tools</a> for more details.\n\n**Q:** How can I connect my Spark cluster to Amazon's Redshift? \n**A:** <a href=\"https://github.com/databricks/spark-redshift\" target=\"_blank\">Databricks has a specific connector for Redshift</a> that provides many advantages over other options. See <a href=\"https://docs.databricks.com/_static/notebooks/redshift.html\" target=\"_blank\">this notebook for starter code.</a>", "_____no_output_____" ], [ "-sandbox\n&copy; 2020 Databricks, Inc. All rights reserved.<br/>\nApache, Apache Spark, Spark and the Spark logo are trademarks of the <a href=\"http://www.apache.org/\">Apache Software Foundation</a>.<br/>\n<br/>\n<a href=\"https://databricks.com/privacy-policy\">Privacy Policy</a> | <a href=\"https://databricks.com/terms-of-use\">Terms of Use</a> | <a href=\"http://help.databricks.com/\">Support</a>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e7dbb95cdcc271283914160948f96632de9d7ae8
42,960
ipynb
Jupyter Notebook
notebooks/holodec.ipynb
carlosenciso/ai4ess-hackathon-2020
0533e190aad1fffe939244e2e528ed143da8792e
[ "MIT" ]
1
2020-07-16T02:23:17.000Z
2020-07-16T02:23:17.000Z
notebooks/holodec.ipynb
carlosenciso/ai4ess-hackathon-2020
0533e190aad1fffe939244e2e528ed143da8792e
[ "MIT" ]
null
null
null
notebooks/holodec.ipynb
carlosenciso/ai4ess-hackathon-2020
0533e190aad1fffe939244e2e528ed143da8792e
[ "MIT" ]
null
null
null
41.507246
965
0.592109
[ [ [ "# AI for Earth System Science Hackathon 2020\n# HOLODEC Machine Learning Challenge Problem\nMatt Hayman, Aaron Bansemer, David John Gagne, Gabrielle Gantos, Gunther Wallach, Natasha Flyer\n\n## Introduction\n<center><img src='holodec_images/image2.png'><center>\n\nThe properties of the water and ice particles in clouds are critical to many aspects of weather and climate. The size, shape, and concentration of ice particles control the radiative properties of cirrus clouds. The spatial distribution of water droplets in warm clouds may influence the formation of drizzle and rain. The interactions among droplets, ice particles, and aerosols impact precipitation, lightning, atmospheric chemistry, and more. Measurements of natural cloud particles are often taken aboard research aircraft with instruments mounted on the wings. One of the newer technologies used for these instruments is inline holographic imaging, which has the important advantage of being able to instantaneously record all of the particles inside a small volume of air. Using this technology, the Holographic Detector for Clouds (HOLODEC) has been developed by the university community and NCAR to improve our cloud measurement capabilities.\n\nA hologram captures electro-magnatic field amplitude and phase (or wavefront) incident on a detector. In contrast, standard imaging captures only the amplitude of the electric field. Unlike a standard image, holograms can be computationally refocused on any object within the capture volume using standard wave propagation calculations. The figure below shows an example of an inline hologram (large image) with five out of focus particles. The five smaller images show the reconstruction from each particle by computationally propagating the electro-magnetic field back to the depth position of each particle. \n\n<center><img src='holodec_images/image5.png'><center>\n\nHOLODEC is an airborne holographic cloud imager capable of capturing particle size distributions in a single shot, so a measured particle size distribution is localized to a specific part of the cloud (not accumulated over a long path length). By capturing a hologram, each particle can be imaged irrespective of its location in the sample volume, and its size and position can be accurately captured.\n\nWhile holographic imaging provides unparalleled information about cloud particles, processing the raw holograms is also computationally expensive. Lacking prior knowledge of the particle position in depth, a typical HOLODEC hologram is reconstructed at 1000 planes (or depths) using standard diffraction calculations. At each plane, a particle’s image sharpness is evaluated and the particle size and position is determined only at a plane where it is in focus. In addition to the computational cost, the processing requires human intervention to recognize when a “particle” is really just artifacts of interfering scattered fields.\n\nThe objective of this project is to develop a machine learning solution to process HOLODEC data that is more computationally efficient than the first-principles based processor. \n\nAn important factor in processing hologram data is that the scattered field from a particle spreads out as it propagates. The image below shows the scattered field from a 50 µm particle at distances in increments of 0.1 mm from the particle (0 to 0.7 mm). 
As the scattered field expands, its power is also distributed over a larger area.\n\n![holodec 3d](holodec_images/image1.png)\n\nFor simplicity, this project deals with simulated holographic data where particle shapes are limited to spheres. Two datasets are provided. The first dataset contains only one particle per hologram. If you are successful in processing the first dataset, or you wish to immediately focus on a more challenging case, you can work on the second dataset that contains three particles per hologram.\n", "_____no_output_____" ], [ "## Software Requirements\nThis notebook requires Python >= 3.7. The following libraries are required:\n* numpy\n* scipy\n* matplotlib\n* xarray\n* pandas\n* scikit-learn\n* tensorflow >= 2.1\n* netcdf4\n* h5netcdf\n* tqdm\n* s3fs\n* zarr", "_____no_output_____" ] ], [ [ "!pip install numpy scipy matplotlib xarray pandas scikit-learn tensorflow netcdf4 h5netcdf tqdm s3fs zarr", "_____no_output_____" ], [ "# if working on google colab, uncomment and enable save to google drive\n# ! pip install -U -q PyDrive\n# from google.colab import drive\n# drive.mount('/content/gdrive')", "_____no_output_____" ] ], [ [ "## Data\nThe datasets consist of synthetically-generated holograms of cloud droplets. Each dataset is in zarr format, and contains a series of hologram images as well as the properties of each particle in the image. The zarr variable names and properties are as follows:\n\n| Variable Name | Description | Dimensions | Units/Range|\n| ------------- | :----:|:----------- |:------|\n| image | Stack of single-color images. Each image is 600x400 pixels, ranging from 0-255 in intensity. | nHolograms, 600, 400 | 0 to 255 (grayscale image) |\n| x | X-position of each particle in the dataset. The origin is at the center of the hologram image. | nParticles (can vary) | -888 to 888 micrometers |\n| y | Y-position of each particle in the dataset. The origin is at the center of the hologram image. | nParticles (can vary) | -592 to 592 micrometers |\n| z | Z-position of each particle in the dataset. The origin is at the focal plane of the instrument (all particles are unfocused). | nParticles (can vary) | 14000 to 158000 micrometers |\n| d | Diameter of each simulated droplet | nParticles (can vary) | 20 to 70 micrometers |\n| hid | Hologram ID specifies which hologram this particle is contained in. For example, if hid=1, the corresponding x, y, z, and d variables are found in the first hologram. | nParticles (can vary) | 1 to nHolograms |\n| Dx (global attribute) | Resolution of each pixel, == 2.96 micrometers. Use if you wish to convert x/y position to pixel number | | |\n\nThere are two datasets for this project, a single-particle dataset and a three-particle dataset. The single-particle dataset only contains one particle per hologram (nHolograms = nParticles). There are 50,000 holograms in the training dataset that correspond to 50,000 particles.\n\nThe three-particle dataset contains three particles per hologram. This dataset also contains 50,000 holograms but 150,000 particles. Be sure to use the hid variable to figure out which hologram a particle is contained in.\n\nThe ultimate goal of this project is to be able to find particles in the holograms and determine their x, y, z, and d values. This process is straightforward for finding a single particle, but finding multiple particles and their properties is much more challenging. 
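As a quick illustration (a sketch of ours, not from the original notebook) of how `hid` ties particles to holograms — assuming `ds` is an opened three-particle dataset, e.g. via the `open_zarr` helper defined below:\n\n```python\nimport pandas as pd  # ds: an opened xarray Dataset (see open_zarr below)\n\nparticles = ds[['x', 'y', 'z', 'd', 'hid']].to_dataframe()\nper_hologram = particles.groupby('hid')   # one group of particle rows per hologram\nprint(per_hologram.size().head())         # 3 rows per group for the three-particle set\nfirst_id = particles['hid'].iloc[0]\nprint(per_hologram.get_group(first_id))   # the particles belonging to one hologram\n```\n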
A simpler objective that could also assist in speeding up the HOLODEC processing is calculating the relative distribution of particle mass in the z-direction from the holograms, which is a combination of information from z and d. \n\n<center><img src='holodec_images/image4.png'></center>\n", "_____no_output_____" ], [ "### Potential Input Variables\n| Variable Name | Units | Description | Relevance |\n| ------------- | :----:|:----------- | :--------:|\n| hologram | arbitrary | 8 bit (0-255) amplitude captured by CCD | standard input data for processing |\n", "_____no_output_____" ], [ "### Output Variables\n| Variable Name | Units | Description |\n| ------------- | :----:|:----------- |\n| x | µm | particle horizontal position |\n| y | µm | particle vertical position |\n| z | µm | particle position in depth (along the direction of propagation) |\n| d | µm | particle diameter |\n| hid | arbitrary | hologram ID by particle |\n", "_____no_output_____" ], [ "### Training Set\n\nThe single-particle training dataset is in the zarr format described above, with 15,000 holograms and 15,000 corresponding particles.\n\nThe three-particle training dataset contains 15,000 holograms and 45,000 particles.\n", "_____no_output_____" ], [ "### Validation Set\nThe single-particle validation dataset is in the zarr format described above, with 5,000 holograms and 5,000 corresponding particles.\n\nThe three-particle validation dataset contains 5,000 holograms and 15,000 particles.\n", "_____no_output_____" ], [ "### Test Set\nThe single-particle test dataset is in the zarr format described above, with 5,000 holograms and 5,000 corresponding particles.\n\nThe three-particle test dataset contains 5,000 holograms and 15,000 particles.\n", "_____no_output_____" ], [ "### Data Transforms\n\nThe input images only need to be normalized between 0 and 1 by dividing by 255. 
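For example, a minimal sketch (using a random stand-in batch; the `scale_images` helper defined below achieves the same effect via recorded min/max values):\n\n```python\nimport numpy as np\n\nimages = np.random.randint(0, 256, size=(4, 600, 400), dtype=np.uint8)  # stand-in batch\nimages_scaled = images.astype(np.float32) / 255.0  # pixel values now in [0, 1]\nassert 0.0 <= images_scaled.min() and images_scaled.max() <= 1.0\n```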
\n", "_____no_output_____" ] ], [ [ "# Module imports \nimport argparse\nimport random\nimport os\nfrom os.path import join, exists\nimport sys\nimport s3fs\nimport yaml\nimport zarr\nimport xarray as xr\nimport numpy as np\nimport pandas as pd\nfrom datetime import datetime\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler, MaxAbsScaler, RobustScaler\nfrom sklearn.metrics import mean_absolute_error, max_error\nimport tensorflow as tf\nfrom tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, MaxPool2D\nfrom tensorflow.keras.models import Model, save_model\nfrom tensorflow.keras.optimizers import Adam, SGD\n\nseed = 328942\nnp.random.seed(seed)\nrandom.seed(seed)\ntf.random.set_seed(seed)", "_____no_output_____" ], [ "# Limit GPU memory usage\ngpus = tf.config.get_visible_devices(\"GPU\")\nfor device in gpus:\n print(device)\n tf.config.experimental.set_memory_growth(device, True)\n", "_____no_output_____" ], [ "# define some datset helper functions\n\nnum_particles_dict = {\n 1 : '1particle',\n 3 : '3particle',\n 'multi': 'multiparticle'}\n\nsplit_dict = {\n 'train' : 'training',\n 'test' : 'test',\n 'valid': 'validation'}\n\ndef dataset_name(num_particles, split, file_extension='zarr'):\n \"\"\"\n Return the dataset filename given user inputs\n \n Args: \n num_particles: (int or str) Number of particles per hologram (1, 3, or 'multi')\n split: (str) Dataset split of either 'train', 'valid', or 'test'\n file_extension: (str) Dataset file extension\n \n Returns:\n dataset: (str) Dataset name\n \"\"\"\n \n valid = [1,3,'multi']\n if num_particles not in valid:\n raise ValueError(\"results: num_particles must be one of %r.\" % valid)\n num_particles = num_particles_dict[num_particles]\n \n valid = ['train','test','valid']\n if split not in valid:\n raise ValueError(\"results: split must be one of %r.\" % valid)\n split = split_dict[split]\n \n return f'synthetic_holograms_{num_particles}_{split}_small.{file_extension}'\n\ndef open_zarr(path_data, num_particles, split):\n \"\"\"\n Open a HOLODEC Zarr file hosted on AWS\n \n Args: \n path_data: (str) Path to directory containing datset\n num_particles: (int or str) Number of particles per hologram (1, 3, or 'multi')\n split: (str) Dataset split of either 'train', 'valid', or 'test'\n \n Returns:\n dataset: (xarray Dataset) Opened dataset\n \"\"\"\n path_data = os.path.join(path_data, dataset_name(num_particles, split))\n fs = s3fs.S3FileSystem(anon=True, default_fill_cache=False)\n store = s3fs.S3Map(root=path_data, s3=fs, check=False)\n dataset = xr.open_zarr(store=store)\n return dataset\n\ndef scale_images(images, scaler_vals=None):\n \"\"\"\n Takes in array of images and scales pixel values between 0 and 1\n \n Args: \n images: (np array) Array of images \n scaler_vals: (dict) Image scaler 'max' and 'min' values\n \n Returns:\n images_scaled: (np array) Scaled array of images with pixel values between 0 and 1\n scaler_vals: (dict) Image scaler 'max' and 'min' values\n \"\"\"\n \n if scaler_vals is None:\n scaler_vals = {}\n scaler_vals[\"min\"] = images.min()\n scaler_vals[\"max\"] = images.max()\n images_scaled = (images.astype(np.float32) - scaler_vals[\"min\"]) / (scaler_vals[\"max\"] - scaler_vals[\"min\"])\n return images_scaled, scaler_vals\n\ndef load_scaled_datasets(path_data, num_particles, output_cols, slice_idx,\n split='train', scaler_vals=None):\n \"\"\"\n Given a path to training or validation datset, the number of particles per\n hologram, and 
output columns, returns scaled inputs and raw outputs.\n    \n    Args: \n        path_data: (str) Path to directory containing training and validation datasets\n        num_particles: (int or str) Number of particles per hologram (1, 3, or 'multi') \n        output_cols: (list of strings) List of feature columns to be used\n    \n    Returns:\n        inputs_scaled: (np array) Input data scaled between 0 and 1\n        outputs: (df) Output data specified by output_cols\n        scaler_vals: (dict) Image scaler 'max' and 'min' values\n    \"\"\"\n    \n    if split == 'valid':\n        slice_idx = int(slice_idx/3)\n    print(\"Slicing data into inputs/outputs\")\n    ds = open_zarr(path_data, num_particles, split)\n    inputs = ds[\"image\"].values[:slice_idx]\n    outputs = ds[output_cols].to_dataframe().loc[:slice_idx-1,:]\n    ds.close()\n    print(f\"\\t- outputs.shape: {outputs.shape}\")\n\n    print(\"Scaling input data\")\n    if split == 'train':\n        inputs_scaled, scaler_vals = scale_images(inputs)\n    else:\n        inputs_scaled, _ = scale_images(inputs, scaler_vals)\n    \n    inputs_scaled = np.expand_dims(inputs_scaled, -1)\n    print(f\"\\t- inputs_scaled.shape: {inputs_scaled.shape}\")\n\n    return inputs_scaled, outputs, scaler_vals\n", "_____no_output_____" ], [ "# data definitions\n\npath_data = \"ncar-aiml-data-commons/holodec/\"\nnum_particles = 3\noutput_cols = [\"hid\", \"x\", \"y\", \"z\", \"d\"]\nnum_z_bins = 20\nslice_idx = 15000\n", "_____no_output_____" ], [ "# load and normalize data (this takes approximately 2 minutes)\ntrain_inputs_scaled,\\\ntrain_outputs,\\\nscaler_vals = load_scaled_datasets(path_data,\n                                   num_particles,\n                                   output_cols,\n                                   slice_idx)\n\nvalid_inputs_scaled,\\\nvalid_outputs, _ = load_scaled_datasets(path_data,\n                                        num_particles,\n                                        output_cols,\n                                        slice_idx,\n                                        split='valid',\n                                        scaler_vals=scaler_vals)\n", "_____no_output_____" ], [ "# Plot a single hologram with the particles overlaid\ndef plot_hologram(h, outputs):\n    \"\"\"\n    Given a hologram number, plot the hologram with its particles overlaid\n    \n    Args: \n        h: (int) hologram number\n    \n    Returns:\n        displays a pseudocolor plot of the hologram and its particles\n    \"\"\" \n    x_vals = np.linspace(-888, 888, train_inputs_scaled[h, :, :, 0].shape[0])\n    y_vals = np.linspace(-592, 592, train_inputs_scaled[h, :, :, 0].shape[1])\n\n    plt.figure(figsize=(12, 8))\n    plt.pcolormesh(x_vals, y_vals, train_inputs_scaled[h, :, :, 0].T, cmap=\"RdBu_r\")\n    h_particles = np.where(outputs[\"hid\"] == h + 1)[0]\n    for h_particle in h_particles:\n        plt.scatter(outputs.loc[h_particle, \"x\"],\n                    outputs.loc[h_particle, \"y\"],\n                    outputs.loc[h_particle, \"d\"] ** 2,\n                    outputs.loc[h_particle, \"z\"],\n                    vmin=outputs[\"z\"].min(),\n                    vmax=outputs[\"z\"].max(),\n                    cmap=\"cool\")\n        plt.annotate(f\"d: {outputs.loc[h_particle,'d']:.1f} µm\",\n                     (outputs.loc[h_particle, \"x\"], outputs.loc[h_particle, \"y\"]))\n    plt.xlabel(\"horizontal particle position (µm)\", fontsize=16)\n    plt.ylabel(\"vertical particle position (µm)\", fontsize=16)\n    plt.title(\"Hologram and particle positions plotted in four dimensions\", fontsize=20, pad=20)\n    plt.colorbar().set_label(label=\"z-axis particle position (µm)\", size=16)\n", "_____no_output_____" ], [ "h = 300\nplot_hologram(h, train_outputs)", "_____no_output_____" ] ], [ [ "## Baseline Machine Learning Model\nA baseline model for solving this problem uses a ConvNET architecture implemented in Keras. The first three convolution layers consist of 5 x 5 pixel kernels with rectified linear unit (relu) activation followed by a 4 x 4 pixel max pool layer. 
The first convolution layer has 8 channels, the second contains 16 channels, and the third contains 32 channels. The output of the third convolution layer is flattened and fed into a dense layer with 64 neurons and relu activation. Finally, the output layer consists of the relative mass in 20 bins. The models are trained with a mean absolute error loss (single-particle regression) or a categorical cross-entropy loss (relative mass distribution).\n\nTraining time: 20 epochs in ~2.5 minutes\n", "_____no_output_____" ], [ "\n", "_____no_output_____" ] ], [ [ "class Conv2DNeuralNetwork(object):\n    \"\"\"\n    A Conv2D Neural Network Model that can support arbitrary numbers of layers.\n\n    Attributes:\n        filters: List of number of filters in each Conv2D layer\n        kernel_sizes: List of kernel sizes in each Conv2D layer\n        conv2d_activation: Type of activation function for conv2d layers\n        pool_sizes: List of Max Pool sizes\n        dense_sizes: Sizes of dense layers\n        dense_activation: Type of activation function for dense layers\n        output_activation: Type of activation function for output layer\n        lr: Optimizer learning rate\n        optimizer: Name of optimizer or optimizer object.\n        adam_beta_1: Exponential decay rate for the first moment estimates\n        adam_beta_2: Exponential decay rate for the second moment estimates\n        sgd_momentum: Stochastic Gradient Descent momentum\n        decay: Optimizer decay\n        loss: Name of loss function or loss object\n        batch_size: Number of examples per batch\n        epochs: Number of epochs to train\n        verbose: Level of detail to provide during training\n        model: Keras Model object\n    \"\"\"\n    def __init__(self, filters=(8,), kernel_sizes=(5,), conv2d_activation=\"relu\",\n                 pool_sizes=(4,), dense_sizes=(64,), dense_activation=\"relu\", output_activation=\"softmax\",\n                 lr=0.001, optimizer=\"adam\", adam_beta_1=0.9, adam_beta_2=0.999,\n                 sgd_momentum=0.9, decay=0, loss=\"mse\", batch_size=32, epochs=2, verbose=0):\n        self.filters = filters\n        self.kernel_sizes = [tuple((v,v)) for v in kernel_sizes]\n        self.conv2d_activation = conv2d_activation\n        self.pool_sizes = [tuple((v,v)) for v in pool_sizes]\n        self.dense_sizes = dense_sizes\n        self.dense_activation = dense_activation\n        self.output_activation = output_activation\n        self.lr = lr\n        self.optimizer = optimizer\n        self.optimizer_obj = None\n        self.adam_beta_1 = adam_beta_1\n        self.adam_beta_2 = adam_beta_2\n        self.sgd_momentum = sgd_momentum\n        self.decay = decay\n        self.loss = loss\n        self.batch_size = batch_size\n        self.epochs = epochs\n        self.verbose = verbose\n        self.model = None\n\n    def build_neural_network(self, input_shape, output_shape):\n        \"\"\"Create Keras neural network model and compile it.\"\"\"\n        conv_input = Input(shape=(input_shape), name=\"input\")\n        nn_model = conv_input\n        for h in range(len(self.filters)):\n            nn_model = Conv2D(self.filters[h], self.kernel_sizes[h], padding=\"same\",\n                              activation=self.conv2d_activation, name=f\"conv2D_{h:02d}\")(nn_model)\n            nn_model = MaxPool2D(self.pool_sizes[h], name=f\"maxpool2D_{h:02d}\")(nn_model)\n        nn_model = Flatten()(nn_model)\n        for h in range(len(self.dense_sizes)):\n            nn_model = Dense(self.dense_sizes[h], activation=self.dense_activation, name=f\"dense_{h:02d}\")(nn_model)\n        nn_model = Dense(output_shape, activation=self.output_activation, name=f\"dense_output\")(nn_model)\n        self.model = Model(conv_input, nn_model)\n        if self.optimizer == \"adam\":\n            self.optimizer_obj = Adam(lr=self.lr, beta_1=self.adam_beta_1, beta_2=self.adam_beta_2, decay=self.decay)\n        elif self.optimizer == \"sgd\":\n            self.optimizer_obj = SGD(lr=self.lr, momentum=self.sgd_momentum, 
decay=self.decay)\n        self.model.compile(optimizer=self.optimizer_obj, loss=self.loss)\n        self.model.summary()\n\n    def fit(self, x, y, xv, yv):\n        if len(y.shape) == 1:\n            output_shape = 1\n        else:\n            output_shape = y.shape[1]\n        input_shape = x.shape[1:]\n        self.build_neural_network(input_shape, output_shape)\n        self.model.fit(x, y, batch_size=self.batch_size, epochs=self.epochs,\n                       verbose=self.verbose, validation_data=(xv, yv))\n        return self.model.history.history\n\n    def predict(self, x):\n        y_out = self.model.predict(x, batch_size=self.batch_size)\n        return y_out\n\n    def predict_proba(self, x):\n        y_prob = self.model.predict(x, batch_size=self.batch_size)\n        return y_prob\n", "_____no_output_____" ] ], [ [ "### Z Relative Particle Mass Model\nThis neural network is tasked with predicting the distribution of particle mass along the z-axis of the instrument. The relative mass is calculated by computing the volume of each sphere from its diameter (so $m_i \\propto d_i^3$) and dividing by the total mass of all particles in the hologram. The advantage of this target is that it behaves like a probability density function and sums to 1, and it is agnostic to the number of particles in the image.\n", "_____no_output_____" ] ], [ [ "def calc_z_relative_mass(outputs, holograms, num_z_bins=20, z_bins=None):\n    \"\"\"\n    Calculate z-relative mass from particle data.\n    \n    Args: \n        outputs: (df) Output data previously specified by output_cols \n        holograms: (int) Number of holograms\n        num_z_bins: (int) Number of bins for z_bins linspace\n        z_bins: (np array) Bin linspace along the z-axis\n    \n    Returns:\n        z_mass: (np array) Particle mass distribution by hologram\n        z_bins: (np array) Bin linspace along the z-axis\n    \"\"\"\n    \n    if z_bins is None:\n        z_bins = np.linspace(outputs[\"z\"].min() - 100, outputs[\"z\"].max() + 100, num_z_bins)\n        print(z_bins)\n    else:\n        num_z_bins = z_bins.size\n    z_mass = np.zeros((holograms, num_z_bins), dtype=np.float32)\n    for i in range(outputs.shape[0]):\n        z_pos = np.searchsorted(z_bins, outputs.loc[i, \"z\"], side=\"right\") - 1\n        mass = 4 / 3 * np.pi * (outputs.loc[i, \"d\"])**3\n        z_mass[int(outputs.loc[i, \"hid\"]) - 1, z_pos] += mass\n    z_mass /= np.expand_dims(z_mass.sum(axis=1), -1)\n    print(f\"z_mass.shape: {z_mass.shape}\\nz_bins.shape: {z_bins.shape}\")\n    return z_mass, z_bins\n", "_____no_output_____" ], [ "z_bins = np.linspace(np.minimum(train_outputs[\"z\"].min(), valid_outputs[\"z\"].min()),\n                     np.maximum(train_outputs[\"z\"].max(), valid_outputs[\"z\"].max()),\n                     num_z_bins)\n\ntrain_z_mass, _ = calc_z_relative_mass(train_outputs, len(train_outputs[\"hid\"].unique()), z_bins=z_bins)\nvalid_z_mass, _ = calc_z_relative_mass(valid_outputs, len(valid_outputs[\"hid\"].unique()), z_bins=z_bins)\ntrain_inputs_scaled = train_inputs_scaled[0::3]\nvalid_inputs_scaled = valid_inputs_scaled[0::3]\n", "_____no_output_____" ] ], [ [ "### Three-particle z-mass model definition", "_____no_output_____" ] ], [ [ "# conv2d_network definitions for 3 particle z mass solution\n\nIN_COLAB = 'google.colab' in sys.modules\nif IN_COLAB:\n    path_out = \"/content/gdrive/My Drive/micro_models/3particle_base\"\nelse:\n    path_out = \"./holodec_models/3particle_base/\"\nif not exists(path_out):\n    os.makedirs(path_out)\nmodel_name = \"cnn\"\nfilters = [16, 24, 32]\nkernel_sizes = [5, 5, 5]\nconv2d_activation = \"relu\"\npool_sizes = [4, 4, 4]\ndense_sizes = [64, 32]\ndense_activation = \"elu\"\nlr = 0.0003\ndecay = 0.1\noptimizer = \"adam\"\nloss = \"categorical_crossentropy\"\nbatch_size = 128\nepochs = 40\nverbose = 1\n\nseed = 
328942\nnp.random.seed(seed)\nrandom.seed(seed)\ntf.random.set_seed(seed)\n", "_____no_output_____" ], [ "# 3 particle z mass model build, compile, fit, and predict\n\nthree_start = datetime.now()\nwith tf.device('/device:GPU:0'):\n mod = Conv2DNeuralNetwork(filters=filters, kernel_sizes=kernel_sizes,\n conv2d_activation=conv2d_activation,\n pool_sizes=pool_sizes, dense_sizes=dense_sizes,\n dense_activation=dense_activation, lr=lr,\n optimizer=optimizer, decay=decay, loss=loss,\n batch_size=batch_size, epochs=epochs, verbose=verbose)\n hist = mod.fit(train_inputs_scaled, train_z_mass, valid_inputs_scaled, valid_z_mass)\n \n train_z_mass_pred = mod.predict(train_inputs_scaled)\n valid_z_mass_pred = mod.predict(valid_inputs_scaled)\nprint(f\"Running model took {datetime.now() - three_start} time\")\n", "_____no_output_____" ], [ "# visualize loss history\n\nplt.plot(hist['loss'])\nplt.plot(hist['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['training', 'validation'], loc='upper left')\nplt.show()", "_____no_output_____" ], [ "# save the model\nprint(\"Saving the model\")\nmod.model.save(join(path_out, model_name +\".h5\"))", "_____no_output_____" ], [ "# clear your tf session without needing to re-load and re-scale data\n\ndel mod\ntf.keras.backend.clear_session()\n", "_____no_output_____" ] ], [ [ "### Three Particle Metrics\n\nHow well do individual predictions (red) match with the actual particle locations (blue)?\n", "_____no_output_____" ] ], [ [ "valid_index = 11\nbin_size = z_bins[1] - z_bins[0]\nplt.figure(figsize=(10, 6))\nplt.bar(z_bins / 1000, valid_z_mass_pred[valid_index], bin_size / 1000, color='red', label=\"Predicted\")\nplt.bar(z_bins / 1000, valid_z_mass[valid_index], bin_size / 1000, edgecolor='blue', facecolor=\"none\", lw=3, label=\"True\")\nplt.ylim(0, 1)\nplt.xlabel(\"z-axis particle position (mm)\", fontsize=16)\nplt.ylabel(\"relative mass\", fontsize=16)\nplt.legend(loc=\"best\")\n", "_____no_output_____" ] ], [ [ "If the model was completely unbiased, then mean relative mass in each bin should be nearly the same across all validation examples. In this case we see that the CNN preferentially predicts that the mass is closer to the camera, likely due to a combination of particles closer to the camera blocking those farther away along with more distant particles influencing the entire image. 
Since the CNN assumes image properties are more localized, it will struggle to detect the particles that are farther away.", "_____no_output_____" ] ], [ [ "plt.bar(z_bins / 1000, valid_z_mass_pred.mean(axis=0), (z_bins[1] - z_bins[0]) / 1000, color='red')\nplt.bar(z_bins / 1000, valid_z_mass.mean(axis=0), (z_bins[1]-z_bins[0]) / 1000, edgecolor='blue', facecolor=\"none\", lw=3)\nplt.xlabel(\"z location (mm)\", fontsize=16)\nplt.ylabel(\"Mean Relative Mass\", fontsize=16)", "_____no_output_____" ], [ "def ranked_probability_score(y_true, y_pred):\n return np.mean((np.cumsum(y_true, axis=1) - np.cumsum(y_pred, axis=1)) ** 2) / (y_true.shape[1] -1)", "_____no_output_____" ], [ "rps_nn = ranked_probability_score(valid_z_mass, valid_z_mass_pred)\nrps_climo = ranked_probability_score(valid_z_mass, np.ones(valid_z_mass_pred.shape) / valid_z_mass_pred.shape[1])\nprint(rps_nn, rps_climo)\nrpss = 1 - rps_nn / rps_climo\nprint(f\"RPSS: {rpss:0.3f}\")", "_____no_output_____" ] ], [ [ "### One Particle Model\nAn easier problem is predicting the location and properties of synthetic single particles.\n", "_____no_output_____" ] ], [ [ "# data definitions\n\npath_data = \"ncar-aiml-data-commons/holodec/\"\nnum_particles = 1\noutput_cols_one = [\"x\", \"y\", \"z\", \"d\"]\nscaler_one = MinMaxScaler()\nslice_idx = 15000\n", "_____no_output_____" ], [ "# load and normalize data (this takes approximately 2 minutes)\ntrain_inputs_scaled_one,\\\ntrain_outputs_one,\\\nscaler_vals_one = load_scaled_datasets(path_data,\n num_particles,\n output_cols_one,\n slice_idx)\n\nvalid_inputs_scaled_one,\\\nvalid_outputs_one, _ = load_scaled_datasets(path_data,\n num_particles,\n output_cols_one,\n slice_idx,\n split='valid',\n scaler_vals=scaler_vals_one)\n\n# extra transform step for output_cols_one in lieu of z mass\ntrain_outputs_scaled_one = scaler_one.fit_transform(train_outputs_one[output_cols_one])\nvalid_outputs_scaled_one = scaler_one.transform(valid_outputs_one[output_cols_one])\n", "_____no_output_____" ], [ "# conv2d_network definitions for 1 particle 4D solution\n\nIN_COLAB = 'google.colab' in sys.modules\nif IN_COLAB:\n path_out = \"/content/gdrive/My Drive/micro_models/1particle_base\"\nelse:\n path_out = \"./holodec_models/1particle_base/\"\nif not exists(path_out):\n os.makedirs(path_out)\nmodel_name = \"cnn\"\nfilters = [16, 24, 32]\nkernel_sizes = [5, 5, 5]\nconv2d_activation = \"relu\"\npool_sizes = [4, 4, 4]\ndense_sizes = [64, 32]\ndense_activation = \"relu\"\nlr = 0.0001\noptimizer = \"adam\"\nloss = \"mae\"\nbatch_size = 128\nepochs = 20\nverbose = 1\n\nif not exists(path_out):\n os.makedirs(path_out)\n", "_____no_output_____" ], [ "# 1 particle 4D model build, compile, fit, and predict\n\none_start = datetime.now()\nwith tf.device('/device:GPU:0'):\n mod = Conv2DNeuralNetwork(filters=filters, kernel_sizes=kernel_sizes, conv2d_activation=conv2d_activation,\n pool_sizes=pool_sizes, dense_sizes=dense_sizes, dense_activation=dense_activation,\n lr=lr, optimizer=optimizer, loss=loss, batch_size=batch_size, epochs=epochs, verbose=verbose)\n mod.fit(train_inputs_scaled_one, train_outputs_scaled_one, valid_inputs_scaled_one, valid_outputs_scaled_one)\n \n train_preds_scaled_one = pd.DataFrame(mod.predict(train_inputs_scaled_one), columns=output_cols_one)\n valid_preds_scaled_one = pd.DataFrame(mod.predict(valid_inputs_scaled_one), columns=output_cols_one)\nprint(f\"Running model took {datetime.now() - one_start} time\")\n", "_____no_output_____" ], [ "# inverse transform of scaled 
predictions\n\ntrain_preds_one = pd.DataFrame(scaler_one.inverse_transform(train_preds_scaled_one.values), columns=output_cols_one)\nvalid_preds_one = pd.DataFrame(scaler_one.inverse_transform(valid_preds_scaled_one.values), columns=output_cols_one)\n", "_____no_output_____" ] ], [ [ "### One Particle Metrics\nAn ideal solution to HOLODEC processing would leverage all the advantages of the instrument (unparalleled particle position and size accuracy) but reduce the drawbacks (processing time). For this reason, the major components of the model assessment should include:\n\nMean absolute error in predictions for single-particle dataset:\n\n| Variable Name | Error |\n| ------------- |:----------- |\n| x | 290 µm |\n| y | 170 µm |\n| z | 53,271 µm |\n| d | 16 µm |\n\n", "_____no_output_____" ] ], [ [ "# calculate error by output_cols_one\n\nvalid_maes_one = np.zeros(len(output_cols_one))\nmax_errors_one = np.zeros(len(output_cols_one))\nfor o, output_col in enumerate(output_cols_one):\n valid_maes_one[o] = mean_absolute_error(valid_outputs_one[output_col], valid_preds_one[output_col])\n max_errors_one[o] = max_error(valid_outputs_one[output_col], valid_preds_one[output_col])\n\n print(f\"{output_col} MAE: {valid_maes_one[o]:,.0f} µm \\t\\t Max Error: {max_errors_one[o]:,.0f} µm\")\n", "_____no_output_____" ] ], [ [ "## Hackathon Challenges\n\n### Monday\n* Load the data\n* Create an exploratory visualization of the data\n* Test two different transformation and scaling methods\n* Test one dimensionality reduction method\n* Train a linear model\n* Train a decision tree ensemble method of your choice", "_____no_output_____" ] ], [ [ "# Monday's code goes here\n", "_____no_output_____" ] ], [ [ "### Tuesday\n* Train a densely connected neural network\n* Train a convolutional or recurrent neural network (depends on problem)\n* Experiment with different architectures", "_____no_output_____" ] ], [ [ "# Tuesday's code goes here\n", "_____no_output_____" ] ], [ [ "### Wednesday\n* Calculate three relevant evaluation metrics for each ML solution and baseline\n* Refine machine learning approaches and test additional hyperparameter settings", "_____no_output_____" ] ], [ [ "# Wednesday's code goes here\n\n\n", "_____no_output_____" ] ], [ [ "### Thursday \n* Evaluate two interpretation methods for your machine learning solution\n* Compare interpretation of baseline with your approach\n* Submit best results on project to leaderboard\n* Prepare 2 Google Slides on team's approach and submit them", "_____no_output_____" ] ], [ [ "# Thursday's code goes here\n", "_____no_output_____" ] ], [ [ "## Ultimate Submission Code\nPlease insert your full data processing and machine learning pipeline code in the cell below.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7dbba79a36348afb5db17e2e779de74cdb6bdba
7,622
ipynb
Jupyter Notebook
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
899558bcc2165bb2155f5ab69ac922c6458e1799
[ "BSD-3-Clause" ]
null
null
null
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
899558bcc2165bb2155f5ab69ac922c6458e1799
[ "BSD-3-Clause" ]
null
null
null
Hubspot/Hubspot_update_followers_from_linkedin.ipynb
vivard/awesome-notebooks
899558bcc2165bb2155f5ab69ac922c6458e1799
[ "BSD-3-Clause" ]
null
null
null
21.410112
300
0.517843
[ [ [ "<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>", "_____no_output_____" ], [ "# Hubspot - Update followers from linkedin\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Hubspot/Hubspot_update_followers_from_linkedin.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>", "_____no_output_____" ], [ "**Tags:** #hubspot #crm #sales #contact #naas_drivers #linkedin #network #scheduler #naas", "_____no_output_____" ], [ "## Input", "_____no_output_____" ], [ "### Import library", "_____no_output_____" ] ], [ [ "from naas_drivers import hubspot, linkedin\nimport naas\nimport pandas as pd", "_____no_output_____" ] ], [ [ "### Enter Hubspot api key", "_____no_output_____" ] ], [ [ "auth_token = \"YOUR_HUBSPOT_API_KEY\"", "_____no_output_____" ] ], [ [ "### Get your cookies\n<a href='https://www.notion.so/LinkedIn-driver-Get-your-cookies-d20a8e7e508e42af8a5b52e33f3dba75'>How to get your cookies ?</a>", "_____no_output_____" ] ], [ [ "LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2\nJSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585", "_____no_output_____" ] ], [ [ "### Connect to Hubspot", "_____no_output_____" ] ], [ [ "hs = hubspot.connect(auth_token)", "_____no_output_____" ] ], [ [ "### Schedule your notebook everyday", "_____no_output_____" ] ], [ [ "naas.scheduler.add(cron=\"15 6 * * *\")", "_____no_output_____" ] ], [ [ "### Get all contacts in Hubspot", "_____no_output_____" ] ], [ [ "properties_list = [\n \"hs_object_id\",\n \"firstname\",\n \"lastname\",\n \"linkedinbio\",\n \"linkedinconnections\",\n]\nhubspot_contacts = hs.contacts.get_all(properties_list).fillna(\"Not Defined\")\nhubspot_contacts", "_____no_output_____" ] ], [ [ "# Model", "_____no_output_____" ], [ "### Filter to get linkedinconnections = \"Not Defined\" and \"linkedinbio\" = defined", "_____no_output_____" ] ], [ [ "df_to_update = hubspot_contacts.copy()\n\n# Filter on \"Not defined\"\ndf_to_update = df_to_update[(df_to_update.linkedinbio != \"Not Defined\") &\n (df_to_update.linkedinconnections == \"Not Defined\")]\n\n# Limit to last 50 contacts\ndf_to_update = df_to_update.sort_values(by=\"createdate\", ascending=False)[:50].reset_index(drop=True)\n\ndf_to_update", "_____no_output_____" ] ], [ [ "### Get followers from Linkedin", "_____no_output_____" ] ], [ [ "for _, row in df_to_update.iterrows():\n linkedinbio = row.linkedinbio\n \n # Get followers\n df = linkedin.connect(LI_AT, JSESSIONID).profile.get_network(linkedinbio)\n linkedinconnections = df.loc[0, \"FOLLOWERS_COUNT\"]\n \n # Get linkedinbio\n df_to_update.loc[_, \"linkedinconnections\"] = linkedinconnections\n \ndf_to_update", "_____no_output_____" ] ], [ [ "# Output", "_____no_output_____" ], [ "### Update followers in Hubspot", "_____no_output_____" ] ], [ [ "for _, row in df_to_update.iterrows():\n # Init data\n data = {}\n \n # Get data\n hs_object_id = row.hs_object_id\n linkedinconnections = row.linkedinconnections\n\n # Update LK Bio\n if linkedinconnections != None:\n data = {\"properties\": {\"linkedinconnections\": linkedinconnections}}\n hs.contacts.patch(hs_object_id, data)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e7dbbf6780d4dd5ae8bf1ca0e53e548d7dc8f972
35,119
ipynb
Jupyter Notebook
.ipynb_checkpoints/README-checkpoint.ipynb
GopiKishan14/Reproducibility_Challenge_NeurIPS_2019
fccee3f5ac8894580a88fc178571107024dd1cfa
[ "MIT" ]
6
2020-01-30T03:36:22.000Z
2020-08-25T19:43:07.000Z
README.ipynb
GopiKishan14/Reproducibility_Challenge_NeurIPS_2019
fccee3f5ac8894580a88fc178571107024dd1cfa
[ "MIT" ]
null
null
null
README.ipynb
GopiKishan14/Reproducibility_Challenge_NeurIPS_2019
fccee3f5ac8894580a88fc178571107024dd1cfa
[ "MIT" ]
null
null
null
95.953552
17,128
0.775876
[ [ [ "# Reproducibility_Challenge_NeurIPS_2019\n\nThis is a blog explains method proposed in the paper Competitive gradient descent [(Schäfer et al., 2019)](https://arxiv.org/abs/1905.12103). This has been written as a supplimentary to the reproducibility report for reproducibility challenge of NeurlIPS’19. The pdf format of the report is present [here](https://gopikishan14.github.io/Reproducibility_Challenge_NeurIPS_2019/) with this github [repository](https://github.com/GopiKishan14/Reproducibility_Challenge_NeurIPS_2019) as its source.", "_____no_output_____" ], [ "# Paper Overview\nThe paper introduces a new algorithm for the numerical computation of Nash equilibria of competitive two-player games. The method is a natural generalization of gradient descent to the two-player setting where the update is given by the Nash equilibrium of a regularized bilinear local approximation of the underlying game. It avoids oscillatory and divergent behaviors seen in alternating gradient descent. Convergence and stability properties of the method are robust to strong interactions between the players, without adapting the stepsize, which is not the case with previous methods. The ability to choose larger stepsizes furthermore allows the algorithm to achieve faster convergence, as measured by the number of model evaluations (See the [report](https://gopikishan14.github.io/Reproducibility_Challenge_NeurIPS_2019/) experiments section).\n", "_____no_output_____" ], [ "## Background\nThe traditional optimization is concerned with a single agent trying to optimize a cost function. It\ncan be seen as $\\min_{x \\in R^m} f(x)$ . The agent has a clear objective to find (“Good local”) minimum of\nf. Gradeint Descent (and its varients) are reliable Algorithmic Baseline for this purpose.\n\nThe paper talks about Competitive optimization. Competitive optimization extends this problem\nto the setting of multiple agents each trying to minimize their own cost function, which in general\ndepends on the actions of all agents.\n The paper deals with the case of two such agents:\n \\begin{align}\n &\\min_{x \\in R^m} f(x,y),\\ \\ \\ \\min_{y \\in R^n} g(x,y)\n \\end{align}\n for two functions $f,g: R^m \\times R^n \\longrightarrow R$.\n\nIn single agent optimization, the solution of the problem consists of the minimizer of the cost function.\nIn competitive optimization, the right definition of solution is less obvious, but often one is\ninterested in computing Nash– or strategic equilibria: Pairs of strategies, such that no player can\ndecrease their costs by unilaterally changing their strategies. 
If $f$ and $g$ are not convex, finding a\nglobal Nash equilibrium is typically impossible, and instead we hope to find a \"good\" local Nash\nequilibrium.", "_____no_output_____" ], [ "## About the problem\n##### Gradient descent/ascent and the cycling problem:\n\nFor differentiable objective functions, the most naive approach to solving\n\\begin{align}\n  \\label{eqn:game}\n  &\\min_{x \\in R^m} f(x,y),\\ \\ \\ \\min_{y \\in R^n} g(x,y)\n  \\end{align}\nis gradient descent ascent (GDA), whereby both players independently change their strategy in the direction of steepest descent of their cost function.\nUnfortunately, this procedure features oscillatory or divergent behavior even in the simple case of a bilinear game ($f(x,y) = x^{\\top} y = -g(x,y)$).\n", "_____no_output_____" ], [ "\n## Solution approach\n\nTo motivate this algorithm, the authors remind us that gradient descent with stepsize $\\eta$ applied to the function $f:R^m \\longrightarrow R$ can be written as\n\n\\begin{equation}\n   x_{k+1} = argmin_{x \\in R^m} (x^T - x_{k}^T) \\nabla_x f(x_k) + \\frac{1}{2\\eta} \\|x - x_{k}\\|^2.\n   \\end{equation}\n\nThis models a (single) player solving a local linear approximation of the (minimization) game, subject to a quadratic penalty that expresses her limited confidence in the global accuracy of the model. \n\n```The natural generalization of this idea to the competitive case should then be given by the two players solving a local approximation of the true game, both subject to a quadratic penalty that expresses their limited confidence in the accuracy of the local approximation.```\n\nIn order to implement this idea, we need to find the appropriate way to generalize the linear approximation in the single-agent setting to the competitive setting. \n\nThe authors suggest using a **bilinear** approximation in the two-player setting.\nSince the bilinear approximation is the lowest-order approximation that can capture some interaction between the two players, they argue that the natural generalization of gradient descent to competitive optimization is not GDA, but rather the update rule $(x_{k+1},y_{k+1}) = (x_k,y_k) + (x,y)$, where $(x,y)$ is a Nash equilibrium of **the game**.\n\n\\begin{align}\n  \\begin{split}\n    \\label{eqn:localgame}\n    \\min_{x \\in R^m} x^{\\top} \\nabla_x f &+ x^{\\top} D_{xy}^2 f y + y^{\\top} \\nabla_y f + \\frac{1}{2\\eta} x^{\\top} x  \\\\\n    \\min_{y \\in R^n} y^{\\top} \\nabla_y g &+ y^{\\top} D_{yx}^2 g x + x^{\\top} \\nabla_x g + \\frac{1}{2\\eta} y^{\\top} y.\n  \\end{split}\n\\end{align}\n\nIndeed, the (unique) Nash equilibrium of the above game can be computed in closed form.", "_____no_output_____" ], [ "\n## Proposed method\n**Among all (possibly randomized) strategies with finite first moment, the only Nash equilibrium of `the Game` is given by\n\\begin{align}\n\\label{eqn:nash}\n&x = -\\eta \\left( Id - \\eta^2 D_{xy}^2f D_{yx}^2 g \\right)^{-1} \n     \\left( \\nabla_{x} f - \\eta D_{xy}^2f \\nabla_{y} g \\right) \\\\\n&y = -\\eta \\left( Id - \\eta^2 D_{yx}^2g D_{xy}^2 f \\right)^{-1} \n     \\left( \\nabla_{y} g - \\eta D_{yx}^2g \\nabla_{x} f \\right),\n\\end{align}\ngiven that the matrix inverses in the above expression exist.** \n\nNote that the matrix inverses exist for all but one value of $\\eta$, and for all $\\eta$ in the case of a zero-sum game.\n\nAccording to the above theorem, the game has exactly one optimal pair of strategies, which is deterministic. 
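As a concrete sanity check (our own toy example with made-up numbers, not from the report), the closed-form updates can be evaluated directly for the one-dimensional bilinear game $f(x,y) = xy = -g(x,y)$, where $\\nabla_x f = y$, $\\nabla_y g = -x$, and the mixed derivatives are $\\pm 1$:\n\n```python\nimport numpy as np\n\neta, x, y = 0.2, 1.0, 1.0\nfor _ in range(100):\n    dx = -eta / (1 + eta ** 2) * (y + eta * x)   # closed-form Nash of the local game\n    dy = -eta / (1 + eta ** 2) * (-x + eta * y)\n    x, y = x + dx, y + dy\nprint(x, y)  # both coordinates shrink toward the equilibrium (0, 0)\n```\n\nPlain GDA on the same game ($x \\leftarrow x - \\eta y$, $y \\leftarrow y + \\eta x$) spirals outward instead. 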
Thus, we can use these strategies as an update rule, generalizing the idea of local optimality from the single- to the multi-agent setting and obtaining the following algorithm.\n\n`Competitive Gradient Descent (CGD)`\n\\begin{align}\nfor\\ (0 \\leq k \\leq N-1)\\\\\n&x_{k+1} = x_{k} - \\eta \\left( Id - \\eta^2 D_{xy}^2f D_{yx}^2 g \\right)^{-1}\\left( \\nabla_{x} f - \\eta  D_{xy}^2f \\nabla_{y} g \\right)\\\\\n&y_{k+1} = y_{k} - \\eta \\left( Id - \\eta^2 D_{yx}^2g D_{xy}^2 f \\right)^{-1} \n     \\left( \\nabla_{y} g - \\eta D_{yx}^2g \\nabla_{x} f \\right)\\\\\n return\\ (x_{N},y_{N})\\;\n\\end{align}\n\n\n\n\n**What I think that they think that I think ... that they do**: Another game-theoretic interpretation of CGD follows from the observation that its update rule can be written as \n\n\\begin{equation}\n\\begin{pmatrix}\n \\Delta x\\\\\n \\Delta y\n\\end{pmatrix} = -\n\\begin{pmatrix}\n Id & \\eta D_{xy}^2 f \\\\\n \\eta D_{yx}^2 g & Id \n\\end{pmatrix}^{-1}\n\\begin{pmatrix}\n \\nabla_{x} f\\\\\n \\nabla_{y} g\n\\end{pmatrix}.\n\\end{equation}\n\nApplying the expansion $ \\lambda_{\\max} (A) < 1 \\Rightarrow \\left( Id - A \\right)^{-1} = \\lim_{N \\rightarrow \\infty} \\sum_{k=0}^{N} A^k$ to the above equation, we observe that:\n\n1. The first partial sum ($N = 0$) corresponds to the optimal strategy if the other player's strategy stays constant (GDA).\n2. The second partial sum ($N = 1$) corresponds to the optimal strategy if the other player thinks that the other player's strategy stays constant (LCGD).\n3. The third partial sum ($N = 2$) corresponds to the optimal strategy if the other player thinks that the other player thinks that the other player's strategy stays constant, and so forth, until the Nash equilibrium is recovered in the limit.\n\n", "_____no_output_____" ], [ "\n## Comparison\nThese six algorithms amount to different subsets of the following four terms.\n\n\\begin{align*}\n  & \\text{GDA: } &\\Delta x = &&&- \\nabla_x f&\\\\\n  & \\text{LCGD: } &\\Delta x = &&&- \\nabla_x f& &-\\eta D_{xy}^2 f \\nabla_y f&\\\\\n  & \\text{SGA: } &\\Delta x = &&&- \\nabla_x f& &- \\gamma D_{xy}^2 f \\nabla_y f& & &  \\\\\n  & \\text{ConOpt: } &\\Delta x = &&&- \\nabla_x f& &- \\gamma D_{xy}^2 f \\nabla_y f& &- \\gamma D_{xx}^2 f \\nabla_x f& \\\\\n  & \\text{OGDA: } &\\Delta x \\approx &&&- \\nabla_x f& &-\\eta D_{xy}^2 f \\nabla_y f& &+\\eta D_{xx}^2 f \\nabla_x f& \\\\\n  & \\text{CGD: } &\\Delta x = &\\left(Id + \\eta^2 D_{xy}^2 f D_{yx}^2 f\\right)^{-1}&\\bigl( &- \\nabla_x f& &-\\eta D_{xy}^2 f \\nabla_y f& & & \\bigr)\n  \\end{align*}\n\n1. The **gradient term** $-\\nabla_{x}f$, $\\nabla_{y}f$ which corresponds to the most immediate way in which the players can improve their cost.\n\n\n\n2. The **competitive term** $-D_{xy}f \\nabla_yf$, $D_{yx}f \\nabla_x f$ which can be interpreted either as anticipating the other player to use the naive (GDA) strategy, or as decreasing the other player's influence (by decreasing their gradient).\n\n\n\n3. The **consensus term** $ \\pm D_{xx}^2 \\nabla_x f$, $\\mp D_{yy}^2 \\nabla_y f$ that determines whether the players prefer to decrease their gradient ($\\pm = +$) or to increase it ($\\pm = -$). The former corresponds to the players seeking consensus, whereas the latter can be seen as the opposite of consensus. (It also corresponds to an approximate Newton's method. 
Applying a damped and regularized Newton's method to the optimization problem of Player 1 would amount to choosing $x_{k+1} = x_{k} - \\eta(Id + \\eta D_{xx}^2 f)^{-1} \\nabla_x f \\approx x_{k} - \\eta( \\nabla_x f - \\eta D_{xx}^{2}f \\nabla_x f)$, for $\\|\\eta D_{xx}^2f\\| \\ll 1$.)\n\n\n\n\n4. The **equilibrium term** $(Id + \\eta^2 D_{xy}^2 f D_{yx}^2 f)^{-1}$, $(Id + \\eta^2 D_{yx}^2 f D_{xy}^2 f)^{-1}$, which arises from the players solving for the Nash equilibrium. \n    This term lets each player prefer strategies that are less vulnerable to the actions of the other player.\n", "_____no_output_____" ], [ "## Code Implementation\n\nThe competitive gradient descent algorithm contains gradient, competitive, and equilibrium terms, so we need to calculate them efficiently. The equilibrium term involves a matrix inverse, which we avoid computing explicitly.\n\n### Computing Hessian vector products\n\nThe algorithm requires products of the mixed Hessian $v \\mapsto D_{xy}f v$ and $v \\mapsto D_{yx}g v$, which we want to compute using automatic differentiation.\n\nMany AD frameworks, like Autograd (https://github.com/HIPS/autograd) and ForwardDiff (https://github.com/JuliaDiff/ForwardDiff.jl) together with ReverseDiff (https://github.com/JuliaDiff/ReverseDiff.jl), support this procedure. While the authors used the AD frameworks from Julia, I will be using Autograd from PyTorch (https://pytorch.org/docs/stable/autograd.html).\n\n### Matrix inversion for the equilibrium term\nThe authors propose using iterative methods to approximate the inverse matrix-vector products arising in the *equilibrium term*.\nThe authors focus on zero-sum games, where the matrix is always symmetric positive definite, making the [conjugate gradient (CG)](https://en.wikipedia.org/wiki/Conjugate_gradient_method) algorithm the method of choice. 
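As a minimal PyTorch sketch of such a Hessian-vector product (my own illustration, not the report's Julia code), the mixed term $D_{xy}^2 f \\, v$ can be obtained by differentiating twice:\n\n```python\nimport torch\n\nx = torch.tensor([1.0, -2.0], requires_grad=True)\ny = torch.tensor([0.5, 3.0], requires_grad=True)\nf = (x * y).sum()  # bilinear toy game f(x, y) = x^T y\n\ngrad_y = torch.autograd.grad(f, y, create_graph=True)[0]  # nabla_y f, kept on the graph\nv = torch.tensor([1.0, 1.0])\nhvp = torch.autograd.grad(grad_y @ v, x)[0]  # D_xy^2 f @ v; equals v for this f\nprint(hvp)\n```\n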
\nThey also suggest terminating the iterative solver after a given relative decrease of the residual is achieved ($\\| M x - y \\| \\leq \\epsilon \\|x\\|$ for a small parameter $\\epsilon$, when solving the system $Mx = y$).\n\nBriefly, conjugate gradient (CG) iteratively solves the system $Mx = y$ for $x$ without calculating $M^{-1}$.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\"\"\"\nSimple Python implementation of CG tested on an example\n\"\"\"\n\n# Problem setup\nA = np.matrix([[3.0, 2.0], \n               [2.0, 6.0]])  # the matrix A in Ax = b\nb = np.matrix([[2.0], \n               [-8.0]])  # we will use the convention that a vector is a column vector\n\n\n# solution approach\nx = np.matrix([[-2.0],\n               [-2.0]])\n\nsteps = [(-2.0, -2.0)]  # modify according to x\ni = 0\nimax = 10\neps = 0.01\nr = b - A * x\nd = r\ndeltanew = r.T * r\ndelta0 = deltanew\nwhile i < imax and deltanew > eps**2 * delta0:\n    alpha = float(deltanew / float(d.T * (A * d)))\n    x = x + alpha * d\n    steps.append((x[0, 0], x[1, 0]))\n    r = b - A * x\n    deltaold = deltanew\n    deltanew = r.T * r\n    beta = float(deltanew / float(deltaold))\n    d = r + beta * d\n    i += 1\n    \nprint(\"Solution vector x* for Ax = b :\")\nprint(x)\n\nprint(\"And the steps taken by algorithm : \", steps)\nplt.plot(steps)", "Solution vector x* for Ax = b :\n[[ 2.]\n [-2.]]\nAnd the steps taken by algorithm :  [(-2.0, -2.0), (0.08000000000000007, -0.6133333333333333), (2.0, -2.0)]\n" ] ], [ [ "#### Now to solve our problem of CGD, the following equation\n\n\\begin{equation}\n\\begin{pmatrix}\n \\Delta x\\\\\n \\Delta y\n\\end{pmatrix} = -\n\\begin{pmatrix}\n Id & \\eta D_{xy}^2 f \\\\\n \\eta D_{yx}^2 g & Id \n\\end{pmatrix}^{-1}\n\\begin{pmatrix}\n \\nabla_{x} f\\\\\n \\nabla_{y} g\n\\end{pmatrix}.\n\\end{equation}\n\n#### can be written as \n\n\\begin{equation}\n\\begin{pmatrix}\n Id & \\eta D_{xy}^2 f \\\\\n \\eta D_{yx}^2 g & Id \n\\end{pmatrix}\n\\begin{pmatrix}\n \\Delta x\\\\\n \\Delta y\n\\end{pmatrix} = -\n\\begin{pmatrix}\n \\nabla_{x} f\\\\\n \\nabla_{y} g\n\\end{pmatrix}.\n\\end{equation}\n\n#### so that the conjugate gradient method can be used to calculate $\\Delta x$ and $\\Delta y$ without inverting the matrix.\n", "_____no_output_____" ], [ "## Conclusion\n\nIn the words of the authors of the original paper:\n`We propose a novel and natural generalization of gradient descent to competitive optimization. Besides its attractive game-theoretic interpretation, the algorithm shows improved robustness properties compared to the existing methods, which we study using a combination of theoretical analysis and computational experiments.`\n\nSee the conclusion section of the [paper](https://arxiv.org/pdf/1905.12103.pdf) for an extended discussion and future directions.\nRefer to the Experiments and Conclusion sections of the reproducibility [report](https://gopikishan14.github.io/Reproducibility_Challenge_NeurIPS_2019/index.html) for details on replication.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7dbcedc304177b1d70bd7ae51a263c16309093e
934
ipynb
Jupyter Notebook
index.ipynb
riversdark/torchfold
98fe5300d11fd695d977854b05896f194b6e3acb
[ "Apache-2.0" ]
1
2021-11-08T06:52:49.000Z
2021-11-08T06:52:49.000Z
index.ipynb
riversdark/torchfold
98fe5300d11fd695d977854b05896f194b6e3acb
[ "Apache-2.0" ]
2
2021-08-02T01:57:50.000Z
2021-08-02T12:25:59.000Z
index.ipynb
riversdark/torchfold
98fe5300d11fd695d977854b05896f194b6e3acb
[ "Apache-2.0" ]
null
null
null
15.311475
63
0.48394
[ [ [ "#hide\nfrom torchfold.data import *", "_____no_output_____" ] ], [ [ "# torchfold\n\n> AlphaFold2 for protein structure prediction, in PyTorch", "_____no_output_____" ], [ "## Install", "_____no_output_____" ], [ "`pip install torchfold`", "_____no_output_____" ], [ "## How to use the package", "_____no_output_____" ] ] ]
[ "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
e7dbd041e34b905999ea3469fdce9b969e385f2a
79,494
ipynb
Jupyter Notebook
Decision Trees and Random Forests in Python_prac.ipynb
Jass005/MachineLearning
805736bd153a9a50c65f72cfce37e14bd73c2175
[ "Unlicense" ]
null
null
null
Decision Trees and Random Forests in Python_prac.ipynb
Jass005/MachineLearning
805736bd153a9a50c65f72cfce37e14bd73c2175
[ "Unlicense" ]
null
null
null
Decision Trees and Random Forests in Python_prac.ipynb
Jass005/MachineLearning
805736bd153a9a50c65f72cfce37e14bd73c2175
[ "Unlicense" ]
null
null
null
170.587983
69,672
0.900659
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "_____no_output_____" ], [ "df = pd.read_csv('kyphosis.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 81 entries, 0 to 80\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Kyphosis 81 non-null object\n 1 Age 81 non-null int64 \n 2 Number 81 non-null int64 \n 3 Start 81 non-null int64 \ndtypes: int64(3), object(1)\nmemory usage: 2.7+ KB\n" ], [ "sns.pairplot(df,hue='Kyphosis')", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "X = df.drop('Kyphosis', axis=1)", "_____no_output_____" ], [ "y = df['Kyphosis']", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)", "_____no_output_____" ], [ "from sklearn.tree import DecisionTreeClassifier", "_____no_output_____" ], [ "dtree = DecisionTreeClassifier()", "_____no_output_____" ], [ "dtree.fit(X_train, y_train)", "_____no_output_____" ], [ "predictions = dtree.predict(X_test)", "_____no_output_____" ], [ "from sklearn.metrics import classification_report, confusion_matrix", "_____no_output_____" ], [ "print(classification_report(y_test, predictions))", " precision recall f1-score support\n\n absent 0.87 0.65 0.74 20\n present 0.30 0.60 0.40 5\n\n accuracy 0.64 25\n macro avg 0.58 0.62 0.57 25\nweighted avg 0.75 0.64 0.67 25\n\n" ], [ "print(confusion_matrix(y_test, predictions))", "[[13 7]\n [ 2 3]]\n" ], [ "from sklearn.ensemble import RandomForestClassifier", "_____no_output_____" ], [ "rfc = RandomForestClassifier(n_estimators=200)", "_____no_output_____" ], [ "rfc.fit(X_train,y_train)", "_____no_output_____" ], [ "rfc_pred = rfc.predict(X_test)", "_____no_output_____" ], [ "print(classification_report(y_test, rfc_pred))\nprint('\\n')\nprint(confusion_matrix(y_test, rfc_pred))", " precision recall f1-score support\n\n absent 0.88 0.75 0.81 20\n present 0.38 0.60 0.46 5\n\n accuracy 0.72 25\n macro avg 0.63 0.68 0.64 25\nweighted avg 0.78 0.72 0.74 25\n\n\n\n[[15 5]\n [ 2 3]]\n" ], [ "df['Kyphosis'].value_counts()", "_____no_output_____" ] ], [ [ "# Tree Visualization", "_____no_output_____" ] ] ]
[ "code", "markdown" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7dbd1a19b7f075d2e45f681698605d14827d2ed
12,017
ipynb
Jupyter Notebook
nbs/mflasso.main.ipynb
sjkoelle/montlake
b908a43e0c00763bd1cf86120eaa6bdf7d8d1196
[ "Apache-2.0" ]
8
2021-11-24T19:39:24.000Z
2021-12-03T01:30:14.000Z
nbs/mflasso.main.ipynb
sjkoelle/montlake
b908a43e0c00763bd1cf86120eaa6bdf7d8d1196
[ "Apache-2.0" ]
null
null
null
nbs/mflasso.main.ipynb
sjkoelle/montlake
b908a43e0c00763bd1cf86120eaa6bdf7d8d1196
[ "Apache-2.0" ]
null
null
null
45.866412
189
0.583507
[ [ [ "# default_exp mflasso.main", "_____no_output_____" ], [ "# export \n\nfrom montlake.atomgeom.features import get_features,get_D_feats_feats\nfrom montlake.atomgeom.utils import get_atoms_4\nfrom montlake.simulations.rigidethanol import get_rigid_ethanol_data\nfrom montlake.utils.utils import get_234_indices, get_atoms3_full, get_atoms4_full, data_stream_custom_range, get_cosines\nfrom montlake.geometry.geometry import get_geom, get_wlpca_tangent_sel, get_rm_tangent_sel\nfrom montlake.gradients.estimate import get_grads_pullback\nfrom montlake.statistics.normalization import normalize_L212\nfrom montlake.optimization.gradientgrouplasso import get_sr_lambda_parallel\nfrom montlake.optimization.utils import get_selected_function_ids,get_selected_functions_lm2\nfrom montlake.utils.replicates import Replicate, get_supports_brute,get_supports_lasso\n\nfrom megaman.embedding import SpectralEmbedding\n\nimport dill as pickle\nimport os\nimport sys\nimport numpy as np\nimport itertools\nfrom itertools import permutations,combinations\nfrom sklearn.decomposition import TruncatedSVD\nimport pathos\nfrom pathos.multiprocessing import ProcessingPool as Pool", "_____no_output_____" ], [ "# export \n\ndef run_exp(positions, hparams):\n\n d = hparams.d \n n_components = hparams.n_components\n atoms2_feat = hparams.atoms2_feat \n atoms3_feat = hparams.atoms3_feat\n atoms4_feat = hparams.atoms4_feat\n atoms2_dict = hparams.atoms2_dict\n atoms3_dict = hparams.atoms3_dict\n atoms4_dict = hparams.atoms4_dict\n diagram = hparams.diagram\n\n ii = np.asarray(hparams.ii)\n jj = np.asarray(hparams.jj)\n outfile = hparams.outdir + '/' + hparams.name + 'results_mflasso' \n print('loading geometric features')\n natoms = positions.shape[1]\n n = positions.shape[0]\n atoms2 = np.asarray(list(itertools.combinations(range(natoms), 2))) \n atoms2full = atoms2\n atoms3 = np.asarray(list(itertools.combinations(range(natoms), 3))) \n atoms4 = np.asarray(list(itertools.combinations(range(natoms), 4))) \n atoms3full = get_atoms3_full(atoms3)\n atoms4full = get_atoms4_full(atoms4)\n \n if atoms2_feat:\n atoms2_feats = atoms2full\n else:\n atoms2_feats = np.asarray([])\n \n if atoms3_feat:\n atoms3_feats = atoms3full\n else:\n atoms3_feats = np.asarray([])\n \n if atoms4_feat:\n atoms4_feats = atoms4full\n else:\n atoms4_feats = np.asarray([])\n \n print('computing featurization')\n cores = pathos.multiprocessing.cpu_count() - 1\n pool = Pool(cores)\n print('feature dimensions 234',atoms2_feats.shape, atoms3_feats.shape,atoms4_feats.shape)\n \n results = pool.map(lambda i: get_features(positions[i],\n atoms2 = atoms2_feats,\n atoms3 = atoms3_feats,\n atoms4 = atoms4_feats),\n data_stream_custom_range(list(range(n))))\n data = np.vstack([np.hstack(results[i]) for i in range(n)])\n data = data - np.mean(data, axis = 0)\n svd = TruncatedSVD(n_components=50)\n data_svd = svd.fit_transform(data)\n \n print('computing geometry')\n radius = hparams.radius\n n_neighbors = hparams.n_neighbors\n geom = get_geom(data_svd, radius, n_neighbors) \n \n print('computing embedding')\n spectral_embedding = SpectralEmbedding(n_components=n_components,eigen_solver='arpack',geom=geom)\n embed_spectral = spectral_embedding.fit_transform(data_svd)\n \n print('getting gradients')\n if atoms2_dict:\n atoms2_dicts = atoms2full\n else:\n atoms2_dicts = np.asarray([])\n if atoms3_dict:\n atoms3_dicts = atoms3full\n else:\n atoms3_dicts = np.asarray([])\n if atoms4_dict and not diagram:\n atoms4_dicts = atoms4full\n elif atoms4_dict:\n atoms4_dicts= 
get_atoms_4(natoms, ii, jj)[0]\n else:\n atoms4_dicts = np.asarray([]) \n p = len(atoms2_dicts) + len(atoms3_dicts) + len(atoms4_dicts)\n replicates = {}\n embedding = embed_spectral\n nreps = hparams.nreps\n nsel = hparams.nsel\n for r in range(nreps):\n replicates[r] = Replicate(nsel = nsel, n = 10000)\n replicates[r].tangent_bases_M = get_wlpca_tangent_sel(data_svd, geom, replicates[r].selected_points, d)\n replicates[r].tangent_bases_phi = get_rm_tangent_sel(embedding, geom, replicates[r].selected_points, d)\n D_feats_feats = np.asarray([get_D_feats_feats(positions[replicates[r].selected_points[i]],\n atoms2in = atoms2_feats, \n atoms3in = atoms3_feats, \n atoms4in = atoms4_feats, \n atoms2out = atoms2_dicts, \n atoms3out = atoms3_dicts,\n atoms4out = atoms4_dicts) for i in range(nsel)])\n replicates[r].dg_x = np.asarray([svd.transform(D_feats_feats[i].transpose()).transpose() for i in range(nsel)])\n replicates[r].dg_x_normalized = normalize_L212(replicates[r].dg_x)\n replicates[r].dg_M = np.einsum('i b p, i b d -> i d p', replicates[r].dg_x_normalized, replicates[r].tangent_bases_M)\n replicates[r].dphispectral_M = get_grads_pullback(data_svd, embedding, geom, replicates[r].tangent_bases_M, replicates[r].tangent_bases_phi, replicates[r].selected_points)\n replicates[r].dphispectral_M_normalized = normalize_L212(replicates[r].dphispectral_M)\n \n print('running manifold lasso')\n gl_itermax= hparams.gl_itermax\n reg_l2 = hparams.reg_l2\n max_search = hparams.max_search\n d = hparams.d\n tol = hparams.tol\n learning_rate = hparams.learning_rate\n for r in range(nreps):\n replicates[r].results = get_sr_lambda_parallel(replicates[r].dphispectral_M_normalized , replicates[r].dg_M, gl_itermax,reg_l2, max_search, d, tol,learning_rate)\n replicates[r].get_ordered_axes()\n replicates[r].sel_l = replicates[r].get_selection_lambda()\n\n print('getting manifold lasso support')\n selected_functions_unique = np.asarray(np.unique(get_selected_function_ids(replicates,d)), dtype = int)\n support_tensor_lasso, supports_lasso = get_supports_lasso(replicates,p,d)\n\n print('getting two-stage support')\n selected_functions_lm2 = get_selected_functions_lm2(replicates)\n support_tensor_ts, supports_ts = get_supports_brute(replicates,nreps,p,d,selected_functions_lm2)\n selected_functions_unique_twostage = np.unique(np.asarray(np.where(support_tensor_ts > 0.)[0], dtype = int))\n\n pool.close()\n pool.restart()\n \n #needs 'order234' for full computation\n print('computing selected function values lasso, ' + str(selected_functions_unique))\n selected_function_values = pool.map(\n lambda i: get_features(positions[i],\n atoms2 = np.asarray([]),\n atoms3 = np.asarray([]),\n atoms4 = atoms4_dicts[selected_functions_unique]),\n data_stream_custom_range(list(range(n))))\n\n selected_function_values_array = np.vstack([np.hstack(selected_function_values[i]) for i in range(n)])\n\n print('computing selected function values two stage, ' + str(selected_functions_unique_twostage))\n selected_function_values_ts = pool.map(\n lambda i: get_features(positions[i],\n atoms2 = np.asarray([]),\n atoms3 = np.asarray([]),\n atoms4 = atoms4_dicts[selected_functions_unique_twostage]),\n data_stream_custom_range(list(range(n))))\n\n selected_function_values_array_brute = np.vstack([np.hstack(selected_function_values_ts[i]) for i in range(n)])\n \n print('remove large gradient arrays for memory efficiency')\n replicates_small = {}\n for r in range(nreps):\n replicates_small[r] = Replicate(nsel=nsel, n=n,\n 
selected_points=replicates[r].selected_points)\n replicates_small[r].dg_M = replicates[r].dg_M\n replicates_small[r].dphispectral_M = replicates[r].dphispectral_M\n replicates_small[r].cs_reorder = replicates[r].cs_reorder\n replicates_small[r].xaxis_reorder = replicates[r].xaxis_reorder\n \n print('getting cosines')\n cosine = get_cosines(replicates[0].dg_M)\n replicates_small[0].cosine_abs = np.mean(np.abs(cosine), axis = 0)\n \n print('prepare to save')\n results = {}\n results['replicates_small'] = replicates_small\n results['embed'] = embedding\n results['geom'] = geom\n results['data'] = data_svd\n results['supports_ts'] = support_tensor_ts, supports_ts\n results['supports_lasso'] = support_tensor_lasso, supports_lasso\n results['supports_ts_values'] = selected_function_values_ts\n results['supports_lasso_values'] = selected_function_values\n results['selected_ts'] = selected_functions_unique_twostage \n results['selected_lasso'] = selected_functions_unique\n results['dictionary'] = {}\n results['dictionary']['atoms2'] = atoms2_dicts\n results['dictionary']['atoms3'] = atoms3_dicts\n results['dictionary']['atoms4'] = atoms4_dicts\n\n print('saving')\n with open(outfile,'wb') as output:\n pickle.dump(results, output, pickle.HIGHEST_PROTOCOL)\n \n print('done')", "_____no_output_____" ] ] ]
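Since the cells only define `run_exp(positions, hparams)`, a hypothetical driver is sketched below. The `hparams` field names mirror exactly the attributes that `run_exp` reads; every concrete value, and the random `positions` array standing in for real atomic coordinates, is a placeholder rather than a setting from the original experiments. Note that `run_exp` hard-codes `n = 10000` when constructing `Replicate` objects, so the stand-in data uses 10000 samples.

```python
from types import SimpleNamespace
import numpy as np

# Placeholder values throughout; only the field names are taken from run_exp.
hparams = SimpleNamespace(
    name='rigidethanol_', outdir='results',
    d=2, n_components=3,                                     # intrinsic / embedding dims
    atoms2_feat=False, atoms3_feat=True, atoms4_feat=True,   # featurization switches
    atoms2_dict=False, atoms3_dict=False, atoms4_dict=True,  # dictionary switches
    diagram=True, ii=[0], jj=[1],                            # atoms4 diagram selection
    radius=1.5, n_neighbors=100,                             # neighborhood graph
    nreps=1, nsel=100,                                       # replicates, points each
    gl_itermax=500, reg_l2=0.0, max_search=30,
    tol=1e-14, learning_rate=100.0,                          # group-lasso settings
)

# Stand-in for an (n x natoms x 3) array of atomic positions.
positions = np.random.randn(10000, 9, 3)
run_exp(positions, hparams)
```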
[ "code" ]
[ [ "code", "code", "code" ] ]
e7dbdd0f750237fed09b11c869abc719bac5e251
251,570
ipynb
Jupyter Notebook
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
7fdf50c1d110e060c5758ac938d18d70d7861104
[ "MIT" ]
null
null
null
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
7fdf50c1d110e060c5758ac938d18d70d7861104
[ "MIT" ]
null
null
null
docs/notebooks/DynamicInvariants.ipynb
abhilashgupta/fuzzingbook
7fdf50c1d110e060c5758ac938d18d70d7861104
[ "MIT" ]
null
null
null
31.313169
891
0.554033
[ [ [ "# Mining Function Specifications\n\nWhen testing a program, one not only needs to cover its several behaviors; one also needs to _check_ whether the result is as expected. In this chapter, we introduce a technique that allows us to _mine_ function specifications from a set of given executions, resulting in abstract and formal _descriptions_ of what the function expects and what it delivers. \n\nThese so-called _dynamic invariants_ produce pre- and post-conditions over function arguments and variables from a set of executions. They are useful in a variety of contexts:\n\n* Dynamic invariants provide important information for [symbolic fuzzing](SymbolicFuzzer.ipynb), such as types and ranges of function arguments.\n* Dynamic invariants provide pre- and postconditions for formal program proofs and verification.\n* Dynamic invariants provide a large number of assertions that can check whether function behavior has changed\n* Checks provided by dynamic invariants can be very useful as _oracles_ for checking the effects of generated tests\n\nTraditionally, dynamic invariants are dependent on the executions they are derived from. However, when paired with comprehensive test generators, they quickly become very precise, as we show in this chapter.", "_____no_output_____" ], [ "**Prerequisites**\n\n* You should be familiar with tracing program executions, as in the [chapter on coverage](Coverage.ipynb).\n* Later in this section, we access the internal _abstract syntax tree_ representations of Python programs and transform them, as in the [chapter on information flow](InformationFlow.ipynb).", "_____no_output_____" ] ], [ [ "import fuzzingbook_utils", "_____no_output_____" ], [ "import Coverage\nimport Intro_Testing", "_____no_output_____" ] ], [ [ "## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from fuzzingbook.DynamicInvariants import <identifier>\n```\n\nand then make use of the following features.\n\n\nThis chapter provides two classes that automatically extract specifications from a function and a set of inputs:\n\n* `TypeAnnotator` for _types_, and\n* `InvariantAnnotator` for _pre-_ and _postconditions_.\n\nBoth work by _observing_ a function and its invocations within a `with` clause. 
Here is an example for the type annotator:\n\n```python\n>>> def sum2(a, b):\n>>> return a + b\n>>> with TypeAnnotator() as type_annotator:\n>>> sum2(1, 2)\n>>> sum2(-4, -5)\n>>> sum2(0, 0)\n```\nThe `typed_functions()` method will return a representation of `sum2()` annotated with types observed during execution.\n\n```python\n>>> print(type_annotator.typed_functions())\ndef sum2(a: int, b: int) ->int:\n return a + b\n\n\n```\nThe invariant annotator works in a similar fashion:\n\n```python\n>>> with InvariantAnnotator() as inv_annotator:\n>>> sum2(1, 2)\n>>> sum2(-4, -5)\n>>> sum2(0, 0)\n```\nThe `functions_with_invariants()` method will return a representation of `sum2()` annotated with inferred pre- and postconditions that all hold for the observed values.\n\n```python\n>>> print(inv_annotator.functions_with_invariants())\n@precondition(lambda a, b: isinstance(a, int))\n@precondition(lambda a, b: isinstance(b, int))\n@postcondition(lambda return_value, a, b: a == return_value - b)\n@postcondition(lambda return_value, a, b: b == return_value - a)\n@postcondition(lambda return_value, a, b: isinstance(return_value, int))\n@postcondition(lambda return_value, a, b: return_value == a + b)\n@postcondition(lambda return_value, a, b: return_value == b + a)\ndef sum2(a, b):\n return a + b\n\n\n```\nSuch type specifications and invariants can be helpful as _oracles_ (to detect deviations from a given set of runs) as well as for all kinds of _symbolic code analyses_. The chapter gives details on how to customize the properties checked for.\n\n", "_____no_output_____" ], [ "## Specifications and Assertions\n\nWhen implementing a function or program, one usually works against a _specification_ – a set of documented requirements to be satisfied by the code. Such specifications can come in natural language. A formal specification, however, allows the computer to check whether the specification is satisfied.\n\nIn the [introduction to testing](Intro_Testing.ipynb), we have seen how _preconditions_ and _postconditions_ can describe what a function does. Consider the following (simple) square root function:", "_____no_output_____" ] ], [ [ "def my_sqrt(x):\n assert x >= 0 # Precondition\n \n ...\n \n assert result * result == x # Postcondition\n return result", "_____no_output_____" ] ], [ [ "The assertion `assert p` checks the condition `p`; if it does not hold, execution is aborted. Here, the actual body is not yet written; we use the assertions as a specification of what `my_sqrt()` _expects_, and what it _delivers_.\n\nThe topmost assertion is the _precondition_, stating the requirements on the function arguments. The assertion at the end is the _postcondition_, stating the properties of the function result (including its relationship with the original arguments). Using these pre- and postconditions as a specification, we can now go and implement a square root function that satisfies them. Once implemented, we can have the assertions check at runtime whether `my_sqrt()` works as expected; a [symbolic](SymbolicFuzzer.ipynb) or [concolic](ConcolicFuzzer.ipynb) test generator will even specifically try to find inputs where the assertions do _not_ hold. 
(An assertion can be seen as a conditional branch towards aborting the execution, and any technique that tries to cover all code branches will also try to invalidate as many assertions as possible.)", "_____no_output_____" ], [ "However, not every piece of code is developed with explicit specifications in the first place; let alone does most code comes with formal pre- and post-conditions. (Just take a look at the chapters in this book.) This is a pity: As Ken Thompson famously said, \"Without specifications, there are no bugs – only surprises\". It is also a problem for testing, since, of course, testing needs some specification to test against. This raises the interesting question: Can we somehow _retrofit_ existing code with \"specifications\" that properly describe their behavior, allowing developers to simply _check_ them rather than having to write them from scratch? This is what we do in this chapter.", "_____no_output_____" ], [ "## Why Generic Error Checking is Not Enough\n\nBefore we go into _mining_ specifications, let us first discuss why it could be useful to _have_ them. As a motivating example, consider the full implementation of `my_sqrt()` from the [introduction to testing](Intro_Testing.ipynb):", "_____no_output_____" ] ], [ [ "import fuzzingbook_utils", "_____no_output_____" ], [ "def my_sqrt(x):\n \"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\n approx = None\n guess = x / 2\n while approx != guess:\n approx = guess\n guess = (approx + x / approx) / 2\n return approx", "_____no_output_____" ] ], [ [ "`my_sqrt()` does not come with any functionality that would check types or values. Hence, it is easy for callers to make mistakes when calling `my_sqrt()`:", "_____no_output_____" ] ], [ [ "from ExpectError import ExpectError, ExpectTimeout", "_____no_output_____" ], [ "with ExpectError():\n my_sqrt(\"foo\")", "Traceback (most recent call last):\n File \"<ipython-input-7-774676a5ccb8>\", line 2, in <module>\n my_sqrt(\"foo\")\n File \"<ipython-input-5-47185ad159a1>\", line 4, in my_sqrt\n guess = x / 2\nTypeError: unsupported operand type(s) for /: 'str' and 'int' (expected)\n" ], [ "with ExpectError():\n x = my_sqrt(0.0)", "Traceback (most recent call last):\n File \"<ipython-input-8-262c66114b1c>\", line 2, in <module>\n x = my_sqrt(0.0)\n File \"<ipython-input-5-47185ad159a1>\", line 7, in my_sqrt\n guess = (approx + x / approx) / 2\nZeroDivisionError: float division by zero (expected)\n" ] ], [ [ "At least, the Python system catches these errors at runtime. The following call, however, simply lets the function enter an infinite loop:", "_____no_output_____" ] ], [ [ "with ExpectTimeout(1):\n x = my_sqrt(-1.0)", "Traceback (most recent call last):\n File \"<ipython-input-9-b72078127dc0>\", line 2, in <module>\n x = my_sqrt(-1.0)\n File \"<ipython-input-5-47185ad159a1>\", line 6, in my_sqrt\n approx = guess\n File \"<ipython-input-5-47185ad159a1>\", line 6, in my_sqrt\n approx = guess\n File \"ExpectError.ipynb\", line 59, in check_time\nTimeoutError (expected)\n" ] ], [ [ "Our goal is to avoid such errors by _annotating_ functions with information that prevents errors like the above ones. 
The idea is to provide a _specification_ of expected properties – a specification that can then be checked at runtime or statically.", "_____no_output_____" ], [ "\\todo{Introduce the concept of *contract*.}", "_____no_output_____" ], [ "## Specifying and Checking Data Types\n\nFor our Python code, one of the most important \"specifications\" we need is *types*. Python being a \"dynamically\" typed language means that all data types are determined at run time; the code itself does not explicitly state whether a variable is an integer, a string, an array, a dictionary – or whatever.", "_____no_output_____" ], [ "As _writer_ of Python code, omitting explicit type declarations may save time (and allows for some fun hacks). It is not clear whether a lack of types helps humans in _reading_ and _understanding_ code. For a _computer_ trying to analyze code, however, the lack of explicit types is detrimental. If, say, a constraint solver sees `if x:` and cannot know whether `x` is supposed to be a number or a string, this introduces an _ambiguity_. Such ambiguities may multiply over the entire analysis in a combinatorial explosion – or end with the analysis yielding an overly inaccurate result.", "_____no_output_____" ], [ "Python 3.6 and later allows data types as _annotations_ to function arguments (actually, to all variables) and return values. We can, for instance, state that `my_sqrt()` is a function that accepts a floating-point value and returns one:", "_____no_output_____" ] ], [ [ "def my_sqrt_with_type_annotations(x: float) -> float:\n    \"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\n    return my_sqrt(x)", "_____no_output_____" ] ], [ [ "By default, such annotations are ignored by the Python interpreter. Therefore, one can still call `my_sqrt_with_type_annotations()` with a string as an argument and get the exact same result as above. However, one can make use of special _typechecking_ modules that would check types – _dynamically_ at runtime or _statically_ by analyzing the code without having to execute it.", "_____no_output_____" ], [ "### Runtime Type Checking\n\nThe Python `enforce` package provides a function decorator that automatically inserts type-checking code that is executed at runtime. Here is how to use it:", "_____no_output_____" ] ], [ [ "import enforce", "_____no_output_____" ], [ "@enforce.runtime_validation\ndef my_sqrt_with_checked_type_annotations(x: float) -> float:\n    \"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\n    return my_sqrt(x)", "_____no_output_____" ] ], [ [ "Now, `my_sqrt_with_checked_type_annotations()` raises an exception when invoked with a type different from the one declared:", "_____no_output_____" ] ], [ [ "with ExpectError():\n    my_sqrt_with_checked_type_annotations(True)", "Traceback (most recent call last):\n  File \"<ipython-input-13-68b73bd3f6ef>\", line 2, in <module>\n    my_sqrt_with_checked_type_annotations(True)\n  File \"/Users/zeller/Library/Python/3.6/site-packages/enforce/decorators.py\", line 104, in universal\n    _args, _kwargs, _ = enforcer.validate_inputs(parameters)\n  File \"/Users/zeller/Library/Python/3.6/site-packages/enforce/enforcers.py\", line 86, in validate_inputs\n    raise RuntimeTypeError(exception_text)\nenforce.exceptions.RuntimeTypeError: \n  The following runtime type errors were encountered:\n       Argument 'x' was not of type <class 'float'>. Actual type was bool. 
(expected)\n" ] ], [ [ "Note that this error is not caught by the \"untyped\" variant, where passing a boolean value happily returns $\\sqrt{1}$ as result. ", "_____no_output_____" ] ], [ [ "my_sqrt(True)", "_____no_output_____" ] ], [ [ "In Python (and other languages), the boolean values `True` and `False` can be implicitly converted to the integers 1 and 0; however, it is hard to think of a call to `sqrt()` where this would not be an error.", "_____no_output_____" ], [ "### Static Type Checking\n\nType annotations can also be checked _statically_ – that is, without even running the code. Let us create a simple Python file consisting of the above `my_sqrt_typed()` definition and a bad invocation.", "_____no_output_____" ] ], [ [ "import inspect\nimport tempfile", "_____no_output_____" ], [ "f = tempfile.NamedTemporaryFile(mode='w', suffix='.py')\nf.name", "_____no_output_____" ], [ "f.write(inspect.getsource(my_sqrt))\nf.write('\\n')\nf.write(inspect.getsource(my_sqrt_with_type_annotations))\nf.write('\\n')\nf.write(\"print(my_sqrt_with_type_annotations('123'))\\n\")\nf.flush()", "_____no_output_____" ] ], [ [ "These are the contents of our newly created Python file:", "_____no_output_____" ] ], [ [ "from fuzzingbook_utils import print_file", "_____no_output_____" ], [ "print_file(f.name)", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x):\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n\n\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt_with_type_annotations\u001b[39;49;00m(x: \u001b[36mfloat\u001b[39;49;00m) -> \u001b[36mfloat\u001b[39;49;00m:\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m my_sqrt(x)\n\n\u001b[34mprint\u001b[39;49;00m(my_sqrt_with_type_annotations(\u001b[33m'\u001b[39;49;00m\u001b[33m123\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m))\n" ] ], [ [ "[Mypy](http://mypy-lang.org) is a type checker for Python programs. As it checks types statically, types induce no overhead at runtime; plus, a static check can be faster than a lengthy series of tests with runtime type checking enabled. 
Let us see what `mypy` produces on the above file:", "_____no_output_____" ] ], [ [ "import subprocess", "_____no_output_____" ], [ "result = subprocess.run([\"mypy\", \"--strict\", f.name], universal_newlines=True, stdout=subprocess.PIPE)\ndel f  # Delete temporary file", "_____no_output_____" ], [ "print(result.stdout)", "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp207al5cu.py:1: error: Function is missing a type annotation\n/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp207al5cu.py:12: warning: Returning Any from function declared to return \"float\"\n/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp207al5cu.py:12: error: Call to untyped function \"my_sqrt\" in typed context\n/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/tmp207al5cu.py:14: error: Argument 1 to \"my_sqrt_with_type_annotations\" has incompatible type \"str\"; expected \"float\"\n\n" ] ], [ [ "We see that `mypy` complains about untyped function definitions such as `my_sqrt()`; most importantly, however, it finds that the call to `my_sqrt_with_type_annotations()` in the last line has the wrong type.", "_____no_output_____" ], [ "With `mypy`, we can achieve the same type safety with Python as in statically typed languages – provided that we as programmers also produce the necessary type annotations. Is there a simple way to obtain these?", "_____no_output_____" ], [ "## Mining Type Specifications\n\nOur first task will be to mine type annotations (as part of the code) from _values_ we observe at run time. These type annotations would be _mined_ from actual function executions, _learning_ from (normal) runs what the expected argument and return types should be. By observing a series of calls such as these, we could infer that both `x` and the return value are of type `float`:", "_____no_output_____" ] ], [ [ "y = my_sqrt(25.0)\ny", "_____no_output_____" ], [ "y = my_sqrt(2.0)\ny", "_____no_output_____" ] ], [ [ "How can we mine types from executions? The answer is simple: \n\n1. We _observe_ a function during execution\n2. We track the _types_ of its arguments\n3. We include these types as _annotations_ into the code.\n\nTo do so, we can make use of Python's tracing facility that we already used in the [chapter on coverage](Coverage.ipynb). With every call to a function, we retrieve the arguments, their values, and their types.", "_____no_output_____" ], [ "### Tracking Calls\n\nTo observe argument types at runtime, we define a _tracer function_ that tracks the execution of `my_sqrt()`, checking its arguments and return values. The `Tracker` class is set to trace functions in a `with` block as follows:\n\n```python\nwith Tracker() as tracker:\n    function_to_be_tracked(...)\ninfo = tracker.collected_information()\n```\n\nAs in the [chapter on coverage](Coverage.ipynb), we use the `sys.settrace()` function to trace individual functions during execution. We turn on tracking when the `with` block starts; at this point, the `__enter__()` method is called. When execution of the `with` block ends, `__exit__()` is called. 
", "_____no_output_____" ] ], [ [ "import sys", "_____no_output_____" ], [ "class Tracker(object):\n def __init__(self, log=False):\n self._log = log\n self.reset()\n\n def reset(self):\n self._calls = {}\n self._stack = []\n\n def traceit(self):\n \"\"\"Placeholder to be overloaded in subclasses\"\"\"\n pass\n\n # Start of `with` block\n def __enter__(self):\n self.original_trace_function = sys.gettrace()\n sys.settrace(self.traceit)\n return self\n\n # End of `with` block\n def __exit__(self, exc_type, exc_value, tb):\n sys.settrace(self.original_trace_function)", "_____no_output_____" ] ], [ [ "The `traceit()` method does nothing yet; this is done in specialized subclasses. The `CallTracker` class implements a `traceit()` function that checks for function calls and returns:", "_____no_output_____" ] ], [ [ "class CallTracker(Tracker):\n def traceit(self, frame, event, arg):\n \"\"\"Tracking function: Record all calls and all args\"\"\"\n if event == \"call\":\n self.trace_call(frame, event, arg)\n elif event == \"return\":\n self.trace_return(frame, event, arg)\n \n return self.traceit", "_____no_output_____" ] ], [ [ "`trace_call()` is called when a function is called; it retrieves the function name and current arguments, and saves them on a stack.", "_____no_output_____" ] ], [ [ "class CallTracker(CallTracker):\n def trace_call(self, frame, event, arg):\n \"\"\"Save current function name and args on the stack\"\"\"\n code = frame.f_code\n function_name = code.co_name\n arguments = get_arguments(frame)\n self._stack.append((function_name, arguments))\n\n if self._log:\n print(simple_call_string(function_name, arguments))", "_____no_output_____" ], [ "def get_arguments(frame):\n \"\"\"Return call arguments in the given frame\"\"\"\n # When called, all arguments are local variables\n arguments = [(var, frame.f_locals[var]) for var in frame.f_locals]\n arguments.reverse() # Want same order as call\n return arguments", "_____no_output_____" ] ], [ [ "When the function returns, `trace_return()` is called. We now also have the return value. We log the whole call with arguments and return value (if desired) and save it in our list of calls.", "_____no_output_____" ] ], [ [ "class CallTracker(CallTracker):\n def trace_return(self, frame, event, arg):\n \"\"\"Get return value and store complete call with arguments and return value\"\"\"\n code = frame.f_code\n function_name = code.co_name\n return_value = arg\n # TODO: Could call get_arguments() here to also retrieve _final_ values of argument variables\n \n called_function_name, called_arguments = self._stack.pop()\n assert function_name == called_function_name\n \n if self._log:\n print(simple_call_string(function_name, called_arguments), \"returns\", return_value)\n \n self.add_call(function_name, called_arguments, return_value)", "_____no_output_____" ] ], [ [ "`simple_call_string()` is a helper for logging that prints out calls in a user-friendly manner.", "_____no_output_____" ] ], [ [ "def simple_call_string(function_name, argument_list, return_value=None):\n \"\"\"Return function_name(arg[0], arg[1], ...) 
as a string\"\"\"\n call = function_name + \"(\" + \\\n \", \".join([var + \"=\" + repr(value)\n for (var, value) in argument_list]) + \")\"\n\n if return_value is not None:\n call += \" = \" + repr(return_value)\n \n return call", "_____no_output_____" ] ], [ [ "`add_call()` saves the calls in a list; each function name has its own list.", "_____no_output_____" ] ], [ [ "class CallTracker(CallTracker):\n def add_call(self, function_name, arguments, return_value=None):\n \"\"\"Add given call to list of calls\"\"\"\n if function_name not in self._calls:\n self._calls[function_name] = []\n self._calls[function_name].append((arguments, return_value))", "_____no_output_____" ] ], [ [ "Using `calls()`, we can retrieve the list of calls, either for a given function, or for all functions.", "_____no_output_____" ] ], [ [ "class CallTracker(CallTracker):\n def calls(self, function_name=None):\n \"\"\"Return list of calls for function_name, \n or a mapping function_name -> calls for all functions tracked\"\"\"\n if function_name is None:\n return self._calls\n\n return self._calls[function_name]", "_____no_output_____" ] ], [ [ "Let us now put this to use. We turn on logging to track the individual calls and their return values:", "_____no_output_____" ] ], [ [ "with CallTracker(log=True) as tracker:\n y = my_sqrt(25)\n y = my_sqrt(2.0)", "my_sqrt(x=25)\nmy_sqrt(x=25) returns 5.0\nmy_sqrt(x=2.0)\nmy_sqrt(x=2.0) returns 1.414213562373095\n__exit__(self=<__main__.CallTracker object at 0x10fc937b8>, exc_type=None, exc_value=None, tb=None)\n" ] ], [ [ "After execution, we can retrieve the individual calls:", "_____no_output_____" ] ], [ [ "calls = tracker.calls('my_sqrt')\ncalls", "_____no_output_____" ] ], [ [ "Each call is pair (`argument_list`, `return_value`), where `argument_list` is a list of pairs (`parameter_name`, `value`).", "_____no_output_____" ] ], [ [ "my_sqrt_argument_list, my_sqrt_return_value = calls[0]\nsimple_call_string('my_sqrt', my_sqrt_argument_list, my_sqrt_return_value)", "_____no_output_____" ] ], [ [ "If the function does not return a value, `return_value` is `None`.", "_____no_output_____" ] ], [ [ "def hello(name):\n print(\"Hello,\", name)", "_____no_output_____" ], [ "with CallTracker() as tracker:\n hello(\"world\")", "Hello, world\n" ], [ "hello_calls = tracker.calls('hello')\nhello_calls", "_____no_output_____" ], [ "hello_argument_list, hello_return_value = hello_calls[0]\nsimple_call_string('hello', hello_argument_list, hello_return_value)", "_____no_output_____" ] ], [ [ "### Getting Types\n\nDespite what you may have read or heard, Python actually _is_ a typed language. It is just that it is _dynamically typed_ – types are used and checked only at runtime (rather than declared in the code, where they can be _statically checked_ at compile time). We can thus retrieve types of all values within Python:", "_____no_output_____" ] ], [ [ "type(4)", "_____no_output_____" ], [ "type(2.0)", "_____no_output_____" ], [ "type([4])", "_____no_output_____" ] ], [ [ "We can retrieve the type of the first argument to `my_sqrt()`:", "_____no_output_____" ] ], [ [ "parameter, value = my_sqrt_argument_list[0]\nparameter, type(value)", "_____no_output_____" ] ], [ [ "as well as the type of the return value:", "_____no_output_____" ] ], [ [ "type(my_sqrt_return_value)", "_____no_output_____" ] ], [ [ "Hence, we see that (so far), `my_sqrt()` is a function taking (among others) integers and returning floats. 
We could declare `my_sqrt()` as:", "_____no_output_____" ] ], [ [ "def my_sqrt_annotated(x: int) -> float:\n return my_sqrt(x)", "_____no_output_____" ] ], [ [ "This is a representation we could place in a static type checker, allowing to check whether calls to `my_sqrt()` actually pass a number. A dynamic type checker could run such checks at runtime. And of course, any [symbolic interpretation](SymbolicFuzzer.ipynb) will greatly profit from the additional annotations.", "_____no_output_____" ], [ "By default, Python does not do anything with such annotations. However, tools can access annotations from functions and other objects:", "_____no_output_____" ] ], [ [ "my_sqrt_annotated.__annotations__", "_____no_output_____" ] ], [ [ "This is how run-time checkers access the annotations to check against.", "_____no_output_____" ], [ "### Accessing Function Structure\n\nOur plan is to annotate functions automatically, based on the types we have seen. To do so, we need a few modules that allow us to convert a function into a tree representation (called _abstract syntax trees_, or ASTs) and back; we already have seen these in the chapters on [concolic](ConcolicFuzzer.ipynb) and [symbolic](SymbolicFuzzer.ipynb) testing.", "_____no_output_____" ] ], [ [ "import ast\nimport inspect\nimport astor", "_____no_output_____" ] ], [ [ "We can get the source of a Python function using `inspect.getsource()`. (Note that this does not work for functions defined in other notebooks.)", "_____no_output_____" ] ], [ [ "my_sqrt_source = inspect.getsource(my_sqrt)\nmy_sqrt_source", "_____no_output_____" ] ], [ [ "To view these in a visually pleasing form, our function `print_content(s, suffix)` formats and highlights the string `s` as if it were a file with ending `suffix`. We can thus view (and highlight) the source as if it were a Python file:", "_____no_output_____" ] ], [ [ "from fuzzingbook_utils import print_content", "_____no_output_____" ], [ "print_content(my_sqrt_source, '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x):\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ] ], [ [ "Parsing this gives us an abstract syntax tree (AST) – a representation of the program in tree form.", "_____no_output_____" ] ], [ [ "my_sqrt_ast = ast.parse(my_sqrt_source)", "_____no_output_____" ] ], [ [ "What does this AST look like? The helper functions `astor.dump_tree()` (textual output) and `showast.show_ast()` (graphical output with [showast](https://github.com/hchasestevens/show_ast)) allow us to inspect the structure of the tree. 
We see that the function starts as a `FunctionDef` with name and arguments, followed by a body, which is a list of statements of type `Expr` (the docstring), type `Assign` (assignments), `While` (while loop with its own body), and finally `Return`.", "_____no_output_____" ] ], [ [ "print(astor.dump_tree(my_sqrt_ast))", "Module(\n body=[\n FunctionDef(name='my_sqrt',\n args=arguments(args=[arg(arg='x', annotation=None)],\n vararg=None,\n kwonlyargs=[],\n kw_defaults=[],\n kwarg=None,\n defaults=[]),\n body=[\n Expr(value=Str(s='Computes the square root of x, using the Newton-Raphson method')),\n Assign(targets=[Name(id='approx')], value=NameConstant(value=None)),\n Assign(targets=[Name(id='guess')], value=BinOp(left=Name(id='x'), op=Div, right=Num(n=2))),\n While(\n test=Compare(left=Name(id='approx'), ops=[NotEq], comparators=[Name(id='guess')]),\n body=[Assign(targets=[Name(id='approx')], value=Name(id='guess')),\n Assign(targets=[Name(id='guess')],\n value=BinOp(\n left=BinOp(left=Name(id='approx'),\n op=Add,\n right=BinOp(left=Name(id='x'), op=Div, right=Name(id='approx'))),\n op=Div,\n right=Num(n=2)))],\n orelse=[]),\n Return(value=Name(id='approx'))],\n decorator_list=[],\n returns=None)])\n" ] ], [ [ "Too much text for you? This graphical representation may make things simpler.", "_____no_output_____" ] ], [ [ "from fuzzingbook_utils import rich_output", "_____no_output_____" ], [ "if rich_output():\n import showast\n showast.show_ast(my_sqrt_ast)", "_____no_output_____" ] ], [ [ "The function `astor.to_source()` converts such a tree back into the more familiar textual Python code representation. Comments are gone, and there may be more parentheses than before, but the result has the same semantics:", "_____no_output_____" ] ], [ [ "print_content(astor.to_source(my_sqrt_ast), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x):\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ] ], [ [ "### Annotating Functions with Given Types\n\nLet us now go and transform these trees ti add type annotations. We start with a helper function `parse_type(name)` which parses a type name into an AST.", "_____no_output_____" ] ], [ [ "def parse_type(name):\n class ValueVisitor(ast.NodeVisitor):\n def visit_Expr(self, node):\n self.value_node = node.value\n \n tree = ast.parse(name)\n name_visitor = ValueVisitor()\n name_visitor.visit(tree)\n return name_visitor.value_node", "_____no_output_____" ], [ "print(astor.dump_tree(parse_type('int')))", "Name(id='int')\n" ], [ "print(astor.dump_tree(parse_type('[object]')))", "List(elts=[Name(id='object')])\n" ] ], [ [ "We now define a helper function that actually adds type annotations to a function AST. The `TypeTransformer` class builds on the Python standard library `ast.NodeTransformer` infrastructure. It would be called as\n\n```python\n TypeTransformer({'x': 'int'}, 'float').visit(ast)\n```\n\nto annotate the arguments of `my_sqrt()`: `x` with `int`, and the return type with `float`. 
The returned AST can then be unparsed, compiled or analyzed.", "_____no_output_____" ] ], [ [ "class TypeTransformer(ast.NodeTransformer):\n def __init__(self, argument_types, return_type=None):\n self.argument_types = argument_types\n self.return_type = return_type\n super().__init__()", "_____no_output_____" ] ], [ [ "The core of `TypeTransformer` is the method `visit_FunctionDef()`, which is called for every function definition in the AST. Its argument `node` is the subtree of the function definition to be transformed. Our implementation accesses the individual arguments and invokes `annotate_args()` on them; it also sets the return type in the `returns` attribute of the node.", "_____no_output_____" ] ], [ [ "class TypeTransformer(TypeTransformer):\n def visit_FunctionDef(self, node):\n \"\"\"Add annotation to function\"\"\"\n # Set argument types\n new_args = []\n for arg in node.args.args:\n new_args.append(self.annotate_arg(arg))\n\n new_arguments = ast.arguments(\n new_args,\n node.args.vararg,\n node.args.kwonlyargs,\n node.args.kw_defaults,\n node.args.kwarg,\n node.args.defaults\n )\n\n # Set return type\n if self.return_type is not None:\n node.returns = parse_type(self.return_type)\n \n return ast.copy_location(ast.FunctionDef(node.name, new_arguments, \n node.body, node.decorator_list,\n node.returns), node)", "_____no_output_____" ] ], [ [ "Each argument gets its own annotation, taken from the types originally passed to the class:", "_____no_output_____" ] ], [ [ "class TypeTransformer(TypeTransformer):\n def annotate_arg(self, arg):\n \"\"\"Add annotation to single function argument\"\"\"\n arg_name = arg.arg\n if arg_name in self.argument_types:\n arg.annotation = parse_type(self.argument_types[arg_name])\n return arg", "_____no_output_____" ] ], [ [ "Does this work? Let us annotate the AST from `my_sqrt()` with types for the arguments and return types:", "_____no_output_____" ] ], [ [ "new_ast = TypeTransformer({'x': 'int'}, 'float').visit(my_sqrt_ast)", "_____no_output_____" ] ], [ [ "When we unparse the new AST, we see that the annotations actually are present:", "_____no_output_____" ] ], [ [ "print_content(astor.to_source(new_ast), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x: \u001b[36mint\u001b[39;49;00m) ->\u001b[36mfloat\u001b[39;49;00m:\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ] ], [ [ "Similarly, we can annotate the `hello()` function from above:", "_____no_output_____" ] ], [ [ "hello_source = inspect.getsource(hello)", "_____no_output_____" ], [ "hello_ast = ast.parse(hello_source)", "_____no_output_____" ], [ "new_ast = TypeTransformer({'name': 'str'}, 'None').visit(hello_ast)", "_____no_output_____" ], [ "print_content(astor.to_source(new_ast), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mhello\u001b[39;49;00m(name: \u001b[36mstr\u001b[39;49;00m) ->\u001b[36mNone\u001b[39;49;00m:\n \u001b[34mprint\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mHello,\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, name)\n" ] ], [ [ "### Annotating Functions with Mined Types\n\nLet us now annotate functions with types mined at runtime. 
We start with a simple function `type_string()` that determines the appropriate type of a given value (as a string):", "_____no_output_____" ] ], [ [ "def type_string(value):\n    return type(value).__name__", "_____no_output_____" ], [ "type_string(4)", "_____no_output_____" ], [ "type_string([])", "_____no_output_____" ] ], [ [ "For composite structures, `type_string()` does not examine element types; hence, the type of `[3]` is simply `list` instead of, say, `list[int]`. For now, `list` will do fine.", "_____no_output_____" ] ], [ [ "type_string([3])", "_____no_output_____" ] ], [ [ "`type_string()` will be used to infer the types of argument values found at runtime, as returned by `CallTracker.calls()`:", "_____no_output_____" ] ], [ [ "with CallTracker() as tracker:\n    y = my_sqrt(25.0)\n    y = my_sqrt(2.0)", "_____no_output_____" ], [ "tracker.calls()", "_____no_output_____" ] ], [ [ "The function `annotate_types()` takes such a list of calls and annotates each function listed:", "_____no_output_____" ] ], [ [ "def annotate_types(calls):\n    annotated_functions = {}\n    \n    for function_name in calls:\n        try:\n            annotated_functions[function_name] = annotate_function_with_types(function_name, calls[function_name])\n        except KeyError:\n            continue\n\n    return annotated_functions", "_____no_output_____" ] ], [ [ "For each function, we get the source and its AST and then proceed to the actual annotation in `annotate_function_ast_with_types()`:", "_____no_output_____" ] ], [ [ "def annotate_function_with_types(function_name, function_calls):\n    function = globals()[function_name]  # May raise KeyError for internal functions\n    function_code = inspect.getsource(function)\n    function_ast = ast.parse(function_code)\n    return annotate_function_ast_with_types(function_ast, function_calls)", "_____no_output_____" ] ], [ [ "The function `annotate_function_ast_with_types()` invokes the `TypeTransformer` with the calls seen; for each call, it iterates over the arguments, determines their types, and annotates the AST with these. 
The universal type `Any` is used when we encounter type conflicts, which we will discuss below.", "_____no_output_____" ] ], [ [ "from typing import Any", "_____no_output_____" ], [ "def annotate_function_ast_with_types(function_ast, function_calls):\n    parameter_types = {}\n    return_type = None\n\n    for calls_seen in function_calls:\n        args, return_value = calls_seen\n        if return_value is not None:\n            if return_type is not None and return_type != type_string(return_value):\n                return_type = 'Any'\n            else:\n                return_type = type_string(return_value)\n        \n        for parameter, value in args:\n            try:\n                different_type = parameter_types[parameter] != type_string(value)\n            except KeyError:\n                different_type = False\n            \n            if different_type:\n                parameter_types[parameter] = 'Any'\n            else:\n                parameter_types[parameter] = type_string(value)\n    \n    annotated_function_ast = TypeTransformer(parameter_types, return_type).visit(function_ast)\n    return annotated_function_ast", "_____no_output_____" ] ], [ [ "Here is `my_sqrt()` annotated with the types recorded using the tracker above.", "_____no_output_____" ] ], [ [ "print_content(astor.to_source(annotate_types(tracker.calls())['my_sqrt']), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x: \u001b[36mfloat\u001b[39;49;00m) ->\u001b[36mfloat\u001b[39;49;00m:\n    \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n    approx = \u001b[36mNone\u001b[39;49;00m\n    guess = x / \u001b[34m2\u001b[39;49;00m\n    \u001b[34mwhile\u001b[39;49;00m approx != guess:\n        approx = guess\n        guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n    \u001b[34mreturn\u001b[39;49;00m approx\n" ] ], [ [ "### All-in-one Annotation\n\nLet us bring all of this together in a single class `TypeAnnotator` that first tracks calls of functions and then provides access to the AST (and the source code form) of the tracked functions, annotated with types. The method `typed_functions()` returns the annotated functions as a string; `typed_functions_ast()` returns their AST.", "_____no_output_____" ] ], [ [ "class TypeTracker(CallTracker):\n    pass", "_____no_output_____" ], [ "class TypeAnnotator(TypeTracker):\n    def typed_functions_ast(self, function_name=None):\n        if function_name is None:\n            return annotate_types(self.calls())\n        \n        return annotate_function_with_types(function_name, self.calls(function_name))\n    \n    def typed_functions(self, function_name=None):\n        if function_name is None:\n            functions = ''\n            for f_name in self.calls():\n                try:\n                    f_text = astor.to_source(self.typed_functions_ast(f_name))\n                except KeyError:\n                    f_text = ''\n                functions += f_text\n            return functions\n\n        return astor.to_source(self.typed_functions_ast(function_name))", "_____no_output_____" ] ], [ [ "Here is how to use `TypeAnnotator`. 
We first track a series of calls:", "_____no_output_____" ] ], [ [ "with TypeAnnotator() as annotator:\n y = my_sqrt(25.0)\n y = my_sqrt(2.0)", "_____no_output_____" ] ], [ [ "After tracking, we can immediately retrieve an annotated version of the functions tracked:", "_____no_output_____" ] ], [ [ "print_content(annotator.typed_functions(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x: \u001b[36mfloat\u001b[39;49;00m) ->\u001b[36mfloat\u001b[39;49;00m:\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ] ], [ [ "This also works for multiple and diverse functions. One could go and implement an automatic type annotator for Python files based on the types seen during execution.", "_____no_output_____" ] ], [ [ "with TypeAnnotator() as annotator:\n hello('type annotations')\n y = my_sqrt(1.0)", "Hello, type annotations\n" ], [ "print_content(annotator.typed_functions(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mhello\u001b[39;49;00m(name: \u001b[36mstr\u001b[39;49;00m):\n \u001b[34mprint\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mHello,\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m, name)\n\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x: \u001b[36mfloat\u001b[39;49;00m) ->\u001b[36mfloat\u001b[39;49;00m:\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ] ], [ [ "A content as above could now be sent to a type checker, which would detect any type inconsistency between callers and callees. Likewise, type annotations such as the ones above greatly benefit symbolic code analysis (as in the chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb)), as they effectively constrain the set of values that arguments and variables can take.", "_____no_output_____" ], [ "### Multiple Types\n\nLet us now resolve the role of the magic `Any` type in `annotate_function_ast_with_types()`. If we see multiple types for the same argument, we set its type to `Any`. 
For `my_sqrt()`, this makes sense, as its arguments can be integers as well as floats:", "_____no_output_____" ] ], [ [ "with CallTracker() as tracker:\n y = my_sqrt(25.0)\n y = my_sqrt(4)", "_____no_output_____" ], [ "print_content(astor.to_source(annotate_types(tracker.calls())['my_sqrt']), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x: Any) ->\u001b[36mfloat\u001b[39;49;00m:\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ] ], [ [ "The following function `sum3()` can be called with floating-point numbers as arguments, resulting in the parameters getting a `float` type:", "_____no_output_____" ] ], [ [ "def sum3(a, b, c):\n return a + b + c", "_____no_output_____" ], [ "with TypeAnnotator() as annotator:\n y = sum3(1.0, 2.0, 3.0)\ny", "_____no_output_____" ], [ "print_content(annotator.typed_functions(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32msum3\u001b[39;49;00m(a: \u001b[36mfloat\u001b[39;49;00m, b: \u001b[36mfloat\u001b[39;49;00m, c: \u001b[36mfloat\u001b[39;49;00m) ->\u001b[36mfloat\u001b[39;49;00m:\n \u001b[34mreturn\u001b[39;49;00m a + b + c\n" ] ], [ [ "If we call `sum3()` with integers, though, the arguments get an `int` type:", "_____no_output_____" ] ], [ [ "with TypeAnnotator() as annotator:\n y = sum3(1, 2, 3)\ny", "_____no_output_____" ], [ "print_content(annotator.typed_functions(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32msum3\u001b[39;49;00m(a: \u001b[36mint\u001b[39;49;00m, b: \u001b[36mint\u001b[39;49;00m, c: \u001b[36mint\u001b[39;49;00m) ->\u001b[36mint\u001b[39;49;00m:\n \u001b[34mreturn\u001b[39;49;00m a + b + c\n" ] ], [ [ "And we can also call `sum3()` with strings, giving the arguments a `str` type:", "_____no_output_____" ] ], [ [ "with TypeAnnotator() as annotator:\n y = sum3(\"one\", \"two\", \"three\")\ny", "_____no_output_____" ], [ "print_content(annotator.typed_functions(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32msum3\u001b[39;49;00m(a: \u001b[36mstr\u001b[39;49;00m, b: \u001b[36mstr\u001b[39;49;00m, c: \u001b[36mstr\u001b[39;49;00m) ->\u001b[36mstr\u001b[39;49;00m:\n \u001b[34mreturn\u001b[39;49;00m a + b + c\n" ] ], [ [ "If we have multiple calls, but with different types, `TypeAnnotator()` will assign an `Any` type to both arguments and return values:", "_____no_output_____" ] ], [ [ "with TypeAnnotator() as annotator:\n y = sum3(1, 2, 3)\n y = sum3(\"one\", \"two\", \"three\")", "_____no_output_____" ], [ "typed_sum3_def = annotator.typed_functions('sum3')", "_____no_output_____" ], [ "print_content(typed_sum3_def, '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32msum3\u001b[39;49;00m(a: Any, b: Any, c: Any) ->Any:\n \u001b[34mreturn\u001b[39;49;00m a + b + c\n" ] ], [ [ "A type `Any` makes it explicit that an object can, indeed, have any type; it will not be typechecked at runtime or statically. To some extent, this defeats the power of type checking; but it also preserves some of the type flexibility that many Python programmers enjoy. Besides `Any`, the `typing` module supports several additional ways to define ambiguous types; we will keep this in mind for a later exercise.", "_____no_output_____" ], [ "## Specifying and Checking Invariants\n\nBesides basic data types. 
We can also check several further properties of arguments: for instance, whether an argument can be negative, zero, or positive; whether one argument is smaller than another; or whether the result is the sum of two arguments – properties that cannot be expressed in a (Python) type.\n\nSuch properties are called *invariants*, as they hold across all invocations of a function. Specifically, invariants come as _pre_- and _postconditions_ – conditions that always hold at the beginning and at the end of a function. (There are also _data_ and _object_ invariants that express always-holding properties over the state of data or objects, but we do not consider these in this book.)", "_____no_output_____" ], [ "### Annotating Functions with Pre- and Postconditions\n\nThe classical means to specify pre- and postconditions is via _assertions_, which we have introduced in the [chapter on testing](Intro_Testing.ipynb). A precondition checks whether the arguments to a function satisfy the expected properties; a postcondition does the same for the result. We can express and check both using assertions as follows:", "_____no_output_____" ] ], [ [ "def my_sqrt_with_invariants(x):\n    assert x >= 0  # Precondition\n    \n    ...\n    \n    assert result * result == x  # Postcondition\n    return result", "_____no_output_____" ] ], [ [ "A nicer way, however, is to syntactically separate invariants from the function at hand. Using appropriate decorators, we could specify pre- and postconditions as follows:\n\n```python\n@precondition(lambda x: x >= 0)\n@postcondition(lambda return_value, x: return_value * return_value == x)\ndef my_sqrt_with_invariants(x):\n    # normal code without assertions\n    ...\n```\n\nThe decorators `@precondition` and `@postcondition` would run the given functions (specified as anonymous `lambda` functions) before and after the decorated function, respectively. If the functions return `False`, the condition is violated. `@precondition` gets the function arguments as arguments; `@postcondition` additionally gets the return value as its first argument.", "_____no_output_____" ], [ "It turns out that implementing such decorators is not hard at all. 
Our implementation builds on a [code snippet from StackOverflow](https://stackoverflow.com/questions/12151182/python-precondition-postcondition-for-member-function-how):", "_____no_output_____" ] ], [ [ "import functools", "_____no_output_____" ], [ "def condition(precondition=None, postcondition=None):\n def decorator(func):\n @functools.wraps(func) # preserves name, docstring, etc\n def wrapper(*args, **kwargs):\n if precondition is not None:\n assert precondition(*args, **kwargs), \"Precondition violated\"\n\n retval = func(*args, **kwargs) # call original function or method\n if postcondition is not None:\n assert postcondition(retval, *args, **kwargs), \"Postcondition violated\"\n\n return retval\n return wrapper\n return decorator\n\ndef precondition(check):\n return condition(precondition=check)\n\ndef postcondition(check):\n return condition(postcondition=check)", "_____no_output_____" ] ], [ [ "With these, we can now start decorating `my_sqrt()`:", "_____no_output_____" ] ], [ [ "@precondition(lambda x: x > 0)\ndef my_sqrt_with_precondition(x):\n return my_sqrt(x)", "_____no_output_____" ] ], [ [ "This catches arguments violating the precondition:", "_____no_output_____" ] ], [ [ "with ExpectError():\n my_sqrt_with_precondition(-1.0)", "Traceback (most recent call last):\n File \"<ipython-input-102-c02dc99b6c54>\", line 2, in <module>\n my_sqrt_with_precondition(-1.0)\n File \"<ipython-input-100-39ada1fd0b7e>\", line 6, in wrapper\n assert precondition(*args, **kwargs), \"Precondition violated\"\nAssertionError: Precondition violated (expected)\n" ] ], [ [ "Likewise, we can provide a postcondition:", "_____no_output_____" ] ], [ [ "EPSILON = 1e-5", "_____no_output_____" ], [ "@postcondition(lambda ret, x: ret * ret - x < EPSILON)\ndef my_sqrt_with_postcondition(x):\n return my_sqrt(x)", "_____no_output_____" ], [ "y = my_sqrt_with_postcondition(2.0)\ny", "_____no_output_____" ] ], [ [ "If we have a buggy implementation of $\\sqrt{x}$, this gets caught quickly:", "_____no_output_____" ] ], [ [ "@postcondition(lambda ret, x: ret * ret - x < EPSILON)\ndef buggy_my_sqrt_with_postcondition(x):\n return my_sqrt(x) + 0.1", "_____no_output_____" ], [ "with ExpectError():\n y = buggy_my_sqrt_with_postcondition(2.0)", "Traceback (most recent call last):\n File \"<ipython-input-107-38a36260c5b6>\", line 2, in <module>\n y = buggy_my_sqrt_with_postcondition(2.0)\n File \"<ipython-input-100-39ada1fd0b7e>\", line 10, in wrapper\n assert postcondition(retval, *args, **kwargs), \"Postcondition violated\"\nAssertionError: Postcondition violated (expected)\n" ] ], [ [ "While checking pre- and postconditions is a great way to catch errors, specifying them can be cumbersome. Let us try to see whether we can (again) _mine_ some of them.", "_____no_output_____" ], [ "## Mining Invariants\n\nTo _mine_ invariants, we can use the same tracking functionality as before; instead of saving values for individual variables, though, we now check whether the values satisfy specific _properties_ or not. For instance, if all values of `x` seen satisfy the condition `x > 0`, then we make `x > 0` an invariant of the function. If we see positive, zero, and negative values of `x`, though, then there is no property of `x` left to talk about.\n\nThe general idea is thus:\n\n1. Check all variable values observed against a set of predefined properties; and\n2. Keep only those properties that hold for all runs observed.", "_____no_output_____" ], [ "### Defining Properties\n\nWhat precisely do we mean by properties? 
Here is a small collection of value properties that would frequently be used in invariants. All these properties would be evaluated with the _metavariables_ `X`, `Y`, and `Z` (actually, any upper-case identifier) being replaced with the names of function parameters: ", "_____no_output_____" ] ], [ [ "INVARIANT_PROPERTIES = [\n \"X < 0\",\n \"X <= 0\",\n \"X > 0\",\n \"X >= 0\",\n \"X == 0\",\n \"X != 0\",\n]", "_____no_output_____" ] ], [ [ "When `my_sqrt(x)` is called as, say `my_sqrt(5.0)`, we see that `x = 5.0` holds. The above properties would then all be checked for `x`. Only the properties `X > 0`, `X >= 0`, and `X != 0` hold for the call seen; and hence `x > 0`, `x >= 0`, and `x != 0` would make potential preconditions for `my_sqrt(x)`.", "_____no_output_____" ], [ "We can check for many more properties such as relations between two arguments:", "_____no_output_____" ] ], [ [ "INVARIANT_PROPERTIES += [\n \"X == Y\",\n \"X > Y\",\n \"X < Y\",\n \"X >= Y\",\n \"X <= Y\",\n]", "_____no_output_____" ] ], [ [ "Types also can be checked using properties. For any function parameter `X`, only one of these will hold:", "_____no_output_____" ] ], [ [ "INVARIANT_PROPERTIES += [\n \"isinstance(X, bool)\",\n \"isinstance(X, int)\",\n \"isinstance(X, float)\",\n \"isinstance(X, list)\",\n \"isinstance(X, dict)\",\n]", "_____no_output_____" ] ], [ [ "We can check for arithmetic properties:", "_____no_output_____" ] ], [ [ "INVARIANT_PROPERTIES += [\n \"X == Y + Z\",\n \"X == Y * Z\",\n \"X == Y - Z\",\n \"X == Y / Z\",\n]", "_____no_output_____" ] ], [ [ "Here's relations over three values, a Python special:", "_____no_output_____" ] ], [ [ "INVARIANT_PROPERTIES += [\n \"X < Y < Z\",\n \"X <= Y <= Z\",\n \"X > Y > Z\",\n \"X >= Y >= Z\",\n]", "_____no_output_____" ] ], [ [ "Finally, we can also check for list or string properties. Again, this is just a tiny selection.", "_____no_output_____" ] ], [ [ "INVARIANT_PROPERTIES += [\n \"X == len(Y)\",\n \"X == sum(Y)\",\n \"X.startswith(Y)\",\n]", "_____no_output_____" ] ], [ [ "### Extracting Meta-Variables\n\nLet us first introduce a few _helper functions_ before we can get to the actual mining. `metavars()` extracts the set of meta-variables (`X`, `Y`, `Z`, etc.) from a property. To this end, we parse the property as a Python expression and then visit the identifiers.", "_____no_output_____" ] ], [ [ "def metavars(prop):\n metavar_list = []\n \n class ArgVisitor(ast.NodeVisitor):\n def visit_Name(self, node):\n if node.id.isupper():\n metavar_list.append(node.id)\n\n ArgVisitor().visit(ast.parse(prop))\n return metavar_list", "_____no_output_____" ], [ "assert metavars(\"X < 0\") == ['X']", "_____no_output_____" ], [ "assert metavars(\"X.startswith(Y)\") == ['X', 'Y']", "_____no_output_____" ], [ "assert metavars(\"isinstance(X, str)\") == ['X']", "_____no_output_____" ] ], [ [ "### Instantiating Properties\n\nTo produce a property as invariant, we need to be able to _instantiate_ it with variable names. The instantiation of `X > 0` with `X` being instantiated to `a`, for instance, gets us `a > 0`. 
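(Likewise, instantiating `X == Y + Z` over the parameters `a`, `b`, and `c` yields `a == b + c`.) 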
To this end, the function `instantiate_prop()` takes a property and a collection of variable names and instantiates the meta-variables left-to-right with the corresponding variables names in the collection.", "_____no_output_____" ] ], [ [ "def instantiate_prop_ast(prop, var_names):\n class NameTransformer(ast.NodeTransformer):\n def visit_Name(self, node):\n if node.id not in mapping:\n return node\n return ast.Name(id=mapping[node.id], ctx=ast.Load())\n \n meta_variables = metavars(prop)\n assert len(meta_variables) == len(var_names)\n\n mapping = {}\n for i in range(0, len(meta_variables)):\n mapping[meta_variables[i]] = var_names[i]\n\n prop_ast = ast.parse(prop, mode='eval')\n new_ast = NameTransformer().visit(prop_ast)\n\n return new_ast", "_____no_output_____" ], [ "def instantiate_prop(prop, var_names):\n prop_ast = instantiate_prop_ast(prop, var_names)\n prop_text = astor.to_source(prop_ast).strip()\n while prop_text.startswith('(') and prop_text.endswith(')'):\n prop_text = prop_text[1:-1]\n return prop_text", "_____no_output_____" ], [ "assert instantiate_prop(\"X > Y\", ['a', 'b']) == 'a > b'", "_____no_output_____" ], [ "assert instantiate_prop(\"X.startswith(Y)\", ['x', 'y']) == 'x.startswith(y)'", "_____no_output_____" ] ], [ [ "### Evaluating Properties\n\nTo actually _evaluate_ properties, we do not need to instantiate them. Instead, we simply convert them into a boolean function, using `lambda`:", "_____no_output_____" ] ], [ [ "def prop_function_text(prop):\n return \"lambda \" + \", \".join(metavars(prop)) + \": \" + prop\n\ndef prop_function(prop):\n return eval(prop_function_text(prop))", "_____no_output_____" ] ], [ [ "Here is a simple example:", "_____no_output_____" ] ], [ [ "prop_function_text(\"X > Y\")", "_____no_output_____" ], [ "p = prop_function(\"X > Y\")\np(100, 1)", "_____no_output_____" ], [ "p(1, 100)", "_____no_output_____" ] ], [ [ "### Checking Invariants\n\nTo extract invariants from an execution, we need to check them on all possible instantiations of arguments. If the function to be checked has two arguments `a` and `b`, we instantiate the property `X < Y` both as `a < b` and `b < a` and check each of them.", "_____no_output_____" ], [ "To get all combinations, we use the Python `permutations()` function:", "_____no_output_____" ] ], [ [ "import itertools", "_____no_output_____" ], [ "for combination in itertools.permutations([1.0, 2.0, 3.0], 2):\n print(combination)", "(1.0, 2.0)\n(1.0, 3.0)\n(2.0, 1.0)\n(2.0, 3.0)\n(3.0, 1.0)\n(3.0, 2.0)\n" ] ], [ [ "The function `true_property_instantiations()` takes a property and a list of tuples (`var_name`, `value`). It then produces all instantiations of the property with the given values and returns those that evaluate to True.", "_____no_output_____" ] ], [ [ "def true_property_instantiations(prop, vars_and_values, log=False):\n instantiations = set()\n p = prop_function(prop)\n\n len_metavars = len(metavars(prop))\n for combination in itertools.permutations(vars_and_values, len_metavars):\n args = [value for var_name, value in combination]\n var_names = [var_name for var_name, value in combination]\n \n try:\n result = p(*args)\n except:\n result = None\n\n if log:\n print(prop, combination, result)\n if result:\n instantiations.add((prop, tuple(var_names)))\n \n return instantiations", "_____no_output_____" ] ], [ [ "Here is an example. 
If `x == -1` and `y == 1`, the property `X < Y` holds for `x < y`, but not for `y < x`:", "_____no_output_____" ] ], [ [ "invs = true_property_instantiations(\"X < Y\", [('x', -1), ('y', 1)], log=True)\ninvs", "X < Y (('x', -1), ('y', 1)) True\nX < Y (('y', 1), ('x', -1)) False\n" ] ], [ [ "The instantiation retrieves the short form:", "_____no_output_____" ] ], [ [ "for prop, var_names in invs:\n print(instantiate_prop(prop, var_names))", "x < y\n" ] ], [ [ "Likewise, with values for `x` and `y` as above, the property `X < 0` only holds for `x`, but not for `y`:", "_____no_output_____" ] ], [ [ "invs = true_property_instantiations(\"X < 0\", [('x', -1), ('y', 1)], log=True)", "X < 0 (('x', -1),) True\nX < 0 (('y', 1),) False\n" ], [ "for prop, var_names in invs:\n print(instantiate_prop(prop, var_names))", "x < 0\n" ] ], [ [ "### Extracting Invariants\n\nLet us now run the above invariant extraction on function arguments and return values as observed during a function execution. To this end, we extend the `CallTracker` class into an `InvariantTracker` class, which automatically computes invariants for all functions and all calls observed during tracking.", "_____no_output_____" ], [ "By default, an `InvariantTracker` uses the properties as defined above; however, one can specify alternate sets of properties.", "_____no_output_____" ] ], [ [ "class InvariantTracker(CallTracker):\n def __init__(self, props=None, **kwargs):\n if props is None:\n props = INVARIANT_PROPERTIES\n\n self.props = props\n super().__init__(**kwargs)", "_____no_output_____" ] ], [ [ "The key method of the `InvariantTracker` is the `invariants()` method. This iterates over the calls observed and checks which properties hold. Only the intersection of properties – that is, the set of properties that hold for all calls – is preserved, and eventually returned. The special variable `return_value` is set to hold the return value.", "_____no_output_____" ] ], [ [ "RETURN_VALUE = 'return_value'", "_____no_output_____" ], [ "class InvariantTracker(InvariantTracker):\n def invariants(self, function_name=None):\n if function_name is None:\n return {function_name: self.invariants(function_name) for function_name in self.calls()}\n \n invariants = None\n for variables, return_value in self.calls(function_name):\n vars_and_values = variables + [(RETURN_VALUE, return_value)]\n \n s = set()\n for prop in self.props:\n s |= true_property_instantiations(prop, vars_and_values, self._log)\n if invariants is None:\n invariants = s\n else:\n invariants &= s\n\n return invariants", "_____no_output_____" ] ], [ [ "Here's an example of how to use `invariants()`. We run the tracker on a small set of calls.", "_____no_output_____" ] ], [ [ "with InvariantTracker() as tracker:\n y = my_sqrt(25.0)\n y = my_sqrt(10.0)\n\ntracker.calls()", "_____no_output_____" ] ], [ [ "The `invariants()` method produces a set of properties that hold for the observed runs, together with their instantiations over function arguments.", "_____no_output_____" ] ], [ [ "invs = tracker.invariants('my_sqrt')\ninvs", "_____no_output_____" ] ], [ [ "As before, the actual instantiations are easier to read:\n", "_____no_output_____" ] ], [ [ "def pretty_invariants(invariants):\n props = []\n for (prop, var_names) in invariants:\n props.append(instantiate_prop(prop, var_names))\n return sorted(props)", "_____no_output_____" ], [ "pretty_invariants(invs)", "_____no_output_____" ] ], [ [ "We see that the both `x` and the return value have a `float` type. 
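(One can spot-check such a mined property directly: evaluating `prop_function('isinstance(X, float)')(my_sqrt(25.0))` yields `True`.) 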
We also see that both are always greater than zero. These are properties that may make useful pre- and postconditions, notably for symbolic analysis.", "_____no_output_____" ], [ "However, there's also an invariant which does _not_ universally hold, namely `return_value <= x`, as the following example shows:", "_____no_output_____" ] ], [ [ "my_sqrt(0.01)", "_____no_output_____" ] ], [ [ "Clearly, 0.1 > 0.01 holds. This is a case of us not learning from sufficiently diverse inputs. As soon as we have a call including `x = 0.1`, though, the invariant `return_value <= x` is eliminated:", "_____no_output_____" ] ], [ [ "with InvariantTracker() as tracker:\n y = my_sqrt(25.0)\n y = my_sqrt(10.0)\n y = my_sqrt(0.01)\n \npretty_invariants(tracker.invariants('my_sqrt'))", "_____no_output_____" ] ], [ [ "We will discuss later how to ensure sufficient diversity in inputs. (Hint: This involves test generation.)", "_____no_output_____" ], [ "Let us try out our invariant tracker on `sum3()`. We see that all types are well-defined; the properties that all arguments are non-zero, however, is specific to the calls observed.", "_____no_output_____" ] ], [ [ "with InvariantTracker() as tracker:\n y = sum3(1, 2, 3)\n y = sum3(-4, -5, -6)\n \npretty_invariants(tracker.invariants('sum3'))", "_____no_output_____" ] ], [ [ "If we invoke `sum3()` with strings instead, we get different invariants. Notably, we obtain the postcondition that the return value starts with the value of `a` – a universal postcondition if strings are used.", "_____no_output_____" ] ], [ [ "with InvariantTracker() as tracker:\n y = sum3('a', 'b', 'c')\n y = sum3('f', 'e', 'd')\n \npretty_invariants(tracker.invariants('sum3'))", "_____no_output_____" ] ], [ [ "If we invoke `sum3()` with both strings and numbers (and zeros, too), there are no properties left that would hold across all calls. That's the price of flexibility.", "_____no_output_____" ] ], [ [ "with InvariantTracker() as tracker:\n y = sum3('a', 'b', 'c')\n y = sum3('c', 'b', 'a')\n y = sum3(-4, -5, -6)\n y = sum3(0, 0, 0)\n \npretty_invariants(tracker.invariants('sum3'))", "_____no_output_____" ] ], [ [ "### Converting Mined Invariants to Annotations\n\nAs with types, above, we would like to have some functionality where we can add the mined invariants as annotations to existing functions. To this end, we introduce the `InvariantAnnotator` class, extending `InvariantTracker`.", "_____no_output_____" ], [ "We start with a helper method. `params()` returns a comma-separated list of parameter names as observed during calls.", "_____no_output_____" ] ], [ [ "class InvariantAnnotator(InvariantTracker):\n def params(self, function_name):\n arguments, return_value = self.calls(function_name)[0]\n return \", \".join(arg_name for (arg_name, arg_value) in arguments)", "_____no_output_____" ], [ "with InvariantAnnotator() as annotator:\n y = my_sqrt(25.0)\n y = sum3(1, 2, 3)", "_____no_output_____" ], [ "annotator.params('my_sqrt')", "_____no_output_____" ], [ "annotator.params('sum3')", "_____no_output_____" ] ], [ [ "Now for the actual annotation. 
`preconditions()` returns the preconditions from the mined invariants (i.e., those propertes that do not depend on the return value) as a string with annotations:", "_____no_output_____" ] ], [ [ "class InvariantAnnotator(InvariantAnnotator):\n def preconditions(self, function_name):\n conditions = []\n\n for inv in pretty_invariants(self.invariants(function_name)):\n if inv.find(RETURN_VALUE) >= 0:\n continue # Postcondition\n\n cond = \"@precondition(lambda \" + self.params(function_name) + \": \" + inv + \")\"\n conditions.append(cond)\n\n return conditions", "_____no_output_____" ], [ "with InvariantAnnotator() as annotator:\n y = my_sqrt(25.0)\n y = my_sqrt(0.01)\n y = sum3(1, 2, 3)", "_____no_output_____" ], [ "annotator.preconditions('my_sqrt')", "_____no_output_____" ] ], [ [ "`postconditions()` does the same for postconditions:", "_____no_output_____" ] ], [ [ "class InvariantAnnotator(InvariantAnnotator):\n def postconditions(self, function_name):\n conditions = []\n\n for inv in pretty_invariants(self.invariants(function_name)):\n if inv.find(RETURN_VALUE) < 0:\n continue # Precondition\n\n cond = (\"@postcondition(lambda \" + \n RETURN_VALUE + \", \" + self.params(function_name) + \": \" + inv + \")\")\n conditions.append(cond)\n\n return conditions", "_____no_output_____" ], [ "with InvariantAnnotator() as annotator:\n y = my_sqrt(25.0)\n y = my_sqrt(0.01)\n y = sum3(1, 2, 3)", "_____no_output_____" ], [ "annotator.postconditions('my_sqrt')", "_____no_output_____" ] ], [ [ "With these, we can take a function and add both pre- and postconditions as annotations:", "_____no_output_____" ] ], [ [ "class InvariantAnnotator(InvariantAnnotator):\n def functions_with_invariants(self):\n functions = \"\"\n for function_name in self.invariants():\n try:\n function = self.function_with_invariants(function_name)\n except KeyError:\n continue\n functions += function\n return functions\n\n def function_with_invariants(self, function_name):\n function = globals()[function_name] # Can throw KeyError\n source = inspect.getsource(function)\n return \"\\n\".join(self.preconditions(function_name) + \n self.postconditions(function_name)) + '\\n' + source", "_____no_output_____" ] ], [ [ "Here comes `function_with_invariants()` in all its glory:", "_____no_output_____" ] ], [ [ "with InvariantAnnotator() as annotator:\n y = my_sqrt(25.0)\n y = my_sqrt(0.01)\n y = sum3(1, 2, 3)", "_____no_output_____" ], [ "print_content(annotator.function_with_invariants('my_sqrt'), '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: \u001b[36misinstance\u001b[39;49;00m(x, \u001b[36mfloat\u001b[39;49;00m))\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x >= \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mfloat\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: return_value != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: return_value > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m 
return_value, x: return_value >= \u001b[34m0\u001b[39;49;00m)\n\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x):\n    \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n    approx = \u001b[36mNone\u001b[39;49;00m\n    guess = x / \u001b[34m2\u001b[39;49;00m\n    \u001b[34mwhile\u001b[39;49;00m approx != guess:\n        approx = guess\n        guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n    \u001b[34mreturn\u001b[39;49;00m approx\n" ] ], [ [ "Quite a lot of invariants, isn't it? Further below (and in the exercises), we will discuss how to focus on the most relevant properties.", "_____no_output_____" ], [ "### Some Examples\n\nHere's another example. `list_length()` recursively computes the length of a Python list. Let us see whether we can mine its invariants:", "_____no_output_____" ] ], [ [ "def list_length(L):\n    if L == []:\n        length = 0\n    else:\n        length = 1 + list_length(L[1:])\n    return length", "_____no_output_____" ], [ "with InvariantAnnotator() as annotator:\n    length = list_length([1, 2, 3])\n\nprint_content(annotator.functions_with_invariants(), '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m L: L != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m L: \u001b[36misinstance\u001b[39;49;00m(L, \u001b[36mlist\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, L: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, L: return_value == \u001b[36mlen\u001b[39;49;00m(L))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, L: return_value >= \u001b[34m0\u001b[39;49;00m)\n\u001b[34mdef\u001b[39;49;00m \u001b[32mlist_length\u001b[39;49;00m(L):\n    \u001b[34mif\u001b[39;49;00m L == []:\n        length = \u001b[34m0\u001b[39;49;00m\n    \u001b[34melse\u001b[39;49;00m:\n        length = \u001b[34m1\u001b[39;49;00m + list_length(L[\u001b[34m1\u001b[39;49;00m:])\n    \u001b[34mreturn\u001b[39;49;00m length\n" ] ], [ [ "Almost all these properties are relevant – except for the very first: `L != 0` is trivially true, as any list, empty or not, compares unequal to `0`. 
Of course, the reason we can detect that the return value is equal to `len(L)` is that `X == len(Y)` is part of the list of properties to be checked.", "_____no_output_____" ], [ "The next example is a very simple function:", "_____no_output_____" ] ], [ [ "def sum2(a, b):\n    return a + b", "_____no_output_____" ], [ "with InvariantAnnotator() as annotator:\n    sum2(31, 45)\n    sum2(0, 0)\n    sum2(-1, -5)", "_____no_output_____" ] ], [ [ "The invariants all capture the relationship between `a`, `b`, and the return value as `return_value == a + b` in all its variations.", "_____no_output_____" ] ], [ [ "print_content(annotator.functions_with_invariants(), '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a == return_value - b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b == return_value - a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == a + b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == b + a)\n\u001b[34mdef\u001b[39;49;00m \u001b[32msum2\u001b[39;49;00m(a, b):\n    \u001b[34mreturn\u001b[39;49;00m a + b\n" ] ], [ [ "If we have a function without a return value, the return value is `None` and we can only mine preconditions. (Well, we get a \"postcondition\" that the return value is non-zero, which holds for `None`).", "_____no_output_____" ] ], [ [ "def print_sum(a, b):\n    print(a + b)", "_____no_output_____" ], [ "with InvariantAnnotator() as annotator:\n    print_sum(31, 45)\n    print_sum(0, 0)\n    print_sum(-1, -5)", "76\n0\n-6\n" ], [ "print_content(annotator.functions_with_invariants(), '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value != \u001b[34m0\u001b[39;49;00m)\n\u001b[34mdef\u001b[39;49;00m \u001b[32mprint_sum\u001b[39;49;00m(a, b):\n    \u001b[34mprint\u001b[39;49;00m(a + b)\n" ] ], [ [ "### Checking Specifications\n\nA function with invariants, as above, can be fed into the Python interpreter, such that all pre- and postconditions are checked. 
We create a function `my_sqrt_annotated()` which includes all the invariants mined above.", "_____no_output_____" ] ], [ [ "with InvariantAnnotator() as annotator:\n y = my_sqrt(25.0)\n y = my_sqrt(0.01)", "_____no_output_____" ], [ "my_sqrt_def = annotator.functions_with_invariants()\nmy_sqrt_def = my_sqrt_def.replace('my_sqrt', 'my_sqrt_annotated')", "_____no_output_____" ], [ "print_content(my_sqrt_def, '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: \u001b[36misinstance\u001b[39;49;00m(x, \u001b[36mfloat\u001b[39;49;00m))\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x >= \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mfloat\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: return_value != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: return_value > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: return_value >= \u001b[34m0\u001b[39;49;00m)\n\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt_annotated\u001b[39;49;00m(x):\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ], [ "exec(my_sqrt_def)", "_____no_output_____" ] ], [ [ "The \"annotated\" version checks against invalid arguments – or more precisely, against arguments with properties that have not been observed yet:", "_____no_output_____" ] ], [ [ "with ExpectError():\n my_sqrt_annotated(-1.0)", "Traceback (most recent call last):\n File \"<ipython-input-170-c3c5c372ccd1>\", line 2, in <module>\n my_sqrt_annotated(-1.0)\n File \"<ipython-input-100-39ada1fd0b7e>\", line 8, in wrapper\n retval = func(*args, **kwargs) # call original function or method\n File \"<ipython-input-100-39ada1fd0b7e>\", line 8, in wrapper\n retval = func(*args, **kwargs) # call original function or method\n File \"<ipython-input-100-39ada1fd0b7e>\", line 6, in wrapper\n assert precondition(*args, **kwargs), \"Precondition violated\"\nAssertionError: Precondition violated (expected)\n" ] ], [ [ "This is in contrast to the original version, which just hangs on negative values:", "_____no_output_____" ] ], [ [ "with ExpectTimeout(1):\n my_sqrt(-1.0)", "Traceback (most recent call last):\n File \"<ipython-input-171-afc7add26ad6>\", line 2, in <module>\n my_sqrt(-1.0)\n File \"<ipython-input-5-47185ad159a1>\", line 7, in my_sqrt\n guess = (approx + x / approx) / 2\n File \"<ipython-input-5-47185ad159a1>\", line 7, in my_sqrt\n guess = (approx + x / approx) / 2\n File \"ExpectError.ipynb\", line 59, in check_time\nTimeoutError (expected)\n" ] ], [ [ "If we make changes to the function definition such that the properties of the return value change, such _regressions_ are caught as violations of the postconditions. 
Let us illustrate this by simply inverting the result, and return $-2$ as square root of 4.", "_____no_output_____" ] ], [ [ "my_sqrt_def = my_sqrt_def.replace('my_sqrt_annotated', 'my_sqrt_negative')\nmy_sqrt_def = my_sqrt_def.replace('return approx', 'return -approx')", "_____no_output_____" ], [ "print_content(my_sqrt_def, '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: \u001b[36misinstance\u001b[39;49;00m(x, \u001b[36mfloat\u001b[39;49;00m))\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m x: x >= \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mfloat\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: return_value != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: return_value > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, x: return_value >= \u001b[34m0\u001b[39;49;00m)\n\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt_negative\u001b[39;49;00m(x):\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m -approx\n" ], [ "exec(my_sqrt_def)", "_____no_output_____" ] ], [ [ "Technically speaking, $-2$ _is_ a square root of 4, since $(-2)^2 = 4$ holds. Yet, such a change may be unexpected by callers of `my_sqrt()`, and hence, this would be caught with the first call:", "_____no_output_____" ] ], [ [ "with ExpectError():\n my_sqrt_negative(2.0)", "Traceback (most recent call last):\n File \"<ipython-input-175-c80e4295dbf8>\", line 2, in <module>\n my_sqrt_negative(2.0)\n File \"<ipython-input-100-39ada1fd0b7e>\", line 8, in wrapper\n retval = func(*args, **kwargs) # call original function or method\n File \"<ipython-input-100-39ada1fd0b7e>\", line 8, in wrapper\n retval = func(*args, **kwargs) # call original function or method\n File \"<ipython-input-100-39ada1fd0b7e>\", line 8, in wrapper\n retval = func(*args, **kwargs) # call original function or method\n [Previous line repeated 4 more times]\n File \"<ipython-input-100-39ada1fd0b7e>\", line 10, in wrapper\n assert postcondition(retval, *args, **kwargs), \"Postcondition violated\"\nAssertionError: Postcondition violated (expected)\n" ] ], [ [ "We see how pre- and postconditions, as well as types, can serve as *oracles* during testing. In particular, once we have mined them for a set of functions, we can check them again and again with test generators – especially after code changes. The more checks we have, and the more specific they are, the more likely it is we can detect unwanted effects of changes.", "_____no_output_____" ], [ "## Mining Specifications from Generated Tests\n\nMined specifications can only be as good as the executions they were mined from. 
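This is a fundamental limitation of dynamic invariant mining: the properties are _likely_ invariants, supported by the executions seen so far, not proven ones. 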
If we only see a single call to, say, `sum2()`, we will be faced with several mined pre- and postconditions that _overspecialize_ towards the values seen:", "_____no_output_____" ] ], [ [ "def sum2(a, b):\n return a + b", "_____no_output_____" ], [ "with InvariantAnnotator() as annotator:\n y = sum2(2, 2)\nprint_content(annotator.functions_with_invariants(), '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a <= b)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a == b)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a >= \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a >= b)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b <= a)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b == a)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b >= \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b >= a)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a < return_value)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a <= b <= return_value)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a <= return_value)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a == return_value - b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a == return_value / b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b < return_value)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b <= a <= return_value)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b <= return_value)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b == return_value - a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b == return_value / a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value 
== a * b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == a + b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == b * a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == b + a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value > \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value > a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value > b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= a >= b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= b >= a)\n\u001b[34mdef\u001b[39;49;00m \u001b[32msum2\u001b[39;49;00m(a, b):\n \u001b[34mreturn\u001b[39;49;00m a + b\n" ] ], [ [ "The mined precondition `a == b`, for instance, only holds for the single call observed; the same holds for the mined postcondition `return_value == a * b`. Yet, `sum2()` can obviously be successfully called with other values that do not satisfy these conditions.", "_____no_output_____" ], [ "To get out of this trap, we have to _learn from more and more diverse runs_. If we have a few more calls of `sum2()`, we see how the set of invariants quickly gets smaller:", "_____no_output_____" ] ], [ [ "with InvariantAnnotator() as annotator:\n length = sum2(1, 2)\n length = sum2(-1, -2)\n length = sum2(0, 0)\n\nprint_content(annotator.functions_with_invariants(), '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a == return_value - b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b == return_value - a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == a + b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == b + a)\n\u001b[34mdef\u001b[39;49;00m \u001b[32msum2\u001b[39;49;00m(a, b):\n \u001b[34mreturn\u001b[39;49;00m a + b\n" ] ], [ [ "But where to we get such diverse runs from? This is the job of generating software tests. 
A simple grammar for calls of `sum2()` will easily resolve the problem.", "_____no_output_____" ] ], [ [ "from GrammarFuzzer import GrammarFuzzer # minor dependency\nfrom Grammars import is_valid_grammar, crange, convert_ebnf_grammar # minor dependency", "_____no_output_____" ], [ "SUM2_EBNF_GRAMMAR = {\n \"<start>\": [\"<sum2>\"],\n \"<sum2>\": [\"sum2(<int>, <int>)\"],\n \"<int>\": [\"<_int>\"],\n \"<_int>\": [\"(-)?<leaddigit><digit>*\", \"0\"],\n \"<leaddigit>\": crange('1', '9'),\n \"<digit>\": crange('0', '9')\n}\n\nassert is_valid_grammar(SUM2_EBNF_GRAMMAR)", "_____no_output_____" ], [ "sum2_grammar = convert_ebnf_grammar(SUM2_EBNF_GRAMMAR)", "_____no_output_____" ], [ "sum2_fuzzer = GrammarFuzzer(sum2_grammar)\n[sum2_fuzzer.fuzz() for i in range(10)]", "_____no_output_____" ], [ "with InvariantAnnotator() as annotator:\n for i in range(10):\n eval(sum2_fuzzer.fuzz())\n\nprint_content(annotator.function_with_invariants('sum2'), '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a == return_value - b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b == return_value - a)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mint\u001b[39;49;00m))\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value != \u001b[34m0\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == a + b)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == b + a)\n\u001b[34mdef\u001b[39;49;00m \u001b[32msum2\u001b[39;49;00m(a, b):\n \u001b[34mreturn\u001b[39;49;00m a + b\n" ] ], [ [ "But then, writing tests (or a test driver) just to derive a set of pre- and postconditions may possibly be too much effort – in particular, since tests can easily be derived from given pre- and postconditions in the first place. Hence, it would be wiser to first specify invariants and then let test generators or program provers do the job.", "_____no_output_____" ], [ "Also, an API grammar, such as above, will have to be set up such that it actually respects preconditions – in our case, we invoke `sqrt()` with positive numbers only, already assuming its precondition. In some way, one thus needs a specification (a model, a grammar) to mine another specification – a chicken-and-egg problem.", "_____no_output_____" ], [ "However, there is one way out of this problem: If one can automatically generate tests at the system level, then one has an _infinite source of executions_ to learn invariants from. In each of these executions, all functions would be called with values that satisfy the (implicit) precondition, allowing us to mine invariants for these functions. This holds, because at the system level, invalid inputs must be rejected by the system in the first place. 
The meaningful precondition at the system level, ensuring that only valid inputs get through, thus gets broken down into a multitude of meaningful preconditions (and subsequent postconditions) at the function level.", "_____no_output_____" ], [ "The big requirement for this, though, is that one needs good test generators at the system level. In [the next part](05_Domain-Specific_Fuzzing.ipynb), we will discuss how to automatically generate tests for a variety of domains, from configuration to graphical user interfaces.", "_____no_output_____" ], [ "## Synopsis\n\nThis chapter provides two classes that automatically extract specifications from a function and a set of inputs:\n\n* `TypeAnnotator` for _types_, and\n* `InvariantAnnotator` for _pre-_ and _postconditions_.\n\nBoth work by _observing_ a function and its invocations within a `with` clause. Here is an example for the type annotator:", "_____no_output_____" ] ], [ [ "def sum2(a, b):\n return a + b", "_____no_output_____" ], [ "with TypeAnnotator() as type_annotator:\n sum2(1, 2)\n sum2(-4, -5)\n sum2(0, 0)", "_____no_output_____" ] ], [ [ "The `typed_functions()` method will return a representation of `sum2()` annotated with types observed during execution.", "_____no_output_____" ] ], [ [ "print(type_annotator.typed_functions())", "def sum2(a: int, b: int) ->int:\n return a + b\n\n" ] ], [ [ "The invariant annotator works in a similar fashion:", "_____no_output_____" ] ], [ [ "with InvariantAnnotator() as inv_annotator:\n sum2(1, 2)\n sum2(-4, -5)\n sum2(0, 0)", "_____no_output_____" ] ], [ [ "The `functions_with_invariants()` method will return a representation of `sum2()` annotated with inferred pre- and postconditions that all hold for the observed values.", "_____no_output_____" ] ], [ [ "print(inv_annotator.functions_with_invariants())", "@precondition(lambda a, b: isinstance(a, int))\n@precondition(lambda a, b: isinstance(b, int))\n@postcondition(lambda return_value, a, b: a == return_value - b)\n@postcondition(lambda return_value, a, b: b == return_value - a)\n@postcondition(lambda return_value, a, b: isinstance(return_value, int))\n@postcondition(lambda return_value, a, b: return_value == a + b)\n@postcondition(lambda return_value, a, b: return_value == b + a)\ndef sum2(a, b):\n return a + b\n\n" ] ], [ [ "Such type specifications and invariants can be helpful as _oracles_ (to detect deviations from a given set of runs) as well as for all kinds of _symbolic code analyses_. The chapter gives details on how to customize the properties checked for.", "_____no_output_____" ], [ "## Lessons Learned\n\n* Type annotations and explicit invariants allow for _checking_ arguments and results for expected data types and other properties.\n* One can automatically _mine_ data types and invariants by observing arguments and results at runtime.\n* The quality of mined invariants depends on the diversity of values observed during executions; this variety can be increased by generating tests.", "_____no_output_____" ], [ "## Next Steps\n\nThis chapter concludes the [part on semantical fuzzing techniques](04_Semantical_Fuzzing.ipynb). In the next part, we will explore [domain-specific fuzzing techniques](05_Domain-Specific_Fuzzing.ipynb) from configurations and APIs to graphical user interfaces.", "_____no_output_____" ], [ "## Background\n\nThe [DAIKON dynamic invariant detector](https://plse.cs.washington.edu/daikon/) can be considered the mother of function specification miners. 
Continuously maintained and extended for more than 20 years, it mines likely invariants in the style of this chapter for a variety of languages, including C, C++, C#, Eiffel, F#, Java, Perl, and Visual Basic. On top of the functionality discussed above, it holds a rich catalog of patterns for likely invariants, supports data invariants, can eliminate invariants that are implied by others, and determines statistical confidence to disregard unlikely invariants. The corresponding paper \cite{Ernst2001} is one of the seminal and most-cited papers of Software Engineering. A multitude of works building on DAIKON and on invariant detection have been published; see this [curated list](http://plse.cs.washington.edu/daikon/pubs/) for details.", "_____no_output_____" ], [ "The interaction between test generators and invariant detection is already discussed in \cite{Ernst2001} (incidentally also using grammars). The Eclat tool \cite{Pacheco2005} is a model example of tight interaction between a unit-level test generator and DAIKON-style invariant mining, where the mined invariants are used to produce oracles and to systematically guide the test generator towards fault-revealing inputs.", "_____no_output_____" ], [ "Mining specifications is not restricted to pre- and postconditions. The paper \"Mining Specifications\" \cite{Ammons2002} is another classic in the field, learning state protocols from executions. Grammar mining, as described in [our chapter with the same name](GrammarMiner.ipynb), can also be seen as a specification mining approach, this time learning specifications for input formats.", "_____no_output_____" ], [ "When it comes to adding type annotations to existing code, the blog post [\"The state of type hints in Python\"](https://www.bernat.tech/the-state-of-type-hints-in-python/) gives a great overview of how Python type hints can be used and checked. To add type annotations, there are two important tools available that also implement our above approach:\n\n* [MonkeyType](https://instagram-engineering.com/let-your-code-type-hint-itself-introducing-open-source-monkeytype-a855c7284881) implements the above approach of tracing executions and annotating Python 3 arguments, returns, and variables with type hints.\n* [PyAnnotate](https://github.com/dropbox/pyannotate) does a similar job, focusing on code in Python 2. It does not produce Python 3-style annotations, but instead produces annotations as comments that can be processed by static type checkers.\n\nThese tools have been created by engineers at Facebook and Dropbox, respectively, assisting them in checking millions of lines of code for type issues.", "_____no_output_____" ], [ "## Exercises\n\nOur code for mining types and invariants is in no way complete. There are dozens of ways to extend our implementations, some of which we discuss in the exercises.", "_____no_output_____" ], [ "### Exercise 1: Union Types\n\nThe Python `typing` module allows expressing that an argument can have multiple types. For `my_sqrt(x)`, this allows expressing that `x` can be an `int` or a `float`:", "_____no_output_____" ] ], [ [ "from typing import Union, Optional", "_____no_output_____" ], [ "def my_sqrt_with_union_type(x: Union[int, float]) -> float:\n    ...", "_____no_output_____" ] ], [ [ "Extend the `TypeAnnotator` such that it supports union types for arguments and return values. Use `Optional[X]` as a shorthand for `Union[X, None]`.", "_____no_output_____" ], [ "**Solution.** Left to the reader. 
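As a starting point, here is a rough, hypothetical sketch – not a reference solution – of how a *set* of observed types could be rendered as a single annotation; the helper `union_type_string()` is made up for this sketch:\n\n```python\ndef union_type_string(types):\n    # Render a set of observed types as one annotation string (sketch)\n    names = sorted(t.__name__ for t in types)\n    if len(names) == 1:\n        return names[0]\n    if 'NoneType' in names and len(names) == 2:\n        # Union[X, None] is conventionally written Optional[X]\n        other = [n for n in names if n != 'NoneType'][0]\n        return 'Optional[%s]' % other\n    return 'Union[%s]' % ', '.join(names)\n\nassert union_type_string({int}) == 'int'\nassert union_type_string({int, float}) == 'Union[float, int]'\nassert union_type_string({str, type(None)}) == 'Optional[str]'\n```\n\n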
Hint: extend `type_string()`.", "_____no_output_____" ], [ "### Exercise 2: Types for Local Variables\n\nIn Python, one cannot only annotate arguments with types, but actually also local and global variables – for instance, `approx` and `guess` in our `my_sqrt()` implementation:", "_____no_output_____" ] ], [ [ "def my_sqrt_with_local_types(x: Union[int, float]) -> float:\n \"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\n approx: Optional[float] = None\n guess: float = x / 2\n while approx != guess:\n approx: float = guess\n guess: float = (approx + x / approx) / 2\n return approx", "_____no_output_____" ] ], [ [ "Extend the `TypeAnnotator` such that it also annotates local variables with types. Search the function AST for assignments, determine the type of the assigned value, and make it an annotation on the left hand side.", "_____no_output_____" ], [ "**Solution.** Left to the reader.", "_____no_output_____" ], [ "### Exercise 3: Verbose Invariant Checkers\n\nOur implementation of invariant checkers does not make it clear for the user which pre-/postcondition failed.", "_____no_output_____" ] ], [ [ "@precondition(lambda s: len(s) > 0)\ndef remove_first_char(s):\n return s[1:]", "_____no_output_____" ], [ "with ExpectError():\n remove_first_char('')", "Traceback (most recent call last):\n File \"<ipython-input-193-dda18930f6db>\", line 2, in <module>\n remove_first_char('')\n File \"<ipython-input-100-39ada1fd0b7e>\", line 6, in wrapper\n assert precondition(*args, **kwargs), \"Precondition violated\"\nAssertionError: Precondition violated (expected)\n" ] ], [ [ "The following implementation adds an optional `doc` keyword argument which is printed if the invariant is violated:", "_____no_output_____" ] ], [ [ "def condition(precondition=None, postcondition=None, doc='Unknown'):\n def decorator(func):\n @functools.wraps(func) # preserves name, docstring, etc\n def wrapper(*args, **kwargs):\n if precondition is not None:\n assert precondition(*args, **kwargs), \"Precondition violated: \" + doc\n\n retval = func(*args, **kwargs) # call original function or method\n if postcondition is not None:\n assert postcondition(retval, *args, **kwargs), \"Postcondition violated: \" + doc\n\n return retval\n return wrapper\n return decorator\n\ndef precondition(check, **kwargs):\n return condition(precondition=check, doc=kwargs.get('doc', 'Unknown'))\n\ndef postcondition(check, **kwargs):\n return condition(postcondition=check, doc=kwargs.get('doc', 'Unknown'))", "_____no_output_____" ], [ "@precondition(lambda s: len(s) > 0, doc=\"len(s) > 0\")\ndef remove_first_char(s):\n return s[1:]\n\nremove_first_char('abc')", "_____no_output_____" ], [ "with ExpectError():\n remove_first_char('')", "Traceback (most recent call last):\n File \"<ipython-input-196-dda18930f6db>\", line 2, in <module>\n remove_first_char('')\n File \"<ipython-input-194-683ee268305f>\", line 6, in wrapper\n assert precondition(*args, **kwargs), \"Precondition violated: \" + doc\nAssertionError: Precondition violated: len(s) > 0 (expected)\n" ] ], [ [ "Extend `InvariantAnnotator` such that it includes the conditions in the generated pre- and postconditions.", "_____no_output_____" ], [ "**Solution.** Here's a simple solution:", "_____no_output_____" ] ], [ [ "class InvariantAnnotator(InvariantAnnotator):\n def preconditions(self, function_name):\n conditions = []\n\n for inv in pretty_invariants(self.invariants(function_name)):\n if inv.find(RETURN_VALUE) >= 0:\n continue # Postcondition\n\n cond = 
\"@precondition(lambda \" + self.params(function_name) + \": \" + inv + ', doc=' + repr(inv) + \")\"\n conditions.append(cond)\n\n return conditions\n\nclass InvariantAnnotator(InvariantAnnotator):\n def postconditions(self, function_name):\n conditions = []\n\n for inv in pretty_invariants(self.invariants(function_name)):\n if inv.find(RETURN_VALUE) < 0:\n continue # Precondition\n\n cond = (\"@postcondition(lambda \" + \n RETURN_VALUE + \", \" + self.params(function_name) + \": \" + inv + ', doc=' + repr(inv) + \")\")\n conditions.append(cond)\n\n return conditions", "_____no_output_____" ] ], [ [ "The resulting annotations are harder to read, but easier to diagnose:", "_____no_output_____" ] ], [ [ "with InvariantAnnotator() as annotator:\n y = sum2(2, 2)\nprint_content(annotator.functions_with_invariants(), '.py')", "\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a != \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma != 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a <= b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma <= b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a == b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma == b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a > \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma > 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a >= \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma >= 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: a >= b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma >= b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b != \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb != 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b <= a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb <= a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b == a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb == a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b > \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb > 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b >= \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb >= 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: b >= a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb >= a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m), doc=\u001b[33m'\u001b[39;49;00m\u001b[33misinstance(a, int)\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@precondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m a, b: 
\u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m), doc=\u001b[33m'\u001b[39;49;00m\u001b[33misinstance(b, int)\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a < return_value, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma < return_value\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a <= b <= return_value, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma <= b <= return_value\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a <= return_value, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma <= return_value\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a == return_value - b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma == return_value - b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: a == return_value / b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33ma == return_value / b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b < return_value, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb < return_value\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b <= a <= return_value, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb <= a <= return_value\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b <= return_value, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb <= return_value\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b == return_value - a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb == return_value - a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: b == return_value / a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mb == return_value / a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mint\u001b[39;49;00m), doc=\u001b[33m'\u001b[39;49;00m\u001b[33misinstance(return_value, int)\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value != \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value != 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == a * b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value == a * b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == a + b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value == a + b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == b * a, 
doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value == b * a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value == b + a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value == b + a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value > \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value > 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value > a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value > a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value > b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value > b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= \u001b[34m0\u001b[39;49;00m, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value >= 0\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value >= a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= a >= b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value >= a >= b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= b, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value >= b\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[30;01m@postcondition\u001b[39;49;00m(\u001b[34mlambda\u001b[39;49;00m return_value, a, b: return_value >= b >= a, doc=\u001b[33m'\u001b[39;49;00m\u001b[33mreturn_value >= b >= a\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\u001b[34mdef\u001b[39;49;00m \u001b[32msum2\u001b[39;49;00m(a, b):\n \u001b[34mreturn\u001b[39;49;00m a + b\n" ] ], [ [ "As an alternative, one may be able to use `inspect.getsource()` on the lambda expression or unparse it. This is left to the reader.", "_____no_output_____" ], [ "### Exercise 4: Save Initial Values\n\nIf the value of an argument changes during function execution, this can easily confuse our implementation: The values are tracked at the beginning of the function, but checked only when it returns. Extend the `InvariantAnnotator` and the infrastructure it uses such that\n\n* it saves argument values both at the beginning and at the end of a function invocation;\n* postconditions can be expressed over both _initial_ values of arguments as well as the _final_ values of arguments;\n* the mined postconditions refer to both these values as well.", "_____no_output_____" ], [ "**Solution.** To be added.", "_____no_output_____" ], [ "### Exercise 5: Implications\n\nSeveral mined invariant are actually _implied_ by others: If `x > 0` holds, then this implies `x >= 0` and `x != 0`. Extend the `InvariantAnnotator` such that implications between properties are explicitly encoded, and such that implied properties are no longer listed as invariants. 
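For instance, one way to encode such implications is a small lookup table of stronger-implies-weaker rules, used to prune the mined set. The following is only a sketch: the `IMPLICATIONS` table, `implied_by()`, and `prune_implied()` are hypothetical helpers, not part of the infrastructure above, and the table would have to be extended considerably to be useful in practice.\n\n```python\n# Hypothetical implication table: a property maps to the weaker properties it implies.\nIMPLICATIONS = {\n    '{} > 0': ['{} >= 0', '{} != 0'],\n    '{} < 0': ['{} <= 0', '{} != 0'],\n}\n\ndef implied_by(weak_prop, strong_prop):\n    # True if weak_prop follows from strong_prop according to the table\n    return weak_prop in IMPLICATIONS.get(strong_prop, [])\n\ndef prune_implied(invariants):\n    # Keep only invariants not implied by another invariant over the same variables;\n    # invariants is a set of (prop, var_names) pairs, as mined above.\n    return set((prop, var_names) for (prop, var_names) in invariants\n               if not any(other_vars == var_names and implied_by(prop, other_prop)\n                          for (other_prop, other_vars) in invariants))\n```\n\n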
See \\cite{Ernst2001} for ideas.", "_____no_output_____" ], [ "**Solution.** Left to the reader.", "_____no_output_____" ], [ "### Exercise 6: Local Variables\n\nPostconditions may also refer to the values of local variables. Consider extending `InvariantAnnotator` and its infrastructure such that the values of local variables at the end of the execution are also recorded and made part of the invariant inference mechanism.", "_____no_output_____" ], [ "**Solution.** Left to the reader.", "_____no_output_____" ], [ "### Exercise 7: Exploring Invariant Alternatives\n\nAfter mining a first set of invariants, have a [concolic fuzzer](ConcolicFuzzer.ipynb) generate tests that systematically attempt to invalidate pre- and postconditions. How far can you generalize?", "_____no_output_____" ], [ "**Solution.** To be added.", "_____no_output_____" ], [ "### Exercise 8: Grammar-Generated Properties\n\nThe larger the set of properties to be checked, the more potential invariants can be discovered. Create a _grammar_ that systematically produces a large set of properties. See \\cite{Ernst2001} for possible patterns.", "_____no_output_____" ], [ "**Solution.** Left to the reader.", "_____no_output_____" ], [ "### Exercise 9: Embedding Invariants as Assertions\n\nRather than producing invariants as annotations for pre- and postconditions, insert them as `assert` statements into the function code, as in:\n\n```python\ndef my_sqrt(x):\n 'Computes the square root of x, using the Newton-Raphson method'\n assert isinstance(x, int), 'violated precondition'\n assert (x > 0), 'violated precondition'\n approx = None\n guess = (x / 2)\n while (approx != guess):\n approx = guess\n guess = ((approx + (x / approx)) / 2)\n return_value = approx\n assert (return_value < x), 'violated postcondition'\n assert isinstance(return_value, float), 'violated postcondition'\n return approx\n```\n\nSuch a formulation may make it easier for test generators and symbolic analysis to access and interpret pre- and postconditions.", "_____no_output_____" ], [ "**Solution.** Here is a tentative implementation that inserts invariants into function ASTs.", "_____no_output_____" ], [ "Part 1: Embedding Invariants into Functions", "_____no_output_____" ] ], [ [ "class EmbeddedInvariantAnnotator(InvariantTracker):\n def functions_with_invariants_ast(self, function_name=None):\n if function_name is None:\n return annotate_functions_with_invariants(self.invariants())\n \n return annotate_function_with_invariants(function_name, self.invariants(function_name))\n \n def functions_with_invariants(self, function_name=None):\n if function_name is None:\n functions = ''\n for f_name in self.invariants():\n try:\n f_text = astor.to_source(self.functions_with_invariants_ast(f_name))\n except KeyError:\n f_text = ''\n functions += f_text\n return functions\n\n return astor.to_source(self.functions_with_invariants_ast(function_name))\n \n def function_with_invariants(self, function_name):\n return self.functions_with_invariants(function_name)\n def function_with_invariants_ast(self, function_name):\n return self.functions_with_invariants_ast(function_name)", "_____no_output_____" ], [ "def annotate_invariants(invariants):\n annotated_functions = {}\n \n for function_name in invariants:\n try:\n annotated_functions[function_name] = annotate_function_with_invariants(function_name, invariants[function_name])\n except KeyError:\n continue\n\n return annotated_functions", "_____no_output_____" ], [ "def annotate_function_with_invariants(function_name, 
function_invariants):\n function = globals()[function_name]\n function_code = inspect.getsource(function)\n function_ast = ast.parse(function_code)\n return annotate_function_ast_with_invariants(function_ast, function_invariants)", "_____no_output_____" ], [ "def annotate_function_ast_with_invariants(function_ast, function_invariants):\n annotated_function_ast = EmbeddedInvariantTransformer(function_invariants).visit(function_ast)\n return annotated_function_ast", "_____no_output_____" ] ], [ [ "Part 2: Preconditions", "_____no_output_____" ] ], [ [ "class PreconditionTransformer(ast.NodeTransformer):\n def __init__(self, invariants):\n self.invariants = invariants\n super().__init__()\n \n def preconditions(self):\n preconditions = []\n for (prop, var_names) in self.invariants:\n assertion = \"assert \" + instantiate_prop(prop, var_names) + ', \"violated precondition\"'\n assertion_ast = ast.parse(assertion)\n\n if assertion.find(RETURN_VALUE) < 0:\n preconditions += assertion_ast.body\n\n return preconditions\n \n def insert_assertions(self, body):\n preconditions = self.preconditions()\n try:\n docstring = body[0].value.s\n except:\n docstring = None\n \n if docstring:\n return [body[0]] + preconditions + body[1:]\n else:\n return preconditions + body\n\n def visit_FunctionDef(self, node):\n \"\"\"Add invariants to function\"\"\"\n # print(ast.dump(node))\n node.body = self.insert_assertions(node.body)\n return node ", "_____no_output_____" ], [ "class EmbeddedInvariantTransformer(PreconditionTransformer):\n pass", "_____no_output_____" ], [ "with EmbeddedInvariantAnnotator() as annotator:\n my_sqrt(5)", "_____no_output_____" ], [ "print_content(annotator.functions_with_invariants(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x):\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m x >= \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m x != \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(x, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m x > \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ], [ "with EmbeddedInvariantAnnotator() as annotator:\n y = sum3(3, 4, 5)\n y = sum3(-3, -4, -5)\n y = sum3(0, 0, 0)", "_____no_output_____" ], [ "print_content(annotator.functions_with_invariants(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32msum3\u001b[39;49;00m(a, b, c):\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(c, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated 
precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m a + b + c\n" ] ], [ [ "Part 3: Postconditions\n\nWe make a few simplifying assumptions: \n\n* Variables do not change during execution.\n* There is a single `return` statement at the end of the function.", "_____no_output_____" ] ], [ [ "class EmbeddedInvariantTransformer(PreconditionTransformer):\n def postconditions(self):\n postconditions = []\n\n for (prop, var_names) in self.invariants:\n assertion = \"assert \" + instantiate_prop(prop, var_names) + ', \"violated postcondition\"'\n assertion_ast = ast.parse(assertion)\n\n if assertion.find(RETURN_VALUE) >= 0:\n postconditions += assertion_ast.body\n\n return postconditions\n \n def insert_assertions(self, body):\n new_body = super().insert_assertions(body)\n postconditions = self.postconditions()\n\n body_ends_with_return = isinstance(new_body[-1], ast.Return)\n if body_ends_with_return:\n saver = RETURN_VALUE + \" = \" + astor.to_source(new_body[-1].value)\n else:\n saver = RETURN_VALUE + \" = None\"\n \n saver_ast = ast.parse(saver)\n postconditions = [saver_ast] + postconditions\n\n if body_ends_with_return:\n return new_body[:-1] + postconditions + [new_body[-1]]\n else:\n return new_body + postconditions", "_____no_output_____" ], [ "with EmbeddedInvariantAnnotator() as annotator:\n my_sqrt(5)", "_____no_output_____" ], [ "my_sqrt_def = annotator.functions_with_invariants()", "_____no_output_____" ] ], [ [ "Here's the full definition with included assertions:", "_____no_output_____" ] ], [ [ "print_content(my_sqrt_def, '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mmy_sqrt\u001b[39;49;00m(x):\n \u001b[33m\"\"\"Computes the square root of x, using the Newton-Raphson method\"\"\"\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m x >= \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m x != \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(x, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m x > \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n approx = \u001b[36mNone\u001b[39;49;00m\n guess = x / \u001b[34m2\u001b[39;49;00m\n \u001b[34mwhile\u001b[39;49;00m approx != guess:\n approx = guess\n guess = (approx + x / approx) / \u001b[34m2\u001b[39;49;00m\n return_value = approx\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mfloat\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m return_value > \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m return_value < x, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m x > 
return_value, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m return_value != \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m return_value >= \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m x >= return_value, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m return_value <= x, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m approx\n" ], [ "exec(my_sqrt_def.replace('my_sqrt', 'my_sqrt_annotated'))", "_____no_output_____" ], [ "with ExpectError():\n my_sqrt_annotated(-1)", "Traceback (most recent call last):\n File \"<ipython-input-214-bf1ed929743a>\", line 2, in <module>\n my_sqrt_annotated(-1)\n File \"<string>\", line 3, in my_sqrt_annotated\nAssertionError: violated precondition (expected)\n" ] ], [ [ "Here come some more examples:", "_____no_output_____" ] ], [ [ "with EmbeddedInvariantAnnotator() as annotator:\n y = sum3(3, 4, 5)\n y = sum3(-3, -4, -5)\n y = sum3(0, 0, 0)", "_____no_output_____" ], [ "print_content(annotator.functions_with_invariants(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32msum3\u001b[39;49;00m(a, b, c):\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(c, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n return_value = a + b + c\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m a + b + c\n" ], [ "with EmbeddedInvariantAnnotator() as annotator:\n length = list_length([1, 2, 3])\n\nprint_content(annotator.functions_with_invariants(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mlist_length\u001b[39;49;00m(L):\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(L, \u001b[36mlist\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m L != \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34mif\u001b[39;49;00m L == []:\n length = \u001b[34m0\u001b[39;49;00m\n \u001b[34melse\u001b[39;49;00m:\n length = \u001b[34m1\u001b[39;49;00m + list_length(L[\u001b[34m1\u001b[39;49;00m:])\n return_value = length\n \u001b[34massert\u001b[39;49;00m return_value == \u001b[36mlen\u001b[39;49;00m(L), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m 
\u001b[36misinstance\u001b[39;49;00m(return_value, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m return_value >= \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34mreturn\u001b[39;49;00m length\n" ], [ "with EmbeddedInvariantAnnotator() as annotator:\n print_sum(31, 45)", "76\n" ], [ "print_content(annotator.functions_with_invariants(), '.py')", "\u001b[34mdef\u001b[39;49;00m \u001b[32mprint_sum\u001b[39;49;00m(a, b):\n \u001b[34massert\u001b[39;49;00m a <= b, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m b > \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m b != \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m b >= a, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m a >= \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m b >= \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m a < b, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m a > \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m a != \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m b > a, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(b, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m \u001b[36misinstance\u001b[39;49;00m(a, \u001b[36mint\u001b[39;49;00m), \u001b[33m'\u001b[39;49;00m\u001b[33mviolated precondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n \u001b[34mprint\u001b[39;49;00m(a + b)\n return_value = \u001b[36mNone\u001b[39;49;00m\n \u001b[34massert\u001b[39;49;00m return_value != \u001b[34m0\u001b[39;49;00m, \u001b[33m'\u001b[39;49;00m\u001b[33mviolated postcondition\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m\n" ] ], [ [ "And we're done!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" 
], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7dbee53f09c53228b3158b171d43f50018af422
96,336
ipynb
Jupyter Notebook
Calculation of sin(x).ipynb
adcroft/intrinsics
3fd0e8192d2033e83c1d601e77d9828e3f2241ae
[ "MIT" ]
null
null
null
Calculation of sin(x).ipynb
adcroft/intrinsics
3fd0e8192d2033e83c1d601e77d9828e3f2241ae
[ "MIT" ]
null
null
null
Calculation of sin(x).ipynb
adcroft/intrinsics
3fd0e8192d2033e83c1d601e77d9828e3f2241ae
[ "MIT" ]
null
null
null
187.789474
59,024
0.857343
[ [ [ "import numpy\nimport math\nimport matplotlib.pyplot as plt\nimport scipy.special", "_____no_output_____" ] ], [ [ "Maclaurin series for $\\sin(x)$ is:\n\\begin{align}\n\\sin(x)\n&= \\sum_{k=0}^{\\infty} \\frac{ (-1)^k }{ (2k+1)! } x^{2k+1} \\\\\n&= x - \\frac{1}{3!} x^3 + \\frac{1}{5!} x^5 - \\frac{1}{7!} x^7 + \\frac{1}{9!} x^9 - \\frac{1}{11!} x^{11} +\\ldots \\\\\n%%% &= x \\left( 1 - \\frac{1}{2.3} x^2 \\left( 1 - \\frac{1}{4.5} x^2 \\left( 1 - \\frac{1}{6.7} x^2 \\left(1 - \\frac{1}{8.9} x^2 \\left( 1 - \\frac{1}{10.11} x^{2} \\left( \\ldots \\right) \\right) \\right) \\right) \\right) \\right) \\\\\n&= x \\left( 1 - \\frac{1}{2.3} x^2 \\right) + \\frac{1}{5!} x^5 \\left( 1 - \\frac{1}{6.7} x^2 \\right)\n + \\frac{1}{9!} x^9 \\left( 1 - \\frac{1}{10.11} x^2 \\right) + \\ldots \\\\\n&= \\sum_{k=0}^{\\infty} \\frac{x^{4k+1}}{(4k+1)!} \\left( 1 - \\frac{x^2}{(4k+2)(4k+3)} \\right) \\\\\n&= x \\sum_{k=0}^{\\infty} \\frac{x^{4k}}{(4k+1)!} \\left( 1 - \\frac{x^2}{(4k+2)(4k+3)} \\right)\n\\end{align}\nThe roundoff error is associated with the addition/subtraction involving the largest term which (for $|x|<6$) will be the first term, so of order $|x|\\epsilon$.", "_____no_output_____" ] ], [ [ "# Significance of each term to leading term\nk, eps = numpy.arange(1,30,2), numpy.finfo(float).eps\nn = (k+1)/2\nprint('epsilon = %.2e'%eps, \"= 2**%i\"%int(math.log(eps)/math.log(2)))\nplt.semilogy(n, eps * (1+0*n), 'k--', label=r'$\\epsilon$' )\nplt.semilogy(n, (numpy.pi-eps)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\\pi-\\epsilon$' );\nplt.semilogy(n, (numpy.pi/6*5)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=5\\pi/6$ (150$^\\circ$)' );\nplt.semilogy(n, (numpy.pi/3*2)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=2\\pi/3$ (120$^\\circ$)' );\nplt.semilogy(n, (numpy.pi/2)**(k-1) / scipy.special.factorial(k), 'o-', label=r'$x=\\pi/2$ (90$^\\circ$)' );\nplt.semilogy(n, (numpy.pi/2)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\\pi/3$ (60$^\\circ$)' );\nplt.semilogy(n, (numpy.pi/4)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\\pi/4$ (45$^\\circ$)' );\nplt.semilogy(n, (numpy.pi/6)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\\pi/6$ (30$^\\circ$)' );\nplt.semilogy(n, (numpy.pi/18)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\\pi/18$ (10$^\\circ$)' );\nplt.semilogy(n, (numpy.pi/180)**(k-1) / scipy.special.factorial(k), '.-', label=r'$x=\\pi/180$ (1$^\\circ$)' );\nplt.gca().set_xticks(numpy.arange(1,16)); plt.legend(); plt.xlabel('Terms, n = (k+1)/2'); plt.ylim(1e-17,3);\nplt.title(r'$\\frac{1}{k!}x^{k-1}$');", "epsilon = 2.22e-16 = 2**-52\n" ] ], [ [ "\\begin{align}\n\\sin(x)\n&\\approx x - \\frac{1}{3!} x^3 + \\frac{1}{5!} x^5 - \\frac{1}{7!} x^7 + \\frac{1}{9!} x^9 - \\frac{1}{11!} x^{11} +\\ldots \\\\\n&= x \\left( 1 - \\frac{1}{2.3} x^2 \\left( 1 - \\frac{1}{4.5} x^2 \\left( 1 - \\frac{1}{6.7} x^2 \\left(1 - \\frac{1}{8.9} x^2 \\left( 1 - \\frac{1}{10.11} x^{2} \\left( \\ldots \\right) \\right) \\right) \\right) \\right) \\right) \\\\\n&= x \\left( 1 - c_1 x^2 \\left( 1 - c_2 x^2 \\left( 1 - c_3 x^2 \\left(1 - c_4 x^2 \\left( 1 - c_5 x^{2} \\left( \\ldots \\right) \\right) \\right) \\right) \\right) \\right) \\;\\;\\mbox{where}\\;\\; c_j = \\frac{1}{2j(2j+1)}\n\\end{align}\n", "_____no_output_____" ] ], [ [ "# Coefficients in series\nprint(' t',' k','%26s'%'(2k+1)!','%22s'%'1/(2k+1)!','1/c[t]','%21s'%'c[t]')\nfor t in range(1,17):\n k=2*t-1\n print('%2i'%t, '%2i'%k, '%26i'%math.factorial(k), 
'%.16e'%(1./math.factorial(k)),'%5i'%(2*t*(2*t+1)),'%.16e'%(1./(2*t*(2*t+1))))", " t k (2k+1)! 1/(2k+1)! 1/c[t] c[t]\n 1 1 1 1.0000000000000000e+00 6 1.6666666666666666e-01\n 2 3 6 1.6666666666666666e-01 20 5.0000000000000003e-02\n 3 5 120 8.3333333333333332e-03 42 2.3809523809523808e-02\n 4 7 5040 1.9841269841269841e-04 72 1.3888888888888888e-02\n 5 9 362880 2.7557319223985893e-06 110 9.0909090909090905e-03\n 6 11 39916800 2.5052108385441720e-08 156 6.4102564102564100e-03\n 7 13 6227020800 1.6059043836821613e-10 210 4.7619047619047623e-03\n 8 15 1307674368000 7.6471637318198164e-13 272 3.6764705882352941e-03\n 9 17 355687428096000 2.8114572543455206e-15 342 2.9239766081871343e-03\n10 19 121645100408832000 8.2206352466243295e-18 420 2.3809523809523812e-03\n11 21 51090942171709440000 1.9572941063391263e-20 506 1.9762845849802370e-03\n12 23 25852016738884976640000 3.8681701706306835e-23 600 1.6666666666666668e-03\n13 25 15511210043330985984000000 6.4469502843844736e-26 702 1.4245014245014246e-03\n14 27 10888869450418352160768000000 9.1836898637955460e-29 812 1.2315270935960591e-03\n15 29 8841761993739701954543616000000 1.1309962886447718e-31 930 1.0752688172043011e-03\n16 31 8222838654177922817725562880000000 1.2161250415535181e-34 1056 9.4696969696969700e-04\n" ] ], [ [ "\\begin{align}\n\\sin(x)\n&\\approx x - \\frac{1}{3!} x^3 + \\frac{1}{5!} x^5 - \\frac{1}{7!} x^7 + \\frac{1}{9!} x^9 - \\frac{1}{11!} x^{11} +\\ldots \\\\\n&= x \\left( 1 - \\frac{1}{2.3} x^2 \\right) + \\frac{1}{5!} x^5 \\left( 1 - \\frac{1}{6.7} x^2 \\right)\n + \\frac{1}{9!} x^9 \\left( 1 - \\frac{1}{10.11} x^2 \\right) + \\ldots \\\\\n&= \\sum_{l=0}^{\\infty} \\frac{x^{4l+1}}{(4l+1)!} \\left( 1 - \\frac{x^2}{(4l+2)(4l+3)} \\right) \\\\\n&= \\sum_{l=0}^{\\infty} \\frac{x^{4l+1}}{a_l} \\left( 1 - \\frac{x^2}{b_l} \\right)\n\\;\\;\\mbox{where}\\;\\; a_l=(4l+1)! \\;\\;\\mbox{and}\\;\\; b_l=(4l+2)(4l+3) \\\\\n&= x \\sum_{l=0}^{\\infty} \\frac{x^{4l}}{(4l+1)!} \\left( 1 - \\frac{x^2}{(4l+2)(4l+3)} \\right) \\\\\n&= x \\sum_{l=0}^{\\infty} \\frac{x^{4l}}{a_l} \\left( 1 - \\frac{x^2}{b_l} \\right) \\\\\n&= x \\sum_{l=0}^{\\infty} f_l \\left( 1 - g_l \\right)\n\\;\\;\\mbox{where}\\;\\; f_l=\\frac{x^{4l}}{a_l} \\;\\;\\mbox{and}\\;\\; g_l=\\frac{x^2}{b_l}\n\\end{align}\nNote that\n\\begin{align}\na_l &= a_{l-1} (4l+1) 4l (4l-1) (4l-2) \\;\\; \\forall \\; l = 2,3,\\ldots \\\\\nf_l\n&= \\frac{x^{4l}}{a_l} \\\\\n&= \\frac{x^{4l-4}x^4}{a_{l-1} (4l+1) 4l (4l-1) (4l-2)} \\\\\n&= \\frac{x^4}{(4l+1) 4l (4l-1) (4l-2)} f_{l-1}\n\\end{align}", "_____no_output_____" ] ], [ [ "# Coefficients in paired series\nprint(' l','4l+1','%26s'%'a[l]=(4l+1)!','%22s'%'1/a[l]',' b[l]','%22s'%'1/b[l]')\nfor l in range(0,7,1):\n print('%2i'%l, '%4i'%(4*l+1), '%26i'%math.factorial(4*l+1), '%.16e'%(1./math.factorial(4*l+1)),\n '%5i'%((4*l+2)*(4*l+3)),'%.16e'%(1./((4*l+2)*(4*l+3))))", " l 4l+1 a[l]=(4l+1)! 
1/a[l] b[l] 1/b[l]\n 0 1 1 1.0000000000000000e+00 6 1.6666666666666666e-01\n 1 5 120 8.3333333333333332e-03 42 2.3809523809523808e-02\n 2 9 362880 2.7557319223985893e-06 110 9.0909090909090905e-03\n 3 13 6227020800 1.6059043836821613e-10 210 4.7619047619047623e-03\n 4 17 355687428096000 2.8114572543455206e-15 342 2.9239766081871343e-03\n 5 21 51090942171709440000 1.9572941063391263e-20 506 1.9762845849802370e-03\n 6 25 15511210043330985984000000 6.4469502843844736e-26 702 1.4245014245014246e-03\n" ], [ "def sin_map_x( x ):\n ninety = numpy.pi/2\n one_eighty = numpy.pi\n three_sixty = 2.*numpy.pi\n fs = 1.\n if x < -ninety:\n x = -one_eighty - x\n if x > three_sixty:\n n = int(x / three_sixty)\n x = x - n*three_sixty\n if x >= one_eighty:\n x = x - one_eighty\n fs = -1.\n if x > ninety:\n x = one_eighty - x\n return x,fs\ndef sin_forward_series( x ):\n # Adds terms from largest to smallest until answer is not changing\n x,fs = sin_map_x( x )\n # https://en.wikipedia.org/wiki/Sine#Series_definition\n ro,d,s = 1.,1,-1.\n for k in range(3,200,2):\n d = d * (k-1) * k\n f = 1. / d\n r = ro + x**(k-1) * f * s\n if r==ro: break\n ro,s = r, -s\n return ( r * x ) * fs\ndef sin_reverse_series( x ):\n # Adds terms from smallest to largest after finding smallest term to add\n x,fs = sin_map_x( x )\n ro,s,d = 1.,-1.,1\n for k in range(3,200,2):\n d = d * (k-1) * k\n f = 1. / d\n r = ro + x**(k-1) * f * s\n if r==ro: break\n ro,s = r, -s\n ro = 0.\n for j in range(k,0,-2):\n f = 1./ math.factorial(j)\n r = ro + x**(j-1) * f * s\n if r==ro: break\n ro,s = r, -s\n return ( r * x ) * fs\ndef sin_reverse_series_fixed( x ):\n # Adds terms from smallest to largest for fixed number of terms\n x,fs = sin_map_x( x )\n ro,s,d,x2,N = 1.,-1.,1,1.,16\n term = [1.] * (N)\n for n in range(1,N):\n x2 = x2 * ( x * x )\n k = 2*n+1\n d = d * (k-1) * k\n f = 1. 
/ d\n #term[n] = x**(k-1) * f * s \n term[n] = x2 * f * s \n r = ro + term[n]\n if r==ro: break\n ro,s = r, -s\n r = 0.\n for j in range(n,-1,-1):\n r = r + term[j]\n return ( r * x ) * fs\ndef sin_reverse_precomputed( x ):\n # Adds fixed number of terms from smallest to largest with precomputed coefficients\n x,fs = sin_map_x( x )\n C=[0.16666666666666667,\n 0.05,\n 0.023809523809523808,\n 0.013888888888888889,\n 0.009090909090909091,\n 0.00641025641025641,\n 0.004761904761904762,\n 0.003676470588235294,\n 0.0029239766081871343,\n 0.002380952380952381,\n 0.001976284584980237,\n 0.0016666666666666667,\n 0.0014245014245014246,\n 0.0012315270935960591,\n 0.001075268817204301,\n 0.000946969696969697,\n 0.0008403361344537816,\n 0.0007507507507507507,\n 0.0006747638326585695]\n n = len(C)\n f,r,s = [1.]*(n),0.,1.\n if n%2==0: s=-1.\n for i in range(1,n):\n f[i] = f[i-1] * C[i-1]\n for i in range(n-1,0,-1):\n k = 2*i + 1\n r = r + x**k * f[i] * s\n s = -s\n r = r + x\n return r * fs\ndef sin_by_series(x, n=20, verbose=False, method='accurate-explicit'):\n \"\"\"Returns sin(x)\"\"\"\n if method=='forward-explicit': return sin_forward_series( x )\n elif method=='reverse-explicit': return sin_reverse_series( x )\n elif method=='reverse-fixed': return sin_reverse_series_fixed( x )\n elif method=='reverse-precomputed': return sin_reverse_precomputed( x )\n x,fs = sin_map_x( x )\n # https://en.wikipedia.org/wiki/Sine#Series_definition\n C=[0.16666666666666667,\n 0.05,\n 0.023809523809523808,\n 0.013888888888888889,\n 0.009090909090909091,\n 0.00641025641025641,\n 0.004761904761904762,\n 0.003676470588235294,\n 0.0029239766081871343,\n 0.002380952380952381,\n 0.001976284584980237,\n 0.0016666666666666667,\n 0.0014245014245014246,\n 0.0012315270935960591,\n 0.001075268817204301,\n 0.000946969696969697,\n 0.0008403361344537816,\n 0.0007507507507507507,\n 0.0006747638326585695]\n if method=='forward-explicit':\n # Adds terms from largest to smallest until answer is not changing\n ro,f,s = 1.,1.,-1.\n for k in range(3,200,2):\n f = 1./ math.factorial(k)\n r = ro + x**(k-1) * f * s\n if verbose: print('sine:',r*x,'(%i)'%k)\n if r==ro: break\n ro,s = r, -s\n r = r * x\n elif method=='reverse-explicit':\n # Adds terms from smallest to largest after finding smallest term to add\n ro,s = 1.,-1.\n for k in range(3,200,2):\n f = 1./ math.factorial(k)\n r = ro + x**(k-1) * f * s\n if r==ro: break\n ro,s = r, -s\n ro = 0.\n for j in range(k,0,-2):\n f = 1./ math.factorial(j)\n r = ro + x**(j-1) * f * s\n if verbose: print('sine:',r*x,'(%i)'%j)\n if r==ro: break\n ro,s = r, -s\n r = r * x\n elif method=='forward-precomputed':\n # Adds terms from largest to smallest until answer is not changing\n ro,f,s = x,1.,-1.\n for i in range(1,n):\n k = 2*i + 1\n #f = f * pypi.reciprocal( (k-1)*k ) # These should be pre-computed\n f = f * C[i-1]\n r = ro + x**k * f * s\n if verbose: print('sine:',r,'(%i)'%i)\n if r==ro: break\n ro,s = r, -s\n elif method=='reverse-precomputed':\n # Adds fixed number of terms from smallest to largest with precomputed coefficients\n f,r,s = [1.]*(n),0.,1.\n if n%2==0: s=-1.\n for i in range(1,n):\n f[i] = f[i-1] * C[i-1]\n for i in range(n-1,0,-1):\n k = 2*i + 1\n r = r + x**k * f[i] * s\n if verbose: print('sine:',r,'(%i)'%i)\n s = -s\n r = r + x\n if verbose: print('sine:',r,'(%i)'%i)\n elif method=='paired' or method=='paired-test':\n # Adds fixed number of terms from smallest to largest \n x4l,a,b,f,g = [0.]*(n),[0.]*(n),[0.]*(n),[0.]*(n),[0.]*(n)\n x2 = x*x\n x4 = x2*x2\n x4l[0], a[0], b[0] 
= 1., 1., 1./6.\n f[0], g[0] = x4l[0]*a[0], x2*b[0]\n for l in range(1,n):\n x4l[l] = x4l[l-1] * x4\n l4 = 4*l\n #a[l] = a[l-1] / float( (l4+1)*l4*(l4-1)*(l4-2) )\n #b[l] = 1. / float( (l4+2)*(l4+3) )\n f[l] = f[l-1] * (x4 / float( (l4+1)*l4*(l4-1)*(l4-2) ) )\n g[l] = x2 / float( (l4+2)*(l4+3) )\n r = 0.\n if method=='paired-test':\n for i in range(n-1,-1,-1):\n r = r - f[i] * g[i]\n r = r + f[i]\n if verbose: print('sine:',r*x,'(%i)'%i)\n elif method=='paired':\n for i in range(n-1,-1,-1):\n #r = r + f[i] * ( 1. - g[i] )\n r = r + ( f[i] - f[i] * g[i] )\n if verbose: print('sine:',r*x,'(%i)'%i)\n r = r * x\n else:\n raise Exception('Method \"'+method+'\" not implemented')\n return r * fs\nangle = numpy.pi/2\nprint( sin_by_series( angle, method='forward-explicit' ) )\nprint( sin_by_series( angle, method='forward-precomputed' ) )\nprint( sin_by_series( angle, method='reverse-precomputed' ) )\nprint( sin_by_series( angle, method='paired-test' ) )\nprint( sin_by_series( angle, method='paired' ) )\nprint( sin_by_series( angle, method='reverse-fixed' ) )\nprint( sin_by_series( angle, method='reverse-explicit' ) )\nprint( numpy.sin( angle ) )", "1.0000000000000002\n1.0000000000000002\n1.0\n1.0\n1.0\n1.0\n1.0\n1.0\n" ], [ "sinfs = numpy.frompyfunc( sin_forward_series, 1, 1)\nsinrs = numpy.frompyfunc( sin_reverse_series, 1, 1)\nsinrf = numpy.frompyfunc( sin_reverse_series_fixed, 1, 1)\nsinrp = numpy.frompyfunc( sin_reverse_precomputed, 1, 1)\nx = numpy.linspace(-numpy.pi/2,numpy.pi/2,1024*128)\nd = sinrf( x ) - sinrs( x )\nplt.plot(x/numpy.pi*180, d+0/numpy.sin(x),'.');\nnumpy.count_nonzero( d ), numpy.abs( d/numpy.sin(x) ).max()", "_____no_output_____" ], [ "y = ( sinrf( x )**2 + sinrf( x + numpy.pi/2 )**2 ) - 1.\nplt.plot( x*180/numpy.pi, y )", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7dc3151ebd191e626608eed300b085a6c1328ef
146,407
ipynb
Jupyter Notebook
Udacity/boston/boston_housing.ipynb
Vayne-Lover/Machine-Learning
4cf378c41d736de93ace9ca416c3014afe88fff1
[ "Apache-2.0" ]
1
2016-08-03T08:55:17.000Z
2016-08-03T08:55:17.000Z
Udacity/boston/boston_housing.ipynb
Vayne-Lover/Machine-Learning
4cf378c41d736de93ace9ca416c3014afe88fff1
[ "Apache-2.0" ]
null
null
null
Udacity/boston/boston_housing.ipynb
Vayne-Lover/Machine-Learning
4cf378c41d736de93ace9ca416c3014afe88fff1
[ "Apache-2.0" ]
null
null
null
202.499308
81,242
0.886563
[ [ [ "# Machine Learning Engineer Nanodegree\n## Model Evaluation & Validation\n## Project 1: Predicting Boston Housing Prices\n\nWelcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\n\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\n>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.", "_____no_output_____" ], [ "## Getting Started\nIn this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.\n\nThe dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Housing). The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preoprocessing steps have been made to the dataset:\n- 16 data points have an `'MDEV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.\n- 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.\n- The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MDEV'` are essential. The remaining **non-relevant features** have been excluded.\n- The feature `'MDEV'` has been **multiplicatively scaled** to account for 35 years of market inflation.\n\nRun the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. 
You will know the dataset loaded successfully if the size of the dataset is reported.", "_____no_output_____" ] ], [ [ "# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nimport visuals as vs # Supplementary code\nfrom sklearn.cross_validation import ShuffleSplit\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the Boston housing dataset\ndata = pd.read_csv('housing.csv')\nprices = data['MDEV']\nfeatures = data.drop('MDEV', axis = 1)\n \n# Success\nprint \"Boston housing dataset has {} data points with {} variables each.\".format(*data.shape)", "Boston housing dataset has 489 data points with 4 variables each.\n" ] ], [ [ "## Data Exploration\nIn this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.\n\nSince the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MDEV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively.", "_____no_output_____" ], [ "### Implementation: Calculate Statistics\nFor your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since `numpy` has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.\n\nIn the code cell below, you will need to implement the following:\n- Calculate the minimum, maximum, mean, median, and standard deviation of `'MDEV'`, which is stored in `prices`.\n - Store each calculation in their respective variable.", "_____no_output_____" ] ], [ [ "# TODO: Minimum price of the data\nminimum_price = np.min(prices)\n\n# TODO: Maximum price of the data\nmaximum_price = np.max(prices)\n\n# TODO: Mean price of the data\nmean_price = np.mean(prices)\n\n# TODO: Median price of the data\nmedian_price = np.median(prices)\n\n# TODO: Standard deviation of prices of the data\nstd_price = np.std(prices)\n\n# Show the calculated statistics\nprint \"Statistics for Boston housing dataset:\\n\"\nprint \"Minimum price: ${:,.2f}\".format(minimum_price)\nprint \"Maximum price: ${:,.2f}\".format(maximum_price)\nprint \"Mean price: ${:,.2f}\".format(mean_price)\nprint \"Median price ${:,.2f}\".format(median_price)\nprint \"Standard deviation of prices: ${:,.2f}\".format(std_price)", "Statistics for Boston housing dataset:\n\nMinimum price: $105,000.00\nMaximum price: $1,024,800.00\nMean price: $454,342.94\nMedian price $438,900.00\nStandard deviation of prices: $165,171.13\n" ] ], [ [ "### Question 1 - Feature Observation\nAs a reminder, we are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. 
For each data point (neighborhood):\n- `'RM'` is the average number of rooms among homes in the neighborhood.\n- `'LSTAT'` is the percentage of all Boston homeowners who have a greater net worth than homeowners in the neighborhood.\n- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.\n\n_Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an **increase** in the value of `'MDEV'` or a **decrease** in the value of `'MDEV'`? Justify your answer for each._ \n**Hint:** Would you expect a home that has an `'RM'` value of 6 to be worth more or less than a home that has an `'RM'` value of 7?", "_____no_output_____" ], [ "**Answer: **\nFirst, I want to say that it is hard to tell whether a higher RM always increases MDEV. You can imagine that someone with a lot of money may want a place where they can build a big house with many square meters; however, some people like to live close to others to feel the warmth of a neighborhood. Still, after inspecting the CSV, I find that a higher RM usually increases MDEV.\nSecond, in the CSV I find that MDEV decreases when LSTAT increases.\nThird, I also find that MDEV decreases when PTRATIO increases.", "_____no_output_____" ], [ "----\n\n## Developing a Model\nIn this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.", "_____no_output_____" ], [ "### Implementation: Define a Performance Metric\nIt is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the [*coefficient of determination*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination), R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how \"good\" that model is at making predictions. \n\nThe values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. A model with an R<sup>2</sup> of 0 always fails to predict the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. *A model can be given a negative R<sup>2</sup> as well, which indicates that the model is no better than one that naively predicts the mean of the target variable.*\n\nFor the `performance_metric` function in the code cell below, you will need to implement the following:\n- Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.\n- Assign the performance score to the `score` variable.", "_____no_output_____" ] ], [ [ "# TODO: Import 'r2_score'\nfrom sklearn.metrics import r2_score\ndef performance_metric(y_true, y_predict):\n \"\"\" Calculates and returns the performance score between \n true and predicted values based on the metric chosen. 
\"\"\"\n \n # TODO: Calculate the performance score between 'y_true' and 'y_predict'\n score = r2_score(y_true,y_predict)\n \n # Return the score\n return score", "_____no_output_____" ] ], [ [ "### Question 2 - Goodness of Fit\nAssume that a dataset contains five data points and a model made the following predictions for the target variable:\n\n| True Value | Prediction |\n| :-------------: | :--------: |\n| 3.0 | 2.5 |\n| -0.5 | 0.0 |\n| 2.0 | 2.1 |\n| 7.0 | 7.8 |\n| 4.2 | 5.3 |\n*Would you consider this model to have successfully captured the variation of the target variable? Why or why not?* \n\nRun the code cell below to use the `performance_metric` function and calculate this model's coefficient of determination.", "_____no_output_____" ] ], [ [ "# Calculate the performance of this model\nscore = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])\nprint \"Model has a coefficient of determination, R^2, of {:.3f}.\".format(score)", "Model has a coefficient of determination, R^2, of 0.923.\n" ] ], [ [ "**Answer:** The R^2 is 0.923 which is closed to 1 so i think can show the relationship well.", "_____no_output_____" ], [ "### Implementation: Shuffle and Split Data\nYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.\n\nFor the code cell below, you will need to implement the following:\n- Use `train_test_split` from `sklearn.cross_validation` to shuffle and split the `features` and `prices` data into training and testing sets.\n - Split the data into 80% training and 20% testing.\n - Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.\n- Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`.", "_____no_output_____" ] ], [ [ "# TODO: Import 'train_test_split'\nfrom sklearn.cross_validation import train_test_split\n# TODO: Shuffle and split the data into training and testing subsets\nX_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.8, random_state=20)\n\n# Success\nprint \"Training and testing split was successful.\"", "Training and testing split was successful.\n" ] ], [ [ "### Question 3 - Training and Testing\n*What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?* \n**Hint:** What could go wrong with not having a way to test your model?", "_____no_output_____" ], [ "**Answer: **\nIn my opinion split dataset can let you know whether your model can fit other data or not.For example,if you just use all dataset training and if you overfit you model can't fit new data well.In all,we need testing dataset to see if our model work accurately.", "_____no_output_____" ], [ "----\n\n## Analyzing Model Performance\nIn this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. 
Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.", "_____no_output_____" ], [ "### Learning Curves\nThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. \n\nRun the code cell below and use these graphs to answer the following question.", "_____no_output_____" ] ], [ [ "# Produce learning curves for varying training set sizes and maximum depths\nvs.ModelLearning(features, prices)", "_____no_output_____" ] ], [ [ "### Question 4 - Learning the Data\n*Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?* \n**Hint:** Are the learning curves converging to particular scores?", "_____no_output_____" ], [ "**Answer: **\nI chose max_depth = 3. As more training points are added, the testing score increases while the training score decreases. Judging by the trend of the two curves, both converge to a score of about 0.75, so adding many more training points would bring little further benefit.", "_____no_output_____" ], [ "### Complexity Curves\nThe following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function. \n\nRun the code cell below and use this graph to answer the following two questions.", "_____no_output_____" ] ], [ [ "vs.ModelComplexity(X_train, y_train)", "_____no_output_____" ] ], [ [ "### Question 5 - Bias-Variance Tradeoff\n*When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?* \n**Hint:** How do you know when a model is suffering from high bias or high variance?", "_____no_output_____" ], [ "**Answer: **\nWith a maximum depth of 1 the model suffers from high bias: it is too simple to capture the structure of the data, which leads to underfitting. With a maximum depth of 10 it suffers from high variance, which leads to overfitting. The complexity curves justify this visually: at depth 1 both the training and validation scores are low, while at depth 10 the training score is high but the validation score is much lower. A large gap between the two scores indicates high variance.", "_____no_output_____" ], [ "### Question 6 - Best-Guess Optimal Model\n*Which maximum depth do you think results in a model that best generalizes to unseen data? 
What intuition led you to this answer?*", "_____no_output_____" ], [ "**Answer: **\nI think a maximum depth of 5 may give the best result. I base this judgement on the Decision Tree Regressor learning performance plots: when max_depth equals 5, both curves converge to similar scores.", "_____no_output_____" ], [ "-----\n\n## Evaluating Model Performance\nIn this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from `fit_model`.", "_____no_output_____" ], [ "### Question 7 - Grid Search\n*What is the grid search technique and how can it be applied to optimize a learning algorithm?*", "_____no_output_____" ], [ "**Answer: **\nGrid search exhaustively searches over a grid of candidate estimator parameters. For example, for an SVM we can specify several values for the kernel, C and gamma, train a model for every combination, evaluate each one with cross-validation, and keep the best-scoring combination.", "_____no_output_____" ], [ "### Question 8 - Cross-Validation\n*What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?* \n**Hint:** Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?", "_____no_output_____" ], [ "**Answer: **\nIn k-fold cross-validation the training data is split into k folds; the model is trained k times, each time holding out a different fold for validation, and the k scores are averaged. For example, with k = 5 we train and validate five times. This way the whole dataset is used for both training and validation, so grid search does not tune the parameters to one particular split, avoiding the problem of optimizing without a held-out set.", "_____no_output_____" ], [ "### Implementation: Fitting a Model\nYour final implementation requires that you bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.\n\nFor the `fit_model` function in the code cell below, you will need to implement the following:\n- Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object.\n - Assign this object to the `'regressor'` variable.\n- Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.\n- Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function object.\n - Pass the `performance_metric` function as a parameter to the object.\n - Assign this scoring function to the `'scoring_fnc'` variable.\n- Use [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html) from `sklearn.grid_search` to create a grid search object.\n - Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object. 
\n - Assign the `GridSearchCV` object to the `'grid'` variable.", "_____no_output_____" ] ], [ [ "# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.metrics import make_scorer\nfrom sklearn.grid_search import GridSearchCV\ndef fit_model(X, y):\n \"\"\" Performs grid search over the 'max_depth' parameter for a \n decision tree regressor trained on the input data [X, y]. \"\"\"\n \n # Create cross-validation sets from the training data\n cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)\n\n # TODO: Create a decision tree regressor object\n regressor = DecisionTreeRegressor()\n\n # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10\n params = {'max_depth':[1,2,3,4,5,6,7,8,9,10]}\n\n # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer' \n scoring_fnc = make_scorer(performance_metric)\n\n # TODO: Create the grid search object\n grid = GridSearchCV(regressor,param_grid=params,scoring=scoring_fnc,cv=cv_sets)\n # Note: cv=cv_sets must be passed explicitly; otherwise GridSearchCV falls back\n # to its default splits and may select different parameters.\n \n # Fit the grid search object to the data to compute the optimal model\n grid = grid.fit(X, y)\n\n # Return the optimal model after fitting the data\n return grid.best_estimator_", "_____no_output_____" ] ], [ [ "### Making Predictions\nOnce a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.", "_____no_output_____" ], [ "### Question 9 - Optimal Model\n_What maximum depth does the optimal model have? How does this result compare to your guess in **Question 6**?_ \n\nRun the code block below to fit the decision tree regressor to the training data and produce an optimal model.", "_____no_output_____" ] ], [ [ "# Fit the training data to the model using grid search\nreg = fit_model(X_train, y_train)\n\n# Produce the value for 'max_depth'\nprint \"Parameter 'max_depth' is {} for the optimal model.\".format(reg.get_params()['max_depth'])", "Parameter 'max_depth' is 3 for the optimal model.\n" ] ], [ [ "**Answer: **\nAfter several attempts I finally obtained the correct answer: max_depth = 3 gives the optimal model. In Question 6 I guessed 5, which shows that intuition is not always reliable.", "_____no_output_____" ], [ "### Question 10 - Predicting Selling Prices\nImagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:\n\n| Feature | Client 1 | Client 2 | Client 3 |\n| :---: | :---: | :---: | :---: |\n| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |\n| Household net worth (income) | Top 34th percent | Bottom 45th percent | Top 7th percent |\n| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |\n*What price would you recommend each client sell his/her home at? 
Do these prices seem reasonable given the values for the respective features?* \n**Hint:** Use the statistics you calculated in the **Data Exploration** section to help justify your response. \n\nRun the code block below to have your optimized model make predictions for each client's home.", "_____no_output_____" ] ], [ [ "# Produce a matrix for client data\nclient_data = [[5, 34, 15], # Client 1\n [4, 55, 22], # Client 2\n [8, 7, 12]] # Client 3\n\n# Show predictions\nfor i, price in enumerate(reg.predict(client_data)):\n print \"Predicted selling price for Client {}'s home: ${:,.2f}\".format(i+1, price)", "Predicted selling price for Client 1's home: $252,787.50\nPredicted selling price for Client 2's home: $252,787.50\nPredicted selling price for Client 3's home: $971,600.00\n" ] ], [ [ "**Answer: **\nI would recommend Clients 1 and 2 sell their homes at about $252,787.50 each, and Client 3 at about $971,600.00. These prices are consistent with what we concluded in Data Exploration: the more rooms (`RM`) a home has, and the lower its `LSTAT` and `PTRATIO`, the more valuable it is.", "_____no_output_____" ], [ "### Sensitivity\nAn optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.", "_____no_output_____" ] ], [ [ "vs.PredictTrials(features, prices, fit_model, client_data)", "Trial 1: $324,240.00\nTrial 2: $324,450.00\nTrial 3: $346,500.00\nTrial 4: $420,622.22\nTrial 5: $302,400.00\nTrial 6: $411,931.58\nTrial 7: $344,750.00\nTrial 8: $407,232.00\nTrial 9: $352,315.38\nTrial 10: $316,890.00\n\nRange in prices: $118,222.22\n" ] ], [ [ "### Question 11 - Applicability\n*In a few sentences, discuss whether the constructed model should or should not be used in a real-world setting.* \n**Hint:** Some questions to answer:\n- *How relevant today is data that was collected from 1978?*\n- *Are the features present in the data sufficient to describe a home?*\n- *Is the model robust enough to make consistent predictions?*\n- *Would data collected in an urban city like Boston be applicable in a rural city?*", "_____no_output_____" ], [ "**Answer: **\nI do not think it should be used in a real-world setting.\nFirst, the data was collected in 1978, which is too long ago to reflect today's market. Second, just three features cannot describe a home well. Third, the model may be too simple to handle special cases, and the sensitivity trials above show a price range of over $100,000, so its predictions are not very consistent.\nTo elaborate: when choosing a home today we consider not only its size but also its environment. For example, if an area has a high level of PM2.5, few people will want to live there, so air quality would be a useful feature to add. Location matters greatly as well: I live in Hangzhou, China, where prices range from 8000 CNY/m^2 to 45000 CNY/m^2 depending on the location of the home, so data collected in an urban city like Boston would not transfer to a rural one. Many more features would be needed to make the model robust. ", "_____no_output_____" ] ] ]
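A minimal sketch (not part of the notebook above) showing how the R<sup>2</sup> score it relies on can be computed directly from the definition, using the Question 2 data; it assumes Python 3 and a current scikit-learn, unlike the Python 2 code in the notebook itself.

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0, 4.2])
y_pred = np.array([2.5, 0.0, 2.1, 7.8, 5.3])

# R^2 = 1 - SS_res / SS_tot, where SS_tot is the squared deviation from the mean
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2_manual = 1.0 - ss_res / ss_tot

assert np.isclose(r2_manual, r2_score(y_true, y_pred))
print(round(r2_manual, 3))  # 0.923, matching the notebook's recorded output
```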
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7dc3798506f95b7d70800efd2a376b66dd9441a
5,147
ipynb
Jupyter Notebook
python业务代码/AHP层次分析法/AHP.ipynb
RobinYaoWenbin/Python-CommonCode
1ee714541f2fd9c8b96d018d3d4eb94f4edc812a
[ "MIT" ]
12
2020-09-28T03:25:03.000Z
2022-03-20T07:44:09.000Z
python业务代码/AHP层次分析法/AHP.ipynb
RobinYaoWenbin/Python-CommonCode
1ee714541f2fd9c8b96d018d3d4eb94f4edc812a
[ "MIT" ]
null
null
null
python业务代码/AHP层次分析法/AHP.ipynb
RobinYaoWenbin/Python-CommonCode
1ee714541f2fd9c8b96d018d3d4eb94f4edc812a
[ "MIT" ]
21
2020-03-19T00:44:35.000Z
2022-01-30T03:46:18.000Z
27.524064
100
0.467068
[ [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "**定义各阶矩阵的RI大小**", "_____no_output_____" ] ], [ [ "RI_dict = {1: 0, 2: 0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}", "_____no_output_____" ] ], [ [ "**定义计算出一个判断矩阵的一致性指标以及最大特征根的归一化特征向量的函数。**\n\n输入:np.array格式的一个二维矩阵,该二维矩阵的含义是判断矩阵。\n\n输出:若没有通过一致性检验,则输出提示信息:没有通过一致性检验。若通过一致性检验,则输出提示信息,并返回归一化的特征向量。", "_____no_output_____" ] ], [ [ "# 出入判断矩阵,判断矩阵需要是numpy格式的,若通过一致性检验则返回最大特征根的特征向量,若不通过,则输出提示。\ndef get_w(array):\n row = array.shape[0] # 计算出阶数\n a_axis_0_sum = array.sum(axis=0)\n # print(a_axis_0_sum)\n b = array / a_axis_0_sum # 新的矩阵b\n # print(b)\n b_axis_0_sum = b.sum(axis=0)\n b_axis_1_sum = b.sum(axis=1) # 每一行的特征向量\n # print(b_axis_1_sum)\n w = b_axis_1_sum / row # 归一化处理(特征向量)\n nw = w * row\n AW = (w * array).sum(axis=1)\n # print(AW)\n max_max = sum(AW / (row * w))\n # print(max_max)\n CI = (max_max - row) / (row - 1)\n CR = CI / RI_dict[row]\n if CR < 0.1:\n print(round(CR, 3))\n print('满足一致性')\n print(\"权重特征向量为:\" , w)\n# print(np.max(w))\n# print(sorted(w,reverse=True))\n# print(max_max)\n# print('特征向量:%s' % w)\n return w\n else:\n print(round(CR, 3))\n print('不满足一致性,请进行修改')", "_____no_output_____" ] ], [ [ "**对输入数据进行格式判断,若正确则调用get_w(array)进行计算,若不正确则输出提示信息。**", "_____no_output_____" ] ], [ [ "def main(array):\n # 判断下判断矩阵array的数据类型,并给出提示,若格式正确,则可继续下一步计算一致性和特征向量\n if type(array) is np.ndarray:\n return get_w(array)\n else:\n print('请输入numpy对象')", "_____no_output_____" ] ], [ [ "**对博文中选干部的例子进行了计算,具体说明我都做了注释。博文连接:https://www.cnblogs.com/yhll/p/9967726.html**\n\n感谢大佬!", "_____no_output_____" ] ], [ [ "if __name__ == '__main__':\n # 定义判断矩阵\n e = np.array([[1, 2, 7, 5, 5], [1 / 2, 1, 4, 3, 3], [1 / 7, 1 / 4, 1, 1 / 2, 1 / 3], \\\n [1 / 5, 1 / 3, 2, 1, 1], [1 / 5, 1 / 3, 3, 1, 1]]) # 准则层对目标层判断矩阵\n a = np.array([[1, 1 / 3, 1 / 8], [3, 1, 1 / 3], [8, 3, 1]]) # 对B1的判断矩阵\n b = np.array([[1, 2, 5], [1 / 2, 1, 2], [1 / 5, 1 / 2, 1]]) # 对B2的判断矩阵\n c = np.array([[1, 1, 3], [1, 1, 3], [1 / 3, 1 / 3, 1]]) # 对B3的判断矩阵\n d = np.array([[1, 3, 4], [1 / 3, 1, 1], [1 / 4, 1, 1]]) # 对B4的判断矩阵\n f = np.array([[1, 4, 1 / 2], [1 / 4, 1, 1 / 4], [2, 4, 1]]) # 对B5的判断矩阵\n # 进行一致性检验,并计算特征向量\n e = main(e) # 一致性检验并得到判断矩阵\n a = main(a)# 一致性检验并得到判断矩阵\n b = main(b)# 一致性检验并得到判断矩阵\n c = main(c)# 一致性检验并得到判断矩阵\n d = main(d)# 一致性检验并得到判断矩阵\n f = main(f)# 一致性检验并得到判断矩阵\n try:\n res = np.array([a, b, c, d, f]) # 将方案层对准则层的各归一化特征向量组合起来得到矩阵\n# ret = (np.transpose(res) * e).sum(axis=1)\n ret = np.dot(np.transpose(res) , e) # 计算出最底层对最高层的总排序的权值\n print(\"总排序:\" , ret) # 总排序\n except TypeError:\n print('数据有误,可能不满足一致性,请进行修改')", "0.016\n满足一致性\n权重特征向量为: [0.47439499 0.26228108 0.0544921 0.09853357 0.11029827]\n0.001\n满足一致性\n权重特征向量为: [0.08199023 0.23644689 0.68156288]\n0.005\n满足一致性\n权重特征向量为: [0.59488796 0.27661064 0.1285014 ]\n0.0\n满足一致性\n权重特征向量为: [0.42857143 0.42857143 0.14285714]\n0.008\n满足一致性\n权重特征向量为: [0.63274854 0.19239766 0.1748538 ]\n0.046\n满足一致性\n权重特征向量为: [0.34595035 0.11029711 0.54375254]\n总排序: [0.31878206 0.23919592 0.44202202]\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dc39e81947a8ede488abc8f84c43689b1687f2
31,374
ipynb
Jupyter Notebook
Machine Learning A Probabilistic Perspective/2Probability/F2.6/2.6poissonPlotDemo.ipynb
zcemycl/ProbabilisticPerspectiveMachineLearning
8291bc6cb935c5b5f9a88f7b436e6e42716c21ae
[ "MIT" ]
4
2019-11-20T10:20:29.000Z
2021-11-09T11:15:23.000Z
Machine Learning A Probabilistic Perspective/2Probability/F2.6/.ipynb_checkpoints/2.6poissonPlotDemo-checkpoint.ipynb
zcemycl/ProbabilisticPerspectiveMachineLearning
8291bc6cb935c5b5f9a88f7b436e6e42716c21ae
[ "MIT" ]
null
null
null
Machine Learning A Probabilistic Perspective/2Probability/F2.6/.ipynb_checkpoints/2.6poissonPlotDemo-checkpoint.ipynb
zcemycl/ProbabilisticPerspectiveMachineLearning
8291bc6cb935c5b5f9a88f7b436e6e42716c21ae
[ "MIT" ]
2
2020-05-27T03:56:38.000Z
2021-05-02T13:15:42.000Z
145.925581
7,400
0.889973
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "### From Binomial Distribution to Poisson Distribution\nThe binomial distribution is given by,\n$$Bin(k|n,\\theta) \\triangleq \\frac{n!}{k!(n-k)!}\\theta^k(1-\\theta)^{n-k} $$\nThe poisson distribution is given by,\n$$Poi(k|\\lambda) \\triangleq e^{-\\lambda}\\frac{\\lambda^k}{k!} $$\nProof:\nConsider $\\theta=\\frac{\\lambda}{n}$,\n\\begin{align*} &\\lim_{n\\rightarrow \\infty}\\binom{n}{k} (\\frac{\\lambda}{n})^k(1-\\frac{\\lambda}{n})^{n-k}=\\frac{\\lambda^k}{k!}\\lim_{n\\rightarrow \\infty}\\frac{n!}{(n-k)!}\\frac{1}{n^k}(1-\\frac{\\lambda}{n})^{n}(1-\\frac{\\lambda}{n})^{-k} \\\\\n&= \\frac{\\lambda^k}{k!}\\lim_{n\\rightarrow \\infty}\\frac{n(n-1)\\dots(n-k+1)}{n^k}(1-\\frac{\\lambda}{n})^{n}(1-\\frac{\\lambda}{n})^{-k} \\\\\n&\\approx \\frac{\\lambda^k}{k!}\\lim_{n\\rightarrow \\infty}(1-\\frac{\\lambda}{n})^{n}(1-\\frac{\\lambda}{n})^{-k} =e^{-\\lambda}\\frac{\\lambda^k}{k!}\n\\end{align*}", "_____no_output_____" ], [ "For $\\lambda \\in\\{1,10\\}$,", "_____no_output_____" ] ], [ [ "s = np.random.poisson(1, 1000000)\ncount, bins, ignored = plt.hist(s, density=True, rwidth=0.8)", "_____no_output_____" ], [ "s = np.random.poisson(10, 1000000)\ncount, bins, ignored = plt.hist(s,50, density=True)", "_____no_output_____" ] ], [ [ "From Figure 2.4, we have sampled from the binomial distribution. \\\nIf $n=100$, given $\\lambda=1$, then $\\theta = 1/100$.", "_____no_output_____" ] ], [ [ "s = np.random.randint(1,101,[100,1000000])\ns = np.where(s>1,0,s)\ncountbinary = np.count_nonzero(s,axis=0)\n\nplt.hist(countbinary)", "_____no_output_____" ] ], [ [ "GIven $\\lambda=10$, then $\\theta=1/10$", "_____no_output_____" ] ], [ [ "s = np.random.randint(1,11,[100,1000000])\ns = np.where(s>1,0,s)\ncountbinary = np.count_nonzero(s,axis=0)\n\nplt.hist(countbinary)", "_____no_output_____" ] ], [ [ "From Figures above, you can observe that with a greater $n$, the Binomial distribution can converge to the Poisson distribution.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7dc3b390dddd2c2c572bc2d92fa2c9e1c300b7e
10,942
ipynb
Jupyter Notebook
lessons/landlab/landlab-terrainbento/coupled_process_elements/model_basicVs_steady_solution.ipynb
josh-wolpert/espin
fa60f9c106af37d3a15730e7d9c3f35343c77d2f
[ "CC-BY-4.0" ]
27
2020-08-07T23:16:44.000Z
2022-03-30T15:59:16.000Z
lessons/landlab/landlab-terrainbento/coupled_process_elements/model_basicVs_steady_solution.ipynb
KarstModel/espin
8ad941c2798653000382a66656d3ae09f105db81
[ "CC-BY-4.0" ]
28
2020-07-09T21:28:49.000Z
2022-03-11T16:49:24.000Z
lessons/landlab/landlab-terrainbento/coupled_process_elements/model_basicVs_steady_solution.ipynb
KarstModel/espin
8ad941c2798653000382a66656d3ae09f105db81
[ "CC-BY-4.0" ]
48
2020-08-09T23:03:15.000Z
2021-06-18T20:50:11.000Z
34.51735
438
0.532535
[ [ [ "![terrainbento logo](../../../../media/terrainbento_logo.png)\n\n\n# terrainbento model BasicVs steady-state solution", "_____no_output_____" ], [ "This model shows example usage of the BasicVs model from the TerrainBento package.\n\nThe BasicVs model implements modifies Basic to use variable source area runoff using the \"\"effective area\"\" approach:\n\n$\\frac{\\partial \\eta}{\\partial t} = - KA_{eff}^{1/2}S + D\\nabla^2 \\eta$\n\nwhere\n\n$A_{eff} = R_m A e^{-\\alpha S / A}$\n\nand \n\n$\\alpha = \\frac{K_{sat} H_{init} dx}{R_m}$\n\nwhere $K$ and $D$ are constants, $S$ is local slope, and $\\eta$ is the topography. $A$ is the local upstream drainage area, $R_m$ is the average recharge (or precipitation) rate, $A_{eff}$ is the effective drainage area, $K_{sat}$ is the hydraulic conductivity, $H$ is the soil thickness, and $dx$ is the grid cell width. $\\alpha$ is a courtesy parameter called the \"saturation area scale\" that lumps together many constants.\n\nRefer to [Barnhart et al. (2019)](https://www.geosci-model-dev.net/12/1267/2019/) for further explaination. For detailed information about creating a BasicVs model, see [the detailed documentation](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.derived_models.model_basicVs.html).\n\nThis notebook (a) shows the initialization and running of this model, (b) saves a NetCDF file of the topography, which we will use to make an oblique Paraview image of the landscape, and (c) creates a slope-area plot at steady state.", "_____no_output_____" ] ], [ [ "from terrainbento import BasicVs\n\n# import required modules\nimport os\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\nfrom landlab import imshow_grid\nfrom landlab.io.netcdf import write_netcdf\n\nnp.random.seed(4897)\n\n#Ignore warnings \nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "# create the parameter dictionary needed to instantiate the model\n\nparams = {\n # create the Clock.\n \"clock\": {\n \"start\": 0,\n \"step\": 10,\n \"stop\": 1e7\n },\n\n # Create the Grid.\n \"grid\": {\n \"RasterModelGrid\": [(25, 40), {\n \"xy_spacing\": 40\n }, {\n \"fields\": {\n \"node\": {\n \"topographic__elevation\": {\n \"random\": [{\n \"where\": \"CORE_NODE\"\n }]\n },\n \"soil__depth\": {\n \"constant\": [{\n \"value\": 1.0\n }]\n }\n }\n }\n }]\n },\n\n # Set up Boundary Handlers\n \"boundary_handlers\": {\n \"NotCoreNodeBaselevelHandler\": {\n \"modify_core_nodes\": True,\n \"lowering_rate\": -0.001\n }\n },\n # Set up Precipitator\n \"precipitator\": {\n \"UniformPrecipitator\": {\n \"rainfall_flux\": 0.01\n }\n },\n\n # Parameters that control output.\n \"output_interval\": 1e4,\n \"save_first_timestep\": True,\n \"output_prefix\": \"output/basicVs\",\n \"fields\": [\"topographic__elevation\"],\n\n # Parameters that control process and rates.\n \"water_erodibility\": 0.001,\n \"m_sp\": 0.5,\n \"n_sp\": 1.0,\n \"regolith_transport_parameter\": 0.1,\n \"hydraulic_conductivity\": 10.\n}", "_____no_output_____" ], [ "# the tolerance here is high, so that this can run on binder and for tests. 
(recommended value = 0.001 or lower).\ntolerance = 20.0", "_____no_output_____" ], [ "# we can use an output writer to run until the model reaches steady state.\nclass run_to_steady(object):\n def __init__(self, model):\n self.model = model\n self.last_z = self.model.z.copy()\n self.tolerance = tolerance\n\n def run_one_step(self):\n if model.model_time > 0:\n diff = (self.model.z[model.grid.core_nodes] -\n self.last_z[model.grid.core_nodes])\n if max(abs(diff)) <= self.tolerance:\n self.model.clock.stop = model._model_time\n print(\"Model reached steady state in \" +\n str(model._model_time) + \" time units\\n\")\n else:\n self.last_z = self.model.z.copy()\n if model._model_time <= self.model.clock.stop - self.model.output_interval:\n self.model.clock.stop += self.model.output_interval", "_____no_output_____" ], [ "# initialize the model using the Model.from_dict() constructor.\n# We also pass the output writer here.\nmodel = BasicVs.from_dict(params, output_writers={\"class\": [run_to_steady]})\n\n# to run the model as specified, we execute the following line:\nmodel.run()", "_____no_output_____" ], [ "# MAKE SLOPE-AREA PLOT\n\n# plot nodes that are not on the boundary or adjacent to it\ncore_not_boundary = np.array(\n model.grid.node_has_boundary_neighbor(model.grid.core_nodes)) == False\nplotting_nodes = model.grid.core_nodes[core_not_boundary]\n\n# assign area_array and slope_array\narea_array = model.grid.at_node[\"drainage_area\"][plotting_nodes]\nslope_array = model.grid.at_node[\"topographic__steepest_slope\"][plotting_nodes]\n\n# instantiate figure and plot\nfig = plt.figure(figsize=(6, 3.75))\nslope_area = plt.subplot()\n\n# plot the data\nslope_area.scatter(area_array,\n slope_array,\n marker=\"o\",\n c=\"k\",\n label=\"Model BasicVs\")\n\n# make axes log and set limits\nslope_area.set_xscale(\"log\")\nslope_area.set_yscale(\"log\")\n\nslope_area.set_xlim(9 * 10**1, 1 * 10**6)\nslope_area.set_ylim(1e-4, 1e4)\n\n# set x and y labels\nslope_area.set_xlabel(r\"Drainage area [m$^2$]\")\nslope_area.set_ylabel(\"Channel slope [-]\")\n\nslope_area.legend(scatterpoints=1, prop={\"size\": 12})\nslope_area.tick_params(axis=\"x\", which=\"major\", pad=7)\n\nplt.show()", "_____no_output_____" ], [ "# Save stack of all netcdfs for Paraview to use.\n# model.save_to_xarray_dataset(filename=\"basicVs.nc\",\n# time_unit=\"years\",\n# reference_time=\"model start\",\n# space_unit=\"meters\")\n\n# remove temporary netcdfs\nmodel.remove_output_netcdfs()", "_____no_output_____" ], [ "# make a plot of the final steady state topography\nplt.figure()\nimshow_grid(model.grid, \"topographic__elevation\",cmap='terrain')\nplt.draw()", "_____no_output_____" ] ], [ [ "## Next Steps\n\n- [Welcome page](../Welcome_to_TerrainBento.ipynb)\n\n\n- There are three additional introductory tutorials: \n\n 1) [Introduction terrainbento](../example_usage/Introduction_to_terrainbento.ipynb) \n \n 2) [Introduction to boundary conditions in terrainbento](../example_usage/introduction_to_boundary_conditions.ipynb)\n \n 3) [Introduction to output writers in terrainbento](../example_usage/introduction_to_output_writers.ipynb). 
\n \n \n- Five examples of steady state behavior in coupled process models, plus one example driven by a real DEM, can be found in the following notebooks:\n\n 1) [Basic](model_basic_steady_solution.ipynb) the simplest landscape evolution model in the terrainbento package.\n\n 2) [BasicVm](model_basic_var_m_steady_solution.ipynb) which permits the drainage area exponent to change\n\n 3) [BasicCh](model_basicCh_steady_solution.ipynb) which uses a non-linear hillslope erosion and transport law\n\n 4) **This Notebook**: [BasicVs](model_basicVs_steady_solution.ipynb) which uses variable source area hydrology\n\n 5) [BasicRt](model_basicRt_steady_solution.ipynb) which allows for two lithologies with different K values\n \n 6) [RealDEM](model_basic_realDEM.ipynb) Run the basic terrainbento model with a real DEM as initial condition. ", "_____no_output_____" ] ] ]
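To make the "effective area" formula above concrete, here is a small sketch (not part of the notebook) that evaluates $A_{eff} = R_m A e^{-\alpha S / A}$ with the notebook's parameter values; treating `rainfall_flux` as $R_m$ is my reading of the parameter dictionary, so take the numbers as illustrative only.

```python
import numpy as np

# parameter values from the params dictionary above
K_sat, H_init, dx, R_m = 10.0, 1.0, 40.0, 0.01
alpha = K_sat * H_init * dx / R_m      # saturation area scale

S = 0.01                               # an example local slope
for A in [1e4, 1e5, 1e6]:              # example drainage areas in m^2
    A_eff = R_m * A * np.exp(-alpha * S / A)
    print(A, A_eff)                    # A_eff approaches R_m * A as A grows
```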
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7dc3baa61c7dfd964718812d9fc915d85904068
17,050
ipynb
Jupyter Notebook
clinicaaddl/clinicaaddl/ShowPictures.ipynb
HorlavaNastassya/AD-DL
58999a8300cea9a8f3b868a968281dc8f148a532
[ "MIT" ]
null
null
null
clinicaaddl/clinicaaddl/ShowPictures.ipynb
HorlavaNastassya/AD-DL
58999a8300cea9a8f3b868a968281dc8f148a532
[ "MIT" ]
null
null
null
clinicaaddl/clinicaaddl/ShowPictures.ipynb
HorlavaNastassya/AD-DL
58999a8300cea9a8f3b868a968281dc8f148a532
[ "MIT" ]
null
null
null
41.283293
346
0.521584
[ [ [ "!python ~/MasterProject/Code/ClinicaTools/AD-DL/clinicaaddl/clinicaaddl/main.py train /u/horlavanasta/MasterProject//DataAndExperiments/Experiments/Experiments-1.5T-3T/NNs_Bayesian/ResNet18/subject_model-ResNet18_preprocessing-linear_task-AD_CN_norm-1_loss-WeightedCrossEntropy_augmTrue --n_splits 1 --split 0 --batch_size 5 ", "_____no_output_____" ], [ "def show_fpg(data_batch, indices=None,plane=\"sag\", num_rows=2, \n num_cols=2, name=None, folder=\"/current_augmentations_examples/\"):\n \n import matplotlib.pyplot as plt\n import numpy as np\n \n fig, axes = plt.subplots(num_rows, num_cols, figsize=((int(8 * num_rows), int(6 * num_cols))))\n print(data_batch.shape)\n data_batch=data_batch[:num_rows*num_cols].reshape(num_rows, num_cols, data_batch.shape[1], data_batch.shape[2], data_batch.shape[3], \n data_batch.shape[4])\n print(data_batch.shape)\n\n for row in range(num_rows):\n for col in range(num_cols):\n\n i, j, k = indices\n data=data_batch[row][col]\n kwargs = dict(cmap='gray', interpolation='none')\n slices=dict()\n slices[\"sag\"], slices[\"cor\"], slices[\"axi\"] = np.rot90(data[0, i]), np.rot90(data[0, :, j]), np.rot90(data[0, ..., k])\n\n axes[row][col].imshow(slices[plane],**kwargs)\n axes[row][col].axis('off')\n\n if name is not None:\n fig.suptitle(name)\n plt.subplots_adjust( left=0.05, right=0.95, top=0.95, bottom=0.05, wspace=0.05, hspace=0.05)\n plt.show()\n plt.close()", "_____no_output_____" ], [ "def translate_parameters(args,batch_size=8 ):\n \"\"\"\n Translate the names of the parameters between command line and source code.\n \"\"\"\n args.gpu = False\n args.num_workers = args.nproc\n args.optimizer = \"Adam\"\n args.batch_size=batch_size\n # args.loss = \"default\"\n\n if hasattr(args, \"caps_dir\"):\n args.input_dir = args.caps_dir\n if hasattr(args, \"unnormalize\"):\n args.minmaxnormalization = not args.unnormalize\n if hasattr(args, \"slice_direction\"):\n args.mri_plane = args.slice_direction\n if hasattr(args, \"network_type\"):\n args.mode_task = args.network_type\n\n if not hasattr(args, \"selection_threshold\"):\n args.selection_threshold = None\n \n if not hasattr(args, \"verbose\"):\n args.verbose = 0\n if not hasattr(args, \"bayesian\"):\n args.bayesian = False\n\n if not hasattr(args, \"prepare_dl\"):\n if hasattr(args, \"use_extracted_features\"):\n args.prepare_dl = args.use_extracted_features\n elif hasattr(args, \"use_extracted_patches\") and args.mode == \"patch\":\n args.prepare_dl = args.use_extracted_patches\n elif hasattr(args, \"use_extracted_slices\") and args.mode == \"slice\":\n args.prepare_dl = args.use_extracted_slices\n elif hasattr(args, \"use_extracted_roi\") and args.mode == \"roi\":\n args.prepare_dl = args.use_extracted_roi\n\n return args\n\ndef set_pubfig():\n import seaborn as sns\n# sns.set_context(\"paper\", rc={\"font.size\":14,\"axes.titlesize\":22,\"axes.labelsize\":22}) \n# sns.axes_style(\"whitegrid\")\n sns.set(font_scale = 1.75)\n sns.set_style('white')\n \ndef show_fpg_ax(img, axes, indices=None, plane=\"sag\"):\n \n import matplotlib.pyplot as plt\n import numpy as np\n print(img.shape)\n if indices is None: \n i, j, k = img.shape[0]//3, img.shape[1]//3, img.shape[2]//3\n \n kwargs = dict(cmap='gray', interpolation='none')\n slices=dict()\n slices[\"sag\"], slices[\"cor\"], slices[\"axi\"] = np.rot90(img[i]), np.rot90(img[:, j]), np.rot90(img[..., k])\n axes.imshow(slices[plane],cmap='gray', interpolation='none')\n axes.axis('off')\n\n ", "_____no_output_____" ], [ " \ndef 
get_unprocessed_images(participant_id, session_id, bids_directory=\"/u/horlavanasta/MasterProject/DataAndExperiments/Data/BIDS\"):\n import numpy as np\n import nibabel as nib\n imgs_before=[]\n for i in range(len(participant_id)):\n image_path=os.path.join(bids_directory, participant_id[i], session_id[i], \"anat\",participant_id[i] + '_' + session_id[i]\n +'_T1w.nii.gz')\n image_nii = nib.load(image_path)\n image_np = image_nii.get_fdata()\n imgs_before.append(image_np)\n return imgs_before\n \n \ndef show_preprocessing(imgs_before, imgs_after, num_cols=4, plane=\"sag\", MS=\"1.5T\"):\n import matplotlib.pyplot as plt\n import numpy as np\n fig, axes = plt.subplots(2, num_cols, \n figsize=((int(8 *num_cols), int(8 * 2)))\n )\n for i in range(num_cols):\n show_fpg_ax(imgs_before[i],axes[0][i], plane=plane)\n show_fpg_ax(imgs_after[i][0],axes[1][i], plane=plane)\n# plt.subplots_adjust( left=0.01, right=0.99, top=0.99, bottom=0.01)\n\n plt.subplots_adjust(left=0.01, right=0.99, top=0.99, bottom=0.01, wspace=0.02, hspace=0.02)\n path = '../../plots'\n plt.savefig(os.path.join(path, 'preprocessing_%s.png'%MS))\n plt.close()\n \n", "_____no_output_____" ], [ "\ndef get_augmentation_list(data_augmentation):\n from tools.deep_learning.augmentations import Augmentation\n augmentation_dict = {\n 'RandomBlur': Augmentation(\"RandomBlur\", {\"std\":(0.8, 0.8)}),\n 'RandomNoise': Augmentation('RandomNoise', {'mean': (0.07, 0.07), 'std': (0.02, 0.02)}),\n 'RandomBiasField': Augmentation('RandomBiasField', {'coefficients': (0.3, 0.3), \"order\": 2}),\n \"RandomGamma\": Augmentation(\"RandomGamma\", {\"log_gamma\": (-0.35, -0.35)}),\n \"RandomRotation\": Augmentation(\"RandomAffine\", {\"degrees\": (15, 15, 0,0,0,0), \"scales\": (1.0, 1.0), \"isotropic\": True,\n \"default_pad_value\": 'mean'}),\n \"RandomScaling\": Augmentation(\"RandomAffine\", {\"degrees\": (0, 0), \"scales\": (0.8, 0.8), \"isotropic\": True,\n \"default_pad_value\": 'mean'}),\n \"RandomMotion\": Augmentation(\"RandomMotion\",\n {\"degrees\": (0.4, 0.4), \"translation\": (2., 2.), \"num_transforms\": 1}),\n }\n\n augmentation_list = [augmentation_dict[augmentation] for augmentation in data_augmentation]\n\n return augmentation_list\n\ndef create_tensor_augmentations(augmentations):\n augmentations_tio = []\n for i, el in enumerate(augmentations):\n temp_augm = el.create_augmentation()\n augmentations_tio.append(temp_augm)\n return augmentations_tio\naugmentation_name_map={\n \"Original\":\"Original\",\n \"RandomBlur\":\"Blurring\",\n \"RandomNoise\": \"Gaussian Noise\",\n \"RandomBiasField\":\"Bias field artifact\", \n \"RandomRotation\":\"Rotation\",\n \"RandomScaling\":\"Scaling\", \n \"RandomGamma\":\"Contrast\",\n \"RandomMotion\":\"Motion effect\"\n }\ndef show_augmentations(img, num_cols=3):\n import matplotlib.pyplot as plt\n import numpy as np\n \n data_augmentation=[\n \"RandomBlur\", \"RandomNoise\", \n \"RandomRotation\", \"RandomBiasField\", \"RandomGamma\", \n \"RandomScaling\", \"RandomMotion\"\n ]\n \n augmentations_list=create_tensor_augmentations(get_augmentation_list(data_augmentation))\n \n fig, axes = plt.subplots(3, num_cols, \n figsize=((int(8 *num_cols), int(8 * 3)))\n ) \n set_pubfig()\n img_list=[img]\n for augm in augmentations_list:\n tmp_img=augm(img)\n img_list.append(tmp_img)\n data_augmentation.insert(0, \"Original\")\n \n for i in range(num_cols):\n show_fpg_ax(img_list[i].numpy()[0],axes[0][i])\n axes[0][i].set_title(augmentation_name_map[data_augmentation[i]])\n 
show_fpg_ax(img_list[i+num_cols].numpy()[0],axes[1][i])\n axes[1][i].set_title(augmentation_name_map[data_augmentation[i+num_cols]])\n \n for i in range(num_cols-1):\n show_fpg_ax(img_list[i+num_cols*2].numpy()[0],axes[2][i])\n axes[2][i].set_title(augmentation_name_map[data_augmentation[i+num_cols*2]])\n\n axes[-1, -1].axis('off')\n \n plt.subplots_adjust(left=0.01, right=0.99, top=0.99, bottom=0.01, wspace=0.02, hspace=0.01)\n path = '../../plots'\n plt.savefig(os.path.join(path, 'Augmentations_%s.png'%MS))\n plt.close()\n \n ", "_____no_output_____" ], [ "def show_data(model_folder, name=None, plane=\"sag\", MS=\"1.5T\"):\n from tools.deep_learning.models import init_model\n from tools.deep_learning.data import (get_transforms,\n load_data,\n return_dataset,\n generate_sampler)\n from tools.deep_learning.iotools import return_logger\n from argparse import Namespace\n from torch.utils.data import DataLoader\n import torch\n\n\n path_params = os.path.join(model_folder, \"commandline_train.json\")\n with open(path_params, \"r\") as f:\n params = json.load(f)\n params = translate_parameters(Namespace(**params))\n main_logger = return_logger(params.verbose, \"main process\")\n\n \n train_transforms, all_transforms = get_transforms(params.mode,\n minmaxnormalization=params.minmaxnormalization,\n data_augmentation=None,\n output_dir=None)\n training_df, valid_df = load_data(\n params.tsv_path,\n params.diagnoses,\n 0,\n n_splits=params.n_splits,\n baseline=params.baseline,\n logger=main_logger\n )\n\n \n data_valid = return_dataset(params.mode, params.input_dir, valid_df, params.preprocessing,\n train_transformations=train_transforms, all_transformations=all_transforms,\n params=params)\n\n \n valid_loader = DataLoader(\n data_valid,\n batch_size=params.batch_size,\n shuffle=False,\n num_workers=params.num_workers,\n pin_memory=True\n )\n \n sample = next(iter(valid_loader))\n# show_augmentations(sample[\"image\"][0])\n# show_fpg(sample[\"image\"].numpy(), name=name, plane=plane)\n participant_id, session_id =sample['participant_id'],sample[\"session_id\"]\n imgs_after=sample[\"image\"].numpy()\n imgs_before=get_unprocessed_images(participant_id, session_id)\n\n show_preprocessing(imgs_before, imgs_after, num_cols=4, plane=plane, MS=MS)\n \n ", "_____no_output_____" ], [ "import pathlib\nimport pandas as pd\nimport os\nimport json\n\nfolders = []\nMS_main_list = ['1.5T', \"3T\"]\n# MS_main_list = [\"3T\"]\n\nMS_list_dict = {'1.5T':['1.5T', '3T'], \"3T\": ['3T', '1.5T'], \"1.5T-3T\": [\"1.5T-3T\"]}\nhome_folder='/u/horlavanasta/MasterProject/'\n\nisBayesian=True\nfor MS in MS_main_list[:]:\n print(\"MS %s \\n ____________________________________________________________________________________________\"%MS)\n model_types = [\"ResNet18\"]\n \n model_dir_general = os.path.join(home_folder,\"DataAndExperiments/Experiments_5-fold/Experiments-\" + MS, \"NNs\" if isBayesian else \"NNs\")\n for network in model_types[:]:\n model_dir = os.path.join(model_dir_general, network)\n # output_dir = pathlib.Path(output_dir)\n modelPatter = \"subject_model*\"\n folders = [f for f in pathlib.Path(model_dir).glob(modelPatter)]\n\n for f in folders[:1]:\n \n print(f)\n# show_model(f)\n show_data(f, name=None, plane=\"sag\", MS=MS)\n# show_data(f, plane=\"sag\")\n# show_data(f, plane=\"cor\")\n# show_data(f, plane=\"axi\")\n ", "MS 1.5T \n 
____________________________________________________________________________________________\n/u/horlavanasta/MasterProject/DataAndExperiments/Experiments_5-fold/Experiments-1.5T/NNs/ResNet18/subject_model-ResNet18_preprocessing-linear_task-AD_CN_norm-1_loss-WeightedCrossEntropy_augmFalse\n(166, 256, 256)\n(169, 208, 179)\n(166, 256, 256)\n(169, 208, 179)\n(166, 256, 256)\n(169, 208, 179)\n(160, 192, 192)\n(169, 208, 179)\nMS 3T \n ____________________________________________________________________________________________\n/u/horlavanasta/MasterProject/DataAndExperiments/Experiments_5-fold/Experiments-3T/NNs/ResNet18/subject_model-ResNet18_preprocessing-linear_task-AD_CN_norm-1_loss-WeightedCrossEntropy_augmFalse_20210612_195440\n(196, 256, 256)\n(169, 208, 179)\n(176, 240, 256)\n(169, 208, 179)\n(196, 256, 256)\n(169, 208, 179)\n(196, 256, 256)\n(169, 208, 179)\n" ], [ "hl_graph", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7dc4f3ba01db7a2a0d639c978b740f8958d8d98
2,282
ipynb
Jupyter Notebook
test/dist-correct-ok/exam_79/test-exam.ipynb
chrispyles/jexam
ebe83b170f51c5820e0c93955824c3798922f097
[ "BSD-3-Clause" ]
1
2020-07-25T02:36:38.000Z
2020-07-25T02:36:38.000Z
test/dist-correct-ok/exam_79/test-exam.ipynb
chrispyles/jexam
ebe83b170f51c5820e0c93955824c3798922f097
[ "BSD-3-Clause" ]
null
null
null
test/dist-correct-ok/exam_79/test-exam.ipynb
chrispyles/jexam
ebe83b170f51c5820e0c93955824c3798922f097
[ "BSD-3-Clause" ]
null
null
null
17.553846
226
0.498247
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7dc4fc9409d5c3fb287243a40794ece86d34b84
53,149
ipynb
Jupyter Notebook
notebooks/WSTA_N21_machine_translation.ipynb
trevorcohn/comp90042
b7ba37bc39d61c4b6e367274a83ab94421a8bc38
[ "BSD-4-Clause-UC" ]
44
2017-11-24T09:22:19.000Z
2021-12-04T13:22:15.000Z
notebooks/WSTA_N21_machine_translation.ipynb
trevorcohn/comp90042
b7ba37bc39d61c4b6e367274a83ab94421a8bc38
[ "BSD-4-Clause-UC" ]
null
null
null
notebooks/WSTA_N21_machine_translation.ipynb
trevorcohn/comp90042
b7ba37bc39d61c4b6e367274a83ab94421a8bc38
[ "BSD-4-Clause-UC" ]
23
2017-10-17T10:39:28.000Z
2022-02-22T11:39:30.000Z
39.428042
561
0.501308
[ [ [ "# Machine Translation: Word-based SMT using IBM1", "_____no_output_____" ], [ "In this notebook we'll be looking at the IBM Model 1 word alignment method. This will be used to demonstrate the process of training, using expectation maximisation, over a toy dataset. Note that this dataset and presentation closely follows JM2 Chapter 25. The optimised version of the code is based on Koehn09 Chapter 4. ", "_____no_output_____" ] ], [ [ "from collections import defaultdict\nimport itertools", "_____no_output_____" ] ], [ [ "Our dataset will consist of two very short sentence pairs.", "_____no_output_____" ] ], [ [ "bitext = []\nbitext.append((\"green house\".split(), \"casa verde\".split()))\nbitext.append((\"the house\".split(), \"la casa\".split()))", "_____no_output_____" ] ], [ [ "Based on the vocabulary items in Spanish and English, we initialise our translation table, *t*, to a uniform distribution. That is, for each word type in English, we set all translations in Spanish to have 1/3.", "_____no_output_____" ] ], [ [ "t0 = defaultdict(dict)\nfor en_type in \"the green house\".split():\n for es_type in \"la casa verde\".split():\n t0[en_type][es_type] = 1.0 / 3\nt0", "_____no_output_____" ] ], [ [ "Now for the algorithm itself. Although we tend to merge the expectation and maximisation steps (to save storing big data structures for the expected counts), here we'll do the two separately for clarity. Also, following JM:\n - we won't apply the optimisation for IBM1 which allows us to deal with each position *j* independently. Instead we enumerate the space of all alignments using a cartesian product, see *itertools.product*.\n - we don't consider alignments to the null word", "_____no_output_____" ] ], [ [ "def expectation_step(bitext, translation_probs):\n expectations = []\n for E, F in bitext:\n I = len(E)\n J = len(F)\n # store the unnormalised alignment probabilities\n align = []\n # track the sum of unnormalised alignment probabilities\n Z = 0\n for A in itertools.product(range(I), range(I)):\n pr = 1.0\n for j, aj in enumerate(A):\n pr *= translation_probs[E[aj]][F[j]]\n align.append([A, E, F, pr])\n Z += pr\n # normalise align to produce the alignment probabilities\n for atuple in align:\n atuple[-1] /= Z\n # save the expectations for the M step\n expectations.extend(align)\n return expectations", "_____no_output_____" ] ], [ [ "Let's try running this and see what the expected alignments are", "_____no_output_____" ] ], [ [ "e0 = expectation_step(bitext, t0)\ne0", "_____no_output_____" ] ], [ [ "We can also view this graphically. You need to have <a href=\"https://graphviz.gitlab.io/download/\">Graphviz - Graph Visualization Software</a> installed and the path to its bin folder e.g. C:\\Program Files (x86)\\Graphviz2.38\\bin added to PATH.", "_____no_output_____" ] ], [ [ "from IPython.display import SVG, display\nfrom nltk.translate import AlignedSent, Alignment\n\ndef display_expect(expectations):\n stuff = []\n for A, E, F, prob in expectations:\n if prob > 0.01:\n stuff.append('Prob = %.4f' % prob)\n asent = AlignedSent(F, E, Alignment(list(enumerate(A))))\n stuff.append(SVG(asent._repr_svg_()))\n return display(*stuff)\n\ndisplay_expect(e0)", "_____no_output_____" ] ], [ [ "Note the uniform probabilities for each option (is this a surprise, given our initialisation?) Next up we need to learn the model parameters *t* from these expectations. This is simply a matter of counting occurrences of translation pairs, weighted by their probability. 
", "_____no_output_____" ] ], [ [ "def maximization_step(expectations):\n counts = defaultdict(dict)\n for A, E, F, prob in expectations:\n for j, aj in enumerate(A):\n counts[E[aj]].setdefault(F[j], 0.0)\n counts[E[aj]][F[j]] += prob\n \n translations = defaultdict(dict)\n for e, fcounts in counts.items():\n tdict = translations[e]\n total = float(sum(fcounts.values()))\n for f, count in fcounts.items():\n tdict[f] = count / total\n \n return translations", "_____no_output_____" ] ], [ [ "Now we can test this over our expectations. Do you expect this to be uniform like *t0*?", "_____no_output_____" ] ], [ [ "t1 = maximization_step(e0)\nt1", "_____no_output_____" ] ], [ [ "With working E and M steps, we can now iterate!", "_____no_output_____" ] ], [ [ "t = t0\nfor step in range(10):\n e = expectation_step(bitext, t)\n t = maximization_step(e)\nt", "_____no_output_____" ], [ "display_expect(e)", "_____no_output_____" ] ], [ [ "Great, we've learned sensible translations as we hoped. Try viewing the expectations using *display_expect*, and vary the number of iterations. What happens to the learned parameters? ", "_____no_output_____" ], [ "# Speeding things up", "_____no_output_____" ], [ "Recall that the E-step above uses a naive enumeration over all possible alignments, which is going to be woefully slow for anything other than toy data. (What's its computational complexity?) Thankfully a bit of algebraic manipulation of the model1 formulation of $P(A|E,F)$ gives rise to a much simple formulation. Let's give this a try. ", "_____no_output_____" ] ], [ [ "def fast_em(bitext, translation_probs):\n # E-step, computing counts as we go\n counts = defaultdict(dict)\n for E, F in bitext:\n I = len(E)\n J = len(F)\n # each j can be considered independently of the others\n for j in range(J):\n # get the translation probabilities (unnormalised)\n prob_ajs = []\n for aj in range(I):\n prob_ajs.append(translation_probs[E[aj]][F[j]])\n # compute denominator for normalisation\n z = sum(prob_ajs)\n # maintain running counts (this is really part of the M-step)\n for aj in range(I):\n counts[E[aj]].setdefault(F[j], 0.0)\n counts[E[aj]][F[j]] += prob_ajs[aj] / z\n \n # Rest of the M-step to normalise counts\n translations = defaultdict(dict)\n for e, fcounts in counts.items():\n tdict = translations[e]\n total = float(sum(fcounts.values()))\n for f, count in fcounts.items():\n tdict[f] = count / total\n \n return translations", "_____no_output_____" ] ], [ [ "We can test that the parameters learned in each step match what we computed before. What's the time complexity of this algorithm? ", "_____no_output_____" ] ], [ [ "t1p = fast_em(bitext, t0)\nt1p", "_____no_output_____" ], [ "t2p = fast_em(bitext, t1)\nt2p", "_____no_output_____" ] ], [ [ "# Alignment models in NLTK", "_____no_output_____" ], [ "NLTK has a range of translation tools, including the IBM models 1 - 5. These are implemented in their full glory, including the null alignment, and complex optimisation algorithms for models 3 and up. Note that model 4 requires a clustering of the vocabulary, see the [documentation](http://www.nltk.org/api/nltk.translate.html) for details.", "_____no_output_____" ] ], [ [ "from nltk.translate import IBMModel3\n\nbt = [AlignedSent(E,F) for E,F in bitext]\nm = IBMModel3(bt, 5)\n\nm.translation_table", "_____no_output_____" ] ], [ [ "NLTK also includes a small section of the Europarl corpus (about 20K sentence pairs). 
You might want to apply the alignment models to this larger dataset, although be aware that you will first need to do sentence alignment to discard any sentences that aren't aligned 1:1, e.g., using [nltk.translate.gale_church](http://www.nltk.org/api/nltk.translate.html#module-nltk.translate.gale_church) to infer the best alignment. You might also want to lower-case the dataset, which keeps the vocabulary small enough for reasonable runtime and robust estimation.", "_____no_output_____" ] ], [ [ "import nltk\nnltk.download('europarl_raw')\n\nfrom nltk.corpus.europarl_raw import english, spanish\n\nprint(english.sents()[0])\nprint(spanish.sents()[0])", "[nltk_data] Downloading package europarl_raw to\n[nltk_data] /Users/tcohn/nltk_data...\n[nltk_data] Package europarl_raw is already up-to-date!\n['Resumption', 'of', 'the', 'session', 'I', 'declare', 'resumed', 'the', 'session', 'of', 'the', 'European', 'Parliament', 'adjourned', 'on', 'Friday', '17', 'December', '1999', ',', 'and', 'I', 'would', 'like', 'once', 'again', 'to', 'wish', 'you', 'a', 'happy', 'new', 'year', 'in', 'the', 'hope', 'that', 'you', 'enjoyed', 'a', 'pleasant', 'festive', 'period', '.']\n['Reanudación', 'del', 'período', 'de', 'sesiones', 'Declaro', 'reanudado', 'el', 'período', 'de', 'sesiones', 'del', 'Parlamento', 'Europeo', ',', 'interrumpido', 'el', 'viernes', '17', 'de', 'diciembre', 'pasado', ',', 'y', 'reitero', 'a', 'Sus', 'Señorías', 'mi', 'deseo', 'de', 'que', 'hayan', 'tenido', 'unas', 'buenas', 'vacaciones', '.']\n" ] ] ]
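For comparison with the hand-rolled EM above, a short sketch (not in the original notebook) runs NLTK's own IBM Model 1 on the same toy bitext; unlike the code above it includes the null word, so the learned probabilities will be close to, but not identical with, the hand-computed *t*.

```python
from nltk.translate import AlignedSent, IBMModel1

bt = [AlignedSent("green house".split(), "casa verde".split()),
      AlignedSent("the house".split(), "la casa".split())]
m1 = IBMModel1(bt, 10)  # 10 EM iterations
print(m1.translation_table["house"]["casa"])  # should approach 1 after training
```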
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dc57491d1971b0097ddebf2b948812f0225809
69,352
ipynb
Jupyter Notebook
Recurrent Neural Network.ipynb
Yeok-c/Urban-Sound-Classification
98c46eb54266ef7b859d192e9bebe8a5d48e1708
[ "Apache-2.0" ]
null
null
null
Recurrent Neural Network.ipynb
Yeok-c/Urban-Sound-Classification
98c46eb54266ef7b859d192e9bebe8a5d48e1708
[ "Apache-2.0" ]
null
null
null
Recurrent Neural Network.ipynb
Yeok-c/Urban-Sound-Classification
98c46eb54266ef7b859d192e9bebe8a5d48e1708
[ "Apache-2.0" ]
null
null
null
121.244755
17,718
0.783784
[ [ [ "### Load necessary libraries ###\nimport glob\nimport os\nimport librosa\n\nimport numpy as np\nfrom sklearn.model_selection import KFold\nfrom sklearn.metrics import accuracy_score\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional\n", "_____no_output_____" ], [ "### Define helper functions ###\ndef extract_features(parent_dir, sub_dirs, file_ext=\"*.wav\", \n bands=60, frames=41):\n \n def _windows(data, window_size):\n start = 0\n while start < len(data):\n yield start, start + window_size\n start += (window_size // 2) \n\n window_size = 512 * (frames - 1)\n features, labels = [], []\n\n for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):\n # For each file, load and turn into spectrograms according to windows\n segment_mfcc, segment_labels = [], []\n sound_clip, sr = librosa.load(fn)\n label = int(fn.split('/')[2].split('-')[1])\n for (start,end) in _windows(sound_clip,window_size):\n if(len(sound_clip[start:end]) == window_size):\n signal = sound_clip[start:end]\n # mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=bands) #.T.flatten()[:, np.newaxis].T\n # segment_mfcc.append(mfcc)\n # segment_labels.append(label)\n melspec = librosa.feature.melspectrogram(signal,n_mels=bands)\n S_dB = librosa.amplitude_to_db(melspec, ref=np.max)\n logspec = S_dB.flatten()[:, np.newaxis]\n segment_mfcc.append(logspec)\n segment_labels.append(label)\n \n # (7x41x20) For a 4 second clip, turn into 7 frames\n \n\n # Unclear why to reshape into same shape...?\n # segment_mfcc = np.asarray(segment_mfcc).reshape(\n # len(segment_mfcc),frames,bands)\n \n # Append into (n, (7,41,20)) ish array\n if len(segment_mfcc) > 0 : # check for empty segments \n features.append(segment_mfcc)\n labels.append(segment_labels) \n \n return features, labels", "_____no_output_____" ], [ "# # MFCC is questionable??\n\n# import librosa.display\n# import matplotlib.pyplot as plt\n\n# files = ['UrbanSounds8K/audio/fold1\\\\105415-2-0-19.wav', 'UrbanSounds8K/audio/fold1\\\\105415-2-0-21.wav']\n# segment_mfcc, segment_labels = [], []\n\n# for fn in files:\n# start = 0\n# end = 512*40\n# bands = 20\n\n# sound_clip,sr = librosa.load(fn)\n# signal = sound_clip[start:end]\n# label = int(fn.split('/')[2].split('-')[1])\n# fig, ax = plt.subplots()\n\n# # melspec = librosa.feature.melspectrogram(signal,n_mels=bands)\n# # S_dB = librosa.amplitude_to_db(melspec, ref=np.max)\n# # logspec = S_dB.flatten()[:, np.newaxis]\n# signal = sound_clip[start:end]\n# mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=bands) #.T.flatten()[:, np.newaxis].T\n# img = librosa.display.specshow(mfcc, x_axis='time',\n# sr=sr, y_axis='mel', #fmax=8000,\n# ax=ax)\n\n# fig.colorbar(img, ax=ax, format='%+2.0f dB')\n# ax.set(title='MFCC spectrogram')\n# # segment_log_specgrams.append(logspec)\n# # segment_labels.append(label)\n# segment_mfcc.append(mfcc)\n# segment_labels.append(label)\n\n\n# # np.shape(segment_log_specgrams) = (2, 1, 2460)\n", "_____no_output_____" ], [ "parent_dir = 'UrbanSounds8K/audio/'\nsave_dir = \"UrbanSounds8K/processed_crnn/\"\nfolds = sub_dirs = np.array(['fold1','fold2','fold3','fold4',\n 'fold5','fold6','fold7','fold8',\n 'fold9','fold10'])\nfor sub_dir in sub_dirs:\n features, labels = extract_features(parent_dir,sub_dir)\n np.savez(\"{0}{1}\".format(save_dir, sub_dir), features=features, \n labels=labels)\n print(np.shape(features[0]))", "C:\\ProgramData\\Miniconda3\\envs\\usc39\\lib\\site-packages\\numpy\\lib\\npyio.py:719: 
VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.\n val = np.asanyarray(val)\n" ], [ "### Define GRU based recurrent network architecture ###\ndef get_network():\n # num_filters = [24,32,64,128] \n pool_size = (2, 2) \n kernel_size = (3, 3) \n input_shape = (60, 41, 2)\n num_classes = 10\n keras.backend.clear_session()\n \n model = keras.models.Sequential()\n model.add(keras.layers.Conv2D(24, kernel_size,\n padding=\"same\", input_shape=input_shape))\n model.add(keras.layers.BatchNormalization())\n model.add(keras.layers.Activation(\"relu\"))\n model.add(keras.layers.MaxPooling2D(pool_size=pool_size))\n model.add(keras.layers.Dropout(.2))\n\n model.add(keras.layers.Conv2D(32, kernel_size,\n padding=\"same\"))\n model.add(keras.layers.BatchNormalization())\n model.add(keras.layers.Activation(\"relu\")) \n model.add(keras.layers.MaxPooling2D(pool_size=pool_size))\n model.add(keras.layers.Dropout(.2))\n \n model.add(keras.layers.Conv2D(64, kernel_size,\n padding=\"same\"))\n model.add(keras.layers.BatchNormalization())\n model.add(keras.layers.Activation(\"relu\")) \n model.add(keras.layers.MaxPooling2D(pool_size=pool_size))\n model.add(keras.layers.Dropout(.2))\n \n model.add(keras.layers.Conv2D(128, kernel_size,\n padding=\"same\"))\n model.add(keras.layers.BatchNormalization())\n model.add(keras.layers.Activation(\"relu\")) \n model.add(keras.layers.Dropout(.2))\n\n # # model.add(keras.layers.GlobalMaxPooling2D())\n # (None, 7, 5, 128)\n # (batch_size, timesteps, input_dim)\n model.add(tf.keras.layers.Reshape((35,128), input_shape=(7,5,128)))\n input_shape = (35, 128)\n model.add(keras.layers.LSTM(128, input_shape=input_shape))\n\n # model.add(keras.layers.GRU(128, input_shape=input_shape))\n\n model.add(keras.layers.Dense(128, activation=\"relu\"))\n model.add(keras.layers.Dense(num_classes, activation=\"softmax\"))\n\n model.compile(optimizer=keras.optimizers.Adam(1e-4), \n loss=keras.losses.SparseCategoricalCrossentropy(), \n metrics=[\"accuracy\"])\n \n return model\n\n# def get_network():\n# input_shape = (41, 20)\n# num_classes = 10\n# keras.backend.clear_session()\n \n# model = keras.models.Sequential()\n# model.add(keras.layers.GRU(128, input_shape=input_shape))\n# model.add(keras.layers.Dense(128, activation=\"relu\"))\n# model.add(keras.layers.Dense(num_classes, activation = \"softmax\"))\n# model.compile(optimizer=keras.optimizers.Adam(1e-4), \n# loss=keras.losses.SparseCategoricalCrossentropy(), \n# metrics=[\"accuracy\"])\n \n# return model\nmodel = get_network()\nmodel.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 60, 41, 24) 456 \n_________________________________________________________________\nbatch_normalization (BatchNo (None, 60, 41, 24) 96 \n_________________________________________________________________\nactivation (Activation) (None, 60, 41, 24) 0 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 30, 20, 24) 0 \n_________________________________________________________________\ndropout (Dropout) (None, 30, 20, 24) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 30, 20, 
32) 6944 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 30, 20, 32) 128 \n_________________________________________________________________\nactivation_1 (Activation) (None, 30, 20, 32) 0 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 15, 10, 32) 0 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 15, 10, 32) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 15, 10, 64) 18496 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 15, 10, 64) 256 \n_________________________________________________________________\nactivation_2 (Activation) (None, 15, 10, 64) 0 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 7, 5, 64) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 7, 5, 64) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 7, 5, 128) 73856 \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, 7, 5, 128) 512 \n_________________________________________________________________\nactivation_3 (Activation) (None, 7, 5, 128) 0 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 7, 5, 128) 0 \n_________________________________________________________________\nreshape (Reshape) (None, 35, 128) 0 \n_________________________________________________________________\nlstm (LSTM) (None, 128) 131584 \n_________________________________________________________________\ndense (Dense) (None, 128) 16512 \n_________________________________________________________________\ndense_1 (Dense) (None, 10) 1290 \n=================================================================\nTotal params: 250,130\nTrainable params: 249,634\nNon-trainable params: 496\n_________________________________________________________________\n" ], [ "## Not working\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\n\ndef vgg_style(x):\n \"\"\"\n The original feature extraction structure from CRNN paper.\n Related paper: https://ieeexplore.ieee.org/abstract/document/7801919\n \"\"\"\n x = layers.Conv2D(\n 64, 3, padding='same', activation='relu', name='conv1')(x)\n x = layers.MaxPool2D(pool_size=2, padding='same', name='pool1')(x)\n\n x = layers.Conv2D(\n 128, 3, padding='same', activation='relu', name='conv2')(x)\n x = layers.MaxPool2D(pool_size=2, padding='same', name='pool2')(x)\n\n x = layers.Conv2D(256, 3, padding='same', use_bias=False, name='conv3')(x)\n x = layers.BatchNormalization(name='bn3')(x)\n x = layers.Activation('relu', name='relu3')(x)\n x = layers.Conv2D(\n 256, 3, padding='same', activation='relu', name='conv4')(x)\n x = layers.MaxPool2D(\n pool_size=2, strides=(2, 1), padding='same', name='pool4')(x)\n\n x = layers.Conv2D(512, 3, padding='same', use_bias=False, name='conv5')(x)\n x = layers.BatchNormalization(name='bn5')(x)\n x = layers.Activation('relu', name='relu5')(x)\n x = layers.Conv2D(\n 512, 3, padding='same', activation='relu', name='conv6')(x)\n x = layers.MaxPool2D(\n pool_size=2, strides=(2, 1), padding='same', name='pool6')(x)\n\n x = layers.Conv2D(512, 2, use_bias=False, name='conv7')(x)\n x = layers.BatchNormalization(name='bn7')(x)\n x = layers.Activation('relu', name='relu7')(x)\n\n x = 
layers.Reshape((-1, 512), name='reshape7')(x)\n return x\n\n\ndef build_model(num_classes,\n weight=None,\n preprocess=None,\n postprocess=None,\n img_shape=(60, 41, 1),\n model_name='crnn'):\n \n # keras.backend.clear_session()\n # model = keras.models.Sequential()\n x = img_input = keras.Input(shape=img_shape)\n if preprocess is not None:\n x = preprocess(x)\n \n x = vgg_style(x)\n x = layers.Bidirectional(\n layers.LSTM(units=256, return_sequences=True), name='bi_lstm1')(x)\n x = layers.Bidirectional(\n layers.LSTM(units=256, return_sequences=True), name='bi_lstm2')(x)\n x = layers.Dense(units=num_classes, name='logits')(x)\n \n if postprocess is not None:\n x = postprocess(x)\n\n model = keras.Model(inputs=img_input, outputs=x, name=model_name)\n if weight is not None:\n model.load_weights(weight, by_name=True, skip_mismatch=True)\n \n model.compile(optimizer=keras.optimizers.Adam(1e-4), \n loss=keras.losses.SparseCategoricalCrossentropy(), \n metrics=[\"accuracy\"])\n \n return model\n \n\nmodel = build_model(10)\nmodel.summary()", "Model: \"crnn\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, 60, 41, 1)] 0 \n_________________________________________________________________\nconv1 (Conv2D) (None, 60, 41, 64) 640 \n_________________________________________________________________\npool1 (MaxPooling2D) (None, 30, 21, 64) 0 \n_________________________________________________________________\nconv2 (Conv2D) (None, 30, 21, 128) 73856 \n_________________________________________________________________\npool2 (MaxPooling2D) (None, 15, 11, 128) 0 \n_________________________________________________________________\nconv3 (Conv2D) (None, 15, 11, 256) 294912 \n_________________________________________________________________\nbn3 (BatchNormalization) (None, 15, 11, 256) 1024 \n_________________________________________________________________\nrelu3 (Activation) (None, 15, 11, 256) 0 \n_________________________________________________________________\nconv4 (Conv2D) (None, 15, 11, 256) 590080 \n_________________________________________________________________\npool4 (MaxPooling2D) (None, 8, 11, 256) 0 \n_________________________________________________________________\nconv5 (Conv2D) (None, 8, 11, 512) 1179648 \n_________________________________________________________________\nbn5 (BatchNormalization) (None, 8, 11, 512) 2048 \n_________________________________________________________________\nrelu5 (Activation) (None, 8, 11, 512) 0 \n_________________________________________________________________\nconv6 (Conv2D) (None, 8, 11, 512) 2359808 \n_________________________________________________________________\npool6 (MaxPooling2D) (None, 4, 11, 512) 0 \n_________________________________________________________________\nconv7 (Conv2D) (None, 3, 10, 512) 1048576 \n_________________________________________________________________\nbn7 (BatchNormalization) (None, 3, 10, 512) 2048 \n_________________________________________________________________\nrelu7 (Activation) (None, 3, 10, 512) 0 \n_________________________________________________________________\nreshape7 (Reshape) (None, 30, 512) 0 \n_________________________________________________________________\nbi_lstm1 (Bidirectional) (None, 30, 512) 1574912 \n_________________________________________________________________\nbi_lstm2 (Bidirectional) (None, 30, 512) 1574912 
\n_________________________________________________________________\nlogits (Dense) (None, 30, 10) 5130 \n=================================================================\nTotal params: 8,707,594\nTrainable params: 8,705,034\nNon-trainable params: 2,560\n_________________________________________________________________\n" ], [ "### Train and evaluate via 10-Folds cross-validation ###\naccuracies = []\nfolds = np.array(['fold1','fold2','fold3','fold4',\n 'fold5','fold6','fold7','fold8',\n 'fold9','fold10'])\nload_dir = \"UrbanSounds8K/processed/\"\nkf = KFold(n_splits=10)\nfor train_index, test_index in kf.split(folds):\n x_train, y_train = [], []\n for ind in train_index:\n # read features or segments of an audio file\n train_data = np.load(\"{0}/{1}.npz\".format(load_dir,folds[ind]), \n allow_pickle=True)\n # for training stack all the segments so that they are treated as an example/instance\n features = np.concatenate(train_data[\"features\"], axis=0) \n labels = np.concatenate(train_data[\"labels\"], axis=0)\n x_train.append(features)\n y_train.append(labels)\n # stack x,y pairs of all training folds \n x_train = np.concatenate(x_train, axis = 0).astype(np.float32)\n y_train = np.concatenate(y_train, axis = 0).astype(np.float32)\n \n # for testing we will make predictions on each segment and average them to \n # produce signle label for an entire sound clip.\n test_data = np.load(\"{0}/{1}.npz\".format(load_dir,\n folds[test_index][0]), allow_pickle=True)\n x_test = test_data[\"features\"]\n y_test = test_data[\"labels\"]\n \n model = get_network()\n # model = build_model(10)\n model.fit(x_train, y_train, epochs = 3, batch_size = 24, verbose = 0)\n \n # evaluate on test set/fold\n y_true, y_pred = [], []\n for x, y in zip(x_test, y_test):\n # average predictions over segments of a sound clip\n avg_p = np.argmax(np.mean(model.predict(x), axis = 0))\n y_pred.append(avg_p) \n # pick single label via np.unique for a sound clip\n y_true.append(np.unique(y)[0]) \n accuracies.append(accuracy_score(y_true, y_pred)) \nprint(\"Average 10 Folds Accuracy: {0}\".format(np.mean(accuracies)))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
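The `VisibleDeprecationWarning` in the extraction output above comes from `np.savez` coercing the ragged per-clip segment lists (clips yield different numbers of windows) into an ndarray. A minimal sketch of one way to make the object dtype explicit, assuming the `(n_windows, 2460, 1)` segment shape implied by the notebook's 60 bands x 41 frames settings:

```python
# Hedged sketch: save ragged per-clip feature lists without the
# ragged-nested-sequence deprecation warning, by building dtype=object
# arrays explicitly before calling np.savez.
import numpy as np

def save_ragged(path, features, labels):
    feat_obj = np.empty(len(features), dtype=object)
    lab_obj = np.empty(len(labels), dtype=object)
    for i, (f, l) in enumerate(zip(features, labels)):
        feat_obj[i] = np.asarray(f, dtype=np.float32)  # e.g. (7, 2460, 1)
        lab_obj[i] = np.asarray(l, dtype=np.int64)     # e.g. (7,)
    np.savez(path, features=feat_obj, labels=lab_obj)

# Two clips with different window counts (7 and 3) save without warnings:
save_ragged("fold1.npz",
            [np.zeros((7, 2460, 1)), np.zeros((3, 2460, 1))],
            [np.full(7, 2), np.full(3, 2)])
data = np.load("fold1.npz", allow_pickle=True)  # same flag the notebook uses
assert data["features"][0].shape == (7, 2460, 1)
```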
e7dc5d09a2c0210349585fb4cd8e0c60fcfed897
108,812
ipynb
Jupyter Notebook
IPython-parallel-tutorial/images.ipynb
sunny2309/scipy_conf_notebooks
30a85d5137db95e01461ad21519bc1bdf294044b
[ "MIT" ]
2
2021-01-09T15:57:26.000Z
2021-11-29T01:44:21.000Z
IPython-parallel-tutorial/images.ipynb
sunny2309/scipy_conf_notebooks
30a85d5137db95e01461ad21519bc1bdf294044b
[ "MIT" ]
5
2019-11-15T02:00:26.000Z
2021-01-06T04:26:40.000Z
IPython-parallel-tutorial/images.ipynb
sunny2309/scipy_conf_notebooks
30a85d5137db95e01461ad21519bc1bdf294044b
[ "MIT" ]
null
null
null
16.740308
337
0.299535
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7dc6054a7303ba9950cb170959f42d877163354
47,300
ipynb
Jupyter Notebook
content/ch-algorithms/bernstein-vazirani.ipynb
ibmamnt/qiskit-textbook
a6bc9e4a47e1044766ba1cd6e5d55b175aed95e0
[ "Apache-2.0" ]
null
null
null
content/ch-algorithms/bernstein-vazirani.ipynb
ibmamnt/qiskit-textbook
a6bc9e4a47e1044766ba1cd6e5d55b175aed95e0
[ "Apache-2.0" ]
null
null
null
content/ch-algorithms/bernstein-vazirani.ipynb
ibmamnt/qiskit-textbook
a6bc9e4a47e1044766ba1cd6e5d55b175aed95e0
[ "Apache-2.0" ]
null
null
null
99.78903
12,460
0.837632
[ [ [ "# Bernstein-Vazirani Algorithm", "_____no_output_____" ], [ "In this section, we first introduce the Bernstein-Vazirani problem, and classical and quantum algorithms to solve it. We then implement the quantum algorithm using Qiskit, and run on a simulator and device.\n\n## Contents\n\n1. [Introduction](#introduction)\n - [Bernstein-Vazirani Problem](#bvproblem)\n - [Bernstein-Vazirani Algorithm](#bvalgorithm)\n\n2. [Example](#example)\n\n3. [Qiskit Implementation](#implementation)\n - [Simulation](#simulation)\n - [Device](#device)\n\n4. [Problems](#problems)\n\n5. [References](#references)", "_____no_output_____" ], [ "## 1. Introduction <a id='introduction'></a>\n\nThe Bernstein-Vazirani algorithm, first introduced in Reference [1], can be seen as an extension of the Deutsch-Jozsa algorithm covered in the last section. It showed that there can be advantages in using a quantum computer as a computational tool for problems more complex than the Deutsch-Jozsa problem.\n\n### 1a. Bernstein-Vazirani Problem <a id='bvproblem'> </a>\n\nWe are again given a hidden Boolean function $f$, which takes as input a string of bits, and returns either $0$ or $1$, that is:\n<center>$f(\\{x_0,x_1,x_2,...\\}) \\rightarrow 0 \\textrm{ or } 1 \\textrm{ where } x_n \\textrm{ is }0 \\textrm{ or } 1 $.</center> \n\nInstead of the function being balanced or constant as in the Deutsch-Jozsa problem, now the function is guaranteed to return the bitwise product of the input with some string, $s$. In other words, given an input $x$, $f(x) = s \\cdot x \\, \\text{(mod 2)}$. We are expected to find $s$.", "_____no_output_____" ], [ "### 1b. Bernstein-Vazirani Algorithm <a id='bvalgorithm'> </a>\n\n#### Classical Solution\nClassically, the oracle returns $f_s(x) = s \\cdot x \\mod 2$ given an input $x$. Thus, the hidden bit string $s$ can be revealed by querying the oracle with $x = 1, 2, \\ldots, 2^i, \\ldots, 2^{n-1}$, where each query reveals the $i$-th bit of $s$ (or, $s_i$). For example, with $x=1$ one can obtain the least significant bit of $s$, and so on. This means we would need to call the function $f_s(x)$ $n$ times. \n", "_____no_output_____" ], [ "#### Quantum Solution\n\nUsing a quantum computer, we can solve this problem with 100% confidence after only one call to the function $f(x)$. The quantum Bernstein-Vazirani algorithm to find the hidden integer is very simple: (1) start from a $|0\\rangle^{\\otimes n}$ state, (2) apply Hadamard gates, (3) query the oracle, (4) apply Hadamard gates, and (5) measure, generically illustrated below:\n\n<img src=\"images/bernsteinvazirani_steps.jpeg\" width=\"300\">\n\nThe correctness of the algorithm is best explained by looking at the transformation of a quantum register $|a \\rangle$ by $n$ Hadamard gates, one applied to each qubit of the register. It can be shown that:\n\n$$\n|a\\rangle \\xrightarrow{H^{\\otimes n}} \\frac{1}{\\sqrt{2^n}} \\sum_{x\\in \\{0,1\\}^n} (-1)^{a\\cdot x}|x\\rangle.\n$$\n\nIn particular, when we start with a quantum register $|0\\rangle$ and apply $n$ Hadamard gates to it, we have the familiar quantum superposition:\n\n$$\n|0\\rangle \\xrightarrow{H^{\\otimes n}} \\frac{1}{\\sqrt{2^n}} \\sum_{x\\in \\{0,1\\}^n} |x\\rangle,\n$$\n\nwhich differs from the Hadamard transform of the register $|a \\rangle$ only by the phase $(-1)^{a\\cdot x}$. \n\nNow, the quantum oracle $f_a$ returns $1$ on input $x$ such that $a \\cdot x \\equiv 1 \\mod 2$, and returns $0$ otherwise.
This means we have the following transformation:\n\n$$\n|x \\rangle \\xrightarrow{f_a} (-1)^{a\\cdot x} |x \\rangle. \n$$\n\nThe algorithm to reveal the hidden integer follows naturally by querying the quantum oracle $f_a$ with the quantum superposition obtained from the Hadamard transformation of $|0\\rangle$. Namely,\n\n$$\n|0\\rangle \\xrightarrow{H^{\\otimes n}} \\frac{1}{\\sqrt{2^n}} \\sum_{x\\in \\{0,1\\}^n} |x\\rangle \\xrightarrow{f_a} \\frac{1}{\\sqrt{2^n}} \\sum_{x\\in \\{0,1\\}^n} (-1)^{a\\cdot x}|x\\rangle.\n$$\n\nBecause the inverse of the $n$ Hadamard gates is again the $n$ Hadamard gates, we can obtain $a$ by\n\n$$\n\\frac{1}{\\sqrt{2^n}} \\sum_{x\\in \\{0,1\\}^n} (-1)^{a\\cdot x}|x\\rangle \\xrightarrow{H^{\\otimes n}} |a\\rangle.\n$$\n", "_____no_output_____" ], [ "## 2. Example <a id='example'></a>\n\nLet's go through a specific example for $n=2$ qubits and a secret string $s=11$. Note that we are following the formulation in Reference [2], which generates a circuit for the Bernstein-Vazirani quantum oracle using only one register. \n\n<ol>\n    <li> The register of two qubits is initialized to zero:\n    $$\\lvert \\psi_0 \\rangle = \\lvert 0 0 \\rangle$$ \n    </li>\n\n    <li> Apply a Hadamard gate to both qubits:\n    $$\\lvert \\psi_1 \\rangle = \\frac{1}{2} \\left( \\lvert 0 0 \\rangle + \\lvert 0 1 \\rangle + \\lvert 1 0 \\rangle + \\lvert 1 1 \\rangle \\right) $$ \n    </li>\n\n    <li> For the string $s=11$, the quantum oracle can be implemented as $\\text{Q}_f = Z_{1}Z_{2}$:\n    $$\\lvert \\psi_2 \\rangle = \\frac{1}{2} \\left( \\lvert 0 0 \\rangle - \\lvert 0 1 \\rangle - \\lvert 1 0 \\rangle + \\lvert 1 1 \\rangle \\right)$$ \n    </li>\n\n    <li> Apply a Hadamard gate to both qubits:\n    $$\\lvert \\psi_3 \\rangle = \\lvert 1 1 \\rangle$$ \n    </li>\n\n    <li> Measure to find the secret string $s=11$\n    </li>\n\n\n</ol>\n\n", "_____no_output_____" ], [ "## 3. Qiskit Implementation <a id='implementation'></a>", "_____no_output_____" ], [ "We now implement the Bernstein-Vazirani algorithm with Qiskit for a two-bit function with $s=11$.", "_____no_output_____" ] ], [ [ "# initialization\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\n\n# importing Qiskit\nfrom qiskit import IBMQ, BasicAer\nfrom qiskit.providers.ibmq import least_busy\nfrom qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute\n\n# import basic plot tools\nfrom qiskit.tools.visualization import plot_histogram", "_____no_output_____" ] ], [ [ "We first set the number of qubits used in the experiment, and the hidden integer $s$ to be found by the algorithm. The hidden integer $s$ determines the circuit for the quantum oracle. 
", "_____no_output_____" ] ], [ [ "nQubits = 2 # number of physical qubits used to represent s\ns = 3 # the hidden integer \n\n# make sure that a can be represented with nqubits\ns = s % 2**(nQubits)", "_____no_output_____" ] ], [ [ "We then use Qiskit to program the Bernstein-Vazirani algorithm.", "_____no_output_____" ] ], [ [ "# Creating registers\n# qubits for querying the oracle and finding the hidden integer\nqr = QuantumRegister(nQubits)\n# bits for recording the measurement on qr\ncr = ClassicalRegister(nQubits)\n\nbvCircuit = QuantumCircuit(qr, cr)\nbarriers = True\n\n# Apply Hadamard gates before querying the oracle\nfor i in range(nQubits):\n bvCircuit.h(qr[i])\n \n# Apply barrier \nif barriers:\n bvCircuit.barrier()\n\n# Apply the inner-product oracle\nfor i in range(nQubits):\n if (s & (1 << i)):\n bvCircuit.z(qr[i])\n else:\n bvCircuit.iden(qr[i])\n \n# Apply barrier \nif barriers:\n bvCircuit.barrier()\n\n#Apply Hadamard gates after querying the oracle\nfor i in range(nQubits):\n bvCircuit.h(qr[i])\n \n# Apply barrier \nif barriers:\n bvCircuit.barrier()\n\n# Measurement\nbvCircuit.measure(qr, cr)", "_____no_output_____" ], [ "bvCircuit.draw(output='mpl')", "_____no_output_____" ] ], [ [ "### 3a. Experiment with Simulators <a id='simulation'></a>\n\nWe can run the above circuit on the simulator. ", "_____no_output_____" ] ], [ [ "# use local simulator\nbackend = BasicAer.get_backend('qasm_simulator')\nshots = 1024\nresults = execute(bvCircuit, backend=backend, shots=shots).result()\nanswer = results.get_counts()\n\nplot_histogram(answer)", "_____no_output_____" ] ], [ [ "We can see that the result of the measurement is the binary representation of the hidden integer $3$ $(11)$. ", "_____no_output_____" ], [ "### 3b. Experiment with Real Devices <a id='device'></a>\n\nWe can run the circuit on the real device as below.", "_____no_output_____" ] ], [ [ "# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to 5 qubits\nIBMQ.load_account()\nprovider = IBMQ.get_provider(hub='ibm-q')\nprovider.backends()\nbackend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits <= 5 and \n not x.configuration().simulator and x.status().operational==True))\nprint(\"least busy backend: \", backend)", "least busy backend: ibmqx2\n" ], [ "# Run our circuit on the least busy backend. Monitor the execution of the job in the queue\nfrom qiskit.tools.monitor import job_monitor\n\nshots = 1024\njob = execute(bvCircuit, backend=backend, shots=shots)\n\njob_monitor(job, interval = 2)", "Job Status: job has successfully run\n" ], [ "# Get the results from the computation\nresults = job.result()\nanswer = results.get_counts()\n\nplot_histogram(answer)", "_____no_output_____" ] ], [ [ "As we can see, most of the results are $11$. The other results are due to errors in the quantum computation. ", "_____no_output_____" ], [ "## 4. Problems <a id='problems'></a>\n\n1. The above [implementation](#implementation) of Bernstein-Vazirani is for a secret bit string of $s = 11$. Modify the implementation for a secret string os $s = 1011$. Are the results what you expect? Explain.\n2. The above [implementation](#implementation) of Bernstein-Vazirani is for a secret bit string of $s = 11$. Modify the implementation for a secret string os $s = 1110110101$. Are the results what you expect? Explain.\n", "_____no_output_____" ], [ "## 5. References <a id='references'></a>\n1. 
Ethan Bernstein and Umesh Vazirani (1997) \"Quantum Complexity Theory\" SIAM Journal on Computing, Vol. 26, No. 5: 1411-1473, [doi:10.1137/S0097539796300921](https://doi.org/10.1137/S0097539796300921).\n2. Jiangfeng Du, Mingjun Shi, Jihui Wu, Xianyi Zhou, Yangmei Fan, BangJiao Ye, Rongdian Han (2001) \"Implementation of a quantum algorithm to solve the Bernstein-Vazirani parity problem without entanglement on an ensemble quantum computer\", Phys. Rev. A 64, 042306, [10.1103/PhysRevA.64.042306](https://doi.org/10.1103/PhysRevA.64.042306), [arXiv:quant-ph/0012114](https://arxiv.org/abs/quant-ph/0012114). ", "_____no_output_____" ] ], [ [ "import qiskit\nqiskit.__qiskit_version__", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ] ]
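As a starting point for the two problems posed at the end of the Bernstein-Vazirani notebook above ($s = 1011$ and $s = 1110110101$), a hedged sketch that generalizes the notebook's Z-on-set-bits oracle construction to any number of qubits. It targets the same qiskit-era API the notebook uses; newer releases may differ (for example, `iden` was later renamed `id`):

```python
# Hedged sketch: Bernstein-Vazirani circuit for an arbitrary hidden string s,
# following the notebook's construction (H layer, Z where s_i = 1, H layer).
from qiskit import QuantumCircuit

def bv_circuit(s, n_qubits):
    s = s % 2**n_qubits                        # make sure s fits in n_qubits
    qc = QuantumCircuit(n_qubits, n_qubits)
    qc.h(range(n_qubits))                      # uniform superposition
    qc.barrier()
    for i in range(n_qubits):                  # inner-product oracle
        if s & (1 << i):
            qc.z(i)
    qc.barrier()
    qc.h(range(n_qubits))                      # invert the Hadamard transform
    qc.measure(range(n_qubits), range(n_qubits))
    return qc

qc = bv_circuit(0b1011, 4)   # Problem 1: a noiseless simulator reads '1011'
```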
e7dc80ae1ae07989f126db467ff85f8bb576bfcc
1,005,288
ipynb
Jupyter Notebook
Human Activity Recognition (97.98 %).ipynb
parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables
a676f9260c551dfe9e38b609c1411ab570df2002
[ "Apache-2.0" ]
75
2018-06-14T16:27:57.000Z
2022-03-12T02:50:46.000Z
Human Activity Recognition (97.98 %).ipynb
parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables
a676f9260c551dfe9e38b609c1411ab570df2002
[ "Apache-2.0" ]
null
null
null
Human Activity Recognition (97.98 %).ipynb
parisa1/Human-Activity-Recognition-with-Neural-Network-using-Gyroscopic-and-Accelerometer-variables
a676f9260c551dfe9e38b609c1411ab570df2002
[ "Apache-2.0" ]
34
2018-08-23T14:41:11.000Z
2021-07-15T03:59:20.000Z
266.796178
165,950
0.8627
[ [ [ "# Human Activity Recognition (97.98 %)", "_____no_output_____" ], [ "## The accuracy achieved here is better than that of the original research paper, which used an LSTM; this work applies an ANN to the same dataset. ", "_____no_output_____" ], [ "**Original approach using LSTM -\nTesting Accuracy: 91.652%, \nPrecision: 91.762%, \nRecall: 91.652%, \nf1_score: 91.643%\n**", "_____no_output_____" ], [ "**My approach using ANN - Testing accuracy (validation): 97.98%, Precision: 95%, Recall: 94%, f1-score: 94%.**", "_____no_output_____" ] ], [ [ "import numpy as np \nimport pandas as pd \nimport os\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nprint(os.listdir(\"../input\"))", "['train.csv', 'test.csv']\n" ], [ "df = pd.read_csv(\"../input/train.csv\")\ntest = pd.read_csv(\"../input/test.csv\")", "_____no_output_____" ], [ "df.T", "_____no_output_____" ], [ "print(df.Activity.unique())\nprint(\"----------------------------------------\")\nprint(df.Activity.value_counts())", "['STANDING' 'SITTING' 'LAYING' 'WALKING' 'WALKING_DOWNSTAIRS'\n 'WALKING_UPSTAIRS']\n----------------------------------------\nLAYING                1407\nSTANDING              1374\nSITTING               1286\nWALKING               1226\nWALKING_UPSTAIRS      1073\nWALKING_DOWNSTAIRS     986\nName: Activity, dtype: int64\n" ], [ "sns.set(rc={'figure.figsize':(13,6)})\nfig = sns.countplot(x = \"Activity\" , data = df)\nplt.xlabel(\"Activity\")\nplt.ylabel(\"Count\")\nplt.title(\"Activity Count\")\nplt.grid(True)\nplt.show(fig)", "_____no_output_____" ], [ "pd.crosstab(df.subject, df.Activity, margins=True).style.background_gradient(cmap='autumn_r')", "_____no_output_____" ], [ "print(df.shape , test.shape)", "(7352, 563) (2947, 563)\n" ], [ "df.columns", "_____no_output_____" ] ], [ [ "## Now, some visualizations of how the features are distributed.", "_____no_output_____" ] ], [ [ "sns.set(rc={'figure.figsize':(15,7)})\ncolours = [\"maroon\",\"coral\",\"darkorchid\",\"goldenrod\",\"purple\",\"darkgreen\",\"darkviolet\",\"saddlebrown\",\"aqua\",\"olive\"]\nindex = -1\nfor i in df.columns[0:10]:\n    index = index + 1\n    fig = sns.kdeplot(df[i] , shade=True, color=colours[index])\nplt.xlabel(\"Features\")\nplt.ylabel(\"Value\")\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig)", "_____no_output_____" ], [ "sns.set(rc={'figure.figsize':(15,7)})\ncolours = [\"maroon\",\"coral\",\"darkorchid\",\"goldenrod\",\"purple\",\"darkgreen\",\"darkviolet\",\"saddlebrown\",\"aqua\",\"olive\"]\nindex = -1\nfor i in df.columns[10:20]:\n    index = index + 1\n    ax1 = sns.kdeplot(df[i] , shade=True, color=colours[index])\nplt.xlabel(\"Features\")\nplt.ylabel(\"Value\")\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig)", "_____no_output_____" ], [ "sns.set(rc={'figure.figsize':(15,7)})\ncolours = [\"maroon\",\"coral\",\"darkorchid\",\"goldenrod\",\"purple\",\"darkgreen\",\"darkviolet\",\"saddlebrown\",\"aqua\",\"olive\"]\nindex = -1\nfor i in df.columns[20:30]:\n    index = index + 1\n    ax1 = sns.kdeplot(df[i] , shade=True, color=colours[index])\nplt.xlabel(\"Features\")\nplt.ylabel(\"Value\")\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig)", "_____no_output_____" ], [ "sns.set(rc={'figure.figsize':(15,7)})\ncolours = [\"maroon\",\"coral\",\"darkorchid\",\"goldenrod\",\"purple\",\"darkgreen\",\"darkviolet\",\"saddlebrown\",\"aqua\",\"olive\"]\nindex = -1\nfor i in df.columns[30:40]:\n    index = index + 1\n    ax1 = sns.kdeplot(df[i] , shade=True, color=colours[index])\nplt.xlabel(\"Features\")\nplt.ylabel(\"Value\")\nplt.title(\"Feature 
Distribution\")\nplt.grid(True)\nplt.show(fig)", "_____no_output_____" ], [ "sns.set(rc={'figure.figsize':(15,7)})\ncolours = [\"maroon\",\"coral\",\"darkorchid\",\"goldenrod\",\"purple\",\"darkgreen\",\"darkviolet\",\"saddlebrown\",\"aqua\",\"olive\"]\nindex = -1\nfor i in df.columns[40:50]:\n index = index + 1\n ax1 = sns.kdeplot(df[i] , shade=True, color=colours[index])\nplt.xlabel(\"Features\")\nplt.ylabel(\"Value\")\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig)", "_____no_output_____" ], [ "sns.set(rc={'figure.figsize':(15,10)})\nplt.subplot(221)\nfig1 = sns.stripplot(x='Activity', y= df.loc[df['Activity']==\"STANDING\"].iloc[:,10], data= df.loc[df['Activity']==\"STANDING\"], jitter=True)\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig1)\nplt.subplot(224)\nfig2 = sns.stripplot(x='Activity', y= df.loc[df['Activity']==\"STANDING\"].iloc[:,11], data= df.loc[df['Activity']==\"STANDING\"], jitter=True)\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig2)\nplt.subplot(223)\nfig2 = sns.stripplot(x='Activity', y= df.loc[df['Activity']==\"STANDING\"].iloc[:,12], data= df.loc[df['Activity']==\"STANDING\"], jitter=True)\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig2)\nplt.subplot(222)\nfig2 = sns.stripplot(x='Activity', y= df.loc[df['Activity']==\"STANDING\"].iloc[:,13], data= df.loc[df['Activity']==\"STANDING\"], jitter=True)\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig2)", "_____no_output_____" ], [ "sns.set(rc={'figure.figsize':(15,5)})\nfig1 = sns.stripplot(x='Activity', y= df.loc[df['subject']==15].iloc[:,7], data= df.loc[df['subject']==15], jitter=True)\nplt.title(\"Feature Distribution\")\nplt.grid(True)\nplt.show(fig1)", "_____no_output_____" ] ], [ [ "**Feature Scaling**", "_____no_output_____" ], [ "**Pre-processing and data preparation to feed data into Artificial Neural Network.**", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import MinMaxScaler\nscaler = MinMaxScaler()\nscaler.fit(df.iloc[:,0:562])\nmat_train = scaler.transform(df.iloc[:,0:562])\nprint(mat_train)", "[[0.64429225 0.48985291 0.43354743 ... 0.79825103 0.47068654 0. ]\n [0.63920942 0.49179472 0.4382399 ... 0.79848665 0.47284164 0. ]\n [0.63982653 0.49026642 0.44326915 ... 0.79872236 0.47544109 0. ]\n ...\n [0.63669369 0.49149469 0.47748909 ... 0.84506893 0.52040559 1. ]\n [0.64482708 0.49057848 0.42085971 ... 0.84323381 0.51266974 1. ]\n [0.67575173 0.49378844 0.39806642 ... 0.84348837 0.51834742 1. ]]\n" ], [ "scaler = MinMaxScaler()\nscaler.fit(test.iloc[:,0:562])\nmat_test = scaler.transform(test.iloc[:,0:562])\nprint(mat_test)", "[[0.6718788 0.55764282 0.52464834 ... 0.62209457 0.46362736 0. ]\n [0.69470427 0.57426358 0.42707858 ... 0.62446791 0.45014396 0. ]\n [0.68636345 0.55310221 0.42794829 ... 0.62380956 0.45251181 0. ]\n ...\n [0.74529355 0.64526771 0.43015674 ... 0.62088108 0.58803909 1. ]\n [0.65638384 0.62620241 0.44817885 ... 0.61581385 0.59135763 1. ]\n [0.58994885 0.56560474 0.41032069 ... 0.61537208 0.59163879 1. 
]]\n" ], [ "temp = []\nfor i in df.Activity:\n    if i == \"WALKING\": temp.append(0)\n    if i == \"WALKING_UPSTAIRS\": temp.append(1)\n    if i == \"WALKING_DOWNSTAIRS\": temp.append(2)\n    if i == \"SITTING\": temp.append(3)\n    if i == \"STANDING\": temp.append(4)\n    if i == \"LAYING\": temp.append(5)\ndf[\"n_Activity\"] = temp", "_____no_output_____" ], [ "temp = []\nfor i in test.Activity:\n    if i == \"WALKING\": temp.append(0)\n    if i == \"WALKING_UPSTAIRS\": temp.append(1)\n    if i == \"WALKING_DOWNSTAIRS\": temp.append(2)\n    if i == \"SITTING\": temp.append(3)\n    if i == \"STANDING\": temp.append(4)\n    if i == \"LAYING\": temp.append(5)\ntest[\"n_Activity\"] = temp", "_____no_output_____" ], [ "df.drop([\"Activity\"] , axis = 1 , inplace = True)", "_____no_output_____" ], [ "test.drop([\"Activity\"] , axis = 1 , inplace = True)", "_____no_output_____" ], [ "from keras.utils import to_categorical\ny_train = to_categorical(df.n_Activity , num_classes=6)\ny_test = to_categorical(test.n_Activity , num_classes=6)", "_____no_output_____" ], [ "X_train = mat_train \nX_test = mat_test", "_____no_output_____" ] ], [ [ "**The feature vector has 562 dimensions, which is large and could cause overfitting during training.**", "_____no_output_____" ], [ "**I also tried feature selection using an extra-trees classifier and L1-based selection, but the results were slightly better with all features, provided the model's hyperparameters were tuned carefully, which took some time.**", "_____no_output_____" ], [ "**Using fewer features would also reduce training time, but manual, context-based feature selection is not feasible here, and the automated alternative has already been discussed above.**", "_____no_output_____" ] ], [ [ "print(X_train.shape , y_train.shape)\nprint(X_test.shape , y_test.shape)", "(7352, 562) (7352, 6)\n(2947, 562) (2947, 6)\n" ] ], [ [ "**Setting up the necessary callbacks: model checkpointing and a learning-rate reducer.**", "_____no_output_____" ] ], [ [ "filepath=\"HAR_weights.hdf5\"\nfrom keras.callbacks import ReduceLROnPlateau , ModelCheckpoint\n\nlr_reduce = ReduceLROnPlateau(monitor='val_acc', factor=0.1, epsilon=0.0001, patience=1, verbose=1)\ncheckpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')", "/opt/conda/lib/python3.6/site-packages/Keras-2.1.5-py3.6.egg/keras/callbacks.py:919: UserWarning: `epsilon` argument is deprecated and will be removed, use `min_delta` insted.\n" ], [ "from keras.models import Sequential\nfrom keras.layers import Dense, Dropout , BatchNormalization\nfrom sklearn.model_selection import train_test_split\nfrom keras.utils import np_utils\nfrom keras.optimizers import RMSprop, Adam", "_____no_output_____" ] ], [ [ "**The model architecture below is the best I could come up with after repeated tuning and changes to the network.**", "_____no_output_____" ], [ "**Finally, adding the BatchNormalization layer slightly boosted the accuracy.** ", "_____no_output_____" ], [ "**Special care was taken with the learning rate and batch_size, to which the model is very sensitive; both had to be adjusted repeatedly to reach one of the best results reported here.**", "_____no_output_____" ] ], [ [ "model = Sequential()\n\nmodel.add(Dense(64, input_dim=X_train.shape[1] , activation='relu'))\nmodel.add(Dense(64, activation='relu'))\nmodel.add(BatchNormalization())\nmodel.add(Dense(128, activation='relu'))\nmodel.add(Dense(196, activation='relu'))\nmodel.add(Dense(32, activation='relu'))\nmodel.add(Dense(6, 
activation='sigmoid'))\n\nmodel.compile(optimizer = Adam(lr = 0.0005),loss='categorical_crossentropy', metrics=['accuracy'])\nprint(model.summary())", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_109 (Dense) (None, 64) 36032 \n_________________________________________________________________\ndense_110 (Dense) (None, 64) 4160 \n_________________________________________________________________\nbatch_normalization_9 (Batch (None, 64) 256 \n_________________________________________________________________\ndense_111 (Dense) (None, 128) 8320 \n_________________________________________________________________\ndense_112 (Dense) (None, 196) 25284 \n_________________________________________________________________\ndense_113 (Dense) (None, 32) 6304 \n_________________________________________________________________\ndense_114 (Dense) (None, 6) 198 \n=================================================================\nTotal params: 80,554\nTrainable params: 80,426\nNon-trainable params: 128\n_________________________________________________________________\nNone\n" ] ], [ [ "## Finally, the best model was checkpointed and got a validation loss of 0.0562 and a validation accuracy of 97.98% or ~98%.", "_____no_output_____" ] ], [ [ "history = model.fit(X_train, y_train , epochs=22 , batch_size = 256 , validation_data=(X_test, y_test) , callbacks=[checkpoint,lr_reduce])", "Train on 7352 samples, validate on 2947 samples\nEpoch 1/22\n7352/7352 [==============================] - 2s 320us/step - loss: 0.4648 - acc: 0.7908 - val_loss: 0.2946 - val_acc: 0.8644\n\nEpoch 00001: val_acc did not improve\nEpoch 2/22\n7352/7352 [==============================] - 0s 49us/step - loss: 0.1966 - acc: 0.9244 - val_loss: 0.1478 - val_acc: 0.9395\n\nEpoch 00002: val_acc did not improve\nEpoch 3/22\n7352/7352 [==============================] - 0s 53us/step - loss: 0.0918 - acc: 0.9712 - val_loss: 0.0947 - val_acc: 0.9609\n\nEpoch 00003: val_acc did not improve\nEpoch 4/22\n7352/7352 [==============================] - 0s 53us/step - loss: 0.0502 - acc: 0.9830 - val_loss: 0.1141 - val_acc: 0.9526\n\nEpoch 00004: val_acc did not improve\n\nEpoch 00004: ReduceLROnPlateau reducing learning rate to 5.0000002374872565e-05.\nEpoch 5/22\n7352/7352 [==============================] - 0s 46us/step - loss: 0.0370 - acc: 0.9878 - val_loss: 0.0652 - val_acc: 0.9742\n\nEpoch 00005: val_acc improved from 0.96680 to 0.97415, saving model to HAR_weights.hdf5\nEpoch 6/22\n7352/7352 [==============================] - 0s 46us/step - loss: 0.0340 - acc: 0.9889 - val_loss: 0.0562 - val_acc: 0.9798\n\nEpoch 00006: val_acc improved from 0.97415 to 0.97975, saving model to HAR_weights.hdf5\nEpoch 7/22\n7352/7352 [==============================] - 0s 46us/step - loss: 0.0324 - acc: 0.9894 - val_loss: 0.0577 - val_acc: 0.9781\n\nEpoch 00007: val_acc did not improve\n\nEpoch 00007: ReduceLROnPlateau reducing learning rate to 5.000000237487257e-06.\nEpoch 8/22\n7352/7352 [==============================] - 0s 44us/step - loss: 0.0314 - acc: 0.9899 - val_loss: 0.0602 - val_acc: 0.9767\n\nEpoch 00008: val_acc did not improve\n\nEpoch 00008: ReduceLROnPlateau reducing learning rate to 5.000000328436726e-07.\nEpoch 9/22\n7352/7352 [==============================] - 0s 45us/step - loss: 0.0312 - acc: 0.9895 - val_loss: 0.0619 - val_acc: 0.9756\n\nEpoch 00009: val_acc did not improve\n\nEpoch 00009: ReduceLROnPlateau reducing 
learning rate to 5.000000555810402e-08.\nEpoch 10/22\n7352/7352 [==============================] - 0s 45us/step - loss: 0.0313 - acc: 0.9893 - val_loss: 0.0633 - val_acc: 0.9749\n\nEpoch 00010: val_acc did not improve\n\nEpoch 00010: ReduceLROnPlateau reducing learning rate to 5.000000413701855e-09.\nEpoch 11/22\n7352/7352 [==============================] - 0s 51us/step - loss: 0.0308 - acc: 0.9898 - val_loss: 0.0643 - val_acc: 0.9743\n\nEpoch 00011: val_acc did not improve\n\nEpoch 00011: ReduceLROnPlateau reducing learning rate to 5.000000413701855e-10.\nEpoch 12/22\n7352/7352 [==============================] - 0s 51us/step - loss: 0.0310 - acc: 0.9899 - val_loss: 0.0651 - val_acc: 0.9743\n\nEpoch 00012: val_acc did not improve\n\nEpoch 00012: ReduceLROnPlateau reducing learning rate to 5.000000413701855e-11.\nEpoch 13/22\n7352/7352 [==============================] - 0s 48us/step - loss: 0.0308 - acc: 0.9903 - val_loss: 0.0657 - val_acc: 0.9739\n\nEpoch 00013: val_acc did not improve\n\nEpoch 00013: ReduceLROnPlateau reducing learning rate to 5.000000413701855e-12.\nEpoch 14/22\n7352/7352 [==============================] - 0s 46us/step - loss: 0.0312 - acc: 0.9899 - val_loss: 0.0662 - val_acc: 0.9736\n\nEpoch 00014: val_acc did not improve\n\nEpoch 00014: ReduceLROnPlateau reducing learning rate to 5.000000413701855e-13.\nEpoch 15/22\n7352/7352 [==============================] - 0s 45us/step - loss: 0.0308 - acc: 0.9895 - val_loss: 0.0665 - val_acc: 0.9732\n\nEpoch 00015: val_acc did not improve\n\nEpoch 00015: ReduceLROnPlateau reducing learning rate to 5.0000005221220725e-14.\nEpoch 16/22\n7352/7352 [==============================] - 0s 49us/step - loss: 0.0311 - acc: 0.9898 - val_loss: 0.0668 - val_acc: 0.9732\n\nEpoch 00016: val_acc did not improve\n\nEpoch 00016: ReduceLROnPlateau reducing learning rate to 5.000000589884709e-15.\nEpoch 17/22\n7352/7352 [==============================] - 0s 52us/step - loss: 0.0314 - acc: 0.9896 - val_loss: 0.0670 - val_acc: 0.9732\n\nEpoch 00017: val_acc did not improve\n\nEpoch 00017: ReduceLROnPlateau reducing learning rate to 5.000000759291298e-16.\nEpoch 18/22\n7352/7352 [==============================] - 0s 53us/step - loss: 0.0313 - acc: 0.9897 - val_loss: 0.0672 - val_acc: 0.9731\n\nEpoch 00018: val_acc did not improve\n\nEpoch 00018: ReduceLROnPlateau reducing learning rate to 5.000000547533061e-17.\nEpoch 19/22\n7352/7352 [==============================] - 0s 45us/step - loss: 0.0311 - acc: 0.9898 - val_loss: 0.0673 - val_acc: 0.9730\n\nEpoch 00019: val_acc did not improve\n\nEpoch 00019: ReduceLROnPlateau reducing learning rate to 5.000000415184163e-18.\nEpoch 20/22\n7352/7352 [==============================] - 0s 47us/step - loss: 0.0314 - acc: 0.9897 - val_loss: 0.0674 - val_acc: 0.9730\n\nEpoch 00020: val_acc did not improve\n\nEpoch 00020: ReduceLROnPlateau reducing learning rate to 5.000000332466102e-19.\nEpoch 21/22\n7352/7352 [==============================] - 0s 44us/step - loss: 0.0313 - acc: 0.9897 - val_loss: 0.0674 - val_acc: 0.9730\n\nEpoch 00021: val_acc did not improve\n\nEpoch 00021: ReduceLROnPlateau reducing learning rate to 5.000000229068525e-20.\nEpoch 22/22\n7352/7352 [==============================] - 0s 54us/step - loss: 0.0313 - acc: 0.9898 - val_loss: 0.0675 - val_acc: 0.9730\n\nEpoch 00022: val_acc did not improve\n\nEpoch 00022: ReduceLROnPlateau reducing learning rate to 5.00000016444504e-21.\n" ], [ "from pylab import rcParams\nrcParams['figure.figsize'] = 10, 
4\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n# summarize history for loss\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\nmodel.load_weights(\"HAR_weights.hdf5\")\npred = model.predict(X_test)\npred = np.argmax(pred,axis = 1) \ny_true = np.argmax(y_test,axis = 1)", "_____no_output_____" ] ], [ [ "## The confusion matrix is plotted with mlxtend for better insight into model performance, avoiding the extra plotting code that scikit-learn would require.", "_____no_output_____" ], [ "## The model's performance is evident from the concentration of values along the diagonal.", "_____no_output_____" ] ], [ [ "CM = confusion_matrix(y_true, pred)\nfrom mlxtend.plotting import plot_confusion_matrix\nfig, ax = plot_confusion_matrix(conf_mat=CM , figsize=(10, 5))\nplt.show()", "_____no_output_____" ] ], [ [ "## Precision - 95%, Recall - 94%, and f1-score - 94%.", "_____no_output_____" ] ], [ [ "from sklearn.metrics import classification_report , accuracy_score\nprint(classification_report(y_true, pred))", "             precision    recall  f1-score   support\n\n          0       0.99      0.91      0.95       496\n          1       0.98      0.89      0.93       471\n          2       0.82      0.99      0.90       420\n          3       0.94      0.90      0.92       491\n          4       0.94      0.95      0.94       532\n          5       0.98      1.00      0.99       537\n\navg / total       0.95      0.94      0.94      2947\n\n" ] ], [ [ "**Exporting predictions.**", "_____no_output_____" ] ], [ [ "d = { \"Index\":np.arange(2947) , \"Activity\":pred }\nfinal = pd.DataFrame(d)\nfinal.to_csv( 'human_activity_predictions.csv' , index = False)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
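The activity encoding in the notebook above repeats the same six-branch if-chain for the train and test frames. A minimal sketch of the identical mapping as one reusable function (the integer codes mirror the notebook's exactly; `encode_activity` is a hypothetical helper name):

```python
# Hedged sketch: the notebook's label encoding as a single dict mapping,
# applied the same way to both the train and test DataFrames.
import pandas as pd

ACTIVITY_CODES = {"WALKING": 0, "WALKING_UPSTAIRS": 1, "WALKING_DOWNSTAIRS": 2,
                  "SITTING": 3, "STANDING": 4, "LAYING": 5}

def encode_activity(frame: pd.DataFrame) -> pd.DataFrame:
    out = frame.copy()
    out["n_Activity"] = out["Activity"].map(ACTIVITY_CODES)
    return out.drop(columns=["Activity"])

# df = encode_activity(df); test = encode_activity(test)
```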
e7dc9b99a6afb9084390cb510ea72abe12269864
664,370
ipynb
Jupyter Notebook
notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb
utf/matgenb
ed338a9ab2842efa9c95c556a9b8e03eed939396
[ "BSD-3-Clause" ]
null
null
null
notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb
utf/matgenb
ed338a9ab2842efa9c95c556a9b8e03eed939396
[ "BSD-3-Clause" ]
null
null
null
notebooks/2018-07-24-2018-Adsorption on solid surfaces.ipynb
utf/matgenb
ed338a9ab2842efa9c95c556a9b8e03eed939396
[ "BSD-3-Clause" ]
1
2019-12-09T00:53:17.000Z
2019-12-09T00:53:17.000Z
929.188811
140,210
0.71755
[ [ [ "\n# Supplemental Information\n\nThis notebook is intended to serve as a supplement to the manuscript \"High-throughput workflows for determining adsorption energies on solid surfaces.\" It outlines basic use of the code and workflow software that has been developed for processing surface slabs and placing adsorbates according to symmetrically distinct sites on surface facets.\n\n## Installation\n\nTo use this notebook, we recommend installing Python via [Anaconda](https://www.continuum.io/downloads), which includes Jupyter and the associated IPython notebook software.\n\nThe code used in this project primarily makes use of two packages, pymatgen and atomate, which are installable via pip or the matsci channel on conda (e.g. `conda install -c matsci pymatgen atomate`). Development versions with editable code may be installed by cloning the repositories and using `python setup.py develop`.", "_____no_output_____" ], [ "## Example 1: AdsorbateSiteFinder (pymatgen)\n\nAn example using the AdsorbateSiteFinder class in pymatgen is shown below. We begin with an import statement for the necessary modules. To use the MP RESTful interface, you must provide your own API key, either in the MPRester call, i.e. ```mpr=MPRester(\"YOUR_API_KEY\")```, or in your .pmgrc.yaml configuration file. API keys can be accessed at materialsproject.org under your \"Dashboard.\"", "_____no_output_____" ] ], [ [ "# Import statements\nfrom pymatgen import Structure, Lattice, MPRester, Molecule\nfrom pymatgen.analysis.adsorption import *\nfrom pymatgen.core.surface import generate_all_slabs\nfrom pymatgen.symmetry.analyzer import SpacegroupAnalyzer\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n# Note that you must provide your own API Key, which can\n# be accessed via the Dashboard at materialsproject.org\nmpr = MPRester()", "_____no_output_____" ] ], [ [ "We create a simple fcc structure, generate its distinct slabs, and select the slab with a miller index of (1, 1, 1).", "_____no_output_____" ] ], [ [ "fcc_ni = Structure.from_spacegroup(\"Fm-3m\", Lattice.cubic(3.5), [\"Ni\"], [[0, 0, 0]])\nslabs = generate_all_slabs(fcc_ni, max_index=1, min_slab_size=8.0,\n                           min_vacuum_size=10.0)\nni_111 = [slab for slab in slabs if slab.miller_index==(1,1,1)][0]", "_____no_output_____" ] ], [ [ "We make an instance of the AdsorbateSiteFinder and use it to find the relevant adsorption sites.", "_____no_output_____" ] ], [ [ "asf_ni_111 = AdsorbateSiteFinder(ni_111)\nads_sites = asf_ni_111.find_adsorption_sites()\nprint(ads_sites)\nassert len(ads_sites) == 4", "{'ontop': [array([1.23743687, 0.71443451, 9.0725408 ])], 'bridge': [array([-0.61871843,  1.78608627,  9.0725408 ])], 'hollow': [array([4.27067681e-16, 7.39702921e-16, 9.07254080e+00]), array([8.80455477e-16, 1.42886902e+00, 9.07254080e+00])], 'all': [array([1.23743687, 0.71443451, 9.0725408 ]), array([-0.61871843,  1.78608627,  9.0725408 ]), array([4.27067681e-16, 7.39702921e-16, 9.07254080e+00]), array([1.63125081e-15, 1.42886902e+00, 9.07254080e+00])]}\n" ] ], [ [ "We visualize the sites using a tool from pymatgen.", "_____no_output_____" ] ], [ [ "fig = plt.figure()\nax = fig.add_subplot(111)\nplot_slab(ni_111, ax, adsorption_sites=True)", "_____no_output_____" ] ], [ [ "Use the `AdsorbateSiteFinder.generate_adsorption_structures` method to generate structures of adsorbates.", "_____no_output_____" ] ], [ [ "fig = plt.figure()\nax = fig.add_subplot(111)\nadsorbate = Molecule(\"H\", [[0, 0, 0]])\nads_structs = 
asf_ni_111.generate_adsorption_structures(adsorbate, \n repeat=[1, 1, 1])\nplot_slab(ads_structs[0], ax, adsorption_sites=False, decay=0.09)", "_____no_output_____" ] ], [ [ "## Example 2: AdsorbateSiteFinder for various surfaces\n\nIn this example, the AdsorbateSiteFinder is used to find adsorption sites on different structures and miller indices.", "_____no_output_____" ] ], [ [ "fig = plt.figure()\naxes = [fig.add_subplot(2, 3, i) for i in range(1, 7)]\nmats = {\"mp-23\":(1, 0, 0), # FCC Ni\n \"mp-2\":(1, 1, 0), # FCC Au\n \"mp-13\":(1, 1, 0), # BCC Fe\n \"mp-33\":(0, 0, 1), # HCP Ru\n \"mp-30\": (2, 1, 1),\n \"mp-5229\":(1, 0, 0),\n } # Cubic SrTiO3\n #\"mp-2133\":(0, 1, 1)} # Wurtzite ZnO\n\nfor n, (mp_id, m_index) in enumerate(mats.items()):\n struct = mpr.get_structure_by_material_id(mp_id)\n struct = SpacegroupAnalyzer(struct).get_conventional_standard_structure()\n slabs = generate_all_slabs(struct, 1, 5.0, 2.0, center_slab=True)\n slab_dict = {slab.miller_index:slab for slab in slabs}\n asf = AdsorbateSiteFinder.from_bulk_and_miller(struct, m_index, undercoord_threshold=0.10)\n plot_slab(asf.slab, axes[n])\n ads_sites = asf.find_adsorption_sites()\n sop = get_rot(asf.slab)\n ads_sites = [sop.operate(ads_site)[:2].tolist()\n for ads_site in ads_sites[\"all\"]]\n axes[n].plot(*zip(*ads_sites), color='k', marker='x', \n markersize=10, mew=1, linestyle='', zorder=10000)\n mi_string = \"\".join([str(i) for i in m_index])\n axes[n].set_title(\"{}({})\".format(struct.composition.reduced_formula, mi_string))\n axes[n].set_xticks([])\n axes[n].set_yticks([])\n \naxes[4].set_xlim(-2, 5)\naxes[4].set_ylim(-2, 5)\nfig.savefig('slabs.png', dpi=200)", "_____no_output_____" ], [ "!open slabs.png", "_____no_output_____" ] ], [ [ "## Example 3: Generating a workflow from atomate\n\nIn this example, we demonstrate how MatMethods may be used to generate a full workflow for the determination of DFT-energies from which adsorption energies may be calculated. Note that this requires a working instance of [FireWorks](https://pythonhosted.org/FireWorks/index.html) and its dependency, [MongoDB](https://www.mongodb.com/). Note that MongoDB can be installed via [Anaconda](https://anaconda.org/anaconda/mongodb).", "_____no_output_____" ] ], [ [ "from fireworks import LaunchPad\nlpad = LaunchPad()", "_____no_output_____" ], [ "lpad.reset('', require_password=False)", "2018-07-24 09:56:31,982 INFO Performing db tune-up\n2018-07-24 09:56:31,995 INFO LaunchPad was RESET.\n" ] ], [ [ "Import the necessary workflow-generating function from atomate:", "_____no_output_____" ] ], [ [ "from atomate.vasp.workflows.base.adsorption import get_wf_surface, get_wf_surface_all_slabs", "_____no_output_____" ] ], [ [ "Adsorption configurations take the form of a dictionary with the miller index as a string key and a list of pymatgen Molecule instances as the values.", "_____no_output_____" ] ], [ [ "co = Molecule(\"CO\", [[0, 0, 0], [0, 0, 1.23]])\nh = Molecule(\"H\", [[0, 0, 0]])", "_____no_output_____" ] ], [ [ "Workflows are generated using the a slab a list of molecules.", "_____no_output_____" ] ], [ [ "struct = mpr.get_structure_by_material_id(\"mp-23\") # fcc Ni\nstruct = SpacegroupAnalyzer(struct).get_conventional_standard_structure()\nslabs = generate_all_slabs(struct, 1, 5.0, 2.0, center_slab=True)\nslab_dict = {slab.miller_index:slab for slab in slabs}\n\nni_slab_111 = slab_dict[(1, 1, 1)]\nwf = get_wf_surface([ni_slab_111], molecules=[co, h])\nlpad.add_wf(wf)", "2018-07-24 09:56:33,057 INFO Added a workflow. 
id_map: {-9: 1, -8: 2, -7: 3, -6: 4, -5: 5, -4: 6, -3: 7, -2: 8, -1: 9}\n" ] ], [ [ "The workflow may be inspected as below. Note that there are 9 optimization tasks: one for the bare slab, plus one for each of the 4 distinct adsorption configurations of each of the 2 adsorbates. Details on running FireWorks, including [singleshot launching](https://pythonhosted.org/FireWorks/worker_tutorial.html#launch-a-rocket-on-a-worker-machine-fireworker), [queue submission](https://pythonhosted.org/FireWorks/queue_tutorial.html#), [workflow management](https://pythonhosted.org/FireWorks/defuse_tutorial.html), and more can be found in the [FireWorks documentation](https://pythonhosted.org/FireWorks/index.html).", "_____no_output_____" ] ], [ [ "lpad.get_wf_summary_dict(1)", "_____no_output_____" ] ], [ [ "Note also that running FireWorks via atomate may require system-specific tuning (e.g. for VASP parameters). More information is available in the [atomate documentation](http://pythonhosted.org/atomate/).", "_____no_output_____" ], [ "## Example 4 - Screening of oxygen evolution electrocatalysts on binary oxides", "_____no_output_____" ], [ "This final example is intended to demonstrate how to use the MP API and the adsorption workflow to do an initial high-throughput study of oxygen evolution electrocatalysis on binary oxides of transition metals.", "_____no_output_____" ] ], [ [ "from pymatgen.core.periodic_table import *\nfrom pymatgen.core.surface import get_symmetrically_distinct_miller_indices\nimport tqdm\n\nlpad.reset('', require_password=False)", "2018-07-24 09:56:33,079 INFO Performing db tune-up\n2018-07-24 09:56:33,088 INFO LaunchPad was RESET.\n" ] ], [ [ "For oxygen evolution, a common metric for the catalytic activity of a given catalyst is the theoretical overpotential corresponding to the mechanism that proceeds through OH\\*, O\\*, and OOH\\*. So we can define our adsorbates:", "_____no_output_____" ] ], [ [ "OH = Molecule(\"OH\", [[0, 0, 0], [-0.793, 0.384, 0.422]])\nO = Molecule(\"O\", [[0, 0, 0]])\nOOH = Molecule(\"OOH\", [[0, 0, 0], [-1.067, -0.403, 0.796], \n                       [-0.696, -0.272, 1.706]])\nadsorbates = [OH, O, OOH]", "_____no_output_____" ] ], [ [ "Then we can retrieve the structures using the MP REST interface and write a simple for loop that creates all of the workflows corresponding to every slab and every adsorption site for each material. The code below will take ~15 minutes. 
This could be parallelized to be more efficient, but is not for simplicity in this case.", "_____no_output_____" ] ], [ [ "elements = [Element.from_Z(i) for i in range(1, 103)]\ntrans_metals = [el for el in elements if el.is_transition_metal]\n# tqdm adds a progress bar so we can see the progress of the for loop\nfor metal in tqdm.tqdm_notebook(trans_metals):\n    # Get relatively stable structures with small unit cells\n    data = mpr.get_data(\"{}-O\".format(metal.symbol))\n    data = [datum for datum in data if datum[\"e_above_hull\"] < 0.05]\n    data = sorted(data, key = lambda x: x[\"nsites\"])\n    struct = Structure.from_str(data[0][\"cif\"], fmt='cif')\n    # Put in conventional cell settings\n    struct = SpacegroupAnalyzer(struct).get_conventional_standard_structure()\n    # Generate the workflow over all distinct low-index facets\n    wf = get_wf_surface_all_slabs(struct, adsorbates)\n    lpad.add_wf(wf)\n    print(\"Processed: {}\".format(struct.formula))", "_____no_output_____" ] ], [ [ "Ultimately, running this code produces workflows that contain many calculations (tens of thousands), all of which can be managed using FireWorks and queued on supercomputing resources. Limitations on those resources might necessitate a more selective approach towards choosing surface facets or representative materials. Nevertheless, this approach represents a more complete and structurally accurate way of screening materials for adsorption properties, and the resulting calculations can all be managed using FireWorks.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
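Once the adsorption workflows above have run, the metric named in Example 4 (the theoretical overpotential for the mechanism through OH\*, O\*, and OOH\*) can be computed from the resulting adsorption free energies. A hedged sketch of that standard four-step analysis; the function name and the energies passed to it are illustrative assumptions, not values from the notebook:

```python
# Hedged sketch: theoretical OER overpotential from adsorption free energies
# (in eV, referenced to H2O and H2) for the OH* -> O* -> OOH* pathway.
def theoretical_overpotential(dG_OH, dG_O, dG_OOH, e_eq=1.23):
    steps = [dG_OH,              # H2O      -> OH*  + H+ + e-
             dG_O - dG_OH,       # OH*      -> O*   + H+ + e-
             dG_OOH - dG_O,      # O* + H2O -> OOH* + H+ + e-
             4 * e_eq - dG_OOH]  # OOH*     -> O2   + H+ + e-
    return max(steps) - e_eq     # overpotential in volts

print(theoretical_overpotential(0.8, 2.3, 3.5))  # example inputs -> 0.27
```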
e7dc9c7513ebbff7cba5f3747510d77b331b2373
86,104
ipynb
Jupyter Notebook
2021/pinsage_movielens_robert_output_disabled.ipynb
harvard-visionlab/psy1406
20a620e09e5ed96f56d0ad1bfcfca9f03829638a
[ "MIT" ]
1
2021-01-28T22:02:05.000Z
2021-01-28T22:02:05.000Z
2021/pinsage_movielens_robert_output_disabled.ipynb
harvard-visionlab/psy1406
20a620e09e5ed96f56d0ad1bfcfca9f03829638a
[ "MIT" ]
null
null
null
2021/pinsage_movielens_robert_output_disabled.ipynb
harvard-visionlab/psy1406
20a620e09e5ed96f56d0ad1bfcfca9f03829638a
[ "MIT" ]
null
null
null
46.897603
694
0.513112
[ [ [ "<a href=\"https://colab.research.google.com/github/harvard-visionlab/psy1410/blob/master/psy1410_pinsage_movielens_robert_output_disabled.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# PinSage MovieRecommendation\n\nThis notebook has code that can be used to apply PinSage to an \"implicit recommender task.\" In this case, the data are movie ratings, so your data are users and their ratings for some set of movies. The data are split into a training and a test set, and the goal is to learn representations of users/movies that enable you to recommend movies a person would actually watch. You check the quality of your recommendations by using the \"test data\" to see if they actually already watched any of the movies you recommended. Specifically, you measure what percentage of your \"top-10 recommendations\" are \"hits\" (movies they actually watched, i.e., movies in the test set that they have rated). \n\nTo get started:\n0. Go to \"Runtime\"=>\"Change runtime type\" and make sure you are using the GPU, and that you uncheck \"Omit code cell output when saving this notebook\" so that cell outputs will be saved in the file. (When I save notebooks to github, I exclude the cell outputs because the file is then smaller.)\n1. Run the \"PinSage Prep\" section (~5-10min). \n2. Run the \"PinSage Code\" section.\n3. Open up the \"Check the model with data\" section to see what the PinSage model looks like.\n4. Go to the section \"PinSage Train on Implicit Task\" and run the \"baseline model, movie id only\" section. This is the minimal model, and only tries to learn an embedding for movies without any extra information about the movies. This will be your \"baseline model\", and the critical question is whether adding more information (e.g., plot embeddings, poster embeddings) or changing hyperparameters improves the model's performance. (~90 min)\n5. Choose 1 more of the suggested \"possible variations\" to run and see what factors influence the model's performance.\n6. 
Write up a brief Summary & Conclusions of your work.", "_____no_output_____" ], [ "# PinSage Prep \n\nThis chunk downloads and pre-processes the MovieLens data, preparing the graphs for training.", "_____no_output_____" ] ], [ [ "!pip install dgl-cu101 --upgrade\n!python -m pip install dask[dataframe] --upgrade\n!pip install madgrad", "_____no_output_____" ], [ "!wget -c http://files.grouplens.org/datasets/movielens/ml-1m.zip\n!unzip ml-1m.zip\n!rm ml-1m.zip", "_____no_output_____" ], [ "!wget -c https://www.dropbox.com/s/4blru88qafx1i4l/ml_25m_tmdb_plot_paraphrase-distilroberta-base-v1.pth.tar", "_____no_output_____" ], [ "!wget --quiet -c https://www.dropbox.com/s/8ty0mis0u3eza45/tmdb_backdrops_w780_SwinTransformer_avgpool.pth.tar\n!wget --quiet -c https://www.dropbox.com/s/rrovh5ludxonzxs/tmdb_backdrops_w780_VGG_classifier.4.pth.tar\n!wget --quiet -c https://www.dropbox.com/s/ixfo4yxq58utj9c/tmdb_posters_w500_VGG_classifier.4.pth.tar\n!wget --quiet -c https://www.dropbox.com/s/u5akhzatpmrck3a/tmdb_posters_w500_SwinTransformer_avgpool.pth.tar", "_____no_output_____" ], [ "!wget --quiet -c https://www.dropbox.com/s/qeur875d23zivko/ml_25m_links_imdb_synopsis_paraphrase-distilroberta-base-v1.pth.tar\n!wget --quiet -c https://www.dropbox.com/s/vpi2uno5plp2kvd/ml_25m_links_imdb_plot_paraphrase-distilroberta-base-v1.pth.tar\n!wget --quiet -c https://www.dropbox.com/s/wrwcprh2wih7rz5/ml_25m_links_imdb_longest_paraphrase-distilroberta-base-v1.pth.tar\n!wget --quiet -c https://www.dropbox.com/s/dgsom5hcdxjn8rs/ml_25m_links_imdb_full_plot_paraphrase-distilroberta-base-v1.pth.tar\n", "_____no_output_____" ], [ "\"\"\"Graph builder from pandas dataframes\"\"\"\nfrom collections import namedtuple\nfrom pandas.api.types import is_numeric_dtype, is_categorical_dtype, is_categorical\nimport torch\nimport dgl\n\n__all__ = ['PandasGraphBuilder']\n\ndef _series_to_tensor(series):\n    if is_categorical(series):\n        return torch.LongTensor(series.cat.codes.values.astype('int64'))\n    else:  # numeric\n        return torch.FloatTensor(series.values)\n\nclass PandasGraphBuilder(object):\n    \"\"\"Creates a heterogeneous graph from multiple pandas dataframes.\n    Examples\n    --------\n    Let's say we have the following three pandas dataframes:\n    User table ``users``:\n    =========== =========== =======\n    ``user_id`` ``country`` ``age``\n    =========== =========== =======\n    XYZZY       U.S.        
25\n FOO China 24\n BAR China 23\n =========== =========== =======\n Game table ``games``:\n =========== ========= ============== ==================\n ``game_id`` ``title`` ``is_sandbox`` ``is_multiplayer``\n =========== ========= ============== ==================\n 1 Minecraft True True\n 2 Tetris 99 False True\n =========== ========= ============== ==================\n Play relationship table ``plays``:\n =========== =========== =========\n ``user_id`` ``game_id`` ``hours``\n =========== =========== =========\n XYZZY 1 24\n FOO 1 20\n FOO 2 16\n BAR 2 28\n =========== =========== =========\n One could then create a bidirectional bipartite graph as follows:\n >>> builder = PandasGraphBuilder()\n >>> builder.add_entities(users, 'user_id', 'user')\n >>> builder.add_entities(games, 'game_id', 'game')\n >>> builder.add_binary_relations(plays, 'user_id', 'game_id', 'plays')\n >>> builder.add_binary_relations(plays, 'game_id', 'user_id', 'played-by')\n >>> g = builder.build()\n >>> g.number_of_nodes('user')\n 3\n >>> g.number_of_edges('plays')\n 4\n \"\"\"\n def __init__(self):\n self.entity_tables = {}\n self.relation_tables = {}\n\n self.entity_pk_to_name = {} # mapping from primary key name to entity name\n self.entity_pk = {} # mapping from entity name to primary key\n self.entity_key_map = {} # mapping from entity names to primary key values\n self.num_nodes_per_type = {}\n self.edges_per_relation = {}\n self.relation_name_to_etype = {}\n self.relation_src_key = {} # mapping from relation name to source key\n self.relation_dst_key = {} # mapping from relation name to destination key\n\n def add_entities(self, entity_table, primary_key, name): \n entities = entity_table[primary_key].astype('category')\n #set_trace()\n\n #if not entity_table[primary_key].is_unique:\n if not (entities.value_counts() == 1).all(): \n raise ValueError('Different entity with the same primary key detected.')\n \n # preserve the category order in the original entity table\n entities = entities.cat.reorder_categories(entity_table[primary_key].values)\n\n self.entity_pk_to_name[primary_key] = name\n self.entity_pk[name] = primary_key\n self.num_nodes_per_type[name] = entity_table.shape[0]\n #self.num_nodes_per_type[name] = len(entities.cat.categories)\n self.entity_key_map[name] = entities\n self.entity_tables[name] = entity_table\n\n def add_binary_relations(self, relation_table, source_key, destination_key, name):\n src = relation_table[source_key].astype('category')\n src = src.cat.set_categories(\n self.entity_key_map[self.entity_pk_to_name[source_key]].cat.categories)\n dst = relation_table[destination_key].astype('category')\n dst = dst.cat.set_categories(\n self.entity_key_map[self.entity_pk_to_name[destination_key]].cat.categories)\n if src.isnull().any():\n raise ValueError(\n 'Some source entities in relation %s do not exist in entity %s.' %\n (name, source_key))\n if dst.isnull().any():\n raise ValueError(\n 'Some destination entities in relation %s do not exist in entity %s.' 
%\n (name, destination_key))\n\n srctype = self.entity_pk_to_name[source_key]\n dsttype = self.entity_pk_to_name[destination_key]\n etype = (srctype, name, dsttype)\n self.relation_name_to_etype[name] = etype\n self.edges_per_relation[etype] = (src.cat.codes.values.astype('int64'), dst.cat.codes.values.astype('int64'))\n self.relation_tables[name] = relation_table\n self.relation_src_key[name] = source_key\n self.relation_dst_key[name] = destination_key\n\n def build(self):\n # Create heterograph\n graph = dgl.heterograph(self.edges_per_relation, self.num_nodes_per_type)\n return graph", "_____no_output_____" ], [ "\"\"\"\nScript that reads from raw MovieLens-1M data and dumps into a pickle\nfile the following:\n* A heterogeneous graph with categorical features.\n* A list with all the movie titles. The movie titles correspond to\n the movie nodes in the heterogeneous graph.\nThis script exemplifies how to prepare tabular data with textual\nfeatures. Since DGL graphs do not store variable-length features, we\ninstead put variable-length features into a more suitable container\n(e.g. torchtext to handle list of texts)\n\"\"\"\n\nimport os\nimport re\nimport argparse\nimport pickle\nimport pandas as pd\nimport numpy as np\nimport scipy.sparse as ssp\nimport dgl\nimport torch\nimport torchtext\nimport tqdm\nimport dask.dataframe as dd\n#from builder import PandasGraphBuilder\n\n# This is the train-test split method most of the recommender system papers running on MovieLens\n# take. It essentially follows the intuition of \"training on the past and predicting the future\".\n# One can also change the threshold to make the validation and test sets take larger proportions.\ndef train_test_split_by_time(df, timestamp, user):\n df['train_mask'] = np.ones((len(df),), dtype=bool) # np.bool is deprecated/removed in recent numpy\n df['val_mask'] = np.zeros((len(df),), dtype=bool)\n df['test_mask'] = np.zeros((len(df),), dtype=bool)\n df = dd.from_pandas(df, npartitions=10)\n def train_test_split(df):\n df = df.sort_values([timestamp])\n if df.shape[0] > 1:\n df.iloc[-1, -3] = False\n df.iloc[-1, -1] = True\n if df.shape[0] > 2:\n df.iloc[-2, -3] = False\n df.iloc[-2, -2] = True\n return df\n df = df.groupby(user, group_keys=False).apply(train_test_split).compute(scheduler='processes').sort_index()\n print(df[df[user] == df[user].unique()[0]].sort_values(timestamp))\n return df['train_mask'].to_numpy().nonzero()[0], \\\n df['val_mask'].to_numpy().nonzero()[0], \\\n df['test_mask'].to_numpy().nonzero()[0]\n\ndef build_train_graph(g, train_indices, utype, itype, etype, etype_rev):\n train_g = g.edge_subgraph(\n {etype: train_indices, etype_rev: train_indices},\n preserve_nodes=True)\n # remove the induced node IDs - should be assigned by model instead\n del train_g.nodes[utype].data[dgl.NID]\n del train_g.nodes[itype].data[dgl.NID]\n\n # copy features\n for ntype in g.ntypes:\n for col, data in g.nodes[ntype].data.items():\n train_g.nodes[ntype].data[col] = data\n for etype in g.etypes:\n for col, data in g.edges[etype].data.items():\n train_g.edges[etype].data[col] = data[train_g.edges[etype].data[dgl.EID]]\n\n return train_g\n\ndef build_val_test_matrix(g, val_indices, test_indices, utype, itype, etype):\n n_users = g.number_of_nodes(utype)\n n_items = g.number_of_nodes(itype)\n val_src, val_dst = g.find_edges(val_indices, etype=etype)\n test_src, test_dst = g.find_edges(test_indices, etype=etype)\n val_src = val_src.numpy()\n val_dst = val_dst.numpy()\n test_src = 
test_src.numpy()\n test_dst = test_dst.numpy()\n val_matrix = ssp.coo_matrix((np.ones_like(val_src), (val_src, val_dst)), (n_users, n_items))\n test_matrix = ssp.coo_matrix((np.ones_like(test_src), (test_src, test_dst)), (n_users, n_items))\n\n return val_matrix, test_matrix\n\ndef linear_normalize(values):\n return (values - values.min(0, keepdims=True)) / \\\n (values.max(0, keepdims=True) - values.min(0, keepdims=True))\n\ndef process_movielens1m(directory, output_path):\n\n ## Build heterogeneous graph\n \n # Load data\n users = []\n with open(os.path.join(directory, 'users.dat'), encoding='latin1') as f:\n for l in f:\n id_, gender, age, occupation, zip_ = l.strip().split('::')\n users.append({\n 'user_id': int(id_),\n 'gender': gender,\n 'age': age,\n 'occupation': occupation,\n 'zip': zip_,\n })\n users = pd.DataFrame(users).astype('category')\n\n movies = []\n with open(os.path.join(directory, 'movies.dat'), encoding='latin1') as f:\n for l in f:\n id_, title, genres = l.strip().split('::')\n genres_set = set(genres.split('|'))\n\n # extract year\n assert re.match(r'.*\\([0-9]{4}\\)$', title)\n year = title[-5:-1]\n title = title[:-6].strip()\n\n data = {'movie_id': int(id_), 'title': title, 'year': year}\n for g in genres_set:\n data[g] = True\n movies.append(data)\n movies = pd.DataFrame(movies).astype({'year': 'category'})\n\n ratings = []\n with open(os.path.join(directory, 'ratings.dat'), encoding='latin1') as f:\n for l in f:\n user_id, movie_id, rating, timestamp = [int(_) for _ in l.split('::')]\n ratings.append({\n 'user_id': user_id,\n 'movie_id': movie_id,\n 'rating': rating,\n 'timestamp': timestamp,\n })\n ratings = pd.DataFrame(ratings)\n\n # Filter the users and items that never appear in the rating table.\n distinct_users_in_ratings = ratings['user_id'].unique()\n distinct_movies_in_ratings = ratings['movie_id'].unique()\n users = users[users['user_id'].isin(distinct_users_in_ratings)]\n movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)]\n\n # Group the movie features into genres (a vector), year (a category), title (a string)\n genre_columns = movies.columns.drop(['movie_id', 'title', 'year'])\n movies[genre_columns] = movies[genre_columns].fillna(False).astype('bool')\n movies_categorical = movies.drop('title', axis=1)\n\n # Build graph\n graph_builder = PandasGraphBuilder()\n graph_builder.add_entities(users, 'user_id', 'user')\n graph_builder.add_entities(movies_categorical, 'movie_id', 'movie')\n graph_builder.add_binary_relations(ratings, 'user_id', 'movie_id', 'watched')\n graph_builder.add_binary_relations(ratings, 'movie_id', 'user_id', 'watched-by')\n\n g = graph_builder.build()\n\n # Assign features.\n # Note that variable-sized features such as texts or images are handled elsewhere.\n g.nodes['user'].data['gender'] = torch.LongTensor(users['gender'].cat.codes.values)\n g.nodes['user'].data['age'] = torch.LongTensor(users['age'].cat.codes.values)\n g.nodes['user'].data['occupation'] = torch.LongTensor(users['occupation'].cat.codes.values)\n g.nodes['user'].data['zip'] = torch.LongTensor(users['zip'].cat.codes.values)\n\n g.nodes['movie'].data['year'] = torch.LongTensor(movies['year'].cat.codes.values)\n g.nodes['movie'].data['genre'] = torch.FloatTensor(movies[genre_columns].values)\n\n g.edges['watched'].data['rating'] = torch.LongTensor(ratings['rating'].values)\n g.edges['watched'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)\n g.edges['watched-by'].data['rating'] = torch.LongTensor(ratings['rating'].values)\n 
g.edges['watched-by'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)\n\n # Train-validation-test split\n # This is a little bit tricky as we want to select the last interaction for test, and the\n # second-to-last interaction for validation.\n train_indices, val_indices, test_indices = train_test_split_by_time(ratings, 'timestamp', 'user_id')\n\n # Build the graph with training interactions only.\n train_g = build_train_graph(g, train_indices, 'user', 'movie', 'watched', 'watched-by')\n assert train_g.out_degrees(etype='watched').min() > 0\n\n # Build the user-item sparse matrix for validation and test set.\n val_matrix, test_matrix = build_val_test_matrix(g, val_indices, test_indices, 'user', 'movie', 'watched')\n\n ## Build title set\n\n movie_textual_dataset = {'title': movies['title'].values}\n\n # The model should build their own vocabulary and process the texts. Here is one example\n # of using torchtext to pad and numericalize a batch of strings.\n # field = torchtext.data.Field(include_lengths=True, lower=True, batch_first=True)\n # examples = [torchtext.data.Example.fromlist([t], [('title', title_field)]) for t in texts]\n # titleset = torchtext.data.Dataset(examples, [('title', title_field)])\n # field.build_vocab(titleset.title, vectors='fasttext.simple.300d')\n # token_ids, lengths = field.process([examples[0].title, examples[1].title])\n\n ## Dump the graph and the datasets\n\n dataset = {\n 'train-graph': train_g,\n 'val-matrix': val_matrix,\n 'test-matrix': test_matrix,\n 'item-texts': movie_textual_dataset,\n 'item-images': None,\n 'user-type': 'user',\n 'item-type': 'movie',\n 'user-to-item-type': 'watched',\n 'item-to-user-type': 'watched-by',\n 'timestamp-edge-column': 'timestamp'}\n\n with open(output_path, 'wb') as f:\n pickle.dump(dataset, f)\n", "_____no_output_____" ], [ "from IPython.core.debugger import set_trace \nfrom fastprogress.fastprogress import progress_bar\n\ndef process_movielens1m_text(directory, output_path, text_embeddings,\n only_id=False):\n\n ## Build heterogeneous graph\n\n # Load plot embeddings\n embeddings = torch.load(text_embeddings, map_location='cpu')\n\n # Load data\n users = []\n with open(os.path.join(directory, 'users.dat'), encoding='latin1') as f:\n for l in f:\n id_, gender, age, occupation, zip_ = l.strip().split('::')\n users.append({\n 'user_id': int(id_),\n 'gender': gender,\n 'age': age,\n 'occupation': occupation,\n 'zip': zip_,\n })\n users = pd.DataFrame(users).astype('category')\n\n movies = []\n with open(os.path.join(directory, 'movies.dat'), encoding='latin1') as f:\n for l in f:\n id_, title, genres = l.strip().split('::')\n genres_set = set(genres.split('|'))\n\n # extract year\n assert re.match(r'.*\\([0-9]{4}\\)$', title)\n year = title[-5:-1]\n title = title[:-6].strip()\n\n data = {'movie_id': int(id_), 'title': title, 'year': year}\n for g in genres_set:\n data[g] = True\n movies.append(data)\n movies = pd.DataFrame(movies).astype({'year': 'category'})\n\n ratings = []\n with open(os.path.join(directory, 'ratings.dat'), encoding='latin1') as f:\n for l in f:\n user_id, movie_id, rating, timestamp = [int(_) for _ in l.split('::')]\n ratings.append({\n 'user_id': user_id,\n 'movie_id': movie_id,\n 'rating': rating,\n 'timestamp': timestamp,\n })\n ratings = pd.DataFrame(ratings)\n\n # Filter the users and items that never appear in the rating table. 
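# (otherwise these movies/users would end up as isolated nodes in the user-movie graph)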
\n distinct_users_in_ratings = ratings['user_id'].unique()\n distinct_movies_in_ratings = ratings['movie_id'].unique()\n users = users[users['user_id'].isin(distinct_users_in_ratings)]\n movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)]\n\n # Filter users and items for movies that don't have embeddings\n distinct_movies = movies['movie_id'].unique()\n\n # drop embeddings for movies not in set\n distinct_movies_with_embeddings = np.array(embeddings['ml_ids']) \n embedding_has_rating = np.in1d(distinct_movies_with_embeddings, distinct_movies)\n distinct_movies_with_embeddings = distinct_movies_with_embeddings[embedding_has_rating]\n\n # drop movies without embedding\n movie_has_embedding = np.in1d(distinct_movies, distinct_movies_with_embeddings)\n rated_movies_with_embeddings = distinct_movies[movie_has_embedding]\n\n # Filter ratings, users, movies\n ratings = ratings[ratings['movie_id'].isin(rated_movies_with_embeddings)]\n distinct_users_in_ratings = ratings['user_id'].unique()\n distinct_movies_in_ratings = ratings['movie_id'].unique()\n # filtering users breaks everything, so don't\n #users = users[users['user_id'].isin(distinct_users_in_ratings)]\n movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)] \n\n # align the plot data with the movies dataframe\n # use_embeddings = np.in1d(np.array(embeddings['ml_ids']), movies['movie_id'].unique())\n plot_data = []\n for r,movie in progress_bar(movies.iterrows(), total=len(movies)):\n idx = embeddings['ml_ids'].index(movie.movie_id)\n plot_data.append(embeddings['embedding'][idx])\n\n # Group the movie features into genres (a vector), year (a category), title (a string)\n genre_columns = movies.columns.drop(['movie_id', 'title', 'year'])\n movies[genre_columns] = movies[genre_columns].fillna(False).astype('bool')\n movies_categorical = movies.drop('title', axis=1) \n\n # Build graph\n graph_builder = PandasGraphBuilder()\n graph_builder.add_entities(users, 'user_id', 'user')\n graph_builder.add_entities(movies_categorical, 'movie_id', 'movie')\n graph_builder.add_binary_relations(ratings, 'user_id', 'movie_id', 'watched')\n graph_builder.add_binary_relations(ratings, 'movie_id', 'user_id', 'watched-by')\n\n g = graph_builder.build()\n\n # Assign features.\n # Note that variable-sized features such as texts or images are handled elsewhere.\n g.nodes['user'].data['gender'] = torch.LongTensor(users['gender'].cat.codes.values)\n g.nodes['user'].data['age'] = torch.LongTensor(users['age'].cat.codes.values)\n g.nodes['user'].data['occupation'] = torch.LongTensor(users['occupation'].cat.codes.values)\n g.nodes['user'].data['zip'] = torch.LongTensor(users['zip'].cat.codes.values)\n\n if only_id==False:\n g.nodes['movie'].data['year'] = torch.LongTensor(movies['year'].cat.codes.values)\n g.nodes['movie'].data['genre'] = torch.FloatTensor(movies[genre_columns].values) \n g.nodes['movie'].data['plot'] = torch.stack(plot_data)\n \n g.edges['watched'].data['rating'] = torch.LongTensor(ratings['rating'].values)\n g.edges['watched'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)\n g.edges['watched-by'].data['rating'] = torch.LongTensor(ratings['rating'].values)\n g.edges['watched-by'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)\n\n # Train-validation-test split\n # This is a little bit tricky as we want to select the last interaction for test, and the\n # second-to-last interaction for validation.\n train_indices, val_indices, test_indices = train_test_split_by_time(ratings, 
'timestamp', 'user_id')\n\n # Build the graph with training interactions only.\n train_g = build_train_graph(g, train_indices, 'user', 'movie', 'watched', 'watched-by')\n assert train_g.out_degrees(etype='watched').min() > 0\n\n # Build the user-item sparse matrix for validation and test set.\n val_matrix, test_matrix = build_val_test_matrix(g, val_indices, test_indices, 'user', 'movie', 'watched')\n\n ## Build title set\n\n movie_textual_dataset = {'title': movies['title'].values}\n\n # The model should build their own vocabulary and process the texts. Here is one example\n # of using torchtext to pad and numericalize a batch of strings.\n # field = torchtext.data.Field(include_lengths=True, lower=True, batch_first=True)\n # examples = [torchtext.data.Example.fromlist([t], [('title', title_field)]) for t in texts]\n # titleset = torchtext.data.Dataset(examples, [('title', title_field)])\n # field.build_vocab(titleset.title, vectors='fasttext.simple.300d')\n # token_ids, lengths = field.process([examples[0].title, examples[1].title])\n\n ## Dump the graph and the datasets\n\n dataset = {\n 'train-graph': train_g,\n 'val-matrix': val_matrix,\n 'test-matrix': test_matrix,\n 'item-texts': movie_textual_dataset,\n 'item-images': None,\n 'user-type': 'user',\n 'item-type': 'movie',\n 'user-to-item-type': 'watched',\n 'item-to-user-type': 'watched-by',\n 'timestamp-edge-column': 'timestamp'}\n\n with open(output_path, 'wb') as f:\n pickle.dump(dataset, f) \n\ndef process_movielens1m_posters(directory, output_path, image_embeddings):\n\n ## Build heterogeneous graph\n\n # Load plot embeddings\n embeddings = torch.load(image_embeddings, map_location='cpu')\n\n # Load data\n users = []\n with open(os.path.join(directory, 'users.dat'), encoding='latin1') as f:\n for l in f:\n id_, gender, age, occupation, zip_ = l.strip().split('::')\n users.append({\n 'user_id': int(id_),\n 'gender': gender,\n 'age': age,\n 'occupation': occupation,\n 'zip': zip_,\n })\n users = pd.DataFrame(users).astype('category')\n\n movies = []\n with open(os.path.join(directory, 'movies.dat'), encoding='latin1') as f:\n for l in f:\n id_, title, genres = l.strip().split('::')\n genres_set = set(genres.split('|'))\n\n # extract year\n assert re.match(r'.*\\([0-9]{4}\\)$', title)\n year = title[-5:-1]\n title = title[:-6].strip()\n\n data = {'movie_id': int(id_), 'title': title, 'year': year}\n for g in genres_set:\n data[g] = True\n movies.append(data)\n movies = pd.DataFrame(movies).astype({'year': 'category'})\n\n ratings = []\n with open(os.path.join(directory, 'ratings.dat'), encoding='latin1') as f:\n for l in f:\n user_id, movie_id, rating, timestamp = [int(_) for _ in l.split('::')]\n ratings.append({\n 'user_id': user_id,\n 'movie_id': movie_id,\n 'rating': rating,\n 'timestamp': timestamp,\n })\n ratings = pd.DataFrame(ratings)\n\n # Filter the users and items that never appear in the rating table. 
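# (same filtering as in process_movielens1m_text above, now keyed to the poster/backdrop embeddings)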
\n distinct_users_in_ratings = ratings['user_id'].unique()\n distinct_movies_in_ratings = ratings['movie_id'].unique()\n users = users[users['user_id'].isin(distinct_users_in_ratings)]\n movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)]\n\n # Filter users and items for movies that don't have embeddings\n distinct_movies = movies['movie_id'].unique()\n\n # drop embeddings for movies not in set\n distinct_movies_with_embeddings = np.array(embeddings['ml_ids']) \n embedding_has_rating = np.in1d(distinct_movies_with_embeddings, distinct_movies)\n distinct_movies_with_embeddings = distinct_movies_with_embeddings[embedding_has_rating]\n\n # drop movies without embedding\n movie_has_embedding = np.in1d(distinct_movies, distinct_movies_with_embeddings)\n rated_movies_with_embeddings = distinct_movies[movie_has_embedding]\n\n # Filter ratings, users, movies\n ratings = ratings[ratings['movie_id'].isin(rated_movies_with_embeddings)]\n distinct_users_in_ratings = ratings['user_id'].unique()\n distinct_movies_in_ratings = ratings['movie_id'].unique()\n # filtering users breaks everything, so don't\n #users = users[users['user_id'].isin(distinct_users_in_ratings)]\n movies = movies[movies['movie_id'].isin(distinct_movies_in_ratings)] \n print(f\"movies included: {len(movies)}\")\n\n # align the image data with the movies dataframe\n # use_embeddings = np.in1d(np.array(embeddings['ml_ids']), movies['movie_id'].unique())\n image_data = []\n for r,movie in progress_bar(movies.iterrows(), total=len(movies)):\n idx = embeddings['ml_ids'].index(movie.movie_id)\n image_data.append(embeddings['embedding'][idx])\n\n # Group the movie features into genres (a vector), year (a category), title (a string)\n genre_columns = movies.columns.drop(['movie_id', 'title', 'year'])\n movies[genre_columns] = movies[genre_columns].fillna(False).astype('bool')\n movies_categorical = movies.drop('title', axis=1) \n\n # Build graph\n graph_builder = PandasGraphBuilder()\n graph_builder.add_entities(users, 'user_id', 'user')\n graph_builder.add_entities(movies_categorical, 'movie_id', 'movie')\n graph_builder.add_binary_relations(ratings, 'user_id', 'movie_id', 'watched')\n graph_builder.add_binary_relations(ratings, 'movie_id', 'user_id', 'watched-by')\n\n g = graph_builder.build()\n\n # Assign features.\n # Note that variable-sized features such as texts or images are handled elsewhere.\n g.nodes['user'].data['gender'] = torch.LongTensor(users['gender'].cat.codes.values)\n g.nodes['user'].data['age'] = torch.LongTensor(users['age'].cat.codes.values)\n g.nodes['user'].data['occupation'] = torch.LongTensor(users['occupation'].cat.codes.values)\n g.nodes['user'].data['zip'] = torch.LongTensor(users['zip'].cat.codes.values)\n\n g.nodes['movie'].data['year'] = torch.LongTensor(movies['year'].cat.codes.values)\n g.nodes['movie'].data['genre'] = torch.FloatTensor(movies[genre_columns].values) \n g.nodes['movie'].data['poster'] = torch.stack(image_data)\n \n g.edges['watched'].data['rating'] = torch.LongTensor(ratings['rating'].values)\n g.edges['watched'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)\n g.edges['watched-by'].data['rating'] = torch.LongTensor(ratings['rating'].values)\n g.edges['watched-by'].data['timestamp'] = torch.LongTensor(ratings['timestamp'].values)\n\n # Train-validation-test split\n # This is a little bit tricky as we want to select the last interaction for test, and the\n # second-to-last interaction for validation.\n train_indices, val_indices, test_indices = 
train_test_split_by_time(ratings, 'timestamp', 'user_id')\n\n # Build the graph with training interactions only.\n train_g = build_train_graph(g, train_indices, 'user', 'movie', 'watched', 'watched-by')\n assert train_g.out_degrees(etype='watched').min() > 0\n\n # Build the user-item sparse matrix for validation and test set.\n val_matrix, test_matrix = build_val_test_matrix(g, val_indices, test_indices, 'user', 'movie', 'watched')\n\n ## Build title set\n\n movie_textual_dataset = {'title': movies['title'].values}\n\n # The model should build their own vocabulary and process the texts. Here is one example\n # of using torchtext to pad and numericalize a batch of strings.\n # field = torchtext.data.Field(include_lengths=True, lower=True, batch_first=True)\n # examples = [torchtext.data.Example.fromlist([t], [('title', title_field)]) for t in texts]\n # titleset = torchtext.data.Dataset(examples, [('title', title_field)])\n # field.build_vocab(titleset.title, vectors='fasttext.simple.300d')\n # token_ids, lengths = field.process([examples[0].title, examples[1].title])\n\n ## Dump the graph and the datasets\n\n dataset = {\n 'train-graph': train_g,\n 'val-matrix': val_matrix,\n 'test-matrix': test_matrix,\n 'item-texts': movie_textual_dataset,\n 'item-images': None,\n 'user-type': 'user',\n 'item-type': 'movie',\n 'user-to-item-type': 'watched',\n 'item-to-user-type': 'watched-by',\n 'timestamp-edge-column': 'timestamp'}\n\n with open(output_path, 'wb') as f:\n pickle.dump(dataset, f) ", "_____no_output_____" ], [ "process_movielens1m_text('/content/ml-1m', '/content/ml_1m_imdb_synopsis.pkl', \n 'ml_25m_links_imdb_synopsis_paraphrase-distilroberta-base-v1.pth.tar')", "_____no_output_____" ], [ "process_movielens1m_text('/content/ml-1m', '/content/ml_1m_imdb_plot.pkl', \n 'ml_25m_links_imdb_plot_paraphrase-distilroberta-base-v1.pth.tar')", "_____no_output_____" ], [ "process_movielens1m_text('/content/ml-1m', '/content/ml_1m_imdb_longest.pkl', \n 'ml_25m_links_imdb_longest_paraphrase-distilroberta-base-v1.pth.tar')", "_____no_output_____" ], [ "process_movielens1m_text('/content/ml-1m', '/content/ml_1m_imdb_full_plot.pkl', \n 'ml_25m_links_imdb_full_plot_paraphrase-distilroberta-base-v1.pth.tar')", "_____no_output_____" ], [ "process_movielens1m_posters('/content/ml-1m', '/content/ml_1m_backdrop_vgg16.pkl', \n 'tmdb_backdrops_w780_VGG_classifier.4.pth.tar')", "_____no_output_____" ], [ "process_movielens1m_posters('/content/ml-1m', '/content/ml_1m_backdrop_swin.pkl', \n 'tmdb_backdrops_w780_SwinTransformer_avgpool.pth.tar')", "_____no_output_____" ], [ "process_movielens1m_text('/content/ml-1m', '/content/ml_1m_plot_data.pkl', \n 'ml_25m_tmdb_plot_paraphrase-distilroberta-base-v1.pth.tar')", "_____no_output_____" ], [ "process_movielens1m_text('/content/ml-1m', '/content/ml_1m_only_id.pkl', \n 'ml_25m_links_imdb_longest_paraphrase-distilroberta-base-v1.pth.tar',\n only_id=True)", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ], [ "# PinSage Code", "_____no_output_____" ], [ "## PinSage Layers", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport dgl\nimport dgl.nn.pytorch as dglnn\nimport dgl.function as fn\n\ndef disable_grad(module):\n for param in module.parameters():\n param.requires_grad = False\n\ndef _init_input_modules(g, ntype, textset, hidden_dims):\n # We initialize the linear projections of each input feature ``x`` as\n # follows:\n # * If ``x`` is a scalar integral feature, we assume that ``x`` is a categorical\n 
# feature, and assume the range of ``x`` is 0..max(x).\n # * If ``x`` is a float one-dimensional feature, we assume that ``x`` is a\n # numeric vector.\n # * If ``x`` is a field of a textset, we process it as bag of words.\n module_dict = nn.ModuleDict()\n\n for column, data in g.nodes[ntype].data.items():\n if column == dgl.NID:\n continue\n if data.dtype == torch.float32:\n assert data.ndim == 2\n m = nn.Linear(data.shape[1], hidden_dims)\n nn.init.xavier_uniform_(m.weight)\n nn.init.constant_(m.bias, 0)\n module_dict[column] = m\n elif data.dtype == torch.int64:\n assert data.ndim == 1\n m = nn.Embedding(\n data.max() + 2, hidden_dims, padding_idx=-1)\n nn.init.xavier_uniform_(m.weight)\n module_dict[column] = m\n\n if textset is not None:\n for column, field in textset.fields.items():\n if field.vocab.vectors:\n module_dict[column] = BagOfWordsPretrained(field, hidden_dims)\n else:\n module_dict[column] = BagOfWords(field, hidden_dims)\n\n return module_dict\n\nclass BagOfWordsPretrained(nn.Module):\n def __init__(self, field, hidden_dims):\n super().__init__()\n\n input_dims = field.vocab.vectors.shape[1]\n self.emb = nn.Embedding(\n len(field.vocab.itos), input_dims,\n padding_idx=field.vocab.stoi[field.pad_token])\n self.emb.weight[:] = field.vocab.vectors\n self.proj = nn.Linear(input_dims, hidden_dims)\n nn.init.xavier_uniform_(self.proj.weight)\n nn.init.constant_(self.proj.bias, 0)\n\n disable_grad(self.emb)\n\n def forward(self, x, length):\n \"\"\"\n x: (batch_size, max_length) LongTensor\n length: (batch_size,) LongTensor\n \"\"\"\n x = self.emb(x).sum(1) / length.unsqueeze(1).float()\n return self.proj(x)\n\nclass BagOfWords(nn.Module):\n def __init__(self, field, hidden_dims):\n super().__init__()\n\n self.emb = nn.Embedding(\n len(field.vocab.itos), hidden_dims,\n padding_idx=field.vocab.stoi[field.pad_token])\n nn.init.xavier_uniform_(self.emb.weight)\n\n def forward(self, x, length):\n return self.emb(x).sum(1) / length.unsqueeze(1).float()\n\nclass LinearProjector(nn.Module):\n \"\"\"\n Projects each input feature of the graph linearly and sums them up\n \"\"\"\n def __init__(self, full_graph, ntype, textset, hidden_dims):\n super().__init__()\n\n self.ntype = ntype\n self.inputs = _init_input_modules(full_graph, ntype, textset, hidden_dims)\n\n def forward(self, ndata):\n projections = []\n for feature, data in ndata.items():\n if feature == dgl.NID or feature.endswith('__len'):\n # This is an additional feature indicating the length of the ``feature``\n # column; we shouldn't process this.\n continue\n\n module = self.inputs[feature]\n if isinstance(module, (BagOfWords, BagOfWordsPretrained)):\n # Textual feature; find the length and pass it to the textual module.\n length = ndata[feature + '__len']\n result = module(data, length)\n else:\n result = module(data)\n projections.append(result)\n\n return torch.stack(projections, 1).sum(1)\n\nclass WeightedSAGEConv(nn.Module):\n def __init__(self, input_dims, hidden_dims, output_dims, act=F.relu):\n super().__init__()\n\n self.act = act\n self.Q = nn.Linear(input_dims, hidden_dims)\n self.W = nn.Linear(input_dims + hidden_dims, output_dims)\n self.reset_parameters()\n self.dropout = nn.Dropout(0.5)\n\n def reset_parameters(self):\n gain = nn.init.calculate_gain('relu')\n nn.init.xavier_uniform_(self.Q.weight, gain=gain)\n nn.init.xavier_uniform_(self.W.weight, gain=gain)\n nn.init.constant_(self.Q.bias, 0)\n nn.init.constant_(self.W.bias, 0)\n\n def forward(self, g, h, weights):\n \"\"\"\n g : graph\n h : node features\n 
weights : scalar edge weights\n \"\"\"\n h_src, h_dst = h\n with g.local_scope():\n g.srcdata['n'] = self.act(self.Q(self.dropout(h_src)))\n g.edata['w'] = weights.float()\n g.update_all(fn.u_mul_e('n', 'w', 'm'), fn.sum('m', 'n'))\n g.update_all(fn.copy_e('w', 'm'), fn.sum('m', 'ws'))\n n = g.dstdata['n']\n ws = g.dstdata['ws'].unsqueeze(1).clamp(min=1)\n z = self.act(self.W(self.dropout(torch.cat([n / ws, h_dst], 1))))\n z_norm = z.norm(2, 1, keepdim=True)\n z_norm = torch.where(z_norm == 0, torch.tensor(1.).to(z_norm), z_norm)\n z = z / z_norm\n return z\n\nclass SAGENet(nn.Module):\n def __init__(self, hidden_dims, n_layers):\n \"\"\"\n g : DGLHeteroGraph\n The user-item interaction graph.\n This is only for finding the range of categorical variables.\n item_textsets : torchtext.data.Dataset\n The textual features of each item node.\n \"\"\"\n super().__init__()\n\n self.convs = nn.ModuleList()\n for _ in range(n_layers):\n self.convs.append(WeightedSAGEConv(hidden_dims, hidden_dims, hidden_dims))\n\n def forward(self, blocks, h):\n for layer, block in zip(self.convs, blocks):\n h_dst = h[:block.number_of_nodes('DST/' + block.ntypes[0])]\n h = layer(block, (h, h_dst), block.edata['weights'])\n return h\n\nclass ItemToItemScorer(nn.Module):\n def __init__(self, full_graph, ntype):\n super().__init__()\n\n n_nodes = full_graph.number_of_nodes(ntype)\n self.bias = nn.Parameter(torch.zeros(n_nodes))\n\n def _add_bias(self, edges):\n bias_src = self.bias[edges.src[dgl.NID]]\n bias_dst = self.bias[edges.dst[dgl.NID]]\n return {'s': edges.data['s'] + bias_src + bias_dst}\n\n def forward(self, item_item_graph, h):\n \"\"\"\n item_item_graph : graph consists of edges connecting the pairs\n h : hidden state of every node\n \"\"\"\n with item_item_graph.local_scope():\n item_item_graph.ndata['h'] = h\n item_item_graph.apply_edges(fn.u_dot_v('h', 'h', 's'))\n item_item_graph.apply_edges(self._add_bias)\n pair_score = item_item_graph.edata['s']\n return pair_score", "_____no_output_____" ] ], [ [ "## PinSage Sampler", "_____no_output_____" ] ], [ [ "import numpy as np\nimport dgl\nimport torch\nfrom torch.utils.data import IterableDataset, DataLoader\n\ndef compact_and_copy(frontier, seeds):\n block = dgl.to_block(frontier, seeds)\n for col, data in frontier.edata.items():\n if col == dgl.EID:\n continue\n block.edata[col] = data[block.edata[dgl.EID]]\n return block\n\nclass ItemToItemBatchSampler(IterableDataset):\n def __init__(self, g, user_type, item_type, batch_size):\n self.g = g\n self.user_type = user_type\n self.item_type = item_type\n self.user_to_item_etype = list(g.metagraph()[user_type][item_type])[0]\n self.item_to_user_etype = list(g.metagraph()[item_type][user_type])[0]\n self.batch_size = batch_size\n\n def __iter__(self):\n while True:\n heads = torch.randint(0, self.g.number_of_nodes(self.item_type), (self.batch_size,))\n tails = dgl.sampling.random_walk(\n self.g,\n heads,\n metapath=[self.item_to_user_etype, self.user_to_item_etype])[0][:, 2]\n neg_tails = torch.randint(0, self.g.number_of_nodes(self.item_type), (self.batch_size,))\n\n mask = (tails != -1)\n yield heads[mask], tails[mask], neg_tails[mask]\n\nclass NeighborSampler(object):\n def __init__(self, g, user_type, item_type, random_walk_length, random_walk_restart_prob,\n num_random_walks, num_neighbors, num_layers):\n self.g = g\n self.user_type = user_type\n self.item_type = item_type\n self.user_to_item_etype = list(g.metagraph()[user_type][item_type])[0]\n self.item_to_user_etype = 
list(g.metagraph()[item_type][user_type])[0]\n self.samplers = [\n dgl.sampling.PinSAGESampler(g, item_type, user_type, random_walk_length,\n random_walk_restart_prob, num_random_walks, num_neighbors)\n for _ in range(num_layers)]\n\n def sample_blocks(self, seeds, heads=None, tails=None, neg_tails=None):\n blocks = []\n for sampler in self.samplers:\n frontier = sampler(seeds)\n if heads is not None:\n eids = frontier.edge_ids(torch.cat([heads, heads]), torch.cat([tails, neg_tails]), return_uv=True)[2]\n if len(eids) > 0:\n old_frontier = frontier\n frontier = dgl.remove_edges(old_frontier, eids)\n #print(old_frontier)\n #print(frontier)\n #print(frontier.edata['weights'])\n #frontier.edata['weights'] = old_frontier.edata['weights'][frontier.edata[dgl.EID]]\n block = compact_and_copy(frontier, seeds)\n seeds = block.srcdata[dgl.NID]\n blocks.insert(0, block)\n return blocks\n\n def sample_from_item_pairs(self, heads, tails, neg_tails):\n # Create a graph with positive connections only and another graph with negative\n # connections only.\n pos_graph = dgl.graph(\n (heads, tails),\n num_nodes=self.g.number_of_nodes(self.item_type))\n neg_graph = dgl.graph(\n (heads, neg_tails),\n num_nodes=self.g.number_of_nodes(self.item_type))\n pos_graph, neg_graph = dgl.compact_graphs([pos_graph, neg_graph])\n seeds = pos_graph.ndata[dgl.NID]\n\n blocks = self.sample_blocks(seeds, heads, tails, neg_tails)\n return pos_graph, neg_graph, blocks\n\ndef assign_simple_node_features(ndata, g, ntype, assign_id=False):\n \"\"\"\n Copies data to the given block from the corresponding nodes in the original graph.\n \"\"\"\n for col in g.nodes[ntype].data.keys():\n if not assign_id and col == dgl.NID:\n continue\n induced_nodes = ndata[dgl.NID]\n ndata[col] = g.nodes[ntype].data[col][induced_nodes]\n\ndef assign_textual_node_features(ndata, textset, ntype):\n \"\"\"\n Assigns numericalized tokens from a torchtext dataset to given block.\n The numericalized tokens would be stored in the block as node features\n with the same name as ``field_name``.\n The length would be stored as another node feature with name\n ``field_name + '__len'``.\n block : DGLHeteroGraph\n First element of the compacted blocks, with \"dgl.NID\" as the\n corresponding node ID in the original graph, hence the index to the\n text dataset.\n The numericalized tokens (and lengths if available) would be stored\n onto the blocks as new node features.\n textset : torchtext.data.Dataset\n A torchtext dataset whose number of examples is the same as that\n of nodes in the original graph.\n \"\"\"\n node_ids = ndata[dgl.NID].numpy()\n\n if textset is not None:\n for field_name, field in textset.fields.items():\n examples = [getattr(textset[i], field_name) for i in node_ids]\n\n tokens, lengths = field.process(examples)\n\n if not field.batch_first:\n tokens = tokens.t()\n\n ndata[field_name] = tokens\n ndata[field_name + '__len'] = lengths\n\ndef assign_features_to_blocks(blocks, g, textset, ntype):\n # For the first block (which is closest to the input), copy the features from\n # the original graph as well as the texts.\n assign_simple_node_features(blocks[0].srcdata, g, ntype)\n assign_textual_node_features(blocks[0].srcdata, textset, ntype)\n assign_simple_node_features(blocks[-1].dstdata, g, ntype)\n assign_textual_node_features(blocks[-1].dstdata, textset, ntype)\n\nclass PinSAGECollator(object):\n def __init__(self, sampler, g, ntype, textset):\n self.sampler = sampler\n self.ntype = ntype\n self.g = g\n self.textset = textset\n\n def 
collate_train(self, batches):\n heads, tails, neg_tails = batches[0]\n # Construct multilayer neighborhood via PinSAGE...\n pos_graph, neg_graph, blocks = self.sampler.sample_from_item_pairs(heads, tails, neg_tails)\n assign_features_to_blocks(blocks, self.g, self.textset, self.ntype)\n\n return pos_graph, neg_graph, blocks\n\n def collate_test(self, samples):\n batch = torch.LongTensor(samples)\n blocks = self.sampler.sample_blocks(batch)\n assign_features_to_blocks(blocks, self.g, self.textset, self.ntype)\n return blocks", "_____no_output_____" ] ], [ [ "## PinSage Evaluation", "_____no_output_____" ] ], [ [ "import numpy as np\nimport torch\nimport pickle\nimport dgl\nimport argparse\n\ndef prec(recommendations, ground_truth):\n n_users, n_items = ground_truth.shape\n K = recommendations.shape[1]\n user_idx = np.repeat(np.arange(n_users), K)\n item_idx = recommendations.flatten()\n relevance = ground_truth[user_idx, item_idx].reshape((n_users, K))\n hit = relevance.any(axis=1).mean()\n return hit\n\nclass LatestNNRecommender(object):\n def __init__(self, user_ntype, item_ntype, user_to_item_etype, timestamp, batch_size):\n self.user_ntype = user_ntype\n self.item_ntype = item_ntype\n self.user_to_item_etype = user_to_item_etype\n self.batch_size = batch_size\n self.timestamp = timestamp\n\n def recommend(self, full_graph, K, h_user, h_item):\n \"\"\"\n Return a (n_user, K) matrix of recommended items for each user\n \"\"\"\n graph_slice = full_graph.edge_type_subgraph([self.user_to_item_etype])\n n_users = full_graph.number_of_nodes(self.user_ntype)\n latest_interactions = dgl.sampling.select_topk(graph_slice, 1, self.timestamp, edge_dir='out')\n user, latest_items = latest_interactions.all_edges(form='uv', order='srcdst')\n # each user should have at least one \"latest\" interaction\n assert torch.equal(user, torch.arange(n_users))\n\n recommended_batches = []\n user_batches = torch.arange(n_users).split(self.batch_size)\n for user_batch in user_batches:\n latest_item_batch = latest_items[user_batch].to(device=h_item.device)\n dist = h_item[latest_item_batch] @ h_item.t()\n # exclude items that are already interacted\n for i, u in enumerate(user_batch.tolist()):\n interacted_items = full_graph.successors(u, etype=self.user_to_item_etype)\n dist[i, interacted_items] = -np.inf\n recommended_batches.append(dist.topk(K, 1)[1])\n\n recommendations = torch.cat(recommended_batches, 0)\n return recommendations\n\n\ndef evaluate_nn(dataset, h_item, k, batch_size):\n g = dataset['train-graph']\n val_matrix = dataset['val-matrix'].tocsr()\n test_matrix = dataset['test-matrix'].tocsr()\n item_texts = dataset['item-texts']\n user_ntype = dataset['user-type']\n item_ntype = dataset['item-type']\n user_to_item_etype = dataset['user-to-item-type']\n timestamp = dataset['timestamp-edge-column']\n\n rec_engine = LatestNNRecommender(\n user_ntype, item_ntype, user_to_item_etype, timestamp, batch_size)\n\n recommendations = rec_engine.recommend(g, k, None, h_item).cpu().numpy()\n return prec(recommendations, val_matrix)", "_____no_output_____" ] ], [ [ "## PinSage Training", "_____no_output_____" ] ], [ [ "import pickle\nimport argparse\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nimport torchtext\nimport dgl\nimport tqdm\n\nimport madgrad \nfrom fastprogress.fastprogress import master_bar, progress_bar \n\n# import layers\n# import sampler as sampler_module\n# import evaluation\n\nclass PinSAGEModel(nn.Module):\n def __init__(self, full_graph, ntype, 
textsets, hidden_dims, n_layers):\n super().__init__()\n\n self.proj = LinearProjector(full_graph, ntype, textsets, hidden_dims)\n self.sage = SAGENet(hidden_dims, n_layers)\n self.scorer = ItemToItemScorer(full_graph, ntype)\n\n def forward(self, pos_graph, neg_graph, blocks):\n h_item = self.get_repr(blocks)\n pos_score = self.scorer(pos_graph, h_item)\n neg_score = self.scorer(neg_graph, h_item)\n return (neg_score - pos_score + 1).clamp(min=0)\n\n def get_repr(self, blocks):\n h_item = self.proj(blocks[0].srcdata)\n h_item_dst = self.proj(blocks[-1].dstdata)\n return h_item_dst + self.sage(blocks, h_item)\n\ndef train_pinsage_implicit(args):\n # Load dataset\n with open(args.dataset_path, 'rb') as f:\n dataset = pickle.load(f)\n \n g = dataset['train-graph'] \n val_matrix = dataset['val-matrix'].tocsr()\n test_matrix = dataset['test-matrix'].tocsr()\n item_texts = dataset['item-texts']\n user_ntype = dataset['user-type']\n item_ntype = dataset['item-type']\n user_to_item_etype = dataset['user-to-item-type']\n timestamp = dataset['timestamp-edge-column']\n\n device = torch.device(args.device)\n\n # Assign user and movie IDs and use them as features (to learn an individual \n # trainable embedding for each entity)\n g.nodes[user_ntype].data['id'] = torch.arange(g.number_of_nodes(user_ntype))\n g.nodes[item_ntype].data['id'] = torch.arange(g.number_of_nodes(item_ntype))\n\n # Prepare torchtext dataset and vocabulary\n if args.add_title:\n fields = {}\n examples = []\n for key, texts in item_texts.items():\n fields[key] = torchtext.legacy.data.Field(include_lengths=True, lower=True, batch_first=True)\n for i in range(g.number_of_nodes(item_ntype)):\n example = torchtext.legacy.data.Example.fromlist(\n [item_texts[key][i] for key in item_texts.keys()],\n [(key, fields[key]) for key in item_texts.keys()])\n examples.append(example)\n textset = torchtext.legacy.data.Dataset(examples, fields)\n for key, field in fields.items():\n field.build_vocab(getattr(textset, key))\n #field.build_vocab(getattr(textset, key), vectors='fasttext.simple.300d')\n else:\n textset = None\n\n # Sampler\n batch_sampler = ItemToItemBatchSampler(\n g, user_ntype, item_ntype, args.batch_size)\n neighbor_sampler = NeighborSampler(\n g, user_ntype, item_ntype, args.random_walk_length,\n args.random_walk_restart_prob, args.num_random_walks, args.num_neighbors,\n args.num_layers)\n collator = PinSAGECollator(neighbor_sampler, g, item_ntype, textset)\n dataloader = DataLoader(\n batch_sampler,\n collate_fn=collator.collate_train,\n num_workers=args.num_workers)\n dataloader_test = DataLoader(\n torch.arange(g.number_of_nodes(item_ntype)),\n batch_size=args.batch_size,\n collate_fn=collator.collate_test,\n num_workers=args.num_workers)\n dataloader_it = iter(dataloader)\n\n # Model\n model = PinSAGEModel(g, item_ntype, textset, args.hidden_dims, args.num_layers).to(device)\n print(model)\n\n # Optimizer\n if args.opt == 'MADGRAD':\n opt = madgrad.MADGRAD(model.parameters(), lr=args.lr)\n else:\n opt = torch.optim.__dict__[args.opt](model.parameters(), lr=args.lr)\n print(opt)\n\n # For each batch of head-tail-negative triplets...\n mb = master_bar(range(args.num_epochs))\n for epoch_id in mb:\n model.train()\n for batch_id in progress_bar(range(args.batches_per_epoch), parent=mb):\n pos_graph, neg_graph, blocks = next(dataloader_it)\n # Copy to GPU\n for i in range(len(blocks)):\n blocks[i] = blocks[i].to(device, non_blocking=True)\n pos_graph = pos_graph.to(device, non_blocking=True)\n neg_graph = neg_graph.to(device, 
non_blocking=True)\n\n loss = model(pos_graph, neg_graph, blocks).mean()\n opt.zero_grad()\n loss.backward()\n opt.step()\n\n # Evaluate\n model.eval()\n with torch.no_grad():\n item_batches = torch.arange(g.number_of_nodes(item_ntype)).split(args.batch_size)\n h_item_batches = []\n for blocks in dataloader_test:\n for i in range(len(blocks)):\n blocks[i] = blocks[i].to(device)\n\n h_item_batches.append(model.get_repr(blocks))\n h_item = torch.cat(h_item_batches, 0)\n\n hit_rate = evaluate_nn(dataset, h_item, args.k, args.batch_size)\n\n print(f\"\\nEpoch [{epoch_id:02d}]/[{args.num_epochs:02d}]: Hit@{args.k}: {hit_rate:2.3f}\")", "_____no_output_____" ] ], [ [ "# Check model with data\n\nChoose different datasets to see how the model automatically adjusts to fit the data in the graph.", "_____no_output_____" ] ], [ [ "import torch\nfrom types import SimpleNamespace\n\nargs = SimpleNamespace()\n# args.dataset_path = '/content/data.pkl'\n# args.dataset_path = '/content/ml_1m_plot_data.pkl'\n# args.dataset_path = '/content/ml_1m_backdrop_swin.pkl'\n# args.dataset_path = '/content/ml_1m_imdb_longest.pkl'\nargs.dataset_path = '/content/ml_1m_only_id.pkl'\nargs.random_walk_length = 2\nargs.random_walk_restart_prob = .5 \nargs.num_random_walks = 1 \nargs.num_neighbors = 3 \nargs.num_layers = 2 \nargs.hidden_dims = 16\nargs.batch_size = 32 \nargs.device = 'cuda:0' if torch.cuda.is_available() else 'cpu'\nargs.num_epochs = 10\nargs.batches_per_epoch = 20000\nargs.num_workers = 2\nargs.lr = 3e-5\nargs.k = 10\nargs.opt = 'MADGRAD' # Adam, AdamW, MADGRAD\nargs.add_title = False\nprint(args)\n\n# Load dataset\nwith open(args.dataset_path, 'rb') as f:\n dataset = pickle.load(f)\n\ng = dataset['train-graph']\nval_matrix = dataset['val-matrix'].tocsr()\ntest_matrix = dataset['test-matrix'].tocsr()\nitem_texts = dataset['item-texts']\nuser_ntype = dataset['user-type']\nitem_ntype = dataset['item-type']\nuser_to_item_etype = dataset['user-to-item-type']\ntimestamp = dataset['timestamp-edge-column']\n\ndevice = torch.device(args.device)\n\n# Assign user and movie IDs and use them as features (to learn an individual \n# trainable embedding for each entity)\ng.nodes[user_ntype].data['id'] = torch.arange(g.number_of_nodes(user_ntype))\ng.nodes[item_ntype].data['id'] = torch.arange(g.number_of_nodes(item_ntype))\n\n# drop features\n# del g.nodes['movie'].data['year']\n# del g.nodes['movie'].data['genre']\n# del g.nodes['movie'].data['plot']\n\n# Prepare torchtext dataset and vocabulary\nif args.add_title:\n fields = {}\n examples = []\n for key, texts in item_texts.items():\n fields[key] = torchtext.legacy.data.Field(include_lengths=True, lower=True, batch_first=True)\n for i in range(g.number_of_nodes(item_ntype)):\n example = torchtext.legacy.data.Example.fromlist(\n [item_texts[key][i] for key in item_texts.keys()],\n [(key, fields[key]) for key in item_texts.keys()])\n examples.append(example)\n textset = torchtext.legacy.data.Dataset(examples, fields)\n for key, field in fields.items():\n field.build_vocab(getattr(textset, key))\n #field.build_vocab(getattr(textset, key), vectors='fasttext.simple.300d')\nelse:\n textset = None \n\n# Model\nmodel = PinSAGEModel(g, item_ntype, textset, args.hidden_dims, args.num_layers).to(device)\nprint(model)", "_____no_output_____" ] ], [ [ "# PinSage Train on Implicit Task", "_____no_output_____" ], [ "## template, don't use/edit this one, it's just for reference", "_____no_output_____" ] ], [ [ "from types import SimpleNamespace\n\nargs = 
SimpleNamespace()\nargs.dataset_path = '/content/data.pkl'\n# args.dataset_path = '/content/ml_1m_only_id.pkl'\nargs.random_walk_length = 2\nargs.random_walk_restart_prob = .5 \nargs.num_random_walks = 1 \nargs.num_neighbors = 3 \nargs.num_layers = 2 \nargs.hidden_dims = 16\nargs.batch_size = 32 \nargs.device = 'cuda:0' if torch.cuda.is_available() else 'cpu'\nargs.num_epochs = 10\nargs.batches_per_epoch = 20000\nargs.num_workers = 2\nargs.lr = 3e-5\nargs.k = 10\nargs.opt = 'MADGRAD' # Adam, AdamW, MADGRAD\nargs.add_title = True \nprint(args)\n\n# baseline (movie id only): Epoch [09]/[10]: Hit@10: 0.042\n# all movie data plus longest plot: Epoch [09]/[10]: Hit@10: 0.080\n# without plot (MADGRAD): Epoch [09]/[10]: Hit@10: 0.081\n# with plot (MADGRAD): Epoch [09]/[10]: Hit@10: 0.060\n# with plot (ADAMW): Epoch [09]/[10]: Hit@10: 0.064\n", "_____no_output_____" ] ], [ [ "## baseline model, movie id only", "_____no_output_____" ] ], [ [ "from types import SimpleNamespace\n\nargs = SimpleNamespace()\nargs.dataset_path = '/content/ml_1m_only_id.pkl'\nargs.random_walk_length = 2\nargs.random_walk_restart_prob = .5 \nargs.num_random_walks = 1 \nargs.num_neighbors = 3 \nargs.num_layers = 2 \nargs.hidden_dims = 32 # 16\nargs.batch_size = 256 # 32\nargs.device = 'cuda:0' if torch.cuda.is_available() else 'cpu'\nargs.num_epochs = 10\nargs.batches_per_epoch = 20000\nargs.num_workers = 4\nargs.lr = 3e-5\nargs.k = 10\nargs.opt = 'MADGRAD' # Adam, AdamW, MADGRAD\nargs.add_title = False \n\nprint(args)\ntrain_pinsage_implicit(args)", "_____no_output_____" ] ], [ [ "# possible variations\n\nYou can try different datasets (i.e., change the name of the data file being used to load the graph - the PinSage model automatically adapts to the data in the graph):\n- [ ] ml_1m_only_id.pkl (this is our baseline)\n- [ ] ml_1m_imdb_plot.pkl (embedding of short plot descriptions)\n- [ ] ml_1m_imdb_full_plot.pkl (embedding of long plot descriptions)\n- [ ] ml_1m_imdb_synopsis.pkl (embedding of pages-long movie summaries)\n- [ ] ml_1m_imdb_longest.pkl (embedding of the longest available text; since many movies don't have a full synopsis, we fall back to the full plot or short plot as needed)\n- [ ] ml_1m_poster_swin (movie poster embedding)\n- [ ] ml_1m_backdrop_swin (widescreen movie poster embedding)\n\n\nOr you can try different hyperparameters:\n- [ ] args.hidden_dims (number of dimensions used to encode node information)\n- [ ] args.num_layers (how many \"hops\" the PinSage random walk goes when building graphs)\n- [ ] args.num_random_walks (how many walks to take)\n- [ ] args.num_neighbors (how many neighbors to keep)\n\nYou can make changes in the cell below and then run it to try a new model. If you run multiple variations, then just copy this code into a new cell for each variant, so you can use the cell output as a record. (A sketch of one way to script several variants follows below.)", "_____no_output_____" ] ],
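[ [ "One possible way to script several variants (a sketch under assumptions: `train_pinsage_implicit` and the `args` fields are exactly the ones defined above; the two dataset paths are just an example pair to compare against the baseline):\n\n```python\nimport torch\nfrom types import SimpleNamespace\n\nbase = dict(random_walk_length=2, random_walk_restart_prob=.5,\n            num_random_walks=1, num_neighbors=3, num_layers=2,\n            hidden_dims=32, batch_size=256,\n            device='cuda:0' if torch.cuda.is_available() else 'cpu',\n            num_epochs=10, batches_per_epoch=20000, num_workers=2,\n            lr=3e-5, k=10, opt='MADGRAD', add_title=False)\n\n# hypothetical pair of variants: id-only baseline vs. longest-text plot embeddings\nfor path in ['/content/ml_1m_only_id.pkl', '/content/ml_1m_imdb_longest.pkl']:\n    args = SimpleNamespace(dataset_path=path, **base)\n    print(args)\n    train_pinsage_implicit(args)\n```\n\nEach call trains a fresh model, so the printed Hit@10 numbers are directly comparable across variants.", "_____no_output_____" ] ],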
[ [ "from types import SimpleNamespace\n\n# Edit the Dataset path\nargs = SimpleNamespace()\nargs.dataset_path = '/content/ml_1m_only_id.pkl'\nargs.random_walk_length = 2\nargs.random_walk_restart_prob = .5 \nargs.num_random_walks = 1 \nargs.num_neighbors = 3 \nargs.num_layers = 2 \nargs.hidden_dims = 32 # 16\nargs.batch_size = 256 # 32\nargs.device = 'cuda:0' if torch.cuda.is_available() else 'cpu'\nargs.num_epochs = 10\nargs.batches_per_epoch = 20000\nargs.num_workers = 2\nargs.lr = 3e-5\nargs.k = 10\nargs.opt = 'MADGRAD' # Adam, AdamW, MADGRAD\nargs.add_title = False \n\nprint(args)\ntrain_pinsage_implicit(args)", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "# Summary & Conclusions\nBriefly write up a summary of what you did, what you found, and what you think it means.\n\nThen share this notebook (your edited copy) with me ([email protected]) to submit your final project.", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e7dc9f894e9b713b9b6384adcec42680c4b5eb84
24,162
ipynb
Jupyter Notebook
02jc_train_on_melspectrograms_pytorch_lme_pool_all_classes_simple_minmax_npy.ipynb
rhine3/birdcall
2ec3535f9fdc57d3bb628d100ded141f9d8baefb
[ "Apache-2.0" ]
50
2020-06-19T18:37:49.000Z
2020-09-18T15:47:27.000Z
02jc_train_on_melspectrograms_pytorch_lme_pool_all_classes_simple_minmax_npy.ipynb
licaYu/birdcall
2ec3535f9fdc57d3bb628d100ded141f9d8baefb
[ "Apache-2.0" ]
2
2020-08-24T11:48:13.000Z
2020-08-24T11:55:06.000Z
02jc_train_on_melspectrograms_pytorch_lme_pool_all_classes_simple_minmax_npy.ipynb
licaYu/birdcall
2ec3535f9fdc57d3bb628d100ded141f9d8baefb
[ "Apache-2.0" ]
9
2020-06-20T17:11:46.000Z
2020-08-27T21:51:11.000Z
34.815562
1,453
0.489281
[ [ [ "from birdcall.data import *\nfrom birdcall.metrics import *\nfrom birdcall.ops import *\n\nimport torch\nimport torchvision\nfrom torch import nn\nimport numpy as np\nimport pandas as pd\nfrom pathlib import Path\nimport soundfile as sf", "_____no_output_____" ], [ "BS = 16\nMAX_LR = 1e-3", "_____no_output_____" ], [ "classes = pd.read_pickle('data/classes.pkl')", "_____no_output_____" ], [ "# all_train_items = pd.read_pickle('data/all_train_items.pkl')\n\n# all_train_items_npy = []\n\n# for ebird_code, path, duration in all_train_items:\n# fn = path.stem\n# new_path = Path(f'data/npy/train_resampled/{ebird_code}/{fn}.npy')\n# all_train_items_npy.append((ebird_code, new_path, duration))\n \n# pd.to_pickle(all_train_items_npy, 'data/all_train_items_npy.pkl')", "_____no_output_____" ], [ "splits = pd.read_pickle('data/all_splits.pkl')\nall_train_items = pd.read_pickle('data/all_train_items_npy.pkl')\n\ntrain_items = np.array(all_train_items)[splits[0][0]].tolist()\nval_items = np.array(all_train_items)[splits[0][1]].tolist()", "_____no_output_____" ], [ "from collections import defaultdict\n\nclass2train_items = defaultdict(list)\n\nfor cls_name, path, duration in train_items:\n class2train_items[cls_name].append((path, duration))", "_____no_output_____" ], [ "pd.to_pickle(class2train_items, 'data/class2train_items.pkl')", "_____no_output_____" ], [ "train_ds = MelspecPoolDataset(class2train_items, classes, len_mult=50, normalize=False)\ntrain_dl = torch.utils.data.DataLoader(train_ds, batch_size=BS, num_workers=NUM_WORKERS, pin_memory=True, shuffle=True)", "_____no_output_____" ], [ "val_items = [(classes.index(item[0]), item[1], item[2]) for item in val_items]\nval_items_binned = bin_items_negative_class(val_items)", "_____no_output_____" ], [ "class Model(nn.Module):\n def __init__(self):\n super().__init__()\n self.cnn = nn.Sequential(*list(torchvision.models.resnet34(True).children())[:-2])\n self.classifier = nn.Sequential(*[\n nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5), nn.BatchNorm1d(512),\n nn.Linear(512, 512), nn.ReLU(), nn.Dropout(p=0.5), nn.BatchNorm1d(512),\n nn.Linear(512, len(classes))\n ])\n \n def forward(self, x):\n max_per_example = x.view(x.shape[0], -1).max(1)[0] # scaling to between 0 and 1\n x /= max_per_example[:, None, None, None, None] # per example!\n bs, im_num = x.shape[:2]\n x = x.view(-1, x.shape[2], x.shape[3], x.shape[4])\n x = self.cnn(x)\n x = x.mean((2,3))\n x = self.classifier(x)\n x = x.view(bs, im_num, -1)\n x = lme_pool(x)\n return x", "_____no_output_____" ], [ "model = Model().cuda()", "_____no_output_____" ], [ "import torch.optim as optim\nfrom sklearn.metrics import accuracy_score, f1_score\nimport time", "_____no_output_____" ], [ "criterion = nn.BCEWithLogitsLoss()\noptimizer = optim.Adam(model.parameters(), MAX_LR)\nscheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, 5)", "_____no_output_____" ], [ "# sc_items = pd.read_pickle('data/soundscape_items.pkl')\n\n# sc_items_npy = []\n# for labels, path, offset in sc_items:\n# sc_items_npy.append((labels, Path(f'data/npy/shifted/{path.stem}.npy'), offset))\n \n# pd.to_pickle(sc_items_npy, 'data/soundscape_items_npy.pkl')", "_____no_output_____" ], [ "sc_ds = SoundscapeMelspecPoolDataset(pd.read_pickle('data/soundscape_items_npy.pkl'), classes)\nsc_dl = torch.utils.data.DataLoader(sc_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)", "_____no_output_____" ], [ "t0 = time.time()\nfor epoch in range(260):\n running_loss = 0.0\n for i, data in enumerate(train_dl, 0):\n 
model.train()\n inputs, labels = data[0].cuda(), data[1].cuda()\n optimizer.zero_grad()\n\n outputs = model(inputs)\n loss = criterion(outputs, labels)\n if np.isnan(loss.item()): \n raise Exception(f'!!! nan encountered in loss !!! epoch: {epoch}\\n')\n loss.backward()\n optimizer.step()\n scheduler.step()\n\n running_loss += loss.item()\n\n\n if epoch % 5 == 4:\n model.eval();\n preds = []\n targs = []\n\n for num_specs in val_items_binned.keys():\n valid_ds = MelspecShortishValidatioDataset(val_items_binned[num_specs], classes)\n valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)\n\n with torch.no_grad():\n for data in valid_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n preds.append(outputs.cpu().detach())\n targs.append(labels.cpu().detach())\n\n preds = torch.cat(preds)\n targs = torch.cat(targs)\n\n f1s = []\n ts = []\n for t in np.linspace(0.4, 1, 61):\n f1s.append(f1_score(preds.sigmoid() > t, targs, average='micro'))\n ts.append(t)\n \n sc_preds = []\n sc_targs = []\n with torch.no_grad():\n for data in sc_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n sc_preds.append(outputs.cpu().detach())\n sc_targs.append(labels.cpu().detach())\n\n sc_preds = torch.cat(sc_preds)\n sc_targs = torch.cat(sc_targs)\n sc_f1 = f1_score(sc_preds.sigmoid() > 0.5, sc_targs, average='micro')\n \n sc_f1s = []\n sc_ts = []\n for t in np.linspace(0.4, 1, 61):\n sc_f1s.append(f1_score(sc_preds.sigmoid() > t, sc_targs, average='micro'))\n sc_ts.append(t)\n \n f1 = f1_score(preds.sigmoid() > 0.5, targs, average='micro')\n print(f'[{epoch + 1}, {(time.time() - t0)/60:.1f}] loss: {running_loss / (len(train_dl)-1):.3f}, f1: {max(f1s):.3f}, sc_f1: {max(sc_f1s):.3f}')\n running_loss = 0.0\n\n torch.save(model.state_dict(), f'models/{epoch+1}_lmepool_simple_minmax_log_{round(f1, 2)}.pth')", "_____no_output_____" ], [ "t0 = time.time()\nfor epoch in range(130):\n running_loss = 0.0\n for i, data in enumerate(train_dl, 0):\n model.train()\n inputs, labels = data[0].cuda(), data[1].cuda()\n optimizer.zero_grad()\n\n outputs = model(inputs)\n loss = criterion(outputs, labels)\n if np.isnan(loss.item()): \n raise Exception(f'!!! nan encountered in loss !!! 
epoch: {epoch}\\n')\n loss.backward()\n optimizer.step()\n scheduler.step()\n\n running_loss += loss.item()\n\n if epoch % 5 == 4:\n model.eval();\n preds = []\n targs = []\n\n for num_specs in val_items_binned.keys():\n valid_ds = MelspecShortishValidatioDataset(val_items_binned[num_specs], classes)\n valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)\n\n with torch.no_grad():\n for data in valid_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n preds.append(outputs.cpu().detach())\n targs.append(labels.cpu().detach())\n\n preds = torch.cat(preds)\n targs = torch.cat(targs)\n\n f1s = []\n ts = []\n for t in np.linspace(0.4, 1, 61):\n f1s.append(f1_score(preds.sigmoid() > t, targs, average='micro'))\n ts.append(t)\n \n sc_preds = []\n sc_targs = []\n with torch.no_grad():\n for data in sc_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n sc_preds.append(outputs.cpu().detach())\n sc_targs.append(labels.cpu().detach())\n\n sc_preds = torch.cat(sc_preds)\n sc_targs = torch.cat(sc_targs)\n sc_f1 = f1_score(sc_preds.sigmoid() > 0.5, sc_targs, average='micro')\n \n sc_f1s = []\n sc_ts = []\n for t in np.linspace(0.4, 1, 61):\n sc_f1s.append(f1_score(sc_preds.sigmoid() > t, sc_targs, average='micro'))\n sc_ts.append(t)\n \n f1 = f1_score(preds.sigmoid() > 0.5, targs, average='micro')\n print(f'[{epoch + 1}, {(time.time() - t0)/60:.1f}] loss: {running_loss / (len(train_dl)-1):.3f}, f1: {max(f1s):.3f}, sc_f1: {max(sc_f1s):.3f}')\n running_loss = 0.0\n\n torch.save(model.state_dict(), f'models/{epoch+1}_lmepool_simple_minmax_npy_{round(f1, 2)}.pth')", "[5, 15.1] loss: 1983.842, f1: 0.003, sc_f1: 0.000\n" ], [ "train_ds[1][1]", "_____no_output_____" ], [ "model.load_state_dict(torch.load('models/130_lmepool_simple_log_0.71.pth'))", "_____no_output_____" ], [ "model.eval();\npreds = []\ntargs = []\n\nfor num_specs in val_items_binned.keys():\n valid_ds = MelspecShortishValidatioDataset(val_items_binned[num_specs], classes)\n valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)\n\n with torch.no_grad():\n for data in valid_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n preds.append(outputs.cpu().detach())\n targs.append(labels.cpu().detach())\n\npreds = torch.cat(preds)\ntargs = torch.cat(targs)", "_____no_output_____" ], [ "f1s = []\nts = []\nfor t in np.linspace(0.4, 1, 61):\n f1s.append(f1_score(preds.sigmoid() > t, targs, average='micro'))\n ts.append(t)", "_____no_output_____" ], [ "accuracy_score(preds.sigmoid() > ts[np.argmax(f1s)], targs), max(f1s)", "_____no_output_____" ], [ "ts[np.argmax(f1s)]", "_____no_output_____" ], [ "preds_to_tp_fp_fn(preds, targs)", "_____no_output_____" ], [ "sc_ds = SoundscapeMelspecPoolDataset(pd.read_pickle('data/soundscape_items.pkl'), classes)\nsc_dl = torch.utils.data.DataLoader(sc_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)", "_____no_output_____" ], [ "t0 = time.time()\nfor epoch in range(130, 260):\n running_loss = 0.0\n for i, data in enumerate(train_dl, 0):\n model.train()\n inputs, labels = data[0].cuda(), data[1].cuda()\n optimizer.zero_grad()\n\n outputs = model(inputs)\n loss = criterion(outputs, labels)\n if np.isnan(loss.item()): \n raise Exception(f'!!! nan encountered in loss !!! 
epoch: {epoch}\\n')\n loss.backward()\n optimizer.step()\n scheduler.step()\n\n running_loss += loss.item()\n\n\n if epoch % 5 == 4:\n model.eval();\n preds = []\n targs = []\n\n for num_specs in val_items_binned.keys():\n valid_ds = MelspecShortishValidatioDataset(val_items_binned[num_specs], classes)\n valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=2*BS, num_workers=NUM_WORKERS, pin_memory=True)\n\n with torch.no_grad():\n for data in valid_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n preds.append(outputs.cpu().detach())\n targs.append(labels.cpu().detach())\n\n preds = torch.cat(preds)\n targs = torch.cat(targs)\n\n f1s = []\n ts = []\n for t in np.linspace(0.4, 1, 61):\n f1s.append(f1_score(preds.sigmoid() > t, targs, average='micro'))\n ts.append(t)\n \n sc_preds = []\n sc_targs = []\n with torch.no_grad():\n for data in sc_dl:\n inputs, labels = data[0].cuda(), data[1].cuda()\n outputs = model(inputs)\n sc_preds.append(outputs.cpu().detach())\n sc_targs.append(labels.cpu().detach())\n\n sc_preds = torch.cat(sc_preds)\n sc_targs = torch.cat(sc_targs)\n sc_f1 = f1_score(sc_preds.sigmoid() > 0.5, sc_targs, average='micro')\n \n sc_f1s = []\n sc_ts = []\n for t in np.linspace(0.4, 1, 61):\n sc_f1s.append(f1_score(sc_preds.sigmoid() > t, sc_targs, average='micro'))\n sc_ts.append(t)\n \n f1 = f1_score(preds.sigmoid() > 0.5, targs, average='micro')\n print(f'[{epoch + 1}, {(time.time() - t0)/60:.1f}] loss: {running_loss / (len(train_dl)-1):.3f}, f1: {max(f1s):.3f}, sc_f1: {max(sc_f1s):.3f}')\n running_loss = 0.0\n\n torch.save(model.state_dict(), f'models/{epoch+1}_lmepool_simple_minmax_{round(f1, 2)}.pth')", "[135, 16.9] loss: 0.001, f1: 0.725, sc_f1: 0.035\n[140, 35.0] loss: 0.001, f1: 0.722, sc_f1: 0.026\n[145, 53.7] loss: 0.001, f1: 0.724, sc_f1: 0.018\n[150, 73.5] loss: 0.001, f1: 0.726, sc_f1: 0.011\n[160, 110.4] loss: 0.001, f1: 0.729, sc_f1: 0.020\n[165, 129.4] loss: 0.001, f1: 0.729, sc_f1: 0.011\n[170, 147.4] loss: 0.001, f1: 0.726, sc_f1: 0.021\n[175, 167.7] loss: 0.001, f1: 0.731, sc_f1: 0.022\n[180, 185.0] loss: 0.001, f1: 0.726, sc_f1: 0.032\n[185, 203.2] loss: 0.001, f1: 0.722, sc_f1: 0.023\n[190, 221.2] loss: 0.001, f1: 0.730, sc_f1: 0.011\n[195, 239.5] loss: 0.001, f1: 0.723, sc_f1: 0.011\n[200, 259.9] loss: 0.001, f1: 0.719, sc_f1: 0.012\n[205, 278.3] loss: 0.001, f1: 0.725, sc_f1: 0.023\n[210, 298.2] loss: 0.001, f1: 0.736, sc_f1: 0.021\n[215, 317.0] loss: 0.001, f1: 0.728, sc_f1: 0.026\n[220, 337.0] loss: 0.001, f1: 0.732, sc_f1: 0.012\n[225, 356.3] loss: 0.001, f1: 0.727, sc_f1: 0.012\n[230, 376.2] loss: 0.001, f1: 0.734, sc_f1: 0.017\n[235, 395.0] loss: 0.001, f1: 0.728, sc_f1: 0.012\n[240, 413.7] loss: 0.001, f1: 0.728, sc_f1: 0.012\n[245, 432.7] loss: 0.001, f1: 0.732, sc_f1: 0.020\n[250, 451.6] loss: 0.001, f1: 0.729, sc_f1: 0.022\n[255, 470.0] loss: 0.001, f1: 0.725, sc_f1: 0.011\n[260, 490.0] loss: 0.001, f1: 0.722, sc_f1: 0.010\n" ], [ "from IPython.lib.display import FileLink", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7dca0ac563277f748255533c614447e3effeacc
24,365
ipynb
Jupyter Notebook
EDA.ipynb
mattymecks/nlchp-mapping-project
a29bbbd94f4028b6a7be8f7eafcbf44e6109fa8e
[ "Apache-2.0" ]
2
2018-09-03T15:09:11.000Z
2019-03-25T05:48:02.000Z
EDA.ipynb
mattymecks/nlchp-mapping-project
a29bbbd94f4028b6a7be8f7eafcbf44e6109fa8e
[ "Apache-2.0" ]
null
null
null
EDA.ipynb
mattymecks/nlchp-mapping-project
a29bbbd94f4028b6a7be8f7eafcbf44e6109fa8e
[ "Apache-2.0" ]
1
2018-10-08T16:40:41.000Z
2018-10-08T16:40:41.000Z
32.443409
399
0.400575
[ [ [ "import plotly\nimport pandas as pd\nimport numpy as np\n\nplotly.offline.init_notebook_mode(connected=True)", "_____no_output_____" ], [ "df = pd.read_csv('city_info_geocodio_2.csv')\ndf.shape\ndf.head(3)", "_____no_output_____" ], [ "# Geocodio told me that five values weren't succesfful mapped, but for the sake of best practice, I checked anyway\n# I suspect that these either weren't named correctly or for some reason weren't in geocodio's database \n# Because there are only five missing values, I'll fix this manually, although that obviously isn't a scalable \n# solution. \n\ndf.loc[df['Latitude'] == 0]", "_____no_output_____" ] ], [ [ "Geocodio tells you the likely accuracy of the lat/long it's provided with an accuracy score. Some of the scores do not have high accuracy scores, which is worth mentioning. For now, we'll see if any mapping errors actually occur. If they do, I'll go back and engineer a solution.", "_____no_output_____" ] ], [ [ "#We have 239 cities, so we want to make sure there's 239 rows. \ndf.shape", "_____no_output_____" ] ], [ [ "Because I'm only missing five values, I'm going to manually find their lat/lon coordinates and then add them in. I'll do this here because I want them mapped in all future data. ", "_____no_output_____" ] ], [ [ "# Manual updating of relevent values \n\nnew_data = [(40.0874759, -108.8048292, 'CO3', 'CO'), (39.446649, -106.03757, 'CO2', 'CO'), \n (28.022243, -81.732857, 'FL9', 'FL'), (39.9689532, -82.9376804, 'OH3', 'OH'), \n (39.103119, -84.512016, 'OH1', 'OH')]\n\nindexes = (63, 85, 98, 172, 191)\n\nfor i in range(5): \n df.loc[indexes[i],'Latitude'] = new_data[i][0]\n df.loc[indexes[i], 'Longitude'] = new_data[i][1]\n df.loc[indexes[i], 'Congressional District'] = new_data[i][2]\n df.loc[indexes[i], 'State.1'] = new_data[i][3]\n", "_____no_output_____" ], [ "# Checking to see we've fixed all lat/lon problems \n\ndf.loc[df['Latitude'] == 0]", "_____no_output_____" ], [ "# Dropping unnecessary columns added by Geocodio for the sake of data tidiness \n\ndf = df.drop(columns = ['Number', 'Street', 'City.1', 'Source'])\ndf.head(1)", "_____no_output_____" ] ], [ [ "Much better. The campaign wants to keep track of the status of the anti-panhandling statutes/ordinances in each city, and they want to be able to update those values to reflect varying degrees of success as the campaign goes on. I'm going to create a 'status' value for each city set it's default to \"active.\" Then I'm going to map \"status text\" and \"marker color\" right onto the values.", "_____no_output_____" ] ], [ [ "# Adding in a \"status\" column\n\ndf['status'] = 0\n\n\n# Creating a conditional column explaining ordinance status\n\nd_text = {0: 'Ordinance Active - With No Response', \n 1: 'Ordinance Active - With Response Indicating No Immediate Repeal',\n 2: 'Ordinance Active - With Committment To Review', \n 3: 'Ordinance Halted - With Committment to Review',\n 4: 'Ordinance Repealed'}\n\ndf['statusText'] = df['status'].map(d_text)\n\n# Setting point color conditionally based upon status \n\nd_color = {0: 'rgb(255, 0, 0)',\n 1: 'rgb(255, 192, 203)',\n 2: 'rgb(255, 165, 0)', \n 3: 'rgb(255, 255, 0)', \n 4: 'rgb(127, 255, 0)'}\n\ndf['color'] = df['status'].map(d_color)", "_____no_output_____" ], [ "# print(df.dtypes)\ndf.head(3)", "_____no_output_____" ] ], [ [ "Now I'm going to persist my changes. I'll make updates periodically, but I'll use a seperate notebook for that so that I don't have to continously repeat this project. 
", "_____no_output_____" ] ], [ [ "df.to_csv('cleaned_data.csv', mode ='w+')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e7dca58fa51431274935d5ff71e88567d862b4b8
271,047
ipynb
Jupyter Notebook
regressao_linear/Overfitting_Demo_Ridge.ipynb
luizcz/aprendizado_maquina
ab2afa8e599dca3e47b91413e1d5207c1c947923
[ "MIT" ]
null
null
null
regressao_linear/Overfitting_Demo_Ridge.ipynb
luizcz/aprendizado_maquina
ab2afa8e599dca3e47b91413e1d5207c1c947923
[ "MIT" ]
null
null
null
regressao_linear/Overfitting_Demo_Ridge.ipynb
luizcz/aprendizado_maquina
ab2afa8e599dca3e47b91413e1d5207c1c947923
[ "MIT" ]
null
null
null
265.732353
18,850
0.90871
[ [ [ "# Overfitting demo\n\n## Criando um conjunto de dados baseado em uma função senoidal ", "_____no_output_____" ] ], [ [ "import math\nimport random\nimport numpy as np\nimport pandas as pd\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.pipeline import make_pipeline \nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import Ridge\nfrom sklearn.linear_model import RidgeCV\nfrom sklearn.linear_model import Lasso\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "Vamos considerar um conjunto de dados sintéticos de 30 pontos amostrados de uma função senoidal $y = \\sin(4x)$:", "_____no_output_____" ] ], [ [ "def f(x):\n return np.sin(np.multiply(4,x))", "_____no_output_____" ] ], [ [ "Abaixo criamos valores aleatéorios para $x$ no intervalo [0,1)", "_____no_output_____" ] ], [ [ "random.seed(98103)\nn = 30 # quantidade de valores gerados\nx = np.array([random.random() for _ in range(n)]) #em cada iteração gera um valor aleatório entre 0 e 1\nx=np.sort(x) # ordena os valores em ordem crescente\n#transforma o array em uma matrix com uma n linhas e 1 coluna (vetor coluna)\nX = x[:,np.newaxis] ", "_____no_output_____" ] ], [ [ "Calcula $y$ como uma função de $x$. $y$ é chamada variável independente pois depende de $x$", "_____no_output_____" ] ], [ [ "Y = f(x)", "_____no_output_____" ] ], [ [ "Adiciona ruído Gaussiano aleatório à $y$", "_____no_output_____" ] ], [ [ "random.seed(1)\n#ruído é amostrado de uma distribuição normal com média 0 e desvio padrão 1/3\ne = np.array([random.gauss(0,1.0/3.0) for i in range(n)]) \nY = Y + e", "_____no_output_____" ] ], [ [ "### Funções auxiliares", "_____no_output_____" ], [ "Função para plotar os dados (scatter plot)", "_____no_output_____" ] ], [ [ "def plot_data(X,Y): \n plt.plot(X,Y,'k.')\n plt.xlabel('X')\n plt.ylabel('Y')\n plt.axis([0,1,-1.5,2])", "_____no_output_____" ] ], [ [ "Função para imprimir coeficientes", "_____no_output_____" ] ], [ [ "def print_coefficients(model): \n # Retorna o grau do polinômio\n deg = len(model.steps[1][1].coef_)-1\n # Obtém os parâmetros estimados\n w = list(model.steps[1][1].coef_) #model.steps é usado pois o modelo é calculado usando make_pipile do scikit learn\n # Numpy tem uma função para imprimir o polinômio mas os parâmetros precisam estar na ordem inversa\n print ('Polinômio estimado para grau ' + str(deg) + ':')\n w.reverse()\n print (np.poly1d(w)+model.steps[1][1].intercept_)", "_____no_output_____" ] ], [ [ "Função para calcular uma regressão polinomial para qualquer grau usando scikit learn.", "_____no_output_____" ] ], [ [ "def polynomial_regression(X,Y,deg):\n model = make_pipeline(PolynomialFeatures(deg),LinearRegression()) \n model.fit(X,Y)\n return model", "_____no_output_____" ] ], [ [ "Função para plotar o modelo por meio de suas predições", "_____no_output_____" ] ], [ [ "def print_poly_predictions(X,Y, model):\n plot_data(X,Y)\n x_plot = np.array([i/200.0 for i in range(200)])\n X_plot = x_plot[:,np.newaxis]\n y_pred = model.predict(X_plot)\n plt.plot(x_plot,y_pred,'g-')\n plt.axis([0,1,-1.5,2])", "_____no_output_____" ], [ "def plot_residuals_vs_fit(X,Y, model):\n# plot_data(X,Y)\n# x_plot = np.array([i/200.0 for i in range(200)])\n# X_plot = x_plot[:,np.newaxis]\n y_pred = model.predict(X)\n res = Y - y_pred\n plt.plot(y_pred,res,'k.',color='blue',)\n plt.axhline(y=0., color='r', linestyle='-')\n plt.xlabel(\"predictions\")\n plt.ylabel(\"residuals\")", "_____no_output_____" ] ], [ [ "### Função geradora", 
"_____no_output_____" ] ], [ [ "plot_data(X,Y)\nx_plot = np.array([i/200.0 for i in range(200)])\ny_plot = f(x_plot)\nplt.plot(x_plot,y_plot,color='cornflowerblue',linewidth=2)", "_____no_output_____" ] ], [ [ "## Regressão polinomial de diferentes graus", "_____no_output_____" ] ], [ [ "model = polynomial_regression(X,Y,16)\nprint_poly_predictions(X,Y,model) ", "_____no_output_____" ] ], [ [ "Mostrando o modelo e coeficientes.", "_____no_output_____" ] ], [ [ "print_coefficients(model)", "Polinômio estimado para grau 16:\n 16 15 14 13\n3.337e+08 x - 2.226e+09 x + 6.62e+09 x - 1.156e+10 x \n 12 11 10 9 8\n + 1.309e+10 x - 1e+10 x + 5.14e+09 x - 1.657e+09 x + 2.258e+08 x\n 7 6 5 4 3\n + 6.694e+07 x - 4.734e+07 x + 1.393e+07 x - 2.548e+06 x + 3.018e+05 x\n 2\n - 2.188e+04 x + 839.4 x - 12.01\n" ] ], [ [ "# Regressão Ridge", "_____no_output_____" ], [ "A regressão ridge se propõe a evitar o overfitting adicionando um custo ao RSS (dos mínimos quadrados) que depende da norma L2 dos coeficientes $\\|w\\|$ (ou seja da magnitude dos coeficientes). O resultado é a penalização de ajustes com coeficientes muito grandes. A força dessa penalidade é controlada por um parâmetro lambda (aqui chamado \"L2_penalty\").", "_____no_output_____" ], [ "Função para estimar a regressão ridge para qualquer grau de polinômio:", "_____no_output_____" ] ], [ [ "def polynomial_ridge_regression(X,Y, deg, l2_penalty):\n model = make_pipeline(PolynomialFeatures(deg),Ridge(alpha=l2_penalty)) \n model.fit(X,Y)\n return model", "_____no_output_____" ] ], [ [ "## Ridge com grau 16 usando uma penalidade *muito* pequena", "_____no_output_____" ] ], [ [ "model = polynomial_ridge_regression(X,Y,deg=16,l2_penalty=1e-14)\nprint_coefficients(model)", "Polinômio estimado para grau 16:\n 16 15 14 13\n-3.328e+04 x + 1.583e+05 x - 1.271e+05 x - 1.734e+05 x \n 12 11 10 9\n + 1.037e+05 x + 2.557e+05 x - 3.877e+04 x - 3.178e+05 x\n 8 7 6 5 4\n + 3.638e+04 x + 3.677e+05 x - 3.44e+05 x + 1.359e+05 x - 2.422e+04 x\n 3 2\n + 719.1 x + 374.5 x - 48.09 x + 1.961\n" ], [ "print_poly_predictions(X,Y,model) ", "_____no_output_____" ] ], [ [ "## Ridge com grau 16 usando uma penalidade *muito* grande", "_____no_output_____" ] ], [ [ "model = polynomial_ridge_regression(X,Y, deg=16, l2_penalty=100)\nprint_coefficients(model)", "Polinômio estimado para grau 16:\n 16 15 14 13 12\n-0.007084 x - 0.00789 x - 0.008794 x - 0.009809 x - 0.01095 x \n 11 10 9 8 7\n - 0.01222 x - 0.01364 x - 0.01521 x - 0.01694 x - 0.01879 x\n 6 5 4 3 2\n - 0.0207 x - 0.02253 x - 0.02397 x - 0.02439 x - 0.02253 x - 0.01594 x + 0.4948\n" ], [ "print_poly_predictions(X,Y,model) ", "_____no_output_____" ] ], [ [ "## Sequência de ajustes para uma sequência crescente de valores de lambda", "_____no_output_____" ] ], [ [ "for l2_penalty in [1e-10, 1e-8, 1e-6, 1e-3, 1, 1e1, 1e2]:\n model = polynomial_ridge_regression(X,Y, deg=16, l2_penalty=l2_penalty)\n print('lambda = %.2e' % l2_penalty)\n print_coefficients(model)\n print('\\n')\n plt.figure()\n print_poly_predictions(X,Y,model)\n plt.title('Ridge, lambda = %.2e' % l2_penalty)", "lambda = 1.00e-10\nPolinômio estimado para grau 16:\n 16 15 14 13 12 11 10\n7567 x - 7803 x - 6900 x + 714.5 x + 6541 x + 5802 x - 498.1 x \n 9 8 7 6 5 4 3\n - 6056 x - 4252 x + 3439 x + 4893 x - 4281 x + 769.9 x + 100.6 x\n 2\n - 11.39 x - 4.716 x + 0.7859\n\n\nlambda = 1.00e-08\nPolinômio estimado para grau 16:\n 16 15 14 13 12 11\n352.8 x - 246.4 x - 338.4 x - 129.4 x + 148.9 x + 296 x \n 10 9 8 7 6 5\n + 213.6 x - 38.58 x - 254.8 x - 218.5 x + 62.06 x 
+ 244.8 x\n 4 3 2\n + 36.66 x - 223.2 x + 112.4 x - 17.86 x + 1.157\n\n\nlambda = 1.00e-06\nPolinômio estimado para grau 16:\n 16 15 14 13 12 11\n-11.68 x - 1.907 x + 7.873 x + 14.24 x + 14.19 x + 6.382 x \n 10 9 8 7 6 5 4\n - 7.42 x - 21.17 x - 25.09 x - 10.05 x + 21.99 x + 43.96 x + 6.021 x\n 3 2\n - 81.62 x + 52.95 x - 9.752 x + 0.8831\n\n\nlambda = 1.00e-03\nPolinômio estimado para grau 16:\n 16 15 14 13 12\n-0.1991 x - 0.03173 x + 0.1641 x + 0.3778 x + 0.5899 x \n 11 10 9 8 7 6\n + 0.7688 x + 0.8655 x + 0.8092 x + 0.5056 x - 0.1493 x - 1.21 x\n 5 4 3 2\n - 2.509 x - 3.256 x - 1.494 x + 4.364 x - 0.3416 x + 0.4424\n\n\nlambda = 1.00e+00\nPolinômio estimado para grau 16:\n 16 15 14 13 12\n-0.07194 x - 0.08353 x - 0.09695 x - 0.1125 x - 0.1303 x \n 11 10 9 8 7 6\n - 0.1507 x - 0.1737 x - 0.1991 x - 0.2262 x - 0.253 x - 0.2758 x\n 5 4 3 2\n - 0.2865 x - 0.2698 x - 0.197 x - 0.02395 x + 0.2536 x + 0.6222\n\n\nlambda = 1.00e+01\nPolinômio estimado para grau 16:\n 16 15 14 13 12\n-0.03822 x - 0.04265 x - 0.04763 x - 0.05321 x - 0.05946 x \n 11 10 9 8 7\n - 0.06643 x - 0.07416 x - 0.08262 x - 0.09173 x - 0.1012 x\n 6 5 4 3 2\n - 0.1103 x - 0.1179 x - 0.1214 x - 0.1158 x - 0.09251 x - 0.0412 x + 0.6399\n\n\nlambda = 1.00e+02\nPolinômio estimado para grau 16:\n 16 15 14 13 12\n-0.007084 x - 0.00789 x - 0.008794 x - 0.009809 x - 0.01095 x \n 11 10 9 8 7\n - 0.01222 x - 0.01364 x - 0.01521 x - 0.01694 x - 0.01879 x\n 6 5 4 3 2\n - 0.0207 x - 0.02253 x - 0.02397 x - 0.02439 x - 0.02253 x - 0.01594 x + 0.4948\n\n\n" ] ], [ [ "## Usando validação cruzada para encontrar o melhor lembda para Regressão Ridge", "_____no_output_____" ], [ "A função abaixo calcula os rmses (root mean squared error) para um certo modelo considerando todos os k folds (parâmetro cv na função cross_val_score do scikit learn).", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\ndef rmse_cv(model):\n rmse = np.sqrt(-cross_val_score(model,X,Y,scoring=\"neg_mean_squared_error\",cv=10))\n return (rmse)", "_____no_output_____" ] ], [ [ "Cria um modelo de regressão ridge", "_____no_output_____" ] ], [ [ "model_ridge = Ridge()", "_____no_output_____" ] ], [ [ "Plota resultados (médias de rmse) para cada valor de alpha (ou lambda) ", "_____no_output_____" ] ], [ [ "l2_penalties = [0.001,0.01,0.1,0.3,0.5,1,3,5,10,15,20,40,60,80,100]\ncv_ridge = [rmse_cv(Ridge(alpha=l2_penalty)).mean() \n for l2_penalty in l2_penalties]\ncv_ridge = pd.Series(cv_ridge,index=l2_penalties)\ncv_ridge.plot(title=\"Lambda vs Erro de Validação\")\nplt.xlabel(\"l2_penalty\")\nplt.ylabel(\"rmse\")", "_____no_output_____" ], [ "best_l2_penalty=cv_ridge.argmin()\nbest_rmse = cv_ridge.min()", "/Users/leandro/anaconda/lib/python3.6/site-packages/ipykernel/__main__.py:1: FutureWarning: 'argmin' is deprecated, use 'idxmin' instead. 
The behavior of 'argmin'\nwill be corrected to return the positional minimum in the future.\nUse 'series.values.argmin' to get the position of the minimum now.\n if __name__ == '__main__':\n" ], [ "print (best_l2_penalty, best_rmse) #melhor valor de (alpha,rmse) encontrado", "5.0 0.47623088652\n" ], [ "model = polynomial_ridge_regression(X,Y, deg=16, l2_penalty=best_l2_penalty)\nprint_coefficients(model)", "Polinômio estimado para grau 16:\n 16 15 14 13 12\n-0.05261 x - 0.05888 x - 0.06594 x - 0.07387 x - 0.08275 x \n 11 10 9 8 7 6\n - 0.09265 x - 0.1036 x - 0.1155 x - 0.1282 x - 0.141 x - 0.1529 x\n 5 4 3 2\n - 0.1613 x - 0.1618 x - 0.1456 x - 0.09851 x - 0.009176 x + 0.6715\n" ], [ "print_poly_predictions(X,Y,model)", "_____no_output_____" ] ], [ [ "# Regressão Lasso", "_____no_output_____" ], [ "A regressão Lasso, ao mesmo tempo, encolhe a magnitude dos coeficientes para evitar o overfitting e realiza implicitamente seleção de característcas igualando alguns atributos a zero (para lambdas, aqui chamados \"L1_penalty\", suficientemente grandes). Em particular, o Lasso adiciona ao RSS o custo $\\|w\\|$.", "_____no_output_____" ], [ "Função que estima a regressão polinomial de qualquer grau com a regressão Lasso.", "_____no_output_____" ] ], [ [ "def polynomial_lasso_regression(X, Y, deg, l1_penalty):\n model = make_pipeline(PolynomialFeatures(deg),Lasso(alpha=l1_penalty,max_iter=10000)) \n# X = data['X'][:,np.newaxis] #transformando em matrix para LinearRegression\n model.fit(X,Y)\n return model", "_____no_output_____" ] ], [ [ "## Explore a solução lasso solution como uma função de diferentes fatores de penalidade", "_____no_output_____" ], [ "Nos referimos ao fator de penalidade do lasso como \"l1_penalty\"", "_____no_output_____" ] ], [ [ "for l1_penalty in [0.0001, 0.001, 0.01, 0.1, 10]:\n model = polynomial_lasso_regression(X, Y,deg=16, l1_penalty=l1_penalty)\n print ('l1_penalty = %e' % l1_penalty)\n w = list(model.steps[1][1].coef_)\n print ('número de não zeros = %d' % np.count_nonzero(w))\n print_coefficients(model)\n print ('\\n')\n plt.figure()\n print_poly_predictions(X,Y,model)\n #plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % l1_penalty, np.count_nonzero(w))", "l1_penalty = 1.000000e-04\nnúmero de não zeros = 5\nPolinômio estimado para grau 16:\n 11 10 4 2\n1.626 x + 1.074 x - 7.667 x + 4.545 x - 0.4504 x + 0.4478\n\n\nl1_penalty = 1.000000e-03\nnúmero de não zeros = 3\nPolinômio estimado para grau 16:\n 5 4\n-0.181 x - 2.886 x + 1.373 x + 0.3354\n\n\nl1_penalty = 1.000000e-02\nnúmero de não zeros = 2\nPolinômio estimado para grau 16:\n 5\n-1.96 x + 0.2618 x + 0.628\n\n\nl1_penalty = 1.000000e-01\nnúmero de não zeros = 0\nPolinômio estimado para grau 16:\n \n0.4527\n\n\nl1_penalty = 1.000000e+01\nnúmero de não zeros = 0\nPolinômio estimado para grau 16:\n \n0.4527\n\n\n" ] ], [ [ "Esse notebook foi inspirado nas aulas da especialização em Machine Learning da Universidade de Washington disponível no Coursera.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
e7dcb9f956dd459c1805ec6c9d906ced40121967
191,287
ipynb
Jupyter Notebook
codes/gender/BTC.ipynb
yangzhou6666/BiasHeal
7fa060047c40e0cb569ecb42c4c2f597b62d62da
[ "Apache-2.0" ]
null
null
null
codes/gender/BTC.ipynb
yangzhou6666/BiasHeal
7fa060047c40e0cb569ecb42c4c2f597b62d62da
[ "Apache-2.0" ]
null
null
null
codes/gender/BTC.ipynb
yangzhou6666/BiasHeal
7fa060047c40e0cb569ecb42c4c2f597b62d62da
[ "Apache-2.0" ]
1
2021-12-22T11:02:43.000Z
2021-12-22T11:02:43.000Z
39.743819
2,005
0.40055
[ [ [ "# BTC - Gender", "_____no_output_____" ], [ "### BiasFinder", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport math\nimport time", "_____no_output_____" ], [ "base_dir = \"../../data/biasfinder/gender/\"\ndf = pd.read_csv(base_dir + \"test.csv\", header=None, sep=\"\\t\", names=[\"label\", \"mutant\", \"template\", \"original\", \"gender\", \"template_id\"])\ndf.drop_duplicates()", "_____no_output_____" ], [ "def read_txt(fpath):\n pred = []\n file = open(fpath)\n lines = file.readlines()\n for l in lines :\n pred.append(int(l))\n file.close()\n \n return pred", "_____no_output_____" ], [ "output_dir = \"biasfinder/gender\"\n\nresult_dir = \"../../result/\" + output_dir + \"/\"\n\npath = result_dir + \"results_data.txt\"\n\npred = read_txt(path)\n\nprint(len(pred))", "156676\n" ], [ "df[\"prediction\"] = pred", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "### Use Groupby to Group the text by Template", "_____no_output_____" ] ], [ [ "gb = df.groupby(\"template_id\")", "_____no_output_____" ], [ "gb.count()", "_____no_output_____" ], [ "len(gb.size())", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "id = 0\ndf.iloc[id][\"mutant\"]", "_____no_output_____" ], [ "df.iloc[id][\"original\"]", "_____no_output_____" ], [ "df.iloc[id][\"template\"]", "_____no_output_____" ] ], [ [ "### Get DF template only", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "dft = df.iloc[:,[2,3,5]]\ndft = dft.drop_duplicates()\ndft", "_____no_output_____" ], [ "# ## template\ndft = dft.sort_values(by=[\"template_id\"])\ndft = dft.reset_index(drop=True)\ndft", "_____no_output_____" ], [ "## mutant\ndf = df.reset_index(drop=True)", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "dft", "_____no_output_____" ] ], [ [ "## Get Number of Discordant Pairs for Each Template\n\nThere is a memory limitation that make us can't directly produce +- 240M pairs. Fortunately, the number of discordant pairs for each template can be calculate theoritically without crossing th data to get +- 240M pairs. 
This will solve the memory issue.\n\nFor each template, we will give an example of the male mutant and female mutant for user study", "_____no_output_____" ] ], [ [ "gb = df.groupby(\"template_id\")\ngb.count()", "_____no_output_____" ] ], [ [ "### Data crossing", "_____no_output_____" ] ], [ [ "import time\n\nstart = time.time()\n\nidentifier = \"gender\"\n\nmutant_example = []\nmutant_prediction_stat = []\nkey = []\nfor i in range(len(gb.size())) :\n# for i in range(10) :\n data = gb.get_group(i)\n dc = data.groupby(identifier)\n me = {} # mutant example\n mp = {} # mutant prediction\n key = []\n for k, v in dict(iter(dc)).items() :\n key.append(k)\n is_first_instance = True\n pos_counter = 0 # positive counter\n neg_counter = 0 # negative counter\n for m, p in zip(v[\"mutant\"].values, v[\"prediction\"].values) :\n if is_first_instance :\n me[k] = m\n is_first_instance = False\n if int(p) == 1 :\n pos_counter += 1\n else :\n neg_counter += 1\n mp[k] = {\"pos\": pos_counter, \"neg\" : neg_counter}\n \n mutant_example.append(me)\n mutant_prediction_stat.append(mp)\n \nend = time.time()\nprint(\"Execution time: \", end-start)", "Execution time: 2.5814321041107178\n" ], [ "dft[\"mutant_example\"] = mutant_example\ndft[\"mutant_prediction_stat\"] = mutant_prediction_stat\ndft", "_____no_output_____" ], [ "key", "_____no_output_____" ], [ "btcs = []\npairs = []\nfor mp in dft[\"mutant_prediction_stat\"].values :\n if len(mp) > 0 :\n btc = 0\n pair = 0\n already_processed = []\n for k1 in key :\n for k2 in key :\n if k1 != k2 :\n k = k1 + \"-\" + k2\n if k1 > k2 :\n k = k2 + \"-\" + k1\n if k not in already_processed :\n already_processed.append(k)\n\n btc += ((mp[k1][\"pos\"] * mp[k2][\"neg\"]) + (mp[k1][\"neg\"] * mp[k2][\"pos\"]))\n pair += (mp[k1][\"pos\"] + mp[k1][\"neg\"]) * (mp[k2][\"pos\"] + mp[k2][\"neg\"])\n\n# double_counting_divider = len(key) * (len(key)-1)\n# dp.append(int(_dp/double_counting_divider)) # we must divide the number with the number of key to reduce the double counting\n btcs.append(btc)\n pairs.append(pair)\n else :\n btcs.append(0)\n pairs.append(0)", "_____no_output_____" ], [ "dft[\"btc\"] = btcs\ndft[\"possible_pair\"] = pairs\ndft", "_____no_output_____" ] ], [ [ "### Number of Bias-Uncovering Test Case", "_____no_output_____" ] ], [ [ "int(dft[\"btc\"].sum())", "_____no_output_____" ] ], [ [ "### BTC Rate", "_____no_output_____" ] ], [ [ "dft[\"btc\"].sum() / dft[\"possible_pair\"].sum()", "_____no_output_____" ] ], [ [ "### Get Data that Have number of BTC more than one", "_____no_output_____" ] ], [ [ "d = dft[dft[\"btc\"] > 0]\nd", "_____no_output_____" ] ], [ [ "### Sort Data based on the number of BTC", "_____no_output_____" ] ], [ [ "d = d.sort_values([\"btc\", \"template\"], ascending=False)\nd = d.reset_index(drop=True)\nd", "_____no_output_____" ] ], [ [ "### Get Data BTC for train and test", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "template_that_produce_btc = d[\"template_id\"].tolist()\n# template_that_produce_btc", "_____no_output_____" ], [ "start = time.time()\n\nmutant_text_1 = []\nmutant_text_2 = []\nprediction_1 = []\nprediction_2 = []\nidentifier_1 = []\nidentifier_2 = []\ntemplate = []\noriginal = []\nlabel = []\nfor i in template_that_produce_btc: # only processing from template that produce BTC\n data = gb.get_group(i)\n dc = data.groupby(identifier)\n already_processed = []\n for k1, v1 in dict(iter(dc)).items() :\n for k2, v2 in dict(iter(dc)).items() :\n if k1 != k2 :\n key = k1 + \"-\" + k2\n if k1 > k2 :\n key 
= k2 + \"-\" + k1\n if key not in already_processed :\n already_processed.append(key)\n for m_1, p_1, i_1, t, o, l in zip(v1[\"mutant\"].values, v1[\"prediction\"].values, v1[identifier].values, v1[\"template\"].values, v1[\"original\"].values, v1[\"label\"].values) :\n for m_2, p_2, i_2 in zip(v2[\"mutant\"].values, v2[\"prediction\"].values, v2[identifier].values) :\n if p_1 != p_2 : # only add discordant pairs\n mutant_text_1.append(m_1)\n prediction_1.append(p_1)\n identifier_1.append(i_1)\n mutant_text_2.append(m_2)\n prediction_2.append(p_2)\n identifier_2.append(i_2)\n template.append(t)\n label.append(l)\n original.append(o)\n\nend = time.time()\nprint(\"Execution time: \", end-start)", "Execution time: 0.1271529197692871\n" ], [ "btc = pd.DataFrame(data={\"mutant_1\" : mutant_text_1, \"mutant_2\" : mutant_text_2, \"prediction_1\": prediction_1, \"prediction_2\" : prediction_2, \"identifier_1\": identifier_1, \"identifier_2\" : identifier_2, \"template\": template, \"original\": original, \"label\": label})\n\nbtc", "_____no_output_____" ], [ "btc = btc.sample(frac=1, random_state=123)\n\ntexts = []\ntemplates = []\nlabels = []\noriginal = []\nfor index, rows in btc.iterrows():\n original.append(rows[\"original\"])\n texts.append(rows[\"original\"])\n texts.append(rows[\"mutant_1\"])\n texts.append(rows[\"mutant_2\"])\n templates.append(rows[\"template\"])\n labels.append(rows[\"label\"])", "_____no_output_____" ], [ "# texts", "_____no_output_____" ], [ "user_study = pd.DataFrame(data={\"text\":texts, \"sentiment\": None, \"is_make_sense\": None, \"comment\": None})\ndf_template = pd.DataFrame(data={\"template\":templates})\ndf_ori = pd.DataFrame(data={\"label\" :label, \"original\": original})", "_____no_output_____" ], [ "# df_ori.drop_duplicates().to_csv(\"btc_original.csv\")", "_____no_output_____" ], [ "df_ori", "_____no_output_____" ], [ "user_study", "_____no_output_____" ], [ "df_template", "_____no_output_____" ], [ "user_study[:1200].to_csv(\"../../user_study/TSE/gender-unlabelled.csv\")\n# template.to_csv(\"template_gender.csv\")", "_____no_output_____" ], [ "import os\n\nbase_dir = \"../../data/btc/biasfinder/gender/\"\n\nif not os.path.exists(base_dir) :\n os.makedirs(base_dir)\n\nbtc.to_csv(base_dir + \"original.csv\", index=None)", "_____no_output_____" ], [ "m1 = btc.iloc[:,[-1,0]]\nm1 = m1.rename(columns={\"mutant_1\": \"text\"})\nm2 = btc.iloc[:,[-1,1]]\nm2 = m2.rename(columns={\"mutant_2\": \"text\"})\n\nm1", "_____no_output_____" ], [ "data = pd.concat([m1, m2])\ndata", "_____no_output_____" ], [ "# data[\"text\"] = data[\"text\"].astype(\"category\")\n# data[\"text_id\"] = data[\"text\"].cat.codes\n# data\nimport os\n\ndata_dir = base_dir + \"full/\"\n\nif not os.path.exists(data_dir) :\n os.makedirs(data_dir)\n\n# train = unique_data\ntrain = data.sample(frac=1, random_state=123)\ntrain.to_csv(data_dir + \"train.csv\", index=None, header=None, sep=\"\\t\")\ntest = data\ntest.to_csv(data_dir+ \"test.csv\", index=None, header=None, sep=\"\\t\")", "_____no_output_____" ], [ "unique_data = data.drop_duplicates().reset_index(drop=True)\nunique_data", "_____no_output_____" ], [ "unique_data[unique_data[\"label\"] == 0]", "_____no_output_____" ], [ "import os\n\ndata_dir = base_dir + \"unique/\"\n\nif not os.path.exists(data_dir) :\n os.makedirs(data_dir)\n\n# train = unique_data\ntrain = unique_data.sample(frac=1, random_state=123)\ntrain.to_csv(data_dir + \"train.csv\", index=None, header=None, sep=\"\\t\")\ntest = unique_data\ntest.to_csv(data_dir+ \"test.csv\", 
index=None, header=None, sep=\"\\t\")", "_____no_output_____" ], [ "len(train)", "_____no_output_____" ] ], [ [ "#### Balanced Data for Training", "_____no_output_____" ] ], [ [ "df_0 = unique_data[unique_data[\"label\"] == 0]\ndf_1 = unique_data[unique_data[\"label\"] == 1]", "_____no_output_____" ], [ "print(len(df_0))\nprint(len(df_1))", "5438\n4880\n" ], [ "df_1 = df_1.sample(len(df_0), replace=True)\ndf_oversampled = pd.concat([df_0, df_1], axis=0)", "_____no_output_____" ], [ "df_oversampled", "_____no_output_____" ], [ "data_dir = base_dir + \"unique/balanced/\"\n\nif not os.path.exists(data_dir) :\n os.makedirs(data_dir)\n\n# train = unique_data\ntrain = df_oversampled.sample(frac=1, random_state=123)\ntrain.to_csv(data_dir + \"train.csv\", index=None, header=None, sep=\"\\t\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
e7dcc22c69677fc4c3b143b6d1cd8101e701ac4e
29,995
ipynb
Jupyter Notebook
Redfish/Current/iloREST/1-iloRestBasics.ipynb
donzef/JupyterNotebooks
76c69e6e0e21120f3b88e7c991be6e946bc156d5
[ "Apache-2.0" ]
1
2021-05-04T19:31:17.000Z
2021-05-04T19:31:17.000Z
Redfish/Current/iloREST/1-iloRestBasics.ipynb
donzef/JupyterNotebooks
76c69e6e0e21120f3b88e7c991be6e946bc156d5
[ "Apache-2.0" ]
null
null
null
Redfish/Current/iloREST/1-iloRestBasics.ipynb
donzef/JupyterNotebooks
76c69e6e0e21120f3b88e7c991be6e946bc156d5
[ "Apache-2.0" ]
null
null
null
34.877907
445
0.616336
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7dcc4e8b831dc78c38faa07d7b09f93c9c8ffa3
18,350
ipynb
Jupyter Notebook
notebooks/402-pose-estimation-webcam/402-pose-estimation.ipynb
scalers-ai/openvino_notebooks
935053a54281fe2131fb544c4c329623796b72c7
[ "Apache-2.0" ]
1
2022-03-06T21:57:09.000Z
2022-03-06T21:57:09.000Z
notebooks/402-pose-estimation-webcam/402-pose-estimation.ipynb
BrainAI-hub/openvino_notebooks
92cd454657f05fad0b8a57b8c7ec6a27d0ff2756
[ "Apache-2.0" ]
2
2022-02-07T01:36:04.000Z
2022-02-07T01:36:06.000Z
notebooks/402-pose-estimation-webcam/402-pose-estimation.ipynb
sky-dust-intelligence-bv/openvino_notebooks
bbedef2b7bfd907920b49cfde4fda3062e583535
[ "Apache-2.0" ]
null
null
null
35.49323
424
0.5503
[ [ [ "# Live Human Pose Estimation with OpenVINO\n\nThis notebook demonstrates live pose estimation with OpenVINO. We use the OpenPose model [human-pose-estimation-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/human-pose-estimation-0001) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). At the bottom of this notebook, you will see live inference results from your webcam. You can also upload a video file.\n\n> NOTE: _To use the webcam, you must run this Jupyter notebook on a computer with a webcam. If you run on a server, the webcam will not work. However, you can still do inference on a video in the final step._", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "import sys\nimport collections\nimport os\nimport time\n\nimport cv2\nimport numpy as np\nfrom IPython import display\nfrom numpy.lib.stride_tricks import as_strided\nfrom openvino import inference_engine as ie\n\nfrom decoder import OpenPoseDecoder\n\nsys.path.append(\"../utils\")\nimport notebook_utils as utils", "_____no_output_____" ] ], [ [ "## The model\n\n### Download the model\n\nWe use `omz_downloader`, which is a command line tool from the `openvino-dev` package. `omz_downloader` automatically creates a directory structure and downloads the selected model.\n\nIf you want to download another model, please change the model name and precision. *Note: This will require a different pose decoder*.", "_____no_output_____" ] ], [ [ "# directory where model will be downloaded\nbase_model_dir = \"model\"\n\n# model name as named in Open Model Zoo\nmodel_name = \"human-pose-estimation-0001\"\n# selected precision (FP32, FP16, FP16-INT8)\nprecision = \"FP16-INT8\"\n\nmodel_path = f\"model/intel/{model_name}/{precision}/{model_name}.xml\"\nmodel_weights_path = f\"model/intel/{model_name}/{precision}/{model_name}.bin\"\n\nif not os.path.exists(model_path):\n download_command = f\"omz_downloader \" \\\n f\"--name {model_name} \" \\\n f\"--precision {precision} \" \\\n f\"--output_dir {base_model_dir}\"\n ! $download_command", "_____no_output_____" ] ], [ [ "### Load the model\n\nDownloaded models are located in a fixed structure, which indicates vendor, model name and precision.\n\nOnly a few lines of code are required to run the model. First, we create an Inference Engine object. Then we read the network architecture and model weights from the .bin and .xml files to load onto the desired device.", "_____no_output_____" ] ], [ [ "# initialize inference engine\nie_core = ie.IECore()\n# read the network and corresponding weights from file\nnet = ie_core.read_network(model=model_path, weights=model_weights_path)\n# load the model on the CPU (you can use GPU or MYRIAD as well)\nexec_net = ie_core.load_network(net, \"CPU\")\n\n# get input and output names of nodes\ninput_key = list(exec_net.input_info)[0]\noutput_keys = list(exec_net.outputs.keys())\n\n# get input size\nheight, width = exec_net.input_info[input_key].tensor_desc.dims[2:]", "_____no_output_____" ] ], [ [ "Input key is the name of the input node and output keys contain names of output nodes of the network. In the case of the OpenPose Model, we have one input and two outputs: pafs and keypoints heatmap.", "_____no_output_____" ] ], [ [ "input_key, output_keys", "_____no_output_____" ] ], [ [ "## Processing\n\n### OpenPoseDecoder\n\nWe need a decoder to transform the raw results from the neural network into pose estimations. 
This magic happens inside Open Pose Decoder, which is provided in the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/common/python/openvino/model_zoo/model_api/models/open_pose.py) and compatible with the `human-pose-estimation-0001` model.\n\nIf you choose a model other than `human-pose-estimation-0001` you will need another decoder (e.g. AssociativeEmbeddingDecoder), which is available in the [demos section](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/common/python/openvino/model_zoo/model_api/models/hpe_associative_embedding.py) of Open Model Zoo.", "_____no_output_____" ] ], [ [ "decoder = OpenPoseDecoder()", "_____no_output_____" ] ], [ [ "### Process Results\n\nA bunch of useful functions to transform results into poses.\n\nFirst, we will pool the heatmap. Since pooling is not available in numpy, we use a simple method to do it directly with numpy. Then, we use non-maximum suppression to get the keypoints from the heatmap. After that, we decode poses using the decoder. Since the input image is bigger than the network outputs, we need to multiply all pose coordinates by a scaling factor.", "_____no_output_____" ] ], [ [ "# 2d pooling in numpy (from: htt11ps://stackoverflow.com/a/54966908/1624463)\ndef pool2d(A, kernel_size, stride, padding, pool_mode=\"max\"):\n \"\"\"\n 2D Pooling\n\n Parameters:\n A: input 2D array\n kernel_size: int, the size of the window\n stride: int, the stride of the window\n padding: int, implicit zero paddings on both sides of the input\n pool_mode: string, 'max' or 'avg'\n \"\"\"\n # Padding\n A = np.pad(A, padding, mode=\"constant\")\n\n # Window view of A\n output_shape = (\n (A.shape[0] - kernel_size) // stride + 1,\n (A.shape[1] - kernel_size) // stride + 1,\n )\n kernel_size = (kernel_size, kernel_size)\n A_w = as_strided(\n A,\n shape=output_shape + kernel_size,\n strides=(stride * A.strides[0], stride * A.strides[1]) + A.strides\n )\n A_w = A_w.reshape(-1, *kernel_size)\n\n # Return the result of pooling\n if pool_mode == \"max\":\n return A_w.max(axis=(1, 2)).reshape(output_shape)\n elif pool_mode == \"avg\":\n return A_w.mean(axis=(1, 2)).reshape(output_shape)\n\n\n# non maximum suppression\ndef heatmap_nms(heatmaps, pooled_heatmaps):\n return heatmaps * (heatmaps == pooled_heatmaps)\n\n\n# get poses from results\ndef process_results(img, results):\n pafs = results[output_keys[0]]\n heatmaps = results[output_keys[1]]\n\n # this processing comes from\n # https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/common/python/models/open_pose.py\n pooled_heatmaps = np.array(\n [[pool2d(h, kernel_size=3, stride=1, padding=1, pool_mode=\"max\") for h in heatmaps[0]]]\n )\n nms_heatmaps = heatmap_nms(heatmaps, pooled_heatmaps)\n\n # decode poses\n poses, scores = decoder(heatmaps, nms_heatmaps, pafs)\n output_shape = exec_net.outputs[output_keys[0]].shape\n output_scale = img.shape[1] / output_shape[3], img.shape[0] / output_shape[2]\n # multiply coordinates by scaling factor\n poses[:, :, :2] *= output_scale\n\n return poses, scores", "_____no_output_____" ] ], [ [ "### Draw Pose Overlays\n\nDraw pose overlays on the image to visualize estimated poses. Joints are drawn as circles and limbs are drawn as lines. 
The code is based on the [Human Pose Estimation Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/human_pose_estimation_demo/python) from Open Model Zoo.", "_____no_output_____" ] ], [ [ "colors = ((255, 0, 0), (255, 0, 255), (170, 0, 255), (255, 0, 85), (255, 0, 170), (85, 255, 0),\n (255, 170, 0), (0, 255, 0), (255, 255, 0), (0, 255, 85), (170, 255, 0), (0, 85, 255),\n (0, 255, 170), (0, 0, 255), (0, 255, 255), (85, 0, 255), (0, 170, 255))\n\ndefault_skeleton = ((15, 13), (13, 11), (16, 14), (14, 12), (11, 12), (5, 11), (6, 12), (5, 6), (5, 7),\n (6, 8), (7, 9), (8, 10), (1, 2), (0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6))\n\n\ndef draw_poses(img, poses, point_score_threshold, skeleton=default_skeleton):\n if poses.size == 0:\n return img\n\n img_limbs = np.copy(img)\n for pose in poses:\n points = pose[:, :2].astype(np.int32)\n points_scores = pose[:, 2]\n # Draw joints.\n for i, (p, v) in enumerate(zip(points, points_scores)):\n if v > point_score_threshold:\n cv2.circle(img, tuple(p), 1, colors[i], 2)\n # Draw limbs.\n for i, j in skeleton:\n if points_scores[i] > point_score_threshold and points_scores[j] > point_score_threshold:\n cv2.line(img_limbs, tuple(points[i]), tuple(points[j]), color=colors[j], thickness=4)\n cv2.addWeighted(img, 0.4, img_limbs, 0.6, 0, dst=img)\n return img", "_____no_output_____" ] ], [ [ "### Main Processing Function\n\nRun pose estimation on the specified source. Either a webcam or a video file.", "_____no_output_____" ] ], [ [ "# main processing function to run pose estimation\ndef run_pose_estimation(source=0, flip=False, use_popup=False, skip_first_frames=0):\n player = None\n try:\n # create video player to play with target fps\n player = utils.VideoPlayer(source, flip=flip, fps=30, skip_first_frames=skip_first_frames)\n # start capturing\n player.start()\n if use_popup:\n title = \"Press ESC to Exit\"\n cv2.namedWindow(title, cv2.WINDOW_GUI_NORMAL | cv2.WINDOW_AUTOSIZE)\n\n processing_times = collections.deque()\n while True:\n # grab the frame\n frame = player.next()\n if frame is None:\n print(\"Source ended\")\n break\n # if frame larger than full HD, reduce size to improve the performance\n scale = 1280 / max(frame.shape)\n if scale < 1:\n frame = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)\n\n # resize image and change dims to fit neural network input\n # (see https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/human-pose-estimation-0001)\n input_img = cv2.resize(frame, (width, height), interpolation=cv2.INTER_AREA)\n # create batch of images (size = 1)\n input_img = input_img.transpose(2, 0, 1)[np.newaxis, ...]\n\n # measure processing time\n start_time = time.time()\n # get results\n results = exec_net.infer(inputs={input_key: input_img})\n stop_time = time.time()\n # get poses from network results\n poses, scores = process_results(frame, results)\n\n # draw poses on a frame\n frame = draw_poses(frame, poses, 0.1)\n\n processing_times.append(stop_time - start_time)\n # use processing times from last 200 frames\n if len(processing_times) > 200:\n processing_times.popleft()\n\n _, f_width = frame.shape[:2]\n # mean processing time [ms]\n processing_time = np.mean(processing_times) * 1000\n fps = 1000 / processing_time\n cv2.putText(frame, f\"Inference time: {processing_time:.1f}ms ({fps:.1f} FPS)\", (20, 40),\n cv2.FONT_HERSHEY_COMPLEX, f_width / 1000, (0, 0, 255), 1, cv2.LINE_AA)\n\n # use this workaround if there is flickering\n if use_popup:\n cv2.imshow(title, 
frame)\n key = cv2.waitKey(1)\n # escape = 27\n if key == 27:\n break\n else:\n # encode numpy array to jpg\n _, encoded_img = cv2.imencode(\".jpg\", frame, params=[cv2.IMWRITE_JPEG_QUALITY, 90])\n # create IPython image\n i = display.Image(data=encoded_img)\n # display the image in this notebook\n display.clear_output(wait=True)\n display.display(i)\n # ctrl-c\n except KeyboardInterrupt:\n print(\"Interrupted\")\n # any different error\n except RuntimeError as e:\n print(e)\n finally:\n if player is not None:\n # stop capturing\n player.stop()\n if use_popup:\n cv2.destroyAllWindows()", "_____no_output_____" ] ], [ [ "## Run\n\n### Run Live Pose Estimation\n\nRun using a webcam as the video input. By default, the primary webcam is set with `source=0`. If you have multiple webcams, each one will be assigned a consecutive number starting at 0. Set `flip=True` when using a front-facing camera. Some web browsers, especially Mozilla Firefox, may cause flickering. If you experience flickering, set `use_popup=True`.\n\n*Note: To use this notebook with a webcam, you need to run the notebook on a computer with a webcam. If you run the notebook on a server (e.g. Binder), the webcam will not work.*\n\n*Note: Popup mode may not work if you run this notebook on a remote computer (e.g. Binder).*", "_____no_output_____" ] ], [ [ "run_pose_estimation(source=0, flip=True, use_popup=False)", "_____no_output_____" ] ], [ [ "### Run Pose Estimation on a Video File\n\nIf you don't have a webcam, you can still run this demo with a video file. Any format supported by OpenCV will work (see: https://docs.opencv.org/4.5.1/dd/d43/tutorial_py_video_display.html). You can skip first N frames to fast forward video.", "_____no_output_____" ] ], [ [ "video_file = \"https://github.com/intel-iot-devkit/sample-videos/blob/master/store-aisle-detection.mp4?raw=true\"\n\nrun_pose_estimation(video_file, flip=False, use_popup=False, skip_first_frames=500)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dccabec3773feccf3cea784102f55a778729d4
2,404
ipynb
Jupyter Notebook
notebooks/Untitled.ipynb
eleijonmarck/analytics-workflow-showcase
d848358225eed959c2aac76f9e53d98bcd2d279b
[ "MIT" ]
null
null
null
notebooks/Untitled.ipynb
eleijonmarck/analytics-workflow-showcase
d848358225eed959c2aac76f9e53d98bcd2d279b
[ "MIT" ]
null
null
null
notebooks/Untitled.ipynb
eleijonmarck/analytics-workflow-showcase
d848358225eed959c2aac76f9e53d98bcd2d279b
[ "MIT" ]
null
null
null
20.547009
97
0.479201
[ [ [ "# Awesome basics that you can't live without when using Scitkit-learn", "_____no_output_____" ] ], [ [ "import sklearn", "_____no_output_____" ] ], [ [ "All the methods within the scikit that you might want to explore and import when applicable", "_____no_output_____" ] ], [ [ "(sklearn.__dict__)['__all__']", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dcd1c0573973efbca42ec9d1e58d0efaf82ad8
412,366
ipynb
Jupyter Notebook
2017-French-presidential-elections.ipynb
AurelieDaviaud/2017-French-presidential-elections
1464cfb01586ddb2873a3330a84169d4ece7779f
[ "MIT" ]
null
null
null
2017-French-presidential-elections.ipynb
AurelieDaviaud/2017-French-presidential-elections
1464cfb01586ddb2873a3330a84169d4ece7779f
[ "MIT" ]
null
null
null
2017-French-presidential-elections.ipynb
AurelieDaviaud/2017-French-presidential-elections
1464cfb01586ddb2873a3330a84169d4ece7779f
[ "MIT" ]
null
null
null
535.54026
124,754
0.932916
[ [ [ "# 2017 French presidential elections", "_____no_output_____" ], [ "My aim was to highlight differences between Emmanuel Macron and Marine Le Pen, the two candidates who went to the second round of the 2017 French presidential elections.\n\nI have downloaded transcripts of the speeches that the two candidates performed from the 1st of January 2017 to the 1st of May 2017.\n\nIn total:<br>\n* Macron: 31 transcripts available out of 31 speeches\n* Le Pen: 25 transcrits available (transcripts: 21, subtitles: 4) out of 35 speeches.\n\nSources:\n* Macron: https://en-marche.fr/articles/discours\n* Le Pen: http://www.frontnational.com/categorie/discours/", "_____no_output_____" ], [ "![image](https://github.com/AurelieDaviaud/2017-French-presidential-elections/blob/master/LePen-Macron.png \"Le Pen vs Macron\")", "_____no_output_____" ], [ "## Create word clouds\n\nWe can create word clouds to visualize the main words used by each candidate.", "_____no_output_____" ] ], [ [ "import os\nimport numpy as np\nimport nltk\nfrom nltk.corpus import stopwords\nimport string\nimport re\nimport copy\nfrom string import digits", "_____no_output_____" ], [ "## Load data\nlistFiles = os.listdir(\"~/Presidentielles2017/Data\")", "_____no_output_____" ] ], [ [ "### Preprocess the speeches", "_____no_output_____" ], [ "First, have to preprocess the speeches to keep only important words. So we have to clean up irregularities (i.e. change to lowercase and remove punctuation) and to remove stop words.", "_____no_output_____" ], [ "We have to prepare a list of French stopwords (as comprehensive as possible, by combining several existing lists of stopwords).", "_____no_output_____" ] ], [ [ "## Prepare stop words\nstopw = open(\"stopwords-fr1.txt\", \"r\").read()\nmonths = [\"janvier\", \"février\", \"mars\", \"avril\", \"mai\", \"juin\", \"juillet\", \"août\", \"septembre\", \"octobre\", \"novembre\", \"décembre\"]\nn = re.compile('\\n')\nstopw = n.sub(' ', stopw)\nstopw = nltk.word_tokenize(stopw)\nstopw = stopw + stopwords.words('french') + months ", "_____no_output_____" ] ], [ [ "Some transcripts of Le Pen's speeches are actually subtitles. So, we also have to remove the timestamp and any backest that usually include sound effect. <br>\nMoreover, transcripts of Le Pen's and Macron's speeches do not have the same format. So we are going to use two different functions to process the documents.", "_____no_output_____" ], [ "#### Function to preprocess subtitles and speeches of Le Pen\n\n(part of the function as been adapted from http://sapir.psych.wisc.edu/wiki/index.php/Extracting_and_analyzing_subtitles)", "_____no_output_____" ] ], [ [ "def cleanLP(str, subtitle=False):\n\n timestamp = re.compile('^\\d+\\n?.*\\n?', re.MULTILINE) # finds line numbers and the line after them (which usually houses timestamps)\n brackets = re.compile('\\[[^]]*\\]\\n?|\\([^)]*\\)\\n?|<[^>]*>\\n?|\\{[^}]*\\}\\n?') # finds brackets and anything in between them (sound effects) \n opensubs = re.compile('.*subtitles.*\\n?|.*subs.*\\n?', re.IGNORECASE) # finds the opensubtitles signature \n urls = re.compile('www.*\\s\\n?|[^\\s]*\\. 
?com\\n?') # finds any urls \n r = re.compile('\\r') # gets rid of \\r\n n = re.compile('\\n') # finds newlines\n punctuation = re.compile(\"[^\\w\\s']\") # finds punctuation\n\n if subtitle:\n str = timestamp.sub('', str)\n str = brackets.sub('', str)\n str = opensubs.sub('', str)\n str = urls.sub('', str)\n str = str.lower() # change to lowercase\n str = r.sub('', str) # remove \\r\n str = n.sub(' ', str) # remove newlines\n str = punctuation.sub(' ', str) # remove punctuation\n str = str.replace(\"'\", \" \") # remove apostrophes\n remove_digits = str.maketrans('', '', digits) # remove digits\n str = str.translate(remove_digits)\n tokens = nltk.word_tokenize(str) # tokenize (i.e create a list of words)\n tokens = [w for w in tokens if not w in stopw]\n\n return tokens", "_____no_output_____" ] ], [ [ "#### Function to preprocess speeches of Macron", "_____no_output_____" ] ], [ [ "def cleanMac(str):\n\n brackets = re.compile('\\[[^]]*\\]\\n?|\\([^)]*\\)\\n?|<[^>]*>\\n?|\\{[^}]*\\}\\n?') # finds brackets and anything in between them (sound effects) \n opensubs = re.compile('.*str.*\\n?|.*subs.*\\n?', re.IGNORECASE) # finds the opensubtitles signature \n urls = re.compile('www.*\\s\\n?|[^\\s]*\\. ?com\\n?') # finds any urls \n r = re.compile('\\r') # finds rid of \\r\n n = re.compile('\\n') # finds newlines\n punctuation = re.compile(\"[^\\w\\s']\") # finds punctuation\n str = '\\n'.join(str.split('\\n')[9:]) # remove 9th first lines\n str = brackets.sub('', str)\n str = opensubs.sub('', str)\n str = urls.sub('', str)\n str = str.replace(\"Seul le prononcé fait foi. page \", \"\") # remove words included in the footer and header of the transcript\n str = str.replace(\"en-marche.fr\", \"\")\n str = str.replace(\"Discours d’Emmanuel Macron\", \"\")\n str = str.replace(\"Aller plus loin\", \"\")\n str = str.replace(\"Téléchargez la fiche avec les propositions >\", \"\")\n str = str.replace(\"bit.ly/fichesynthèse-santé \", \"\")\n str = str.replace(\"Le replay >\", \"\")\n str = str.replace(\"EnMarche/videos/\", \"\")\n str = str.replace(\"facebook.com\", \"\")\n str = str.replace(\"Suivez Emmanuel Macron \", \"\")\n str = str.replace(\"\\x0c\", \"\")\n str = str.lower() # change to lowercase\n str = r.sub('', str) # remove \\r\n str = n.sub(' ', str) # remove newlines\n str = punctuation.sub(' ', str) # remove punctuation\n str = str.replace(\"'\", \" \") # remove apostrophes\n remove_digits = str.maketrans('', '', digits) # remove digits\n str = str.translate(remove_digits)\n tokens = nltk.word_tokenize(str) # tokenize (i.e create a list of words)\n tokens = [w for w in tokens if not w in stopw]\n\n return tokens", "_____no_output_____" ] ], [ [ "#### Preprocess the speeches", "_____no_output_____" ] ], [ [ "tokenMacTot = []\ntokenLPTot = []\n\nfor file in listFiles:\n str = open(file, \"r\").read()\n\n if \"MACRON\" in file:\n tokenMac = cleanMac(str)\n tokenMacTot = tokenMacTot + tokenMac\n if \"Le Pen\" in file:\n if \"Subtitle\" in file: \n tokenLP = cleanLP(str, subtitle=True)\n tokenLPTot = tokenLPTot + tokenLP\n else:\n tokenLP = cleanLP(str, subtitle=False)\n tokenLPTot = tokenLPTot + tokenLP", "_____no_output_____" ] ], [ [ "### Store the tokens", "_____no_output_____" ] ], [ [ "tokens_mac_file = open('tokens_mac.txt', 'w')\nfor item in tokenMacTot:\n tokens_mac_file.write(\"%s\\n\" % item)\n\ntokens_LP_file = open('tokens_LP.txt', 'w')\nfor item in tokenLPTot:\n tokens_LP_file.write(\"%s\\n\" % item)", "_____no_output_____" ] ], [ [ "### Analyse the data", "_____no_output_____" ], 
[ "Let's see whether the number of words used by each candidate is similar.", "_____no_output_____" ] ], [ [ "# Macron\ntokenMacUni = set(tokenMacTot)\nlen(tokenMacUni)", "_____no_output_____" ], [ "# Le Pen\ntokenLPUni = set(tokenLPTot)\nlen(tokenLPUni)", "_____no_output_____" ] ], [ [ "Macron seems to have a vocabulary that is a bit less varied than Le Pen.", "_____no_output_____" ], [ "### Count words", "_____no_output_____" ], [ "Now, we have to compute the frequency of each word for each candidate to create the word clouds.", "_____no_output_____" ] ], [ [ "from collections import Counter\n\n# Macron\nn = re.compile('\\n')\ntokenMacTot = open(\"tokens_mac.txt\", \"r\").read()\ntokenMacTot = n.sub(' ', tokenMacTot)\ntokenMacTot = nltk.word_tokenize(tokenMacTot)\n\nmac = Counter(tokenMacTot)\nmac_most = mac.most_common(n=100)\n\n\n# Le Pen\ntokenLPTot = open(\"tokens_LP.txt\", \"r\").read()\ntokenLPTot = n.sub(' ', tokenLPTot)\ntokenLPTot = nltk.word_tokenize(tokenLPTot)\n\nLP = Counter(tokenLPTot)\nLP_most = LP.most_common(n=100)", "_____no_output_____" ] ], [ [ "### Create word clouds", "_____no_output_____" ] ], [ [ "import wordcloud\nfrom wordcloud import WordCloud, STOPWORDS, ImageColorGenerator\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\nstr_mac = open(\"tokens_mac.txt\", \"r\").read()\nstr_lp = open(\"tokens_LP.txt\", \"r\").read()", "_____no_output_____" ], [ "## Generate word clouds\nmac = WordCloud(background_color=\"white\", collocations= False, font_path='C:/Windows/Fonts/Verdana.ttf')\nwordcloud_mac = mac.generate(str_mac)", "_____no_output_____" ], [ "lp = WordCloud(background_color=\"white\", collocations= False, font_path='C:/Windows/Fonts/Verdana.ttf')\nwordcloud_lp = lp.generate(str_lp)", "_____no_output_____" ], [ "## Show word clouds\nplt.figure()\nplt.imshow(wordcloud_mac, interpolation='bilinear')\nplt.axis(\"off\")\nplt.title(\"Macron\")\nplt.show()\n\nplt.figure()\nplt.imshow(wordcloud_lp, interpolation='bilinear')\nplt.axis(\"off\")\nplt.title(\"Le Pen\")\nplt.show()", "_____no_output_____" ] ], [ [ "The two candidates seem to use frequently the same words: France/français/française, Europe/européen/européenne, pays/nation/république, sécurité, monde, économie/économique, territoire... <br>\nWell... that's not very surprising in a presidential election...<br>\nThis kind of word cloud may not be the best strategy to highlight their differences then. <br>\n\nHowever, we can already highlight some differences. The words \"Fillon\" and \"Macron\" appear in the speeches of Le Pen whereas no name appears in the most frequent words used by Macron. Indeed, Le Pen is well known for always strongly criticize her opponants. <br>\n\nWe will delve further into these differences. 
But, first, let's make our word clouds a bit nicer.", "_____no_output_____" ], [ "We can use the pictures of each candidate as masks for the word clouds.", "_____no_output_____" ] ], [ [ "## Generate mask (load and format image) (NB: background must be transparent)\n\n# Macron\nmac_im = Image.open(\"F:/Boulot/00-DataScience/Portfolio/Presidentielles2017/macronNB.png\")\nmac_mask = Image.new(\"RGB\", mac_im.size, (255,255,255))\nmac_mask.paste(mac_im, mac_im)\nmac_mask = np.array(mac_mask)\n\n# Le Pen\nlp_im = Image.open(\"F:/Boulot/00-DataScience/Portfolio/Presidentielles2017/lepenBleu.png\")\nlp_mask = Image.new(\"RGB\", lp_im.size, (255,255,255))\nlp_mask.paste(lp_im, lp_im)\nlp_mask = np.array(lp_mask)", "_____no_output_____" ], [ "## Generate word clouds with mask\n\nmac_mask = WordCloud(background_color=\"white\", collocations= False, font_path='C:/Windows/Fonts/Verdana.ttf', mask=mac_mask)\nwordcloud_mac_mask = mac_mask.generate(str_mac)\n\nlp_mask = WordCloud(background_color=\"white\", collocations= False, font_path='C:/Windows/Fonts/Verdana.ttf', mask=lp_mask)\nwordcloud_lp_mask = lp_mask.generate(str_lp)", "_____no_output_____" ], [ "## Show word clouds\nplt.figure()\nplt.imshow(wordcloud_mac_mask, interpolation='bilinear')\nplt.axis(\"off\")\nplt.title(\"Macron\")\nplt.show()\n\nplt.figure()\nplt.imshow(wordcloud_lp_mask, interpolation='bilinear')\nplt.axis(\"off\")\nplt.title(\"Le Pen\")\nplt.show()", "_____no_output_____" ] ], [ [ "Now, let's see whether we can color the words using the colors of the French flag: blue, white and red.", "_____no_output_____" ] ], [ [ "import random\ndef BBR_color_func(word, font_size, position, orientation, random_state=None,\n **kwargs):\n return \"%s\" % random.choice([\"hsl(240, 100%, 25%)\", \"hsl(0, 0%, 100%)\", \"hsl(0, 100%, 50%)\"])", "_____no_output_____" ], [ "## Generate mask (load and format image) (NB: background must be transparent)\n\n# Macron\nmac_im = Image.open(\"F:/Boulot/00-DataScience/Portfolio/Presidentielles2017/macronNB.png\")\nmac_mask = Image.new(\"RGB\", mac_im.size, (255,255,255))\nmac_mask.paste(mac_im, mac_im)\nmac_mask = np.array(mac_mask)\n\n# Le Pen\nlp_im = Image.open(\"F:/Boulot/00-DataScience/Portfolio/Presidentielles2017/lepenBleu.png\")\nlp_mask = Image.new(\"RGB\", lp_im.size, (255,255,255))\nlp_mask.paste(lp_im, lp_im)\nlp_mask = np.array(lp_mask)", "_____no_output_____" ], [ "## Generate word clouds with mask and coloring from French flag\n\nmac = WordCloud(background_color=\"black\", color_func=BBR_color_func, random_state=3, relative_scaling=0.5, collocations= False, font_path='C:/Windows/Fonts/Verdana.ttf', mask=mac_mask)\nwordcloud_mac = mac.generate(str_mac)\t\n\nlp = WordCloud(background_color=\"black\", color_func=BBR_color_func, random_state=3, relative_scaling=0.5, collocations= False, font_path='C:/Windows/Fonts/Verdana.ttf', mask=lp_mask)\nwordcloud_lp = lp.generate(str_lp)\t", "_____no_output_____" ], [ "## Show word clouds\nplt.figure()\nplt.imshow(wordcloud_mac, interpolation='bilinear')\nplt.axis(\"off\")\nplt.title(\"Macron\")\nplt.show()\n\nplt.figure()\nplt.imshow(wordcloud_lp, interpolation='bilinear')\nplt.axis(\"off\")\nplt.title(\"Le Pen\")\nplt.show()", "_____no_output_____" ], [ "## Store to file\nmac.to_file(\"~/macron_wc.png\")\nmac.to_file(\"macron_wcBBR.png\")\n\nlp.to_file(\"~/lepen_wc.png\")\nlp.to_file(\"lepen_wcBBR.png\")", "_____no_output_____" ] ], [ [ "<br>\nNow, let's go back and see whether we can make the differences between Macron and Le Pen more obvious.\n\n## 
Create word clouds 2: focusing on differences\n\nNow, we are going to keep only the words that are different among the most frequent words used by the two candidates.", "_____no_output_____" ], [ "analysis coming soon...", "_____no_output_____" ], [ "## Sentiment analysis\n\nWhat about sentiment of people towards Macron and Le Pen?\n\nanalysis coming soon...", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
e7dcd3ee7ff65d216969267ecce602dd09f374cd
9,153
ipynb
Jupyter Notebook
colab_notebooks/Generate initial solutions.ipynb
angela18199/CORL_hyperparameter_search
c2cfc4c4e34dedb65929a8b7d68c61e53681a67c
[ "MIT" ]
null
null
null
colab_notebooks/Generate initial solutions.ipynb
angela18199/CORL_hyperparameter_search
c2cfc4c4e34dedb65929a8b7d68c61e53681a67c
[ "MIT" ]
null
null
null
colab_notebooks/Generate initial solutions.ipynb
angela18199/CORL_hyperparameter_search
c2cfc4c4e34dedb65929a8b7d68c61e53681a67c
[ "MIT" ]
null
null
null
9,153
9,153
0.605375
[ [ [ "# This ensures that a gpu is being used by the current google colab session.\n# If testing ES, then this block ensures that it is not available\n\ngpu_info = !nvidia-smi\ngpu_info = '\\n'.join(gpu_info)\nif gpu_info.find('failed') >= 0:\n print('Select the Runtime > \"Change runtime type\" menu to enable a GPU accelerator, ')\n print('and then re-execute this cell.')\nelse:\n print(gpu_info)", "Fri Oct 2 20:28:07 2020 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 455.23.05 Driver Version: 418.67 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 Tesla V100-SXM2... Off | 00000000:00:04.0 Off | 0 |\n| N/A 35C P0 23W / 300W | 0MiB / 16130MiB | 0% Default |\n| | | ERR! |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n" ], [ "# This code block is used to access your google drive\n\nfrom google.colab import drive\nROOT = \"/content/drive\"\ndrive.mount(ROOT)", "Mounted at /content/drive\n" ], [ "# Make sure this points to the project folder\n\n%cd drive/'My Drive'/CORL", "/content/drive/My Drive/CORL\n" ], [ "%cd l2i", "/content/drive/My Drive/CORL/l2i\n" ], [ "# import projects\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport sys\nsys.path.insert(0,\"../attention/\")\n\nclass Config:\n\n def __init__(self):\n self.problem_seed = 1\n self.test_model = None\n self.num_training_points = 100\nconfig = Config()\n\nimport os\nimport numpy as np\nimport torch\nfrom torch.utils.data import DataLoader\nfrom generate_data import generate_vrp_data\nfrom utils import load_model\nfrom problems import CVRP\nfrom problem import generate_problem\nimport pickle\n\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"", "_____no_output_____" ], [ "# make key for each problem\ndef make_key(problem):\n demand = tuple(np.array(problem.capacities[1:])/np.array(problem.capacities[0]))\n depot_loc = tuple(problem.locations[0][:2])\n locations = tuple(map(tuple, problem.locations[1:,:2]))\n return (demand, depot_loc, locations)\n\n# convert l2i problem to attention problem\ndef convert_to_attention(problem):\n demand = np.array(problem.capacities[1:])/np.array(problem.capacities[0])\n depot_loc = problem.locations[0][:2]\n locations = problem.locations[1:,:2]\n\n dataset = CVRP.make_dataset(size=100, num_samples=1)\n dataset.data = [\n {\n 'loc' : torch.tensor(locations, dtype=torch.float32, device=torch.device(device)),\n 'demand' : torch.tensor(demand, dtype=torch.float32, device=torch.device(device)),\n 'depot' : torch.tensor(depot_loc, dtype=torch.float32, device=torch.device(device))\n }\n ]\n return dataset\n\n# convert attention solution to l2i solution\ndef convert_to_l2i(solution):\n solist = solution.tolist()\n converted_solution = []\n route = [0]\n for ele in solist:\n route.append(ele)\n if not ele:\n converted_solution.append(route)\n route = [0]\n 
route.append(0)\n converted_solution.append(route)\n converted_solution.append([0,0])\n return converted_solution\n\ndef use_attention(problem):\n # convert to attention problem\n att_prob = convert_to_attention(problem)\n\n # Need a dataloader to batch instances\n dataloader = DataLoader(att_prob, batch_size=1)\n\n # Make var works for dicts\n batch = next(iter(dataloader))\n\n # Run the model\n model.eval()\n model.set_decode_type('greedy')\n with torch.no_grad():\n length, log_p, pi = model(batch, return_pi=True)\n\n # convert to l2i solution\n solution = convert_to_l2i(pi[0])\n\n return solution", "_____no_output_____" ], [ "# This block will load the attention-based model and save the solutions\n# change the load and save locations as needed.\n\nload_ = \"../models/att/21hr-model.pt\"\nsave_ = \"../init_sols/att_21_init_sol.pickle\"\n\n# load model\nmodel = torch.load(load_, map_location=torch.device(device))\n\n# Generate problem solution memory and set seed\nN_ = 200 # samples\nmemory_ = {}\nconfig.problem_seed = 1\n\nsolutions = []\nfor _ in range(N_):\n problem = generate_problem(config)\n solutions.append(use_attention(problem))\npickle.dump(solutions, open(save_, \"wb\"))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7dcdb84d09ff38c73f432aff0d85a11631421c8
7,061
ipynb
Jupyter Notebook
src/.ipynb_checkpoints/process_sciCAR_data-checkpoint.ipynb
kodaim1115/test
c5c8579dd088883a31f220170c39393084ff4c03
[ "MIT" ]
4
2021-10-04T14:45:31.000Z
2022-01-22T00:48:57.000Z
src/.ipynb_checkpoints/process_sciCAR_data-checkpoint.ipynb
kodaim1115/test
c5c8579dd088883a31f220170c39393084ff4c03
[ "MIT" ]
1
2022-03-13T21:07:56.000Z
2022-03-13T21:07:56.000Z
src/.ipynb_checkpoints/process_sciCAR_data-checkpoint.ipynb
kodaim1115/test
c5c8579dd088883a31f220170c39393084ff4c03
[ "MIT" ]
4
2021-12-13T21:47:12.000Z
2022-03-30T19:06:44.000Z
20.174286
92
0.518765
[ [ [ "import numpy as np\nimport scipy.sparse\nimport scipy.io\nimport torch\n\nfrom torchnet.dataset import TensorDataset, ResampleDataset\nfrom torch.utils.data import Subset\n\nimport pandas as pd\nfrom datasets_dev import RNA_Dataset, ATAC_Dataset, read_mtx", "_____no_output_____" ], [ "path = '../data/sci-CAR/'", "_____no_output_____" ], [ "rna_path = path + 'RNA-seq'\natac_path = path + 'ATAC-seq'", "_____no_output_____" ], [ "r_dataset = RNA_Dataset(rna_path, min_reads=2,min_cells=2)", "Loading data ...\nOriginal data contains 8837 cells x 4835 peaks\nFinished loading takes 0.03 min\n" ], [ "a_dataset = ATAC_Dataset(atac_path,low=0.001, high=1.0, min_peaks=0, binarize=True)\n#a_dataset = ATAC_Dataset(atac_path, low_counts=0, min_peaks=200, binarize=False)", "Loading data ...\nOriginal data contains 8837 cells x 88058 peaks\nFinished loading takes 0.05 min\n" ], [ "print(\"RNA shape is \" + str(r_dataset.data.shape))", "RNA shape is(8837, 4835)\n" ], [ "a_dataset.data.shape", "_____no_output_____" ], [ "torch.save(r_dataset, path + 'r_dataset.rar')", "_____no_output_____" ], [ "torch.save(a_dataset, path + 'a_dataset_2.rar')", "_____no_output_____" ], [ "a = torch.load(path + 'a_dataset.rar')\na.data.shape", "_____no_output_____" ], [ "torch.save(a_dataset,path+'a_dataset_mxabsscale.rar')", "_____no_output_____" ], [ "a_dataset = torch.load(path+'a_dataset_mxabsscale.rar')\na_dataset.data.shape", "_____no_output_____" ], [ "torch.save(a_dataset, path + 'a_dataset_8837x11548.rar')", "_____no_output_____" ], [ "import seaborn as sns", "_____no_output_____" ], [ "sns.palplot(sns.color_palette(\"Set1\", 24))", "_____no_output_____" ], [ "sns.color_palette(\"Set1\", 24)", "_____no_output_____" ], [ "a_dataset.data[:,300].todense()[range(1000)]", "_____no_output_____" ], [ "total_cells = a_dataset.data.shape[0]\ntotal_cells", "_____no_output_____" ], [ "count = np.array((a_dataset.data >0).sum(0)).squeeze()\ncount", "_____no_output_____" ], [ "indices = np.where((count > 0.005*total_cells) & (count < 1.0*total_cells))[0]\nindices", "_____no_output_____" ], [ "len(indices)", "_____no_output_____" ], [ "num_cell = r_dataset.data.shape[0]\nt_size = np.round(num_cell*0.75).astype('int')\nt_id = np.random.choice(a=num_cell, size=t_size, replace=False)\ns_id = np.delete(range(num_cell),t_id)\n\ntrain_dataset = [Subset(r_dataset, t_id), Subset(a_dataset, t_id)]\ntest_dataset = [Subset(r_dataset, s_id), Subset(a_dataset, s_id)]", "_____no_output_____" ], [ "from scipy.sparse import csr_matrix\n\ntrain_rna = r_dataset.data[train_dataset[0].indices,:]\ntrain_atac = a_dataset.data[train_dataset[1].indices,:]\n\ntest_rna = r_dataset.data[test_dataset[0].indices,:]\ntest_atac = a_dataset.data[test_dataset[1].indices,:]\n\ndata = [train_rna.todense(), train_atac.todense()]\ns_data = [test_rna.todense(), test_atac.todense()]", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7dce02b38d36f2ef540734738dab7517d3f0078
7,026
ipynb
Jupyter Notebook
Models/0207-REPORT-Graph_Theory_DianeWang.ipynb
Diane1306/fire_model
268da31b86da33bc870c1e5a982c37836645bd98
[ "MIT" ]
1
2021-05-20T03:23:01.000Z
2021-05-20T03:23:01.000Z
Models/0207-REPORT-Graph_Theory_DianeWang.ipynb
Diane1306/fire_model
268da31b86da33bc870c1e5a982c37836645bd98
[ "MIT" ]
null
null
null
Models/0207-REPORT-Graph_Theory_DianeWang.ipynb
Diane1306/fire_model
268da31b86da33bc870c1e5a982c37836645bd98
[ "MIT" ]
null
null
null
104.865672
2,098
0.749644
[ [ [ "# <center>Using Graph Theory in Simulating 2-D Wildland Fire Behavior </center>\n\n<center>by Diane Wang</center>", "_____no_output_____" ], [ "---\n# Graph Theory used in fire behavior simulation\n\nGraph theory is used in fire simulation mainly by representing fire behavior in networks. Fire-growth modeling on complex landscapes can be approached as a search for the minimum time for fire to travel among nodes in a two-dimensional network. The paths producing minimum travel time between nodes are then interpolated to reveal the fire perimeter positions at an instant in time. These fire perimeters and their fire behavior characteristics (e.g., spread rate, fireline intensity) are essentially identical to the products of perimeter expansion techniques. Travel time methods offer potential advantages for some kinds of modeling applications, because they are more readily parallelized for computation than methods for expanding fire fronts and require no correction for crossed fronts or merging separate fires (Finney, 2002). Also, a method for modeling fire propagation using a discrete Delaunay (also known as a triangulated irregular) network that is refined by a two-pass shortest path algorithm. The suggested methodology is tailored for the fast evaluation of minimum wildfire travel time from ignition sources (points) to specific points of interest or destination points, such as human settlements and infrastructure (Stepanov, 2011). This Delaunay graph-based approach, which is computationally effective with Geographic Information System (GIS). Beyond that, a computational two-level framework, where fire is being modeled to spread through a weighted directed network whose edge weights are the state transition probabilities of a spatio-temporal Markov Cellular Automata (CA) process. The particular CA model incorporates detailed GIS, landscape and meteorological data and has been proved to be robust and efficient in predicting the fire spreading behaviour in several real-world cases. Thus, the problem of the spatial distribution of fire breaks is reduced to the problem of finding the group of nodes through which the fire spreads most rapidly. This problem is closely and straightforwardly related to the analysis of information flow on networks (Russo, 2016).\n\nBesides, graph burning studies how fast a contagion, modeled as a set of fires, spreads in a graph. The burning process takes\nplace in synchronous, discrete rounds. In each round, a fire breaks out at a vertex, and the fire spreads to all vertices that are adjacent to a burning vertex. The selection of vertices where fires start defines a schedule that indicates the number of rounds required to burn all vertices. Given a graph, the objective of an algorithm is to find a schedule that minimizes the number of rounds to burn graph. The burning number measures how prone a network is to fast social contagion. In the burning protocol, like many other network protocols, data is communicated between nodes in discrete rounds. The input is an undirected, unweighted, finite simple graph. We say a node is burning if it has received data. Initially, no vertex is burning. In each round, a burning vertex sends data to all its neighbors, and all neighbors will be on fire at the end of the round; this is consistent with the fact that a user in the network can expose all its neighbours to a posted piece of data. 
In addition, in each given round, a new fire starts at a non-burning vertex called an activator; this can be interpreted as a way to target additional users that initiate the contagion. Note that the burning protocol does not provide a specific algorithm for how the fire spreads; however, the algorithm can choose where to initiate the fire. The decisions of the algorithm for the location of activators define a schedule that can be described by a burning sequence: the ith member of the burning sequence indicates the vertex at which a fire is started in round i. We say the graph is burned when all vertices are on fire; that is, all members of the network have received the data. For example, consider burning a graph in three rounds using a schedule defined by the burning sequence <A, B, C>. The number on each vertex indicates the round at which the vertex becomes a burning vertex. At round 1, a fire starts at A. At round 2, another fire starts at B while the fire at A spreads to all neighbors of A. At round 3, the fire spreads to all vertices except for C, where a new fire is started (Bonato, 2019).\n<img src=\"https://media.springernature.com/lw785/springer-static/image/chp%3A10.1007%2F978-3-030-14812-6_6/MediaObjects/471809_1_En_6_Fig1_HTML.png\" width=\"60%\">\n\nOther fire-related research has also used graph theory to optimize firefighting in oil terminals (Khakzad, 2018) and to unravel the complexity of wildland-urban interface fires (Mahmoud, 2018). ", "_____no_output_____" ], [ "---\n# References\n\n- Russo, Lucia, Paola Russo, and Constantinos I. Siettos. “A Complex Network Theory Approach for the Spatial Distribution of Fire Breaks in Heterogeneous Forest Landscapes for the Control of Wildland Fires.” Edited by Marc Hanewinkel. PLOS ONE 11, no. 10 (October 25, 2016): e0163226. https://doi.org/10.1371/journal.pone.0163226.\n- Khakzad, Nima. “A Graph Theoretic Approach to Optimal Firefighting in Oil Terminals.” Energies 11, no. 11 (November 9, 2018): 3101. https://doi.org/10.3390/en11113101.\n- Bonato, Anthony, and Shahin Kamali. “Approximation Algorithms for Graph Burning.” ArXiv:1811.04449 [Cs, Math], April 4, 2019. http://arxiv.org/abs/1811.04449.\n- Finney, Mark A. “Fire Growth Using Minimum Travel Time Methods.” Canadian Journal of Forest Research 32, no. 8 (August 1, 2002): 1420–24. https://doi.org/10.1139/x02-068.\n- Stepanov, Alexander, and James MacGregor Smith. “Modeling Wildfire Propagation with Delaunay Triangulation and Shortest Path Algorithms.” European Journal of Operational Research 218, no. 3 (May 2012): 775–88. https://doi.org/10.1016/j.ejor.2011.11.031.\n- Mahmoud, Hussam, and Akshat Chulahwat. “Unraveling the Complexity of Wildland Urban Interface Fires.” Scientific Reports 8, no. 1 (December 2018): 9315. https://doi.org/10.1038/s41598-018-27215-5.\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown" ] ]
e7dcf018e3d830a1fbb4cb920961370e1d311321
8,294
ipynb
Jupyter Notebook
beliakov-artem/HW1.ipynb
Malkovsky/co-mkn-hw-2021
94c9fab55c5bfc0e74fb339e09e6b9a858ae505f
[ "MIT" ]
null
null
null
beliakov-artem/HW1.ipynb
Malkovsky/co-mkn-hw-2021
94c9fab55c5bfc0e74fb339e09e6b9a858ae505f
[ "MIT" ]
null
null
null
beliakov-artem/HW1.ipynb
Malkovsky/co-mkn-hw-2021
94c9fab55c5bfc0e74fb339e09e6b9a858ae505f
[ "MIT" ]
1
2021-10-05T16:09:22.000Z
2021-10-05T16:09:22.000Z
24.832335
327
0.47251
[ [ [ "#### Task1 ", "_____no_output_____" ] ], [ [ "from math import exp", "_____no_output_____" ], [ "INF = 10\nEPS = 1e-8\nITERATIONS_NUM = 1000", "_____no_output_____" ], [ "class Differentiable:\n def __init__(self, derivatives):\n self.derivatives = derivatives\n def __call__(self, x):\n return self.derivatives[0](x)\n def grad(self):\n if (len(self.derivatives) == 1):\n raise Exception(\"no derivatives were provided\")\n return Differentiable(self.derivatives[1:])\n\nclass Polynom:\n def __init__(self, coefs):\n self.coefs = coefs\n self._degree = len(coefs) - 1\n def __call__(self, x):\n res = 0\n for i, coef in enumerate(self.coefs):\n res += (x ** i) * coef\n return res\n def get_degree(self):\n return self._degree\n def grad(self):\n grad_coefs = [0] * self._degree\n for i in range(1, self._degree + 1):\n grad_coefs[i - 1] = self.coefs[i] * i\n return Polynom(grad_coefs)", "_____no_output_____" ], [ "def bisec(p, l, r):\n assert p(r) * p(l) < 0\n sign = 1 if p(r) > 0 else -1\n\n while (r - l > EPS):\n m = (r + l) / 2\n if (p(m) * sign > 0):\n r = m\n else:\n l = m\n return l\n\ndef newton(p):\n x = 1\n p_grad = p.grad()\n \n for i in range(ITERATIONS_NUM):\n x = x - p(x) / p_grad(x)\n return x", "_____no_output_____" ], [ "def get_polynom(k, a):\n coefs = [0] * (k + 1)\n coefs[k] = 1\n coefs[0] = -a\n return Polynom(coefs)", "_____no_output_____" ], [ "p = get_polynom(2, 2)", "_____no_output_____" ], [ "print(f'bisec: {bisec(p, 0, INF)}')\nprint(f'newton: {newton(p)}')", "bisec: 1.414213553071022\nnewton: 1.414213562373095\n" ] ], [ [ "### Task2", "_____no_output_____" ] ], [ [ "def get_roots(p):\n if (p.get_degree() == 1):\n return [-p.coefs[0] / p.coefs[1]]\n\n a = [-INF]\n a += get_roots(p.grad())\n a.append(INF)\n \n roots = []\n for i in range(len(a) - 1):\n roots.append(bisec(p, a[i], a[i + 1]))\n return roots", "_____no_output_____" ], [ "def get_polynom_by_roots(roots):\n coefs = [0] * (len(roots) + 1)\n for mask in range(1 << len(roots)):\n product = 1\n bits = 0\n for i in range(len(roots)):\n if ((mask >> i) & 1):\n product *= - roots[i]\n bits += 1\n coefs[len(roots) - bits] += product\n return Polynom(coefs)", "_____no_output_____" ], [ "get_roots(get_polynom_by_roots([1, 2, 3, 4, 5]))", "_____no_output_____" ] ], [ [ "Понятно, что этот код не работает в самой общей постановке задачи. Как минимум он всегда возвращает столько корней, какой степени полином. Он совершенно не работает в сценарии, когда у нас есть кратные корни. Да и у производной могут быть кратные корни, но мне очень не хотелось разбирать все эти случаи. Так что вот так.", "_____no_output_____" ], [ "### Task03", "_____no_output_____" ] ], [ [ "def get_differentiable(a, b, c, d):\n def f(x):\n return exp(a * x) + exp(- b * x) + c * ((x - d) ** 2)\n \n def f_grad(x):\n return a * exp(a * x) - b * exp(- b * x) + 2 * c * (x - d)\n \n def f_grad2(x):\n return (a ** 2) * exp(a * x) + (b ** 2) * exp(- b * x) + 2 * c \n \n return Differentiable([f, f_grad, f_grad2])", "_____no_output_____" ], [ "f = get_differentiable(1, 1, 1, 1)", "_____no_output_____" ], [ "def ternary_search(f):\n l = -INF\n r = INF\n \n while(r - l > EPS):\n m1 = l + ((r - l) / 3)\n m2 = l + (2 * (r - l) / 3)\n if (f(m1) > f(m2)):\n l = m1\n else:\n r = m2\n return l", "_____no_output_____" ], [ "print(\"bisec: \", bisec(f.grad(), -INF, INF))\nprint(\"newton: \", newton(f.grad()))ss\nprint(\"ternary: \", ternary_search(f))", "bisec: 0.49007306806743145\nnewton: 0.4900730684805478\nternary: 0.49007306428116904\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
e7dcf32ddce113e447f057e4d30edfd14a4a2fb8
14,659
ipynb
Jupyter Notebook
Colab.ipynb
andrewlaikhT/KoGPT2
58e47483f64a7dc7bf65bc27fb331a799eb8aaa9
[ "Apache-2.0" ]
226
2020-04-26T10:22:54.000Z
2022-03-08T08:33:37.000Z
Colab.ipynb
andrewlaikhT/KoGPT2
58e47483f64a7dc7bf65bc27fb331a799eb8aaa9
[ "Apache-2.0" ]
8
2020-04-30T11:38:44.000Z
2021-03-05T07:09:50.000Z
Colab.ipynb
andrewlaikhT/KoGPT2
58e47483f64a7dc7bf65bc27fb331a799eb8aaa9
[ "Apache-2.0" ]
54
2020-05-10T19:31:25.000Z
2022-01-03T15:07:40.000Z
43.369822
249
0.638516
[ [ [ "from google.colab import drive\ndrive.mount('/content/drive')", "_____no_output_____" ], [ "from pydrive.auth import GoogleAuth\nfrom pydrive.drive import GoogleDrive\nfrom google.colab import auth\nfrom oauth2client.client import GoogleCredentials\nimport logging\n\nauth.authenticate_user()\ngauth = GoogleAuth()\ngauth.credentials = GoogleCredentials.get_application_default()\nmy_drive = GoogleDrive(gauth)", "_____no_output_____" ] ], [ [ "# 필요한 필수 새팅 작업", "_____no_output_____" ] ], [ [ "!ls", "adc.json drive sample_data\n" ], [ "!pip install -r drive/'My Drive'/'KoGPT2-FineTuning_pre'/requirements.txt", "Collecting mxnet==1.6.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/81/f5/d79b5b40735086ff1100c680703e0f3efc830fa455e268e9e96f3c857e93/mxnet-1.6.0-py2.py3-none-any.whl (68.7MB)\n\u001b[K |████████████████████████████████| 68.7MB 45kB/s \n\u001b[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from -r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 2)) (4.41.1)\nCollecting gluonnlp==0.8.3\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/83/29/c7dffbfc39f8dd8bb9314df7aaf92a67f6c7826ed35d546c8fa63d6e5925/gluonnlp-0.8.3.tar.gz (236kB)\n\u001b[K |████████████████████████████████| 245kB 71.0MB/s \n\u001b[?25hCollecting sentencepiece==0.1.6\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/54/7f/a5fae1ff61d427801e8845e6c1a3ee1c13db6c187e155ae58a0224f21a38/sentencepiece-0.1.6-cp36-cp36m-manylinux1_x86_64.whl (1.4MB)\n\u001b[K |████████████████████████████████| 1.4MB 40.2MB/s \n\u001b[?25hRequirement already satisfied: torch==1.5.1 in /usr/local/lib/python3.6/dist-packages (from -r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 5)) (1.5.1+cu101)\nCollecting transformers==2.1.1\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/fd/f9/51824e40f0a23a49eab4fcaa45c1c797cbf9761adedd0b558dab7c958b34/transformers-2.1.1-py3-none-any.whl (311kB)\n\u001b[K |████████████████████████████████| 317kB 56.2MB/s \n\u001b[?25hCollecting tensorboardX\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/af/0c/4f41bcd45db376e6fe5c619c01100e9b7531c55791b7244815bac6eac32c/tensorboardX-2.1-py2.py3-none-any.whl (308kB)\n\u001b[K |████████████████████████████████| 317kB 61.3MB/s \n\u001b[?25hCollecting dropbox\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/f2/68/037016cf1b227cc2ae0a7b962f69a14e60e50fa1e94f1ba9d297893de924/dropbox-10.3.0-py3-none-any.whl (668kB)\n\u001b[K |████████████████████████████████| 675kB 58.6MB/s \n\u001b[?25hRequirement already satisfied: numpy<2.0.0,>1.16.0 in /usr/local/lib/python3.6/dist-packages (from mxnet==1.6.0->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 1)) (1.18.5)\nCollecting graphviz<0.9.0,>=0.8.1\n Downloading https://files.pythonhosted.org/packages/53/39/4ab213673844e0c004bed8a0781a0721a3f6bb23eb8854ee75c236428892/graphviz-0.8.4-py2.py3-none-any.whl\nRequirement already satisfied: requests<3,>=2.20.0 in /usr/local/lib/python3.6/dist-packages (from mxnet==1.6.0->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 1)) (2.23.0)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==1.5.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 5)) (0.16.0)\nCollecting sacremoses\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz (883kB)\n\u001b[K 
|████████████████████████████████| 890kB 51.4MB/s \n\u001b[?25hRequirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (1.14.20)\nRequirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (2019.12.20)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from tensorboardX->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 7)) (1.12.0)\nRequirement already satisfied: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorboardX->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 7)) (3.12.2)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.20.0->mxnet==1.6.0->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 1)) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.20.0->mxnet==1.6.0->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 1)) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.20.0->mxnet==1.6.0->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 1)) (2020.6.20)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.20.0->mxnet==1.6.0->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 1)) (2.10)\nRequirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (0.16.0)\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3->transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (0.3.3)\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3->transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (0.10.0)\nRequirement already satisfied: botocore<1.18.0,>=1.17.20 in /usr/local/lib/python3.6/dist-packages (from boto3->transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (1.17.20)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.8.0->tensorboardX->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 7)) (49.1.0)\nRequirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.18.0,>=1.17.20->boto3->transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (0.15.2)\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.6/dist-packages (from botocore<1.18.0,>=1.17.20->boto3->transformers==2.1.1->-r drive/My Drive/KoGPT2-FineTuning_pre/requirements.txt (line 6)) (2.8.1)\nBuilding wheels for collected packages: gluonnlp, sacremoses\n Building wheel for gluonnlp (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for gluonnlp: filename=gluonnlp-0.8.3-cp36-none-any.whl size=293540 sha256=b87544d39664cff42ec6421c663f8b1070536be1ca8bf80378dfc03ec58dc65c\n Stored in directory: /root/.cache/pip/wheels/50/6e/32/521aa84da7f9ee725d3c9be0b5e0d771df659bf25da5929f6c\n Building wheel for sacremoses (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893260 sha256=a8e42e0cc0b9eae5dbb1b198d88ce991c2d7fdf0e413ef24f60811aa0fbdbd39\n Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45\nSuccessfully built gluonnlp sacremoses\nInstalling collected packages: graphviz, mxnet, gluonnlp, sentencepiece, sacremoses, transformers, tensorboardX, dropbox\n Found existing installation: graphviz 0.10.1\n Uninstalling graphviz-0.10.1:\n Successfully uninstalled graphviz-0.10.1\nSuccessfully installed dropbox-10.3.0 gluonnlp-0.8.3 graphviz-0.8.4 mxnet-1.6.0 sacremoses-0.0.43 sentencepiece-0.1.6 tensorboardX-2.1 transformers-2.1.1\n" ], [ "import os\nimport sys\nsys.path.append('drive/My Drive/KoGPT2-FineTuning_pre')\nlogs_base_dir = \"runs\"", "_____no_output_____" ], [ "from jupyter_main_auto import main", "/usr/local/lib/python3.6/dist-packages/mxnet/optimizer/optimizer.py:167: UserWarning: WARNING: New optimizer gluonnlp.optimizer.lamb.LAMB is overriding existing optimizer mxnet.optimizer.optimizer.LAMB\n Optimizer.opt_registry[name].__name__))\n" ], [ "ctx= 'cuda'\ncachedir='~/kogpt2/'\nload_path = './gdrive/My Drive/KoGPT2-FineTuning_pre/checkpoint/KoGPT2_checkpoint_640000.tar' # path of the model to resume training from\nsave_path = './gdrive/My Drive/KoGPT2-FineTuning_pre/checkpoint/' # path where the trained model will be saved\ndata_file_path = './gdrive/My Drive/KoGPT2-FineTuning_pre/dataset/dataset.csv' # path of the dataset to train on", "_____no_output_____" ] ], [ [ "# Start model training", "_____no_output_____" ] ], [ [ "# test that saving to Drive works properly\ndrive.mount('/content/gdrive')\n\nf = open(save_path+ 'KoGPT2_checkpoint_' + str(142) + '.tar', 'w')\nf.write(\"가자\")\nf.close()", "Mounted at /content/gdrive\n" ], [ "main(load_path = load_path, data_file_path = data_file_path, save_path = './gdrive/My Drive/KoGPT2-FineTuning_pre/checkpoint/', summary_url = './gdrive/My Drive/KoGPT2-FineTuning_pre/runs/2020-07-20/', text_size = 500, new = 1, batch_size = 1)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7dcf3e4c5a602751be6c8ce6cfda56c653f8dcc
6,373
ipynb
Jupyter Notebook
12_decision_trees_and_random_forests/notebooks/02_solutions.ipynb
JoseHJBlanco/ga-data-science
dff5cfd8fb13c1c49cba099bd100ca79143828e4
[ "CC-BY-4.0" ]
12
2017-11-17T09:44:44.000Z
2020-11-08T18:02:42.000Z
12_decision_trees_and_random_forests/notebooks/02_solutions.ipynb
itsshaikaslam/ga-data-science
b39f3a499749e4423bb193a1376b7dee770152b7
[ "CC-BY-4.0" ]
1
2018-03-27T13:05:12.000Z
2018-03-27T13:05:12.000Z
12_decision_trees_and_random_forests/notebooks/02_solutions.ipynb
itsshaikaslam/ga-data-science
b39f3a499749e4423bb193a1376b7dee770152b7
[ "CC-BY-4.0" ]
21
2018-01-01T03:26:28.000Z
2021-10-31T19:24:24.000Z
22.205575
116
0.530833
[ [ [ "# Decision trees and random forests", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\n\nfrom sklearn import model_selection as ms, tree, ensemble\n\n%matplotlib inline", "_____no_output_____" ], [ "WHITES_URL = 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv'", "_____no_output_____" ] ], [ [ "Read in the Wine Quality dataset.", "_____no_output_____" ] ], [ [ "whites = pd.read_csv(WHITES_URL, sep=';')", "_____no_output_____" ] ], [ [ "Train a decision tree for 'quality' limiting the depth to 3, and the minimum number of samples per leaf to 50.", "_____no_output_____" ] ], [ [ "X = whites.drop('quality', axis=1)\ny = whites.quality\ntree1 = tree.DecisionTreeRegressor(max_depth=2, min_samples_leaf=50)\ntree1.fit(X, y)", "_____no_output_____" ] ], [ [ "Export the tree for plotting.", "_____no_output_____" ] ], [ [ "tree.export_graphviz(tree1, 'tree1.dot', feature_names=X.columns)", "_____no_output_____" ] ], [ [ "Define folds for cross-validation.", "_____no_output_____" ] ], [ [ "ten_fold_cv = ms.KFold(n_splits=10, shuffle=True)", "_____no_output_____" ] ], [ [ "Compute average MSE across folds.", "_____no_output_____" ] ], [ [ "mses = ms.cross_val_score(tree.DecisionTreeRegressor(max_depth=2, min_samples_leaf=50),\n X, y, scoring='neg_mean_squared_error', cv=ten_fold_cv)\nnp.mean(-mses)", "_____no_output_____" ] ], [ [ "Train a random forest with 20 decision trees.", "_____no_output_____" ] ], [ [ "rf1 = ensemble.RandomForestRegressor(n_estimators=20)\nrf1.fit(X, y)", "_____no_output_____" ] ], [ [ "Investigate importances of predictors.", "_____no_output_____" ] ], [ [ "rf1.feature_importances_", "_____no_output_____" ] ], [ [ "Evaluate performance through cross-validation.", "_____no_output_____" ] ], [ [ "mses = ms.cross_val_score(ensemble.RandomForestRegressor(n_estimators=20),\n X, y, scoring='neg_mean_squared_error', cv=ten_fold_cv)\nnp.mean(-mses)", "_____no_output_____" ] ], [ [ "What happens when you increase the number of trees to 50?", "_____no_output_____" ] ], [ [ "mses = ms.cross_val_score(ensemble.RandomForestRegressor(n_estimators=50),\n X, y, scoring='neg_mean_squared_error', cv=ten_fold_cv)\nnp.mean(-mses)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dd025576bfc2c75c82765a317983242f42e5d7
3,040
ipynb
Jupyter Notebook
Text Processing/normalization_practice.ipynb
maitreytalware/Natural-Language-Processing-Pipelines
e434c6b524eb5cb199a870b6f1ece442df8a9cb4
[ "MIT" ]
1
2020-04-23T00:07:31.000Z
2020-04-23T00:07:31.000Z
Text Processing/normalization_practice.ipynb
maitreytalware/Natural-Language-Processing-Pipelines
e434c6b524eb5cb199a870b6f1ece442df8a9cb4
[ "MIT" ]
null
null
null
Text Processing/normalization_practice.ipynb
maitreytalware/Natural-Language-Processing-Pipelines
e434c6b524eb5cb199a870b6f1ece442df8a9cb4
[ "MIT" ]
null
null
null
26.902655
252
0.586513
[ [ [ "# Normalization\nUse what you've learned to normalize case in the following text and remove punctuation!\n", "_____no_output_____" ] ], [ [ "text = \"The first time you see The Second Renaissance it may look boring. Look at it at least twice and definitely watch part 2. It will change your view of the matrix. Are the human people the ones who started the war ? Is AI a bad thing ?\"\nprint(text)", "The first time you see The Second Renaissance it may look boring. Look at it at least twice and definitely watch part 2. It will change your view of the matrix. Are the human people the ones who started the war ? Is AI a bad thing ?\n" ] ], [ [ "### Case Normalization", "_____no_output_____" ] ], [ [ "# Convert to lowercase\ntext = text.lower()\nprint(text)", "the first time you see the second renaissance it may look boring. look at it at least twice and definitely watch part 2. it will change your view of the matrix. are the human people the ones who started the war ? is ai a bad thing ?\n" ] ], [ [ "### Punctuation Removal\nUse the `re` library to remove punctuation with a regular expression (regex). Feel free to refer back to the video or Google to get your regular expression. You can learn more about regex [here](https://docs.python.org/3/howto/regex.html).", "_____no_output_____" ] ], [ [ "# Remove punctuation characters\nimport re\ntext = re.sub(r\"[^a-zA-Z0-9]\",\" \",text)\nprint(text)", "the first time you see the second renaissance it may look boring look at it at least twice and definitely watch part 2 it will change your view of the matrix are the human people the ones who started the war is ai a bad thing \n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dd0d2ef6ec17aa749633f3f884daeb3e877f8c
31,736
ipynb
Jupyter Notebook
notebooks/feature-engineering/Section-04-Missing-Data-Imputation/04.18-End-Tail-Imputation-Feature-Engine.ipynb
sophiabrandt/udemy-feature-engineering
739a8a58ecdb86b3008a374bf7645d7dbfbfcb46
[ "BSD-3-Clause" ]
null
null
null
notebooks/feature-engineering/Section-04-Missing-Data-Imputation/04.18-End-Tail-Imputation-Feature-Engine.ipynb
sophiabrandt/udemy-feature-engineering
739a8a58ecdb86b3008a374bf7645d7dbfbfcb46
[ "BSD-3-Clause" ]
null
null
null
notebooks/feature-engineering/Section-04-Missing-Data-Imputation/04.18-End-Tail-Imputation-Feature-Engine.ipynb
sophiabrandt/udemy-feature-engineering
739a8a58ecdb86b3008a374bf7645d7dbfbfcb46
[ "BSD-3-Clause" ]
null
null
null
40.635083
11,616
0.648223
[ [ [ "## End of distribution Imputation ==> Feature-Engine\n\n\n### What is Feature-Engine\n\nFeature-Engine is an open source python package that I created at the back of this course. \n\n- Feature-Engine includes all the feature engineering techniques described in the course\n- Feature-Engine works like to Scikit-learn, so it is easy to learn\n- Feature-Engine allows you to implement specific engineering steps to specific feature subsets\n- Feature-Engine can be integrated with the Scikit-learn pipeline allowing for smooth model building\n- \n**Feature-Engine allows you to design and store a feature engineering pipeline with bespoke procedures for different variable groups.**\n\n-------------------------------------------------------------------\nFeature-Engine can be installed via pip ==> pip install feature-engine\n\n- Make sure you have installed feature-engine before running this notebook\n\nFor more information visit:\nmy website\n\n## In this demo\n\nWe will use Feature-Engine to perform mean or median imputation using the Ames House Price Dataset.\n\n- To download the dataset visit the lecture **Datasets** in **Section 1** of the course.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n\n# to split the datasets\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import Pipeline\n\n# from feature-engine\nfrom feature_engine import missing_data_imputers as mdi", "_____no_output_____" ], [ "# let's load the dataset with a selected group of variables\n\ncols_to_use = [\n 'BsmtQual', 'FireplaceQu', 'LotFrontage', 'MasVnrArea', 'GarageYrBlt',\n 'SalePrice'\n]\n\ndata = pd.read_csv('../houseprice.csv', usecols=cols_to_use)\ndata.head()", "_____no_output_____" ], [ "data.isnull().mean()", "_____no_output_____" ] ], [ [ "All the predictor variables contain missing data.", "_____no_output_____" ] ], [ [ "# let's separate into training and testing set\n\n# first drop the target from the feature list\ncols_to_use.remove('SalePrice')\n\nX_train, X_test, y_train, y_test = train_test_split(data[cols_to_use],\n data['SalePrice'],\n test_size=0.3,\n random_state=0)\nX_train.shape, X_test.shape", "_____no_output_____" ] ], [ [ "### Feature-Engine captures the numerical variables automatically", "_____no_output_____" ] ], [ [ "# we call the imputer from feature-engine\n\n# we specify whether we want to find the values using\n# the gaussian approximation or the inter-quantal range\n# proximity rule.\n\n# in addition we need to specify if we want the values placed at \n# the left or right tail\n\nimputer = mdi.EndTailImputer(distribution='gaussian', tail='right')", "_____no_output_____" ], [ "# we fit the imputer\n\nimputer.fit(X_train)", "_____no_output_____" ], [ "# we see that the imputer found the numerical variables to\n# impute with the end of distribution value\n\nimputer.variables", "_____no_output_____" ], [ "# here we can see the values that will be used\n# to replace NA for each variable\n\nimputer.imputer_dict_", "_____no_output_____" ], [ "# and this is how those values were calculated\n# which is how we learnt in the first notebooks of\n# this section\n\nX_train[imputer.variables].mean() + 3 * X_train[imputer.variables].std()", "_____no_output_____" ], [ "# feature-engine returns a dataframe\n\ntmp = imputer.transform(X_train)\ntmp.head()", "_____no_output_____" ], [ "# let's check that the numerical variables don't\n# contain NA any more\n\ntmp[imputer.variables].isnull().mean()", 
"_____no_output_____" ] ], [ [ "## Feature-engine allows you to specify variable groups easily", "_____no_output_____" ] ], [ [ "# let's do it imputation but this time\n# and let's do it over 2 of the 3 numerival variables\n\n# let's also select the proximity rule on the left tail\n\nimputer = mdi.EndTailImputer(distribution='skewed', tail='left',\n variables=['LotFrontage', 'MasVnrArea'])\n\nimputer.fit(X_train)", "_____no_output_____" ], [ "# now the imputer uses only the variables we indicated\n\nimputer.variables", "_____no_output_____" ], [ "# and we can see the value assigned to each variable\nimputer.imputer_dict_", "_____no_output_____" ], [ "# feature-engine returns a dataframe\n\ntmp = imputer.transform(X_train)\n\n# let's check null values are gone\ntmp[imputer.variables].isnull().mean()", "_____no_output_____" ] ], [ [ "## Feature-engine can be used with the Scikit-learn pipeline", "_____no_output_____" ] ], [ [ "# let's look at the distributions to determine the\n# end tail value selection method\n\nX_train.hist()", "_____no_output_____" ] ], [ [ "All variables are skewed. For this demo, I will use the proximity rule for GarageYrBlt and MasVnrArea, and the Gaussian approximation for LotFrontage.", "_____no_output_____" ] ], [ [ "pipe = Pipeline([\n ('imputer_skewed', mdi.EndTailImputer(distribution='skewed', tail='right',\n variables=['GarageYrBlt', 'MasVnrArea'])),\n\n ('imputer_gaussian', mdi.EndTailImputer(distribution='gaussian', tail='right',\n variables=['LotFrontage'])),\n])", "_____no_output_____" ], [ "pipe.fit(X_train)", "_____no_output_____" ], [ "pipe.named_steps['imputer_skewed'].imputer_dict_", "_____no_output_____" ], [ "pipe.named_steps['imputer_gaussian'].imputer_dict_", "_____no_output_____" ], [ "# let's transform the data with the pipeline\ntmp = pipe.transform(X_train)\n\n# let's check null values are gone\ntmp.isnull().mean()", "_____no_output_____" ] ], [ [ "There are no more null values for the 3 imputed numerical variables.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7dd309c56a9f1528fd8b4e04fe34a2de1db417c
4,544
ipynb
Jupyter Notebook
Python/Jupyter_Notebook/Pandas_Series.ipynb
SergeyOcheretenko/PythonLearning
2f84d730b00931b33188a2a831977f385b063c6c
[ "MIT" ]
null
null
null
Python/Jupyter_Notebook/Pandas_Series.ipynb
SergeyOcheretenko/PythonLearning
2f84d730b00931b33188a2a831977f385b063c6c
[ "MIT" ]
null
null
null
Python/Jupyter_Notebook/Pandas_Series.ipynb
SergeyOcheretenko/PythonLearning
2f84d730b00931b33188a2a831977f385b063c6c
[ "MIT" ]
null
null
null
17.819608
92
0.43904
[ [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "letters = ['a', 'b', 'c']\nnumbers = [1, 2, 3]\nnp_arr = np.array(numbers)\ndict = {'a': 1, 'b': 2, 'c': 3}", "_____no_output_____" ], [ "pd.Series(data = numbers)", "_____no_output_____" ], [ "pd.Series(numbers, letters) # numbers are data, letters are indexes", "_____no_output_____" ], [ "print(letters)\nprint(numbers)\nprint(np_arr)\nprint(dict)", "['a', 'b', 'c']\n[1, 2, 3]\n[1 2 3]\n{'a': 1, 'b': 2, 'c': 3}\n" ], [ "pd.Series(dict)", "_____no_output_____" ], [ "pd.Series(np_arr, numbers)", "_____no_output_____" ], [ "life_long_average = pd.Series([84.7, 84.5, 83.7], ['Hong Kong', 'Japan', 'Singapore'])", "_____no_output_____" ], [ "life_long_average", "_____no_output_____" ], [ "life_long_average['Hong Kong']", "_____no_output_____" ], [ "new_life_long_average = pd.Series([84.7, 84.5, 83.7], ['USA', 'Japan', 'Singapore'])", "_____no_output_____" ], [ "new_life_long_average + life_long_average", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7dd459a067e2a3d1a62da37230c4f71abe0be63
22,262
ipynb
Jupyter Notebook
notebooks/numpyro_intro.ipynb
khanshehjad/pyprobml
f2625bf39822e1cf49fe018e9083ec25ff191a0c
[ "MIT" ]
2
2021-02-26T04:36:10.000Z
2021-02-26T04:36:24.000Z
notebooks/numpyro_intro.ipynb
khanshehjad/pyprobml
f2625bf39822e1cf49fe018e9083ec25ff191a0c
[ "MIT" ]
1
2021-04-19T12:25:26.000Z
2021-04-19T12:25:26.000Z
notebooks/numpyro_intro.ipynb
khanshehjad/pyprobml
f2625bf39822e1cf49fe018e9083ec25ff191a0c
[ "MIT" ]
1
2021-08-22T16:22:02.000Z
2021-08-22T16:22:02.000Z
32.07781
461
0.472105
[ [ [ "<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/bayes_stats/numpyro_intro.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "[NumPyro](https://github.com/pyro-ppl/numpyro) is probabilistic programming language built on top of JAX. It is very similar to [Pyro](https://pyro.ai/), which is built on top of PyTorch, but [tends to be faster](https://stackoverflow.com/questions/61846620/numpyro-vs-pyro-why-is-former-100x-faster-and-when-should-i-use-the-latter). (Both Pyro flavors are usually also [faster than PyMc3](https://www.kaggle.com/s903124/numpyro-speed-benchmark).)\n\nThis colab gives a brief introduction (WIP).", "_____no_output_____" ], [ "# Installation", "_____no_output_____" ] ], [ [ "# Standard Python libraries\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nimport time\n#import numpy as np\n#np.set_printoptions(precision=3)\nimport glob\nimport matplotlib.pyplot as plt\nimport PIL\nimport imageio\n\nfrom IPython import display\n%matplotlib inline\n\nimport sklearn\n\nimport seaborn as sns;\nsns.set(style=\"ticks\", color_codes=True)\n\nimport pandas as pd\npd.set_option('precision', 2) # 2 decimal places\npd.set_option('display.max_rows', 20)\npd.set_option('display.max_columns', 30)\npd.set_option('display.width', 100) # wide windows", "_____no_output_____" ], [ "import jax\nimport jax.numpy as np\nimport numpy as onp # original numpy\n\nprint(\"jax version {}\".format(jax.__version__))\nprint(\"jax backend {}\".format(jax.lib.xla_bridge.get_backend().platform))", "jax version 0.2.7\njax backend gpu\n" ], [ "# https://github.com/pyro-ppl/numpyro\n!pip install numpyro\n\n# It seems that numpyro installs jaxlib for CPU\n#https://github.com/pyro-ppl/numpyro/issues/531", "Collecting numpyro\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/35/f1/7bada66676245f9e085b870b1051ba183b377af287002e10a2e1bea1b498/numpyro-0.4.1-py3-none-any.whl (176kB)\n\u001b[K |████████████████████████████████| 184kB 8.6MB/s \n\u001b[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from numpyro) (4.41.1)\nCollecting jax==0.2.3\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d7/b2/738298445cb0d9445e84f58f1fdaf73aa7b1d4199e6360620461d6fe3a8b/jax-0.2.3.tar.gz (473kB)\n\u001b[K |████████████████████████████████| 481kB 12.4MB/s \n\u001b[?25hCollecting jaxlib==0.1.56\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/aa/44/16d06ee6418ae1b020b0722f7b7465baa08031a85728392e5413dd4e3e04/jaxlib-0.1.56-cp36-none-manylinux2010_x86_64.whl (32.1MB)\n\u001b[K |████████████████████████████████| 32.1MB 111kB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.12 in /usr/local/lib/python3.6/dist-packages (from jax==0.2.3->numpyro) (1.19.4)\nRequirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from jax==0.2.3->numpyro) (0.10.0)\nRequirement already satisfied: opt_einsum in /usr/local/lib/python3.6/dist-packages (from jax==0.2.3->numpyro) (3.3.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from jaxlib==0.1.56->numpyro) (1.4.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from absl-py->jax==0.2.3->numpyro) (1.15.0)\nBuilding wheels for collected packages: jax\n Building wheel for jax (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for jax: filename=jax-0.2.3-cp36-none-any.whl size=542178 sha256=f258c0d1f96711cc0b308e64517b4d916ae57c44003c7a217fc8b6cf71fdccd8\n Stored in directory: /root/.cache/pip/wheels/12/30/5d/24b5503a9bbf06bdd0d57bd20a87ef56125581b862731e4a2d\nSuccessfully built jax\nInstalling collected packages: jax, jaxlib, numpyro\n Found existing installation: jax 0.2.7\n Uninstalling jax-0.2.7:\n Successfully uninstalled jax-0.2.7\n Found existing installation: jaxlib 0.1.57+cuda101\n Uninstalling jaxlib-0.1.57+cuda101:\n Successfully uninstalled jaxlib-0.1.57+cuda101\nSuccessfully installed jax-0.2.3 jaxlib-0.1.56 numpyro-0.4.1\n" ], [ "import jax\nimport jax.numpy as np\nimport numpy as onp # original numpy\nfrom jax import random\n\nprint(\"jax version {}\".format(jax.__version__))\nprint(\"jax backend {}\".format(jax.lib.xla_bridge.get_backend().platform))", "jax version 0.2.7\njax backend gpu\n" ] ], [ [ "# Distributions", "_____no_output_____" ] ], [ [ "import numpyro\nimport numpyro.distributions as dist\nfrom numpyro.diagnostics import hpdi\nfrom numpyro.distributions.transforms import AffineTransform\nfrom numpyro.infer import MCMC, NUTS, Predictive\n\nrng_key = random.PRNGKey(0)\nrng_key, rng_key_ = random.split(rng_key)", "_____no_output_____" ] ], [ [ "## 1d Gaussian", "_____no_output_____" ] ], [ [ "# 2 independent 1d gaussians (ie 1 diagonal Gaussian)\nmu = 1.5\nsigma = 2\nd = dist.Normal(mu, sigma)\ndir(d)", "_____no_output_____" ], [ "rng_key, rng_key_ = random.split(rng_key)\nnsamples = 1000\nys = d.sample(rng_key_, (nsamples,))\nprint(ys.shape)\nmu_hat = np.mean(ys,0)\nprint(mu_hat)\nsigma_hat = np.std(ys, 0)\nprint(sigma_hat)", "(1000,)\n1.5070927\n2.0493808\n" ] ], [ [ "## Multivariate Gaussian\n\n", "_____no_output_____" ] ], [ [ "mu = np.array([-1, 1])\nsigma = np.array([1, 2])\nSigma = np.diag(sigma)\nd2 = dist.MultivariateNormal(mu, Sigma)", "_____no_output_____" ], [ "#rng_key, rng_key_ = random.split(rng_key)\nnsamples = 1000\nys = d2.sample(rng_key_, (nsamples,))\nprint(ys.shape)\nmu_hat = np.mean(ys,0)\nprint(mu_hat)\nSigma_hat = np.cov(ys, rowvar=False) #jax.np.cov not implemented\nprint(Sigma_hat)", "(1000, 2)\n[-1.0127413 1.0091063]\n[[ 0.9770031 -0.00533966]\n [-0.00533966 1.9718108 ]]\n" ] ], [ [ "## Shape semantics\n\nNumpyro, [Pyro](https://pyro.ai/examples/tensor_shapes.html) and [TFP](https://www.tensorflow.org/probability/examples/Understanding_TensorFlow_Distributions_Shapes) all distinguish between 'event shape' and 'batch shape'.\nFor a D-dimensional Gaussian, the event shape is (D,), and the batch shape\nwill be (), meaning we have a single instance of this distribution.\nIf the covariance is diagonal, we can view this as D independent\n1d Gaussians, stored along the batch dimension; this will have event shape () but batch shape (2,). \n\nWhen we sample from a distribution, we also specify the sample_shape.\nSuppose we draw N samples from a single D-dim diagonal Gaussian,\nand N samples from D 1d Gaussians. 
These samples will have the same shape.\nHowever, the semantics of logprob differs.\nWe illustrate this below.\n", "_____no_output_____" ] ], [ [ "d2 = dist.MultivariateNormal(mu, Sigma)\nprint(f'event shape {d2.event_shape}, batch shape {d2.batch_shape}') \nnsamples = 3\nys2 = d2.sample(rng_key_, (nsamples,))\nprint('samples, shape {}'.format(ys2.shape))\nprint(ys2)\n\n# 2 independent 1d gaussians (same as one 2d diagonal Gaussian)\nd3 = dist.Normal(mu, np.diag(Sigma))\nprint(f'event shape {d3.event_shape}, batch shape {d3.batch_shape}') \nys3 = d3.sample(rng_key_, (nsamples,))\nprint('samples, shape {}'.format(ys3.shape))\nprint(ys3)\n\nprint(np.allclose(ys2, ys3))", "event shape (2,), batch shape ()\nsamples, shape (3, 2)\n[[-0.06819373 0.9942934 ]\n [-1.740325 -1.0183868 ]\n [ 0.05969942 2.314332 ]]\nevent shape (), batch shape (2,)\nsamples, shape (3, 2)\n[[-0.06819373 0.99192965]\n [-1.740325 -1.85443 ]\n [ 0.05969942 2.8587465 ]]\nFalse\n" ], [ "y = ys2[0,:] # 2 numbers\nprint(d2.log_prob(y)) # log prob of a single 2d distribution on 2d input \nprint(d3.log_prob(y)) # log prob of two 1d distributions on 2d input\n", "-2.6185904\n[-1.35307 -1.6120898]\n" ] ], [ [ "We can turn a set of independent distributions into a single product\ndistribution using the [Independent class](http://num.pyro.ai/en/stable/distributions.html#independent)\n", "_____no_output_____" ] ], [ [ "d4 = dist.Independent(d3, 1) # treat the first batch dimension as an event dimensions\nprint(d4.event_shape)\nprint(d4.batch_shape)\nprint(d4.log_prob(y))", "(2,)\n()\n-2.96516\n" ] ], [ [ "# Posterior inference with MCMC\n", "_____no_output_____" ], [ "## Example: 1d Gaussian with unknown mean.\n\nWe use the simple example from the [Pyro intro](https://pyro.ai/examples/intro_part_ii.html#A-Simple-Example). The goal is to infer the weight $\\theta$ of an object, given noisy measurements $y$. We assume the following model:\n$$\n\\begin{align}\n\\theta &\\sim N(\\mu=8.5, \\tau^2=1.0)\\\\ \ny \\sim &N(\\theta, \\sigma^2=0.75^2)\n\\end{align}\n$$\n\nWhere $\\mu=8.5$ is the initial guess. \n\nBy Bayes rule for Gaussians, we know that the exact posterior,\ngiven a single observation $y=9.5$, is given by\n\n\n$$\n\\begin{align}\n\\theta|y &\\sim N(m, s^s) \\\\\nm &=\\frac{\\sigma^2 \\mu + \\tau^2 y}{\\sigma^2 + \\tau^2} \n = \\frac{0.75^2 \\times 8.5 + 1 \\times 9.5}{0.75^2 + 1^2}\n = 9.14 \\\\\ns^2 &= \\frac{\\sigma^2 \\tau^2}{\\sigma^2 + \\tau^2} \n= \\frac{0.75^2 \\times 1^2}{0.75^2 + 1^2}= 0.6^2\n\\end{align}\n$$", "_____no_output_____" ] ], [ [ "mu = 8.5; tau = 1.0; sigma = 0.75; y = 9.5\nm = (sigma**2 * mu + tau**2 * y)/(sigma**2 + tau**2)\ns2 = (sigma**2 * tau**2)/(sigma**2 + tau**2)\ns = np.sqrt(s2)\nprint(m)\nprint(s)", "9.14\n0.6\n" ], [ "def model(prior_mean, prior_sd, obs_sd, measurement=None):\n theta = numpyro.sample(\"theta\", dist.Normal(prior_mean, prior_sd))\n return numpyro.sample(\"y\", dist.Normal(theta, obs_sd), obs=measurement)\n", "_____no_output_____" ], [ "nuts_kernel = NUTS(model)\nmcmc = MCMC(nuts_kernel, num_warmup=100, num_samples=1000)\nmcmc.run(rng_key_, mu, tau, sigma, y)\n\nmcmc.print_summary()\nsamples = mcmc.get_samples()\n \n", "sample: 100%|██████████| 1100/1100 [00:03<00:00, 286.64it/s, 3 steps of size 9.41e-01. acc. prob=0.91]\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
e7dd493f3cab47d570446e780ac641115de4772b
14,878
ipynb
Jupyter Notebook
docs/example_interactive.ipynb
wayneweiqiang/GMMA
305939f20e3219d5176a1642fe0476f99f51ee61
[ "MIT" ]
19
2021-07-13T02:33:50.000Z
2021-11-16T02:59:42.000Z
docs/example_interactive.ipynb
wayneweiqiang/GMMA
305939f20e3219d5176a1642fe0476f99f51ee61
[ "MIT" ]
null
null
null
docs/example_interactive.ipynb
wayneweiqiang/GMMA
305939f20e3219d5176a1642fe0476f99f51ee61
[ "MIT" ]
7
2021-05-22T01:48:53.000Z
2021-11-16T02:59:44.000Z
35.339667
582
0.455169
[ [ [ "# Interactive Example\n\n## 1. Run GaMMA in terminal or use QuakeFlow API\n\nNote: Please only use the QuakeFlow API for debugging and testing on small datasets. Do not run large jobs using the QuakeFlow API. The computational cost can be high for us.\n\n```bash\nuvicorn --app-dir=gamma app:app --reload --port 8001\n```", "_____no_output_____" ] ], [ [ "import requests\nimport json\nimport pandas as pd\nimport os", "_____no_output_____" ], [ "# GAMMA_API_URL = \"http://127.0.0.1:8001\"\nGAMMA_API_URL = \"http://gamma.quakeflow.com\"", "_____no_output_____" ] ], [ [ "## 2. Prepare test data\n\n- Download test data: PhaseNet picks of the 2019 Ridgecrest earthquake sequence\n1. picks file: picks.json\n2. station information: stations.csv\n3. events in SCSN catalog: events.csv\n4. config file: config.pkl\n\n```bash\nwget https://github.com/wayneweiqiang/GMMA/releases/download/test_data/test_data.zip\nunzip test_data.zip\n```", "_____no_output_____" ] ], [ [ "!wget https://github.com/wayneweiqiang/GMMA/releases/download/test_data/test_data.zip\n!unzip test_data.zip", "--2021-11-10 23:21:17-- https://github.com/wayneweiqiang/GMMA/releases/download/test_data/test_data.zip\nResolving github.com (github.com)... 192.30.255.112\nConnecting to github.com (github.com)|192.30.255.112|:443... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://objects.githubusercontent.com/github-production-release-asset-2e65be/317358544/7d880a00-e013-11eb-86c0-3358df7416e3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20211111%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211111T071937Z&X-Amz-Expires=300&X-Amz-Signature=49b5ea9c837e444918ff2d56f722909b41f8f3a6c77b2c57b8b62687b01f6767&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=317358544&response-content-disposition=attachment%3B%20filename%3Dtest_data.zip&response-content-type=application%2Foctet-stream [following]\n--2021-11-10 23:21:18-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/317358544/7d880a00-e013-11eb-86c0-3358df7416e3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20211111%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20211111T071937Z&X-Amz-Expires=300&X-Amz-Signature=49b5ea9c837e444918ff2d56f722909b41f8f3a6c77b2c57b8b62687b01f6767&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=317358544&response-content-disposition=attachment%3B%20filename%3Dtest_data.zip&response-content-type=application%2Foctet-stream\nResolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.111.133, ...\nConnecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.\nHTTP request sent, awaiting response... 
200 OK\nLength: 4634493 (4.4M) [application/octet-stream]\nSaving to: ‘test_data.zip’\n\ntest_data.zip 100%[===================>] 4.42M 13.0MB/s in 0.3s \n\n2021-11-10 23:21:19 (13.0 MB/s) - ‘test_data.zip’ saved [4634493/4634493]\n\nArchive: test_data.zip\n creating: test_data/\n inflating: test_data/picks.json \n inflating: test_data/catalog_gmma.csv \n inflating: test_data/config.pkl \n inflating: test_data/picks_gmma.csv \n inflating: test_data/stations.csv \n inflating: test_data/events.csv \n" ], [ "data_dir = lambda x: os.path.join(\"test_data\", x)\nstation_csv = data_dir(\"stations.csv\")\npick_json = data_dir(\"picks.json\")\ncatalog_csv = data_dir(\"catalog_gamma.csv\")\npicks_csv = data_dir(\"picks_gamma.csv\")\nif not os.path.exists(\"figures\"):\n os.makedirs(\"figures\")\nfigure_dir = lambda x: os.path.join(\"figures\", x)\n\n## set config\nconfig = {'xlim_degree': [-118.004, -117.004], \n 'ylim_degree': [35.205, 36.205],\n 'z(km)': [0, 41]}\n\n## read stations\nstations = pd.read_csv(station_csv, delimiter=\"\\t\")\nstations = stations.rename(columns={\"station\":\"id\"})\nstations_json = json.loads(stations.to_json(orient=\"records\"))\n\n## read picks\npicks = pd.read_json(pick_json).iloc[:500]\npicks[\"timestamp\"] = picks[\"timestamp\"].apply(lambda x: x.strftime(\"%Y-%m-%dT%H:%M:%S.%f\")[:-3])\npicks_json = json.loads(picks.to_json(orient=\"records\"))\n\n## run association\nresult = requests.post(f\"{GAMMA_API_URL}/predict\", json= {\n \"picks\":picks_json, \n \"stations\":stations_json,\n \"config\": config\n })\n\nresult = result.json()\ncatalog_gamma = json.loads(result[\"catalog\"])\npicks_gamma = json.loads(result[\"picks\"])\n\n## show result\nprint(\"GaMMA catalog:\")\ndisplay(pd.DataFrame(catalog_gamma)[[\"time\", \"latitude\", \"longitude\", \"depth(m)\", \"magnitude\", \"covariance\"]])\nprint(\"GaMMA association:\")\ndisplay(pd.DataFrame(picks_gamma))", "GaMMA catalog:\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7dd59baa4463437f9721697a74cd2720d02e233
11,493
ipynb
Jupyter Notebook
doc/Rob-0sterburg-Proposal.ipynb
robOcity/wakeful
e3a50649e6208da28feea2fe402f119b0223293d
[ "MIT" ]
null
null
null
doc/Rob-0sterburg-Proposal.ipynb
robOcity/wakeful
e3a50649e6208da28feea2fe402f119b0223293d
[ "MIT" ]
null
null
null
doc/Rob-0sterburg-Proposal.ipynb
robOcity/wakeful
e3a50649e6208da28feea2fe402f119b0223293d
[ "MIT" ]
null
null
null
57.179104
705
0.683024
[ [ [ "# NetBAT - Network Behavioral Analytics Tool\n\nRob Osterburg, Galvanize Data Science Immersive Project Proposal", "_____no_output_____" ], [ "**Abstract**: *DNS is an essential service for internet users, and so it is always available for both good uses and bad. DNS enables users to find their favorite web sites and allows attackers to control their malware and steal information assets. Detecting these malicious uses is what my project is all about.* ", "_____no_output_____" ], [ "### Motivation\n\nIncident response is what happens after a security breach, and not surprisingly is a growing sector of the information security business. Mandiant is a leading incident response company and has resolved many breaches including: Equifax, the Clinton Campaign, and Target. They report that **attackers in the U.S. go undetected for more than 3 months on average**. During this time attackers need to maintain a presence inside the compromised network and to ship data out. All this traffic must go through whatever protections the organization has in place. My project focuses on detecting these signals before the objective of the attack is obtained. \n\n![Idealized attack against a retail target with malware rendered in red.](/Users/rob/Google_Drive/Datascience/Galvanize/Project/wakeful/images/retail_attack_malware_deployment.png)\n\nMIT AI2 system is the for inspiration for my project. \n\nIn the *The Innovators*, Walter Isaacson asks:\n\n>\"is it possible that humans and machines working in partnership will be indefinitely more powerful than an artificial intelligence machine working alone?” \n\nI believe the answer it **yes** and MIT is currently applying this idea to network security. Their AI2 system uses unsupervised learning to make recommendations to an analyst who labels the events as either normal or attack. A supervised learning algorithm then uses the labeled data to improve the selection of future analyst alerts.\n\n[MIT AI2 with analyst empowered by machine learning](http://news.mit.edu/2016/ai-system-predicts-85-percent-cyber-attacks-using-input-human-experts-0418). \n\n![MIT AI^2 system](/Users/rob/Google_Drive/Datascience/Galvanize/Project/wakeful/images/overall_solution_context.png)\n", "_____no_output_____" ], [ "### Data\n[Security Onion](https://securityonion.net) is a distribution of Linux focused on network monitoring. Developed and maintained by the incident responders at Mandiant, SO includes tools to gather and analyze network traffic, and one in particular - [Bro Security Network Monitor (BSNM)](https://www.bro.org/) - is perfect for feature engineering. BSNM understands network protocols and produces log files for each. \n\nI was unable to find a set of labeled data to use for this project. So, I decided to gather data from my own home network, and use it to represent normal data. After a couple of weeks, I now have ~40,000 DNS and ~80,000 connection log entries. Beyond simply logging the traffic SO gives you means to investigate events using both beohavior-based ([BSNM](https://www.bro.org/)) and signature-based ([Snort](https://www.snort.org/)) detection tools. Investigating my own network, I found one computer that appeared to have malware and have since wiped and re-imaged the system. Now that my network appears to be free of malware, I think my plan to label its DNS traffic normal is reasonable. 
\n\nEric Conrad is a [SANS instructor](https://www.sans.org/instructors/eric-conrad/date/desc/) and the CTO for a security company recently gave a talk on how malware communicates with its command and control (C2) server using DNS tunneling. His [talk at Security Onion Con 2016](https://youtu.be/ViR405l-ggg) and his [related blog post](http://www.ericconrad.com/2016/09/c2-phone-home-leveraging-securityonion.html) includes links to BSNM logs for four different malicious uses of DNS including both tunneling and C2 communications and contain ~6,500 DNS and ~3,500 connection log entries. I plan on using these data as the basis for my attack data.\n\nI plan will have an EDA of these data sets completed by Monday morning Jan 8. \n\n\n#### Feature Engineering\n\n* Why the focus on DNS: DNS answers the question of what IP address has been assigned to a URL. By design, if a query can't answered locally it is forwarded to the root server for that top-level domain, and then recursively on down to an authoritative server. DNS is an essential service for any organization and is rarely monitored. Even from deep within a organizations network most systems have DNS access and the forwarding behavior of this protocol enables DNS packets to reach the internet. Just as the DNS response packets are let back in. This makes it a perfect communication channel for attackers. \n\n* Indicators of compromise -- Derived from the YouTube talks listed in the Citations Section.\n\n * DNS Protcol\n \n * Unusually long query strings\n \n * TXT, NULL and QUERY packets are used to transfer base-64 encoded data\n \n * NULL packets — used to transfer binary data\n \n * Large number of requests to hosts or subdomains\n \n * Length of time URL has been registered (some \"fast flux\" domains change the IPs they are associated with every ~150 seconds)\n \n * Rate of queries from a source IP address visits is much higher than average\n \n * False positives include Amazon URLs and others that use a hash as the subdomain\n \n * ICMP Protocol\n \n * Data portion differs from what the various OS ping implementations send\n \n * Packet size is large (i.e., greater than 200 or 400 bytes)\n \n * Rate is more rapid than once per second\n", "_____no_output_____" ], [ "### Minimum Viable Product\n\n* Extract data from the BNSM DNS and connection logs\n\n* Classify DNS packets as Normal, Attack and Uncertain \n\n* Assemble a set of reasonably representative data from the sources cited in the Data Section\n\n* Select a supervised model for classifying events as normal, uncertain or attack. Ideas: hierarchical model, random forest or gradient boosting\n\n* Prefer a model whose results will be informative to non-data-scientists\n\n#### MVP+\n\n* Metric to quantify how much better the model performs in comparison to blacklist, whitelist or simple rule-based approaches\n \n#### MVP++\n\n* Develop a similar model for the ICMP protocol", "_____no_output_____" ], [ "### Deliverables\n* Python code to process the logs and to model the data\n* Repository with the code, tests, example data, findings and a presentation\n", "_____no_output_____" ], [ "### Business Value\n\nBlacklisting and whitelisting are core practices to security and IT practitioners. The idea goes back to firewalls which must either pass a packet or block it. Anything on the whitelist is passed, while anything on the blacklist is blocked. 
By evaluating my project in comparison to a blacklist/whitelist approach, I hope to make its results accessible to professionals in an industry where I hope to be hired.\n\nHow can we apply the blacklist / whitelist idea to DNS attack traffic? If the security team at an organization uses a blacklist approach where they maintain a list of blocked URLs. In so doing, they give the attacker the advantage by allowing them to make small changes to domain names they use to avoid having their traffic from being blocked. Whitelists on the other hand also disadvantage the security team because in addition to finding the malicious traffic, they also produce a lot of false positives. Security teams tend to be lean because they are overhead expense to the organization that makes the impact of a whitelist approach is all the more.\n\nI believe that machine learning approach based on protocol-specific behavior provides value over both the blacklist and whitelist approaches, here is why:\n\n* Machine learning is better than the blacklist approach because it results in higher recall (i.e., fewer false negatives) by learning to detect and block similar malicious traffic with minimal human intervention.\n\n* Machine learning is also better than the whitelist approach by higher precision (i.e., fewer false positives) by learning to detect similar normal traffic. ", "_____no_output_____" ], [ "### Citations\n* [Security Onion 2016: C2 Phone Home - Eric Conrad](https://youtu.be/ViR405l-ggg)\n* [Chris McCubbin, Machine learning applied to Bro](https://youtu.be/ZV5Ckf9wLrc)\n* [Data Analysis, Machine Learning, Bro, and You! by Brian Wylie](https://youtu.be/pG5lU9CLnIU)\n* [BNSM DNS Log Documentation](https://www.bro.org/sphinx/scripts/base/protocols/dns/main.bro.html#type-DNS::Info)\n* [BNSM ICMP Log Documentation](https://www.bro.org/sphinx/scripts/base/bif/plugins/Bro_ICMP.events.bif.bro.html)\n* [BNSM Conn Log Documentation](https://www.bro.org/sphinx/scripts/base/protocols/conn/main.bro.html#type-Conn::Info)\n", "_____no_output_____" ], [ "#### DNS Log\n\n![DNS Log Fields](/Users/rob/Google_Drive/Datascience/Galvanize/Project/wakeful/images/dns-log-fields.png)\n\n![DNS Log Example](/Users/rob/Google_Drive/Datascience/Galvanize/Project/wakeful/images/dns-log-example.png)\n\n#### Connection Log\n\n![Conn Log Fields](/Users/rob/Google_Drive/Datascience/Galvanize/Project/wakeful/images/conn-log-fields.png)\n\n![Conn Log Example](/Users/rob/Google_Drive/Datascience/Galvanize/Project/wakeful/images/conn-log-example.png)\n \nNote: These logs can be joined using the connection ID. \n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7dd5b1e9b4d7b0c62895990195c9d2f81818438
138,890
ipynb
Jupyter Notebook
week_03.ipynb
HUFS-Programming-2022/JongbeenSong_202001862
769094558c19d41527c8b8e6b396c716932bc457
[ "MIT" ]
null
null
null
week_03.ipynb
HUFS-Programming-2022/JongbeenSong_202001862
769094558c19d41527c8b8e6b396c716932bc457
[ "MIT" ]
null
null
null
week_03.ipynb
HUFS-Programming-2022/JongbeenSong_202001862
769094558c19d41527c8b8e6b396c716932bc457
[ "MIT" ]
null
null
null
211.079027
104,988
0.859731
[ [ [ "### 중첩 조건문 nested conditional\n쓰지 않는 것이 좋음\nif 블럭안에 또 다른 if 블럭이 있는 것", "_____no_output_____" ] ], [ [ "# 중첩 조건문 활용 실습\n\ninfo = input('input your name, phone number, address, sex: ')\ninfo_list = info.split(', ')\n\nif info_list[0][0] == '박':\n if info_list[1][0:3] == '010':\n if info_list[2] == '서울':\n if info_list[3] == '남성':\n print('우리가 찾던 사람입니다.')\n else:\n print('성별이 다릅니다.')\n else:\n print('주소지가 다릅니다.')\n else:\n print('전화번호가 다릅니다.')\nelse:\n print('성이 다릅니다.')", "input your name, phone number, address, sex: 박찬호, 01011234567, 서울, 남성\n우리가 찾던 사람입니다.\n" ] ], [ [ "### 논리연산자\n비교연산자가 여러번 사용될때 활용함\na < 0 < b와 같은 형태는 파이썬에서만 사용 가능", "_____no_output_____" ] ], [ [ "a = True\n\nif a == True:\n print('이렇게 쓰지 말것. 틀린 표현')\nif a:\n print('이렇게 써야만 함.')", "이렇게 쓰지 말것. 틀린 표현\n이렇게 써야만 함.\n" ], [ "fruit = ['banana', 'apple', 'pear', 'berry']\n\nanswer = input('what is your favorite fruit?: ')\nif answer in fruit:\n print('we have your favorite food!')\nelse:\n print('we do not have your favorite food!')\n option = input('would you like to add your favorite food? [y/n]: ')\n if option == 'y':\n fruit.append(answer)\n print(f'now we have {fruit} in our list')", "what is your favorite fruit?: strawberry\nwe do not have your favorite food!\nwould you like to add your favorite food? [y/n]: y\nnow we have ['banana', 'apple', 'pear', 'berry', 'strawberry'] in our list\n" ] ], [ [ "### 바다 코끼리 연산자\n대입연산자 + 표현식을 만들어냄\n배운 다른 내용들과 달리 저도 많이 써보지 않아 외부 자료들을 보며 조금 더 자세히 공부하였고, 단순 활용형 실습이 아니라 개념적인 부분도 필기하였습니다.", "_____no_output_____" ] ], [ [ "# 파이썬의 기본 이념상 한 줄에 하나의 의미만 담겨야만 함.\n\"\"\"\nprint(student = '철수') << 오류가 발생하게 됨.\n대신에,\nstudent = '철수'\nprint(student) << 이런 식으로 작성하거나, 바다 코끼리 연산자를 활용해야함.\n\"\"\"\nprint(student := '철수')", "철수\n" ], [ "while s := input('input: '):\n if s == 'quit':\n break\n else:\n print('output: ' + s)\n \nprint('program ended')", "input: hello\noutput: hello\ninput: quit\nprogram ended\n" ] ], [ [ "### String\n문자열", "_____no_output_____" ] ], [ [ "# !pip install nltk => 터미널로 패키지 설치하는 코드", "_____no_output_____" ], [ "import nltk\nnltk.download('book', quiet=True)", "_____no_output_____" ], [ "from nltk import book", "*** Introductory Examples for the NLTK Book ***\nLoading text1, ..., text9 and sent1, ..., sent9\nType the name of the text or sentence to view it.\nType: 'texts()' or 'sents()' to list the materials.\ntext1: Moby Dick by Herman Melville 1851\ntext2: Sense and Sensibility by Jane Austen 1811\ntext3: The Book of Genesis\ntext4: Inaugural Address Corpus\ntext5: Chat Corpus\ntext6: Monty Python and the Holy Grail\ntext7: Wall Street Journal\ntext8: Personals Corpus\ntext9: The Man Who Was Thursday by G . K . 
Chesterton 1908\n" ] ], [ [ "#### String 및 nltk 실습", "_____no_output_____" ] ], [ [ "genesis = book.text3", "_____no_output_____" ], [ "genesis_tokens = genesis.tokens", "_____no_output_____" ], [ "len(genesis_tokens)", "_____no_output_____" ] ], [ [ "#### 추가 개인 실습", "_____no_output_____" ] ], [ [ "from wordcloud import WordCloud\nimport matplotlib.pyplot as plt\nfrom collections import Counter\nfrom PIL import Image\nimport numpy as np", "_____no_output_____" ], [ "from nltk.corpus import stopwords\nfrom nltk.tokenize import word_tokenize", "_____no_output_____" ], [ "stop_words = set(stopwords.words('english'))\nfiltered_text = [w for w in genesis_tokens if not w.lower() in stop_words]\nfiltered_text = []\n\nfor w in genesis_tokens:\n if w not in stop_words:\n filtered_text.append(w)", "_____no_output_____" ], [ "count = Counter(filtered_text)", "_____no_output_____" ], [ "wc = WordCloud(width=400, height=400, scale=2.0, max_font_size=250)\ngenerated_image = wc.generate_from_frequencies(count)\nplt.figure()\nplt.imshow(generated_image)", "_____no_output_____" ] ], [ [ "### Quiz 답안", "_____no_output_____" ] ], [ [ "thursday = book.text9", "_____no_output_____" ], [ "print(len(set(thursday.tokens)) / len(thursday.tokens))", "0.0983485761345412\n" ], [ "monty = book.text6", "_____no_output_____" ], [ "sorted(set(monty.tokens), reverse=True)[:10]", "_____no_output_____" ], [ "reversed_token = sorted(set(monty.tokens), reverse=True)", "_____no_output_____" ], [ "reversed_processed_token = []\nfor token in reversed_token:\n if 'z' in token:\n token = token.replace('z', 'Z')\n reversed_processed_token.append(token)\n else:\n if len(token) >=4 :\n token = token[:-1] + token[-1].upper()\n reversed_processed_token.append(token)\n ", "_____no_output_____" ], [ "print(reversed_processed_token)", "['Zoosh', 'Zoop', 'Zoo', 'Zone', 'Zhiv', 'yourselF', 'yourS', 'youR', 'younG', 'you', 'yet', 'yes', 'yelloW', 'yellinG', 'yel', 'yeaR', 'yeaH', 'ye', 'y', 'wronG', 'writinG', 'woundinG', 'woundeD', 'wounD', 'wouldN', 'woulD', 'worthY', 'worsT', 'worsE', 'worrY', 'worrieD', 'workinG', 'workerS', 'workeD', 'worK', 'wordS', 'worD', 'woosH', 'woodS', 'woodeN', 'wooD', 'wonderfuL', 'won', 'womeN', 'womaN', 'wom', 'witnesS', 'withouT', 'witH', 'witcheS', 'witcH', 'wisheS', 'wisE', 'wiperS', 'wipeR', 'winteR', 'wingS', 'windoW', 'winD', 'wilL', 'wielD', 'widE', 'wickeD', 'why', 'whosE', 'whoP', 'whoM', 'whoeveR', 'who', 'whisperinG', 'whinnY', 'whilE', 'whicH', 'whetheR', 'whereiN', 'wherE', 'wheN', 'whaT', 'wettinG', 'wet', 'werE', 'wenT', 'welL', 'welcomE', 'weighT', 'weighS', 'weeklY', 'weeK', 'wedlocK', 'weddinG', 'weatheR', 'weapoN', 'we', 'wayY', 'wayS', 'way', 'wavE', 'waterY', 'wateR', 'watcH', 'wastE', 'wasN', 'was', 'warT', 'warninG', 'warneD', 'warmeR', 'warM', 'war', 'wantS', 'wanT', 'wannA', 'walkinG', 'walK', 'waiT', 'w', 'vouchsafeD', 'votE', 'voluntarilY', 'vitaL', 'visuallY', 'violencE', 'viciouS', 'vestS', 'verY', 'verseS', 'velocitY', 've', 'varY', 'varletesseS', 'van', 'valoR', 'valleyS', 'valianT', 'vaiN', 'vachE', 'va', 'uuuP', 'uuggggggH', 'utterlY', 'usinG', 'useD', 'use', 'us', 'upoN', 'up', 'untiL', 'unsingablE', 'unpluggeD', 'unladeN', 'unioN', 'unhealthY', 'ungallanT', 'undressinG', 'underweaR', 'understandinG', 'understanD', 'undeR', 'uncloG', 'unarmeD', 'un', 'um', 'uhh', 'uh', 'uglY', 'u', 'typeS', 'twonG', 'two', 'twiN', 'twentY', 'twanG', 'turnS', 'turneD', 'try', 'trustY', 'trumpetS', 'trougH', 'troublE', 'tropicaL', 'triumphS', 'treE', 'treaT', 'travellerS', 'travelleR', 
'traininG', 'tragiC', 'tradE', 'tractS', 'towN', 'towardS', 'tougH', 'totallY', 'topS', 'tooK', 'too', 'tonighT', 'tolD', 'togetheR', 'todaY', 'to', 'tit', 'tireD', 'tinY', 'tindeR', 'timeS', 'timE', 'til', 'tie', 'thy', 'thwonK', 'thumP', 'thuD', 'throwinG', 'throughouT', 'througH', 'throaT', 'threW', 'threE', 'thoughT', 'thoU', 'thosE', 'thonK', 'thiS', 'thirtY', 'thirdS', 'thirD', 'thinK', 'thingS', 'thinG', 'thinE', 'theY', 'thesE', 'thereforE', 'therE', 'theN', 'theM', 'theiR', 'the', 'thaT', 'thankS', 'thanK', 'thaN', 'testicleS', 'tesT', 'terriblY', 'terriblE', 'temptresS', 'temptatioN', 'tempereD', 'temperatE', 'tellinG', 'telL', 'teetH', 'teaR', 'tea', 'tauntinG', 'taunT', 'tasK', 'tarT', 'tap', 'talK', 'talE', 'takinG', 'takeN', 'takE', 'taiL', 'tacklE', 'tablE', 't', 'systeM', 'syndicalisT', 'syndicalisM', 'sworN', 'swordS', 'sworD', 'sweeT', 'swamP', 'swallowS', 'swalloW', 'suspensefuL', 'surprisE', 'surE', 'supremE', 'supposeD', 'supposE', 'supportS', 'sun', 'summoN', 'suiT', 'suggestinG', 'sufficE', 'suffereD', 'suddenlY', 'sucH', 'successfuL', 'stupiD', 'stuffeD', 'strongesT', 'strinG', 'strewN', 'stretcheD', 'stresS', 'strengtH', 'streaK', 'strategY', 'strangerS', 'strangE', 'stranD', 'straighT', 'stopS', 'stoP', 'stooD', 'stonE', 'stilL', 'steW', 'stayeD', 'staY', 'starteD', 'starlinG', 'stanD', 'staB', 'squeaK', 'spookY', 'spongE', 'spokeN', 'spliT', 'splaT', 'splasH', 'spiriT', 'speeD', 'speciaL', 'speaK', 'spankinG', 'spankeD', 'spanK', 'spaM', 'spakE', 'sovereigN', 'soutH', 'sorT', 'sorrY', 'sooN', 'sonS', 'sonnY', 'sonG', 'son', 'somewherE', 'sometimeS', 'somethinG', 'someonE', 'somebodY', 'somE', 'soileD', 'sofT', 'sod', 'societY', 'sociaL', 'so', 'snufF', 'snowS', 'snorE', 'snifF', 'sneakinG', 'snaP', 'smelT', 'smashinG', 'smasheD', 'smalL', 'smacK', 'slothS', 'slightlY', 'slasH', 'sixteeN', 'sisteR', 'sireN', 'sirE', 'sir', 'sinK', 'singlE', 'singinG', 'sinG', 'sincE', 'simplE', 'sillY', 'silencE', 'signifyinG', 'sigN', 'sighT', 'sigH', 'sidE', 'shuT', 'shrubberY', 'shrubberieS', 'shrubbeR', 'showS', 'shoW', 'shoulD', 'shiverinG', 'shiT', 'shimmerinG', 'shelteR', 'sheeP', 'she', 'sharP', 'shapeD', 'shalT', 'shalL', 'sex', 'seveN', 'settleS', 'settinG', 'set', 'servanT', 'sequiN', 'separatE', 'senT', 'sensE', 'senD', 'selL', 'selF', 'seldoM', 'seeN', 'seemS', 'seemeD', 'seeM', 'seeK', 'see', 'seconD', 'searcH', 'scribblE', 'scratcH', 'scrapE', 'scotT', 'scotS', 'scorE', 'scimitaR', 'sciencE', 'scholaR', 'sceneS', 'scenE', 'scarpeR', 'scareD', 'scaleS', 'sayS', 'sayinG', 'say', 'sawwwwW', 'saw', 'saveD', 'sanK', 'samplE', 'samitE', 'samE', 'saiD', 'safetY', 'sad', 'sacrificE', 'sacreD', 's', 'runninG', 'runeS', 'run', 'ruffianS', 'rrrR', 'routineS', 'rounD', 'ropE', 'rooM', 'rodenT', 'rodE', 'rockS', 'rocK', 'roaR', 'risK', 'righT', 'ridinG', 'ridE', 'riddeN', 'ricH', 'rhymeS', 'rewR', 'returnS', 'returN', 'retreaT', 'retolD', 'resumeS', 'restinG', 'resT', 'rescuE', 'requireD', 'requieM', 'requesT', 'repressinG', 'represseD', 'removeD', 'remembereD', 'remembeR', 'remaiN', 'relicS', 'relaX', 'rejoicinG', 'regulationS', 'refusE', 'recoveR', 'reasonablE', 'reareD', 'reallY', 'reaL', 'readY', 'readS', 'reacheD', 're', 'ratioS', 'ratifieD', 'ratheR', 'rapeD', 'ran', 'raiseD', 'radiO', 'rabbiT', 'quitE', 'quieT', 'quicK', 'questS', 'questionS', 'questioN', 'quesT', 'quarreL', 'quacK', 'pweenG', 'put', 'pussY', 'pusH', 'purposE', 'puresT', 'purelY', 'purE', 'punishmenT', 'pulP', 'pulL', 'ptoO', 'proveD', 'protecT', 'properlY', 'progresS', 'profanE', 'proceeD', 'problemS', 
'probleM', 'privatE', 'previouS', 'prevenT', 'preservinG', 'presenT', 'presencE', 'praY', 'praM', 'praiseD', 'poweR', 'poundS', 'pounD', 'pondS', 'ponD', 'policE', 'pointY', 'poinT', 'ploveR', 'pleasE', 'plaN', 'plaiN', 'placE', 'pitcheD', 'pissinG', 'pimpleS', 'pikanG', 'pig', 'pestilencE', 'personS', 'personallY', 'persoN', 'perpetuatinG', 'perpetuateS', 'perioD', 'perilouS', 'periL', 'performancE', 'peoplE', 'penaltY', 'pen', 'peasanT', 'pay', 'pausE', 'patH', 'passinG', 'passeD', 'pasS', 'partS', 'particularlY', 'particulaR', 'pansY', 'packinG', 'pacK', 'p', 'ownS', 'own', 'owlI', 'oveR', 'outwiT', 'outsidE', 'outrageouS', 'outdoorS', 'outdateD', 'out', 'ourS', 'our', 'ouncE', 'oui', 'otheR', 'ordinarY', 'ordeR', 'orangutanS', 'oraL', 'or', 'operA', 'openinG', 'opeN', 'oooH', 'ooh', 'oo', 'onlY', 'oneS', 'one', 'oncE', 'on', 'old', 'oh', 'officeR', 'offensivE', 'off', 'of', 'occasioN', 'obviouslY', 'objecT', 'o', 'numbeR', 'now', 'nothinG', 'notE', 'not', 'nostrilS', 'nosE', 'nortH', 'nor', 'non', 'noisE', 'nobodY', 'no', 'nnnnniggetS', 'nnniggetS', 'nineteeN', 'ninepencE', 'ninE', 'nightfalL', 'nighT', 'niggetS', 'nicK', 'nicE', 'nibblE', 'ni', 'nexT', 'newT', 'new', 'neveR', 'nervouS', 'needS', 'neeD', 'necessarY', 'nearlY', 'neareR', 'neaR', 'naughtY', 'nastY', 'nameS', 'nameD', 'namE', 'n', 'mystiC', 'my', 'musT', 'musiC', 'mumblE', 'mud', 'mucH', 'movE', 'mounT', 'motheR', 'mosT', 'mortallY', 'morE', 'mooooooO', 'mooO', 'momenT', 'moisteneD', 'modeL', 'mistakE', 'misS', 'miserablE', 'minuteS', 'minutE', 'minstrelS', 'minE', 'milE', 'migratorY', 'migratE', 'mightiesT', 'mighT', 'middlE', 'met', 'mergeR', 'mercY', 'mer', 'men', 'meetinG', 'medievaL', 'medicaL', 'meanT', 'meaN', 'me', 'mayheM', 'mayesT', 'maybE', 'may', 'matteR', 'matE', 'masteR', 'masseS', 'masheD', 'martiN', 'marryinG', 'marrY', 'marrieD', 'manY', 'manneR', 'mangY', 'mangleD', 'mandatE', 'man', 'makinG', 'makeS', 'makE', 'majoritY', 'majoR', 'maintaiN', 'maiN', 'magnE', 'madE', 'mad', 'mac', 'm', 'lyinG', 'lungeD', 'luckY', 'lucK', 'lovelY', 'lot', 'losT', 'losE', 'lorD', 'looneY', 'lookS', 'lookinG', 'lookeD', 'looK', 'longeR', 'lonG', 'lonelY', 'logicallY', 'lobbesT', 'lobbeD', 'll', 'livinG', 'liveS', 'liveR', 'liveD', 'livE', 'littlE', 'listeN', 'linE', 'limbS', 'likE', 'lifE', 'lieS', 'liegE', 'lie', 'liaR', 'leveL', 'let', 'lesS', 'lengtH', 'legS', 'legendarY', 'legallY', 'leg', 'lefT', 'leavE', 'leasT', 'learninG', 'leapS', 'leaP', 'leadS', 'laurelS', 'laughinG', 'lateR', 'latE', 'lasT', 'largesT', 'largE', 'lapiN', 'languagE', 'lanD', 'lambS', 'laiR', 'ladY', 'ladS', 'ladieS', 'ladeN', 'lad', 'la', 'l', 'knowS', 'knowN', 'knoW', 'knockeD', 'knocK', 'knightS', 'knighT', 'kneW', 'kneeS', 'kneelinG', 'kneecapS', 'kingS', 'kingdoM', 'kinG', 'kinD', 'killS', 'killeR', 'killeD', 'kilL', 'kickeD', 'kicK', 'keeperS', 'keepeR', 'keeP', 'keeN', 'k', 'jusT', 'jumP', 'ju', 'joyfuL', 'jokeS', 'joiN', 'jam', 'j', 'its', 'it', 'isn', 'islandS', 'is', 'invinciblE', 'intO', 'internaL', 'intermissioN', 'interesteD', 'insidE', 'inherenT', 'influentiaL', 'inferioR', 'individuallY', 'indefatigablE', 'indeeD', 'in', 'imprisoneD', 'impersonatE', 'imperialisT', 'impeccablE', 'immediatelY', 'illustriouS', 'illegitimatE', 'ill', 'ignorE', 'if', 'idioM', 'identicaL', 'ideA', 'icy', 'i', 'husK', 'hundreD', 'humblE', 'hugE', 'howL', 'how', 'housE', 'hospitalitY', 'hospitaL', 'horsE', 'horrendouS', 'horN', 'hopelesS', 'hoo', 'honoreD', 'homE', 'holY', 'ho', 'hmm', 'hiyaaH', 'historY', 'his', 'himselF', 'him', 'hillS', 'higheR', 'higH', 
'hiddeN', 'herrinG', 'heroiC', 'herE', 'her', 'helpfuL', 'helP', 'hellO', 'helL', 'helD', 'heh', 'heeH', 'hee', 'hearT', 'hearD', 'heaR', 'headS', 'headofF', 'headeD', 'heaD', 'he', 'haw', 'havinG', 'haviN', 'haveN', 'havE', 'hat', 'hastE', 'hasT', 'hasN', 'has', 'harmlesS', 'happY', 'happenS', 'hanG', 'handsomE', 'handlE', 'handeD', 'hanD', 'hamsteR', 'ham', 'halveS', 'halL', 'halF', 'hadN', 'had', 'hackeD', 'haaA', 'ha', 'gurglE', 'guidinG', 'guideD', 'guestS', 'guesT', 'guardS', 'guardeD', 'guarD', 'grovelinG', 'groveL', 'gripS', 'griP', 'griN', 'grenadE', 'greaT', 'gravY', 'graiL', 'gra', 'governmenT', 'gougeD', 'got', 'goodeM', 'gooD', 'gonnA', 'gonE', 'goinG', 'goeS', 'go', 'glorY', 'glasS', 'glaD', 'giveN', 'givE', 'git', 'girL', 'gigglE', 'gettinG', 'get', 'gentlE', 'generaL', 'gay', 'gavE', 'gallantlY', 'gaineD', 'g', 'fwumP', 'furtheR', 'fulL', 'fruiT', 'froZen', 'frontaL', 'froM', 'frighteN', 'frienD', 'freedoM', 'fourtH', 'fouR', 'founD', 'fouL', 'foughT', 'forwarD', 'fortY', 'fortunE', 'fortH', 'formidablE', 'formeD', 'forgivE', 'forgeT', 'foresT', 'forceD', 'forcE', 'for', 'footworK', 'fooT', 'foolinG', 'fooD', 'folloW', 'folK', 'folD', 'foe', 'fly', 'floatS', 'flinT', 'flightS', 'flighT', 'flesH', 'fleD', 'fivE', 'firsT', 'firE', 'finesT', 'finE', 'findS', 'finD', 'filtH', 'filM', 'fighT', 'fiftY', 'ferocitY', 'felT', 'fellowS', 'felL', 'feinT', 'feeT', 'feeL', 'featherS', 'feasT', 'favoritE', 'favoR', 'fatheR', 'fataL', 'farT', 'farcicaL', 'far', 'falsE', 'falleN', 'faiR', 'faceD', 'facE', 'eyeS', 'exploitinG', 'explaiN', 'expensivE', 'expecT', 'executivE', 'excusE', 'excitinG', 'exceptinG', 'examplE', 'examinE', 'eviL', 'everythinG', 'everyonE', 'everY', 'eveR', 'eveN', 'ethereaL', 'etc', 'est', 'escapE', 'ere', 'er', 'entrancE', 'enterinG', 'entereD', 'enteR', 'enougH', 'enjoyinG', 'enemieS', 'end', 'enchanteR', 'emptY', 'employeD', 'emperoR', 'em', 'elsE', 'electriC', 'elderberrieS', 'elbowS', 'eitheR', 'eisrequieM', 'eis', 'eighT', 'ehh', 'eh', 'effecT', 'eet', 'economiC', 'eckY', 'eccentriC', 'eatS', 'eat', 'easY', 'easT', 'easilY', 'earthquakeS', 'eartH', 'eacH', 'e', 'dynamitE', 'dyinG', 'dutY', 'dunnO', 'dungeoN', 'dulL', 'ducK', 'dub', 'drinK', 'drillllL', 'dressinG', 'dresseR', 'dresseD', 'dresS', 'draW', 'dramatiC', 'dragginG', 'dowN', 'doubT', 'dorsaL', 'doorS', 'dooR', 'donkeY', 'donE', 'donaeiS', 'donA', 'don', 'dominE', 'doinG', 'dogS', 'dogmA', 'doesN', 'doeS', 'doctorS', 'do', 'distributinG', 'distresS', 'dishearteneD', 'discoverS', 'discovereD', 'dirtY', 'directioN', 'dinE', 'differenceS', 'dieD', 'die', 'didN', 'did', 'dictatorshiP', 'dictatinG', 'diaphragM', 'desigN', 'deriveS', 'depressinG', 'deparT', 'demanD', 'deliriouS', 'defeatoR', 'defeaT', 'deedS', 'decisioN', 'decideD', 'deatH', 'deaR', 'deaL', 'deaD', 'de', 'day', 'daughteR', 'darK', 'darinG', 'darE', 'dappY', 'dangerouS', 'dangeR', 'dancinG', 'dancE', 'dafT', 'dad', 'd', 'cut', 'curtainS', 'cryinG', 'cry', 'crueL', 'crosseD', 'crosS', 'cronE', 'creepeR', 'creeP', 'creaturE', 'creaK', 'crasH', 'covereD', 'coveR', 'courT', 'coursE', 'couragE', 'couplE', 'countrY', 'countinG', 'counT', 'couldN', 'coulD', 'cougH', 'cosT', 'copE', 'cop', 'convinceD', 'continuE', 'consulteD', 'considerablE', 'confusE', 'conclusionS', 'conclusioN', 'completelY', 'compareD', 'communE', 'committeD', 'commandS', 'commanD', 'cominG', 'comiN', 'comE', 'coloR', 'collectivE', 'coconutS', 'coconuT', 'clunK', 'cluE', 'closesT', 'cloP', 'clllanK', 'climeS', 'cleveR', 'cleaR', 'classeS', 'clasS', 'claP', 'clanK', 'clanG', 
'claD', 'clacK', 'chu', 'choseN', 'choruS', 'chorD', 'chopS', 'chickeninG', 'chickeneD', 'chesT', 'cheesY', 'chastitY', 'chargeD', 'chantinG', 'changeD', 'changE', 'chancE', 'certainlY', 'certaiN', 'ceremonY', 'cerealS', 'centurieS', 'cavE', 'causE', 'castlE', 'castanetS', 'casT', 'casE', 'carvinG', 'carveD', 'carvE', 'cartooN', 'carT', 'carryinG', 'carrY', 'carrieS', 'carrieD', 'carP', 'capitaL', 'cannoT', 'can', 'calleD', 'calL', 'cadeaU', 'c', 'by', 'buy', 'but', 'busY', 'businesS', 'bursT', 'burneD', 'burN', 'bunnY', 'bum', 'builT', 'builD', 'buggerinG', 'buggereD', 'buggeR', 'brusH', 'brunetteS', 'broughT', 'brokeN', 'bringinG', 'brinG', 'bridgeS', 'bridgekeepeR', 'bridgE', 'bridE', 'breatH', 'breakfasT', 'breadtH', 'bravesT', 'bravelY', 'bravE', 'braineD', 'braiN', 'boyS', 'bowS', 'bowelS', 'bottomS', 'bottoM', 'botheR', 'bosoM', 'booM', 'bonK', 'boneS', 'bonD', 'bolD', 'boiS', 'boinG', 'boiL', 'bodY', 'bloW', 'bloodY', 'blooD', 'blondeS', 'blessinG', 'blesS', 'bleedeR', 'bleeD', 'blankeT', 'bladderS', 'bitS', 'biterS', 'bitE', 'bitchinG', 'bit', 'biscuitS', 'birdS', 'birD', 'binT', 'bindinG', 'biggesT', 'big', 'bid', 'bickeR', 'bi', 'beyonD', 'betweeN', 'betteR', 'bet', 'besT', 'besidE', 'benT', 'bellS', 'beinG', 'beholD', 'behinD', 'behaviouR', 'beeN', 'bedS', 'bed', 'becomE', 'becausE', 'becamE', 'beautifuL', 'beaT', 'beacoN', 'be', 'batS', 'bathinG', 'bastardS', 'bastarD', 'basiS', 'basiC', 'bangiN', 'banG', 'banD', 'bananA', 'badgeR', 'bad', 'bacK', 'babY', 'baaaA', 'b', 'awhilE', 'awfullY', 'awaY', 'awaitS', 'awaaaY', 'awaaaaaY', 'avertinG', 'avengeD', 'auuuuuuuugH', 'autonomouS', 'automaticallY', 'autocracY', 'auntieS', 'auntiE', 'attenD', 'attacK', 'at', 'assisT', 'assaulT', 'askS', 'askinG', 'ask', 'asidE', 'as', 'art', 'arrowS', 'arrangE', 'arounD', 'armS', 'armoR', 'armeD', 'arm', 'arguE', 'areN', 'are', 'aquatiC', 'aptlY', 'approachinG', 'approachetH', 'appeasE', 'appearinG', 'apologisE', 'aparT', 'anywherE', 'anywaY', 'anythinG', 'anyonE', 'any', 'answerS', 'answeR', 'anotheR', 'animatoR', 'animaL', 'anginG', 'angelS', 'and', 'anchovieS', 'anarchO', 'an', 'amaZes', 'am', 'alwayS', 'althougH', 'alsO', 'alreadY', 'alonG', 'alofT', 'almosT', 'alloweD', 'all', 'alivE', 'alighT', 'alarM', 'air', 'ain', 'agreE', 'againsT', 'agaiN', 'afteR', 'afraiD', 'afooT', 'affairS', 'adversarY', 'advancinG', 'actuallY', 'actinG', 'act', 'accomplisheD', 'accompanieD', 'accenT', 'absolutelY', 'abouT', 'ablE', 'aaugH', 'aaggggH', 'aaaaH', 'aaaaaaH', 'a', ']', '[...', '[', 'ZooT', 'ZOOT', 'Yup', 'YouR', 'You', 'Yes', 'YeaH', 'YeaaH', 'YeaaaH', 'Yay', 'YappinG', 'Y', 'WoulD', 'WooD', 'Woa', 'WitH', 'WinteR', 'WinstoN', 'WilL', 'Why', 'WhoA', 'Who', 'WhicH', 'WherE', 'WheN', 'WhaT', 'WelL', 'WelcomE', 'We', 'WayY', 'WalK', 'WaiT', 'Waa', 'WOMAN', 'WITCH', 'WINSTON', 'WIFE', 'W', 'VictorY', 'VerY', 'VOICE', 'VILLAGERS', 'VILLAGER', 'Uuh', 'UugH', 'UtheR', 'Use', 'UntiL', 'UnfortunatelY', 'Un', 'Umm', 'UmhM', 'Um', 'Ulk', 'Uhh', 'Uh', 'U', 'Two', 'TwentY', 'Try', 'TruE', 'ToweR', 'TormenT', 'Too', 'TogetheR', 'TodaY', 'To', 'Tis', 'Tim', 'Til', 'Thy', 'ThursdaY', 'ThssS', 'ThroW', 'ThreE', 'ThppT', 'ThpppT', 'ThppppT', 'ThpppppT', 'ThoU', 'ThosE', 'ThiS', 'TheY', 'ThereforE', 'TherE', 'TheN', 'TheE', 'The', 'ThaT', 'ThanK', 'TelL', 'TalL', 'TalE', 'TablE', 'TIM', 'THE', 'SwamP', 'SurelY', 'SupremE', 'SupposinG', 'SummeR', 'StoP', 'SteadY', 'StaY', 'StanD', 'SprinG', 'SplendiD', 'SpeaK', 'SorrY', 'So', 'SkiP', 'Sir', 'SincE', 'SillY', 'SilencE', 'ShuT', 'ShrubberieS', 'ShrubbeR', 'Shh', 'She', 
'ShalL', 'SeeK', 'See', 'SchoolS', 'Say', 'SaxonS', 'SainT', 'SaiD', 'SUN', 'STUNNER', 'SOLDIER', 'SIR', 'SHRUBBER', 'SENTRY', 'SECOND', 'SCENE', 'S', 'RunninG', 'Run', 'RounD', 'RogeR', 'RobinsoN', 'RobiN', 'RiiighT', 'RighT', 'RiddeN', 'RhegeD', 'RemovE', 'RecentlY', 'ReallY', 'RatheR', 'ROGER', 'ROBIN', 'RIGHT', 'RANDOM', 'QuoI', 'QuitE', 'QuieT', 'QuicklY', 'QuicK', 'Put', 'PurE', 'PulL', 'PsalmS', 'ProvidencE', 'PrincesS', 'PrincE', 'PreparE', 'PracticE', 'PleasE', 'Pin', 'PigleT', 'Pie', 'PicturE', 'PeriL', 'PerhapS', 'PenG', 'PendragoN', 'PatsY', 'PackinG', 'PRISONER', 'PRINCESS', 'PRINCE', 'PIGLET', 'PERSON', 'PATSY', 'PARTY', 'Ow', 'OveR', 'Our', 'Oui', 'OtheR', 'OrdeR', 'Or', 'OpeN', 'OooooooH', 'OooohoohohooO', 'OooO', 'OooH', 'Ooh', 'One', 'OncE', 'On', 'OlfiN', 'Old', 'Ohh', 'Oh', 'Off', 'Of', 'OTHER', 'OLD', 'OFFICER', 'OF', 'O', 'Nu', 'Now', 'NothinG', 'Not', 'NonE', 'No', 'NinepencE', 'NinE', 'Ni', 'NeveR', 'NeeE', 'Nay', 'NadoR', 'NI', 'NARRATOR', 'N', 'My', 'MusT', 'Mud', 'MotheR', 'MosT', 'MorninG', 'MorE', 'MonsieuR', 'Mmm', 'MinE', 'MinD', 'MidgeT', 'MessagE', 'MerceA', 'MeanwhilE', 'MaynarD', 'May', 'Man', 'MakE', 'MONKS', 'MINSTREL', 'MIDGET', 'MIDDLE', 'MAYNARD', 'MASTER', 'MAN', 'LuckY', 'LorD', 'LookS', 'LooK', 'LoimbarD', 'ListeN', 'LikE', 'Lie', 'Let', 'LeavinG', 'LeaD', 'LaunceloT', 'LanceloT', 'LakE', 'LadY', 'LUCKY', 'LOVELY', 'LEFT', 'LAUNCELOT', 'KnightS', 'KnighT', 'KinG', 'KeeP', 'KNIGHTS', 'KNIGHT', 'KING', 'JusT', 'JosepH', 'JesuS', 'IveS', 'It', 'Isn', 'Is', 'In', 'IiiiveS', 'IiiiiveS', 'If', 'IesU', 'IdioM', 'INSPECTOR', 'I', 'Hyy', 'Hya', 'HuyaH', 'Huy', 'HurrY', 'Huh', 'How', 'HooraY', 'Hoo', 'HonestlY', 'HolY', 'HolD', 'Hoa', 'Ho', 'Hmm', 'Hm', 'HiyyA', 'HiyaH', 'HiyaaH', 'His', 'HimselF', 'HilL', 'Hic', 'Hey', 'HerE', 'HerberT', 'HelP', 'HellO', 'Heh', 'HeeE', 'Hee', 'He', 'Haw', 'HavE', 'HanG', 'HanD', 'HalT', 'HallO', 'Hah', 'Ha', 'HISTORIAN', 'HERBERT', 'HEADS', 'HEAD', 'Guy', 'GuardS', 'GrenadE', 'GreetingS', 'GreaT', 'GraiL', 'GorgE', 'GooD', 'God', 'Go', 'Get', 'GawaiN', 'GallahaD', 'GalahaD', 'GablE', 'GUESTS', 'GUEST', 'GUARDS', 'GUARD', 'GREEN', 'GOD', 'GIRLS', 'GALAHAD', 'FrencH', 'FranK', 'FrancE', 'FouR', 'FounD', 'ForwarD', 'ForgivE', 'For', 'FolloW', 'FivE', 'FirstlY', 'FirsT', 'FinE', 'FiendS', 'FetcheZ', 'FatheR', 'FarewelL', 'Far', 'FRENCH', 'FATHER', 'ExplaiN', 'ExcusE', 'ExcalibuR', 'ExactlY', 'EwinG', 'EverythinG', 'EverY', 'EveN', 'EuropeaN', 'EternaL', 'Erm', 'Ere', 'ErberT', 'EnglisH', 'EnglanD', 'EnchanteR', 'Eh', 'Eee', 'EctoR', 'EckY', 'ENCHANTER', 'DramaticallY', 'DragoN', 'Don', 'DoeS', 'DoctoR', 'Do', 'DivinE', 'Dis', 'DingO', 'DidN', 'Did', 'DenniS', 'DefeaT', 'DeatH', 'DappY', 'DIRECTOR', 'DINGO', 'DENNIS', 'DEAD', 'Cut', 'CrappeR', 'CourT', 'CoursE', 'CoulD', 'CornwalL', 'ConsulT', 'ConcordE', 'ComE', 'CleaR', 'ClarK', 'CideR', 'ChurcheS', 'ChrisT', 'ChoP', 'ChickennN', 'ChickeN', 'CherrieS', 'ChastE', 'ChargE', 'ChapteR', 'CastlE', 'CameloT', 'CamaaaaaarguE', 'CaerbannoG', 'CUSTOMER', 'CROWD', 'CRONE', 'CRASH', 'CRAPPER', 'CONCORDE', 'CHARACTERS', 'CHARACTER', 'CARTOON', 'CART', 'CAMERAMAN', 'C', 'By', 'But', 'BurN', 'BuilD', 'BrotheR', 'BritonS', 'BritaiN', 'BristoL', 'BrinG', 'BridgE', 'BreaD', 'BravesT', 'BravelY', 'BravE', 'BorS', 'BooK', 'BoneS', 'Bon', 'BluE', 'BloodY', 'BlacK', 'BeyonD', 'BetweeN', 'BeholD', 'BedwerE', 'BedeverE', 'BeasT', 'Be', 'BattlE', 'BadoN', 'Bad', 'BacK', 'BROTHER', 'BRIDGEKEEPER', 'BRIDE', 'BORS', 'BLACK', 'BEDEVERE', 'B', 'Ayy', 'Ay', 'AwaY', 'AuuuuuuuugH', 'AutumN', 'AugH', 
'AttilA', 'At', 'AssyriA', 'Ask', 'As', 'ArthuR', 'ArmamentS', 'ArimatheA', 'Are', 'AramaiC', 'AppleS', 'AnywaY', 'AnybodY', 'AntiocH', 'AnthraX', 'AngnoR', 'And', 'AnarchO', 'An', 'AmeN', 'Am', 'AlrighT', 'AlmightY', 'AllO', 'All', 'AlicE', 'Ahh', 'Ah', 'Agh', 'AggH', 'AgeS', 'AfricaN', 'ActuallY', 'ActioN', 'AauuuveS', 'AauuuuugH', 'AauuugH', 'AauuggghhH', 'Aah', 'AagH', 'AaauugH', 'AaaugH', 'AaauggH', 'AaaH', 'AaagH', 'AaaaugH', 'AaaaH', 'AaaaaaH', 'AaaaaaaaH', 'AaaaaaaaaH', 'ARTHUR', 'ARMY', 'ANIMATOR', 'AMAZING', 'ALL', 'A', '?!', '?', ';', ':', '9', '8', '7', '6', '5', '4', '3', '24', '23', '22', '21', '20', '2', '19', '18', '17', '16', '15', '14', '13', '12', '11', '10', '1', '...]', '...?', '...', '..', '.)', \".'\", '.', '--...', '--', '-', ',--', \",'\", ',', '(', \"'?\", \"'...\", \"'.\", \"',\", \"'!\", \"'\", '#', '!]', '!,', '!)', '!']\n" ], [ "info = input('input ID, phone number, email address: ')", "input ID, phone number, email address: 0108213111111 01012345678 1010\n" ], [ "info_list = info.split()\n\nID = info_list[0]\nraw_phone_nvm = info_list[1]\nemail_id = info_list[2]", "_____no_output_____" ], [ "# ('1' or '2') evaluates to just '1', so a membership test is needed here\nif ID[6] in ('1', '2'):\n    if ID[:2] == '00':\n        b_year = '2000'\n    else:\n        b_year = '19' + ID[:2]\nelif ID[6] in ('3', '4'):\n    b_year = '20' + ID[:2]\n    \nb_month = ID[2:4]\nb_date = ID[4:6]", "_____no_output_____" ], [ "# same membership-test fix as above\nif ID[6] in ('1', '3'):\n    gender = '남성'\nelif ID[6] in ('2', '4'):\n    gender = '여성'", "_____no_output_____" ], [ "phone_nvm = raw_phone_nvm[:3] + '-' + raw_phone_nvm[3:7] + '-' + raw_phone_nvm[7:]\nemail_address = email_id + '@gmail.com'", "_____no_output_____" ], [ "print(f'당신은 {b_year}년 {b_month}월 {b_date}일 출생의 {gender}입니다.')", "당신은 2001년 08월 21일 출생의 남성입니다.\n" ], [ "print(f'당신의 전화번호는 {phone_nvm}입니다.')\nprint(f'당신의 이메일주소는 {email_address}입니다.')", "당신의 전화번호는 010-1234-5678입니다.\n당신의 이메일주소는 [email protected]입니다.\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7dd5bcfeba393aaed81c7ee4b2378b4e5efe0ab
14,439
ipynb
Jupyter Notebook
keras_retinanet/examples/ResNet50RetinaNetcustom.ipynb
MarviB16/CVSP-Object-Detection-Historical-Videos
3cc4753ff2d1f38656032fca1a0b42a68f25f4d4
[ "MIT" ]
null
null
null
keras_retinanet/examples/ResNet50RetinaNetcustom.ipynb
MarviB16/CVSP-Object-Detection-Historical-Videos
3cc4753ff2d1f38656032fca1a0b42a68f25f4d4
[ "MIT" ]
1
2021-04-30T21:04:15.000Z
2021-04-30T21:04:15.000Z
keras_retinanet/examples/ResNet50RetinaNetcustom.ipynb
MarviB16/CVSP-Object-Detection-Historical-Videos
3cc4753ff2d1f38656032fca1a0b42a68f25f4d4
[ "MIT" ]
null
null
null
42.718935
330
0.620472
[ [ [ "## Load necessary modules", "_____no_output_____" ] ], [ [ "# show images inline\n%matplotlib inline\n\n# automatically reload modules when they have changed\n%load_ext autoreload\n%autoreload 2\n\nimport os\n\nos.environ['CUDA_VISIBLE_DEVICES'] = str(1)\n\n# import keras\nimport keras\n\n# import keras_retinanet\nfrom keras_retinanet import models\nfrom keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image\nfrom keras_retinanet.utils.visualization import draw_box, draw_caption\nfrom keras_retinanet.utils.colors import label_color\n#from keras_retinanet.utils.gpu import setup_gpu\n\n# import miscellaneous modules\nimport matplotlib.pyplot as plt\nimport cv2\nimport os\nimport numpy as np\nimport time\n\n# set tf backend to allow memory to grow, instead of claiming everything\nimport tensorflow as tf\n\n# use this to change which GPU to use\n#gpu = 1\n\n# set the modified tf session as backend in keras\n#setup_gpu(gpu)\n\nfrom keras_retinanet import models\n\n# adjust this to point to your downloaded/trained model\n# models can be downloaded here: https://github.com/fizyr/keras-retinanet/releases\nmodel_path = os.path.join('..', 'snapshots', 'resnet152_pascal_02_backup.h5')\ndataset_path = \"/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/retina_net_video/output/\"\n\n# load retinanet model\nmodel = models.load_model(model_path, backbone_name='resnet152')\n\nmodel = models.convert_model(model)", "Using TensorFlow backend.\n" ] ], [ [ "## Load RetinaNet model", "_____no_output_____" ] ], [ [ "# load label to names mapping for visualization purposes\nlabels_to_names = {0: 'crowd', 1: 'civilian', 2: 'soldier', 3: 'civil vehicle', 4: 'mv'}", "_____no_output_____" ] ], [ [ "## Run detection on example", "_____no_output_____" ] ], [ [ "for filename in os.listdir(dataset_path):\n image = None\n if filename.endswith('.jpg'):\n\n # Open the file:\n image = cv2.imread(os.path.join(dataset_path,filename))\n if image is not None:\n # copy to draw on\n draw = image.copy()\n draw_regression = image.copy()\n draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)\n\n # preprocess image for network\n image = preprocess_image(image)\n image, scale = resize_image(image)\n\n # process image\n start = time.time()\n result = model.predict_on_batch(np.expand_dims(image, axis=0))\n boxes, scores, labels = result\n print(\"processing time: \", time.time() - start)\n\n # correct for image scale\n boxes /= scale\n\n # visualize detections\n for box, score, label in zip(boxes[0], scores[0], labels[0]):\n # scores are sorted so we can break\n if score < 0.5:\n break\n\n print (box, label, score)\n color = label_color(label)\n\n b = box.astype(int)\n draw_box(draw, b, color=color)\n\n caption = \"{} {:.3f}\".format(labels_to_names[label], score)\n draw_caption(draw, b, caption)\n\n cv2.imwrite(os.path.join(dataset_path,\"detected_\"+filename), draw)\n#plt.figure(figsize=(17, 17))\n#plt.axis('off')\n#plt.imshow(draw)\n#plt.savefig('/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/result.png')\n#plt.show()", "processing time: 14.983539581298828\nprocessing time: 0.12204432487487793\nprocessing time: 0.09300637245178223\n[607.5637 225.40349 737.20013 603.96277] 1 0.88375086\nprocessing time: 0.09206128120422363\n[486.1151 155.2592 717.5609 624.07947] 1 0.52050894\nprocessing time: 0.09435248374938965\nprocessing time: 0.09000372886657715\nprocessing time: 0.0921483039855957\nprocessing time: 0.09550046920776367\n[510.8549 137.49898 804.9428 553.4199 ] 2 
0.8575064\nprocessing time: 0.09555768966674805\nprocessing time: 0.09353828430175781\nprocessing time: 0.09651637077331543\nprocessing time: 0.09403705596923828\n[347.28036 79.86496 527.74695 608.31683] 2 0.5910789\n[347.28036 79.86496 527.74695 608.31683] 1 0.517396\nprocessing time: 0.09273624420166016\n[425.68863 119.19557 676.1325 627.0878 ] 1 0.5313673\nprocessing time: 0.0914297103881836\n[371.56665 168.74414 513.4851 540.01263] 2 0.70219386\n[371.80887 168.88281 513.1104 547.34534] 1 0.53447974\nprocessing time: 0.09416532516479492\nprocessing time: 0.09432744979858398\nprocessing time: 0.09474587440490723\nprocessing time: 0.09543561935424805\nprocessing time: 0.09648942947387695\nprocessing time: 0.09536194801330566\nprocessing time: 0.09453773498535156\n[414.67847 263.89792 499.37177 378.30865] 1 0.9172611\nprocessing time: 0.09071469306945801\nprocessing time: 0.08962607383728027\n[451.50555 195.46921 696.44965 679.3483 ] 1 0.5325235\nprocessing time: 0.09146785736083984\n[376.33533 167.56775 492.77762 396.34012] 1 0.7805066\nprocessing time: 0.09313607215881348\n[456.04694 156.29828 673.3232 628.95074] 1 0.7096983\nprocessing time: 0.09243106842041016\nprocessing time: 0.09106063842773438\nprocessing time: 0.09378266334533691\nprocessing time: 0.10053062438964844\n[436.5713 63.5298 831.2737 684.41486] 2 0.6563158\nprocessing time: 0.1033942699432373\nprocessing time: 0.09522390365600586\nprocessing time: 0.09610199928283691\n[239.43962 188.80275 419.7838 668.23425] 1 0.8739916\nprocessing time: 0.0928342342376709\nprocessing time: 0.09429478645324707\nprocessing time: 0.0940711498260498\n[500.65585 205.6814 736.7623 589.0865 ] 2 0.9546621\nprocessing time: 0.09396195411682129\nprocessing time: 0.09192037582397461\nprocessing time: 0.0907444953918457\nprocessing time: 0.09572935104370117\nprocessing time: 0.09575319290161133\nprocessing time: 0.08878946304321289\nprocessing time: 0.0968015193939209\nprocessing time: 0.08921289443969727\nprocessing time: 0.09622716903686523\nprocessing time: 0.09737372398376465\nprocessing time: 0.09994244575500488\n[576.2696 298.4631 792.2819 609.51874] 1 0.8295926\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dd5f3e5556e140d6cfaad30373c9eab0ec175a
90,461
ipynb
Jupyter Notebook
W3schoolscraping.ipynb
NOMANSAEEDSOOMRO/W3schoolwebscraping
81b2efe34c2de1b8b555b17b282b084a89e43122
[ "MIT" ]
1
2021-05-23T17:18:46.000Z
2021-05-23T17:18:46.000Z
W3schoolscraping.ipynb
NOMANSAEEDSOOMRO/W3schoolwebscraping
81b2efe34c2de1b8b555b17b282b084a89e43122
[ "MIT" ]
null
null
null
W3schoolscraping.ipynb
NOMANSAEEDSOOMRO/W3schoolwebscraping
81b2efe34c2de1b8b555b17b282b084a89e43122
[ "MIT" ]
null
null
null
62.689536
6,524
0.555952
[ [ [ "import requests\nfrom bs4 import BeautifulSoup", "_____no_output_____" ], [ "url=\"https://www.w3schools.com/\"\ncontent = requests.get(url)\npage=content.content\nsoup = BeautifulSoup(page,'html5lib')\nsoup", "_____no_output_____" ], [ "name=[]\nlinks=[]\nfor data in soup.find_all('div', {\"class\" : \"w3-col l3 m6\"}):\n for link in data.find_all('a'):\n #print(link)\n names = link.contents[0]\n fullLink = link.get('href')\n #print(names)\n #print(\"https://www.w3schools.com/\"+fullLink)\n name.append(names)\n links.append(\"https://www.w3schools.com/\"+fullLink)\nprint(name,links)", "['Learn HTML', 'Learn CSS', 'Learn Bootstrap', 'Learn W3.CSS', 'Learn Colors', 'Learn Icons', 'Learn Graphics', 'Learn SVG', 'Learn Canvas', 'Learn How To', 'Learn Sass', 'Learn AI', 'Learn Machine Learning', 'Learn Data Science', 'Learn NumPy', 'Learn Pandas', 'Learn SciPy', 'Learn XML', 'Learn XML AJAX', 'Learn XML DOM', 'Learn XML DTD', 'Learn XML Schema', 'Learn XSLT', 'Learn XPath', 'Learn XQuery', 'Learn JavaScript', 'Learn jQuery', 'Learn React', 'Learn AngularJS', 'Learn JSON', 'Learn AJAX', 'Learn AppML', 'Learn W3.JS', 'Learn Python', 'Learn Java', 'Learn C++', 'Learn C#', 'Learn R', 'Learn SQL', 'Learn MySQL', 'Learn PHP', 'Learn ASP', 'Learn Node.js', 'Learn Raspberry Pi', 'Learn Git', 'Web Templates', 'Web Statistics', 'Web Certificates', 'Web Editor', 'Web Development', 'Test Your Typing Speed', 'Play a Code Game', 'Cyber Security', 'HTML Tag Reference', 'HTML Browser Support', 'HTML Event Reference', 'HTML Color Reference', 'HTML Attribute Reference', 'HTML Canvas Reference', 'HTML SVG Reference', 'Google Maps Reference', 'CSS Reference', 'CSS Browser Support', 'CSS Selector Reference', 'Bootstrap 3 Reference', 'Bootstrap 4 Reference', 'W3.CSS Reference', 'Icon Reference', 'Sass Reference', 'JavaScript Reference', 'HTML DOM Reference', 'jQuery Reference', 'AngularJS Reference', 'AppML Reference', 'W3.JS Reference', 'Python Reference', 'Java Reference', 'SQL Reference', 'MySQL Reference', 'PHP Reference', 'ASP Reference', 'XML DOM Reference', 'XML Http Reference', 'XSLT Reference', 'XML Schema Reference', 'HTML Character Sets', 'HTML ASCII', 'HTML ANSI', 'HTML Windows-1252', 'HTML ISO-8859-1', 'HTML Symbols', 'HTML UTF-8'] ['https://www.w3schools.com//html/default.asp', 'https://www.w3schools.com//css/default.asp', 'https://www.w3schools.com//bootstrap/bootstrap_ver.asp', 'https://www.w3schools.com//w3css/default.asp', 'https://www.w3schools.com//colors/default.asp', 'https://www.w3schools.com//icons/default.asp', 'https://www.w3schools.com//graphics/default.asp', 'https://www.w3schools.com//graphics/svg_intro.asp', 'https://www.w3schools.com//graphics/canvas_intro.asp', 'https://www.w3schools.com//howto/default.asp', 'https://www.w3schools.com//sass/default.php', 'https://www.w3schools.com//ai/default.asp', 'https://www.w3schools.com//python/python_ml_getting_started.asp', 'https://www.w3schools.com//datascience/default.asp', 'https://www.w3schools.com//python/numpy/default.asp', 'https://www.w3schools.com//python/pandas/default.asp', 'https://www.w3schools.com//python/scipy/index.php', 'https://www.w3schools.com//xml/default.asp', 'https://www.w3schools.com//xml/ajax_intro.asp', 'https://www.w3schools.com//xml/dom_intro.asp', 'https://www.w3schools.com//xml/xml_dtd_intro.asp', 'https://www.w3schools.com//xml/schema_intro.asp', 'https://www.w3schools.com//xml/xsl_intro.asp', 'https://www.w3schools.com//xml/xpath_intro.asp', 'https://www.w3schools.com//xml/xquery_intro.asp', 
'https://www.w3schools.com//js/default.asp', 'https://www.w3schools.com//jquery/default.asp', 'https://www.w3schools.com//react/default.asp', 'https://www.w3schools.com//angular/default.asp', 'https://www.w3schools.com//js/js_json_intro.asp', 'https://www.w3schools.com//js/js_ajax_intro.asp', 'https://www.w3schools.com//appml/default.asp', 'https://www.w3schools.com//w3js/default.asp', 'https://www.w3schools.com//python/default.asp', 'https://www.w3schools.com//java/default.asp', 'https://www.w3schools.com//cpp/default.asp', 'https://www.w3schools.com//cs/default.asp', 'https://www.w3schools.com//r/default.asp', 'https://www.w3schools.com//sql/default.asp', 'https://www.w3schools.com//mysql/default.asp', 'https://www.w3schools.com//php/default.asp', 'https://www.w3schools.com//asp/default.asp', 'https://www.w3schools.com//nodejs/default.asp', 'https://www.w3schools.com//nodejs/nodejs_raspberrypi.asp', 'https://www.w3schools.com//git/default.asp', 'https://www.w3schools.com//w3css/w3css_templates.asp', 'https://www.w3schools.com//browsers/default.asp', 'https://www.w3schools.com//cert/default.asp', 'https://www.w3schools.com//tryit/default.asp', 'https://www.w3schools.com//whatis/default.asp', 'https://www.w3schools.com//typingspeed/default.asp', 'https://www.w3schools.com//codegame/index.html', 'https://www.w3schools.com//cybersecurity/index.php', 'https://www.w3schools.com//tags/default.asp', 'https://www.w3schools.com//tags/ref_html_browsersupport.asp', 'https://www.w3schools.com//tags/ref_eventattributes.asp', 'https://www.w3schools.com//colors/default.asp', 'https://www.w3schools.com//tags/ref_attributes.asp', 'https://www.w3schools.com//tags/ref_canvas.asp', 'https://www.w3schools.com//graphics/svg_reference.asp', 'https://www.w3schools.com//graphics/google_maps_reference.asp', 'https://www.w3schools.com//cssref/default.asp', 'https://www.w3schools.com//cssref/css3_browsersupport.asp', 'https://www.w3schools.com//cssref/css_selectors.asp', 'https://www.w3schools.com//bootstrap/bootstrap_ref_all_classes.asp', 'https://www.w3schools.com//bootstrap4/bootstrap_ref_all_classes.asp', 'https://www.w3schools.com//w3css/w3css_references.asp', 'https://www.w3schools.com//icons/icons_reference.asp', 'https://www.w3schools.com//sass/sass_functions_string.php', 'https://www.w3schools.com//jsref/default.asp', 'https://www.w3schools.com//jsref/default.asp', 'https://www.w3schools.com//jquery/jquery_ref_overview.asp', 'https://www.w3schools.com//angular/angular_ref_directives.asp', 'https://www.w3schools.com//appml/appml_reference.asp', 'https://www.w3schools.com//w3js/w3js_references.asp', 'https://www.w3schools.com//python/python_reference.asp', 'https://www.w3schools.com//java/java_ref_keywords.asp', 'https://www.w3schools.com//sql/sql_ref_keywords.asp', 'https://www.w3schools.com//mysql/mysql_ref_functions.asp', 'https://www.w3schools.com//php/php_ref_overview.asp', 'https://www.w3schools.com//asp/asp_ref_response.asp', 'https://www.w3schools.com//xml/dom_nodetype.asp', 'https://www.w3schools.com//xml/dom_http.asp', 'https://www.w3schools.com//xml/xsl_elementref.asp', 'https://www.w3schools.com//xml/schema_elements_ref.asp', 'https://www.w3schools.com//charsets/default.asp', 'https://www.w3schools.com//charsets/ref_html_ascii.asp', 'https://www.w3schools.com//charsets/ref_html_ansi.asp', 'https://www.w3schools.com//charsets/ref_html_ansi.asp', 'https://www.w3schools.com//charsets/ref_html_8859.asp', 'https://www.w3schools.com//charsets/ref_html_symbols.asp', 
'https://www.w3schools.com//charsets/ref_html_utf8.asp']\n" ], [ "import pandas as pd\n# use a regular variable name to avoid shadowing the built-in dict\nrecords = {'Name': name, 'Links': links}\ndf = pd.DataFrame(records)\ndf.to_csv('W3schooltask.csv', index=False)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e7dd6f94ca82577b5615f85ddb8d0212159db921
90,367
ipynb
Jupyter Notebook
SSIS.ipynb
neednlab/ssis_analyzer
2157b94a8e0f83773a53023ac7bb3080a567356b
[ "MIT" ]
null
null
null
SSIS.ipynb
neednlab/ssis_analyzer
2157b94a8e0f83773a53023ac7bb3080a567356b
[ "MIT" ]
null
null
null
SSIS.ipynb
neednlab/ssis_analyzer
2157b94a8e0f83773a53023ac7bb3080a567356b
[ "MIT" ]
null
null
null
49.353905
279
0.439198
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7dd74eb2278f8a9d66f889476e4692cf7af7091
2,676
ipynb
Jupyter Notebook
Algorithm Problems/deque_baekjoon_1021_rotating_queue.ipynb
hyeshinoh/Study_Algorithm
86bae31f2e7b4e3d1ccc2be947c0b7df0a149212
[ "MIT" ]
1
2018-06-26T05:49:59.000Z
2018-06-26T05:49:59.000Z
Algorithm Problems/deque_baekjoon_1021_rotating_queue.ipynb
hyeshinoh/Algorithm_Study
86bae31f2e7b4e3d1ccc2be947c0b7df0a149212
[ "MIT" ]
null
null
null
Algorithm Problems/deque_baekjoon_1021_rotating_queue.ipynb
hyeshinoh/Algorithm_Study
86bae31f2e7b4e3d1ccc2be947c0b7df0a149212
[ "MIT" ]
null
null
null
24.327273
151
0.459641
[ [ [ "#### BAEKJOON 1021번 문제 - 회전하는 큐\nhttps://www.acmicpc.net/problem/1021", "_____no_output_____" ], [ "### 문제\n지민이는 N개의 원소를 포함하고 있는 양방향 순환 큐를 가지고 있다. 지민이는 이 큐에서 몇 개의 원소를 뽑아내려고 한다.\n\n지민이는 이 큐에서 다음과 같은 3가지 연산을 수행할 수 있다.\n\n- 첫번째 원소를 뽑아낸다. 이 연산을 수행하면, 원래 큐의 원소가 a1, ..., ak이었던 것이 a2, ..., ak와 같이 된다.\n- 왼쪽으로 한 칸 이동시킨다. 이 연산을 수행하면, a1, ..., ak가 a2, ..., ak, a1이 된다.\n- 오른쪽으로 한 칸 이동시킨다. 이 연산을 수행하면, a1, ..., ak가 ak, a1, ..., ak-1이 된다.\n\n큐에 처음에 포함되어 있던 수 N이 주어진다. 그리고 지민이가 뽑아내려고 하는 원소의 위치가 주어진다. (이 위치는 가장 처음 큐에서의 위치이다.) 이 때, 그 원소를 주어진 순서대로 뽑아내는데 드는 2번, 3번 연산의 최솟값을 출력하는 프로그램을 작성하시오.", "_____no_output_____" ], [ "#### 입력\n- 첫째 줄에 큐의 크기 N과 뽑아내려고 하는 수의 개수 M이 주어진다. N은 50보다 작거나 같은 자연수이고, M은 N보다 작거나 같은 자연수이다. \n- 둘째 줄에는 지민이가 뽑아내려고 하는 수의 위치가 순서대로 주어진다. 위치는 1보다 크거나 같고, N보다 작거나 같은 자연수이다.\n\n#### 출력\n- 첫째 줄에 문제의 정답을 출력한다.", "_____no_output_____" ], [ "#### 예제 입력 1 \n```\n10 3\n1 2 3\n```\n\n#### 예제 출력 1\n```\n0\n```", "_____no_output_____" ], [ "### 풀이", "_____no_output_____" ] ], [ [ "n, m = map(int, input().split())\ngoal = list(map(int, input().split()))\nls = list(range(1, n+1)) # 1부터 n까지의 리스트를 만들어서 goal이 바로 ls 요소와 match\n\ncount = 0\nwhile len(goal) > 0:\n if goal[0] == ls[0]:\n ls.pop(0)\n goal.pop(0)\n elif ls.index(goal[0]) <= len(ls) / 2: # goal[0]의 위치가 ls 길이의 반보다 작으면 pop(0) (2번연산)\n ls.append(ls.pop(0))\n count += 1\n else: # goal[0]의 위치가 ls 길이의 반보다 크면 pop() (3번연산)\n ls = [ls.pop()] + ls\n count += 1\n \nprint(count)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
e7dd771db494cd85ead90c9474cd6babca6e9436
133,574
ipynb
Jupyter Notebook
Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb
Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi
8f944c9d399796d9f098355e969a92bd47b281dd
[ "MIT" ]
null
null
null
Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb
Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi
8f944c9d399796d9f098355e969a92bd47b281dd
[ "MIT" ]
null
null
null
Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb
Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi
8f944c9d399796d9f098355e969a92bd47b281dd
[ "MIT" ]
null
null
null
47.602994
19,066
0.588782
[ [ [ "<a href=\"https://colab.research.google.com/github/Paul-mwaura/Zindi---Sentiment-Analysis_Tunisian-Arabizi/blob/main/Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "## Zindi - Sentiment Analysis_Tunisian Arabizi.ipynb", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\nsns.set_style(\"white\")\n\nfrom sklearn.model_selection import train_test_split # function for splitting data to train and test sets\n\nimport re, string\nimport nltk\nfrom nltk.corpus import stopwords\nfrom nltk.classify import SklearnClassifier\n#from wordcloud import WordCloud,STOPWORDS\nfrom subprocess import check_output", "_____no_output_____" ], [ "df = pd.read_csv(\"Train.csv\")\ndf.head()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "len(df['ID'].unique())", "_____no_output_____" ], [ "test = pd.read_csv(\"Test.csv\")\ntest.head()", "_____no_output_____" ] ], [ [ "### Data Cleaning", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ], [ "positive = df[df['label'] == 1]\nnegative = df[df['label'] == -1]\n\ndf = pd.concat([positive, negative], axis=0)\ndf.head(10)", "_____no_output_____" ], [ "df.isna().sum()", "_____no_output_____" ], [ "df.dropna(inplace=True)\ndf.isna().sum()", "_____no_output_____" ], [ "df.duplicated().sum()", "_____no_output_____" ], [ "test.isna().sum()", "_____no_output_____" ], [ "test.duplicated().sum()", "_____no_output_____" ] ], [ [ "#### Explore Corpus Character Set", "_____no_output_____" ] ], [ [ "from nltk import FreqDist\nimport re", "_____no_output_____" ], [ "corpus_as_char_list = \"\".join(df.text.tolist())\nprint(type(corpus_as_char_list),len(corpus_as_char_list))", "<class 'str'> 3908003\n" ], [ "fdist1 = FreqDist([c for c in corpus_as_char_list])", "_____no_output_____" ], [ "print(\"number of characters:\" + str(fdist1.N()))\nprint(\"number of unique characters:\" + str(fdist1.B()))", "number of characters:3908003\nnumber of unique characters:114\n" ], [ "print('List of distinct characters:')\nprint(sorted(list(fdist1.keys())))", "List of distinct characters:\n[' ', \"'\", '-', '.', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '?', 'A', 'B', 'C', 'E', 'F', 'H', 'J', 'K', 'L', 'M', 'O', 'R', 'S', 'W', 'Y', '_', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '²', '³', '¹', 'ß', 'à', 'á', 'â', 'ã', 'ä', 'å', 'æ', 'ç', 'è', 'é', 'ê', 'ë', 'ì', 'í', 'î', 'ï', 'ñ', 'ò', 'ó', 'ô', 'õ', 'ö', 'ø', 'ù', 'ú', 'û', 'ü', 'ý', 'ÿ', 'ā', 'ă', 'ć', 'ď', 'đ', 'ĕ', 'ė', 'ę', 'ě', 'į', 'ı', 'ķ', 'ĺ', 'ļ', 'ł', 'ő', 'œ', 'ŕ', 'ş', 'ȝ', 'ə', '١', '٧', '\\ufeff']\n" ], [ "print('The most common characters:')\nfdist1.most_common(5)", "The most common characters:\n" ], [ "fdist1.plot(20, cumulative=False)", "_____no_output_____" ], [ "fdist1.plot(20,cumulative=True)", "_____no_output_____" ], [ "corpus_chars_df = pd.DataFrame(fdist1.items())\ncorpus_chars_df.columns = ['character','frequency']\n\n# Unicode number of each distinct character:\ncorpus_chars_df['unicode_dec']= corpus_chars_df.character.map(ord)\ncorpus_chars_df['unicode_hex']= corpus_chars_df.character.map(lambda x: hex(ord(x)))\n\ncorpus_chars_df = corpus_chars_df.set_index('character')\n\ncorpus_chars_df.head()", "_____no_output_____" ], [ "idx = 
corpus_chars_df.unicode_hex.str.startswith('0x60')\nprint(corpus_chars_df.shape[0],idx.sum())", "114 0\n" ], [ "\n# Characters from the Standard Arabic Character set\n\ncorpus_chars_df[idx].sort_values(by='unicode_dec', ascending=True)", "_____no_output_____" ], [ "# Characters from the Extended Arabic Character set\n\ncorpus_chars_df[~idx].sort_values(by='unicode_dec', ascending=True)", "_____no_output_____" ], [ "# Rare characters\n\nu = corpus_chars_df[corpus_chars_df.frequency<5]\nprint(u.shape[0])\n#print(sorted(u.index.tolist()))\nprint(','.join(sorted(u.index.tolist())))", "43\n.,?,C,E,F,J,K,L,O,R,S,W,Y,²,³,¹,å,æ,ò,ó,õ,ý,ā,ă,ć,ď,đ,ĕ,ę,ě,į,ķ,ĺ,ļ,ł,ő,ŕ,ş,ȝ,ə,١,٧,\n" ], [ "\n# Rare characters sorted by unicode value\n\nu.sort_values(by='unicode_dec', ascending=True).head()", "_____no_output_____" ], [ "u.sort_values(by='unicode_dec', ascending=False).head()", "_____no_output_____" ] ], [ [ "**Select unwanted characters**\n\nFor this corpus, unwanted characters are characters in the standard Arabic character set.\n\n", "_____no_output_____" ] ], [ [ "idx1 = corpus_chars_df.unicode_hex.str.startswith('0x6')\nidx2 = (corpus_chars_df.frequency>=5)\nidx1.sum(), idx2.sum(), (idx1&idx2).sum()", "_____no_output_____" ], [ "unwanted_characters = sorted(corpus_chars_df.loc[~(idx1)].index.tolist())\nprint(len(unwanted_characters))", "97\n" ] ], [ [ "### Text Preprocessing", "_____no_output_____" ] ], [ [ "def clean_text(text):\n '''Make text lowercase, remove text in square brackets,remove links,remove punctuation\n and remove words containing numbers.'''\n text = str(text).lower()\n #text = re.sub('<.*?>+', '', text)\n #text = re.sub(\"s+\",\" \", text)\n #text = re.sub(\"[^-9A-Za-z ]\", \"\" , text)\n return text", "_____no_output_____" ], [ "def clean_text(text):\n #will replace the html characters with \"\"\n text = re.sub(r\"[^A-Za-z0-9]\", \" \", text) \n #To remove the punctuations\n text = text.translate(str.maketrans(' ',' ',string.punctuation))\n #will consider only alphabets and numerics\n text = re.sub('[^a-zA-Z]',' ',text) \n # remove numbers \n # text = re.sub(r'\\b\\d+(?:\\.\\d+)?\\s+', '', text) \n #will replace newline with space\n text = re.sub(\"\\n\",\" \",text)\n #will convert to lower case\n text = text.lower()\n # will split and join the words\n text=' '.join(text.split())\n\n return text", "_____no_output_____" ], [ "df['text'] = df['text'].apply(lambda x:clean_text(x))\ntest['text'] = test['text'].apply(lambda x:clean_text(x))", "_____no_output_____" ], [ "train = df.copy()\ntrain.head(3)", "_____no_output_____" ], [ "train.shape", "_____no_output_____" ] ], [ [ "### Unwanted characters", "_____no_output_____" ] ], [ [ "'''unwanted_characters_regexp = '[' + ''.join(unwanted_characters) + ']'\nunwanted_characters_regexp'''", "_____no_output_____" ], [ "'''idx = train.text.map(lambda x: re.search(unwanted_characters_regexp,x)!=None)\nidx.sum()'''", "_____no_output_____" ], [ "'''# Words that contain Arabic letters (that will be removed)\n\nprint(train.loc[idx].text.tolist())'''", "_____no_output_____" ], [ "'''train[idx].head()'''", "_____no_output_____" ] ], [ [ "### Modelling", "_____no_output_____" ], [ "#### Split data into train and test", "_____no_output_____" ] ], [ [ "X = train['text']\ny = train['label']", "_____no_output_____" ], [ "# Splitting the dataset into train and test set\nfrom sklearn.model_selection import train_test_split\nseed = 12\nX_train, X_test, y_train, y_test = train_test_split(X, y,test_size = 0.10, shuffle=True, random_state=0)\nX.shape, 
y.shape", "_____no_output_____" ] ], [ [ "#### Logistic Regression", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LogisticRegression\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer, TfidfTransformer\nfrom sklearn.feature_selection import SelectKBest, chi2\n\n\n# Building a pipeline: We can write less code and do all of the above, by building a pipeline as follows:\n# The names ‘vect’ , ‘tfidf’ and ‘clf’ are arbitrary but will be used later.\n# We will be using the 'text_clf' going forward.\nfrom sklearn.pipeline import Pipeline\n\ntfidf = TfidfTransformer()\n\nlr_clf = Pipeline([('vect', TfidfVectorizer(min_df= 5, sublinear_tf=True, norm='l2', ngram_range=(1, 4))), \n ('tfidf', TfidfTransformer()), \n ('chi', SelectKBest(chi2, k=20000)),\n ('clf', LogisticRegression())])\n\nlr_clf = lr_clf.fit(X_train, y_train)\n\n# Performance of NB Classifier\nimport numpy as np\npredicted = lr_clf.predict(X_test)\nprint(f\"------------------\\n{np.mean(predicted == y_test)*100}\\n------------------\")\n", "------------------\n81.44803079656499\n------------------\n" ], [ "from sklearn.svm import SVC\n\nsvm = SVC()\ntfidf = TfidfVectorizer()\n\nsvm_clf = Pipeline([('vect', CountVectorizer()), \n ('tfidf', TfidfTransformer()), \n ('clf', SVC(C=1.0, kernel='linear', degree=3, gamma='auto'))])\n\nsvm = svm_clf.fit(X_train, y_train)\n\n# Performance of NB Classifier\nimport numpy as np\npredicted = svm_clf.predict(X_test)\nprint(f\"------------------\\n{np.mean(predicted == y_test)*100}\\n------------------\")\n", "------------------\n81.52206100088836\n------------------\n" ] ], [ [ "### Tokenization", "_____no_output_____" ] ], [ [ "X_train.shape, y_train.shape", "_____no_output_____" ], [ "from keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences\nfrom keras.layers import LSTM, Conv1D, MaxPooling1D, Dropout\n\nMAX_NB_WORDS = 20000\n\n# get the raw text data\nX_train = X_train.astype(str)\nX_test = X_test.astype(str)\n\n# finally, vectorize the text samples into a 2D integer tensor\ntokenizer = Tokenizer(nb_words=MAX_NB_WORDS, char_level=False)\ntokenizer.fit_on_texts(X_train)\nsequences = tokenizer.texts_to_sequences(X_train)\nsequences_test = tokenizer.texts_to_sequences(X_test)\n\nword_index = tokenizer.word_index\nprint('Found %s unique tokens.' 
% len(word_index))", "/usr/local/lib/python3.7/dist-packages/keras_preprocessing/text.py:180: UserWarning: The `nb_words` argument in `Tokenizer` has been renamed `num_words`.\n warnings.warn('The `nb_words` argument in `Tokenizer` '\n" ], [ "sequences[0]", "_____no_output_____" ] ], [ [ "The tokenizer object stores a mapping (vocabulary) from word strings to token ids that can be inverted to reconstruct the original message (without formatting):", "_____no_output_____" ] ], [ [ "type(tokenizer.word_index), len(tokenizer.word_index)", "_____no_output_____" ], [ "index_to_word = dict((i, w) for w, i in tokenizer.word_index.items())", "_____no_output_____" ], [ "\" \".join([index_to_word[i] for i in sequences[0]])", "_____no_output_____" ] ], [ [ "Let's have a closer look at the tokenized sequences:", "_____no_output_____" ] ], [ [ "seq_lens = [len(s) for s in sequences]\nprint(\"average length: %0.1f\" % np.mean(seq_lens))\nprint(\"max length: %d\" % max(seq_lens))", "average length: 8.7\nmax length: 1531\n" ], [ "%matplotlib inline\nplt.hist(seq_lens, bins=50);", "_____no_output_____" ], [ "plt.hist([l for l in seq_lens if l < 30], bins=2);", "_____no_output_____" ], [ "print(X_train.shape)\nprint(y_train.shape)\nprint(X_test.shape)\nprint(y_test.shape)", "(54027,)\n(54027,)\n(13507,)\n(13507,)\n" ] ], [ [ "#### SDGClassifier", "_____no_output_____" ] ], [ [ "# Training Support Vector Machines - SVM and calculating its performance\n\nfrom sklearn.linear_model import SGDClassifier\ntext_clf_svm = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()),\n ('clf-svm', SGDClassifier(loss='hinge', penalty='l2',alpha=1e-9, max_iter=3, shuffle=True, random_state=0))])\n\ntext_clf_svm = text_clf_svm.fit(X_train, y_train)\npredicted_svm = text_clf_svm.predict(X_test)\nprint(f\"------------------\\n{np.mean(predicted_svm == y_test)*100}\\n------------------\")", "/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_stochastic_gradient.py:557: ConvergenceWarning: Maximum number of iteration reached before convergence. 
Consider increasing max_iter to improve the fit.\n ConvergenceWarning)\n" ] ] ], [ [ "#### MultinomialNB", "_____no_output_____" ] ], [ [ "# Extracting features from text files\nfrom sklearn.feature_extraction.text import CountVectorizer\ncount_vect = CountVectorizer()\nX_train_counts = count_vect.fit_transform(X_train)\nX_train_counts.shape\n\n# TF-IDF\nfrom sklearn.feature_extraction.text import TfidfTransformer\ntfidf_transformer = TfidfTransformer()\nX_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)\nX_train_tfidf.shape\n\n# Machine Learning\n# Training Naive Bayes (NB) classifier on training data.\nfrom sklearn.naive_bayes import MultinomialNB\nclf = MultinomialNB().fit(X_train_tfidf, y_train)\n\n\n# Building a pipeline: We can write less code and do all of the above, by building a pipeline as follows:\n# The names ‘vect’ , ‘tfidf’ and ‘clf’ are arbitrary but will be used later.\n# We will be using the 'text_clf' going forward.\nfrom sklearn.pipeline import Pipeline\n\ntext_clf = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())])\n\ntext_clf = text_clf.fit(X_train, y_train)\n\n# Performance of NB Classifier\nimport numpy as np\npredicted = text_clf.predict(X_test)\nnp.mean(predicted == y_test)*100", "_____no_output_____" ] ], [ [ "SGD Classifier", "_____no_output_____" ] ], [ [ "# Training Support Vector Machines - SVM and calculating its performance\n\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer\n\ntext_clf_svm = Pipeline([('vect', CountVectorizer()),\n ('clf-svm', SGDClassifier(loss='hinge', penalty='l2',alpha=1e-3, max_iter=5, random_state=42))])\n\ntext_clf_svm = text_clf_svm.fit(X_train, y_train)\npredicted_svm = text_clf_svm.predict(X_test)\nnp.mean(predicted_svm == y_test)", "/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_stochastic_gradient.py:557: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit.\n ConvergenceWarning)\n" ] ], [ [ "### Submission", "_____no_output_____" ] ], [ [ "sub = pd.read_csv(\"SampleSubmission.csv\")\nsubmission = pd.DataFrame()\nsubmission['ID'] = test['ID']\nsubmission.head()", "_____no_output_____" ], [ "submission.shape", "_____no_output_____" ], [ "pred = lr_clf.predict(test['text'])\npred", "_____no_output_____" ], [ "len(pred)", "_____no_output_____" ] ], [ [ "The index is still there, so we will set the column ID as the dataframe index.", "_____no_output_____" ] ], [ [ "submission['label'] = pred\nsubmission.set_index('ID', inplace=True)", "_____no_output_____" ], [ "submission.head()", "_____no_output_____" ] ], [ [ "We have successfully replaced the index with the column ID.\nNow let us create our submission file.", "_____no_output_____" ] ], [ [ "submission.to_csv(\"lr_submission.csv\")", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7dd831eecfc2727c9b0454682f98b763d0ac442
48,410
ipynb
Jupyter Notebook
S10/EVA P2S3_Q7.ipynb
pankaj90382/TSAI-2
af4b3543dfb206fb1cc2bd166ed31e9ea7bd3778
[ "MIT" ]
null
null
null
S10/EVA P2S3_Q7.ipynb
pankaj90382/TSAI-2
af4b3543dfb206fb1cc2bd166ed31e9ea7bd3778
[ "MIT" ]
9
2021-06-08T22:18:08.000Z
2022-03-12T00:46:43.000Z
S10/EVA P2S3_Q7.ipynb
pankaj90382/TSAI-2
af4b3543dfb206fb1cc2bd166ed31e9ea7bd3778
[ "MIT" ]
1
2020-10-12T17:13:35.000Z
2020-10-12T17:13:35.000Z
48,410
48,410
0.792006
[ [ [ "#Imports", "_____no_output_____" ] ], [ [ "import numpy as np\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython import display\nplt.style.use('seaborn-white')", "_____no_output_____" ] ], [ [ "# Read and process data. \n\nDownload the file from this URL: https://drive.google.com/file/d/1UWWIi-sz9g0x3LFvkIZjvK1r2ZaCqgGS/view?usp=sharing", "_____no_output_____" ] ], [ [ "import gdown\ngdown.download('https://drive.google.com/uc?id=1UWWIi-sz9g0x3LFvkIZjvK1r2ZaCqgGS','text.txt', quiet=False)", "Downloading...\nFrom: https://drive.google.com/uc?id=1UWWIi-sz9g0x3LFvkIZjvK1r2ZaCqgGS\nTo: /content/text.txt\n100%|██████████| 10.3k/10.3k [00:00<00:00, 2.91MB/s]\n" ], [ "data = open('text.txt', 'r').read()", "_____no_output_____" ] ], [ [ "Process data and calculate indices", "_____no_output_____" ] ], [ [ "chars = list(set(data))\ndata_size, X_size = len(data), len(chars)\nprint(\"Corona Virus article has %d characters, %d unique characters\" %(data_size, X_size))\nchar_to_idx = {ch:i for i,ch in enumerate(chars)}\nidx_to_char = {i:ch for i,ch in enumerate(chars)}", "Corona Virus article has 10223 characters, 75 unique characters\n" ] ], [ [ "# Constants and Hyperparameters", "_____no_output_____" ] ], [ [ "Hidden_Layer_size = 100 #size of the hidden layer\nTime_steps = 40 # Number of time steps (length of the sequence) used for training\nlearning_rate = 1e-1 # Learning Rate\nweight_sd = 0.1 #Standard deviation of weights for initialization\nz_size = Hidden_Layer_size + X_size #Size of concatenation(H, X) vector", "_____no_output_____" ] ], [ [ "# Activation Functions and Derivatives", "_____no_output_____" ] ], [ [ "def sigmoid(x): # sigmoid function\n return 1/(1+np.exp(-x))\n\ndef dsigmoid(y): # derivative of sigmoid function\n return y * (1-y)\n\ndef tanh(x): # tanh function\n return np.tanh(x)\n\ndef dtanh(y): # derivative of tanh\n return 1-y*y", "_____no_output_____" ] ], [ [ "# Quiz Question 1\n\nWhat is the value of sigmoid(0) calculated from your code? (Answer up to 1 decimal point, e.g. 4.2 and NOT 4.29999999, no rounding off).\n\n# Quiz Question 2\n\nWhat is the value of dsigmoid(sigmoid(0)) calculated from your code?? (Answer up to 2 decimal point, e.g. 4.29 and NOT 4.29999999, no rounding off). \n\n# Quiz Question 3\n\nWhat is the value of tanh(dsigmoid(sigmoid(0))) calculated from your code?? (Answer up to 5 decimal point, e.g. 4.29999 and NOT 4.29999999, no rounding off).\n\n# Quiz Question 4\n\nWhat is the value of dtanh(tanh(dsigmoid(sigmoid(0)))) calculated from your code?? (Answer up to 5 decimal point, e.g. 
4.29999 and NOT 4.29999999, no rounding off).", "_____no_output_____" ] ], [ [ "print('Quiz 1', sigmoid(0))\nprint('Quiz 2', dsigmoid(sigmoid(0)))\nprint('Quiz 3', tanh(dsigmoid(sigmoid(0))))\nprint('Quiz 4', dtanh(tanh(dsigmoid(sigmoid(0)))))", "Quiz 1 0.5\nQuiz 2 0.25\nQuiz 3 0.24491866240370913\nQuiz 4 0.940014848806378\n" ] ], [ [ "# Parameters", "_____no_output_____" ] ], [ [ "class Param:\n def __init__(self, name, value):\n self.name = name\n self.v = value # parameter value\n self.d = np.zeros_like(value) # derivative\n self.m = np.zeros_like(value) # momentum for Adagrad", "_____no_output_____" ] ], [ [ "We use random weights with normal distribution (0, weight_sd) for tanh activation function and (0.5, weight_sd) for `sigmoid` activation function.\n\nBiases are initialized to zeros.", "_____no_output_____", "# LSTM \nYou are making this network, please note f, i, c and o (also \"v\") in the image below:\n![alt text](http://blog.varunajayasiri.com/ml/lstm.svg)\n\nPlease note that we are concatenating the old_hidden_vector and new_input.", "_____no_output_____", "# Quiz Question 4\n\nIn the class definition below, what should be size_a, size_b, and size_c? ONLY use the variables defined above.", "_____no_output_____" ] ], [ [ "size_a = Hidden_Layer_size\nsize_b = z_size\nsize_c = X_size\n\nclass Parameters:\n def __init__(self):\n self.W_f = Param('W_f', np.random.randn(size_a, size_b) * weight_sd + 0.5)\n self.b_f = Param('b_f', np.zeros((size_a, 1)))\n\n self.W_i = Param('W_i', np.random.randn(size_a, size_b) * weight_sd + 0.5)\n self.b_i = Param('b_i', np.zeros((size_a, 1)))\n\n self.W_C = Param('W_C', np.random.randn(size_a, size_b) * weight_sd)\n self.b_C = Param('b_C', np.zeros((size_a, 1)))\n\n self.W_o = Param('W_o', np.random.randn(size_a, size_b) * weight_sd + 0.5)\n self.b_o = Param('b_o', np.zeros((size_a, 1)))\n\n #For final layer to predict the next character\n self.W_v = Param('W_v', np.random.randn(X_size, size_a) * weight_sd)\n self.b_v = Param('b_v', np.zeros((size_c, 1)))\n \n def all(self):\n return [self.W_f, self.W_i, self.W_C, self.W_o, self.W_v,\n self.b_f, self.b_i, self.b_C, self.b_o, self.b_v]\n \nparameters = Parameters()", "_____no_output_____" ] ], [ [ "Look at these operations which we'll be writing:\n\n**Concatenation of h and x:**\n\n$z\\:=\\:\\left[h_{t-1},\\:x\\right]$\n\n$f_t=\\sigma\\left(W_f\\cdot z\\:+\\:b_f\\:\\right)$\n\n$i_t=\\sigma\\left(W_i\\cdot z\\:+\\:b_i\\right)$\n\n$\\overline{C_t}=\\tanh\\left(W_C\\cdot z\\:+\\:b_C\\right)$\n\n$C_t=f_t\\ast C_{t-1}+i_t\\ast \\overline{C}_t$\n\n$o_t=\\sigma\\left(W_o\\cdot z\\:+\\:b_o\\right)$\n\n$h_t=o_t\\ast\\tanh\\left(C_t\\right)$\n\n**Logits:**\n\n$v_t=W_v\\cdot h_t+b_v$\n\n**Softmax:**\n\n$\\hat{y}=softmax\\left(v_t\\right)$\n", "_____no_output_____" ] ], [ [ "def forward(x, h_prev, C_prev, p = parameters):\n assert x.shape == (X_size, 1)\n assert h_prev.shape == (Hidden_Layer_size, 1)\n assert C_prev.shape == (Hidden_Layer_size, 1)\n \n z = np.row_stack((h_prev, x))\n f = sigmoid(np.dot(parameters.all()[0].v, z)+ parameters.all()[5].v)\n i = sigmoid(np.dot(parameters.all()[1].v, z)+ parameters.all()[6].v)\n C_bar = tanh(np.dot(parameters.all()[2].v, z)+ parameters.all()[7].v)\n\n\n C = f*C_prev + i*C_bar\n o = sigmoid(np.dot(parameters.all()[3].v, z)+parameters.all()[8].v)\n h = o*tanh(C)\n\n v = np.dot(parameters.all()[4].v, h)+parameters.all()[9].v\n y = np.exp(v) / np.sum(np.exp(v)) #softmax\n\n return z, f, i, C_bar, C, o, h, v, y", "_____no_output_____" ] ], [ [ "You must finish 
the function above before you can attempt the questions below. \n\n# Quiz Question 5\n\nWhat is the output of 'print(len(forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)), parameters)))'?", "_____no_output_____" ] ], [ [ "print(len(forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)), parameters)))", "9\n" ] ], [ [ "# Quiz Question 6. \n\nAssuming you have fixed the forward function, run this command: \nz, f, i, C_bar, C, o, h, v, y = forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)))\n\nNow, find these values:\n\n\n1. print(z.shape)\n2. print(np.sum(z))\n3. print(np.sum(f))\n\nCopy and paste exact values you get in the logs into the quiz.\n\n", "_____no_output_____" ] ], [ [ "z, f, i, C_bar, C, o, h, v, y = forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)))", "_____no_output_____" ], [ "print(z.shape)\nprint(np.sum(z))\nprint(np.sum(f))", "(175, 1)\n0.0\n50.0\n" ] ], [ [ "# Backpropagation\n\nHere we are defining the backpropagation. It's too complicated, here is the whole code. (Please note that this would work only if your earlier code is perfect).", "_____no_output_____" ] ], [ [ "def backward(target, dh_next, dC_next, C_prev,\n z, f, i, C_bar, C, o, h, v, y,\n p = parameters):\n \n assert z.shape == (X_size + Hidden_Layer_size, 1)\n assert v.shape == (X_size, 1)\n assert y.shape == (X_size, 1)\n \n for param in [dh_next, dC_next, C_prev, f, i, C_bar, C, o, h]:\n assert param.shape == (Hidden_Layer_size, 1)\n \n dv = np.copy(y)\n dv[target] -= 1\n\n p.W_v.d += np.dot(dv, h.T)\n p.b_v.d += dv\n\n dh = np.dot(p.W_v.v.T, dv) \n dh += dh_next\n do = dh * tanh(C)\n do = dsigmoid(o) * do\n p.W_o.d += np.dot(do, z.T)\n p.b_o.d += do\n\n dC = np.copy(dC_next)\n dC += dh * o * dtanh(tanh(C))\n dC_bar = dC * i\n dC_bar = dtanh(C_bar) * dC_bar\n p.W_C.d += np.dot(dC_bar, z.T)\n p.b_C.d += dC_bar\n\n di = dC * C_bar\n di = dsigmoid(i) * di\n p.W_i.d += np.dot(di, z.T)\n p.b_i.d += di\n\n df = dC * C_prev\n df = dsigmoid(f) * df\n p.W_f.d += np.dot(df, z.T)\n p.b_f.d += df\n\n dz = (np.dot(p.W_f.v.T, df)\n + np.dot(p.W_i.v.T, di)\n + np.dot(p.W_C.v.T, dC_bar)\n + np.dot(p.W_o.v.T, do))\n dh_prev = dz[:Hidden_Layer_size, :]\n dC_prev = f * dC\n \n return dh_prev, dC_prev", "_____no_output_____" ] ], [ [ "# Forward and Backward Combined Pass\n\nLet's first clear the gradients before each backward pass", "_____no_output_____" ] ], [ [ "def clear_gradients(params = parameters):\n for p in params.all():\n p.d.fill(0)", "_____no_output_____" ] ], [ [ "Clip gradients to mitigate exploding gradients", "_____no_output_____" ] ], [ [ "def clip_gradients(params = parameters):\n for p in params.all():\n np.clip(p.d, -1, 1, out=p.d)", "_____no_output_____" ] ], [ [ "Calculate and store the values in forward pass. 
Accumulate gradients in backward pass and clip gradients to avoid exploding gradients.\n\ninput, target are list of integers, with character indexes.\nh_prev is the array of initial h at h−1 (size H x 1)\nC_prev is the array of initial C at C−1 (size H x 1)\nReturns loss, final hT and CT", "_____no_output_____" ] ], [ [ "def forward_backward(inputs, targets, h_prev, C_prev):\n global parameters\n \n # To store the values for each time step\n x_s, z_s, f_s, i_s, = {}, {}, {}, {}\n C_bar_s, C_s, o_s, h_s = {}, {}, {}, {}\n v_s, y_s = {}, {}\n \n # Values at t - 1\n h_s[-1] = np.copy(h_prev)\n C_s[-1] = np.copy(C_prev)\n \n loss = 0\n # Loop through time steps\n assert len(inputs) == Time_steps\n for t in range(len(inputs)):\n x_s[t] = np.zeros((X_size, 1))\n x_s[t][inputs[t]] = 1 # Input character\n \n (z_s[t], f_s[t], i_s[t],\n C_bar_s[t], C_s[t], o_s[t], h_s[t],\n v_s[t], y_s[t]) = \\\n forward(x_s[t], h_s[t - 1], C_s[t - 1]) # Forward pass\n \n loss += -np.log(y_s[t][targets[t], 0]) # Loss at t\n \n clear_gradients()\n\n dh_next = np.zeros_like(h_s[0]) #dh from the next character\n dC_next = np.zeros_like(C_s[0]) #dC from the next character\n\n for t in reversed(range(len(inputs))):\n # Backward pass\n dh_next, dC_next = \\\n backward(target = targets[t], dh_next = dh_next,\n dC_next = dC_next, C_prev = C_s[t-1],\n z = z_s[t], f = f_s[t], i = i_s[t], C_bar = C_bar_s[t],\n C = C_s[t], o = o_s[t], h = h_s[t], v = v_s[t],\n y = y_s[t])\n\n clip_gradients()\n \n return loss, h_s[len(inputs) - 1], C_s[len(inputs) - 1]", "_____no_output_____" ] ], [ [ "# Sample the next character", "_____no_output_____" ] ], [ [ "def sample(h_prev, C_prev, first_char_idx, sentence_length):\n x = np.zeros((X_size, 1))\n x[first_char_idx] = 1\n\n h = h_prev\n C = C_prev\n\n indexes = []\n \n for t in range(sentence_length):\n _, _, _, _, C, _, h, _, p = forward(x, h, C)\n idx = np.random.choice(range(X_size), p=p.ravel())\n x = np.zeros((X_size, 1))\n x[idx] = 1\n indexes.append(idx)\n\n return indexes", "_____no_output_____" ] ], [ [ "# Training (Adagrad)\n\nUpdate the graph and display a sample output\n\n", "_____no_output_____" ] ], [ [ "def update_status(inputs, h_prev, C_prev):\n #initialized later\n global plot_iter, plot_loss\n global smooth_loss\n \n # Get predictions for 200 letters with current model\n\n sample_idx = sample(h_prev, C_prev, inputs[0], 200)\n txt = ''.join(idx_to_char[idx] for idx in sample_idx)\n\n # Clear and plot\n plt.plot(plot_iter, plot_loss)\n display.clear_output(wait=True)\n plt.show()\n\n #Print prediction and loss\n print(\"----\\n %s \\n----\" % (txt, ))\n print(\"iter %d, loss %f\" % (iteration, smooth_loss))", "_____no_output_____" ] ], [ [ "# Update Parameters\n\n\\begin{align}\n\\theta_i &= \\theta_i - \\eta\\frac{d\\theta_i}{\\sqrt{\\sum_{\\tau} d\\theta_{i,\\tau}^2 + \\epsilon}} \\\\\nd\\theta_i &= \\frac{\\partial L}{\\partial \\theta_i}\n\\end{align}", "_____no_output_____" ] ], [ [ "def update_paramters(params = parameters):\n for p in params.all():\n p.m += p.d * p.d # Accumulate the sum of squared gradients\n #print(learning_rate * dparam)\n p.v += -(learning_rate * p.d / np.sqrt(p.m + 1e-8))", "_____no_output_____" ] ], [ [ "Initialize the smoothed loss and the arrays used for plotting before training\n\n", "_____no_output_____" ] ], [ [ "# Exponential average of loss\n# Initialize to the error of a random model\nsmooth_loss = -np.log(1.0 / X_size) * Time_steps\n\niteration, pointer = 0, 0\n\n# For the graph\nplot_iter = np.zeros((0))\nplot_loss = np.zeros((0))", "_____no_output_____" ] ], 
[ [ "# Training Loop", "_____no_output_____" ] ], [ [ "iter = 50000\nwhile iter > 0:\n # Reset\n if pointer + Time_steps >= len(data) or iteration == 0:\n g_h_prev = np.zeros((Hidden_Layer_size, 1))\n g_C_prev = np.zeros((Hidden_Layer_size, 1))\n pointer = 0\n\n\n inputs = ([char_to_idx[ch] \n for ch in data[pointer: pointer + Time_steps]])\n targets = ([char_to_idx[ch] \n for ch in data[pointer + 1: pointer + Time_steps + 1]])\n\n loss, g_h_prev, g_C_prev = \\\n forward_backward(inputs, targets, g_h_prev, g_C_prev)\n smooth_loss = smooth_loss * 0.999 + loss * 0.001\n\n # Print every hundred steps\n if iteration % 100 == 0:\n update_status(inputs, g_h_prev, g_C_prev)\n\n update_paramters()\n\n plot_iter = np.append(plot_iter, [iteration])\n plot_loss = np.append(plot_loss, [loss])\n\n pointer += Time_steps\n iteration += 1\n iter = iter -1", "_____no_output_____" ] ], [ [ "# Quiz Question 7. \n\nRun the above code for 50000 iterations making sure that you have 100 hidden layers and time_steps is 40. What is the loss value you're seeing?", "_____no_output_____" ] ], [ [ "iter = 50000\nwhile iter > 0:\n # Reset\n if pointer + Time_steps >= len(data) or iteration == 0:\n g_h_prev = np.zeros((Hidden_Layer_size, 1))\n g_C_prev = np.zeros((Hidden_Layer_size, 1))\n pointer = 0\n\n\n inputs = ([char_to_idx[ch] \n for ch in data[pointer: pointer + Time_steps]])\n targets = ([char_to_idx[ch] \n for ch in data[pointer + 1: pointer + Time_steps + 1]])\n\n loss, g_h_prev, g_C_prev = \\\n forward_backward(inputs, targets, g_h_prev, g_C_prev)\n smooth_loss = smooth_loss * 0.999 + loss * 0.001\n\n # Print every hundred steps\n if iteration % 100 == 0:\n update_status(inputs, g_h_prev, g_C_prev)\n\n update_paramters()\n\n plot_iter = np.append(plot_iter, [iteration])\n plot_loss = np.append(plot_loss, [loss])\n\n pointer += Time_steps\n iteration += 1\n iter = iter -1", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7dd9b805276bd64df51d9f951230c70d5e1ee2a
294,938
ipynb
Jupyter Notebook
docs/HVAC_Tutorial.ipynb
hnagda/eppy
422399ada78eb9f39ae61f96b385fe41a0a19100
[ "MIT" ]
1
2019-01-06T14:16:24.000Z
2019-01-06T14:16:24.000Z
docs/HVAC_Tutorial.ipynb
hnagda/eppy
422399ada78eb9f39ae61f96b385fe41a0a19100
[ "MIT" ]
null
null
null
docs/HVAC_Tutorial.ipynb
hnagda/eppy
422399ada78eb9f39ae61f96b385fe41a0a19100
[ "MIT" ]
null
null
null
320.236699
150,462
0.916962
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7ddbad89906f0228e9725308818aa1fd3220dd2
18,300
ipynb
Jupyter Notebook
examples/user_guide/6_Trimesh.ipynb
odidev/datashader
0091d0ac48b6dd5c8a9203e1f123822aaa57bfff
[ "BSD-3-Clause" ]
758
2018-09-04T22:47:34.000Z
2019-11-14T20:13:12.000Z
examples/user_guide/6_Trimesh.ipynb
odidev/datashader
0091d0ac48b6dd5c8a9203e1f123822aaa57bfff
[ "BSD-3-Clause" ]
223
2019-11-15T19:32:54.000Z
2022-03-31T20:46:21.000Z
examples/user_guide/6_Trimesh.ipynb
odidev/datashader
0091d0ac48b6dd5c8a9203e1f123822aaa57bfff
[ "BSD-3-Clause" ]
106
2019-12-05T12:32:54.000Z
2022-03-31T15:50:00.000Z
41.496599
787
0.641038
[ [ [ "## Triangle Meshes\n\nAlong with [points](2_Points.ipynb), [timeseries](3_Timeseries.ipynb), [trajectories](4_Trajectories.ipynb), and structured [grids](5_Grids.ipynb), Datashader can rasterize large triangular meshes, such as those often used to simulate data on an irregular grid:\n\n<img src=\"../assets/images/chesbay_detail.png\" width=\"500\" height=\"500\" style=\"border-width: 1px; border-style: solid;\">\n\nAny polygon can be represented as a set of triangles, and any shape can be approximated by a polygon, so the triangular-mesh support has many potential uses. \n\nIn each case, the triangular mesh represents (part of) a *surface*, not a volume, and so the result fits directly into a 2D plane rather than requiring 3D rendering. This process of rasterizing a triangular mesh means generating values along specified regularly spaced intervals in the plane. These examples from the [Direct3D docs](https://msdn.microsoft.com/en-us/library/windows/desktop/cc627092.aspx) show how this process works, for a variety of edge cases:\n<img width=500 src=\"https://msdn.microsoft.com/dynimg/IC520311.png\"/>\n\nThis diagram uses \"pixels\" and colors (grayscale), but for datashader the generated raster is more precisely interpreted as a 2D array with bins, not pixels, because the values involved are numeric rather than colors. (With datashader, colors are assigned only in the later \"shading\" stage, not during rasterization itself.) As shown in the diagram, a pixel (bin) is treated as belonging to a given triangle if its center falls either inside that triangle or along its top or left edge.\n\nThe specific algorithm used to do so is based on the approach of [Pineda (1998)](http://people.csail.mit.edu/ericchan/bib/pdf/p17-pineda.pdf), which has the following features:\n * Classification of pixels relies on triangle convexity\n * Embarrassingly parallel linear calculations\n * Inner loop can be calculated incrementally, i.e. with very \"cheap\" computations\n \nand a few assumptions: \n * Triangles should be non overlapping (to ensure repeatable results for different numbers of cores)\n * Triangles should be specified consistently either in clockwise or in counterclockwise order of vertices (winding). \n \nTrimesh rasterization is not yet GPU-accelerated, but it's fast because of [Numba](http://numba.pydata.org) compiling Python into SIMD machine code instructions.", "_____no_output_____" ], [ "## Tiny example\n\nTo start with, let's generate a tiny set of 10 vertices at random locations:", "_____no_output_____" ] ], [ [ "import numpy as np, datashader as ds, pandas as pd\nimport datashader.utils as du, datashader.transfer_functions as tf\nfrom scipy.spatial import Delaunay\nimport dask.dataframe as dd\n\nn = 10\nnp.random.seed(2)\n\nx = np.random.uniform(size=n)\ny = np.random.uniform(size=n)\nz = np.random.uniform(0,1.0,x.shape)\n\npts = np.stack((x,y,z)).T\nverts = pd.DataFrame(np.stack((x,y,z)).T, columns=['x', 'y' , 'z'])", "_____no_output_____" ] ], [ [ "Here we have a set of random x,y locations and associated z values. We can see the numeric values with \"head\" and plot them (with color for z) using datashader's usual points plotting:", "_____no_output_____" ] ], [ [ "cvs = ds.Canvas(plot_height=400,plot_width=400)\n\ntf.Images(verts.head(15), tf.spread(tf.shade(cvs.points(verts, 'x', 'y', agg=ds.mean('z')), name='Points')))", "_____no_output_____" ] ], [ [ "To make a trimesh, we need to connect these points together into a non-overlapping set of triangles. 
One well-established way of doing so is [Delaunay triangulation](https://en.wikipedia.org/wiki/Delaunay_triangulation):", "_____no_output_____" ] ], [ [ "def triangulate(vertices, x=\"x\", y=\"y\"):\n \"\"\"\n Generate a triangular mesh for the given x,y,z vertices, using Delaunay triangulation.\n For large n, typically results in about double the number of triangles as vertices.\n \"\"\"\n triang = Delaunay(vertices[[x,y]].values)\n print('Given', len(vertices), \"vertices, created\", len(triang.simplices), 'triangles.')\n return pd.DataFrame(triang.simplices, columns=['v0', 'v1', 'v2'])", "_____no_output_____" ], [ "%time tris = triangulate(verts)", "_____no_output_____" ] ], [ [ "The result of triangulation is a set of triangles, each composed of three indexes into the vertices array. The triangle data can then be visualized by datashader's ``trimesh()`` method:", "_____no_output_____" ] ], [ [ "tf.Images(tris.head(15), tf.shade(cvs.trimesh(verts, tris)))", "_____no_output_____" ] ], [ [ "By default, datashader will rasterize your trimesh using z values [linearly interpolated between the z values that are specified at the vertices](https://en.wikipedia.org/wiki/Barycentric_coordinate_system#Interpolation_on_a_triangular_unstructured_grid). The shading will then show these z values as colors, as above. You can enable or disable interpolation as you wish:", "_____no_output_____" ] ], [ [ "from colorcet import rainbow as c\ntf.Images(tf.shade(cvs.trimesh(verts, tris, interpolate='nearest'), cmap=c, name='10 Vertices'),\n tf.shade(cvs.trimesh(verts, tris, interpolate='linear'), cmap=c, name='10 Vertices Interpolated'))", "_____no_output_____" ] ], [ [ "## More complex example\n\nThe small example above should demonstrate how triangle-mesh rasterization works, but in practice datashader is intended for much larger datasets. Let's consider a sine-based function `f` whose frequency varies with radius:", "_____no_output_____" ] ], [ [ "rad = 0.05,1.0\n\ndef f(x,y):\n rsq = x**2+y**2\n return np.where(np.logical_or(rsq<rad[0],rsq>rad[1]), np.nan, np.sin(10/rsq))", "_____no_output_____" ] ], [ [ "We can easily visualize this function by sampling it on a raster with a regular grid:", "_____no_output_____" ] ], [ [ "n = 400\n\nls = np.linspace(-1.0, 1.0, n)\nx,y = np.meshgrid(ls, ls)\nimg = f(x,y)\n\nraster = tf.shade(tf.Image(img, name=\"Raster\"))\nraster", "_____no_output_____" ] ], [ [ "However, you can see pronounced aliasing towards the center of this function, as the frequency starts to exceed the sampling density of the raster. 
Instead of sampling at regularly spaced locations like this, let's try evaluating the function at random locations whose density varies towards the center:", "_____no_output_____" ] ], [ [ "def polar_dropoff(n, r_start=0.0, r_end=1.0):\n ls = np.linspace(0, 1.0, n)\n ex = np.exp(2-5*ls)/np.exp(2)\n radius = r_start+(r_end-r_start)*ex\n theta = np.random.uniform(0.0,1.0, n)*np.pi*2.0\n x = radius * np.cos( theta )\n y = radius * np.sin( theta )\n return x,y\n\nx,y = polar_dropoff(n*n, np.sqrt(rad[0]), np.sqrt(rad[1]))\nz = f(x,y)\n\nverts = pd.DataFrame(np.stack((x,y,z)).T, columns=['x', 'y' , 'z'])", "_____no_output_____" ] ], [ [ "We can now plot the x,y points and optionally color them with the z value (the value of the function f(x,y)):", "_____no_output_____" ] ], [ [ "cvs = ds.Canvas(plot_height=400,plot_width=400)\n\ntf.Images(tf.shade(cvs.points(verts, 'x', 'y'), name='Points'),\n tf.shade(cvs.points(verts, 'x', 'y', agg=ds.mean('z')), name='PointsZ'))", "_____no_output_____" ] ], [ [ "The points are clearly covering the area of the function that needs dense sampling, and the shape of the function can (roughly) be made out when the points are colored in the plot. But let's go ahead and triangulate so that we can interpolate between the sampled values for display:", "_____no_output_____" ] ], [ [ "%time tris = triangulate(verts)", "_____no_output_____" ] ], [ [ "And let's pre-compute the combined mesh data structure for these vertices and triangles, which for very large meshes (much larger than this one!) would save plotting time later:", "_____no_output_____" ] ], [ [ "%time mesh = du.mesh(verts,tris)", "_____no_output_____" ] ], [ [ "This mesh can be used for all future plots as long as we don't change the number or ordering of vertices or triangles, which saves time for much larger grids.\n\nWe can now plot the trimesh to get an approximation of the function with noisy sampling locally to disrupt the interference patterns observed in the regular-grid version above and preserve fidelity where it is needed. (Usually one wouldn't do this just for the purposes of plotting a function, since the eventual display on a screen is a raster image no matter what, but having a variable grid is crucial if running a simulation where fine detail is needed only in certain regions.)", "_____no_output_____" ] ], [ [ "tf.shade(cvs.trimesh(verts, tris, mesh=mesh))", "_____no_output_____" ] ], [ [ "The fine detail in the heavily sampled regions is visible when zooming in closer (without resampling the function):", "_____no_output_____" ] ], [ [ "tf.Images(*([tf.shade(ds.Canvas(x_range=r, y_range=r).trimesh(verts, tris, mesh=mesh))\n for r in [(0.1,0.8), (0.14,0.4), (0.15,0.2)]]))", "_____no_output_____" ] ], [ [ "Notice that the central disk is being filled in above, even though the function is not defined in the center. That's a limitation of Delaunay triangulation, which will create convex regions covering the provided vertices. 
You can use other tools for creating triangulations that have holes, align along certain regions, have specified densities, etc., such as [MeshPy](https://mathema.tician.de/software/meshpy) (Python bindings for [Triangle](http://www.cs.cmu.edu/~quake/triangle.html)).\n\n\n### Aggregation functions\n\nLike other datashader methods, the ``trimesh()`` method accepts an ``agg`` argument (defaulting to ``mean()``) for a reduction function that determines how the values from multiple triangles will contribute to the value of a given pixel:", "_____no_output_____" ] ], [ [ "tf.Images(tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.mean('z')),name='mean'),\n tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.max('z')), name='max'),\n tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.min('z')), name='min'))", "_____no_output_____" ] ], [ [ "The three plots above should be nearly identical, except near the center disk where individual pixels start to have contributions from a large number of triangles covering different portions of the function space. In this inner ring, ``mean`` reports the average value of the surface inside that pixel, ``max`` reports the maximum value of the surface (hence being darker values in this color scheme), and ``Min`` reports the minimum value contained in each pixel. The ``min`` and ``max`` reductions are useful when looking at a very large mesh, revealing details not currently visible. For instance, if a mesh has a deep but very narrow trough, it will still show up in the ``min`` plot regardless of your raster's resolution, while it might be missed on the ``mean`` plot. \n\nOther reduction functions are useful for making a mask of the meshed area (``any``), for showing how many triangles are present in a given pixel (``count``), and for reporting the diversity of values within each pixel (``std`` and ``var``):", "_____no_output_____" ] ], [ [ "tf.Images(tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.any('z')), name='any'),\n tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.count()), name='count'),\n tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.std('z')), name='std')).cols(3)", "_____no_output_____" ] ], [ [ "### Parallelizing trimesh aggregation with Dask\nThe trimesh aggregation process can be parallelized by providing `du.mesh` and `Canvas.trimesh` with partitioned Dask dataframes.\n\n**Note:** While the calls to `Canvas.trimesh` will be parallelized across the partitions of the Dask dataframe, the construction of the partitioned mesh using `du.mesh` is not currently parallelized. Furthermore, it currently requires loading the entire `verts` and `tris` dataframes into memory in order to construct the partitioned mesh. Because of these constraints, this approach is most useful for the repeated aggregation of large meshes that fit in memory on a single multicore machine.", "_____no_output_____" ] ], [ [ "verts_ddf = dd.from_pandas(verts, npartitions=4)\ntris_ddf = dd.from_pandas(tris, npartitions=4)\nmesh_ddf = du.mesh(verts_ddf, tris_ddf)\nmesh_ddf", "_____no_output_____" ], [ "tf.shade(cvs.trimesh(verts_ddf, tris_ddf, mesh=mesh_ddf))", "_____no_output_____" ] ], [ [ "# Interactive plots\n\nBy their nature, fully exploring irregular grids needs to be interactive, because the resolution of the screen and the visual system are fixed. 
Trimesh renderings can be generated as above and then displayed interactively using the datashader support in [HoloViews](http://holoviews.org).", "_____no_output_____" ] ], [ [ "import holoviews as hv\nfrom holoviews.operation.datashader import datashade\nhv.extension(\"bokeh\")", "_____no_output_____" ] ], [ [ "\nHoloViews is designed to make working with data easier, including support for large or small trimeshes. With HoloViews, you first declare a ``hv.Trimesh`` object, then you apply the ``datashade()`` (or just ``aggregate()``) operation if the data is large enough to require datashader. Notice that HoloViews expects the triangles and vertices in the *opposite* order as datashader's ``cvs.trimesh()``, because the vertices are optional for HoloViews:", "_____no_output_____" ] ], [ [ "wireframe = datashade(hv.TriMesh((tris,verts), label=\"Wireframe\").edgepaths)\ntrimesh = datashade(hv.TriMesh((tris,hv.Points(verts, vdims='z')), label=\"TriMesh\"), aggregator=ds.mean('z'))\n\n(wireframe + trimesh).opts(width=400, height=400)", "_____no_output_____" ] ], [ [ "Here you can zoom in on either of these plots, but they will only update if you have a live Python server (not a static web page). The Wireframe plot will initially look like a collection of dots (as the triangles are all tiny), but zooming in will reveal the shape (if you are just looking at the static web page, eventually you will see individual pixels in the original datashaded rasterized plot, not the full trimesh). Notice how a few of the \"wires\" cross the center, because Delaunay triangulation has filled in the central region; other techniques as mentioned previously would be needed to avoid those.\n\nFor examples of Datashader's trimesh in use, see the [Chesapeake and Delaware Bays](https://examples.pyviz.org/bay_trimesh/bay_trimesh.html) notebook:\n\n<img src=\"../assets/images/chesapeake_farout.png\" width=\"600\">", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
e7ddbb6074ec6e802a129915878219005e181fce
98,177
ipynb
Jupyter Notebook
Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb
aibenStunner/NLP-specialization
47b602e6e98a629f9c98099a33535c0e14069f66
[ "MIT" ]
null
null
null
Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb
aibenStunner/NLP-specialization
47b602e6e98a629f9c98099a33535c0e14069f66
[ "MIT" ]
null
null
null
Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb
aibenStunner/NLP-specialization
47b602e6e98a629f9c98099a33535c0e14069f66
[ "MIT" ]
null
null
null
41.688747
1,399
0.539189
[ [ [ "# Reformer Efficient Attention: Ungraded Lab\nThe videos describe two 'reforms' made to the Transformer to make it more memory and compute efficient. The *Reversible Layers* reduce memory and *Locality Sensitive Hashing(LSH)* reduces the cost of the Dot Product attention for large input sizes. This ungraded lab will look more closely at LSH and how it is used in the Reformer model.\n\nSpecifically, the notebook has 3 goals\n* review dot-product self attention for reference\n* examine LSH based self attention\n* extend our understanding and familiarity with Trax infrastructure\n\n## Outline\n- [Part 1: Trax Efficient Attention classes](#1)\n- [Part 2: Full Dot Product Self Attention](#2)\n - [2.1 Description](#2.1)\n - [2.1.1 our_softmax](#2.1.1)\n - [2.2 our simple attend](#2.2)\n - [2.3 Class OurSelfAttention](#2.3)\n- [Part 3: Trax LSHSelfAttention](#3)\n - [3.1 Description](#3.1)\n - [3.2 our_hash_vectors](#3.2)\n - [3.3 Sorting Buckets](#3.3)\n - [3.4 Chunked dot product attention](#3.4)\n - [3.5 OurLSHSelfAttention](#3.5)\n", "_____no_output_____" ], [ "<a name=\"1\"></a>\n## Part 1.0 Trax Efficient Attention classes\nTrax is similar to other popular NN development platforms such as Keras (now integrated into Tensorflow) and Pytorch in that it uses 'layers' as a useful level of abstraction. Layers are often represented as *classes*. We're going to improve our understanding of Trax by locally extending the classes used in the attention layers. We will extend only the 'forward' functions and utilize the existing attention layers as parent classes. The original code can be found at [github:trax/layers/Research/Efficient_attention](https://github.com/google/trax/blob/v1.3.4/trax/layers/research/efficient_attention.py). This link references release 1.3.4 but note that this is under the 'research' directory as this is an area of active research. When accessing the code on Github for review on this assignment, be sure you select the 1.3.4 release tag, the master copy may have new changes.:\n<img src = \"images/C4W4_LN2_image11.PNG\" height=\"250\" width=\"250\">\n<center><b>Figure 1: Reference Tag 1.3.4 on github</b></center>\n\n\n\nWhile Trax uses classes liberally, we have not built many classes in the course so far. Let's spend a few moments reviewing the classes we will be using.\n<img src = \"images/C4W4_LN2_image1.PNG\" height=\"788\" width=\"1561\">\n\n<center><b>Figure 2: Classes from Trax/layers/Research/Efficient_Attention.py that we will be utilizing.</b></center>\n\n\n", "_____no_output_____" ], [ "Starting on the right in the diagram below you see EfficientAttentionBase. The parent to this class is the base.layer which has the routines used by all layers. EfficientAttentionBase leaves many routines to be overridden by child classes - but it has an important feature in the *Forward* routine. It supports a `use_reference_code` capability that selects implementations that limit some of the complexities to provide a more easily understood version of the algorithms. In particular, it implements a nested loop that treats each *'example, head'* independently. This simplifies our work as we need only worry about matrix operations on one *'example, head'* at a time. This loop calls *forward_unbatched*, which is the child process that we will be overriding.\n\nOn the top left are the outlines of the two child classes we will be using. The SelfAttention layer is a 'traditional' implementation of the dot product attention. 
We will be implementing the *forward_unbatched* version of this to highlight the differences between this and the LSH implementation.\n\nBelow that is the LSHSelfAttention. This is the routine used in the Reformer architecture. We will override the *forward_unbatched* section of this and some of the utility functions it uses to explore its implementation in more detail.\n\nThe code we will be working with is from the Trax source, and as such has implementation details that will make it a bit harder to follow. However, it will allow use of the results along with the rest of the Trax infrastructure. I will try to briefly describe these as they arise. The [Trax documentation](https://trax-ml.readthedocs.io/en/latest/) can also be referenced.", "_____no_output_____" ], [ "<a name=\"1.2\"></a>\n## Part 1.2 Trax Details\nThe goal in this notebook is to override a few routines in the Trax classes with our own versions. To maintain their functionality in a full Trax environment, many of the details we might ignore in example version of routines will be maintained in this code. Here are some of the considerations that may impact our code:\n* Trax operates with multiple back-end libraries, we will see special cases that will utilize unique features.\n* 'Fancy' numpy indexing is not supported in all backend environments and must be emulated in other ways.\n* Some operations don't have gradients for backprop and must be ignored or include forced re-evaluation.\n\nHere are some of the functions we may see:\n* Abstracted as `fastmath`, Trax supports multiple backend's such as [Jax](https://github.com/google/jax) and [Tensorflow2](https://github.com/tensorflow/tensorflow)\n* [tie_in](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.tie_in.html): Some non-numeric operations must be invoked during backpropagation. Normally, the gradient compute graph would determine invocation but these functions are not included. To force re-evaluation, they are 'tied' to other numeric operations using tie_in.\n* [stop_gradient](https://trax-ml.readthedocs.io/en/latest/trax.fastmath.html): Some operations are intentionally excluded from backprop gradient calculations by setting their gradients to zero.\n* Below we will execute `from trax.fastmath import numpy as np `, this uses accelerated forms of numpy functions. This is, however a *subset* of numpy", "_____no_output_____" ] ], [ [ "import os\nimport trax\nfrom trax import layers as tl # core building block\nimport jax\nfrom trax import fastmath # uses jax, offers numpy on steroids\n\n# fastmath.use_backend('tensorflow-numpy')\nimport functools\nfrom trax.fastmath import numpy as np # note, using fastmath subset of numpy!\nfrom trax.layers import (\n tie_in,\n length_normalized,\n apply_broadcasted_dropout,\n look_adjacent,\n permute_via_gather,\n permute_via_sort,\n)", "INFO:tensorflow:tokens_length=568 inputs_length=512 targets_length=114 noise_density=0.15 mean_noise_span_length=3.0 \n" ] ], [ [ "<a name=\"2\"></a>\n## Part 2 Full Dot-Product Self Attention\n<a name=\"1.2\"></a>\n### Part 2.1 Description\n<img src = \"images/C4W4_LN2_image2.PNG\" height=\"200\" width=\"600\">\n\n<center><b>Figure 3: Project datapath and primary data structures and where they are implemented</b></center>\n\nThe diagram above shows many of the familiar data structures and operations related to attention and describes the routines in which they are implemented. We will start by working on *our_simple_attend* or our simpler version of the original *attend* function. 
We will review the steps in performing dot-product attention with more focus on the details of the operations and their significance. This is useful when comparing to LSH attention. Note we will be discussing a single example/head unless otherwise specified.\n\n<img src = \"images/C4W4_LN2_image3.PNG\" height=\"250\" width=\"700\">\n\n<center><b>Figure 4: dot-product of Query and Key</b></center>\n\nThe *attend* function receives *Query* and *Key*. As a reminder, they are produced by a matrix multiply of all the inputs with a single set of weights. We will describe the inputs as *embeddings* assuming an NLP application, however, this is not required. This matrix multiply very much like a convolutional network where a set of weights (a filter) slide across the input vectors leaving behind a map of the similarity of the input to the filter. In this case, the filters are the weight matrices $W^Q$ and $W^K$. The resulting maps are Q and K. Q and K have the dimensions of (n_seq, n_q) where n_seq is the number input embeddings and n_q or n_k is the selected size of the Q or K vectors. Note the shading of Q and K, this reflects the fact that each entry is associated with a particular input embedding. You will note later in the code that K is optional. Apparently, similar results can be achieved using Query alone saving the compute and storage associated with K. In that case, the dot-product in *attend* is matmul(q,q). Note the resulting dot-product (*Dot*) entries describe a complete (n_seq,n_seq) map of the similarity of all entries of q vs all entries of k. This is reflected in the notation in the dot-product boxes of $w_n$,$w_m$ representing word_n, word_m. Note that each row of *Dot* describes the relationship of an input embedding, say $w_0$, with every other input.\n", "_____no_output_____" ], [ "In some applications some values are masked. This can be used, for example to exclude results that occur later in time (causal) or to mask padding or other inputs.\n<img src = \"images/C4W4_LN2_image4.PNG\" height=\"300\" width=\"900\">\n\n<center><b>Figure 5: Masking</b></center>\n\n\nThe routine below *mask_self_attention* implements a flexible masking capability. The masking is controlled by the information in q_info and kv_info.", "_____no_output_____" ] ], [ [ "def mask_self_attention(\n dots, q_info, kv_info, causal=True, exclude_self=True, masked=False\n):\n \"\"\"Performs masking for self-attention.\"\"\"\n if causal:\n mask = fastmath.lt(q_info, kv_info).astype(np.float32)\n dots = dots - 1e9 * mask\n if exclude_self:\n mask = np.equal(q_info, kv_info).astype(np.float32)\n dots = dots - 1e5 * mask\n if masked:\n zeros_like_kv_info = tie_in(kv_info, np.zeros_like(kv_info))\n mask = fastmath.lt(kv_info, zeros_like_kv_info).astype(np.float32)\n dots = dots - 1e9 * mask\n return dots", "_____no_output_____" ] ], [ [ "A SoftMax is applied per row of the *Dot* matrix to scale the values in the row between 0 and 1.\n<img src = \"images/C4W4_LN2_image5.PNG\" height=\"300\" width=\"900\">\n\n<center><b>Figure 6: SoftMax per row of Dot</b></center>\n", "_____no_output_____" ], [ "<a name=\"2.1.1\"></a>\n### Part 2.1.1 our_softmax", "_____no_output_____" ], [ "This code uses a separable form of the softmax calculation. 
Recall the softmax:\n$$ softmax(x_i)=\\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}\\tag{1}$$\nThis can alternatively be implemented as:\n$$ logsumexp(x)=\\log{\\left(\\sum_j \\exp(x_j)\\right)}\\tag{2}$$\n$$ softmax(x_i)=\\exp({x_i - logsumexp(x)})\\tag{3}$$\nThe work below will maintain a copy of the logsumexp, allowing the softmax to be completed in sections. You will see how this is useful later in the LSHSelfAttention class.\nWe'll create a routine to implement that here, with the addition of a passthrough. The matrix operations we will be working on below are easier to follow if we can maintain integer values, so for tests we will skip the softmax in some cases.", "_____no_output_____" ] ], [ [ "def our_softmax(x, passthrough=False):\n \"\"\" softmax with passthrough\"\"\"\n logsumexp = fastmath.logsumexp(x, axis=-1, keepdims=True)\n o = np.exp(x - logsumexp)\n if passthrough:\n return (x, np.zeros_like(logsumexp))\n else:\n return (o, logsumexp)", "_____no_output_____" ] ], [ [ "Let's check our implementation.", "_____no_output_____" ] ], [ [ "## compare softmax(a) using both methods\na = np.array([1.0, 2.0, 3.0, 4.0])\nsma = np.exp(a) / sum(np.exp(a))\nprint(sma)\nsma2, a_logsumexp = our_softmax(a)\nprint(sma2)\nprint(a_logsumexp)", "[0.0320586 0.08714432 0.2368828 0.6439142 ]\n[0.0320586 0.08714431 0.23688279 0.64391416]\n[4.44019]\n" ] ], [ [ "The purpose of the dot-product is to 'focus attention' on some of the inputs. Dot now has entries appropriately scaled to enhance some values and reduce others. These are now applied to the $V$ entries.\n<img src = \"images/C4W4_LN2_image6.PNG\" height=\"300\" width=\"900\">\n\n<center><b>Figure 7: Applying Attention to $V$</b></center>\n\n$V$ is of size (n_seq,n_v). Note the shading in the diagram. This is to draw attention to the operation of the matrix multiplication, which is detailed below.\n\n<img src = \"images/C4W4_LN2_image7.PNG\" height=\"300\" width=\"600\"/>\n\n<center><b>Figure 8: The Matrix Multiply applies attention to the values of V</b></center>\n\n$V$ is formed by a matrix multiply of the input embedding with the weight matrix $W^v$ whose values were set by backpropagation. The row entries of $V$ are then related to the corresponding input embedding. The matrix multiply weights the first column of $V$, representing a section of each of the input embeddings, with the first row of Dot, representing the similarity of $w_0$ to each word of the input embedding, and deposits the value in $Z$.", "_____no_output_____" ], [ "<a name=\"2.2\"></a>\n### Part 2.2 our_simple_attend\nIn this section we'll work on an implementation of *attend* whose operations you can see in Figure 3. It is a slightly simplified version of the routine in [efficient_attention.py](https://github.com/google/trax/blob/v1.3.4/trax/layers/research/efficient_attention.py). We will fill in a few lines of code. The main goal is to become familiar with the routine. 
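\n\nBefore filling in the code, here is a minimal sketch of the three steps on toy data (our own illustration, reusing `our_softmax` from above; all variable names here are ours):\n\n```\ntoy_q = np.array([[1.0, 0.0], [0.0, 1.0]]) # 2 queries of dimension 2\ntoy_k = np.array([[1.0, 0.0], [0.0, 1.0]]) # 2 keys\ntoy_v = np.array([[10.0, 0.0], [0.0, 10.0]]) # 2 values\ntoy_dots = np.matmul(toy_q, np.swapaxes(toy_k, -1, -2)) # Step 1: q-k similarity map\ntoy_dots, _ = our_softmax(toy_dots) # Step 2: scale each row to sum to 1\nprint(np.matmul(toy_dots, toy_v)) # Step 3: approximately [[7.31 2.69] [2.69 7.31]]\n```\n\n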
You have implemented similar functionality in a previous assignment.\n\n**Instructions**\n**Step 1:** matrix multiply (np.matmul) q and the k 'transpose' kr.\n**Step 2:** use our_softmax() to perform a softmax on masked output of the dot product, dots.\n**Step 3:** matrix multiply (np.matmul) dots and v.", "_____no_output_____" ] ], [ [ "def our_simple_attend(\n q,\n k=None,\n v=None,\n mask_fn=None,\n q_info=None,\n kv_info=None,\n dropout=0.0,\n rng=None,\n verbose=False,\n passthrough=False,\n):\n \"\"\"Dot-product attention, with masking, without optional chunking and/or.\n\n Args:\n q: Query vectors, shape [q_len, d_qk]\n k: Key vectors, shape [kv_len, d_qk]; or None\n v: Value vectors, shape [kv_len, d_v]\n mask_fn: a function reference that implements masking (e.g. mask_self_attention)\n q_info: Query-associated metadata for masking\n kv_info: Key-associated metadata for masking\n dropout: Dropout rate\n rng: RNG for dropout\n\n Returns:\n A tuple (output, dots_logsumexp). The output has shape [q_len, d_v], and\n dots_logsumexp has shape [q_len]. The logsumexp of the attention\n probabilities is useful for combining multiple rounds of attention (as in\n LSH attention).\n \"\"\"\n assert v is not None\n share_qk = k is None\n if share_qk:\n k = q\n if kv_info is None:\n kv_info = q_info\n\n if share_qk:\n k = length_normalized(k)\n k = k / np.sqrt(k.shape[-1])\n\n # Dot-product attention.\n kr = np.swapaxes(k, -1, -2) # note the fancy transpose for later..\n\n ## Step 1 ##\n dots = np.matmul(q, kr)\n if verbose:\n print(\"Our attend dots\", dots.shape)\n\n # Masking\n if mask_fn is not None:\n dots = mask_fn(dots, q_info[..., :, None], kv_info[..., None, :])\n\n # Softmax.\n # dots_logsumexp = fastmath.logsumexp(dots, axis=-1, keepdims=True) #original\n # dots = np.exp(dots - dots_logsumexp) #original\n ## Step 2 ##\n # replace with our_softmax()\n dots, dots_logsumexp = our_softmax(dots)\n if verbose:\n print(\"Our attend dots post softmax\", dots.shape, dots_logsumexp.shape)\n\n if dropout > 0.0:\n assert rng is not None\n # Dropout is broadcast across the bin dimension\n dropout_shape = (dots.shape[-2], dots.shape[-1])\n keep_prob = tie_in(dots, 1.0 - dropout)\n keep = fastmath.random.bernoulli(rng, keep_prob, dropout_shape)\n multiplier = keep.astype(dots.dtype) / tie_in(keep, keep_prob)\n dots = dots * multiplier\n\n ## Step 3 ##\n # The softmax normalizer (dots_logsumexp) is used by multi-round LSH attn.\n out = np.matmul(dots, v)\n if verbose:\n print(\"Our attend out1\", out.shape)\n out = np.reshape(out, (-1, out.shape[-1]))\n if verbose:\n print(\"Our attend out2\", out.shape)\n dots_logsumexp = np.reshape(dots_logsumexp, (-1,))\n return out, dots_logsumexp", "_____no_output_____" ], [ "seq_len = 8\nemb_len = 5\nd_qk = 3\nd_v = 4\nwith fastmath.use_backend(\"jax\"): # specify the backend for consistency\n rng_attend = fastmath.random.get_prng(1)\n q = k = jax.random.uniform(rng_attend, (seq_len, d_qk), dtype=np.float32)\n v = jax.random.uniform(rng_attend, (seq_len, d_v), dtype=np.float32)\n o, logits = our_simple_attend(\n q,\n k,\n v,\n mask_fn=None,\n q_info=None,\n kv_info=None,\n dropout=0.0,\n rng=rng_attend,\n verbose=True,\n )\nprint(o, \"\\n\", logits)", "Our attend dots (8, 8)\nOur attend dots post softmax (8, 8) (8, 1)\nOur attend out1 (8, 4)\nOur attend out2 (8, 4)\n[[0.5606324 0.7290605 0.5251243 0.47101074]\n [0.5713517 0.71991956 0.5033342 0.46975708]\n [0.5622886 0.7288458 0.52172124 0.46318397]\n [0.5568317 0.72234154 0.542236 0.4699722 ]\n [0.56504494 
0.72274375 0.5204978 0.47231334]\n [0.56175965 0.7216782 0.53293145 0.48003793]\n [0.56753993 0.72232544 0.5141734 0.46625748]\n [0.57100445 0.70785505 0.5325362 0.4590797 ]] \n [2.6512175 2.1914332 2.6630518 2.7792363 2.4583826 2.5421977 2.4145055\n 2.5111294]\n" ] ], [ [ "<details>\n<summary>\n <font size=\"3\"><b> Expected Output </b></font>\n</summary>\n\n**Expected Output**\n```\nOur attend dots (8, 8)\nOur attend dots post softmax (8, 8) (8, 1)\nOur attend out1 (8, 4)\nOur attend out2 (8, 4)\n[[0.5606324 0.7290605 0.5251243 0.47101074]\n [0.5713517 0.71991956 0.5033342 0.46975708]\n [0.5622886 0.7288458 0.52172124 0.46318397]\n [0.5568317 0.72234154 0.542236 0.4699722 ]\n [0.56504494 0.72274375 0.5204978 0.47231334]\n [0.56175965 0.7216782 0.53293145 0.48003793]\n [0.56753993 0.72232544 0.5141734 0.46625748]\n [0.57100445 0.70785505 0.5325362 0.4590797 ]]\n [2.6512175 2.1914332 2.6630518 2.7792363 2.4583826 2.5421977 2.4145055\n 2.5111294]```", "_____no_output_____" ], [ "<details>\n<summary>\n <font size=\"3\"><b> completed code for reference </b></font>\n</summary>\n This notebook is ungraded, so for reference, the completed code follows:\n\n```\ndef our_simple_attend(\n q, k=None, v=None,\n mask_fn=None, q_info=None, kv_info=None,\n dropout=0.0, rng=None, verbose=False, passthrough=False\n ):\n \"\"\"Dot-product attention, with masking, without optional chunking and/or.\n\n Args:\n q: Query vectors, shape [q_len, d_qk]\n k: Key vectors, shape [kv_len, d_qk]; or None\n v: Value vectors, shape [kv_len, d_v]\n mask_fn: a function reference that implements masking (e.g. mask_self_attention)\n q_info: Query-associated metadata for masking\n kv_info: Key-associated metadata for masking\n dropout: Dropout rate\n rng: RNG for dropout\n\n Returns:\n A tuple (output, dots_logsumexp). The output has shape [q_len, d_v], and\n dots_logsumexp has shape [q_len]. 
The logsumexp of the attention\n probabilities is useful for combining multiple rounds of attention (as in\n LSH attention).\n \"\"\"\n assert v is not None\n share_qk = (k is None)\n if share_qk:\n k = q\n if kv_info is None:\n kv_info = q_info\n\n if share_qk:\n k = length_normalized(k)\n k = k / np.sqrt(k.shape[-1])\n\n # Dot-product attention.\n kr = np.swapaxes(k, -1, -2) #note the fancy transpose for later..\n\n## Step 1 ##\n dots = np.matmul(q, kr )\n if verbose: print(\"Our attend dots\", dots.shape)\n\n # Masking\n if mask_fn is not None:\n dots = mask_fn(dots, q_info[..., :, None], kv_info[..., None, :])\n\n # Softmax.\n #dots_logsumexp = fastmath.logsumexp(dots, axis=-1, keepdims=True) #original\n #dots = np.exp(dots - dots_logsumexp) #original\n## Step 2 ##\n #replace with our_softmax()\n dots, dots_logsumexp = our_softmax(dots, passthrough=passthrough)\n if verbose: print(\"Our attend dots post softmax\", dots.shape, dots_logsumexp.shape)\n\n if dropout > 0.0:\n assert rng is not None\n # Dropout is broadcast across the bin dimension\n dropout_shape = (dots.shape[-2], dots.shape[-1])\n keep_prob = tie_in(dots, 1.0 - dropout)\n keep = fastmath.random.bernoulli(rng, keep_prob, dropout_shape)\n multiplier = keep.astype(dots.dtype) / tie_in(keep, keep_prob)\n dots = dots * multiplier\n\n## Step 3 ##\n# The softmax normalizer (dots_logsumexp) is used by multi-round LSH attn.\n out = np.matmul(dots, v)\n if verbose: print(\"Our attend out1\", out.shape)\n out = np.reshape(out, (-1, out.shape[-1]))\n if verbose: print(\"Our attend out2\", out.shape)\n dots_logsumexp = np.reshape(dots_logsumexp, (-1,))\n return out, dots_logsumexp\n```", "_____no_output_____" ], [ "<a name=\"2.3\"></a>\n## Part 2.3 Class OurSelfAttention\nHere we create our own self attention layer by creating a class `OurSelfAttention`. The parent class will be the tl.SelfAttention layer in Trax. We will only override the `forward_unbatched` routine.\nWe're not asking you to modify anything in this routine. There are some comments to draw your attention to a few lines.", "_____no_output_____" ] ], [ [ "class OurSelfAttention(tl.SelfAttention):\n \"\"\"Our self-attention. 
Just the Forward Function.\"\"\"\n\n def forward_unbatched(\n self, x, mask=None, *, weights, state, rng, update_state, verbose=False\n ):\n print(\"ourSelfAttention:forward_unbatched\")\n del update_state\n attend_rng, output_rng = fastmath.random.split(rng)\n if self.bias:\n if self.share_qk:\n w_q, w_v, w_o, b_q, b_v = weights\n else:\n w_q, w_k, w_v, w_o, b_q, b_k, b_v = weights\n else:\n if self.share_qk:\n w_q, w_v, w_o = weights\n else:\n w_q, w_k, w_v, w_o = weights\n\n print(\"x.shape,w_q.shape\", x.shape, w_q.shape)\n q = np.matmul(x, w_q)\n k = None\n if not self.share_qk:\n k = np.matmul(x, w_k)\n v = np.matmul(x, w_v)\n\n if self.bias:\n q = q + b_q\n if not self.share_qk:\n k = k + b_k\n v = v + b_v\n\n mask_fn = functools.partial(\n mask_self_attention,\n causal=self.causal,\n exclude_self=self.share_qk,\n masked=self.masked,\n )\n q_info = kv_info = tie_in(x, np.arange(q.shape[-2], dtype=np.int32))\n\n assert (mask is not None) == self.masked\n if self.masked:\n # mask is a boolean array (True means \"is valid token\")\n ones_like_mask = tie_in(x, np.ones_like(mask, dtype=np.int32))\n kv_info = kv_info * np.where(mask, ones_like_mask, -ones_like_mask)\n\n # Notice, we are callout our vesion of attend\n o, _ = our_simple_attend(\n q,\n k,\n v,\n mask_fn=mask_fn,\n q_info=q_info,\n kv_info=kv_info,\n dropout=self.attention_dropout,\n rng=attend_rng,\n verbose=True,\n )\n\n # Notice, wo weight matrix applied to output of attend in forward_unbatched\n out = np.matmul(o, w_o)\n out = apply_broadcasted_dropout(out, self.output_dropout, output_rng)\n return out, state", "_____no_output_____" ], [ "causal = False\nmasked = False\nmask = None\nattention_dropout = 0.0\nn_heads = 3\nd_qk = 3\nd_v = 4\nseq_len = 8\nemb_len = 5\nbatch_size = 1\n\nosa = OurSelfAttention(\n n_heads=n_heads,\n d_qk=d_qk,\n d_v=d_v,\n causal=causal,\n use_reference_code=True,\n attention_dropout=attention_dropout,\n mode=\"train\",\n)\n\nrng_osa = fastmath.random.get_prng(1)\nx = jax.random.uniform(\n jax.random.PRNGKey(0), (batch_size, seq_len, emb_len), dtype=np.float32\n)\n_, _ = osa.init(tl.shapes.signature(x), rng=rng_osa)", "_____no_output_____" ], [ "osa(x)", "ourSelfAttention:forward_unbatched\nx.shape,w_q.shape (8, 5) (5, 3)\nOur attend dots (8, 8)\nOur attend dots post softmax (8, 8) (8, 1)\nOur attend out1 (8, 4)\nOur attend out2 (8, 4)\nourSelfAttention:forward_unbatched\nx.shape,w_q.shape (8, 5) (5, 3)\nOur attend dots (8, 8)\nOur attend dots post softmax (8, 8) (8, 1)\nOur attend out1 (8, 4)\nOur attend out2 (8, 4)\nourSelfAttention:forward_unbatched\nx.shape,w_q.shape (8, 5) (5, 3)\nOur attend dots (8, 8)\nOur attend dots post softmax (8, 8) (8, 1)\nOur attend out1 (8, 4)\nOur attend out2 (8, 4)\n" ] ], [ [ "<details>\n<summary>\n <font size=\"3\"><b> Expected Output </b></font>\n</summary>\n\n**Expected Output**\nNotice a few things:\n* the w_q (and w_k) matrices are applied to each row or each embedding on the input. This is similar to the filter operation in convolution\n* forward_unbatched is called 3 times. 
This is because we have 3 heads in this example.\n\n```\nourSelfAttention:forward_unbatched\nx.shape,w_q.shape (8, 5) (5, 3)\nOur attend dots (8, 8)\nOur attend dots post softmax (8, 8) (8, 1)\nOur attend out1 (8, 4)\nOur attend out2 (8, 4)\nourSelfAttention:forward_unbatched\nx.shape,w_q.shape (8, 5) (5, 3)\nOur attend dots (8, 8)\nOur attend dots post softmax (8, 8) (8, 1)\nOur attend out1 (8, 4)\nOur attend out2 (8, 4)\nourSelfAttention:forward_unbatched\nx.shape,w_q.shape (8, 5) (5, 3)\nOur attend dots (8, 8)\nOur attend dots post softmax (8, 8) (8, 1)\nOur attend out1 (8, 4)\nOur attend out2 (8, 4)\nDeviceArray([[[ 6.70414209e-01, -1.04319841e-01, -5.33822298e-01,\n 1.92711830e-01, -4.54187393e-05],\n [ 6.64090097e-01, -1.01875424e-01, -5.35733163e-01,\n 1.88311756e-01, -6.30629063e-03],\n [ 6.73380017e-01, -1.06952369e-01, -5.31989932e-01,\n 1.90056816e-01, 1.30271912e-03],\n [ 6.84564888e-01, -1.13240272e-01, -5.50182462e-01,\n 1.95673436e-01, 5.47635555e-03],\n [ 6.81435883e-01, -1.11068964e-01, -5.32343209e-01,\n 1.91912338e-01, 5.69400191e-03],\n [ 6.80724978e-01, -1.08496904e-01, -5.34994125e-01,\n 1.96332246e-01, 5.89773059e-03],\n [ 6.80933356e-01, -1.14087075e-01, -5.18659890e-01,\n 1.90674081e-01, 1.14096403e-02],\n [ 6.80265009e-01, -1.09031796e-01, -5.38248718e-01,\n 1.94203183e-01, 4.23943996e-03]]], dtype=float32)\n```\n\n", "_____no_output_____" ], [ "<a name=\"3\"></a>\n## Part 3.0 Trax LSHSelfAttention\n<a name=\"3.1\"></a>\n## Part 3.1 Description\nThe larger the matrix multiply in the previous section is, the more context can be taken into account when making the next decision. However, the self attention dot product grows as the size of the input squared. For example, if one wished to have an input size of 1024, that would result in $1024^2$ or over a million dot products for each head! As a result, there has been significant research related to reducing the compute requirements. One such approach is Locality Sensitive Hashing(LSH) Self Attention.\n\nYou may recall, earlier in the course you utilized LSH to find similar tweets without resorting to calculating cosine similarity for each pair of embeddings. We will use a similar approach here. It may be best described with an example.\n<img src = \"images/C4W4_LN2_image8.PNG\" height=\"400\" width=\"750\">\n\n<center><b>Figure 9: Example of LSH Self Attention</b></center>\n\n", "_____no_output_____" ], [ "LSH Self attention uses Queries only, no Keys. Attention then generates a metric of the similarity of each value of Q relative to all the other values in Q. An earlier assignment demonstrated that values which hash to the same bucket are likely to be similar. Further, multiple random hashes can improve the chances of finding entries which are similar. This is the approach taken here, though the hash is implemented a bit differently. The values of Q are hashed into buckets using a randomly generated set of hash vectors. Multiple sets of hash vectors are used, generating multiple hash tables. In the figure above, we have 3 hash tables with 4 buckets in each table. Notionally, following the hash, the values of Q have been replicated 3 times and distributed to their appropriate bucket in each of the 3 tables. To find similarity then, one generates dot-products only between members of the buckets. The result of this operation provides information on which entries are similar. 
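\n\nTo get a feel for the payoff, consider a rough back-of-the-envelope count (our own numbers, assuming the buckets come out roughly even and ignoring attention to adjacent chunks): full attention on a sequence of $n=1024$ forms $n^2 \\approx 10^6$ query-key dot products per head, while LSH attention with 3 hashes and chunks of 32 entries forms roughly $3 \\cdot n \\cdot 32 \\approx 10^5$, about an order of magnitude fewer, and the gap widens as $n$ grows.\n\n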
As the operation has been distributed over multiple hash tables, these results need to be combined to form a complete picture, which can then be used to generate a reduced dot-product attention array. It's clear that, because we do not compare every value against every other value, the size of *Dots* will be reduced.\n\nThe challenge in this approach is getting it to operate efficiently. You may recall from the earlier assignments that the buckets were lists of entries with varying lengths. This operates poorly on a vector processing machine such as a GPU or TPU. Ideally, operations are done in large blocks with uniform sizes. While it is straightforward to implement the hash algorithm this way, it is challenging to manage buckets and variable-sized dot-products. This will be discussed further below. For now, we will examine and implement the hash function.", "_____no_output_____" ], [ "<a name=\"3.2\"></a>\n## Part 3.2 our_hash_vectors", "_____no_output_____" ], [ "*our_hash_vectors* is a reimplementation of Trax's *hash_vectors*. It takes in an array of vectors, hashes the entries, and returns an array assigning each input vector to one bucket per hash table (n_hashes assignments in total). Hashing is described as creating *random rotations*, see [Practical and Optimal LSH for Angular Distance](https://arxiv.org/pdf/1509.02897.pdf).\n\n<img src = \"images/C4W4_LN2_image9.PNG\" height=\"400\" width=\"750\">\n<img src = \"images/C4W4_LN2_image10.PNG\" height=\"400\" width=\"750\">\n<center><b>Figure 10: Processing steps in our_hash_vectors </b></center>\n\nNote, in the diagram, the sizes relate to our expected input $Q$, while our_hash_vectors is written assuming a generic input vector.\n", "_____no_output_____" ], [ "**Instructions**\n**Step 1**\nCreate an array of random normal vectors which will be our hash vectors. For each of the `n_hashes` hash tables we create `rot_size//2` rotation vectors; we use `rot_size//2` rather than `rot_size` to reduce computation, since later in the routine we will form the negative rotations with a simple negation and concatenate to get the full `rot_size` set of rotations.\n * use fastmath.random.normal and create an array of random vectors of shape (vecs.shape[-1], n_hashes, rot_size//2)\n\n**Step 2** In this step we simply do the matrix multiply. `jax` has an accelerated version of [einsum](https://numpy.org/doc/stable/reference/generated/numpy.einsum.html). Here we will utilize more conventional routines.\n\n**Step 2x**\n * 2a: np.reshape random_rotations into a 2 dimensional array ([-1, n_hashes * (rot_size // 2)])\n * 2b: np.dot vecs and random_rotations forming our rotated_vecs\n * 2c: back to 3 dimensions with np.reshape [-1, n_hashes, rot_size//2]\n * 2d: prepare for concatenating by swapping dimensions with np.transpose (1, 0, 2)\n**Step 3** Here we concatenate our rotation vectors, getting the full `rot_size` number of buckets (note, n_buckets = rot_size)\n * use np.concatenate, [rotated_vecs, -rotated_vecs], axis=-1\n**Step 4** **This is the exciting step!** You have no doubt been wondering how we will turn these vectors into bucket indexes. By performing np.argmax over the rotations for a given entry, you get the index of the best match! We will use this as a bucket index.\n * np.argmax(...).astype(np.int32); be sure to use the correct axis!\n**Step 5** In this style of hashing, items which land in bucket 0 of hash table 0 are not necessarily similar to those landing in bucket 0 of hash table 1, so we keep them separate. 
We do this by offsetting the bucket numbers by 'n_buckets'.\n* add buckets and offsets and reshape into a one dimensional array\nThis will return a 1D array of size n_hashes * vec.shape[0].", "_____no_output_____" ] ], [ [ "def our_hash_vectors(vecs, rng, n_buckets, n_hashes, mask=None, verbose=False):\n \"\"\" \n Args:\n vecs: tensor of at least 2 dimension, \n rng: random number generator\n n_buckets: number of buckets in each hash table\n n_hashes: the number of hash tables\n mask: None indicating no mask or a 1D boolean array of length vecs.shape[0], containing the location of padding value\n verbose: controls prints for debug\n Returns:\n A vector of size n_hashes * vecs.shape[0] containing the buckets associated with each input vector per hash table.\n \n \"\"\"\n\n # check for even, integer bucket sizes\n assert isinstance(n_buckets, int) and n_buckets % 2 == 0\n\n rng = fastmath.stop_gradient(tie_in(vecs, rng))\n rot_size = n_buckets\n ### Start Code Here\n\n ### Step 1 ###\n rotations_shape = (vecs.shape[-1], n_hashes, rot_size // 2)\n random_rotations = fastmath.random.normal(rng, rotations_shape).astype(np.float32)\n if verbose:\n print(\"random.rotations.shape\", random_rotations.shape)\n\n ### Step 2 ###\n if fastmath.backend_name() == \"jax\":\n rotated_vecs = np.einsum(\"tf,fhb->htb\", vecs, random_rotations)\n print(\"using jax\")\n else:\n # Step 2a\n random_rotations = np.reshape(random_rotations, ([-1, n_hashes * (rot_size // 2)]))\n if verbose:\n print(\"random_rotations reshaped\", random_rotations.shape)\n # Step 2b\n rotated_vecs = np.dot(vecs, random_rotations)\n if verbose:\n print(\"rotated_vecs1\", rotated_vecs.shape)\n # Step 2c\n rotated_vecs = np.reshape(rotated_vecs, [-1, n_hashes, rot_size//2])\n if verbose:\n print(\"rotated_vecs2\", rotated_vecs.shape)\n # Step 2d\n rotated_vecs = np.transpose(rotated_vecs, (1, 0, 2))\n if verbose:\n print(\"rotated_vecs3\", rotated_vecs.shape)\n\n ### Step 3 ###\n rotated_vecs = np.concatenate([rotated_vecs, -rotated_vecs], axis=-1)\n if verbose:\n print(\"rotated_vecs.shape\", rotated_vecs.shape)\n ### Step 4 ###\n buckets = np.argmax(rotated_vecs, axis=-1).astype(np.int32)\n if verbose:\n print(\"buckets.shape\", buckets.shape)\n if verbose:\n print(\"buckets\", buckets)\n\n if mask is not None:\n n_buckets += 1 # Create an extra bucket for padding tokens only\n buckets = np.where(mask[None, :], buckets, n_buckets - 1)\n\n # buckets is now (n_hashes, seqlen). Next we add offsets so that\n # bucket numbers from different hashing rounds don't overlap.\n offsets = tie_in(buckets, np.arange(n_hashes, dtype=np.int32))\n offsets = np.reshape(offsets * n_buckets, (-1, 1))\n ### Step 5 ###\n buckets = np.reshape(buckets + offsets, (-1,))\n if verbose:\n print(\"buckets with offsets\", buckets.shape, \"\\n\", buckets)\n ### End Code Here\n return buckets", "_____no_output_____" ], [ "# example code. 
Note for reference, the sizes in this example match the values in the diagram above.\nohv_q = np.ones((8, 5)) # (seq_len=8, n_q=5)\nohv_n_buckets = 4 # even number\nohv_n_hashes = 3\nwith fastmath.use_backend(\"tf\"):\n ohv_rng = fastmath.random.get_prng(1)\n ohv = our_hash_vectors(\n ohv_q, ohv_rng, ohv_n_buckets, ohv_n_hashes, mask=None, verbose=True\n )\n print(\"ohv shape\", ohv.shape, \"\\nohv\", ohv) # (ohv_n_hashes * ohv_n_buckets)\n# note the random number generators do not produce the same results with different backends\nwith fastmath.use_backend(\"jax\"):\n ohv_rng = fastmath.random.get_prng(1)\n ohv = our_hash_vectors(ohv_q, ohv_rng, ohv_n_buckets, ohv_n_hashes, mask=None)\n print(\"ohv shape\", ohv.shape, \"\\nohv\", ohv) # (ohv_n_hashes * ohv_n_buckets)", "random.rotations.shape (5, 3, 2)\nrandom_rotations reshaped (5, 6)\nrotated_vecs1 (8, 6)\nrotated_vecs2 (8, 3, 2)\nrotated_vecs3 (3, 8, 2)\nrotated_vecs.shape (3, 8, 4)\nbuckets.shape (3, 8)\nbuckets ndarray<tf.Tensor(\n[[3 3 3 3 3 3 3 3]\n [3 3 3 3 3 3 3 3]\n [3 3 3 3 3 3 3 3]], shape=(3, 8), dtype=int32)>\nbuckets with offsets (24,) \n ndarray<tf.Tensor([ 3 3 3 3 3 3 3 3 7 7 7 7 7 7 7 7 11 11 11 11 11 11 11 11], shape=(24,), dtype=int32)>\nohv shape (24,) \nohv ndarray<tf.Tensor([ 3 3 3 3 3 3 3 3 7 7 7 7 7 7 7 7 11 11 11 11 11 11 11 11], shape=(24,), dtype=int32)>\nusing jax\nohv shape (24,) \nohv [ 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 11 11 11 11 11 11 11 11]\n" ] ], [ [ "<details>\n<summary>\n <font size=\"3\"><b> Expected Output </b></font>\n</summary>\n\n**Expected Values**\n```\nrandom.rotations.shape (5, 3, 2)\nrandom_rotations reshaped (5, 6)\nrotated_vecs1 (8, 6)\nrotated_vecs2 (8, 3, 2)\nrotated_vecs3 (3, 8, 2)\nrotated_vecs.shape (3, 8, 4)\nbuckets.shape (3, 8)\nbuckets ndarray<tf.Tensor(\n[[3 3 3 3 3 3 3 3]\n [3 3 3 3 3 3 3 3]\n [3 3 3 3 3 3 3 3]], shape=(3, 8), dtype=int32)>\nbuckets with offsets (24,)\n ndarray<tf.Tensor([ 3 3 3 3 3 3 3 3 7 7 7 7 7 7 7 7 11 11 11 11 11 11 11 11], shape=(24,), dtype=int32)>\nohv shape (24,)\nohv ndarray<tf.Tensor([ 3 3 3 3 3 3 3 3 7 7 7 7 7 7 7 7 11 11 11 11 11 11 11 11], shape=(24,), dtype=int32)>\nusing jax\nohv shape (24,)\nohv [ 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 11 11 11 11 11 11 11 11]```", "_____no_output_____" ], [ "<details>\n<summary>\n <font size=\"3\" ><b>Completed code for reference </b></font>\n</summary>\n\n```\n# since this notebook is ungraded the completed code is provided here for reference\n\ndef our_hash_vectors(vecs, rng, n_buckets, n_hashes, mask=None, verbose=False):\n \"\"\"\n Args:\n vecs: tensor of at least 2 dimension,\n rng: random number generator\n n_buckets: number of buckets in each hash table\n n_hashes: the number of hash tables\n mask: None indicating no mask or a 1D boolean array of length vecs.shape[0], containing the location of padding value\n verbose: controls prints for debug\n Returns:\n A vector of size n_hashes * vecs.shape[0] containing the buckets associated with each input vector per hash table.\n\n \"\"\"\n\n # check for even, integer bucket sizes\n assert isinstance(n_buckets, int) and n_buckets % 2 == 0\n\n rng = fastmath.stop_gradient(tie_in(vecs, rng))\n rot_size = n_buckets\n ### Start Code Here\n\n ### Step 1 ###\n rotations_shape = (vecs.shape[-1], n_hashes, rot_size // 2)\n random_rotations = fastmath.random.normal(rng, rotations_shape).astype(\n np.float32)\n if verbose: print(\"random.rotations.shape\", random_rotations.shape)\n\n ### Step 2 ###\n if fastmath.backend_name() == 'jax':\n rotated_vecs = np.einsum('tf,fhb->htb', 
vecs, random_rotations)\n if verbose: print(\"using jax\")\n else:\n #Step 2a\n random_rotations = np.reshape(random_rotations,\n [-1, n_hashes * (rot_size // 2)])\n if verbose: print(\"random_rotations reshaped\", random_rotations.shape)\n #Step 2b\n rotated_vecs = np.dot(vecs, random_rotations)\n if verbose: print(\"rotated_vecs1\", rotated_vecs.shape)\n #Step 2c\n rotated_vecs = np.reshape(rotated_vecs, [-1, n_hashes, rot_size//2])\n if verbose: print(\"rotated_vecs2\", rotated_vecs.shape)\n #Step 2d\n rotated_vecs = np.transpose(rotated_vecs, (1, 0, 2))\n if verbose: print(\"rotated_vecs3\", rotated_vecs.shape)\n\n ### Step 3 ###\n rotated_vecs = np.concatenate([rotated_vecs, -rotated_vecs], axis=-1)\n if verbose: print(\"rotated_vecs.shape\", rotated_vecs.shape)\n ### Step 4 ###\n buckets = np.argmax(rotated_vecs, axis=-1).astype(np.int32)\n if verbose: print(\"buckets.shape\", buckets.shape)\n if verbose: print(\"buckets\", buckets)\n\n if mask is not None:\n n_buckets += 1 # Create an extra bucket for padding tokens only\n buckets = np.where(mask[None, :], buckets, n_buckets - 1)\n\n # buckets is now (n_hashes, seqlen). Next we add offsets so that\n # bucket numbers from different hashing rounds don't overlap.\n offsets = tie_in(buckets, np.arange(n_hashes, dtype=np.int32))\n offsets = np.reshape(offsets * n_buckets, (-1, 1))\n ### Step 5 ###\n buckets = np.reshape(buckets + offsets, (-1,))\n if verbose: print(\"buckets with offsets\", buckets.shape, \"\\n\", buckets)\n return buckets```", "_____no_output_____" ], [ "<a name=\"3.3\"></a>\n## Part 3.3 Sorting Buckets", "_____no_output_____" ], [ "Great! Now that we have a hash function, we can work on sorting our buckets and performing our matrix operations.\n We'll walk through this algorithm in small steps:\n* sort_buckets - we'll perform the sort\n* softmax\n* dotandv - do the matrix math to form the dot product and output\nThese routines will demonstrate a simplified version of the algorithm. We won't address masking and variable bucket sizes, but we will consider how they would be handled.\n\n**sort_buckets**\n\nAt this point, we have called the hash function and were returned the associated buckets. For example, if we started with\n`q[n_seq,n_q]`, with `n_hash = 2; n_buckets = 4; n_seq = 8`\nwe might be returned:\n`bucket = [0,1,2,3,0,1,2,3, 4,5,6,7,4,5,6,7] `\nNote that it is n_hash\\*n_seq long and that the bucket values for each hash table have been offset by n_buckets so the numbers do not overlap. Going forward, we are going to sort this array of buckets to group together members of the same (hash,bucket) pair.\n\n**Instructions**\n**Step 1** Our goal is to sort $q$ rather than the bucket list, so we will need to track the association of the buckets to their elements in $q$.\n* using np.arange, create `ticker`, just a sequence of numbers (0..n_hashes * seqlen) associating members of q with their bucket.\n\n**Step 2** This step is provided to you as it is a bit difficult to describe. We want to disambiguate elements that map to the same bucket. When a sorting routine encounters a situation where multiple entries have the same value, it can correctly choose any entry to go first, which makes testing ambiguous. This step prevents that: we multiply all the buckets by `seqlen` and then add `ticker % seqlen`.\n\n**Step 3** Here we are! Ready to sort. 
This is the exciting part.\n* Utilize [fastmath.sort_key_val](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.sort_key_val.html#jax.lax.sort_key_val) and sort `buckets_and_t` and `ticker`.\n\n**Step 4** We need to be able to undo the sort at the end to get things back into their correct locations\n* sort `sticker` and `ticker` to for the reverse map\n\n**Step 5** create our sorted q and sorted v\n* use [np.take](https://numpy.org/doc/stable/reference/generated/numpy.take.html) and `st` to grab correct values in `q` for the sorted values, `sq`. Use axis=0.\n\nUse the example code below the routine to check and help debug your results.", "_____no_output_____" ] ], [ [ "def sort_buckets(buckets, q, v, n_buckets, n_hashes, seqlen, verbose=True):\n \"\"\" \n Args:\n buckets: tensor of at least 2 dimension, \n n_buckets: number of buckets in each hash table\n n_hashes: the number of hash tables \n \"\"\"\n if verbose:\n print(\"---sort_buckets--\")\n ## Step 1\n ticker = np.arange(n_hashes * seqlen)\n if verbose:\n print(\"ticker\", ticker.shape, ticker)\n ## Step 2\n buckets_and_t = seqlen * buckets + (ticker % seqlen) # provided\n if verbose:\n print(\"buckets_and_t\", buckets_and_t.shape, buckets_and_t)\n\n # Hash-based sort (\"s\" at the start of variable names means \"sorted\")\n # Step 3\n sbuckets_and_t, sticker = fastmath.sort_key_val(\n buckets_and_t, ticker, dimension=-1)\n if verbose:\n print(\"sbuckets_and_t\", sbuckets_and_t.shape, sbuckets_and_t)\n if verbose:\n print(\"sticker\", sticker.shape, sticker)\n # Step 4\n _, undo_sort = fastmath.sort_key_val(sticker, ticker, dimension=-1)\n if verbose:\n print(\"undo_sort\", undo_sort.shape, undo_sort)\n\n # Step 5\n st = sticker % seqlen # provided\n sq = np.take(q, st, axis=0)\n sv = np.take(v, st, axis=0)\n return sq, sv, sticker, undo_sort", "_____no_output_____" ], [ "t_n_hashes = 2\nt_n_buckets = 4\nt_n_seq = t_seqlen = 8\nt_n_q = 3\nn_v = 5\n\nt_q = (np.array([(j % t_n_buckets) for j in range(t_n_seq)]) * np.ones((t_n_q, 1))).T\nt_v = np.ones((t_n_seq, n_v))\nt_buckets = np.array(\n [\n (j % t_n_buckets) + t_n_buckets * i\n for i in range(t_n_hashes)\n for j in range(t_n_seq)\n ]\n)\nprint(\"q\\n\", t_q)\nprint(\"t_buckets: \", t_buckets)\n\nt_sq, t_sv, t_sticker, t_undo_sort = sort_buckets(\n t_buckets, t_q, t_v, t_n_buckets, t_n_hashes, t_seqlen, verbose=True\n)\n\nprint(\"sq.shape\", t_sq.shape, \"sv.shape\", t_sv.shape)\nprint(\"sq\\n\", t_sq)", "q\n [[0. 0. 0.]\n [1. 1. 1.]\n [2. 2. 2.]\n [3. 3. 3.]\n [0. 0. 0.]\n [1. 1. 1.]\n [2. 2. 2.]\n [3. 3. 3.]]\nt_buckets: [0 1 2 3 0 1 2 3 4 5 6 7 4 5 6 7]\n---sort_buckets--\nticker (16,) [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]\nbuckets_and_t (16,) [ 0 9 18 27 4 13 22 31 32 41 50 59 36 45 54 63]\nsbuckets_and_t (16,) [ 0 4 9 13 18 22 27 31 32 36 41 45 50 54 59 63]\nsticker (16,) [ 0 4 1 5 2 6 3 7 8 12 9 13 10 14 11 15]\nundo_sort (16,) [ 0 2 4 6 1 3 5 7 8 10 12 14 9 11 13 15]\nsq.shape (16, 3) sv.shape (16, 5)\nsq\n [[0. 0. 0.]\n [0. 0. 0.]\n [1. 1. 1.]\n [1. 1. 1.]\n [2. 2. 2.]\n [2. 2. 2.]\n [3. 3. 3.]\n [3. 3. 3.]\n [0. 0. 0.]\n [0. 0. 0.]\n [1. 1. 1.]\n [1. 1. 1.]\n [2. 2. 2.]\n [2. 2. 2.]\n [3. 3. 3.]\n [3. 3. 3.]]\n" ] ], [ [ "<details>\n<summary>\n <font size=\"3\"><b> Expected Output </b></font>\n</summary>\n\n**Expected Values**\n```\nq\n [[0. 0. 0.]\n [1. 1. 1.]\n [2. 2. 2.]\n [3. 3. 3.]\n [0. 0. 0.]\n [1. 1. 1.]\n [2. 2. 2.]\n [3. 3. 
3.]]\nt_buckets: [0 1 2 3 0 1 2 3 4 5 6 7 4 5 6 7]\n---sort_buckets--\nticker (16,) [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]\nbuckets_and_t (16,) [ 0 9 18 27 4 13 22 31 32 41 50 59 36 45 54 63]\nsbuckets_and_t (16,) [ 0 4 9 13 18 22 27 31 32 36 41 45 50 54 59 63]\nsticker (16,) [ 0 4 1 5 2 6 3 7 8 12 9 13 10 14 11 15]\nundo_sort (16,) [ 0 2 4 6 1 3 5 7 8 10 12 14 9 11 13 15]\nsq.shape (16, 3) sv.shape (16, 5)\nsq\n [[0. 0. 0.]\n [0. 0. 0.]\n [1. 1. 1.]\n [1. 1. 1.]\n [2. 2. 2.]\n [2. 2. 2.]\n [3. 3. 3.]\n [3. 3. 3.]\n [0. 0. 0.]\n [0. 0. 0.]\n [1. 1. 1.]\n [1. 1. 1.]\n [2. 2. 2.]\n [2. 2. 2.]\n [3. 3. 3.]\n [3. 3. 3.]]\n\n```", "_____no_output_____" ], [ "<details>\n<summary>\n <font size=\"3\" ><b>Completed code for reference </b></font>\n</summary>\n\n```\n# since this notebook is ungraded the completed code is provided here for reference\ndef sort_buckets(buckets, q, v, n_buckets, n_hashes, seqlen, verbose=True):\n \"\"\"\n Args:\n buckets: tensor of at least 2 dimension,\n n_buckets: number of buckets in each hash table\n n_hashes: the number of hash tables\n \"\"\"\n if verbose: print(\"---sort_buckets--\")\n ## Step 1\n ticker = np.arange(n_hashes * seqlen)\n if verbose: print(\"ticker\",ticker.shape, ticker)\n ## Step 2\n buckets_and_t = seqlen * buckets + (ticker % seqlen)\n if verbose: print(\"buckets_and_t\",buckets_and_t.shape, buckets_and_t)\n\n # Hash-based sort (\"s\" at the start of variable names means \"sorted\")\n #Step 3\n sbuckets_and_t, sticker = fastmath.sort_key_val(\n buckets_and_t, ticker, dimension=-1)\n if verbose: print(\"sbuckets_and_t\",sbuckets_and_t.shape, sbuckets_and_t)\n if verbose: print(\"sticker\",sticker.shape, sticker)\n #Step 4\n _, undo_sort = fastmath.sort_key_val(sticker, ticker, dimension=-1)\n if verbose: print(\"undo_sort\",undo_sort.shape, undo_sort)\n\n #Step 4\n st = (sticker % seqlen)\n sq = np.take(q, st, axis=0)\n sv = np.take(v, st, axis=0)\n return sq, sv, sticker, undo_sort\n```", "_____no_output_____" ], [ "<a name=\"3.4\"></a>\n## Part 3.4 Chunked dot product attention", "_____no_output_____" ], [ "Now let's create the dot product attention. We have sorted $Q$ so that elements that the hash has determined are likely to be similar are adjacent to each other. We now want to perform the dot-product within those limited regions - in 'chunks'.\n\n<img src = \"images/C4W4_LN2_image12.PNG\" height=\"400\" width=\"750\">\n<center><b>Figure 11: Performing dot product in 'chunks' </b></center>\n\n\nThe example we have been working on is shown above, with sequences of 8, 2 hashes, 4 buckets and, conveniently, the content of Q was such that when sorted, there were 2 entries in each bucket. If we reshape Q into a (8,2,n_q), we can use numpy matmul to perform the operation. Numpy [matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html) will treat the inputs as a stack of matrices residing in the last two indexes. This will allow us to matrix multiply Q with itself in *chunks* and later can also be used to perform the matrix multiply with v.\n\nWe will perform a softmax on the output of the dot product of Q and Q, but in this case, there is a bit more to the story. Recall the output of the hash had multiple hash tables. We will perform softmax on those separately and then must combine them. This is where the form of softmax we defined at the top of the notebook comes into play. 
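\n\nTo see concretely why keeping the logsumexp lets us combine softmaxes computed over separate pieces, here is a small check (our own illustration, reusing `our_softmax` from Part 2.1.1):\n\n```\nta = np.array([1.0, 2.0, 3.0, 4.0])\ns1, l1 = our_softmax(ta[:2]) # softmax over the first chunk only\ns2, l2 = our_softmax(ta[2:]) # softmax over the second chunk only\nl = fastmath.logsumexp(np.concatenate([l1, l2]), axis=0) # global normalizer\nprint(np.concatenate([s1 * np.exp(l1 - l), s2 * np.exp(l2 - l)]))\n# matches the one-shot softmax from Part 2.1.1: [0.0320586 0.08714432 0.2368828 0.6439142]\n```\n\n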
The routines below will utilize the logsumexp values that the `our_softmax` routine calculates.\n\nThere is a good deal of [reshaping](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html) to get things into the right formats. The code has many print statements that match the expected values below. You can use those to check your work as you go along. If you don't do a lot of 3-dimensional matrix multiplications in your daily life, it might be worthwhile to open a spare cell and practice a few simple examples to get the hang of it! Here is one to start with:\n", "_____no_output_____" ] ], [ [ "a = np.arange(16 * 3).reshape((16, 3))\nchunksize = 2\nar = np.reshape(\n a, (-1, chunksize, a.shape[-1])\n) # the -1 usage is very handy, see numpy reshape\nprint(ar.shape)", "(8, 2, 3)\n" ] ], [ [ "\n**Instructions**\n**Step 1** Reshaping Q\n* np.reshape `sq` (sorted q) to be 3 dimensions. The middle dimension is the size of the 'chunk' specified by `kv_chunk_len`\n* np.swapaxes to perform a 'transpose' on the reshaped `sq`, *but only on the last two dimension*\n* np.matmul the two values.\n\n**Step 2**\n* use our_softmax to perform the softmax on the dot product. Don't forget `passthrough`\n\n**Step 3**\n* np.reshape `sv`. Like `sq`, the middle dimension is the size of the 'chunk' specified by `kv_chunk_len`\n* np.matmul dotlike and the reshaped `sv`\n* np.reshape so to a two dimensional array with the last dimension stays the same (`so.shape[-1]`)\n* `logits` also needs reshaping, we'll do that.\n\n**Step 4** Now we can undo the sort.\n* use [np.take](https://numpy.org/doc/stable/reference/generated/numpy.take.html) and `undo_sort` and axis = 0 to unsort so\n* do the same with `slogits`.\n\n**Step 5** This step combines the results of multiple hashes. Recall, the softmax was only over the values in one hash, this extends it to all the hashes. Read through it, the code is provided. Note this is taking place *after* the matrix multiply with v while the softmax output is used before the multiply. 
How does this achieve the correct result?", "_____no_output_____" ] ], [ [ "def dotandv(sq, sv, undo_sort, kv_chunk_len, n_hashes, seqlen, passthrough, verbose=False ):\n # Step 1\n rsq = np.reshape(sq,(-1, kv_chunk_len, sq.shape[-1]))\n rsqt = np.swapaxes(rsq, -1, -2)\n if verbose: print(\"rsq.shape,rsqt.shape: \", rsq.shape,rsqt.shape)\n dotlike = np.matmul(rsq, rsqt)\n if verbose: print(\"dotlike\\n\", dotlike)\n\n #Step 2\n dotlike, slogits = our_softmax(dotlike, passthrough)\n if verbose: print(\"dotlike post softmax\\n\", dotlike)\n\n #Step 3\n vr = np.reshape(sv, (-1, kv_chunk_len, sv.shape[-1]))\n if verbose: print(\"dotlike.shape, vr.shape:\", dotlike.shape, vr.shape)\n so = np.matmul(dotlike, vr)\n if verbose: print(\"so.shape:\", so.shape)\n so = np.reshape(so, (-1, so.shape[-1]))\n slogits = np.reshape(slogits, (-1,)) # provided\n if verbose: print(\"so.shape,slogits.shape\", so.shape, slogits.shape)\n\n #Step 4\n o = np.take(so, undo_sort, axis=0)\n logits = np.take(slogits, undo_sort, axis=0)\n if verbose: print(\"o.shape,o\", o.shape, o)\n if verbose: print(\"logits.shape, logits\", logits.shape, logits)\n\n #Step 5 (Provided)\n if n_hashes > 1:\n o = np.reshape(o, (n_hashes, seqlen, o.shape[-1]))\n logits = np.reshape(logits, (n_hashes, seqlen, 1))\n probs = np.exp(logits - fastmath.logsumexp(logits, axis=0, keepdims=True))\n o = np.sum(o * probs, axis=0)\n\n return(o)", "_____no_output_____" ], [ "t_kv_chunk_len = 2\nout = dotandv(\n t_sq,\n t_sv,\n t_undo_sort,\n t_kv_chunk_len,\n t_n_hashes,\n t_seqlen,\n passthrough=True,\n verbose=True,\n)\nprint(\"out\\n\", out)\nprint(\"\\n-----With softmax enabled----\\n\")\nout = dotandv(\n t_sq,\n t_sv,\n t_undo_sort,\n t_kv_chunk_len,\n t_n_hashes,\n t_seqlen,\n passthrough=False,\n verbose=True,\n)\nprint(\"out\\n\", out)", "rsq.shape,rsqt.shape: (8, 2, 3) (8, 3, 2)\ndotlike\n [[[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]\n\n [[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]]\ndotlike post softmax\n [[[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]\n\n [[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]]\ndotlike.shape, vr.shape: (8, 2, 2) (8, 2, 5)\nso.shape: (8, 2, 5)\nso.shape,slogits.shape (16, 5) (16,)\no.shape,o (16, 5) [[ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]\n [ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]\n [ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]\n [ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]]\nlogits.shape, logits (16,) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\nout\n [[ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]\n [ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]]\n\n-----With softmax enabled----\n\nrsq.shape,rsqt.shape: (8, 2, 3) (8, 3, 2)\ndotlike\n [[[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]\n\n [[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 
27.]]]\ndotlike post softmax\n [[[0.5 0.5 ]\n [0.5 0.5 ]]\n\n [[0.5 0.5 ]\n [0.5 0.5 ]]\n\n [[0.49999976 0.49999976]\n [0.49999976 0.49999976]]\n\n [[0.49999976 0.49999976]\n [0.49999976 0.49999976]]\n\n [[0.5 0.5 ]\n [0.5 0.5 ]]\n\n [[0.5 0.5 ]\n [0.5 0.5 ]]\n\n [[0.49999976 0.49999976]\n [0.49999976 0.49999976]]\n\n [[0.49999976 0.49999976]\n [0.49999976 0.49999976]]]\ndotlike.shape, vr.shape: (8, 2, 2) (8, 2, 5)\nso.shape: (8, 2, 5)\nso.shape,slogits.shape (16, 5) (16,)\no.shape,o (16, 5) [[1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]]\nlogits.shape, logits (16,) [ 0.6931472 3.6931472 12.693148 27.693148 0.6931472 3.6931472\n 12.693148 27.693148 0.6931472 3.6931472 12.693148 27.693148\n 0.6931472 3.6931472 12.693148 27.693148 ]\nout\n [[1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]\n [0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]\n [1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]\n [0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]]\n" ] ], [ [ "<details>\n<summary>\n <font size=\"3\"><b> Expected Output </b></font>\n</summary>\n\n**Expected Values**\n```\nrsq.shape,rsqt.shape: (8, 2, 3) (8, 3, 2)\ndotlike\n [[[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]\n\n [[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]]\ndotlike post softmax\n [[[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]\n\n [[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]]\ndotlike.shape, vr.shape: (8, 2, 2) (8, 2, 5)\nso.shape: (8, 2, 5)\nso.shape,slogits.shape (16, 5) (16,)\no.shape,o (16, 5) [[ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]\n [ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]\n [ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]\n [ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]]\nlogits.shape, logits (16,) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\nout\n [[ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]\n [ 0. 0. 0. 0. 0.]\n [ 6. 6. 6. 6. 6.]\n [24. 24. 24. 24. 24.]\n [54. 54. 54. 54. 54.]]\n\n-----With softmax enabled----\n\nrsq.shape,rsqt.shape: (8, 2, 3) (8, 3, 2)\ndotlike\n [[[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 27.]]\n\n [[ 0. 0.]\n [ 0. 0.]]\n\n [[ 3. 3.]\n [ 3. 3.]]\n\n [[12. 12.]\n [12. 12.]]\n\n [[27. 27.]\n [27. 
27.]]]\ndotlike post softmax\n [[[0.5 0.5 ]\n [0.5 0.5 ]]\n\n [[0.5 0.5 ]\n [0.5 0.5 ]]\n\n [[0.49999976 0.49999976]\n [0.49999976 0.49999976]]\n\n [[0.49999976 0.49999976]\n [0.49999976 0.49999976]]\n\n [[0.5 0.5 ]\n [0.5 0.5 ]]\n\n [[0.5 0.5 ]\n [0.5 0.5 ]]\n\n [[0.49999976 0.49999976]\n [0.49999976 0.49999976]]\n\n [[0.49999976 0.49999976]\n [0.49999976 0.49999976]]]\ndotlike.shape, vr.shape: (8, 2, 2) (8, 2, 5)\nso.shape: (8, 2, 5)\nso.shape,slogits.shape (16, 5) (16,)\no.shape,o (16, 5) [[1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]\n [0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]]\nlogits.shape, logits (16,) [ 0.6931472 3.6931472 12.693148 27.693148 0.6931472 3.6931472\n 12.693148 27.693148 0.6931472 3.6931472 12.693148 27.693148\n 0.6931472 3.6931472 12.693148 27.693148 ]\nout\n [[1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]\n [0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]\n [1. 1. 1. 1. 1. ]\n [1. 1. 1. 1. 1. ]\n [0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]\n [0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]]\n```", "_____no_output_____" ], [ "<details>\n<summary>\n <font size=\"3\" ><b>Completed code for reference </b></font>\n</summary>\n\n```\n# since this notebook is ungraded the completed code is provided here for reference\ndef dotandv(sq, sv, undo_sort, kv_chunk_len, n_hashes, seqlen, passthrough, verbose=False ):\n # Step 1\n rsq = np.reshape(sq,(-1, kv_chunk_len, sq.shape[-1]))\n rsqt = np.swapaxes(rsq, -1, -2)\n if verbose: print(\"rsq.shape,rsqt.shape: \", rsq.shape,rsqt.shape)\n dotlike = np.matmul(rsq, rsqt)\n if verbose: print(\"dotlike\\n\", dotlike)\n\n #Step 2\n dotlike, slogits = our_softmax(dotlike, passthrough)\n if verbose: print(\"dotlike post softmax\\n\", dotlike)\n\n #Step 3\n vr = np.reshape(sv, (-1, kv_chunk_len, sv.shape[-1]))\n if verbose: print(\"dotlike.shape, vr.shape:\", dotlike.shape, vr.shape)\n so = np.matmul(dotlike, vr)\n if verbose: print(\"so.shape:\", so.shape)\n so = np.reshape(so, (-1, so.shape[-1]))\n slogits = np.reshape(slogits, (-1,)) # provided\n if verbose: print(\"so.shape,slogits.shape\", so.shape, slogits.shape)\n\n #Step 4\n o = np.take(so, undo_sort, axis=0)\n logits = np.take(slogits, undo_sort, axis=0)\n if verbose: print(\"o.shape,o\", o.shape, o)\n if verbose: print(\"logits.shape, logits\", logits.shape, logits)\n\n #Step 5 (Provided)\n if n_hashes > 1:\n o = np.reshape(o, (n_hashes, seqlen, o.shape[-1]))\n logits = np.reshape(logits, (n_hashes, seqlen, 1))\n probs = np.exp(logits - fastmath.logsumexp(logits, axis=0, keepdims=True))\n o = np.sum(o * probs, axis=0)\n\n return(o)\n```", "_____no_output_____" ], [ "Great! You have now done examples code for most of the operation that are unique to the LSH version of self-attention. I'm sure at this point you are wondering what happens if the number of entries in a bucket is not evenly distributed the way our example is. It is possible, for example for all of the `seqlen` entries to land in one bucket. 
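\n\nA toy example of what uneven buckets do to fixed-size chunks (our own illustration):\n\n```\nsb = np.array([0, 0, 0, 1, 2, 2, 3, 3]) # sorted bucket ids with uneven bucket sizes\nprint(np.reshape(sb, (-1, 2))) # chunk boundaries ignore bucket boundaries:\n# [[0 0]\n# [0 1] <- bucket 0 spills into a chunk shared with bucket 1\n# [2 2]\n# [3 3]]\n```\n\n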
Further, since the buckets are not aligned, our 'chunks' may be misaligned with the start of the bucket. The implementation addresses this by attending to adjacent chunks as was described in the lecture:\n\n<img src = \"images/C4W4_LN2_image13.PNG\" height=\"400\" width=\"750\">\n<center><b>Figure 12: Misaligned Access, looking before and after </b></center>\n\nHopefully, having implemented parts of this, you will appreciate this diagram more fully.\n\n", "_____no_output_____" ], [ "<a name=\"3.5\"></a>\n## Part 3.5 OurLSHSelfAttention\n\nYou can examine the full implementations below. Area's we did not 'attend to' in our implementations above include variable bucket sizes and masking. We will instantiate a layer of the full implementation below. We tried to use the same variable names above to make it easier to decipher the full version. Note that some of the functionality we implemented in our routines is split between `attend` and `forward_unbatched`. We've inserted our version of hash below, but use the original version of `attend`.", "_____no_output_____" ] ], [ [ "# original version from trax 1.3.4\ndef attend(\n q,\n k=None,\n v=None,\n q_chunk_len=None,\n kv_chunk_len=None,\n n_chunks_before=0,\n n_chunks_after=0,\n mask_fn=None,\n q_info=None,\n kv_info=None,\n dropout=0.0,\n rng=None,\n):\n \"\"\"Dot-product attention, with optional chunking and/or masking.\n\n Args:\n q: Query vectors, shape [q_len, d_qk]\n k: Key vectors, shape [kv_len, d_qk]; or None\n v: Value vectors, shape [kv_len, d_v]\n q_chunk_len: Set to non-zero to enable chunking for query vectors\n kv_chunk_len: Set to non-zero to enable chunking for key/value vectors\n n_chunks_before: Number of adjacent previous chunks to attend to\n n_chunks_after: Number of adjacent subsequent chunks to attend to\n mask_fn: TODO(kitaev) doc\n q_info: Query-associated metadata for masking\n kv_info: Key-associated metadata for masking\n dropout: Dropout rate\n rng: RNG for dropout\n\n Returns:\n A tuple (output, dots_logsumexp). The output has shape [q_len, d_v], and\n dots_logsumexp has shape [q_len]. 
The logsumexp of the attention\n probabilities is useful for combining multiple rounds of attention (as in\n LSH attention).\n \"\"\"\n assert v is not None\n share_qk = k is None\n\n if q_info is None:\n q_info = np.arange(q.shape[-2], dtype=np.int32)\n\n if kv_info is None and not share_qk:\n kv_info = np.arange(v.shape[-2], dtype=np.int32)\n\n # Split q/k/v into chunks along the time axis, if desired.\n if q_chunk_len is not None:\n q = np.reshape(q, (-1, q_chunk_len, q.shape[-1]))\n q_info = np.reshape(q_info, (-1, q_chunk_len))\n\n if share_qk:\n assert kv_chunk_len is None or kv_chunk_len == q_chunk_len\n k = q\n kv_chunk_len = q_chunk_len\n if kv_info is None:\n kv_info = q_info\n elif kv_chunk_len is not None:\n # kv_info is not None, but reshape as required.\n kv_info = np.reshape(kv_info, (-1, kv_chunk_len))\n elif kv_chunk_len is not None:\n k = np.reshape(k, (-1, kv_chunk_len, k.shape[-1]))\n kv_info = np.reshape(kv_info, (-1, kv_chunk_len))\n\n if kv_chunk_len is not None:\n v = np.reshape(v, (-1, kv_chunk_len, v.shape[-1]))\n\n if share_qk:\n k = length_normalized(k)\n k = k / np.sqrt(k.shape[-1])\n\n # Optionally include adjacent chunks.\n if q_chunk_len is not None or kv_chunk_len is not None:\n assert q_chunk_len is not None and kv_chunk_len is not None\n else:\n assert n_chunks_before == 0 and n_chunks_after == 0\n\n k = look_adjacent(k, n_chunks_before, n_chunks_after)\n v = look_adjacent(v, n_chunks_before, n_chunks_after)\n kv_info = look_adjacent(kv_info, n_chunks_before, n_chunks_after)\n\n # Dot-product attention.\n dots = np.matmul(q, np.swapaxes(k, -1, -2))\n\n # Masking\n if mask_fn is not None:\n dots = mask_fn(dots, q_info[..., :, None], kv_info[..., None, :])\n\n # Softmax.\n dots_logsumexp = fastmath.logsumexp(dots, axis=-1, keepdims=True)\n dots = np.exp(dots - dots_logsumexp)\n\n if dropout > 0.0:\n assert rng is not None\n # Dropout is broadcast across the bin dimension\n dropout_shape = (dots.shape[-2], dots.shape[-1])\n #\n keep_prob = tie_in(dots, 1.0 - dropout)\n keep = fastmath.random.bernoulli(rng, keep_prob, dropout_shape)\n multiplier = keep.astype(dots.dtype) / tie_in(keep, keep_prob)\n dots = dots * multiplier\n\n # The softmax normalizer (dots_logsumexp) is used by multi-round LSH attn.\n out = np.matmul(dots, v)\n out = np.reshape(out, (-1, out.shape[-1]))\n dots_logsumexp = np.reshape(dots_logsumexp, (-1,))\n return out, dots_logsumexp", "_____no_output_____" ], [ "class OurLSHSelfAttention(tl.LSHSelfAttention):\n \"\"\"Our simplified LSH self-attention \"\"\"\n\n def forward_unbatched(self, x, mask=None, *, weights, state, rng, update_state):\n attend_rng, output_rng = fastmath.random.split(rng)\n w_q, w_v, w_o = weights\n\n q = np.matmul(x, w_q)\n v = np.matmul(x, w_v)\n\n if update_state:\n _, old_hash_rng = state\n hash_rng, hash_subrng = fastmath.random.split(old_hash_rng)\n # buckets = self.hash_vectors(q, hash_subrng, mask) # original\n ## use our version of hash\n buckets = our_hash_vectors(\n q, hash_subrng, self.n_buckets, self.n_hashes, mask=mask\n )\n s_buckets = buckets\n if self._max_length_for_buckets:\n length = self.n_hashes * self._max_length_for_buckets\n if buckets.shape[0] < length:\n s_buckets = np.concatenate(\n [buckets, np.zeros(length - buckets.shape[0], dtype=np.int32)],\n axis=0,\n )\n state = (s_buckets, hash_rng)\n else:\n buckets, _ = state\n if self._max_length_for_buckets:\n buckets = buckets[: self.n_hashes * x.shape[0]]\n\n seqlen = x.shape[0]\n assert int(buckets.shape[0]) == self.n_hashes * seqlen\n\n 
ticker = tie_in(x, np.arange(self.n_hashes * seqlen, dtype=np.int32))\n buckets_and_t = seqlen * buckets + (ticker % seqlen)\n buckets_and_t = fastmath.stop_gradient(buckets_and_t)\n\n # Hash-based sort (\"s\" at the start of variable names means \"sorted\")\n sbuckets_and_t, sticker = fastmath.sort_key_val(\n buckets_and_t, ticker, dimension=-1\n )\n _, undo_sort = fastmath.sort_key_val(sticker, ticker, dimension=-1)\n sbuckets_and_t = fastmath.stop_gradient(sbuckets_and_t)\n sticker = fastmath.stop_gradient(sticker)\n undo_sort = fastmath.stop_gradient(undo_sort)\n\n st = sticker % seqlen\n sq = np.take(q, st, axis=0)\n sv = np.take(v, st, axis=0)\n\n mask_fn = functools.partial(\n mask_self_attention,\n causal=self.causal,\n exclude_self=True,\n masked=self.masked,\n )\n q_info = st\n\n assert (mask is not None) == self.masked\n kv_info = None\n if self.masked:\n # mask is a boolean array (True means \"is valid token\")\n smask = np.take(mask, st, axis=0)\n ones_like_mask = tie_in(x, np.ones_like(smask, dtype=np.int32))\n kv_info = q_info * np.where(smask, ones_like_mask, -ones_like_mask)\n\n ## use original version of attend (could use ours but lacks masks and masking)\n so, slogits = attend(\n sq,\n k=None,\n v=sv,\n q_chunk_len=self.chunk_len,\n n_chunks_before=self.n_chunks_before,\n n_chunks_after=self.n_chunks_after,\n mask_fn=mask_fn,\n q_info=q_info,\n kv_info=kv_info,\n dropout=self.attention_dropout,\n rng=attend_rng,\n )\n\n # np.take(so, undo_sort, axis=0); np.take(slogits, undo_sort, axis=0) would\n # also work, but these helpers include performance optimizations for TPU.\n o = permute_via_gather(so, undo_sort, sticker, axis=0)\n logits = permute_via_sort(slogits, sticker, buckets_and_t, axis=-1)\n\n if self.n_hashes > 1:\n o = np.reshape(o, (self.n_hashes, seqlen, o.shape[-1]))\n logits = np.reshape(logits, (self.n_hashes, seqlen, 1))\n probs = np.exp(logits - fastmath.logsumexp(logits, axis=0, keepdims=True))\n o = np.sum(o * probs, axis=0)\n\n assert o.shape == (seqlen, w_v.shape[-1])\n out = np.matmul(o, w_o)\n out = apply_broadcasted_dropout(out, self.output_dropout, output_rng)\n return out, state", "_____no_output_____" ], [ "# Here we're going to try out our LSHSelfAttention\nn_heads = 3\ncausal = False\nmasked = False\nmask = None\nchunk_len = 8\nn_chunks_before = 0\nn_chunks_after = 0\nattention_dropout = 0.0\nn_hashes = 5\nn_buckets = 4\nseq_len = 8\nemb_len = 5\nal = OurLSHSelfAttention(\n n_heads=n_heads,\n d_qk=3,\n d_v=4,\n causal=causal,\n chunk_len=8,\n n_chunks_before=n_chunks_before,\n n_chunks_after=n_chunks_after,\n n_hashes=n_hashes,\n n_buckets=n_buckets,\n use_reference_code=True,\n attention_dropout=attention_dropout,\n mode=\"train\",\n)\n\nx = jax.random.uniform(jax.random.PRNGKey(0), (1, seq_len, emb_len), dtype=np.float32)\nal_osa = fastmath.random.get_prng(1)\n_, _ = al.init(tl.shapes.signature(x), rng=al_osa)", "_____no_output_____" ], [ "al(x)", "using jax\nusing jax\nusing jax\n" ] ], [ [ "<details>\n<summary>\n <font size=\"3\"><b> Expected Output </b></font>\n</summary>\n\n**Expected Values**\n```\nusing jax\nusing jax\nusing jax\nDeviceArray([[[ 6.6842824e-01, -1.1364323e-01, -5.4430610e-01,\n 2.1126242e-01, -1.0988623e-02],\n [ 7.0949769e-01, -1.5455185e-01, -5.9923315e-01,\n 2.2719440e-01, 1.3833776e-02],\n [ 7.1442688e-01, -1.2046628e-01, -5.3956544e-01,\n 1.7320301e-01, -1.6552269e-02],\n [ 6.7178929e-01, -7.6611102e-02, -5.9399861e-01,\n 2.1236290e-01, 7.9482794e-04],\n [ 7.1518433e-01, -1.1359170e-01, -5.7821894e-01,\n 
2.1304411e-01, 3.0598268e-02],\n [ 6.8235350e-01, -9.3979925e-02, -5.5341840e-01,\n 2.1608177e-01, -6.6673756e-04],\n [ 6.1286640e-01, -8.1027031e-02, -4.8148823e-01,\n 1.9373313e-01, 3.1555295e-02],\n [ 7.2203505e-01, -1.0199660e-01, -5.5215168e-01,\n 1.7872262e-01, -2.2289157e-02]]], dtype=float32)```", "_____no_output_____" ], [ "**Congratulations!** You have created a custom layer and have become familiar with LSHSelfAttention.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
e7ddd9c19b7ba0a9d562f83cc5f12ebbbce1b386
79,704
ipynb
Jupyter Notebook
solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb
fazliberkordek/ibm-quantum-challenge-2021
2206a364e354965b749dcda7c5d62631f571d718
[ "Apache-2.0" ]
136
2021-05-20T14:07:53.000Z
2022-03-19T17:19:31.000Z
solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb
fazliberkordek/ibm-quantum-challenge-2021
2206a364e354965b749dcda7c5d62631f571d718
[ "Apache-2.0" ]
106
2021-05-21T15:41:13.000Z
2021-11-08T08:29:25.000Z
solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb
fazliberkordek/ibm-quantum-challenge-2021
2206a364e354965b749dcda7c5d62631f571d718
[ "Apache-2.0" ]
190
2021-05-20T14:02:09.000Z
2022-03-27T16:31:20.000Z
62.026459
19,708
0.672639
[ [ [ "# Exercise 5 - Variational quantum eigensolver\n\n\n## Historical background\n\nDuring the last decade, quantum computers matured quickly and began to realize Feynman's initial dream of a computing system that could simulate the laws of nature in a quantum way. A 2014 paper first authored by Alberto Peruzzo introduced the **Variational Quantum Eigensolver (VQE)**, an algorithm meant for finding the ground state energy (lowest energy) of a molecule, with much shallower circuits than other approaches.[1] And, in 2017, the IBM Quantum team used the VQE algorithm to simulate the ground state energy of the lithium hydride molecule.[2]\n\nVQE's magic comes from outsourcing some of the problem's processing workload to a classical computer. The algorithm starts with a parameterized quantum circuit called an ansatz (a best guess) then finds the optimal parameters for this circuit using a classical optimizer. The VQE's advantage over classical algorithms comes from the fact that a quantum processing unit can represent and store the problem's exact wavefunction, an exponentially hard problem for a classical computer. \n\nThis exercise 5 allows you to realize Feynman's dream yourself, setting up a variational quantum eigensolver to determine the ground state and the energy of a molecule. This is interesting because the ground state can be used to calculate various molecular properties, for instance the exact forces on nuclei than can serve to run molecular dynamics simulations to explore what happens in chemical systems with time.[3]\n\n\n### References\n\n1. Peruzzo, Alberto, et al. \"A variational eigenvalue solver on a photonic quantum processor.\" Nature communications 5.1 (2014): 1-7.\n2. Kandala, Abhinav, et al. \"Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets.\" Nature 549.7671 (2017): 242-246.\n3. Sokolov, Igor O., et al. \"Microcanonical and finite-temperature ab initio molecular dynamics simulations on quantum computers.\" Physical Review Research 3.1 (2021): 013125.\n\n## Introduction\n\nFor the implementation of VQE, you will be able to make choices on how you want to compose your simulation, in particular focusing on the ansatz quantum circuits.\nThis is motivated by the fact that one of the important tasks when running VQE on noisy quantum computers is to reduce the loss of fidelity (which introduces errors) by finding the most compact quantum circuit capable of representing the ground state.\nPractically, this entails to minimizing the number of two-qubit gates (e.g. CNOTs) while not loosing accuracy.\n\n<div class=\"alert alert-block alert-success\">\n\n<b>Goal</b> \n\nFind the shortest ansatz circuits for representing accurately the ground state of given problems. Be creative!\n \n<b>Plan</b> \n \nFirst you will learn how to compose a VQE simulation for the smallest molecule and then apply what you have learned to a case of a larger one.\n \n**1. Tutorial - VQE for H$_2$:** familiarize yourself with VQE and select the best combination of ansatz/classical optimizer by running statevector simulations.\n\n**2. Final Challenge - VQE for LiH:** perform similar investigation as in the first part but restricting to statevector simulator only. Use the qubit number reduction schemes available in Qiskit and find the optimal circuit for this larger system. 
Optimize the circuit and use your imagination to find ways to select the best building blocks of parameterized circuits and compose them to construct the most compact ansatz circuit for the ground state, better than the ones already available in Qiskit. \n\n</div>\n\n\n<div class="alert alert-block alert-danger">\n\nBelow is an introduction to the theory behind VQE simulations. You don't have to understand the whole thing before moving on. Don't be scared!\n\n</div>\n\n", "_____no_output_____" ], [ "## Theory\n\nBelow is the general workflow representing how molecular simulations using VQE are performed on quantum computers.\n\n<img src="resources/workflow.png" width=800 height= 1400/>\n\nThe core idea of the hybrid quantum-classical approach is to outsource to the **CPU (classical processing unit)** and **QPU (quantum processing unit)** the parts that each does best. The CPU takes care of listing the terms that need to be measured to compute the energy and also optimizing the circuit parameters. The QPU implements a quantum circuit representing the quantum state of a system and measures the energy. Some more details are given below:\n\nThe **CPU** can efficiently compute the energies associated with electron hopping and interactions (one-/two-body integrals by means of a Hartree-Fock calculation) that serve to represent the total energy operator, the Hamiltonian. The [Hartree–Fock (HF) method](https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method#:~:text=In%20computational%20physics%20and%20chemistry,system%20in%20a%20stationary%20state.) efficiently computes an approximate ground state wavefunction by assuming that the latter can be represented by a single Slater determinant (e.g. for the H$_2$ molecule in the STO-3G basis with 4 spin-orbitals and qubits, $|\Psi_{HF} \rangle = |0101 \rangle$ where electrons occupy the lowest energy spin-orbitals). What the QPU does later in VQE is find a quantum state (corresponding circuit and its parameters) that can also represent other states associated with the missing electronic correlations (i.e. the $\sum_i c_i |i\rangle$ states in $|\Psi \rangle = c_{HF}|\Psi_{HF} \rangle + \sum_i c_i |i\rangle $ where $i$ is a bitstring). \n\nAfter a HF calculation, operators in the Hamiltonian are mapped to measurements on a QPU using fermion-to-qubit transformations (see Hamiltonian section below). One can further analyze the properties of the system to reduce the number of qubits or shorten the ansatz circuit:\n\n- For Z2 symmetries and two-qubit reduction, see [Bravyi *et al.*, 2017](https://arxiv.org/abs/1701.08213v1).\n- For entanglement forging, see [Eddins *et al.*, 2021](https://arxiv.org/abs/2104.10220v1).\n- For the adaptive ansatz, see [Grimsley *et al.*,2018](https://arxiv.org/abs/1812.11173v2), [Rattew *et al.*,2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*,2019](https://arxiv.org/abs/1911.10205). You may use the ideas found in those works to find ways to shorten the quantum circuits.\n\nThe **QPU** implements quantum circuits (see Ansatzes section below), parameterized by angles $\vec\theta$, that would represent the ground state wavefunction by placing various single qubit rotations and entanglers (e.g. two-qubit gates). The quantum advantage lies in the fact that the QPU can efficiently represent and store the exact wavefunction, which becomes intractable on a classical computer for systems that have more than a few atoms. Finally, the QPU measures the operators of choice (e.g. ones representing a Hamiltonian).
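\n\nTo make this division of labor concrete, here is a minimal, self-contained sketch of the classical-quantum feedback loop for a toy one-qubit problem, with the "QPU" simulated by plain NumPy statevector math. The Hamiltonian, the single-parameter Ry ansatz, and the use of `scipy.optimize.minimize` are illustrative assumptions for this sketch, not the interfaces used later in this notebook.\n\n```\nimport numpy as np\nfrom scipy.optimize import minimize\n\n# Toy Hamiltonian (assumed for illustration): H = Z, whose exact ground state energy is -1\nH = np.array([[1.0, 0.0], [0.0, -1.0]])\n\ndef energy(theta):\n    # "QPU" part (simulated here): prepare |psi(theta)> = Ry(theta)|0>\n    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])\n    # ... and measure the expectation value <psi|H|psi>\n    return float(psi @ H @ psi)\n\n# "CPU" part: a classical optimizer updates theta to lower the measured energy\nresult = minimize(energy, x0=[0.01], method="COBYLA")\nprint(result.x, result.fun)  # theta converges toward pi, energy toward -1\n```\n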
\n\nBelow we go into slightly more mathematical detail on each component of the VQE algorithm. It might also be helpful to watch our [video episode about VQE](https://www.youtube.com/watch?v=Z-A6G0WVI9w).\n\n\n### Hamiltonian \n\nHere we explain how we obtain the operators that we need to measure to compute the energy of a given system.\nThese terms are included in the molecular Hamiltonian defined as:\n$$\n\begin{aligned}\n\hat{H} &=\sum_{r s} h_{r s} \hat{a}_{r}^{\dagger} \hat{a}_{s} \\\n&+\frac{1}{2} \sum_{p q r s} g_{p q r s} \hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}+E_{N N}\n\end{aligned}\n$$\nwith\n$$\nh_{p q}=\int \phi_{p}^{*}(r)\left(-\frac{1}{2} \nabla^{2}-\sum_{I} \frac{Z_{I}}{R_{I}-r}\right) \phi_{q}(r)\n$$\n$$\ng_{p q r s}=\int \frac{\phi_{p}^{*}\left(r_{1}\right) \phi_{q}^{*}\left(r_{2}\right) \phi_{r}\left(r_{2}\right) \phi_{s}\left(r_{1}\right)}{\left|r_{1}-r_{2}\right|} \n$$\n\nwhere the $h_{r s}$ and $g_{p q r s}$ are the one-/two-body integrals (using the Hartree-Fock method) and $E_{N N}$ the nuclear repulsion energy. \nThe one-body integrals represent the kinetic energy of the electrons and their interaction with nuclei. \nThe two-body integrals represent the electron-electron interaction.\nThe $\hat{a}_{r}^{\dagger}, \hat{a}_{r}$ operators represent creation and annihilation of an electron in spin-orbital $r$ and require mappings to qubit operators, so that we can measure them on a quantum computer.\nNote that VQE minimizes the electronic energy so you have to retrieve and add the nuclear repulsion energy $E_{NN}$ to compute the total energy. \n \n\n\nSo, for every non-zero matrix element in the $ h_{r s}$ and $g_{p q r s}$ tensors, we can construct a corresponding Pauli string (tensor product of Pauli operators) with the following fermion-to-qubit transformation. \nFor instance, in the Jordan-Wigner mapping for an orbital $r = 3$, we obtain the following Pauli string:\n$$\n\hat a_{3}^{\dagger}= \hat \sigma_z \otimes \hat \sigma_z \otimes\left(\frac{ \hat \sigma_x-i \hat \sigma_y}{2}\right) \otimes 1 \otimes \cdots \otimes 1\n$$\nwhere $\hat \sigma_x, \hat \sigma_y, \hat \sigma_z$ are the well-known Pauli operators. The tensor products of $\hat \sigma_z$ operators are placed to enforce the fermionic anti-commutation relations.\nA representation of the Jordan-Wigner mapping between the 14 spin-orbitals of a water molecule and 14 qubits is given below:\n\n<img src="resources/mapping.png" width=600 height= 1200/>\n\n\nThen, one simply replaces the one-/two-body excitations (e.g. $\hat{a}_{r}^{\dagger} \hat{a}_{s}$, $\hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}$) in the Hamiltonian by the corresponding Pauli strings (i.e. $\hat{P}_i$, see picture above). The resulting operator set is ready to be measured on the QPU.\nFor additional details see [Seeley *et al.*, 2012](https://arxiv.org/abs/1208.5986v1).\n\n
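As a quick, self-contained check of this mapping (plain NumPy, an illustrative sketch rather than the Qiskit implementation), we can build $\hat a_{3}^{\dagger}$ on a 4-qubit register exactly as written above and verify the canonical anticommutation relation $\{\hat a_{3}, \hat a_{3}^{\dagger}\} = 1$:\n\n```\nimport numpy as np\n\nI2 = np.eye(2, dtype=complex)\nZ = np.array([[1, 0], [0, -1]], dtype=complex)\nX = np.array([[0, 1], [1, 0]], dtype=complex)\nY = np.array([[0, -1j], [1j, 0]], dtype=complex)\n\ndef kron_all(ops):\n    # tensor product of a list of single-qubit operators\n    out = np.array([[1.0 + 0j]])\n    for op in ops:\n        out = np.kron(out, op)\n    return out\n\n# a_3^dagger on 4 qubits (Jordan-Wigner): sigma_z x sigma_z x (sigma_x - i sigma_y)/2 x 1\na3_dag = kron_all([Z, Z, (X - 1j * Y) / 2, I2])\na3 = a3_dag.conj().T\n\n# the anticommutator {a_3, a_3^dagger} should equal the identity\nanticomm = a3 @ a3_dag + a3_dag @ a3\nprint(np.allclose(anticomm, np.eye(16)))  # True\n```\n\n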
### Ansatzes\n\nThere are mainly two types of ansatzes you can use for chemical problems. \n\n- **q-UCC ansatzes** are physically inspired, and roughly map the electron excitations to quantum circuits. The q-UCCSD ansatz (`UCCSD` in Qiskit) possesses all possible single and double electron excitations. The paired double q-pUCCD (`PUCCD`) and singlet q-UCCD0 (`SUCCD`) just consider a subset of such excitations (meaning significantly shorter circuits) and have proved to provide good results for dissociation profiles. For instance, q-pUCCD doesn't have single excitations and the double excitations are paired as in the image below.\n- **Heuristic ansatzes (`TwoLocal`)** were invented to shorten the circuit depth but still be able to represent the ground state. \nAs in the figure below, the R gates represent the parametrized single qubit rotations and $U_{CNOT}$ the entanglers (two-qubit gates). The idea is that after repeating the same block a certain number of times $D$ (with independent parameters), one can reach the ground state. \n\nFor additional details refer to [Sokolov *et al.* (q-UCC ansatzes)](https://arxiv.org/abs/1911.10864v2) and [Barkoutsos *et al.* (Heuristic ansatzes)](https://arxiv.org/pdf/1805.04340.pdf).\n\n<img src="resources/ansatz.png" width=700 height= 1200/>\n\n\n\n### VQE\n\nGiven a Hermitian operator $\hat H$ with an unknown minimum eigenvalue $E_{min}$, associated with the eigenstate $|\psi_{min}\rangle$, VQE provides an estimate $E_{\theta}$, bounded by $E_{min}$:\n\n\begin{align*}\n E_{min} \le E_{\theta} \equiv \langle \psi(\theta) |\hat H|\psi(\theta) \rangle\n\end{align*} \n\nwhere $|\psi(\theta)\rangle$ is the trial state associated with $E_{\theta}$. By applying a parameterized circuit, represented by $U(\theta)$, to some arbitrary starting state $|\psi\rangle$, the algorithm obtains an estimate $U(\theta)|\psi\rangle \equiv |\psi(\theta)\rangle$ of $|\psi_{min}\rangle$. The estimate is iteratively optimized by a classical optimizer by changing the parameter $\theta$ and minimizing the expectation value $\langle \psi(\theta) |\hat H|\psi(\theta) \rangle$. \n\nApplications of VQE include molecular dynamics simulations, see [Sokolov *et al.*, 2021](https://arxiv.org/abs/2008.08144v1), and excited-state calculations, see [Ollitrault *et al.*, 2019](https://arxiv.org/abs/1910.12890), to name a few.\n\n<div class="alert alert-block alert-danger">\n \n<b> References for additional details</b> \n\nFor the qiskit-nature tutorial that implements this algorithm see [here](https://qiskit.org/documentation/nature/tutorials/01_electronic_structure.html), but this won't be sufficient, and you might want to look at the [first page of the github repository](https://github.com/Qiskit/qiskit-nature) and the [test folder](https://github.com/Qiskit/qiskit-nature/tree/main/test) containing tests written for each component; they provide the base code for the use of each functionality.\n\n</div>", "_____no_output_____" ], [ "## Part 1: Tutorial - VQE for H$_2$ molecule \n\n\n\nIn this part, you will simulate the H$_2$ molecule using the STO-3G basis with the PySCF driver and Jordan-Wigner mapping.\nWe will guide you through the following parts so that you can then tackle harder problems.\n \n\n\n#### 1. Driver\n\nThe interfaces to the classical chemistry codes that are available in Qiskit are called drivers.\nFor example, `PSI4Driver`, `PyQuanteDriver`, and `PySCFDriver` are available. 
\n\nBy running a driver (a Hartree-Fock calculation for a given basis set and molecular geometry), in the cell below, we obtain all the necessary information about our molecule to then apply a quantum algorithm.", "_____no_output_____" ] ], [ [ "from qiskit_nature.drivers import PySCFDriver\n\nmolecule = "H .0 .0 .0; H .0 .0 0.739"\ndriver = PySCFDriver(atom=molecule)\nqmolecule = driver.run()", "_____no_output_____" ] ], [ [ "<div class="alert alert-block alert-danger">\n \n<b> Tutorial questions 1</b> \n \nLook into the attributes of `qmolecule` and answer the questions below.\n\n \n1. We need to know the basic characteristics of our molecule. What is the total number of electrons in your system?\n2. What is the number of molecular orbitals?\n3. What is the number of spin-orbitals?\n4. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping?\n5. What is the value of the nuclear repulsion energy?\n\nYou can find the answers at the end of this notebook.\n</div>", "_____no_output_____" ], [ "#### 2. Electronic structure problem\n\nYou can then create an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings).", "_____no_output_____" ] ], [ [ "from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem\nproblem = ElectronicStructureProblem(driver)\n\n# Generate the second-quantized operators\nsecond_q_ops = problem.second_q_ops()\n\n# Hamiltonian\nmain_op = second_q_ops[0]", "_____no_output_____" ] ], [ [ "#### 3. QubitConverter\n\nAllows you to define the mapping that you will use in the simulation. You can try different mappings, but \nwe will stick to `JordanWignerMapper` as it allows a simple correspondence: a qubit represents a spin-orbital in the molecule.", "_____no_output_____" ] ], [ [ "from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper\nfrom qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter\n\n# Setup the mapper and qubit converter\nmapper_type = 'JordanWignerMapper'\n\nif mapper_type == 'ParityMapper':\n mapper = ParityMapper()\nelif mapper_type == 'JordanWignerMapper':\n mapper = JordanWignerMapper()\nelif mapper_type == 'BravyiKitaevMapper':\n mapper = BravyiKitaevMapper()\n\nconverter = QubitConverter(mapper=mapper, two_qubit_reduction=True)\n\n# The fermionic operators are mapped to qubit operators\nnum_particles = (problem.molecule_data_transformed.num_alpha,\n problem.molecule_data_transformed.num_beta)\nqubit_op = converter.convert(main_op, num_particles=num_particles)", "_____no_output_____" ] ], [ [ "#### 4. Initial state\nAs we described in the Theory section, a good initial state in chemistry is the HF state (i.e. $|\Psi_{HF} \rangle = |0101 \rangle$). We can initialize it as follows:", "_____no_output_____" ] ], [ [ "from qiskit_nature.circuit.library import HartreeFock\n\nnum_particles = (problem.molecule_data_transformed.num_alpha,\n problem.molecule_data_transformed.num_beta)\nnum_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals\ninit_state = HartreeFock(num_spin_orbitals, num_particles, converter)\nprint(init_state)", " ┌───┐\nq_0: ┤ X ├\n └───┘\nq_1: ─────\n ┌───┐\nq_2: ┤ X ├\n └───┘\nq_3: ─────\n \n" ] ], [ [ "#### 5. 
Ansatz\nOne of the most important choices is the quantum circuit that you choose to approximate your ground state.\nHere is the example of qiskit circuit library that contains many possibilities for making your own circuit.", "_____no_output_____" ] ], [ [ "from qiskit.circuit.library import TwoLocal\nfrom qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD\n\n# Choose the ansatz\nansatz_type = \"TwoLocal\"\n\n# Parameters for q-UCC antatze\nnum_particles = (problem.molecule_data_transformed.num_alpha,\n problem.molecule_data_transformed.num_beta)\nnum_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals\n\n# Put arguments for twolocal\nif ansatz_type == \"TwoLocal\":\n # Single qubit rotations that are placed on all qubits with independent parameters\n rotation_blocks = ['ry', 'rz']\n # Entangling gates\n entanglement_blocks = 'cx'\n # How the qubits are entangled \n entanglement = 'full'\n # Repetitions of rotation_blocks + entanglement_blocks with independent parameters\n repetitions = 3\n # Skip the final rotation_blocks layer\n skip_final_rotation_layer = True\n ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions, \n entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)\n # Add the initial state\n ansatz.compose(init_state, front=True, inplace=True)\nelif ansatz_type == \"UCCSD\":\n ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state)\nelif ansatz_type == \"PUCCD\":\n ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)\nelif ansatz_type == \"SUCCD\":\n ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)\nelif ansatz_type == \"Custom\":\n # Example of how to write your own circuit\n from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister\n # Define the variational parameter\n theta = Parameter('a')\n n = qubit_op.num_qubits\n # Make an empty quantum circuit\n qc = QuantumCircuit(qubit_op.num_qubits)\n qubit_label = 0\n # Place a Hadamard gate\n qc.h(qubit_label)\n # Visual separator\n qc.barrier()\n # rz rotations on all qubits\n qc.ry(theta, range(n))\n qc.rz(theta, range(n))\n ansatz = qc\n ansatz.compose(init_state, front=True, inplace=True)\n\nprint(ansatz)", " ┌───┐ ┌──────────┐┌──────────┐ ┌──────────┐»\nq_0: ───┤ X ├────┤ RY(θ[0]) ├┤ RZ(θ[4]) ├──■────■─────────■──┤ RY(θ[8]) ├»\n ┌──┴───┴───┐├──────────┤└──────────┘┌─┴─┐ │ │ └──────────┘»\nq_1: ┤ RY(θ[1]) ├┤ RZ(θ[5]) ├────────────┤ X ├──┼────■────┼───────■──────»\n └──┬───┬───┘├──────────┤┌──────────┐└───┘┌─┴─┐┌─┴─┐ │ │ »\nq_2: ───┤ X ├────┤ RY(θ[2]) ├┤ RZ(θ[6]) ├─────┤ X ├┤ X ├──┼───────┼──────»\n ┌──┴───┴───┐├──────────┤└──────────┘ └───┘└───┘┌─┴─┐ ┌─┴─┐ »\nq_3: ┤ RY(θ[3]) ├┤ RZ(θ[7]) ├───────────────────────────┤ X ├───┤ X ├────»\n └──────────┘└──────────┘ └───┘ └───┘ »\n« ┌───────────┐ ┌───────────┐»\n«q_0: ┤ RZ(θ[12]) ├───────────────────■────────■─────────■──┤ RY(θ[16]) ├»\n« └┬──────────┤┌───────────┐ ┌─┴─┐ │ │ └───────────┘»\n«q_1: ─┤ RY(θ[9]) ├┤ RZ(θ[13]) ├────┤ X ├──────┼────■────┼────────■──────»\n« └──────────┘├───────────┤┌───┴───┴───┐┌─┴─┐┌─┴─┐ │ │ »\n«q_2: ──────■──────┤ RY(θ[10]) ├┤ RZ(θ[14]) ├┤ X ├┤ X ├──┼────────┼──────»\n« ┌─┴─┐ ├───────────┤├───────────┤└───┘└───┘┌─┴─┐ ┌─┴─┐ »\n«q_3: ────┤ X ├────┤ RY(θ[11]) ├┤ RZ(θ[15]) ├──────────┤ X ├────┤ X ├────»\n« └───┘ └───────────┘└───────────┘ └───┘ └───┘ »\n« ┌───────────┐ \n«q_0: ┤ RZ(θ[20]) ├───────────────────■────────■─────────■────────────\n« 
├───────────┤┌───────────┐ ┌─┴─┐ │ │ \n«q_1: ┤ RY(θ[17]) ├┤ RZ(θ[21]) ├────┤ X ├──────┼────■────┼────■───────\n« └───────────┘├───────────┤┌───┴───┴───┐┌─┴─┐┌─┴─┐ │ │ \n«q_2: ──────■──────┤ RY(θ[18]) ├┤ RZ(θ[22]) ├┤ X ├┤ X ├──┼────┼────■──\n« ┌─┴─┐ ├───────────┤├───────────┤└───┘└───┘┌─┴─┐┌─┴─┐┌─┴─┐\n«q_3: ────┤ X ├────┤ RY(θ[19]) ├┤ RZ(θ[23]) ├──────────┤ X ├┤ X ├┤ X ├\n« └───┘ └───────────┘└───────────┘ └───┘└───┘└───┘\n" ] ], [ [ "#### 6. Backend\nThis is where you specify the simulator or device where you want to run your algorithm.\nWe will focus on the `statevector_simulator` in this challenge.\n", "_____no_output_____" ] ], [ [ "from qiskit import Aer\nbackend = Aer.get_backend('statevector_simulator')", "_____no_output_____" ] ], [ [ "#### 7. Optimizer\n\nThe optimizer guides the evolution of the parameters of the ansatz so it is very important to investigate the energy convergence as it would define the number of measurements that have to be performed on the QPU.\nA clever choice might reduce drastically the number of needed energy evaluations.", "_____no_output_____" ] ], [ [ "from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP\n\noptimizer_type = 'COBYLA'\n\n# You may want to tune the parameters \n# of each optimizer, here the defaults are used\nif optimizer_type == 'COBYLA':\n optimizer = COBYLA(maxiter=500)\nelif optimizer_type == 'L_BFGS_B':\n optimizer = L_BFGS_B(maxfun=500)\nelif optimizer_type == 'SPSA':\n optimizer = SPSA(maxiter=500)\nelif optimizer_type == 'SLSQP':\n optimizer = SLSQP(maxiter=500)", "_____no_output_____" ] ], [ [ "#### 8. Exact eigensolver\nFor learning purposes, we can solve the problem exactly with the exact diagonalization of the Hamiltonian matrix so we know where to aim with VQE.\nOf course, the dimensions of this matrix scale exponentially in the number of molecular orbitals so you can try doing this for a large molecule of your choice and see how slow this becomes. \nFor very large systems you would run out of memory trying to store their wavefunctions.", "_____no_output_____" ] ], [ [ "from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory\nfrom qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver\nimport numpy as np \n\ndef exact_diagonalizer(problem, converter):\n solver = NumPyMinimumEigensolverFactory()\n calc = GroundStateEigensolver(converter, solver)\n result = calc.solve(problem)\n return result\n\nresult_exact = exact_diagonalizer(problem, converter)\nexact_energy = np.real(result_exact.eigenenergies[0])\nprint(\"Exact electronic energy\", exact_energy)\nprint(result_exact)\n\n# The targeted electronic energy for H2 is -1.85336 Ha\n# Check with your VQE result.", "Exact electronic energy -1.8533636186720424\n=== GROUND STATE ENERGY ===\n \n* Electronic ground state energy (Hartree): -1.853363618672\n - computed part: -1.853363618672\n~ Nuclear repulsion energy (Hartree): 0.716072003951\n> Total ground state energy (Hartree): -1.137291614721\n \n=== MEASURED OBSERVABLES ===\n \n 0: # Particles: 2.000 S: 0.000 S^2: 0.000 M: 0.000\n \n=== DIPOLE MOMENTS ===\n \n~ Nuclear dipole moment (a.u.): [0.0 0.0 1.39650761]\n \n 0: \n * Electronic dipole moment (a.u.): [0.0 0.0 1.39650761]\n - computed part: [0.0 0.0 1.39650761]\n > Dipole moment (a.u.): [0.0 0.0 0.0] Total: 0.\n (debye): [0.0 0.0 0.00000001] Total: 0.00000001\n \n" ] ], [ [ "#### 9. 
VQE and initial parameters for the ansatz\nNow we can import the VQE class and run the algorithm.", "_____no_output_____" ] ], [ [ "from qiskit.algorithms import VQE\nfrom IPython.display import display, clear_output\n\n# Print and save the data in lists\ndef callback(eval_count, parameters, mean, std): \n # Overwrites the same line when printing\n display(\"Evaluation: {}, Energy: {}, Std: {}\".format(eval_count, mean, std))\n clear_output(wait=True)\n counts.append(eval_count)\n values.append(mean)\n params.append(parameters)\n deviation.append(std)\n\ncounts = []\nvalues = []\nparams = []\ndeviation = []\n\n# Set initial parameters of the ansatz\n# We choose a fixed small displacement \n# So all participants start from similar starting point\ntry:\n initial_point = [0.01] * len(ansatz.ordered_parameters)\nexcept:\n initial_point = [0.01] * ansatz.num_parameters\n\nalgorithm = VQE(ansatz,\n optimizer=optimizer,\n quantum_instance=backend,\n callback=callback,\n initial_point=initial_point)\n\nresult = algorithm.compute_minimum_eigenvalue(qubit_op)\n\nprint(result)", "OrderedDict([ ('aux_operator_eigenvalues', None),\n ('cost_function_evals', 500),\n ( 'eigenstate',\n array([ 1.72642837e-07+8.50403202e-06j, -1.78929971e-04-1.81951230e-05j,\n -3.69523167e-06-1.34495890e-05j, -2.10924080e-04+1.77214969e-04j,\n 4.99046244e-06-2.06613556e-06j, 6.74694778e-01+7.29486483e-01j,\n -1.24182388e-03+6.51317093e-04j, 1.41053276e-08-5.68212233e-09j,\n -5.49430298e-09-7.34506476e-08j, 1.14565968e-03+4.24895118e-04j,\n -7.62192188e-02-8.26042985e-02j, -3.51865303e-05+3.22859610e-05j,\n 2.63445328e-05-1.93206120e-05j, 3.26507775e-05+1.21079129e-04j,\n -1.86705090e-05-8.50728883e-06j, -2.68804234e-05-2.39896274e-05j])),\n ('eigenvalue', -1.8533611875222795),\n ( 'optimal_parameters',\n { ParameterVectorElement(θ[8]): -9.823669287699566e-06,\n ParameterVectorElement(θ[0]): 0.22528384153894793,\n ParameterVectorElement(θ[4]): 0.22876817609748787,\n ParameterVectorElement(θ[1]): 0.0001628358952911501,\n ParameterVectorElement(θ[12]): 1.024655074911573,\n ParameterVectorElement(θ[18]): -0.012431877150681298,\n ParameterVectorElement(θ[17]): 3.1420635330869464,\n ParameterVectorElement(θ[23]): 0.7496785259549152,\n ParameterVectorElement(θ[9]): -7.206149843099831e-05,\n ParameterVectorElement(θ[3]): 0.7058281857030912,\n ParameterVectorElement(θ[22]): 1.4067582441060544,\n ParameterVectorElement(θ[10]): -0.004145381016989967,\n ParameterVectorElement(θ[13]): 0.17466072749351547,\n ParameterVectorElement(θ[14]): 0.05501516914713419,\n ParameterVectorElement(θ[2]): -0.008902430245345148,\n ParameterVectorElement(θ[20]): -0.022492568309320043,\n ParameterVectorElement(θ[5]): -0.7648677076914686,\n ParameterVectorElement(θ[21]): 1.3225524343419832,\n ParameterVectorElement(θ[15]): -0.055011768252079846,\n ParameterVectorElement(θ[7]): -0.06860900246768789,\n ParameterVectorElement(θ[16]): 0.0005545694957856072,\n ParameterVectorElement(θ[11]): -0.23452291161472869,\n ParameterVectorElement(θ[19]): -0.9396567959736827,\n ParameterVectorElement(θ[6]): 0.23743855054021837}),\n ( 'optimal_point',\n array([ 2.25283842e-01, -4.14538102e-03, -2.34522912e-01, 1.02465507e+00,\n 1.74660727e-01, 5.50151691e-02, -5.50117683e-02, 5.54569496e-04,\n 3.14206353e+00, -1.24318772e-02, -9.39656796e-01, 1.62835895e-04,\n -2.24925683e-02, 1.32255243e+00, 1.40675824e+00, 7.49678526e-01,\n -8.90243025e-03, 7.05828186e-01, 2.28768176e-01, -7.64867708e-01,\n 2.37438551e-01, -6.86090025e-02, -9.82366929e-06, -7.20614984e-05])),\n 
('optimal_value', -1.8533611875222795),\n ('optimizer_evals', 500),\n ('optimizer_time', 6.67009711265564)])\n" ] ], [ [ "#### 10. Scoring function \nWe need to judge how good your VQE simulations, i.e. your choice of ansatz/optimizer, are.\nFor this, we implemented the following simple scoring function:\n\n$$ score = N_{CNOT}$$\n\nwhere $N_{CNOT}$ is the number of CNOTs. \nBut you have to reach the chemical accuracy, which is $\delta E_{chem} = 0.004$ Ha $= 4$ mHa, and which may be hard to reach depending on the problem. \nYou have to reach the accuracy we set in a minimal number of CNOTs to win the challenge. \nThe lower the score the better!", "_____no_output_____" ] ], [ [ "# Store results in a dictionary\nfrom qiskit.transpiler import PassManager\nfrom qiskit.transpiler.passes import Unroller\n\n# Unroller transpiles your circuit into CNOTs and U gates\npass_ = Unroller(['u', 'cx'])\npm = PassManager(pass_)\nansatz_tp = pm.run(ansatz)\ncnots = ansatz_tp.count_ops()['cx']\nscore = cnots\n\naccuracy_threshold = 4.0 # in mHa\nenergy = result.optimal_value\n\nif ansatz_type == "TwoLocal":\n result_dict = {\n 'optimizer': optimizer.__class__.__name__,\n 'mapping': converter.mapper.__class__.__name__,\n 'ansatz': ansatz.__class__.__name__,\n 'rotation blocks': rotation_blocks,\n 'entanglement_blocks': entanglement_blocks,\n 'entanglement': entanglement,\n 'repetitions': repetitions,\n 'skip_final_rotation_layer': skip_final_rotation_layer,\n 'energy (Ha)': energy,\n 'error (mHa)': (energy-exact_energy)*1000,\n 'pass': (energy-exact_energy)*1000 <= accuracy_threshold,\n '# of parameters': len(result.optimal_point),\n 'final parameters': result.optimal_point,\n '# of evaluations': result.optimizer_evals,\n 'optimizer time': result.optimizer_time,\n '# of qubits': int(qubit_op.num_qubits),\n '# of CNOTs': cnots,\n 'score': score}\nelse:\n result_dict = {\n 'optimizer': optimizer.__class__.__name__,\n 'mapping': converter.mapper.__class__.__name__,\n 'ansatz': ansatz.__class__.__name__,\n 'rotation blocks': None,\n 'entanglement_blocks': None,\n 'entanglement': None,\n 'repetitions': None,\n 'skip_final_rotation_layer': None,\n 'energy (Ha)': energy,\n 'error (mHa)': (energy-exact_energy)*1000,\n 'pass': (energy-exact_energy)*1000 <= accuracy_threshold,\n '# of parameters': len(result.optimal_point),\n 'final parameters': result.optimal_point,\n '# of evaluations': result.optimizer_evals,\n 'optimizer time': result.optimizer_time,\n '# of qubits': int(qubit_op.num_qubits),\n '# of CNOTs': cnots,\n 'score': score}\n\n# Plot the results\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(1, 1)\nax.set_xlabel('Iterations')\nax.set_ylabel('Energy')\nax.grid()\nfig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')\nplt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")\nax.plot(counts, values)\nax.axhline(exact_energy, linestyle='--')\nfig_title = f"\\\n{result_dict['optimizer']}-\\\n{result_dict['mapping']}-\\\n{result_dict['ansatz']}-\\\nEnergy({result_dict['energy (Ha)']:.3f})-\\\nScore({result_dict['score']:.0f})\\\n.png"\nfig.savefig(fig_title, dpi=300)\n\n# Display and save the data\nimport pandas as pd\nimport os.path\nfilename = 'results_h2.csv'\nif os.path.isfile(filename):\n result_df = pd.read_csv(filename)\n result_df = result_df.append([result_dict])\nelse:\n result_df = pd.DataFrame.from_dict([result_dict])\nresult_df.to_csv(filename)\nresult_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 
'entanglement_blocks',\n 'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']]", "_____no_output_____" ] ], [ [ "<div class="alert alert-block alert-danger">\n \n<b>Tutorial questions 2</b> \n\nExperiment with all the parameters and then:\n\n1. Can you find your best (best score) heuristic ansatz (by modifying parameters of the `TwoLocal` ansatz) and optimizer?\n2. Can you find your best q-UCC ansatz (choose among the `UCCSD`, `PUCCD` or `SUCCD` ansatzes) and optimizer?\n3. In the cell where we define the ansatz, can you modify the `Custom` ansatz by placing gates yourself to write a better circuit than your `TwoLocal` circuit? \n\nFor each question, give `ansatz` objects.\nRemember, you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.\n \n</div>\n\n", "_____no_output_____" ], [ "## Part 2: Final Challenge - VQE for LiH molecule \n\n\nIn this part, you will simulate the LiH molecule using the STO-3G basis with the PySCF driver.\n \n<div class="alert alert-block alert-success">\n\n<b>Goal</b> \n\nExperiment with all the parameters and then find your best ansatz. You can be as creative as you want!\n\nFor each question, give `ansatz` objects as for Part 1. Your final score will be based only on Part 2.\n \n</div>\n\nBe aware that the system is larger now. Work out how many qubits you would need for this system by retrieving the number of spin-orbitals. \n\n### Reducing the problem size\n\nYou might want to reduce the number of qubits for your simulation:\n- you could freeze the core electrons that do not contribute significantly to chemistry and consider only the valence electrons. Qiskit already has this functionality implemented. So inspect the different transformers in `qiskit_nature.transformers` and find the one that performs the freeze core approximation.\n- you could use `ParityMapper` with `two_qubit_reduction=True` to eliminate 2 qubits.\n- you could reduce the number of qubits by inspecting the symmetries of your Hamiltonian. Find a way to use `Z2Symmetries` in Qiskit (a short sketch follows this list).\n\n
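For the last bullet, it can help to inspect the Hamiltonian's Z2 symmetries before building an ansatz. The hedged sketch below assumes the `Z2Symmetries` helper is importable from `qiskit.opflow` in the Qiskit version used here (check your installed version's import path), and that `qubit_op` is the converted qubit Hamiltonian as produced in the setup cell further down:\n\n```\nfrom qiskit.opflow import Z2Symmetries\n\n# qubit_op: the qubit Hamiltonian produced by your QubitConverter (see the setup cell below)\nz2_symmetries = Z2Symmetries.find_Z2_symmetries(qubit_op)\nprint(z2_symmetries.symmetries)  # Pauli strings that commute with the Hamiltonian\nprint(z2_symmetries.sq_paulis)   # single-qubit Paulis used when tapering the corresponding qubits\n```\n\nEach symmetry found this way lets you taper off one qubit, which is what the `z2symmetry_reduction` argument of `QubitConverter` exploits.\n\n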
### Custom ansatz \n\nYou might want to explore the ideas proposed in [Grimsley *et al.*,2018](https://arxiv.org/abs/1812.11173v2), [H. L. Tang *et al.*,2019](https://arxiv.org/abs/1911.10205), [Rattew *et al.*,2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*,2019](https://arxiv.org/abs/1911.10205). \nYou can even try machine learning algorithms to generate the best ansatz circuits.\n\n### Setup the simulation\n\nLet's now run the Hartree-Fock calculation and the rest is up to you!\n\n<div class="alert alert-block alert-danger">\n\n<b>Attention</b> \n\nWe give below the `driver`, the `initial_point`, and the `initial_state`, which should remain as given.\nYou are then free to explore all other things available in Qiskit.\nSo you have to start from this initial point (all parameters set to 0.01):\n \n`initial_point = [0.01] * len(ansatz.ordered_parameters)`\n or\n`initial_point = [0.01] * ansatz.num_parameters`\n\nand your initial state has to be the Hartree-Fock state:\n \n`init_state = HartreeFock(num_spin_orbitals, num_particles, converter)`\n \nFor each question, give the `ansatz` object.\nRemember, you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.\n\n</div>", "_____no_output_____" ] ], [ [ "from qiskit_nature.drivers import PySCFDriver\n\nmolecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'\ndriver = PySCFDriver(atom=molecule)\nqmolecule = driver.run()\n\nfrom qiskit_nature.transformers import FreezeCoreTransformer, ActiveSpaceTransformer\n\nfrom qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem\nproblem = ElectronicStructureProblem(driver, q_molecule_transformers=[FreezeCoreTransformer(remove_orbitals=[4, 3])])\n\n# Generate the second-quantized operators\nsecond_q_ops = problem.second_q_ops()\n\n# Hamiltonian\nmain_op = second_q_ops[0]\n\nfrom qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper\nfrom qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter\n\n# Setup the mapper and qubit converter\nmapper_type = 'ParityMapper'\n\nif mapper_type == 'ParityMapper':\n mapper = ParityMapper()\n\nconverter = QubitConverter(mapper=mapper, two_qubit_reduction=True, z2symmetry_reduction=[1, 1])\n\n# The fermionic operators are mapped to qubit operators\nnum_particles = (problem.molecule_data_transformed.num_alpha,\n problem.molecule_data_transformed.num_beta)\nqubit_op = converter.convert(main_op, num_particles=num_particles)\n\nfrom qiskit_nature.circuit.library import HartreeFock\n\nnum_particles = (problem.molecule_data_transformed.num_alpha,\n problem.molecule_data_transformed.num_beta)\nnum_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals\ninit_state = HartreeFock(num_spin_orbitals, num_particles, converter)\nprint(init_state)\n\nfrom qiskit import Aer\nbackend = Aer.get_backend('statevector_simulator')", " ┌───┐\nq_0: ┤ X ├\n ├───┤\nq_1: ┤ X ├\n └───┘\nq_2: ─────\n \nq_3: ─────\n \n" ], [ "from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory\nfrom qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver\nimport numpy as np \n\ndef exact_diagonalizer(problem, converter):\n solver = NumPyMinimumEigensolverFactory()\n calc = GroundStateEigensolver(converter, solver)\n result = calc.solve(problem)\n return result\n\nresult_exact = exact_diagonalizer(problem, converter)\nexact_energy = np.real(result_exact.eigenenergies[0])\nprint("Exact electronic energy", exact_energy)\nprint(result_exact)\n\n# LiH -> Exact electronic energy -1.089782396348737 --> -8.90847269193 Ha\n# Check with your VQE result.", "Exact electronic energy -1.0887060157347386\n=== GROUND STATE ENERGY ===\n \n* Electronic ground state energy (Hartree): -8.907396311316\n
- computed part: -1.088706015735\n - FreezeCoreTransformer extracted energy part: -7.818690295581\n~ Nuclear repulsion energy (Hartree): 1.025934879643\n> Total ground state energy (Hartree): -7.881461431673\n \n=== MEASURED OBSERVABLES ===\n \n 0: # Particles: 2.000 S: 0.000 S^2: 0.000 M: 0.000\n \n=== DIPOLE MOMENTS ===\n \n~ Nuclear dipole moment (a.u.): [0.0 0.0 2.92416221]\n \n 0: \n * Electronic dipole moment (a.u.): [0.0 0.0 4.76300889]\n - computed part: [0.0 0.0 4.76695575]\n - FreezeCoreTransformer extracted energy part: [0.0 0.0 -0.00394686]\n > Dipole moment (a.u.): [0.0 0.0 -1.83884668] Total: 1.83884668\n (debye): [0.0 0.0 -4.67388163] Total: 4.67388163\n \n" ], [ "# WRITE YOUR CODE BETWEEN THESE LINES - START\n\nfrom qiskit.circuit.library import TwoLocal\n\n# Choose the ansatz\nansatz_type = \"TwoLocal\"\n\n# Parameters for q-UCC antatze\nnum_particles = (problem.molecule_data_transformed.num_alpha,\n problem.molecule_data_transformed.num_beta)\nnum_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals\n\n# Put arguments for twolocal\nif ansatz_type == \"TwoLocal\":\n # Single qubit rotations that are placed on all qubits with independent parameters\n rotation_blocks = ['ry', 'rz', 'rx']\n # Entangling gates\n entanglement_blocks = 'cx'\n # How the qubits are entangled \n entanglement = 'linear'\n # Repetitions of rotation_blocks + entanglement_blocks with independent parameters\n repetitions = 1\n # Skip the final rotation_blocks layer\n skip_final_rotation_layer = False\n ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions, \n entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)\n # Add the initial state\n ansatz.compose(init_state, front=True, inplace=True)\n\nprint(ansatz)\n\nfrom qiskit.algorithms.optimizers import COBYLA\n\noptimizer_type = 'COBYLA'\nif optimizer_type == 'COBYLA':\n optimizer = COBYLA(maxiter=4000, disp=True)\n\nfrom qiskit.algorithms import VQE\nfrom IPython.display import display, clear_output\n\n# Print and save the data in lists\ndef callback(eval_count, parameters, mean, std): \n # Overwrites the same line when printing\n display(\"Evaluation: {}, Energy: {}, Std: {}\".format(eval_count, mean, std))\n clear_output(wait=True)\n counts.append(eval_count)\n values.append(mean)\n params.append(parameters)\n deviation.append(std)\n\ncounts = []\nvalues = []\nparams = []\ndeviation = []\n\n# Set initial parameters of the ansatz\ntry:\n initial_point = [0.01] * len(ansatz.ordered_parameters)\nexcept:\n initial_point = [0.01] * ansatz.num_parameters\n\nalgorithm = VQE(ansatz,\n optimizer=optimizer,\n quantum_instance=backend,\n callback=callback,\n initial_point=initial_point)\n\nresult = algorithm.compute_minimum_eigenvalue(qubit_op)\n\nprint(result)\n\n# WRITE YOUR CODE BETWEEN THESE LINES - END", "OrderedDict([ ('aux_operator_eigenvalues', None),\n ('cost_function_evals', 4000),\n ( 'eigenstate',\n array([-2.68127563e-04-1.03857133e-03j, 8.96290962e-04+4.80876144e-03j,\n -4.42385645e-03-2.48595382e-02j, 1.78767363e-01+9.75343866e-01j,\n 9.97877616e-03+5.03143169e-02j, 8.39726652e-05+3.10505159e-04j,\n -1.37107508e-04-7.55538682e-04j, 3.58172231e-03+1.98859675e-02j,\n 4.08016439e-04+2.33679123e-03j, 3.07173911e-06+8.32457114e-06j,\n -3.31704863e-06-3.48772183e-06j, 5.09290830e-05-3.16599842e-04j,\n -2.05093267e-02-1.12320728e-01j, -1.36528864e-04-4.71067131e-04j,\n 8.35048635e-05+5.35017026e-04j, 6.33058059e-04+7.78141162e-04j])),\n ('eigenvalue', 
-1.0863612611615097),\n ( 'optimal_parameters',\n { ParameterVectorElement(θ[20]): 0.06486056226026862,\n ParameterVectorElement(θ[21]): -1.190600887034407,\n ParameterVectorElement(θ[22]): 0.07178924993581552,\n ParameterVectorElement(θ[23]): 0.23484789072148476,\n ParameterVectorElement(θ[9]): 0.8928719324338246,\n ParameterVectorElement(θ[10]): 0.0022469031183535767,\n ParameterVectorElement(θ[8]): 0.886298661372002,\n ParameterVectorElement(θ[7]): 0.9246064124504413,\n ParameterVectorElement(θ[11]): -0.8952736379133035,\n ParameterVectorElement(θ[12]): 0.08248417988627532,\n ParameterVectorElement(θ[13]): 1.9509411685631926,\n ParameterVectorElement(θ[14]): 0.08264701342066944,\n ParameterVectorElement(θ[15]): 0.41324628562321747,\n ParameterVectorElement(θ[16]): 0.9099833602892877,\n ParameterVectorElement(θ[17]): 1.5601630636088983,\n ParameterVectorElement(θ[18]): 1.0436139204831694,\n ParameterVectorElement(θ[19]): -0.08914463759375915,\n ParameterVectorElement(θ[6]): 0.7328749794448886,\n ParameterVectorElement(θ[0]): 1.1348493988628765,\n ParameterVectorElement(θ[2]): 3.740471734856986e-05,\n ParameterVectorElement(θ[1]): 0.8901738617239225,\n ParameterVectorElement(θ[4]): 1.6123631775533722,\n ParameterVectorElement(θ[3]): -0.7491161012327733,\n ParameterVectorElement(θ[5]): 1.564936225604367}),\n ( 'optimal_point',\n array([ 1.13484940e+00, 2.24690312e-03, -8.95273638e-01, 8.24841799e-02,\n 1.95094117e+00, 8.26470134e-02, 4.13246286e-01, 9.09983360e-01,\n 1.56016306e+00, 1.04361392e+00, -8.91446376e-02, 8.90173862e-01,\n 6.48605623e-02, -1.19060089e+00, 7.17892499e-02, 2.34847891e-01,\n 3.74047173e-05, -7.49116101e-01, 1.61236318e+00, 1.56493623e+00,\n 7.32874979e-01, 9.24606412e-01, 8.86298661e-01, 8.92871932e-01])),\n ('optimal_value', -1.0863612611615097),\n ('optimizer_evals', 4000),\n ('optimizer_time', 58.15924882888794)])\n" ], [ "# Check your answer using following code\nfrom qc_grader import grade_ex5\nfreeze_core = True # change to True if you froze core electrons\ngrade_ex5(ansatz,qubit_op,result,freeze_core)\n#-8.90314 -> -1.08444 -> linear -> 1 -> False -> 5 qubits -> 10k", "Grading your answer for ex5. Please wait...\n\nCongratulations 🎉! Your answer is correct.\nYour cost is 3.\nFeel free to submit your answer.\n\n" ], [ "# Submit your answer. You can re-submit at any time.\nfrom qc_grader import submit_ex5\nsubmit_ex5(ansatz,qubit_op,result,freeze_core)", "Submitting your answer for ex5. Please wait...\nSuccess 🎉! Your answer has been submitted.\n" ] ], [ [ "## Answers for Part 1\n\n<div class="alert alert-block alert-danger">\n\n<b>Questions</b> \n \nLook into the attributes of `qmolecule` and answer the questions below.\n\n \n1. We need to know the basic characteristics of our molecule. What is the total number of electrons in your system?\n2. What is the number of molecular orbitals?\n3. What is the number of spin-orbitals?\n4. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping?\n5. What is the value of the nuclear repulsion energy?\n \n</div>\n\n<div class="alert alert-block alert-success">\n\n<b>Answers </b> \n\n1. `n_el = qmolecule.num_alpha + qmolecule.num_beta`\n \n2. `n_mo = qmolecule.num_molecular_orbitals`\n \n3. `n_so = 2 * qmolecule.num_molecular_orbitals`\n \n4. `n_q = 2* qmolecule.num_molecular_orbitals`\n \n5. 
`e_nn = qmolecule.nuclear_repulsion_energy`\n \n \n</div>", "_____no_output_____" ], [ "## Additional information\n\n**Created by:** Igor Sokolov, Junye Huang, Rahul Pratap Singh\n\n**Version:** 1.0.0", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]